Data Engineer

Posted 2024-11-14

πŸ’Ž Seniority level: Senior, Minimum 3 years

πŸ“ Location: Los Angeles, Boston

πŸ’Έ Salary: 120,000 - 150,000 USD per year

πŸ” Industry: Healthcare

🏒 Company: Heyday Health

⏳ Experience: Minimum 3 years

πŸͺ„ Skills: AWS, SQL, Data Analysis, ETL, Machine Learning, Snowflake, Data Science, Collaboration, Compliance

Requirements:
  • Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field.
  • Minimum 3 years of experience as a Data Engineer.
  • Expertise in Snowflake, DBT, Looker, and Fivetran.
  • Proficient in SQL and large-scale database management.
  • Experience in cloud-based ETL processes, particularly in AWS.
  • Strong understanding of healthcare data and experience with healthcare claims and EHRs.
  • Familiarity with healthcare data compliance and HIPAA regulations.
  • Strong collaborative skills and effective communication with stakeholders.
Responsibilities:
  • Oversee the design and architecture of the Heyday data warehouse for data analysis.
  • Manage ETL infrastructure, including data integration and secure storage in AWS.
  • Develop and optimize ETL jobs for healthcare data operations.
  • Administer Snowflake for query optimization and data access.
  • Maintain healthcare data models and ensure regulatory compliance.
  • Support data analysts with workflows for business-critical analysis.
  • Collaborate with the engineering team on data modeling best practices.
  • Partner with stakeholders to gather insights and create data-driven solutions.
  • Provide support for machine learning data infrastructure.

Related Jobs

πŸ“ United States

πŸ” Consumer insights

  • Strong PL/SQL and SQL development skills.
  • Proficient in Python and Java.
  • 3-5 years of experience in Data Engineering with Oracle and MS SQL.
  • Experience with cloud services like Snowflake and Azure.
  • Familiar with data orchestration tools such as Azure Data Factory and DataBricks.
  • Understanding of data privacy regulations.

  • Design, implement and maintain scalable data pipelines and architecture.
  • Unit test and document solutions that meet product quality standards.
  • Identify and resolve performance bottlenecks in data processing workflows.
  • Implement data quality checks to ensure accuracy and consistency.
  • Collaborate with cross-functional teams to address data needs.

Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data Engineering, CI/CD

Posted 2024-11-21

πŸ“ US

🧭 Full-Time

πŸ’Έ 206,700 - 289,400 USD per year

πŸ” Social media / Online community

  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems, building clean, maintainable, object-oriented code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and manipulating large structured and unstructured data.
  • Experience with data workflows (e.g., Airflow) and data visualization tools (e.g., Looker, Tableau).
  • Deep understanding of technical and functional designs for relational and MPP databases.
  • Proven track record of collaboration and excellent communication skills.
  • Experience in mentoring junior data scientists and analytics engineers.

  • Act as the analytics engineering lead within Ads DS team and contribute to data science data quality and automation initiatives.
  • Ensure high-quality data through ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines and workflows for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal use across Data Science and cross-functional teams.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide mentorship and coaching to data analysts and act as a thought partner for data teams.

Leadership, Python, SQL, Data Analysis, ETL, Tableau, Strategy, Airflow, Data Engineering, Data Science, Spark, Communication Skills, Collaboration, Mentoring, Coaching

Posted 2024-11-21

πŸ“ US

πŸ” Consumer insights

  • Strong PL/SQL and SQL development skills.
  • Proficient in multiple data engineering languages such as Python and Java.
  • Minimum 3-5 years of experience in Data engineering with Oracle and MS SQL.
  • Experience with data warehousing concepts and cloud-based technologies like Snowflake.
  • Experience with cloud platforms such as Azure.
  • Knowledge of data orchestration tools like Azure Data Factory and DataBricks workflows.
  • Understanding of data privacy regulations and best practices.
  • Experience working with remote teams.
  • Familiarity with tools like Git, Jira.
  • Bachelor's degree in Computer Science or Computer Engineering.

  • Design, implement and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines to ensure efficient data delivery.
  • Implement data quality checks and validation processes.
  • Work with Data Architect to implement best practices for data governance, quality, and security.
  • Collaborate with cross-functional teams to address data needs.

Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data Engineering, CI/CD

Posted 2024-11-21

πŸ“ United States

🧭 Full-Time

πŸ’Έ 95,000 - 110,000 USD per year

πŸ” Healthcare

🏒 Company: Wider Circle

  • Degree in Computer Science, Information Systems, or equivalent education or work experience.
  • 3+ Years experience with AWS or similar technologies (S3, Redshift, RDS, EMR).
  • 3+ Years strong abilities with SQL and Python.
  • Experience building test automation suites for test and production environments.
  • Experience using APIs for data extraction and updating.
  • Experience with Git and version control.

  • Develop and maintain data quality and accuracy dashboards, and scorecards to track data quality and model performance.
  • Develop, maintain, and enhance a comprehensive data quality framework that defines data standards, quality and accuracy expectations, and validation processes.
  • Enhance data quality through rapid testing, feedback, and insights.
  • Partner with Engineering & Product to predict data quality issues and production flaws.
  • Conceptualize data architecture (visually) and implement practically into logical structures.
  • Perform testing of data after ingesting and database loading.
  • Manage internal SLAs for data quality and frequency.
  • Provide expert support for solving complex problems of data integration across multiple data sets.
  • Update and evolve data ecosystem to streamline processes for maximum efficiency.

AWS, Python, SQL, Git, Product Development, Data Engineering

Posted 2024-11-17

πŸ”₯ Data Engineer

πŸ“ Arizona, California, Colorado, Indiana, Massachusetts, Minnesota, New York, Oregon, Pennsylvania, Texas, Utah, Washington

πŸ’Έ 105,000 - 115,000 USD per year

πŸ” Consumer Goods

🏒 Company: Deckers

  • Strong coding and design skills in Python and SQL.
  • Experience in designing and deploying cloud data infrastructure in AWS.
  • Experience working in a Linux environment.
  • Working knowledge of Git & GitHub.

  • Develop integrations with external resources (sFTP, APIs, etc.) to bring assets into the data warehouse.
  • Produce custom datasets to enable Data Scientists and Data Analysts.
  • Collaborate in designing and developing the next generation of our AWS data infrastructure.
  • Write and test production-level code that can be deployed within our existing cloud framework.
  • Create data pipeline monitoring automation.

AWS, Python, SQL, Agile, ETL, Git, Linux

Posted 2024-11-16

πŸ“ United States

🧭 Full-Time

πŸ’Έ 124,300 - 186,500 USD per year

πŸ” Technology / Cloud Services

🏒 Company: SMX

  • Two + years of experience in relevant fields.
  • Expertise in complex SQL.
  • Knowledge of AWS technologies.
  • Solid understanding of RDBMS concepts, including Postgres, RedShift, and SQL Server.
  • Experience with logical data modeling and database/query optimization.
  • Familiarity with AWS data migration tools (DMS).
  • Scripting knowledge, especially in Python and Lambda.
  • Experience with version control and CI/CD tools such as Git, TFS, and Azure DevOps.
  • Awareness of network authentication and authorization protocols (Kerberos, SAML/OAUTH).
  • Some knowledge of networks.
  • Ability to obtain and maintain a Public Trust clearance; US Citizenship is required.
  • Strong collaboration and communication skills.

  • Assist Data Architect and customer in collecting requirements and documenting tasks for the data loading platform, focusing on performance and data quality.
  • Implement data loading and quality control activities based on project requirements and customer tickets.
  • Create CI/CD pipelines for infrastructure and data loading related to data warehouse maintenance.
  • Code and implement unique data migration requirements using AWS technologies like DMS and Lambda/Python.
  • Resolve identity and access management issues for various data sets in Postgres, Redshift, and SQL Server.

AWS, Python, SQL, ETL, Git, OAuth, Azure, Postgres, RDBMS, CI/CD, DevOps

Posted 2024-11-15

πŸ“ Latin America, United States, Canada

πŸ” Life insurance

  • The ideal candidate will be independent and a great communicator.
  • Attention to detail is critical.
  • Must possess problem-solving skills.

  • Develop and maintain enterprise data and analytics systems for a US client.
  • Optimize performance by building and supporting decision-making tools.
  • Collaborate closely with software engineering, AI/ML, Cybersecurity, and DevOps/SysOps teams.
  • Support end-to-end data pipelines using Python or Scala.
  • Participate in Agile framework-related tasks.

Python, Software Development, Agile, Data Engineering, DevOps, Attention to Detail

Posted 2024-11-15

πŸ“ United States

🧭 Full-Time

πŸ’Έ 210,000 - 220,000 USD per year

πŸ” Healthcare

  • 10+ years of experience in data engineering with a strong background in building and scaling data architectures.
  • Advanced knowledge of SQL, relational databases, and big data tools like Spark and Kafka.
  • Proficient in cloud-based data warehousing and services, especially Snowflake and AWS.
  • Understanding of AI/ML workflows.
  • Experience in service-oriented and event-based architecture with strong API development skills.
  • Strong communication and project management skills.

  • Lead the Design and Implementation using modern data architecture principles.
  • Scale Data Platform for optimal data extraction, transformation, and loading.
  • Design and build scalable AI and ML platforms.
  • Collaborate with Executive, Product, Clinical, Data, and Design teams.
  • Build and optimize complex data pipelines.
  • Create and maintain data tools and pipelines for analytics and data innovation.
  • Provide technical leadership and mentorship to the data engineering team.

AWS, Leadership, Project Management, Python, SQL, Java, Kafka, Snowflake, C++, Airflow, Data Engineering, Spark, Communication Skills, Problem Solving, Organizational Skills

Posted 2024-11-15

πŸ“ North America, South America, Europe

πŸ’Έ 100,000 - 500,000 USD per year

πŸ” Web3, blockchain

🏒 Company: Edge & Node

  • A self-motivated, team member with keen attention to detail.
  • Proactive collaboration with team members and a willingness to adapt to a growing environment.
  • Familiarity and experience with Rust, particularly focusing on data transformation and ingestion.
  • A strong understanding of blockchain data structures and ingestion interfaces.
  • Experience in real-time data handling, including knowledge of reorg handling.
  • Familiarity with blockchain clients like Geth and Reth is a plus.
  • Adaptability to a dynamic and fully-remote work environment.
  • Rigorous approach to software development that reflects a commitment to excellence.

  • Develop and maintain data ingestion adapters for various blockchain networks and web3 protocols.
  • Implement data ingestion strategies for both historical and recent data.
  • Apply strategies for handling block reorgs.
  • Optimize the latency of block ingestion at the chain head.
  • Write interfaces with file storage protocols such as IPFS and Arweave.
  • Collaborate with upstream data sources, such as chain clients and tracing frameworks, and monitor the latest upstream developments.
  • Perform data quality checks, cross-checking data across multiple sources and investigating any discrepancies that arise.

Software Development, Blockchain, Data Structures, Rust, Collaboration, Attention to Detail

Posted 2024-11-15

πŸ“ United States (U.S.)

🧭 Full-Time

πŸ” Cloud Computing and Data Engineering

🏒 Company: Via Logic

  • Bachelor's degree required.
  • Minimum of 5 years of experience designing and implementing solutions on AWS.
  • Expertise in CloudFormation, Terraform, Ansible, and Chef for Infrastructure as Code strategies.
  • Proficiency in CI/CD automation with tools like Jenkins and Gitlab.
  • Strong programming and scripting skills in Java, Python, and Bash.
  • Skilled in data models and management using SQL and Python.
  • Experience with cloud security standards like FISMA, FedRAMP, and NIST SP 800-53.
  • Ability to manage multiple projects and demonstrate problem-solving capabilities.
  • Strong teamwork and communication skills.

  • Design, manage, and optimize secure CI/CD pipelines integrating security checks.
  • Utilize containerization technologies to manage microservices.
  • Build, deploy, and optimize cloud infrastructure to support scalable applications.
  • Ensure high availability and reliability of production systems.
  • Implement monitoring solutions for performance tracking.
  • Develop infrastructure as code using Terraform, Ansible, or CloudFormation.
  • Assess and maintain security postures to meet compliance standards.
  • Engage in agile teams for process improvement and DevSecOps practices.
  • Develop automated tests for infrastructure and deployment scripts.
  • Manage information flow ensuring data integrity and accuracy.

AWS, Docker, PostgreSQL, Project Management, Python, SQL, Agile, Bash, Cybersecurity, DynamoDB, ETL, Java, Jenkins, Kubernetes, Oracle, Product Management, Tableau, Data Structures, Redis, Communication Skills, Collaboration, CI/CD, Problem Solving, DevOps, Terraform, Microservices, Compliance

Posted 2024-11-14