
Data Engineer

Posted 9 days ago


💎 Seniority level: Mid-level; minimum of 5 years of relevant experience in data engineering

📍 Location: United States (EST)

💸 Salary: 103,500 - 143,500 USD per year

🔍 Industry: Public health

🗣️ Languages: English

⏳ Experience: Minimum of 5 years of relevant experience in data engineering

🪄 Skills: AWS, PostgreSQL, Python, SQL, Cloud Computing, ETL, Hadoop, Kafka, MongoDB, Data engineering, NoSQL, Data modeling

Requirements:
  • Bachelor's degree in Computer Science, Information Technology, Data Science, or related field preferred.
  • Minimum of 5 years of relevant experience in data engineering.
  • Proficiency in programming languages such as Python, Java, Scala, or SQL.
  • Experience with large-scale projects using Amazon Web Services.
  • Strong technical writing skills for documentation and policies.
  • Knowledge of data warehousing concepts and cloud computing platforms.
Responsibilities:
  • Collaborate with data scientists and analysts to understand data needs.
  • Design scalable solutions and maintain ETL processes.
  • Implement security measures and manage data storage systems.
  • Create efficient data pipelines and monitor for performance issues (a minimal sketch follows this list).
  • Provide technical guidance and communicate with partners at all levels.
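For illustration only: a minimal Python sketch of the kind of pipeline and monitoring work listed above, assuming the AWS, PostgreSQL, and Python stack from the skills list (plus pandas and SQLAlchemy). The bucket, key, table, and connection string are hypothetical.

```python
# Minimal ETL sketch (hypothetical names): pull a CSV extract from S3,
# apply a light transformation, load it into PostgreSQL, and log timing
# so slow runs can be spotted. Assumes boto3, pandas, and SQLAlchemy.
import io
import logging
import time

import boto3
import pandas as pd
from sqlalchemy import create_engine

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")


def run_pipeline(bucket: str, key: str, dsn: str, table: str) -> int:
    start = time.monotonic()

    # Extract: read the raw CSV object from S3.
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Transform: normalize column names and drop exact duplicates.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()

    # Load: append into a PostgreSQL staging table.
    engine = create_engine(dsn)  # e.g. "postgresql+psycopg2://user:pass@host/db"
    df.to_sql(table, engine, if_exists="append", index=False)

    log.info("Loaded %d rows into %s in %.1fs", len(df), table, time.monotonic() - start)
    return len(df)


if __name__ == "__main__":
    run_pipeline("example-bucket", "exports/daily.csv",
                 "postgresql+psycopg2://user:pass@localhost/analytics", "staging_daily")
```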

Related Jobs

🔥 Senior Data Engineer
Posted about 16 hours ago

📍 United States, Canada

🧭 Regular

💸 125,000 - 160,000 USD per year

🔍 Digital driver assistance services

🏢 Company: Agero | 👥 1001-5000 | 💰 $4,750,000 over 2 years ago | Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources (a minimal sketch follows this list).
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
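For illustration only: a minimal Airflow DAG sketch of the ETL/ELT orchestration referenced above, assuming Airflow 2.x with dbt invoked from a BashOperator. The DAG id, schedule, ingest step, and dbt project path are hypothetical.

```python
# Hypothetical Airflow 2.x DAG: land a daily extract, then run dbt models on it.
# Task names, paths, and the ingest logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def ingest_daily_extract() -> None:
    # Placeholder for pulling source data (API, S3 drop, Fivetran-landed tables, ...).
    print("ingesting daily extract")


with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; use schedule_interval on older versions
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_daily_extract)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",
    )
    ingest >> transform  # run dbt only after the raw data has landed
```

In a setup like this, Snowflake cost and query optimization would typically live in the dbt models and warehouse configuration rather than in the DAG itself.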

🪄 Skills: AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

🔥 Data Engineer
Posted 5 days ago

📍 California

🧭 Full-Time

💸 145,000 USD per year

🔍 Health Insurance

🏢 Company: Sidecar Health | 👥 101-250 | 💰 $165,000,000 Series D 7 months ago | 🫂 Last layoff over 2 years ago | Health Insurance, InsurTech, Insurance, Health Care, FinTech

Requirements:
  • Master’s degree or foreign degree equivalent in Computer Science or a related field.
  • 1+ years of experience in Data Engineer or Software Engineer roles.
  • Proficiency in SQL and Python, with the ability to write complex SQL statements.
  • Hands-on experience with ETL processes, real-time and batch data processing.
  • Familiarity with Spark, Athena, Docker, and version control systems like GIT.
  • Knowledge of secure, scalable, cloud-based architectures compliant with HIPAA or PCI.
  • Experience in creating data visualizations using Tableau or ThoughtSpot.
  • Ability to translate business requirements into scalable software solutions.
Responsibilities:
  • Use SQL and Python on AWS to build ETL jobs and data pipelines for data integration into Snowflake.
  • Leverage DBT to transform data, consolidate records, and create clean data models.
  • Utilize AWS technologies to send reports and support business teams.
  • Containerize and orchestrate data pipelines with Docker and Airflow.
  • Perform data quality checks and ensure data reliability (a minimal sketch follows this list).
  • Develop reports and dashboards using Tableau and ThoughtSpot.
  • Participate in agile development activities.
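For illustration only: a minimal sketch of a post-load data quality check against a Snowflake table, using the Snowflake Python connector. Account settings, schema, table, and column names are hypothetical.

```python
# Hypothetical post-load check: fail loudly if a required key column contains NULLs.
# Connection parameters and identifiers are placeholders.
import os

import snowflake.connector


def check_no_null_member_ids(table: str) -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE member_id IS NULL")
        null_rows = cur.fetchone()[0]
        if null_rows:
            raise ValueError(f"{table}: {null_rows} rows have a NULL member_id")
        print(f"{table}: null-check passed")
    finally:
        conn.close()


if __name__ == "__main__":
    check_no_null_member_ids("CLAIMS_DAILY")
```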

🪄 Skills: AWS, Docker, Python, SQL, ETL, Snowflake, Tableau, Airflow, Spark


📍 United States

💸 124,300 - 186,500 USD per year

🔍 Technology

🏢 Company: SMX | 👥 1001-5000 | Cloud Computing, Analytics, Cloud Security, Information Technology, Cyber Security

Requirements:
  • 2+ years of experience in a related field.
  • Expertise in complex SQL.
  • Knowledge of AWS technologies.
  • Solid understanding of RDBMS concepts (Postgres, RedShift, SQL Server), logical data modeling, and database/query optimization.
  • Familiarity with AWS data migration tools (DMS).
  • Scripting knowledge in Python/Lambda.
  • Ability to obtain and maintain a Public Trust clearance; US Citizenship is required.
  • Strong team collaboration and communication skills.
Responsibilities:
  • Assist the Data Architect and customer in collecting requirements and documenting tasks for maintaining and enhancing the data loading platform (ETL/data pipelines).
  • Implement data loading and quality control activities based on project requirements and customer tickets.
  • Implement CI/CD pipelines related to data warehouse maintenance.
  • Code and implement unique data migration requirements using AWS technologies like DMS and Lambda/Python (a minimal sketch follows this list).
  • Implement and resolve issues for user identity and access management to various datasets.
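For illustration only: a minimal, hypothetical AWS Lambda handler of the kind the DMS/Lambda bullet above describes, restarting a DMS replication task with boto3. The task ARN and environment variable are placeholders.

```python
# Hypothetical Lambda handler: trigger a full reload of a DMS replication task,
# e.g. after a one-off source fix. The ARN comes from an environment variable.
import os

import boto3

dms = boto3.client("dms")


def lambda_handler(event, context):
    task_arn = os.environ["REPLICATION_TASK_ARN"]
    response = dms.start_replication_task(
        ReplicationTaskArn=task_arn,
        StartReplicationTaskType="reload-target",  # full reload of the target tables
    )
    return {"task_arn": task_arn, "status": response["ReplicationTask"]["Status"]}
```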

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, CI/CD

Posted 5 days ago

📍 United States

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and Analytics

Requirements:
  • Master's degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field plus 3 years of experience in analytics software solutions.
  • Bachelor's degree in similar fields plus 5 years of experience is also acceptable.
  • 3 years of experience with Python and associated packages including Spark, AWS, S3, Java, JavaScript, and Adobe Analytics.
  • Proficiency in SQL for querying and managing data.
  • Experience in analytics programming languages such as Python (with Pandas).
  • Experience in handling large volumes of data and code management tools like Git.
  • 2 years of experience managing computer program orchestrations and using open-source management platforms like AirFlow.
Responsibilities:
  • Develop, test, and orchestrate econometric, statistical, and machine learning modules.
  • Conduct unit, integration, and regression testing (a minimal sketch follows this list).
  • Create data processing systems for analytic research and development.
  • Design, document, and present process flows for analytical systems.
  • Partner with Software Engineering for cloud-based solutions.
  • Orchestrate modules via directed acyclic graphs using workflow management systems.
  • Work in an agile development environment.
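For illustration only: a minimal pytest-style sketch of the unit/regression testing duty above. The normalize_spend() transformation and its expected values are invented for the example.

```python
# Hypothetical unit and regression tests for a small transformation step (pytest).
import pandas as pd


def normalize_spend(df: pd.DataFrame) -> pd.DataFrame:
    """Scale media spend to thousands and fill missing values with zero."""
    out = df.copy()
    out["spend_k"] = out["spend"].fillna(0) / 1_000.0
    return out


def test_normalize_spend_handles_missing_values():
    raw = pd.DataFrame({"spend": [1500.0, None, 250.0]})
    result = normalize_spend(raw)
    assert result["spend_k"].tolist() == [1.5, 0.0, 0.25]


def test_normalize_spend_regression_against_golden_output():
    # Regression check: a frozen input fixture must keep producing the
    # previously accepted ("golden") result.
    fixture = pd.DataFrame({"spend": [100.0, 200.0]})
    golden = pd.DataFrame({"spend": [100.0, 200.0], "spend_k": [0.1, 0.2]})
    pd.testing.assert_frame_equal(normalize_spend(fixture), golden)
```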

🪄 Skills: AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Data engineering, Regression testing, Pandas, Spark

Posted 6 days ago

📍 U.S.

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and analytics

Requirements:
  • Master’s degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field, plus 3 years of experience delivering analytics software solutions, or a Bachelor’s degree plus 5 years.
  • Must have 3 years of experience with Python, associated packages including Spark, AWS, and SQL for data management.
  • Experience with analytics programming languages, parallel processing, and code management tools like Git.
  • Two years of experience managing program orchestrations and working with open-source management platforms such as AirFlow.
Responsibilities:
  • Modern analytics programming: developing, testing, and orchestrating econometric, statistical, and machine learning modules.
  • Unit, integration and regression testing.
  • Understanding the deployment of econometric models and learning methods.
  • Create data processing systems for analytics research and development.
  • Design, write, and test modules for Nielsen analytics cloud-based platforms.
  • Extract data using SQL and orchestrate modules via workflow management platforms.
  • Design, document, and present process flows for analytical systems.
  • Partner with software engineering to build analytical solutions in an agile environment.

🪄 Skills: AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Spark

Posted 6 days ago

📍 United States of America

🧭 Full-Time

💸 110,000 - 160,000 USD per year

🔍 Insurance industry

🏢 Company: Verikai_External

Requirements:
  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficient in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, Dynamo, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
Responsibilities:
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets (a minimal sketch follows this list).
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.
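For illustration only: a minimal pandas sketch of the deduplication duty referenced above. Column names and the matching rule are hypothetical.

```python
# Hypothetical sketch: combine a third-party feed with existing customer records,
# normalize the matching keys, and keep one (most recent) row per person.
import pandas as pd


def deduplicate_members(existing: pd.DataFrame, incoming: pd.DataFrame) -> pd.DataFrame:
    combined = pd.concat([existing, incoming], ignore_index=True)

    # Normalize the fields used for matching before comparing rows.
    combined["email"] = combined["email"].str.strip().str.lower()
    combined["last_name"] = combined["last_name"].str.strip().str.title()

    # Keep the most recently updated record for each (email, last_name) pair.
    combined = combined.sort_values("updated_at")
    return combined.drop_duplicates(subset=["email", "last_name"], keep="last")


if __name__ == "__main__":
    existing = pd.DataFrame(
        {"email": ["a@x.com"], "last_name": ["Smith"], "updated_at": ["2024-01-01"]}
    )
    incoming = pd.DataFrame(
        {"email": [" A@X.com "], "last_name": ["smith"], "updated_at": ["2024-02-01"]}
    )
    print(deduplicate_members(existing, incoming))  # one row, keeping the newer record
```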

🪄 Skills: AWS, Python, SQL, DynamoDB, ETL, Spark

Posted 7 days ago

📍 Colorado

💸 106,000 - 139,000 USD per year

🔍 Legal services

🏢 Company: Rocket Lawyer | 👥 251-500 | 💰 $223,000,000 Debt Financing almost 4 years ago | Legal Tech, Law Enforcement, Legal

Requirements:
  • 5+ years of Python experience.
  • 3+ years leveraging technologies such as Airflow and Apache Spark.
  • Experience working with large language models, diffusion models, or other generative models.
  • Experience with MLOps tools and practices.
  • Strong understanding of data architectures and patterns.
  • Experience with containerization technologies like Docker and Kubernetes.
  • Contributions to open-source projects.
  • Experience in DataOps and MLOps implementation and support.
  • Experience in building and supporting AI/ML platforms.
Responsibilities:
  • Design, develop, and maintain robust, scalable data pipelines for processing large datasets for AI model training.
  • Perform data cleaning, normalization, transformation, and feature engineering, including handling unstructured data.
  • Build and manage data infrastructure like data lakes and warehouses, optimized for AI workloads.
  • Implement data quality checks and monitoring systems for data accuracy and consistency.
  • Contribute to MLOps best practices for data management and model deployment.
  • Work with GCP and Snowflake for data and AI offerings.
  • Optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness.

🪄 Skills: Docker, Python, Apache Airflow, GCP, Kubernetes, Snowflake, Data engineering

Posted 8 days ago
🔥 Staff Data Engineer
Posted 8 days ago

📍 United States

🧭 Full-Time

💸 170,000 - 195,000 USD per year

🔍 Healthcare

🏢 Company: Parachute Health | 👥 101-250 | 💰 $1,000 about 5 years ago | Medical, Health Care, Software

Requirements:
  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
Responsibilities:
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems.
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.

🪄 Skills: Python, ETL, Airflow, Data engineering, Data visualization


📍 North Carolina

🧭 Full-Time

💸 99,000 - 131,000 USD per year

🔍 Legal services

🏢 Company: Rocket Lawyer | 👥 251-500 | 💰 $223,000,000 Debt Financing almost 4 years ago | Legal Tech, Law Enforcement, Legal

Requirements:
  • 5+ years of experience as a Data Engineer with successful data warehouse/lake implementation.
  • Deep expertise in Snowflake or similar tools for building data systems.
  • Strong skills in HiveQL and SQL, with knowledge of data warehousing concepts.
  • Proficient programming skills in Python.
  • Understanding of data architectures and patterns.
  • Experience with Apache Spark for large-scale processing is a plus.
  • Familiarity with dbt for modeling in Snowflake preferred.
  • Experience with Google Cloud Platform (GCP) and data services is a plus.
  • Excellent communication skills for cross-functional collaboration.
  • Strong problem-solving skills and dedication to scalable data solutions.
  • Snowflake certification is preferred.
Responsibilities:
  • Design and develop data pipelines using tools like Apache Airflow or Snowflake's external tables.
  • Translate HiveQL queries to Snowflake SQL for efficient data processing (a minimal sketch follows this list).
  • Utilize dbt in Snowflake for data modeling, focusing on governance.
  • Configure and manage data pipelines in Google Cloud Platform (GCP).
  • Collaborate with analysts to understand data requirements for migration.
  • Monitor and optimize data pipeline performance.
  • Implement automated testing for data quality post-migration.
  • Document the migration process and support the migrated data warehouse.
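For illustration only: a hypothetical before/after pair showing the kind of HiveQL-to-Snowflake translation described above, held in Python strings. Table and column names are invented; the point is the dialect difference (LATERAL VIEW explode vs. LATERAL FLATTEN).

```python
# Hypothetical HiveQL query and its Snowflake SQL translation.
HIVEQL = """
SELECT o.order_id, item.sku
FROM orders o
LATERAL VIEW explode(o.line_items) items AS item
WHERE o.order_date >= '2024-01-01'
"""

SNOWFLAKE_SQL = """
SELECT o.order_id, item.value:sku::string AS sku
FROM orders o,
     LATERAL FLATTEN(input => o.line_items) item
WHERE o.order_date >= '2024-01-01'
"""

if __name__ == "__main__":
    # In a migration, each translated query would be executed and its results
    # compared against the Hive output as an automated post-migration check.
    print(SNOWFLAKE_SQL)
```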

🪄 Skills: Python, SQL, Apache Airflow, GCP, Snowflake

Posted 9 days ago
🔥 Lead Data Engineer
Posted 15 days ago

📍 United States

🔍 Defense and Financial Technology

🏢 Company: 540

Requirements:
  • Bachelor's Degree.
  • 8+ years of related experience.
  • Well-versed in Python.
  • Experience building and managing data pipelines.
  • Proficient in data analytics tools such as Databricks.
  • Experience building dashboards using PowerBI and/or similar tools.
  • Experience working via the terminal / command line.
  • Experience consuming data via APIs (a minimal sketch follows the responsibilities below).
  • Hands-on experience using Jira and Confluence.
Responsibilities:
  • Working directly with government leadership to manage teams, customers, and data requirements.
  • Assisting Audit teams with monthly data ingestions from Army systems.
  • Management of data initiatives and small projects from start to finish.
  • Working with Army FM&C Lead to prioritize Advana data product requirements.
  • Developing recurring and ad hoc financial datasets.
  • Developing Advana datasets and analytical products to enable the Army to report on all Financial System data.
  • Reviewing data pipeline code via GitLab to ensure it meets team and code standards.
  • Overseeing overall architecture and technical direction for FM&C data projects.
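For illustration only: a minimal Python sketch of the API data consumption mentioned in the requirements, using the requests library. The endpoint, token, and pagination fields are hypothetical.

```python
# Hypothetical sketch: pull paginated records from a REST API for a downstream
# pipeline. URL, auth token, and response shape are placeholders.
import requests


def fetch_all_records(base_url: str, token: str) -> list[dict]:
    records: list[dict] = []
    page = 1
    while True:
        resp = requests.get(
            f"{base_url}/records",
            params={"page": page, "page_size": 500},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["results"])  # assumed response field
        if not payload.get("next_page"):    # assumed pagination field
            break
        page += 1
    return records


if __name__ == "__main__":
    rows = fetch_all_records("https://api.example.com/v1", "REPLACE_ME")
    print(f"fetched {len(rows)} records")
```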

🪄 Skills: AWS, Python
