Apply

Data Engineer

Posted about 1 month ago

πŸ’Ž Seniority level: Mid-level, 3-5+ years

πŸ“ Location: United States, Egypt, Brazil, Argentina, Spain

πŸ” Industry: Healthcare or tech industry

🏒 Company: Bask Health · πŸ‘₯ 11-50 · πŸ’° $759,987 Seed (over 1 year ago) · Electronic Health Record (EHR), SaaS, Wellness, Health Care, Home Health Care

πŸ—£οΈ Languages: English

⏳ Experience: 3-5+ years

πŸͺ„ Skills: AWS, SQL, Apache Airflow, ETL, GCP, JavaScript, TypeScript, Azure, Data modeling

Requirements:
  • Bachelor’s degree in Computer Science, Data Engineering, or a related field.
  • 3-5+ years of professional experience in data engineering, data infrastructure, or similar roles, preferably in the healthcare or tech industry.
  • Proven ability to design and implement scalable, reliable, and secure data pipelines in production environments.
  • Advanced proficiency in SQL and programming languages such as JavaScript or TypeScript.
  • Hands-on experience with data pipeline orchestration tools (e.g., Apache Airflow, dbt, Luigi); a minimal orchestration sketch follows the Responsibilities list.
  • Expertise in ETL/ELT processes, including designing and maintaining data warehouses or lakes.
  • Familiarity with cloud platforms like AWS, GCP, or Azure.
  • Strong understanding of data modeling and schema design for large-scale analytics.
  • Experience working with streaming data frameworks.
Responsibilities:
  • Build and maintain the robust data infrastructure that powers Bask Health’s telehealth platform.
  • Design pipelines and models for product analysis and craft patient-facing data products.
  • Create scalable, efficient, and reliable data solutions to enhance the quality of care and drive meaningful healthcare insights.
  • Contribute to informed decision-making across teams and ensure innovation in healthcare experiences for patients and providers worldwide.
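
The orchestration tooling named in the requirements maps naturally onto a small Airflow DAG. Below is a minimal, hypothetical sketch using Airflow's TaskFlow API (Airflow 2.4+); the DAG name, fields, and records are invented for illustration and are not Bask Health's actual pipeline.

```python
# A minimal daily ETL sketch with Apache Airflow's TaskFlow API.
# All names and records here are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def patient_events_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from an EHR API or source database.
        return [{"patient_id": 1, "event": "intake_form_submitted"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Normalize and enrich records before loading.
        return [{**r, "processed": True} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to a warehouse via an Airflow connection.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


patient_events_etl()
```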

Related Jobs

Apply
πŸ”₯ Data Engineer
Posted about 2 hours ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 130,000 - 200,000 USD per year

πŸ” Healthcare

🏒 Company: Datavant

Requirements:
  • 3+ years of experience as a data engineer, analytics engineer, or data scientist.
  • 1+ year of experience building and maintaining an enterprise-scale data lake and/or data warehouse.
  • Strong collaborative and communication skills.
  • Mastery of ANSI SQL and data modeling best practices.
  • Deep experience with data warehouse technologies like Snowflake, BigQuery, or Redshift.
  • Expertise in Python.
Responsibilities:
  • Deliver a world-class data platform from the ground up.
  • Plan and delegate complex projects with broad scope.
  • Mentor and grow early career developers or engineers.
  • Facilitate technical discussions to solve problems effectively.
  • Engage with stakeholders to meet their needs.
  • Build, upgrade, and maintain data-related infrastructure and monitoring across multiple clouds.
  • Write performant, readable, and reusable code.
  • Review code to ensure high technical quality.

Python, SQL, Apache Airflow, ETL, Snowflake, Data engineering, Data modeling

Apply
πŸ”₯ Data Engineer
Posted about 7 hours ago

πŸ“ United States

πŸ’Έ 150,000 - 190,000 USD per year

πŸ” Healthcare

🏒 Company: Oshi Health · πŸ‘₯ 51-100 · πŸ’° $60,000,000 Series C (4 months ago) · Medical, Mobile, Health Care

Requirements:
  • Hold a BS/BA degree in Computer Science, Math, Physics, or related field, or equivalent experience.
  • 3+ years of data development experience in startup environments.
  • Ability to understand complex requirements and develop scalable solutions.
  • Advanced SQL skills and knowledge of data warehousing standards.
  • Proficient in programming languages such as Golang or Python.
  • Familiar with dbt (Data Build Tool) for warehouse transformations.
  • Knowledge of cloud environments and FHIR standards is a plus.
  • Understanding of data security and HIPAA compliance is advantageous.
Responsibilities:
  • Contribute to Oshi's existing data warehouse for product, clinical, and strategy teams.
  • Collaborate with marketing and growth teams to build supporting data pipelines.
  • Develop reusable queries, data quality tests, and insights for reporting.
  • Design and implement complex data models, including real-time analytics.
  • Work across data stack including CI/CD pipelines and platform integrations.
  • Support and standardize data governance structures for sensitive client data.

Python, SQL, Cloud Computing, ETL, Data engineering, CI/CD, Data modeling

Apply

πŸ“ United States

πŸ’Έ 131,414 - 197,100 USD per year

πŸ” Mental healthcare

🏒 Company: Headspace · πŸ‘₯ 11-50 · Wellness, Health Care, Child Care

Requirements:
  • 10+ years of success in enterprise data solutions and high-impact initiatives.
  • Expertise in platforms like Databricks, Snowflake, dbt, and Redshift.
  • Experience designing and optimizing real-time and batch ETL pipelines.
  • Demonstrated leadership and mentorship abilities in engineering.
  • Strong collaboration skills with product and analytics stakeholders.
  • Bachelor’s or advanced degree in Computer Science, Engineering, or a related field.
Responsibilities:
  • Drive the architecture and implementation of PySpark data pipelines (see the sketch below).
  • Create and enforce design patterns in code and schema.
  • Design and lead secure and compliant data warehousing platforms.
  • Partner with analytics and product leaders for actionable insights.
  • Mentor team members on dbt architecture and foster a data-first culture.
  • Act as a thought leader on data strategy and cross-functional roadmaps.
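
As a rough illustration of the PySpark pipeline work named above, here is a minimal batch aggregation sketch. The bucket paths and column names are invented; this assumes a generic Spark environment, not Headspace's actual stack.

```python
# Hypothetical batch rollup: raw events -> daily per-user aggregates.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_session_rollup").getOrCreate()

# Invented source path and schema (event_ts, user_id, duration_sec).
events = spark.read.parquet("s3://example-bucket/raw/meditation_events/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("session_count"),
        F.sum("duration_sec").alias("total_seconds"),
    )
)

# Partitioned write to an invented mart location.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_sessions/"
)
```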

SQL, Cloud Computing, ETL, Snowflake, Data engineering, Data modeling, Data analytics

Posted 1 day ago
Apply

πŸ“ Spain

πŸ’Έ 80,000 - 110,000 EUR per year

πŸ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficiency in SQL and dbt for data transformations.
  • Fluency in Python or other modern programming languages.
  • Experience with infrastructure-as-code tools such as Terraform.
  • Knowledge of data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS or other cloud platforms like Azure or GCP.
  • Ability to provide constructive code reviews.
  • Strong communication and collaboration skills.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate with teams across engineering, analytics, and product to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review teammates' work and offer feedback.
  • Serve as a technical mentor for other engineers.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation.

AWS, Python, SQL, Terraform, Data modeling

Posted 4 days ago
Apply

πŸ“ Poland, Spain, United Kingdom

πŸ” Beauty marketplace

🏒 Company: Booksy · πŸ‘₯ 501-1000 · πŸ’° Debt Financing (4 months ago) · Mobile Payments, Marketplace, SaaS, Payments, Mobile Apps, Wellness, Software

Requirements:
  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar.
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
Responsibilities:
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.

GCP, Data engineering, CI/CD, Data modeling

Posted 5 days ago
Apply

πŸ“ Latin America

πŸ” AI economy, workforce development

🏒 Company: Correlation One · πŸ‘₯ 251-500 · πŸ’° $5,000,000 Series A (almost 7 years ago) · Information Services, Analytics, Information Technology

Requirements:
  • 7+ years in a Data Engineering role with experience in data warehouses and ETL/ELT.
  • Advanced SQL experience and skills in database design.
  • Familiarity with pipeline monitoring and cloud environments (e.g., GCP).
  • Experience with APIs, Airflow, dbt, Git, and creating microservices.
  • Knowledge of implementing CDC with technologies like Kafka (a consumer sketch appears below).
  • Solid understanding of software development practices and agile methodologies.
  • Proficiency in object-oriented scripting languages such as Python or Scala.
  • Experience with CI/CD processes and source control tools like GitHub.
Responsibilities:
  • Act as the data lake subject matter expert to develop technical vision.
  • Design the architecture for a well-architected data lakehouse.
  • Collaborate with architects to design the ELT process from data ingestion to analytics.
  • Create standard frameworks for software development.
  • Mentor junior engineers and support development teams.
  • Monitor database performance and adhere to data engineering best practices.
  • Develop schema design for reports and analytics.
  • Engage in hands-on development across the technical stack.
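
For the CDC-with-Kafka requirement flagged above, here is a minimal consumer sketch using the kafka-python client. The topic name follows a Debezium-style convention and the payload fields are illustrative assumptions, not Correlation One's actual schema.

```python
# Minimal CDC consumer sketch (kafka-python); names are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "postgres.public.orders",  # invented Debezium-style CDC topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    change = message.value
    # Debezium-style envelopes carry an operation code and row images;
    # real payloads may nest these under a "payload" key.
    op = change.get("op")        # "c" = create, "u" = update, "d" = delete
    after = change.get("after")  # row state after the change, None on deletes
    print(op, after)
```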

PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Git, Kafka, MongoDB, Data engineering, CI/CD, Terraform, Microservices, Scala

Posted 5 days ago
Apply

πŸ“ Serbia, Spain, Portugal

🧭 Full-Time

πŸ” EdTech

🏒 Company: Learnlight · πŸ‘₯ 251-500 · πŸ’° Private (almost 2 years ago) · Corporate Training, E-Learning, Professional Services

Requirements:
  • Advanced proficiency in English.
  • Proficiency in SQL and Python, and experience with both SQL and NoSQL databases.
  • Experience with APIs; familiarity with Airflow and knowledge of Big Data technologies are a plus.
  • Excellent collaboration and communication skills.
  • Strong organizational skills, attention to detail, problem-solving, critical thinking.
  • Bachelor's degree in information systems, information technology, computer science, or similar.
Responsibilities:
  • Design and maintain scalable data pipelines.
  • Build and optimize data pipelines for large datasets.
  • Assist in database design and development.
  • Write and automate data scripts using SQL and Python.
  • Collaborate with internal teams to translate business needs into database designs.
  • Improve database performance through code review and troubleshooting.
  • Ensure data security and document processes.

Python, SQL, Apache Airflow, NoSQL

Posted 5 days ago
Apply

πŸ“ Spain

🧭 Full-Time

πŸ” Technology and innovation

🏒 Company: Plain Concepts · πŸ‘₯ 251-500 · Consulting, Apps, Mobile Apps, Information Technology, Mobile

Requirements:
  • At least 2 years of experience in data engineering.
  • Strong experience with Python or Scala and Spark for processing large datasets.
  • Solid experience in Cloud platforms (Azure or AWS).
  • Hands-on experience building data pipelines (CI/CD).
  • Experience with testing (unit, integration, etc.).
  • Knowledge of SQL and NoSQL databases.
  • Bonus points for experience with Databricks, Snowflake, or Fabric.
  • Bonus points for experience with IaC (Infrastructure as Code).
Responsibilities:
  • Participate in the design and development of data solutions for challenging projects.
  • Develop projects from scratch with minimal supervision and strong team collaboration.
  • Be a key player in fostering best practices, clean and reusable code.
  • Develop ETLs using Spark (Python/Scala).
  • Work on cloud-based projects (Azure/AWS).
  • Build scalable pipelines using a variety of technologies.

AWS, Python, SQL, ETL, Azure, NoSQL, Spark, CI/CD, Scala

Posted 6 days ago
Apply

πŸ“ United States, Canada

🧭 Regular

πŸ’Έ 125,000 - 160,000 USD per year

πŸ” Digital driver assistance services

🏒 Company: Agero · πŸ‘₯ 1001-5000 · πŸ’° $4,750,000 (over 2 years ago) · Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor’s degree in a technical field with 5+ years of industry experience, or a Master’s degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, dbt, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using dbt or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions (a minimal check is sketched below).
  • Collaborate cross-functionally and document data flows, processes, and architecture.
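
To make the data-quality responsibility concrete, here is a small rule-based check in pandas (which appears in the skill tags below). The claims DataFrame, column names, and thresholds are invented for illustration.

```python
# Hypothetical rule-based data-quality checks with pandas.
import pandas as pd


def check_quality(df: pd.DataFrame) -> list[str]:
    """Return human-readable descriptions of any rule violations."""
    failures = []
    if df["claim_id"].duplicated().any():
        failures.append("claim_id is not unique")
    if df["amount_usd"].lt(0).any():
        failures.append("amount_usd contains negative values")
    null_rate = df["provider_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"provider_id null rate {null_rate:.1%} exceeds 1%")
    return failures


sample = pd.DataFrame({
    "claim_id": [1, 2, 2],
    "amount_usd": [120.0, -5.0, 80.0],
    "provider_id": [10, None, 12],
})
print(check_quality(sample))  # all three rules fail on this sample
```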

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

Posted 6 days ago
Apply

πŸ“ Brazil, Argentina, Peru, Colombia, Uruguay

πŸ” AdTech

🏒 Company: Workana Premium

Requirements:
  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with relational databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
Responsibilities:
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data.
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing (see the Pub/Sub sketch below).
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.
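
As a sketch of the Pub/Sub-driven processing flagged above, here is a minimal streaming-pull consumer using the google-cloud-pubsub client. The project and subscription names are placeholders, not the platform's real resources.

```python
# Minimal Pub/Sub streaming-pull consumer; resource names are placeholders.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "ad-events-sub")


def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # Process one audience event, then ack so it is not redelivered.
    print(f"received: {message.data!r}")
    message.ack()


streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    # Block the main thread; messages are handled on background threads.
    streaming_pull.result(timeout=30)
except TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()  # wait for shutdown to complete
```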

Docker, Python, SQL, ETL, GCP, Java, Kafka, Kubernetes, Go, Data modeling

Posted 10 days ago