
Senior Data Engineer

Posted about 2 months ago


💎 Seniority level: Senior, 5+ years

📍 Location: United States, EU

💸 Salary: 200,000 - 250,000 USD per year

🔍 Industry: Crypto, Blockchain

🏢 Company: Phantom 👥 51-100 💰 $109,000,000 Series B about 3 years ago · Cryptocurrency, Ethereum, Bitcoin, FinTech

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: Python, SQL, Snowflake, Data modeling, A/B testing

Requirements:
  • 5+ years of experience building data infrastructure
  • Experience in startup environments
  • Deep expertise in Snowflake and dbt
  • Strong background in data modeling and architecture
  • Experience implementing data quality frameworks
  • Expert-level SQL skills and proficiency in Python
Responsibilities:
  • Design and implement robust data architecture
  • Drive data quality initiatives
  • Lead sophisticated data modeling efforts
  • Build and scale A/B testing frameworks
  • Collaborate with stakeholders for data solutions
  • Mentor teams on data best practices

Related Jobs


📍 Germany, Italy, Netherlands, Portugal, Romania, Spain, UK

🧭 Full-Time

🔍 Wellness

Requirements:
  • You have a proven track record of designing and building robust, scalable, and maintainable data models and corresponding pipelines from business requirements.
  • You are skilled at engaging with engineering and product teams to elicit requirements.
  • You are comfortable with big data concepts, ensuring data is efficiently ingested, processed, and made available for data scientists, business analysts, and product teams.
  • You are experienced in maintaining data consistency across the entire data ecosystem.
  • You have experience maintaining and debugging data pipelines in high-criticality production environments, ensuring reliability and performance.
Responsibilities:
  • Develop and maintain efficient and scalable data models and structures to support analytical workloads.
  • Design, develop, and maintain data pipelines that transform and process large volumes of data while embedding business context and semantics.
  • Implement automated data quality checks to ensure consistency, accuracy, and reliability of data.
  • Ensure correct adoption and usage of Wellhub’s data by data practitioners across the company.
  • Live the mission: inspire and empower others by genuinely caring for your own wellbeing and that of your colleagues. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.

SQL, Apache Airflow, Kubernetes, Apache Kafka, Data engineering, Spark, Data modeling

Posted 3 days ago

📍 Portugal

🧭 Full-Time

🏢 Company: Wellhub

Requirements:
  • Proven track record of designing and building robust, scalable, and maintainable data models and corresponding pipelines from business requirements.
  • Skilled at engaging with engineering and product teams to elicit requirements.
  • Comfortable with big data concepts, ensuring data is efficiently ingested, processed, and made available for data scientists, business analysts, and product teams.
  • Experienced in maintaining data consistency across the entire data ecosystem.
  • Experience maintaining and debugging data pipelines in high-criticality production environments, ensuring reliability and performance.
  • Motivated to contribute to a data-driven culture and take pride in seeing the impact of your work across the company.
Responsibilities:
  • Develop and maintain efficient and scalable data models and structures to support analytical workloads.
  • Design, develop, and maintain data pipelines that transform and process large volumes of data while embedding business context and semantics.
  • Implement automated data quality checks to ensure consistency, accuracy, and reliability of data.
  • Ensure correct adoption and usage of Wellhub’s data by data practitioners across the company.
  • Live the mission: inspire and empower others by genuinely caring for your own wellbeing and that of your colleagues. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.

SQL, Apache Airflow, ETL, Kubernetes, Apache Kafka, Data engineering, Spark, Data visualization, Data modeling, Data analytics, Data management

Posted 3 days ago
🔥 Senior Data Engineer
Posted 5 days ago

📍 Worldwide

🔍 Hospitality

🏢 Company: Lighthouse

Requirements:
  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up to date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations, and have the ability to implement them
  • Great communication: regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing, or event-sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana and Prometheus.
Responsibilities:
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to enable them to bring their research to production-grade data solutions, using technologies such as Airflow, dbt, or MLflow (but not limited to those).
  • As part of a platform team, communicate effectively with teams across the entire engineering organisation to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

🔥 Senior Data Engineer
Posted 6 days ago

📍 United States

🧭 Full-Time

💸 183,600 - 216,000 USD per year

🔍 Software Development

Requirements:
  • 6+ years of experience in a data engineering role building products, ideally in a fast-paced environment
  • Good foundations in Python and SQL.
  • Experience with Spark, PySpark, dbt, Snowflake, and Airflow
  • Knowledge of visualization tools such as Metabase and Jupyter Notebooks (Python)
Responsibilities:
  • Collaborate on the design and improvements of the data infrastructure
  • Partner with product and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs
  • Create data pipelines that stitch together various data sources in order to produce valuable business insights
  • Create real-time data pipelines in collaboration with the Data Science team

Python, SQL, Snowflake, Airflow, Data engineering, Spark, Data visualization, Data modeling

🔥 Senior Data Engineer
Posted 6 days ago

📍 United States

🧭 Full-Time

🔍 Healthcare

🏢 Company: Rad AI 👥 101-250 💰 $60,000,000 Series C 2 months ago · Artificial Intelligence (AI), Enterprise Software, Health Care

Requirements:
  • 4+ years of relevant experience in data engineering.
  • Expertise in designing and developing distributed data pipelines using big data technologies on large-scale data sets.
  • Deep, hands-on experience designing, planning, productionizing, maintaining, and documenting reliable and scalable data infrastructure and data products in complex environments.
  • Solid experience with big data processing and analytics on AWS, using services such as Amazon EMR and AWS Batch.
  • Experience with large-scale data processing technologies such as Spark.
  • Expertise in orchestrating workflows using tools like Metaflow.
  • Experience with various database technologies, including SQL and NoSQL databases (e.g., AWS DynamoDB, ElasticSearch, PostgreSQL).
  • Hands-on experience with containerization technologies such as Docker and Kubernetes.
Responsibilities:
  • Design and implement the data architecture, ensuring scalability, flexibility, and efficiency, using pipeline-authoring tools like Metaflow and large-scale data processing technologies like Spark.
  • Define and extend our internal standards for style, maintenance, and best practices for a high-scale data platform.
  • Collaborate with researchers and other stakeholders to understand their data needs, including model training and production monitoring systems, and develop solutions that meet those requirements.
  • Take ownership of key data engineering projects and work independently to design, develop, and maintain high-quality data solutions.
  • Ensure data quality, integrity, and security by implementing robust data validation, monitoring, and access controls.
  • Evaluate and recommend data technologies and tools to improve the efficiency and effectiveness of the data engineering process.
  • Continuously monitor, maintain, and improve the performance and stability of the data infrastructure.

AWS, Docker, SQL, ElasticSearch, ETL, Kubernetes, Data engineering, NoSQL, Spark, Data modeling


📍 Worldwide

🧭 Full-Time

Responsibilities:
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains, ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 6 days ago
🔥 Senior Data Engineer
Posted 6 days ago

📍 United States

🧭 Full-Time

💸 183,600 - 216,000 USD per year

🔍 Mental Healthcare

🏢 Company: Headway 👥 201-500 💰 $125,000,000 Series C over 1 year ago · Mental Health Care

Requirements:
  • 6+ years of experience in a data engineering role building products, ideally in a fast-paced environment
  • Good foundations in Python and SQL.
  • Experience with Spark, PySpark, dbt, Snowflake, and Airflow
  • Knowledge of visualization tools such as Metabase and Jupyter Notebooks (Python)
  • A knack for simplifying data, expressing information in charts and tables
Responsibilities:
  • Collaborate on the design and improvements of the data infrastructure
  • Partner with product and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs
  • Create data pipelines that stitch together various data sources in order to produce valuable business insights
  • Create real-time data pipelines in collaboration with the Data Science team

Python, SQL, ETL, Snowflake, Airflow, Data engineering, RDBMS, Spark, RESTful APIs, Data visualization, Data modeling


📍 Germany, Austria, Italy, Spain, Portugal

🔍 Financial and Real Estate

🏢 Company: PriceHubble 👥 101-250 💰 Non-equity Assistance over 3 years ago · Artificial Intelligence (AI), PropTech, Big Data, Machine Learning, Analytics, Real Estate

Requirements:
  • 3+ years of experience building and maintaining production data pipelines.
  • Excellent English communication skills, both spoken and written, to effectively collaborate with cross-functional teams and mentor other engineers; clear writing is key in our remote-first setup.
  • Proficient in working with geospatial data and leveraging geospatial features.
Responsibilities:
  • Work with backend engineers and data scientists to turn raw data into trusted insights, handling everything from scraping and ingestion to transformation and monitoring.
  • Navigate cost-value trade-offs to make decisions that deliver value to customers at an appropriate cost.
  • Develop solutions that work in over 10 countries, considering local specifics.
  • Lead a project from concept to launch with a temporary team of engineers.
  • Raise the bar and drive the team to deliver high-quality products, services, and processes.
  • Improve the performance, data quality, and cost-efficiency of our data pipelines at scale.
  • Maintain and monitor the data systems your team owns.

AWS, Docker, Leadership, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Git, Kubernetes, Apache Kafka, Data engineering, Data science, Spark, CI/CD, Problem Solving, RESTful APIs, Mentoring, Linux, Excellent communication skills, Teamwork, Cross-functional collaboration, Data visualization, Data modeling, Data management, English communication

Posted 6 days ago

📍 Poland, Ukraine, Cyprus

🧭 Full-Time

🔍 Software Development

🏢 Company: Competera 👥 51-100 💰 $3,000,000 Seed about 1 year ago · Artificial Intelligence (AI), Big Data, E-Commerce, Retail, Machine Learning, Analytics, Retail Technology, Information Technology, Enterprise Software, Software

Requirements:
  • 5+ years of experience in a data engineering role.
  • Strong knowledge of SQL, Spark, Python, Airflow, and binary file formats.
Responsibilities:
  • Contribute to the development of the new data platform.
  • Collaborate with platform and ML teams to create ETL pipelines that efficiently deliver clean and trustworthy data.
  • Engage in architectural decisions regarding the current and future state of the data platform.
  • Design and optimize data models based on business and engineering needs.

Python, SQL, ETL, Kafka, Airflow, Spark, Data modeling

Posted 6 days ago

📍 United States, Canada

🧭 Full-Time

🔍 Software Development

Requirements:
  • Strong hands-on experience with Python and core Python data processing tools such as pandas, numpy, scipy, scikit
  • Experience with cloud tools and environments like Docker, Kubernetes, GCP, and/or Azure
  • Experience with Spark/PySpark
  • Experience with data lineage and data cataloging
  • Relational and non-relational database experience
  • Experience with data warehouses and lakes, such as BigQuery, Databricks, or Snowflake
  • Experience in designing and building data pipelines that scale
  • Strong communication skills, with the ability to convey technical solutions to both technical and non-technical stakeholders
  • Experience working effectively in a fast-paced, agile environment as part of a collaborative team
  • Ability to work independently and as part of a team
  • Willingness and enthusiasm to learn new technologies and tackle challenging problems
  • Experience with Infrastructure as Code tools like Terraform
  • Advanced SQL expertise, including experience with complex queries, query optimization, and working with various database systems
Responsibilities:
  • Work with business stakeholders to understand their goals, challenges, and decisions
  • Assist with building solutions that standardize their data approach to common problems across the company
  • Incorporate observability and testing best practices into projects
  • Assist in the development of processes to ensure their data is trusted and well-documented
  • Effectively work with data analysts on refining the data model used for reporting and analytical purposes
  • Improve the availability and consistency of data points crucial for analysis
  • Stand up a reporting system in BigQuery from scratch, including data replication, infrastructure setup, dbt model creation, and integration with reporting endpoints
  • Revamp orchestration and execution to reduce critical data delivery times
  • Archive data from a live database to cold storage

AWS, SQL, Cloud Computing, Data Analysis, ETL, Data engineering, Data visualization, Data modeling

Posted 12 days ago