
Data Engineer

Posted 17 days ago


💎 Seniority level: Senior, 3+ years

📍 Location: United States

💸 Salary: 140,000 - 165,000 USD per year

🔍 Industry: FinTech SaaS

🏢 Company: MANTL 👥 51-200 💰 Convertible Note over 4 years ago · E-Commerce, Consumer Goods, Health Care, Men's

⏳ Experience: 3+ years

🪄 Skills: Python, SQL, Apache Airflow, Data engineering

Requirements:
  • 3+ years of experience writing production-level code in Python.
  • Experience building data pipelines and wrangling SQL.
  • Experience architecting solutions in Airflow, DBT, and Python.
  • Ability to dig into logs and ensure high-quality integrations.
  • Curiosity and a passion for solving complex problems.
Responsibilities:
  • Work on impactful Data Integrations to deliver trusted data and intelligence.
  • Own new integrations from design to GA.
  • Stand up the platform and tooling for real-time data flows (see the illustrative Airflow sketch after this list).
  • Collaborate with engineering, design, product, and development teams.
  • Rapidly triage and solve support tickets.
  • Devise quick fixes for document-type mapping changes and SQL tweaks.
  • Maintain rock-solid integrations while driving revenue.
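
To make the Airflow/DBT/Python stack named above concrete, here is a minimal, hypothetical sketch of a daily integration DAG that stages partner data and then runs a dbt transformation. The DAG id, task names, and dbt project path are illustrative assumptions, not this team's actual pipeline.

```python
# Illustrative only: a minimal daily DAG that stages raw integration data and
# then runs a dbt transformation. All names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_partner_files(**context):
    """Pull the partner's files into a staging area (stubbed out here)."""
    print("extracting partner files for", context["ds"])


with DAG(
    dag_id="partner_integration_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword; older releases use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_partner_files)
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/integrations",  # hypothetical path
    )
    extract >> transform
```

Keeping extraction and the dbt run as separate tasks keeps retries cheap: a failed transformation can be re-run without re-pulling partner files.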

Related Jobs

🔥 Data Engineer - FinTech
Posted about 8 hours ago

📍 U.S.

🧭 Contract

💸 55 - 60 USD per hour

🔍 Financial services

Requirements:
  • 3-5 years of experience as a Data Engineer, preferably in financial services.
  • Strong expertise in SQL, Python, and data pipeline development (ETL/ELT).
  • Experience working with third-party data vendors and integrating external datasets into enterprise systems.
  • Knowledge of cloud-based data solutions (AWS, Azure, or GCP).
  • Understanding of credit card line assignment and small business credit data (preferred).
  • Strong problem-solving skills with the ability to optimize large-scale data processing workflows.
  • Clear and concise communication skills, with the ability to present findings to leadership, clients, and stakeholders.
Responsibilities:
  • Develop and maintain ETL pipelines to ingest, process, and integrate third-party small business data with internal datasets.
  • Evaluate third-party data sources, ensuring data quality, completeness, and reliability for credit analysis.
  • Optimize data workflows to support credit line assignment decisions and enhance data-driven insights.
  • Implement data validation, transformation, and storage solutions for structured and unstructured data (a validation sketch follows this list).
  • Support data testing and troubleshooting to ensure accuracy and reliability in reports.
  • Collaborate with data scientists and analysts to develop data models, conduct A/B testing, and generate insights.
  • Ensure compliance with data security and governance policies, particularly in the financial services sector.
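
As a rough illustration of the validation and de-duplication work described above, here is a small pandas sketch for checking a third-party small business extract before it joins internal credit data. The column names and file path are assumptions, not the vendor's real schema.

```python
# Illustrative only: validate and de-duplicate a third-party small business
# extract before merging it with internal datasets. Schema is hypothetical.
import pandas as pd

REQUIRED_COLUMNS = ["business_id", "ein", "annual_revenue", "reported_at"]


def validate_vendor_extract(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["reported_at"])

    # Completeness: fail fast if the vendor dropped a required column.
    missing = set(REQUIRED_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"vendor extract missing columns: {sorted(missing)}")

    # Quality: drop rows without a key, then keep only the latest record
    # per business identifier.
    df = df.dropna(subset=["business_id"])
    df = df.sort_values("reported_at").drop_duplicates("business_id", keep="last")
    return df


if __name__ == "__main__":
    clean = validate_vendor_extract("vendor_extract.csv")  # hypothetical file
    print(f"{len(clean)} rows ready for the credit-line workflow")
```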

AWS, Python, SQL, ETL, GCP, Azure

🔥 Senior Data Engineer
Posted about 16 hours ago

📍 United States, Canada

🧭 Regular

💸 125,000 - 160,000 USD per year

🔍 Digital driver assistance services

🏢 Company: Agero 👥 1001-5000 💰 $4,750,000 over 2 years ago · Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake (see the cost-monitoring sketch after this list).
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
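
For the Snowflake cost-monitoring responsibility above, a sketch along these lines could pull the last week's credit consumption per warehouse via the Snowflake Python connector and the built-in ACCOUNT_USAGE views. The connection values are placeholders; in practice credentials come from a secrets manager or key-pair auth.

```python
# Illustrative only: report Snowflake credits used per warehouse over 7 days
# using the ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY view.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account locator
    user="data_engineer",   # hypothetical user
    password="***",         # placeholder; do not hard-code real credentials
)

CREDITS_BY_WAREHOUSE = """
    SELECT warehouse_name, SUM(credits_used) AS credits_7d
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits_7d DESC
"""

cur = conn.cursor()
try:
    cur.execute(CREDITS_BY_WAREHOUSE)
    for warehouse_name, credits_7d in cur.fetchall():
        print(f"{warehouse_name}: {credits_7d:.1f} credits in the last 7 days")
finally:
    cur.close()
    conn.close()
```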

AWS, Python, SQL, Apache Airflow, DynamoDB, ETL, Flask, MongoDB, Snowflake, FastAPI, Pandas, CI/CD, Data modeling

🔥 Data Engineer
Posted 5 days ago

📍 California

🧭 Full-Time

💸 145,000 USD per year

🔍 Health Insurance

🏢 Company: Sidecar Health 👥 101-250 💰 $165,000,000 Series D 7 months ago 🫂 Last layoff over 2 years ago · Health Insurance, InsurTech, Insurance, Health Care, FinTech

Requirements:
  • Master’s degree or foreign degree equivalent in Computer Science or a related field.
  • 1+ years of experience in Data Engineer or Software Engineer roles.
  • Proficiency in SQL and Python, with the ability to write complex SQL statements.
  • Hands-on experience with ETL processes, real-time and batch data processing.
  • Familiarity with Spark, Athena, Docker, and version control systems like Git.
  • Knowledge of secure, scalable, cloud-based architectures compliant with HIPAA or PCI.
  • Experience in creating data visualizations using Tableau or ThoughtSpot.
  • Ability to translate business requirements into scalable software solutions.
Responsibilities:
  • Use SQL and Python on AWS to build ETL jobs and data pipelines for data integration into Snowflake.
  • Leverage DBT to transform data, consolidate records, and create clean data models.
  • Utilize AWS technologies to send reports and support business teams.
  • Containerize and orchestrate data pipelines with Docker and Airflow.
  • Perform data quality checks and ensure data reliability.
  • Develop reports and dashboards using Tableau and ThoughtSpot.
  • Participate in agile development activities.

AWS, Docker, Python, SQL, ETL, Snowflake, Tableau, Airflow, Spark


📍 United States

💸 124,300 - 186,500 USD per year

🔍 Technology

🏢 Company: SMX 👥 1001-5000 · Cloud Computing, Analytics, Cloud Security, Information Technology, Cyber Security

Requirements:
  • 2+ years of experience in a related field.
  • Expertise in complex SQL.
  • Knowledge of AWS technologies.
  • Solid understanding of RDBMS concepts (Postgres, RedShift, SQL Server), logical data modeling, and database/query optimization.
  • Familiarity with AWS data migration tools (DMS).
  • Scripting knowledge in Python/Lambda.
  • Ability to obtain and maintain a Public Trust clearance; US Citizenship is required.
  • Strong team collaboration and communication skills.
Responsibilities:
  • Assist the Data Architect and customer in collecting requirements and documenting tasks for maintaining and enhancing the data loading platform (ETL/data pipelines).
  • Implement data loading and quality control activities based on project requirements and customer tickets.
  • Implement CI/CD pipelines related to data warehouse maintenance.
  • Code and implement unique data migration requirements using AWS technologies like DMS and Lambda/Python (a Lambda sketch follows this list).
  • Implement and resolve issues for user identity and access management to various datasets.
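
As an illustration of the Lambda/Python migration work above, here is a hedged sketch of a Lambda handler that reacts to an S3 object-created event and loads the file into Redshift through the Redshift Data API. The cluster, database, role ARN, and table names are placeholders, not this program's actual configuration.

```python
# Illustrative only: Lambda handler that issues a Redshift COPY for a newly
# landed S3 object via the boto3 "redshift-data" client. Names are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

CLUSTER_ID = "dw-cluster"    # hypothetical
DATABASE = "analytics"       # hypothetical
DB_USER = "etl_user"         # hypothetical
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy"  # placeholder


def handler(event, context):
    # S3 put events carry the bucket and key of the newly landed file.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    copy_sql = (
        f"COPY staging.raw_load FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{IAM_ROLE}' FORMAT AS CSV IGNOREHEADER 1"
    )
    resp = redshift_data.execute_statement(
        ClusterIdentifier=CLUSTER_ID,
        Database=DATABASE,
        DbUser=DB_USER,
        Sql=copy_sql,
    )
    return {"statementId": resp["Id"], "object": f"s3://{bucket}/{key}"}
```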

AWS, PostgreSQL, Python, SQL, ETL, CI/CD

Posted 5 days ago

📍 United States

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and Analytics

Requirements:
  • Master's degree in Computer Science, Data Science, engineering, mathematics, or a related quantitative field plus 3 years of experience in analytics software solutions.
  • Bachelor's degree in similar fields plus 5 years of experience is also acceptable.
  • 3 years of experience with Python and associated technologies, including Spark, AWS, S3, Java, JavaScript, and Adobe Analytics.
  • Proficiency in SQL for querying and managing data.
  • Experience in analytics programming languages such as Python (with Pandas).
  • Experience in handling large volumes of data and code management tools like Git.
  • 2 years of experience managing computer program orchestrations and using open-source management platforms like Airflow.
Responsibilities:
  • Develop, test, and orchestrate econometric, statistical, and machine learning modules.
  • Conduct unit, integration, and regression testing.
  • Create data processing systems for analytic research and development.
  • Design, document, and present process flows for analytical systems.
  • Partner with Software Engineering for cloud-based solutions.
  • Orchestrate modules via directed acyclic graphs using workflow management systems.
  • Work in an agile development environment.

AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Data engineering, Regression testing, Pandas, Spark

Posted 6 days ago

📍 U.S.

🧭 Full-Time

💸 142,771 - 225,000 USD per year

🔍 Media and analytics

Requirements:
  • Master’s degree in Computer Science, Data Science, engineering, mathematics or a related quantitative field plus 3 years of experience in delivering analytics software solutions, or a Bachelor’s degree plus 5 years.
  • Must have 3 years of experience with Python, associated packages including Spark, AWS, and SQL for data management.
  • Experience with analytics programming languages, parallel processing, and code management tools like Git.
  • Two years of experience managing program orchestrations and working with open-source management platforms such as Airflow.
  • Modern analytics programming: developing, testing and orchestrating econometric, statistical and machine learning modules.
  • Unit, integration and regression testing.
  • Understanding of the deployment of econometric models and learning methods.
Responsibilities:
  • Create data processing systems for analytics research and development.
  • Design, write, and test modules for Nielsen analytics cloud-based platforms.
  • Extract data using SQL and orchestrate modules via workflow management platforms.
  • Design, document, and present process flows for analytical systems.
  • Partner with software engineering to build analytical solutions in an agile environment.

AWS, Python, SQL, Apache Airflow, Git, Machine Learning, Spark

Posted 6 days ago

📍 United States of America

🧭 Full-Time

💸 110,000 - 160,000 USD per year

🔍 Insurance industry

🏢 Company: Verikai_External

Requirements:
  • Bachelor's degree or above in Computer Science, Data Science, or a related field.
  • At least 5 years of relevant experience.
  • Proficient in SQL, Python, and data processing frameworks such as Spark.
  • Hands-on experience with AWS services including Lambda, Athena, Dynamo, Glue, Kinesis, and Data Wrangler.
  • Expertise in handling large datasets using technologies like Hadoop and Spark.
  • Experience working with PII and PHI under HIPAA constraints.
  • Strong commitment to data security, accuracy, and compliance.
  • Exceptional ability to communicate complex technical concepts to stakeholders.
Responsibilities:
  • Design, build, and maintain robust ETL processes and data pipelines for large-scale data ingestion and transformation.
  • Manage third-party data sources and customer data to ensure clean and deduplicated datasets (see the de-duplication sketch after this list).
  • Develop scalable data storage systems using cloud platforms like AWS.
  • Collaborate with data scientists and product teams to support data needs.
  • Implement data validation and quality checks, ensuring accuracy and compliance with regulations.
  • Integrate new data sources to enhance the data ecosystem and document data strategies.
  • Continuously optimize data workflows and research new tools for the data infrastructure.
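
To illustrate the clean-and-deduplicate responsibility above under HIPAA-style constraints, here is a small pandas sketch that replaces direct identifiers with a salted hash before de-duplicating records. The column names and salt handling are assumptions for the sketch; a real deployment needs proper key management and governance review.

```python
# Illustrative only: pseudonymize PII columns and de-duplicate across vendor
# feeds. Column names and salt handling are hypothetical.
import hashlib

import pandas as pd


def hash_identifier(value: str, salt: str) -> str:
    """Stable salted hash so downstream joins never touch raw PII."""
    return hashlib.sha256((salt + value.strip().lower()).encode("utf-8")).hexdigest()


def prepare_customer_records(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    # Build a pseudonymous key from direct identifiers, then drop the raw PII.
    df = df.copy()
    df["person_key"] = (df["first_name"] + "|" + df["last_name"] + "|" + df["dob"]).map(
        lambda v: hash_identifier(v, salt)
    )
    df = df.drop(columns=["first_name", "last_name", "dob"])
    # De-duplicate across feeds on the pseudonymous key.
    return df.drop_duplicates(subset=["person_key"])
```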

AWS, Python, SQL, DynamoDB, ETL, Spark

Posted 7 days ago

📍 Colorado

💸 106,000 - 139,000 USD per year

🔍 Legal services

🏢 Company: Rocket Lawyer 👥 251-500 💰 $223,000,000 Debt Financing almost 4 years ago · Legal Tech, Law Enforcement, Legal

Requirements:
  • 5+ years of Python experience.
  • 3+ years leveraging technologies such as Airflow and Apache Spark.
  • Experience working with large language models, diffusion models, or other generative models.
  • Experience with MLOps tools and practices.
  • Strong understanding of data architectures and patterns.
  • Experience with containerization technologies like Docker and Kubernetes.
  • Contributions to open-source projects.
  • Experience in DataOps and MLOps implementation and support.
  • Experience in building and supporting AI/ML platforms.
Responsibilities:
  • Design, develop, and maintain robust, scalable data pipelines for processing large datasets for AI model training.
  • Perform data cleaning, normalization, transformation, and feature engineering, including handling unstructured data.
  • Build and manage data infrastructure like data lakes and warehouses, optimized for AI workloads.
  • Implement data quality checks and monitoring systems for data accuracy and consistency.
  • Contribute to MLOps best practices for data management and model deployment.
  • Work with GCP and Snowflake for data and AI offerings.
  • Optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness.

Docker, Python, Apache Airflow, GCP, Kubernetes, Snowflake, Data engineering

Posted 8 days ago
🔥 Staff Data Engineer
Posted 8 days ago

📍 United States

🧭 Full-Time

💸 170,000 - 195,000 USD per year

🔍 Healthcare

🏢 Company: Parachute Health 👥 101-250 💰 $1,000 about 5 years ago · Medical, Health Care, Software

Requirements:
  • 5+ years of relevant experience.
  • Experience in Data Engineering with Python.
  • Experience building customer-facing software.
  • Strong listening and communication skills.
  • Time management and organizational skills.
  • Proactive, driven self-starter who can work independently or as part of a team.
  • Ability to think with the 'big picture' in mind.
  • Passionate about improving patient outcomes in the healthcare space.
Responsibilities:
  • Architect solutions to integrate and manage large volumes of data across various internal and external systems.
  • Establish best practices and data governance standards to ensure that data infrastructure is built for long-term scalability.
  • Build and maintain a reporting product for external customers that visualizes data and provides tabular reports.
  • Collaborate across the organization to assess data engineering needs.

Python, ETL, Airflow, Data engineering, Data visualization

🔥 Sr. Data Engineer
Posted 8 days ago

📍 United States, Denver, Colorado, New York, New York, Dallas, Texas, Lisbon, Portugal

🧭 Full-Time

💸 190,000 - 215,000 USD per year

🔍 Healthcare

🏢 Company: Cleerly 👥 11-50 · Real Estate Investment, Personal Finance, Banking, Wealth Management, Insurance

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 5+ years of software development experience.
  • 2+ years as a Data Engineer.
  • Strong SQL developer skills including experience in Postgres, Redshift, Snowflake or similar.
  • Experience developing and maintaining Airflow DAGs.
  • Excellent knowledge of Python, TypeScript, or other programming languages.
  • Experience implementing and maintaining CI/CD solutions.
  • Experience working with version control systems such as GitHub.
  • Strong written and verbal communication skills in English.
Responsibilities:
  • Contribute to all facets of the Cleerly data platform, focusing on scale, performance, and reliability.
  • Work closely with business intelligence, data science, and operations teams, as well as interdepartmental stakeholders.
  • Build Airflow DAGs for data transformation and create normalized data structures to ingest disparate client data (a normalization sketch follows this list).
  • Contribute to CI/CD pipeline and capabilities, build validation and training data sets in coordination with data scientists.
  • Design methods for searching and storing large amounts of imaging data at scale, and generate aggregate reporting tables.
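
As a sketch of the "normalized data structures for disparate client data" responsibility, the snippet below maps per-client column names onto one canonical schema before an Airflow DAG loads the data. The client mappings and canonical columns are hypothetical, not Cleerly's real schema.

```python
# Illustrative only: coerce disparate client extracts into one canonical shape
# before ingestion. Mappings and column names are hypothetical.
import pandas as pd

CANONICAL_COLUMNS = ["patient_id", "study_date", "site_code"]

CLIENT_COLUMN_MAPS = {
    "client_a": {"PatientID": "patient_id", "StudyDate": "study_date", "Site": "site_code"},
    "client_b": {"pt_id": "patient_id", "scan_dt": "study_date", "facility": "site_code"},
}


def normalize_client_extract(df: pd.DataFrame, client: str) -> pd.DataFrame:
    mapping = CLIENT_COLUMN_MAPS[client]
    out = df.rename(columns=mapping)
    missing = [c for c in CANONICAL_COLUMNS if c not in out.columns]
    if missing:
        raise ValueError(f"{client} extract missing canonical columns: {missing}")
    return out[CANONICAL_COLUMNS]
```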

Python, SQL, Snowflake, Airflow, Postgres, CI/CD
