Apache Airflow Jobs

Find remote positions requiring Apache Airflow skills. Browse opportunities where you can apply your expertise and grow your career.

49 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ CA, WA, NY, NJ, CT, all other U.S. states

🧭 Full-Time

πŸ’Έ 200000.0 - 275000.0 USD per year

πŸ” Financial Technology

  • 8+ years of experience designing, developing, and launching backend systems using Python or Kotlin.
  • Extensive experience with highly available distributed systems utilizing AWS, MySQL, Spark, and Kubernetes.
  • Experience with online, real-time ML infrastructure like model servers or feature stores.
  • Experience developing offline environments for large-scale data analysis and model training using Spark, Kubeflow, Ray, and Airflow.
  • Experience delivering major system features and writing high quality code.
  • Comfortable navigating from low-level language idioms to large system architecture.
  • Skilled at gathering feedback, with strong communication skills.
  • Bachelor's degree in a related field or equivalent practical experience.

  • Responsible for setting technical strategy for the team on a year-long time scale and linking it with business-impacting projects.
  • Collaborate across teams in the ML development lifecycle with machine learning engineers, platform engineers, and product management.
  • Act as a force-multiplier, defining and advocating for technical solutions and operational processes.
  • Ensure team operations and availability through monitoring, triage rotations, and testing.
  • Foster a culture of quality and ownership by setting standards and advocating beyond the team.
  • Develop talent by providing feedback, guidance, and leading by example.
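
Since this role centers on orchestrating offline training with Airflow, here is a minimal sketch of the kind of DAG involved, assuming Airflow 2.4+; the dag_id, schedule, and train_model placeholder are illustrative, not from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    # Placeholder: a real task would submit a Spark/Kubeflow/Ray training job.
    print(f"Training model for logical date {context['ds']}")


with DAG(
    dag_id="daily_model_training",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    PythonOperator(task_id="train_model", python_callable=train_model)
```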

AWS · Python · Apache Airflow · Kotlin · Kubeflow · Kubernetes · MySQL · Spark

Posted 3 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 120000.0 - 130000.0 USD per year

πŸ” Rewards and loyalty app

🏒 Company: Fetch

  • Proficient in SQL, with an understanding of the difference between SQL that works and SQL that performs.
  • Have worked with data modeling and orchestration tools.
  • Experience with relational (SQL), non-relational (NoSQL), and/or object data stores (e.g., Snowflake, MongoDB, S3, HDFS, Postgres, Redis, DynamoDB).
  • Solid understanding of ETL vs. ELT processes, data warehouses, and business intelligence tools.
  • Prior experience clearly communicating about data with internal and external customers.
  • Highly motivated to work autonomously, managing multiple work streams.
  • Interest in building and experimenting with different tools and tech, sharing learnings with the broader organization.
  • Experience developing and maintaining DBT or Airflow in production environments.
  • Experience programmatically deploying cloud resources on AWS, Azure, or GCP.
  • Implemented data quality, data governance, or disaster recovery initiatives.
  • Proficient in at least one imperative programming language (e.g., Python).

  • Model and analyze data utilizing SQL best practices for OLAP / OLTP query and database performance.
  • Leverage Data Build Tool (DBT), Snowflake, Airflow, AWS infrastructure, CI/CD, testing, and engineering best practices.
  • Generate innovative approaches to datasets with millions of daily active users and terabytes of data.
  • Translate business requirements for near-real-time actionable insights into data models and artifacts.
  • Communicate findings clearly both verbally and in writing to a broad range of stakeholders.
  • Perform administrative duties for Snowflake, Tableau, and DBT/Airflow infrastructure.
  • Test, monitor, and report on data health and data quality.
  • Lead data documentation and data discovery initiatives.
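
As a rough illustration of the DBT-plus-Airflow stack this posting names, the sketch below runs dbt build steps from Airflow via the BashOperator; the project path and schedule are assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_elt",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    # Build models in the warehouse, then run dbt's data tests against them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test
```

Separating the run and test tasks keeps failed data tests visible in the DAG rather than silently shipping broken models.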

AWS · Python · SQL · Apache Airflow · Business Intelligence · ETL · Snowflake

Posted 3 days ago
Apply

πŸ“ Ukraine

  • 4+ years of experience in software/data engineering, data architecture, or a related field.
  • Strong programming skills in at least one language: Java, Scala, Python, or Go.
  • Experience with SQL and data modeling.
  • Hands-on experience with Apache Big Data frameworks such as Hadoop, Hive, Spark, Airflow, etc.
  • Proficiency in AWS cloud services.
  • Strong understanding of distributed systems, large-scale data processing, and data storage/retrieval.
  • Experience with data governance, security, and compliance is a plus.
  • Familiarity with CI/CD and DevOps practices is a plus.
  • Excellent communication and problem-solving skills.

  • Design, build, and maintain scalable and reliable data storage solutions.
  • Optimize and scale the platform for increasing data volumes and user requests.
  • Improve data storage, retrieval, query performance, and overall system performance.
  • Collaborate with data scientists, analysts, and stakeholders for tailored solutions.
  • Ensure proper integration of data pipelines, analytics tools, and ETL processes.
  • Troubleshoot and resolve platform issues in a timely manner.
  • Develop monitoring and alerting systems to ensure platform reliability.
  • Participate in code reviews and design discussions.
  • Evaluate new technologies to enhance the data platform.
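
To make the large-scale data processing requirement concrete, here is a minimal PySpark sketch of a batch aggregation over raw events on S3; bucket paths and column names are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

# Hypothetical raw event data landed on S3 by an upstream pipeline.
events = spark.read.parquet("s3a://example-raw-bucket/events/")

# Aggregate to one row per user per day, then write back partitioned
# by date so downstream queries can prune partitions.
daily = events.groupBy(
    "user_id", F.to_date("event_ts").alias("event_date")
).agg(F.count("*").alias("event_count"))

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated-bucket/daily_event_counts/"
)
```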

AWS · Python · SQL · Apache Airflow · Apache Hadoop · Kafka · Kubernetes · Data engineering · Scala · Data modeling

Posted 3 days ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 216700.0 - 303400.0 USD per year

πŸ” Internet and online communities

  • 5+ years of professional experience as a Machine Learning Engineer with a focus on data-intensive systems.
  • Strong theoretical knowledge of machine learning and statistical concepts.
  • Proven experience with large-scale data processing technologies such as Kubeflow, Airflow, BigQuery, GraphQL, Kafka, and Redis.
  • Proficiency in SQL to derive insights from large datasets.
  • Excellent communication and teamwork skills.

  • Work closely with engineers to enhance the experience of new and low-signal users by applying advanced ML techniques.
  • Train, evaluate, and deploy sophisticated machine learning models to improve user experiences.
  • Participate in the full software development cycle including design, development, QA, deployment, experimentation, analysis, and iteration.
  • Collaborate with other ML teams and disciplines to find technical solutions to complex challenges.
  • Handle large-scale data, models, pipelines, and product integration.
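
As one concrete instance of the Redis piece of this stack, the sketch below stores and serves per-user features for online model scoring; the key scheme and feature names are assumptions, not the team's actual design.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def write_features(user_id: str, features: dict) -> None:
    # Store per-user features as a Redis hash, e.g. from a batch pipeline.
    r.hset(f"features:user:{user_id}", mapping=features)

def read_features(user_id: str) -> dict:
    # Fetch features at inference time; empty dict if the user is unseen.
    return r.hgetall(f"features:user:{user_id}")

write_features("u123", {"days_active": 3, "posts_7d": 1})
print(read_features("u123"))
```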

Python · SQL · Apache Airflow · Kafka · Kubeflow · Machine Learning · Redis

Posted 7 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 178000.0 - 228000.0 USD per year

πŸ” Financial Technology

  • 8+ years of experience as a machine learning engineer, with a relevant PhD counting for up to 2 years.
  • Experience developing ML models at scale from inception to business impact.
  • Proficiency in various ML techniques like Generalized Linear Models, Gradient Boosting, Deep Learning.
  • Strong engineering skills in Python and data manipulation using SQL.
  • Experience with large-scale distributed systems like Spark or Ray.
  • Familiarity with open-source tools such as scikit-learn, pandas, NumPy, XGBoost, PyTorch, Kubeflow.
  • Experience with Kubernetes, Docker, and Airflow is a plus.
  • Excellent written and oral communication skills.

  • Use proprietary and third-party data to develop ML models predicting default likelihood.
  • Partner with engineering teams to build model systems for training, decisioning, and monitoring.
  • Research and prototype innovative solutions for credit decisioning.
  • Implement and scale essential data pipelines and features for production models.
  • Collaborate across teams to define requirements for new products.
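
A hedged sketch of the gradient-boosting technique this posting lists, using scikit-learn on synthetic data; the features and default flag are stand-ins, not the company's actual credit data or models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                           # stand-in borrower features
y = (X[:, 0] + rng.normal(size=5000) > 1.2).astype(int)   # stand-in default flag

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]  # predicted default likelihood
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```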

Docker · Python · SQL · Apache Airflow · Kubeflow · Kubernetes · Machine Learning · PyTorch · Spark

Posted 13 days ago
Apply

πŸ“ Brazil

🧭 Full-Time

πŸ” Technology in government affairs

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5+ years in data engineering and development of data-driven products.
  • Expertise in building data pipelines and architectures, including AWS services such as EC2, EMR, RDS, and Redshift.
  • Proficient in big data tools (Hadoop, Spark, Kafka) and machine learning frameworks (TensorFlow, PyTorch).
  • 3+ years of experience with Python.
  • Deep knowledge of SQL and NoSQL databases, and workflow management tools.

  • Architect and implement scalable RAG data pipelines for legislative bills, social media, documents, and testimonies.
  • Design data pipelines for real-time processing and analysis.
  • Develop data cleansing and transformation processes for AI products.
  • Oversee cloud-based deployments in AWS, focusing on performance, security, and cost-efficiency.
  • Innovate data architecture to meet dynamic product needs.
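
Since the role centers on RAG ingestion pipelines, here is a minimal, self-contained sketch of the chunk-embed-retrieve flow; embed() is a random-projection placeholder for a real embedding model, and the fixed-width chunking is a simplifying assumption.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-width chunking; real pipelines split on structure.
    return [text[i : i + size] for i in range(0, len(text), size)]

bill_text = "Section 1. This act may be cited as ..."  # e.g. a legislative bill
chunks = chunk(bill_text)
index = np.stack([embed(c) for c in chunks])  # one embedding row per chunk

def retrieve(query: str, k: int = 3) -> list[str]:
    # Nearest neighbors by dot product, for brevity.
    scores = index @ embed(query)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("short title")[0][:80])
```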

AWS · Python · SQL · Apache Airflow · ETL · Hadoop · Kafka · Machine Learning · PyTorch · NoSQL · Spark · TensorFlow

Posted 18 days ago
Apply

πŸ“ USA

πŸ’Έ 142800.0 - 196350.0 USD per year

πŸ” Artificial Intelligence

🏒 Company: AppOmniπŸ‘₯ 101-250πŸ’° Series C about 2 years agoSaaSCloud SecurityCyber SecurityCloud ManagementSoftware

  • At least 5 years of experience in new product development, ideally with a software engineering background.
  • At least 3 years of experience in machine learning.
  • Strong programming skills in Python for production implementation.
  • Experience with libraries: Pandas, PySpark, TensorFlow, Keras, LangChain.
  • Degree in Engineering, Computer Science, Data Science, or a related field.
  • Experience in data exploration and science with large datasets.
  • Experience with machine learning algorithms in large datasets.
  • Experience in implementing Generative AI.
  • Excellent communication and collaboration skills.

  • Design and develop models using tools like LangChain, SciPy, Pandas, etc.
  • Build data pipelines for large datasets to enable model training.
  • Select appropriate algorithms for various problems.
  • Identify patterns and correlations through data modeling.
  • Research and develop data-driven solutions for business problems.
  • Analyze and interpret data for insights.
  • Clean and preprocess data, handling missing values.
  • Evaluate and test model performance with metrics.
  • Implement Generative AI where applicable.
  • Integrate AI solutions into existing systems.
  • Translate stakeholder requirements into technical solutions.
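
For the cleaning and preprocessing duties above, a short pandas sketch of imputing missing values and normalizing a numeric feature; the column names are illustrative.

```python
import pandas as pd

df = pd.DataFrame(
    {"logins": [3, None, 7, 2], "plan": ["pro", None, "free", "pro"]}
)

df["logins"] = df["logins"].fillna(df["logins"].median())  # impute numeric gaps
df["plan"] = df["plan"].fillna("unknown")                  # flag missing category
df["logins_z"] = (df["logins"] - df["logins"].mean()) / df["logins"].std()

print(df)
```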

AWS · Python · SQL · Apache Airflow · Cloud Computing · Data Analysis · GCP · Keras · Machine Learning · PyTorch · Data Science · Pandas · TensorFlow

Posted 19 days ago
Apply
🔥 Data Engineer

πŸ“ United States, Egypt, Brazil, Argentina, Spain

πŸ” Healthcare or tech industry

🏒 Company: Bask HealthπŸ‘₯ 11-50πŸ’° $759,987 Seed over 1 year agoElectronic Health Record (EHR)SaaSWellnessHealth CareHome Health Care

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field.
  • 3-5+ years of professional experience in data engineering, data infrastructure, or similar roles, preferably in the healthcare or tech industry.
  • Proven ability to design and implement scalable, reliable, and secure data pipelines in production environments.
  • Advanced proficiency in SQL and programming languages such as JavaScript or TypeScript.
  • Hands-on experience with data pipeline orchestration tools (e.g., Apache Airflow, dbt, Luigi).
  • Expertise in ETL/ELT processes, including designing and maintaining data warehouses or lakes.
  • Familiarity with cloud platforms like AWS, GCP, or Azure.
  • Strong understanding of data modeling and schema design for large-scale analytics.
  • Experience working with streaming data frameworks.

  • Data Engineers at Bask Health build and maintain the robust data infrastructure that powers our innovative telehealth platform.
  • Design pipelines and models for product analysis and craft patient-facing data products.
  • Create scalable, efficient, and reliable data solutions to enhance the quality of care and drive meaningful healthcare insights.
  • Contribute to informed decision-making across teams and ensure innovation in healthcare experiences for patients and providers worldwide.
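
One concrete pattern behind reliable, secure pipelines is a data-quality gate that fails a run before bad rows reach downstream consumers. The sketch below uses SQLite as a stand-in warehouse; the table, column, and threshold are assumptions.

```python
import sqlite3

# Stand-in warehouse table with one deliberately broken row.
conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE visits (patient_id TEXT, visit_ts TEXT);"
    "INSERT INTO visits VALUES ('p1', '2024-01-01'), (NULL, '2024-01-02');"
)

total, nulls = conn.execute(
    "SELECT COUNT(*), SUM(patient_id IS NULL) FROM visits"
).fetchone()

null_rate = nulls / total
ok = null_rate <= 0.01  # hypothetical 1% tolerance for missing patient IDs
print(f"patient_id null rate {null_rate:.1%}: {'PASS' if ok else 'FAIL'}")
```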

AWS · SQL · Apache Airflow · ETL · GCP · JavaScript · TypeScript · Azure · Data modeling

Posted 20 days ago
Apply

πŸ“ U.S.

πŸ’Έ 120000.0 - 180000.0 USD per year

πŸ” Real estate technology

  • Proficiency in SQL.
  • Proficiency in any statistical programming language (e.g., Python or R).
  • Hands-on experience building and managing ETL and reverse ETL (rETL) workflows.
  • Strong understanding of data engineering principles, including ETL/ELT processes, data modeling, and data warehousing.
  • Experience with workflow orchestration tools, such as Airflow.
  • Experience with ETL/ELT tools, such as Fivetran.
  • Experience with cloud data warehouse and transformation tools (e.g., Snowflake, DBT).

  • Design, develop, and maintain scalable data pipelines and ELT processes for our data lake/data warehouse.
  • Implement data orchestration tools to automate and streamline data workflows.
  • Work with AI Solutions team to enhance processes throughout the organization.
  • Integrate machine learning models and data pipelines into analytics solutions.
  • Build and manage reverse ETL workflows to business systems such as the CRM.
  • Support development of predictive models for strategic and operational initiatives.
  • Develop and maintain documentation for data processes and systems.
  • Build production-ready data models and schemas using DBT.
  • Collaborate on data governance policies and best practices.
  • Manage analytics tech stack for dashboards and reports.
  • Support analysts in developing dashboards and ad hoc reports.
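
A rough sketch of the reverse ETL responsibility above: read a modeled table from the warehouse and push scores to a CRM over HTTP. The Snowflake credentials, table name, and CRM endpoint are all placeholders; a production job would batch requests and handle retries and auth properly.

```python
import requests
import snowflake.connector  # assumes the Snowflake Python connector is installed

conn = snowflake.connector.connect(
    account="example_account",  # placeholder credentials
    user="etl_user",
    password="***",
    database="ANALYTICS",
)
rows = conn.cursor().execute(
    "SELECT contact_id, lead_score FROM marts.contact_scores"  # hypothetical model
).fetchall()

for contact_id, lead_score in rows:
    # Hypothetical CRM endpoint; real systems usually take batched upserts.
    requests.patch(
        f"https://crm.example.com/api/contacts/{contact_id}",
        json={"lead_score": lead_score},
        timeout=10,
    )
```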

Python · SQL · Apache Airflow · ETL · Machine Learning · Snowflake · Data engineering · Data modeling

Posted 21 days ago
Apply

πŸ“ United States, El Salvador, Ecuador

πŸ” Enterprise cloud industry

🏒 Company: Wursta Corporation

  • Fluency in Spanish and/or experience with Latin American-based customers.
  • 4+ years' experience as a Cloud Engineer/Architect.
  • Experience architecting, deploying, and managing cloud infrastructure.
  • Advanced knowledge of software and network configurations.
  • Experience with infrastructure-as-code, Kubernetes, and containerization processes.
  • Experience in configuring logging, monitoring, and alerting systems.
  • Experience implementing client-to-site, site-to-site, and high availability VPNs.
  • Certification in one or more cloud platforms (AWS, GCP, Azure) preferred.

  • Develop and maintain strong client relationships to understand their existing architecture.
  • Collaborate on cloud migration strategies, including implementation plans and readiness assessments.
  • Spearhead technical discovery, solution design, and implementation processes.
  • Develop and manage thorough technical documentation during all project phases.
  • Create and provide cloud architecture diagrams for presales and project implementations.
  • Standardize solutions to optimize functionality, safety, and compliance.
  • Mentor and cross-train colleagues to foster a collaborative and knowledgeable team.
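
As a small illustration of the logging, monitoring, and alerting configuration this role mentions, the sketch below creates a CloudWatch CPU alarm with boto3; the instance ID, SNS topic ARN, and thresholds are illustrative.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                  # evaluate 5-minute averages
    EvaluationPeriods=2,         # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)
```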

AWS · Apache Airflow · ETL · GCP · Kafka · Kubernetes · Azure · Spark · CI/CD · DevOps · Terraform

Posted 22 days ago
Apply
Showing 10 of 49 jobs.