Airflow Jobs

Find remote positions requiring Airflow skills. Browse opportunities where you can apply your expertise and grow your career.

Airflow
119 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 185,800 - 322,000 USD per year

πŸ” Technology, Internet Services

🏒 Company: Reddit πŸ‘₯ 1001-5000 πŸ’° $410,000,000 Series F (over 3 years ago) πŸ«‚ Last layoff over 1 year ago Β· News, Content, Social Network, Social Media

  • 3-10+ years of industry experience as a machine learning engineer or software engineer developing backend/infrastructure at scale.
  • Experience building machine learning models using PyTorch or TensorFlow.
  • Experience with search & recommender systems and pipelines.
  • Production-quality code experience with testing, evaluation, and monitoring using Python and Golang.
  • Familiarity with GraphQL, REST, HTTP, Thrift, or gRPC and design of APIs.
  • Experience developing applications with large scale data stacks such as Kubeflow, Airflow, BigQuery, Kafka, Redis.

  • Develop and enhance Search Retrievals and Ranking models.
  • Design and build pipelines and algorithms for user answers.
  • Collaborate with product managers, data scientists, and platform engineers.
  • Develop and test new pipeline components and deploy ML models.
  • Ensure high uptime and low latency for search systems.

GraphQL, Python, Kafka, Kubeflow, Machine Learning, PyTorch, Airflow, Redis, TensorFlow

Posted 3 days ago
Apply

πŸ“ New Zealand, Australia

🧭 Full-Time

πŸ” Technology and Consulting

  • Hands-on experience in modern data platform architecture and engineering.
  • Strong experience with platforms like Snowflake, Databricks, and Azure.
  • Familiarity with data processing tools such as HDFS, Spark, and Kafka.
  • Experience with ETL/ELT tools such as dbt and Informatica.
  • Understanding of virtualization technologies like AWS EC2 and Docker.
  • Strong expertise in relational databases and SQL.
  • Knowledge of data modeling and structures.
  • Experience in data security engineering practices.

  • Lead and grow the data practice.
  • Architect, design, and engineer modern data platforms.
  • Provide consulting and technical support for client projects.
  • Present technical information to diverse audiences.

Docker, Python, SQL, GCP, Java, Kafka, Kubernetes, Snowflake, TypeScript, Tableau, Airflow, Azure, Go, Spark, Data modeling

Posted 6 days ago
Apply

πŸ“ US

🧭 Full-Time

πŸ’Έ 206,700 - 289,400 USD per year

πŸ” Social Media / Technology

  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems and building clean, maintainable code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and patterns for data governance.
  • Experience with data workflows, data modeling, and engineering.
  • Experience in data visualization and dashboard design using tools like Looker, Tableau, and D3.
  • Deep understanding of relational and MPP database designs.
  • Proven track record of cross-functional collaboration and excellent communication skills.

  • Act as the analytics engineering lead within the Ads DS team, contributing to data science data quality and automation initiatives.
  • Work on ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal team use, streamlining analysis and reporting processes.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide technical guidance and mentorship to data analysts.

Python, SQL, ETL, Airflow, Spark, Scala, Data visualization, Data modeling

Posted 11 days ago
Apply

πŸ“ Armenia

🧭 Full-Time

πŸ’Έ 90,000 - 110,000 USD per year

πŸ” Ecommerce

🏒 Company: Constructor

  • Hands-on experience building and owning services for production with Python.
  • Experience with monitoring and quality assurance for services affecting customer experience.
  • Readiness to dive into the experimental/data analytics domain.
  • Curiosity about product usage beyond technical aspects.
  • Experience with PySpark or ETL pipelines is favorable (roughly 5-10% of the role).
  • Experience building analytical services or experiment platforms is a plus.

  • Owning the whole experimentation process from traffic splitting to revealing experiment results.
  • Developing the internal experiments platform for ML/DS teams.
  • Implementing user-facing features for running and analyzing the results of experiments.
  • Improving the performance and scalability of services.
  • Integrating state-of-the-art approaches for A/B tests to enhance trustworthiness.
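
As a toy illustration of the statistics behind "revealing experiment results" above, here is a minimal sketch of a two-proportion z-test comparing conversion rates between control and treatment groups. The function and the example numbers are hypothetical, not taken from the posting.

```python
from math import sqrt

from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a, n_a: conversions and visitors in the control group
    conv_b, n_b: conversions and visitors in the treatment group
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * norm.sf(abs(z))                          # two-sided p-value

# Hypothetical example: 480/10,000 control vs. 540/10,000 treatment conversions.
z, p = two_proportion_ztest(480, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")                         # z β‰ˆ 1.93, p β‰ˆ 0.054
```

A real experiments platform would layer traffic splitting, sequential testing, and guardrail metrics on top of this basic comparison.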

AWS, PostgreSQL, Python, SQL, Flask, Airflow, Data science, FastAPI, A/B testing

Posted 18 days ago
Apply
πŸ”₯ Data Engineer

πŸ“ Mexico

πŸ” Software and digital transformation

🏒 Company: DaCodes

  • Bachelor's degree or equivalent in Information Systems or related field.
  • Knowledge of software engineering practices including version control (Git), unit testing, CI/CD, and API design.
  • Experience with handling and processing large volumes of data.
  • Strong knowledge of data scraping techniques.
  • Extensive understanding of data architectures like data lakes, data warehouses, and data lakehouses.
  • Database expertise in modeling fact tables and dimension tables.
  • Familiarity with modern OLAP structures.
  • Proficient in Redshift, BigQuery, Snowflake, or similar technologies.
  • Strong skills in performance tuning and indexing.
  • Experience with ETL tools such as Pandas, Apache Beam, Apache Flink, Kafka, and PySpark.
  • Familiarity with cloud services, especially AWS and Google Cloud.
  • Solid skills in data orchestration tools like Airflow, NiFi, or similar.
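
To ground the orchestration requirement just above, here is a minimal Airflow DAG sketch with a daily extract-then-load dependency. The DAG id, task names, and callables are hypothetical placeholders, not part of the job description.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extraction and loading logic.
def extract_orders():
    print("pull raw orders from the source system")

def load_orders():
    print("load transformed orders into the warehouse")

with DAG(
    dag_id="orders_daily",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,                      # do not backfill past runs
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)

    extract >> load                     # load runs only after extract succeeds
```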

  • Design and develop pipelines for data extraction and ingestion.
  • Manage data modeling and performance testing for final data consumption layers.
  • Define best practices for ETLs, temporary tables, stored procedures, and data warehouse integrations.
  • Collaborate with developers, analysts, and data scientists for data preparation.
  • Identify areas for improvement and develop innovative solutions for scalability and performance.
  • Proactively resolve data process issues to ensure effectiveness and efficiency.

AWS, ETL, GCP, Git, Snowflake, Airflow, Apache Kafka, Data engineering, Pandas, Spark, CI/CD, Data modeling

Posted 21 days ago
Apply

πŸ“ United States

πŸ’Έ 170,000 - 190,000 USD per year

πŸ” Consumer smart home technology

🏒 Company: Sense πŸ‘₯ 251-500 πŸ’° $50,000,000 Series D (about 3 years ago) Β· Internet, Staffing Agency, Human Resources, SaaS, Recruiting, Professional Services, Software

  • 5+ years of experience designing, building, and maintaining production AI/ML applications.
  • Strong programming skills in languages such as Python and C.
  • Experience with ML libraries including PyTorch, scikit-learn, and TensorFlow.
  • Cloud experience, preferably with AWS.
  • Experience with frameworks such as MLflow, Kubeflow, Airflow, and Docker.
  • Strong communication and collaboration skills.
  • Must be authorized to work in the U.S.

  • Design, build, and maintain model training and serving infrastructure for CI/CD (see the tracking sketch after this list).
  • Collaborate with data scientists and data engineers to improve productivity by increasing automation for model training and experimentation workflows.
  • Provide technical leadership to data engineers.
  • Support annotation and ground truth collection through tool improvement and automation.
  • Champion best practices for data science software development processes and serve as architect for shared data science software libraries.
  • Work with the infrastructure guild to upgrade and patch libraries; maintain and optimize pipelines.
  • Ensure data science production software is robust and scalable.
  • Identify and implement cost savings; monitor and manage costs.
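
As a small illustration of the experiment-tracking side of the model training work referenced above, here is a minimal MLflow logging sketch. The experiment name, parameters, and metric values are hypothetical placeholders.

```python
import mlflow

# Hypothetical experiment name, purely for illustration.
mlflow.set_experiment("demand-forecast-dev")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)   # record hyperparameters for the run
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse", 0.42)           # record an evaluation metric
```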

AWS, Docker, Python, Kubeflow, MLflow, PyTorch, Airflow, TensorFlow, CI/CD

Posted 25 days ago
Apply

πŸ“ United States, Latin America, India

🧭 Full-Time

πŸ” Data Solutions and Cloud Services

  • 6+ years as a hands-on Solutions Architect and/or Data Engineer designing and implementing data solutions.
  • 2+ years of consulting experience managing projects for external customers.
  • Proven ability to multitask, prioritize, and deliver on multiple projects.
  • Expertise in programming languages such as Java, Python, and/or Scala.
  • Experience with core cloud data platforms including Snowflake, AWS, Azure, Databricks, and GCP.
  • Strong SQL skills, including writing, debugging, and optimizing queries.
  • Client-facing communication and presentation skills.

  • Design and implement end-to-end technical data solutions, ensuring performance, security, scalability, and robust data integration.
  • Lead and manage teams of Senior and Junior Data Engineers through coaching and mentoring.
  • Collaborate with client stakeholders, technology partners, and cross-functional teams for successful project delivery.
  • Create and deliver detailed presentations and solution documentation.

AWS, Python, SQL, ETL, GCP, Java, Kafka, Snowflake, Airflow, Azure, Spark, Documentation, Scala

Posted 29 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” AI-driven narrative intelligence.

🏒 Company: Blackbird.AI πŸ‘₯ 51-100 πŸ’° 6 months ago Β· Artificial Intelligence (AI), Security, Machine Learning, Enterprise Software, Intrusion Detection

  • Bachelor's degree in Computer Science or a related field.
  • Minimum of 5 years of experience in data engineering and building data platforms.
  • Minimum of 2 years of professional experience in Machine Learning or a closely related field.
  • Proficiency in databases and query optimization (PostgreSQL, ElasticSearch, MongoDB, Redis, Druid).
  • Expertise in Kafka and Airflow, and experience in big data processing systems like Apache Spark, Flink, or Beam.
  • Expert-level Python coding skills.
  • Strong skills in build automation, continuous integration, and deployment (CI/CD) tools.

  • Design and implement real-time distributed data processing systems analyzing public data and detecting emergent threats.
  • Oversee the gathering and annotating of large custom datasets for classification and related challenges.
  • Lead the optimization of ETL processes for various data formats from social media, news, and web sources.
  • Develop and manage the database architecture for a real-time streaming analytics platform.
  • Spearhead build automation, continuous integration, deployment, and performance optimization efforts.

AWS, PostgreSQL, Python, ElasticSearch, ETL, Kafka, Machine Learning, MongoDB, Airflow, Data engineering, Redis, NoSQL, CI/CD

Posted about 1 month ago
Apply

πŸ“ Poland

πŸ” Financial services

🏒 Company: Capco πŸ‘₯ 101-250 Β· Electric Vehicle, Product Design, Mechanical Engineering, Manufacturing

  • Strong cloud provider experience, particularly with GCP
  • Hands-on experience using Python; Scala and Java are nice to have
  • Experience in data and cloud technologies such as Hadoop, Hive, Spark, PySpark, and Dataproc
  • Hands-on experience with schema design using semi-structured and structured data structures
  • Experience using messaging technologies – Kafka, Spark Streaming
  • Strong experience in SQL
  • Understanding of containerisation (Docker, Kubernetes)
  • Experience designing, building, and maintaining CI/CD pipelines
  • Enthusiasm to pick up new technologies as needed

  • Work alongside clients to interpret requirements and define industry-leading solutions
  • Design and develop robust, well-tested data pipelines
  • Demonstrate and help clients adhere to best practices in engineering and SDLC
  • Lead and mentor the team of junior and mid-level engineers
  • Contribute to security designs and have advanced knowledge of key security technologies
  • Support internal Capco capabilities by sharing insight, experience and credentials

Docker, Python, SQL, ETL, GCP, Git, Hadoop, Kafka, Kubernetes, Snowflake, Airflow, Spark, CI/CD

Posted about 1 month ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 200,000 - 240,000 USD per year

πŸ” Retail ecommerce

🏒 Company: Bluecore, Inc.

  • 3+ years as a tech lead for a high-performing data team.
  • 5+ years working in data pipeline orchestration & data warehouse design.
  • 3+ years using dbt, with extensive experience in orchestration and performance optimizations.
  • 5+ years architecting & developing cloud data warehouses.
  • Advanced knowledge of Python for analytics.
  • Expert-level knowledge of SQL.
  • Demonstrated success in managing the modern data stack.

  • Define and execute Bluecore’s long-term data strategy.
  • Lead development of complex transformation projects.
  • Architect and develop data models and ETL/ELT.
  • Evaluate and integrate new technologies.
  • Lead performance and cost-saving initiatives.
  • Mentor a team of analytics engineers.
  • Ensure data quality and establish governance standards.

Python, SQL, ETL, GCP, Snowflake, Airflow, Data modeling

Posted about 1 month ago
Apply
Showing 10 of 119 jobs