Apache Airflow Job Salaries

Find salary information for remote positions requiring Apache Airflow skills. Make data-driven decisions about your career path.

Apache Airflow

Median high-range salary for jobs requiring Apache Airflow:

$180,000

This analysis is based on salary ranges collected from 65 job descriptions that match the search and allow working remotely. Choose a country to narrow down the search and view statistics exclusively for remote jobs available in that location.

The Median Salary Range is $135,000 - $180,000

  • 25% of job descriptions advertised a maximum salary above $204,250.
  • 5% of job descriptions advertised a maximum salary above $305,425.
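The figures above (the median salary range, plus the 75th and 95th percentiles of advertised maximums) can be reproduced from a list of advertised (min, max) ranges. A minimal sketch using Python's `statistics` module and made-up numbers, not the real dataset:

```python
from statistics import median, quantiles

# Hypothetical advertised ranges (min, max) in USD -- illustrative only.
ranges = [(135_000, 180_000), (120_000, 160_000), (150_000, 210_000),
          (100_000, 140_000), (140_000, 200_000), (160_000, 305_000),
          (130_000, 175_000), (145_000, 190_000)]

lows = [lo for lo, _ in ranges]
highs = [hi for _, hi in ranges]

# "Median Salary Range" = median of advertised minimums to median of maximums.
median_range = (median(lows), median(highs))

# "25% advertised a maximum above X" is the 75th percentile of the maximums;
# "5% above Y" is the 95th. quantiles(n=20) returns 19 cut points at 5% steps.
cuts = quantiles(highs, n=20, method="inclusive")
p75, p95 = cuts[14], cuts[18]  # 15th cut point = 75%, 19th = 95%

print(median_range, p75, p95)
```

Note that the median range pairs the median minimum with the median maximum, which need not come from the same job posting.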

Skills and Salary

Specific skills can have a substantial impact on salary ranges for jobs that match these search preferences. In this dataset, the skills that correlate most strongly with higher salaries are Data engineering, AWS, and Python. Employers often prioritize candidates who possess these skills because they contribute directly to the organization's success, so demonstrating them can boost both competitiveness in the job market and earning potential.

  1. Data engineering

    63% of jobs mention Data engineering as a required skill. The Median Salary Range for these jobs is $150,000 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $221,250.
    • 5% of job descriptions advertised a maximum salary above $328,000.
  2. AWS

    60% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $144,000 - $185,000.

    • 25% of job descriptions advertised a maximum salary above $213,000.
    • 5% of job descriptions advertised a maximum salary above $332,000.
  3. Python

    89% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $132,500 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $200,000.
    • 5% of job descriptions advertised a maximum salary above $307,560.
  4. SQL

    78% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $135,000 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $204,750.
    • 5% of job descriptions advertised a maximum salary above $302,730.
  5. ETL

    51% of jobs mention ETL as a required skill. The Median Salary Range for these jobs is $130,000 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $201,000.
    • 5% of job descriptions advertised a maximum salary above $277,275.
  6. Data modeling

    42% of jobs mention Data modeling as a required skill. The Median Salary Range for these jobs is $130,000 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $204,750.
    • 5% of job descriptions advertised a maximum salary above $363,268.
  7. CI/CD

    38% of jobs mention CI/CD as a required skill. The Median Salary Range for these jobs is $144,000 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $200,000.
    • 5% of job descriptions advertised a maximum salary above $289,614.
  8. Kubernetes

    34% of jobs mention Kubernetes as a required skill. The Median Salary Range for these jobs is $140,000 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $200,000.
    • 5% of job descriptions advertised a maximum salary above $361,382.
  9. Snowflake

    45% of jobs mention Snowflake as a required skill. The Median Salary Range for these jobs is $130,000 - $170,000.

    • 25% of job descriptions advertised a maximum salary above $197,025.
    • 5% of job descriptions advertised a maximum salary above $234,100.

Industries and Salary

Industry plays a crucial role in determining salary ranges for jobs that match these search preferences. In this dataset, the industries offering the most competitive salaries are Game Development, Cybersecurity, and Software Development. Factors such as industry size, profitability, and market trends influence salary levels within each sector, so it's worth weighing industry-specific factors when evaluating potential career paths and salary expectations.

  1. Game Development

    3% of jobs are in the Game Development industry. The Median Salary Range for these jobs is $175,000 - $230,000.

    • 25% of job descriptions advertised a maximum salary above $270,000.
  2. Cybersecurity

    2% of jobs are in the Cybersecurity industry. The Median Salary Range for these jobs is $176,000 - $207,000.

  3. Software Development

    26% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $157,500 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $231,000.
    • 5% of job descriptions advertised a maximum salary above $393,495.
  4. Mental Health Technology

    3% of jobs are in the Mental Health Technology industry. The Median Salary Range for these jobs is $170,000 - $200,000.

  5. Healthcare technology

    3% of jobs are in the Healthcare technology industry. The Median Salary Range for these jobs is $170,000 - $195,000.

    • 25% of job descriptions advertised a maximum salary above $200,000.
  6. Data Engineering

    11% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $130,000 - $170,000.

    • 25% of job descriptions advertised a maximum salary above $200,000.
    • 5% of job descriptions advertised a maximum salary above $259,000.
  7. Healthcare

    8% of jobs are in the Healthcare industry. The Median Salary Range for these jobs is $150,000 - $170,000.

    • 25% of job descriptions advertised a maximum salary above $190,000.
    • 5% of job descriptions advertised a maximum salary above $205,000.
  8. Legal Services

    8% of jobs are in the Legal Services industry. The Median Salary Range for these jobs is $99,000 - $160,000.

    • 25% of job descriptions advertised a maximum salary above $162,750.
    • 5% of job descriptions advertised a maximum salary above $171,000.
  9. DataOps

    3% of jobs are in the DataOps industry. The Median Salary Range for these jobs is $130,000 - $155,000.

    • 25% of job descriptions advertised a maximum salary above $160,000.
  10. Biotechnology, Public Health

    2% of jobs are in the Biotechnology, Public Health industry. The Median Salary Range for these jobs is $115,000 - $150,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

79 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ United States, Europe

🧭 Full-Time

πŸ’Έ 180000.0 - 220000.0 USD per year

πŸ” Software Development

🏒 Company: Eppo

  • Experience with OLAP SQL queries and processing
  • Expertise with software engineering practices like test coverage and CI/CD workflows
  • Experience with Nest.js (Typescript)
  • Building and maintaining backend APIs that power internal and external systems alike
  • Write backend software using Nest.js (Typescript) to power AirFlow tasks and our front-end API

Backend Development, Software Development, SQL, Apache Airflow, Cloud Computing, Snowflake, TypeScript, API testing, Data engineering, REST API, Nest.js, CI/CD, Data modeling, NodeJS, Software Engineering

Posted 40 minutes ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 200000.0 USD per year

πŸ” Software Development

🏒 Company: GameChanger

  • 4+ years of hands-on cloud engineering experience with a focus on scaling ML/AI infrastructure in high-throughput, real-time environments.
  • Proficiency in Infrastructure-as-Code (Terraform, CloudFormation, or equivalent) to automate cloud infrastructure for ML systems.
  • Deep knowledge of AWS services, patterns and best practices for resiliency, scalability, and security.
  • Hands-on experience with container orchestration (e.g., Kubernetes, ECS) with a focus on solving real-world scalability challenges.
  • Architect and scale cloud-based infrastructure purpose-built for real-time and batch ML workloads, ensuring system efficiency and reliability.
  • Design model-serving infrastructure to handle high-throughput inference workloads, model versioning, and multi-model interactions.
  • Design observability strategies, ensuring model performance, system reliability, and infrastructure health are continuously monitored.
  • Collaborate with ML/DevOps/Backend/Security Engineers on projects, integrating new tools, improving deployment speeds, scalability, and resiliency to support their evolving roadmaps.

AWS, Docker, Python, Apache Airflow, Cloud Computing, Git, Keras, Kubernetes, Machine Learning, MLFlow, Numpy, Algorithms, API testing, Data engineering, Data Structures, REST API, Serverless, Pandas, Spark, Tensorflow, CI/CD, Linux, Terraform, Microservices, JSON

Posted about 16 hours ago
Apply

πŸ“ Canada

πŸ’Έ 98400.0 - 137800.0 CAD per year

πŸ” Software Development

  • A degree in Computer Science or Engineering, and 5-8 years of experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, Java, Go, and shell script
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience with various types of data stores, query engines and frameworks, e.g. PostgreSQL, MySQL, S3, Redshift/Spectrum, Presto/Athena, Spark
  • Experience working with message queues such as Kafka and Kinesis
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data
  • Experience with data warehousing and data modeling best practices
  • Work within a cross-functional team (including analysts, product managers, and other developers) to deliver data products and services to our internal stakeholders
  • Conduct directed research and technical analysis of new candidate technologies that fill a development team’s business or technical need
  • Provide technical advice, act as a role model for your teammates, flawlessly execute complicated plans, and navigate many levels of the organization
  • Contribute enhancements to development, build, deployment, and monitoring processes with an emphasis on security, reliability and performance
  • Implement our technical roadmap as we scale our services and build new data products
  • Participate in code reviews, attend regular team meetings, and apply software development best practices
  • Take ownership of your work, and work autonomously when necessary
  • Recognize opportunities to improve efficiency in our data systems and processes, increase data quality, and enable consistent and reliable results
  • Participate in the design and implementation of our next generation data platform to empower Hootsuite with data
  • Participate in the development of the technical hiring process and interview scripts with an aim of attracting and hiring the best developers

AWS, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, Data Mining, ETL, Java, Kafka, MySQL, Software Architecture, Algorithms, API testing, Data engineering, Data Structures, Go, Serverless, Spark, CI/CD, RESTful APIs, Microservices, Scala, Data visualization, Data modeling, Data management

Posted 1 day ago
Apply
🔥 Data Engineer
Posted 1 day ago

πŸ“ Italy

🧭 Full-Time

πŸ’Έ 40000.0 - 60000.0 EUR per year

πŸ” Fintech

🏒 Company: Qomodo

  • Experience in the design and development of scalable data pipelines
  • Excellent knowledge of SQL and relational databases (we use PostgreSQL)
  • You like Python and you wink at PySpark!
  • Familiarity with workflow orchestration tools (we use Airflow and Glue Workflow)
  • Knowledge of cloud services for data management (we mainly use AWS Glue and Athena)
  • Experience with ETL/ELT tools and data modeling practices
  • Understanding of best practices for data governance, quality and data security
  • Model data in a way that makes it easy for the business to extract insights
  • Create robust and scalable pipelines to support analysis, reporting, and decision-making

PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Data engineering, Data modeling

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 232000.0 - 310000.0 USD per year

πŸ” Software Development

  • 10+ years of experience designing, developing and launching backend systems at scale using languages like Python or Kotlin.
  • Strong experience leading multiple engineering teams to deliver high quality software
  • Track record of successfully leading engineering teams at both rapidly scaling startups and complex larger technology companies.
  • Expertise in synthesizing complex technical requirements, designs, trade-offs, and capabilities into clear decisions to influence ML & engineering direction
  • Extensive experience developing highly available distributed systems using technologies like AWS, MySQL, Spark and Kubernetes.
  • Experience building and operating online, real-time ML infrastructure including a model server and a feature store
  • Experience developing an offline environment for large scale data analysis and model training using technologies including Spark, Kubeflow, Ray, and Airflow
  • Experience delivering major features and system components
  • Set the multi-year, multi-team technical strategy for ML Platform and deliver it through direct implementation or broad technical leadership
  • Partner with technical leaders across the company to create joint roadmaps that will achieve business impacting goals through the advancement of machine learning
  • Act as a force-multiplier for your teams through your definition and advocacy of technical solutions and operational processes
  • You have an ownership mindset, and you will proactively champion investments in availability so that every project in your area achieves its availability targets
  • You will foster a culture of quality and ownership on your team by setting system design standards for your team, and advocating for them beyond your team through your writing and tech talks
  • You will help develop talent on your team by providing feedback and guidance, and leading by example

AWS, Backend Development, Leadership, Project Management, Python, Apache Airflow, Data Analysis, Kotlin, Kubeflow, Kubernetes, Machine Learning, MySQL, Software Architecture, Cross-functional Team Leadership, Data engineering, Spark, Communication Skills, RESTful APIs, DevOps

Posted 2 days ago
Apply

πŸ“ Argentina, Colombia, Peru, Bolivia, Plurinational State of, Mexico

🧭 Contract

πŸ’Έ 2300.0 - 2500.0 USD per month

πŸ” Software Development

🏒 Company: Workana

  • Experience with Selenium, Puppeteer or Playwright.
  • Experience with Java (Spring Boot, Jsoup, HttpClient) and Python (Scrapy, Selenium, Playwright, FastAPI).
  • Experience with pandas and NumPy.
  • Knowledge in SQL databases (PostgreSQL, MySQL, SQL Server).
  • Implementation of scrapers in scalable environments with Docker and Kubernetes.
  • Deployment in AWS or GCP.
  • Development and maintenance of scrapers for extracting data from web portals.
  • Refactoring and optimization of legacy Java scrapers, migrating them to Python.
  • Implementation of more efficient architectures to improve performance.

AWS, Backend Development, Docker, GraphQL, PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Java, Java EE, Kubernetes, MySQL, Numpy, Spring Boot, Data engineering, FastAPI, REST API, Pandas, Selenium, CI/CD, Microservices, JSON, Data management

Posted 3 days ago
Apply

πŸ“ Lithuania

πŸ’Έ 4000.0 - 6000.0 EUR per month

πŸ” Software Development

🏒 Company: Softeta

  • 4+ years of experience as a Data Engineer
  • Experience with Azure (Certifications are a Plus)
  • Experience with Databricks, Azure Data Lake, Data Factory and Apache Airflow
  • CI/CD or infrastructure as code
  • Knowledge of Medallion Architecture or Multihop architecture
  • Experience developing and administering ETL processes in the Cloud (Azure, AWS or GCP) environment
  • Strong programming skills in Python and SQL
  • Strong problem-solving and analytical skills
  • Design, develop, and maintain data pipelines and ETL processes
  • Data modeling, data cleansing
  • Automating data processing workflows using tools such as Airflow or other workflow management tools
  • Optimizing the performance of databases, including designing and implementing data structures and using indexes appropriately
  • Implement data quality and data governance processes
  • Being a data advocate and helping unlock business value by using data

Python, SQL, Apache Airflow, ETL, Azure, Data engineering, CI/CD, Data modeling

Posted 3 days ago
Apply

πŸ“ United States

πŸ’Έ 144000.0 - 180000.0 USD per year

πŸ” Software Development

🏒 Company: HungryrootπŸ‘₯ 101-250πŸ’° $40,000,000 Series C almost 4 years agoArtificial Intelligence (AI)Food and BeverageE-CommerceRetailConsumer GoodsSoftware

  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Posted 3 days ago
Apply
🔥 Data Engineer
Posted 7 days ago

πŸ“ United States

πŸ’Έ 112800.0 - 126900.0 USD per year

πŸ” Software Development

🏒 Company: Titan Cloud

  • 4+ years of relevant employment experience
  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • MySQL, MSSQL Database, Postgres, Python
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Developing and maintaining a data pipeline design.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Designing, building, and maintaining the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player to be able to work in an agile environment

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting

Posted 7 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 108000.0 - 162000.0 USD per year

πŸ” Insurance

🏒 Company: OpenlyπŸ‘₯ 251-500πŸ’° $100,000,000 Series D over 1 year agoLife InsuranceProperty InsuranceInsuranceCommercial InsuranceAuto Insurance

  • 1 to 2 years of data engineering and data management experience.
  • Scripting skills in Python.
  • Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management.
  • Experience with Google Cloud data store and data orchestration technologies and concepts.
  • Hands-on experience and understanding of the entire data pipeline architecture: Data replication tools, staging data, data transformation, data movement, and cloud based data platforms.
  • Understanding of a modern next generation data warehouse platform, such as the Lakehouse and multi-data layered warehouse.
  • Proficiency with SQL optimization and development.
  • Ability to understand data architecture and modeling as it relates to business goals and objectives.
  • Ability to gain an understanding of data requirements, translate them into source to target data mappings, and build a working solution.
  • Experience with Terraform preferred but not required.
  • Design, create, and maintain data solutions. This includes data pipelines and data structures.
  • Work with data users, data science, and business intelligence personnel, to create data solutions to be used in various projects.
  • Translating concepts to code to enhance our data management frameworks and services to strive towards providing a high quality data product to our data users.
  • Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
  • Collaborating with teammates to derive design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, processes, etc.
  • Participate in domain, stand ups, weekly 1:1's, team collaborations, and biweekly retros
  • Assist in educating others on different aspects of data (e.g. data management best practices, data pipelining best practices, SQL tuning)
  • Build and share your knowledge within the data engineer team and with others in the company (e.g. tech all-hands, tech learning hour, domain meetings, code sync meetings, etc.)

Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Data engineering, Go, REST API, CI/CD, Terraform, Data modeling, Scripting, Data management

Posted 7 days ago
Apply
Shown 10 out of 79