Apache Airflow Job Salaries

Find salary information for remote positions requiring Apache Airflow skills. Make data-driven decisions about your career path.

Apache Airflow

Median high-range salary for jobs requiring Apache Airflow:

$180,000

This analysis is based on salary ranges collected from 58 job descriptions that match the search and allow working remotely.

The Median Salary Range is $143,769 - $180,000

  • 25% of job descriptions advertised a maximum salary above $207,000.
  • 5% of job descriptions advertised a maximum salary above $290,340.
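For readers who want to reproduce figures like these, here is a minimal sketch of the arithmetic, assuming the underlying data is simply a list of advertised (minimum, maximum) pairs; the sample numbers are illustrative, not the actual 58-posting dataset:

    # Hedged sketch: how a median range and the 75th/95th percentile cutoffs
    # above can be computed. The sample data is made up for illustration.
    from statistics import median, quantiles

    ranges = [(120_000, 165_000), (143_769, 180_000), (150_000, 207_000),
              (160_000, 290_340), (130_000, 175_000)]

    lows = [lo for lo, _ in ranges]
    highs = [hi for _, hi in ranges]

    # "Median Salary Range" = median of advertised minimums and maximums.
    print(f"Median range: ${median(lows):,.0f} - ${median(highs):,.0f}")

    # quantiles(n=20) yields 19 cut points at 5% steps; index 14 is the
    # 75th percentile and index 18 the 95th.
    cuts = quantiles(highs, n=20)
    print(f"25% of maximums above ${cuts[14]:,.0f}; 5% above ${cuts[18]:,.0f}")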

Skills and Salary

Specific skills can shift salary ranges substantially for jobs matching this search. The skills that correlate most strongly with higher advertised salaries here are AWS, Data engineering, and CI/CD. Employers tend to pay a premium for candidates who can apply these skills directly to the organization's goals, so demonstrating them can raise both earning potential and competitiveness in the job market. Each entry below shows the share of matching jobs that require the skill and the salary statistics for that subset; a sketch of how such a breakdown can be computed follows the list.

  1. AWS

    60% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $150,000 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $226,875.
    • 5% of job descriptions advertised a maximum salary above $338,475.
  2. Data engineering

    60% of jobs mention Data engineering as a required skill. The Median Salary Range for these jobs is $155,000 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $223,750.
    • 5% of job descriptions advertised a maximum salary above $338,475.
  3. CI/CD

    36% of jobs mention CI/CD as a required skill. The Median Salary Range for these jobs is $150,000 - $185,000.

    • 25% of job descriptions advertised a maximum salary above $212,500.
    • 5% of job descriptions advertised a maximum salary above $329,304.
  4. Python

    90% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $143,769 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $212,500.
    • 5% of job descriptions advertised a maximum salary above $300,510.
  5. SQL

    84% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $144,767 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $211,250.
    • 5% of job descriptions advertised a maximum salary above $306,205.
  6. Data modeling

    47% of jobs mention Data modeling as a required skill. The Median Salary Range for these jobs is $144,767 - $180,000.

    • 25% of job descriptions advertised a maximum salary above $223,125.
    • 5% of job descriptions advertised a maximum salary above $363,268.
  7. Docker

    34% of jobs mention Docker as a required skill. The Median Salary Range for these jobs is $137,383.50 - $177,500.

    • 25% of job descriptions advertised a maximum salary above $210,000.
    • 5% of job descriptions advertised a maximum salary above $339,227.
  8. Snowflake

    43% of jobs mention Snowflake as a required skill. The Median Salary Range for these jobs is $130,000 - $170,000.

    • 25% of job descriptions advertised a maximum salary above $198,500.
    • 5% of job descriptions advertised a maximum salary above $258,500.
  9. ETL

    53% of jobs mention ETL as a required skill. The Median Salary Range for these jobs is $130,000 - $169,789.

    • 25% of job descriptions advertised a maximum salary above $203,000.
    • 5% of job descriptions advertised a maximum salary above $229,900.
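As noted above, a per-skill breakdown like this one can be computed by filtering postings on each skill and taking medians within the subset. A hypothetical sketch, with an illustrative schema (the postings list and field names are assumptions, not Remoote.app's actual data model):

    # Hedged sketch of the per-skill grouping behind the list above.
    from statistics import median

    postings = [
        {"skills": {"AWS", "Python", "SQL"}, "min": 150_000, "max": 200_000},
        {"skills": {"Python", "ETL"}, "min": 130_000, "max": 169_789},
        {"skills": {"AWS", "CI/CD", "Python"}, "min": 143_769, "max": 180_000},
    ]

    def skill_stats(skill):
        subset = [p for p in postings if skill in p["skills"]]
        share = 100 * len(subset) / len(postings)
        lo = median(p["min"] for p in subset)
        hi = median(p["max"] for p in subset)
        return f"{share:.0f}% of jobs mention {skill}; median range ${lo:,.0f} - ${hi:,.0f}"

    for skill in ("AWS", "Python", "CI/CD"):
        print(skill_stats(skill))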

Industries and Salary

Industry plays a major role in salary ranges for jobs matching this search. Among these roles, the industries with the highest advertised compensation include Data activation and SaaS, Game Development, and Cybersecurity. Industry size, profitability, and market trends all influence pay within a sector, so weigh industry-specific factors alongside the role itself when evaluating career paths and salary expectations.

  1. Data activation and SaaS

    2% of jobs are in the Data activation and SaaS industry. The Median Salary Range for these jobs is $200,000 - $260,000.

  2. Game Development

    3% of jobs are in the Game Development industry. The Median Salary Range for these jobs is $175,000 - $230,000.

    • 25% of job descriptions advertised a maximum salary above $270,000.
  3. Cybersecurity

    2% of jobs are in the Cybersecurity industry. The Median Salary Range for these jobs is $176,000 - $207,000.

  4. Software Development

    21% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $138,750 - $202,000.

    • 25% of job descriptions advertised a maximum salary above $229,000.
    • 5% of job descriptions advertised a maximum salary above $418,609.
  5. Mental Health Technology

    3% of jobs are in the Mental Health Technology industry. The Median Salary Range for these jobs is $170,000 - $200,000.

  6. Healthcare

    9% of jobs are in the Healthcare industry. The Median Salary Range for these jobs is $150,000 - $185,000.

    • 25% of job descriptions advertised a maximum salary above $206,250.
    • 5% of job descriptions advertised a maximum salary above $210,000.
  7. Biotechnology

    2% of jobs are in the Biotechnology industry. The Median Salary Range for these jobs is $144,767 - $169,789.

  8. Data Engineering

    10% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $128,050 - $169,075.

    • 25% of job descriptions advertised a maximum salary above $200,000.
  9. DataOps

    5% of jobs are in the DataOps industry. The Median Salary Range for these jobs is $130,000 - $160,000.

    • 25% of job descriptions advertised a maximum salary above $190,000.
    • 5% of job descriptions advertised a maximum salary above $200,000.
  10. Legal Services

    7% of jobs are in the Legal Services industry. The Median Salary Range for these jobs is $102,500 - $149,500.

    • 25% of job descriptions advertised a maximum salary above $165,500.
    • 5% of job descriptions advertised a maximum salary above $171,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

67 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Data Engineer

📍 United States

💸 112,800 - 126,900 USD per year

🔍 Software Development

🏢 Company: Titan Cloud

  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • MySQL, MSSQL Database, Postgres, Python
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Developing and maintaining a data pipeline design.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Designing, building, and maintaining the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player able to work in an agile environment.

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting
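The validation and monitoring duties in the listing above recur across these postings. As a rough illustration only (the row schema and rules are invented, not Titan Cloud's actual stack), an automated row-level quality gate might look like:

    # Hedged sketch of an automated data-quality check; schema and rules
    # are illustrative assumptions.
    def validate_rows(rows, required=("id", "amount")):
        """Split rows into passing and failing sets before loading."""
        good, bad = [], []
        for row in rows:
            complete = all(row.get(col) is not None for col in required)
            sane = complete and row["amount"] >= 0
            (good if sane else bad).append(row)
        if bad:
            # A real pipeline would alert or fail the task here.
            print(f"{len(bad)} rows failed validation")
        return good

    print(validate_rows([{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]))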

Posted 1 day ago
Apply

📍 United States

🧭 Full-Time

💸 108,000 - 162,000 USD per year

🔍 Insurance

🏢 Company: Openly 👥 251-500 💰 $100,000,000 Series D over 1 year ago · Life Insurance, Property Insurance, Insurance, Commercial Insurance, Auto Insurance

  • 1 to 2 years of data engineering and data management experience.
  • Scripting skills in Python.
  • Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management.
  • Experience with Google Cloud data store and data orchestration technologies and concepts.
  • Hands-on experience and understanding of the entire data pipeline architecture: Data replication tools, staging data, data transformation, data movement, and cloud based data platforms.
  • Understanding of a modern next generation data warehouse platform, such as the Lakehouse and multi-data layered warehouse.
  • Proficiency with SQL optimization and development.
  • Ability to understand data architecture and modeling as it relates to business goals and objectives.
  • Ability to gain an understanding of data requirements, translate them into source to target data mappings, and build a working solution.
  • Experience with Terraform preferred but not required.
  • Design, create, and maintain data solutions. This includes data pipelines and data structures.
  • Work with data users, data science, and business intelligence personnel, to create data solutions to be used in various projects.
  • Translating concepts to code, enhancing our data management frameworks and services to deliver a high-quality data product to our data users.
  • Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
  • Collaborating with teammates to derive design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, processes, etc.
  • Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros
  • Assist in educating others on different aspects of data (e.g. data management best practices, data pipelining best practices, SQL tuning)
  • Build and share your knowledge within the data engineer team and with others in the company (e.g. tech all-hands, tech learning hour, domain meetings, code sync meetings, etc.)

Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Data engineering, Go, REST API, CI/CD, Terraform, Data modeling, Scripting, Data management

Posted 1 day ago
Apply

📍 United States

💸 70,000 - 105,000 USD per year

🔍 Software Development

🏢 Company: VUHL

  • Relevant experience in data engineering or a related discipline.
  • Demonstrated ability to code effectively and a solid understanding of software engineering principles.
  • Experience using SQL or other query language to manage and process data.
  • Experience using Python to build ETL pipelines
  • Experience working with data from various sources and in various formats, including flat files, REST APIs, Excel files, JSON, XML, etc.
  • Experience with Snowflake, SQL Server, or related database technologies.
  • Experience using orchestration tools like Dagster (preferred), Apache Airflow, or similar.
  • Preference for Agile product delivery.
  • Familiarity with GIT, Change Management, and application lifecycle management tools.
  • Ability to influence others without positional control.
  • Create and deliver functional ETL pipelines and other data solutions using core technologies like SQL, Python, Snowflake, Dagster, and SSIS in an agile development environment. Apply sound database design principles and adhere to Clean Code practices.
  • Engage in whole team planning, retrospectives, and communication. Interact with Architects and Product Owners to translate requirements into actionable business logic.
  • Participate in proposing and adopting Engineering standards related to architectural considerations and non-functional requirements such as security, reliability, and stability. Ensure proper management and visibility of borrower data and the life of a loan. Contribute to data governance initiatives.
  • Actively contribute to strengthening the team and culture by taking on various duties as needed, excluding licensed activities.

Python, SQL, Agile, Apache Airflow, ETL, Git, Snowflake, Data engineering, REST API, JSON, Data modeling, Software Engineering, Data management

Posted 1 day ago
Apply

📍 Canada

💸 98,400 - 137,800 CAD per year

🔍 Data Technology

🏢 Company: Hootsuite 👥 1001-5000 💰 $50,000,000 Debt Financing almost 7 years ago 🫂 Last layoff about 2 years ago · Digital Marketing, Social Media Marketing, Social Media Management, Apps

  • A degree in Computer Science or Engineering, and senior-level experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact.
  • Experience planning and leading a team using Scrum agile methodology ensuring timely delivery and continuous improvement.
  • Experience liaising with various business stakeholders to understand their data requirements and convey the technical solutions.
  • Experience with data warehousing and data modeling best practices.
  • Passionate interest in data engineering and infrastructure: ingestion, storage, and compute in relational, NoSQL, and serverless architectures
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data.
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, SQL and shell script.
  • Experience with various types of data stores, query engines and data frameworks, e.g. PostgreSQL, MySQL, S3, Redshift, Presto/Athena, Spark and dbt.
  • Experience working with message queues such as Kafka and Kinesis
  • Experience with ETL and pipeline orchestration such as Airflow, AWS Glue
  • Experience with JIRA in managing sprints and roadmaps
  • Lead development and maintenance of scalable and efficient data pipeline architecture
  • Work within cross-functional teams, including Data Science, Analytics, Software Development, and business units, to deliver data products and services.
  • Collaborate with business stakeholders and translate requirements into scalable data solutions.
  • Monitor and communicate project statuses while mitigating risk and resolving issues.
  • Work closely with the Senior Manager to align team priorities with business objectives.
  • Assess and prioritize the team's work, appropriately delegating to others and encouraging team ownership.
  • Proactively share information, actively solicit feedback, and facilitate communication, within teams and other departments.
  • Design, write, test, and deploy high quality scalable code.
  • Maintain high standards of security, reliability, scalability, performance, and quality in all delivered projects.
  • Contribute to shaping our technical roadmap as we scale our services and build our next-generation data platform.
  • Build, support and lead a high performance, cohesive team of developers, in close partnership with the Senior Manager, Data Analytics.
  • Participate in the hiring process, with an aim of attracting and hiring the best developers.
  • Facilitate ongoing development conversations with your team to support their learning and career growth.

AWS, Leadership, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, ETL, MySQL, SCRUM, Cross-functional Team Leadership, Algorithms, Apache Kafka, Data engineering, Data Structures, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Mentoring, DevOps, Written communication, Coaching, Scala, Data visualization, Team management, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply

📍 United States

💸 169,000 - 240,000 USD per year

🔍 Software Development

  • 4+ years of experience designing, developing and launching backend systems at scale using languages like Python or Kotlin.
  • A track record of developing highly available distributed systems using technologies like AWS, MySQL and Kubernetes.
  • Experience building and managing Workflow Orchestration frameworks like Airflow, Flyte, Prefect, Temporal, Luigi, etc.
  • Experience with or working knowledge for efficiently scaling frameworks like Spark/Flink for extremely large scale datasets on Kubernetes.
  • Experience defining a technical plan for the delivery of a significant feature or system component with an elegant, simple and extensible design. You write high quality code that is easily understood and used by others.
  • Proficient at making significant changes in a large code base, and have developed a suite of tools and practices that enable you and your team to do so safely.
  • Experience demonstrates that you take ownership of your growth, proactively seeking feedback from your team, your manager, and your stakeholders.
  • Strong verbal and written communication skills that support effective collaboration with our global engineering team.
  • This position requires either equivalent practical experience or a Bachelor’s degree in a related field
  • Be responsible for owning and delivering quarterly goals for your team, leading engineers on your team through ambiguity to solve open-ended problems, and ensuring that everyone is supported throughout delivery.
  • Support your peers and stakeholders in the product development lifecycle by collaborating with product management, design & analytics by participating in ideation, articulating technical constraints, and partnering on decisions that properly consider risks and trade-offs.
  • Proactively identify project, process, technology or business issues, advocate for them, and lead in solving them.
  • Support the operations and availability of your team’s artifacts by creating and monitoring metrics, escalating when needed, and supporting “keep the lights on” & on-call efforts.
  • Foster a culture of quality and ownership on your team by setting or improving code review and design standards for your team, and advocating for them beyond your team through your writing and tech talks.
  • Help develop talent on your team by providing feedback and guidance, and leading by example.

AWS, Backend Development, Docker, Leadership, Python, SQL, Apache Airflow, Kotlin, Kubernetes, MySQL, Software Architecture, Algorithms, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, DevOps, Written communication, Microservices, Team management, Software Engineering

Posted 2 days ago
Apply

📍 United States, Canada

🧭 Full-Time

💸 147,500 - 227,500 USD per year

🔍 Financial Technology

  • 4+ years in software engineering for data systems.
  • Experience in scalable infrastructure to support batch, micro-batch or streaming processing
  • Experience in business domains such as payment systems, credit cards, bank transfers, or blockchains.
  • Experience in data governance and provenance.
  • Internal knowledge of open-source data technologies.
  • Ability to tackle complex and ambiguous problems.
  • Self-starter who takes ownership and enjoys moving at a fast pace.
  • Excellent communication skills, with the ability to collaborate across multiple remote teams, share ideas and present concepts effectively.
  • Design, build, and operate data platform services (warehousing, orchestration, and catalogs).
  • Continuously enhance platform operations by improving monitoring, performance, reliability, and resource optimization.
  • Design, build and maintain the data ingestion framework to source the required data for various analytical and reporting needs, which include onchain data, internal system data, and partner data.
  • Be a domain expert in data warehousing, modeling, pipelines, and quality. Work closely across multiple stakeholders–including Product, Engineering, Data Science, Security and Compliance teams–on data contract modeling, data lifecycle management, governance and regulatory/legal compliance.
  • Provide ML data platform capabilities for AI/Data Science teams to perform data preparation, model training and management, and experiment execution.
  • Develop and maintain core services and libraries to enhance critical platform functionalities, such as cataloging data assets and lineage, tracking data versioning and quality, managing auto-backfilling, implementing access controls on data assets.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Blockchain, Cloud Computing, ETL, Java, Kafka, Kubernetes, Machine Learning, Snowflake, Algorithms, Data engineering, Data science, Data Structures, REST API, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Posted 2 days ago
Apply

📍 United States

🧭 Full-Time

💸 217,000 - 303,900 USD per year

🔍 Digital Advertising

🏢 Company: Reddit 👥 1001-5000 💰 $410,000,000 Series F over 3 years ago 🫂 Last layoff almost 2 years ago · News, Content, Social Network, Social Media

  • M.S.: 10+ years of industry data science experience, emphasizing experimentation and causal inference.
  • Ph.D.: 6+ years of industry data science experience, emphasizing experimentation and causal inference
  • Master's or Ph.D. in Statistics, Economics, Computer Science, or a related quantitative field
  • Expertise in experimental design, A/B testing, and causal inference
  • Proficiency in statistical programming (Python/R) and SQL
  • Demonstrated ability to apply statistical principles of experimentation (hypothesis testing, p-values, etc.)
  • Experience with large-scale data analysis and manipulation
  • Strong technical communication skills for both technical and non-technical audiences
  • Ability to thrive in fast-paced, ambiguous environments and drive action
  • Desire to mentor and elevate data science practices
  • Experience with digital advertising and marketplace dynamics (preferred)
  • Experience with advertising technology (preferred)
  • Lead the design, implementation, and analysis of sophisticated A/B tests and experiments, leveraging innovative techniques like Bayesian approaches and causal inference to optimize complex ad strategies
  • Extract critical insights through in-depth analysis, developing automated tools and actionable recommendations to drive impactful decisions
  • Define and refine key metrics to empower product teams with a deeper understanding of feature performance
  • Partner with product and engineering to shape experiment roadmaps and drive data-informed product development
  • Provide technical leadership, mentor junior data scientists, and establish best practices for experimentation
  • Drive impactful results by collaborating effectively with product, engineering, sales, and marketing teams

AWS, Python, SQL, Apache Airflow, Data Analysis, Hadoop, Machine Learning, Numpy, Cross-functional Team Leadership, Product Development, Algorithms, Data engineering, Data science, Regression testing, Pandas, Spark, Communication Skills, Analytical Skills, Mentoring, Data visualization, Data modeling, A/B testing
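Because the listing above leans on hypothesis testing and p-values, a compact sketch of the underlying arithmetic may help: a two-sided two-proportion z-test on made-up conversion counts (all numbers are illustrative; Reddit's actual methodology is not described here):

    # Hedged sketch: two-sided two-proportion z-test on illustrative counts.
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Normal CDF via erf; two-sided p-value.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")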

Posted 5 days ago
Apply
🔥 Sr. Data Engineer

📍 United States

🧭 Full-Time

💸 126,100 - 168,150 USD per year

🔍 Data Engineering

🏢 Company: firstamericancareers

  • 5+ years of development experience with any of the following software languages: Python or Scala, and SQL (we use SQL & Python) with cloud experience (Azure preferred or AWS).
  • Hands-on data security and cloud security methodologies. Experience in configuration and management of data security to meet compliance and CISO security requirements.
  • Experience creating and maintaining data intensive distributed solutions (especially involving data warehouse, data lake, data analytics) in a cloud environment.
  • Hands-on experience in modern Data Analytics architectures encompassing data warehouse, data lake etc. designed and engineered in a cloud environment.
  • Proven professional working experience in Event Streaming Platforms and data pipeline orchestration tools like Apache Kafka, Fivetran, Apache Airflow, or similar tools
  • Proven professional working experience in any of the following: Databricks, Snowflake, BigQuery, Spark in any flavor, HIVE, Hadoop, Cloudera or RedShift.
  • Experience developing in a containerized local environment like Docker, Rancher, or Kubernetes preferred
  • Data Modeling
  • Build high-performing cloud data solutions to meet our analytical and BI reporting needs.
  • Design, implement, test, deploy, and maintain distributed, stable, secure, and scalable data intensive engineering solutions and pipelines in support of data and analytics projects on the cloud, including integrating new sources of data into our central data warehouse, and moving data out to applications and other destinations.
  • Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Build and enhance a shared data lake that powers decision-making and model building.
  • Partner with teams across the business to understand their needs and develop end-to-end data solutions.
  • Collaborate with analysts and data scientists to perform exploratory analysis and troubleshoot issues.
  • Manage and model data using visualization tools to provide the company with a collaborative data analytics platform.
  • Build tools and processes to help make the correct data accessible to the right people.
  • Participate in an active rotational production support role during or after business hours, supporting business continuity.
  • Engage in collaboration and decision making with other engineers.
  • Design schema and data pipelines to extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
  • Create, maintain, and optimize database structures to efficiently store and retrieve large volumes of data.
  • Evaluate data trends and model simple to complex data solutions that meet day-to-day business demand and plan for future business and technological growth.
  • Implement data cleansing processes and oversee data quality to maintain accuracy.
  • Function as a key member of the team to drive development, delivery, and continuous improvement of the cloud-based enterprise data warehouse architecture.

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Hadoop, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, Spark, Scala, Data visualization, Data modeling, Data analytics

Posted 7 days ago
Apply

📍 Chile

🧭 Full-Time

💸 2,500 USD per month

🏢 Company: Workana Premium

  • 2 to 4 years of experience working as a Data Engineer.
  • Degree in Computer Engineering, Industrial Engineering, or a related field.
  • Solid knowledge of SQL.
  • Solid knowledge of Python.
  • Solid knowledge of ETL.
  • Solid knowledge of Apache Airflow (DAGs).
  • Solid knowledge of BigQuery.
  • Solid knowledge of CI/CD.
  • Design scalable cloud architectures using Google Cloud Platform (GCP) tools such as BigQuery and Pub/Sub.
  • Implement and maintain data pipelines optimized for large volumes of real-time data.
  • Develop and manage SQL databases, ensuring optimal performance.
  • Implement ETL (Extract, Transform, Load) processes to integrate data from multiple sources.

Python, SQL, Apache Airflow, ETL, GCP, Jenkins, Data engineering, CI/CD, Terraform
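Since the posting above names Apache Airflow DAGs explicitly, here is a minimal sketch of an extract-validate-load DAG; the dag_id, schedule, and task bodies are illustrative assumptions, not this employer's pipeline:

    # Hedged sketch of a minimal Airflow DAG: extract -> validate -> load.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**context):
        return [{"id": 1, "amount": 42.0}]  # placeholder source pull

    def validate(ti, **context):
        rows = ti.xcom_pull(task_ids="extract")
        # Fail the run on bad records instead of loading them.
        assert all(r["amount"] >= 0 for r in rows), "negative amount found"
        return rows

    def load(ti, **context):
        rows = ti.xcom_pull(task_ids="validate")
        print(f"would load {len(rows)} rows into the warehouse")

    with DAG(
        dag_id="etl_with_validation",   # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # "schedule" requires Airflow 2.4+
        catchup=False,
    ) as dag:
        extract_t = PythonOperator(task_id="extract", python_callable=extract)
        validate_t = PythonOperator(task_id="validate", python_callable=validate)
        load_t = PythonOperator(task_id="load", python_callable=load)
        extract_t >> validate_t >> load_t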

Posted 10 days ago
Apply

📍 Chile

🧭 Full-Time

💸 3,100 USD per month

🔍 Software Development

🏢 Company: Workana Premium

  • More than 4 years of experience working as a Data Engineer.
  • Degree in Computer Engineering, Industrial Engineering, or a related field.
  • Solid knowledge of SQL.
  • Solid knowledge of Python.
  • Solid knowledge of ETL.
  • Solid knowledge of Apache Airflow (DAGs).
  • Solid knowledge of BigQuery.
  • Solid knowledge of CI/CD.
  • Design scalable cloud architectures using Google Cloud Platform (GCP) tools such as BigQuery and Pub/Sub.
  • Implement and maintain efficient data pipelines to handle large volumes of real-time data.
  • Develop and manage SQL databases, ensuring optimal performance.
  • Design and implement ETL (Extract, Transform, Load) processes to integrate data from multiple sources.

Python, SQL, Apache Airflow, ETL, GCP, Data engineering, CI/CD

Posted 10 days ago
Apply
Shown 10 out of 67