Remote Data Science Jobs

Airflow
132 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply
πŸ”₯ Senior Analytics Engineer
Posted about 2 hours ago

πŸ“ USA

🏒 Company: Engine

  • 5+ years of industry experience as an Analytics Engineer in high-growth environments.
  • Strong expertise using SQL, Snowflake, Airflow, and BI tools such as Looker.
  • A Bachelor's degree in Computer Science, Information Technology, Engineering, or a related technical field, or equivalent practical experience
  • Develop and implement tools and strategies to improve the data quality, reliability, and governance at Engine.
  • Collaborate with engineering, analytics, and business stakeholders to ensure high quality data empowers every business decision to drive measurable business impact.
  • Enhance data infrastructure and analytics capabilities by working closely with our data infrastructure and analyst teams.
  • Design and build our data pipelines to support long-term business growth without compromising on our day-to-day execution speed.
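
To make the stack above concrete, here is a minimal sketch of a daily Airflow DAG that rebuilds a Snowflake reporting table and then runs a quality check. The DAG id, table names, connection id, and the Snowflake provider setup are illustrative assumptions, not details from the listing:

```python
# Hypothetical daily pipeline: rebuild a Snowflake rollup, then run a check.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_revenue_rollup():
    # Rebuild a reporting table inside Snowflake with plain SQL.
    build_rollup = SnowflakeOperator(
        task_id="build_rollup",
        snowflake_conn_id="snowflake_default",
        sql="""
            CREATE OR REPLACE TABLE analytics.daily_revenue AS
            SELECT order_date, SUM(amount) AS revenue
            FROM raw.orders
            GROUP BY order_date
        """,
    )

    @task
    def check_not_empty():
        # Placeholder data-quality gate; a real task would query the new
        # table and raise if it came back empty.
        pass

    build_rollup >> check_not_empty()


daily_revenue_rollup()
```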

AWS, Docker, SQL, ETL, Git, Snowflake, Airflow

Apply
πŸ”₯ Data Engineer (Contract)
Posted about 5 hours ago

πŸ“ LatAm

🧭 Contract

🏒 Company: AbleRentalProperty ManagementReal Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
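
As an illustration of the Delta Lake work listed above (ACID upserts and time travel), here is a short PySpark sketch; the storage paths, column names, and the configured Spark/Delta session are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch of updated records landed by an upstream job (hypothetical path).
updates = spark.read.parquet("s3://raw-zone/orders/2024-06-01/")

# ACID upsert (MERGE) into the curated Delta table, keyed on order_id.
target = DeltaTable.forPath(spark, "s3://curated-zone/orders")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: read an earlier version of the table, e.g. to validate a backfill.
previous = spark.read.format("delta").option("versionAsOf", 10).load("s3://curated-zone/orders")
```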

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data engineering, Scala, Data modeling

Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Buzz Solutions

  • 8+ years of industry experience with modern systems development, ideally end-to-end pipeline and application development
  • Track record of shipping complex backend features end-to-end
  • Ability to translate customer requirements into technical solutions
  • Strong programming and computer science fundamentals and quality standards
  • Experience with Python and modern web frameworks (FastAPI) and Pydantic
  • Experience designing, implementing, and debugging web technologies and server architecture
  • Experience with modern python packaging and distribution (uv, poetry)
  • Deep understanding of distributed systems and scalable architecture
  • Experience building reusable, modular systems that enable rapid development and easy modification
  • Strong experience with data storage systems (PostgreSQL, Redis, BigQuery, MongoDB)
  • Expertise with queuing/streaming systems (RabbitMQ, Kafka, SQS)
  • Expertise with workflow orchestration frameworks (Celery, Temporal, Airflow) and DAG-based processing
  • Proficiency in utilizing and maintaining cloud infrastructure services (Google Cloud/AWS/Azure)
  • Experience with Kubernetes for container orchestration and deployment
  • Solid grasp of system design patterns and tradeoffs
  • Experience and in-depth understanding of AI/ML systems integration
  • Deep understanding of the ML lifecycle
  • Experience with big data technologies and data pipeline development
  • Experience containerizing and deploying ML applications (Docker) for training and inference workloads
  • Experience with real-time streaming and batch processing systems for ML model workflows
  • Experience with vector databases and search systems for similarity search and embeddings
  • Partner closely with engineering (software, data, and machine learning), product, and design leadership to define product-led growth strategy with an ownership-driven approach
  • Establish best practices, frameworks, and repeatable processes to measure the impact of every feature shipped, taking initiative to identify and solve problems proactively
  • Make effective tradeoffs considering business priorities, user experience, and sustainable technical foundation with a startup mindset focused on rapid iteration and results
  • Develop and lead team execution against both short-term and long-term roadmaps, demonstrating self-starter qualities and end-to-end accountability
  • Mentor and grow team members to be successful contributors while fostering an ownership culture and entrepreneurial thinking
  • Build and maintain backend systems and data pipelines for AI-based software platforms, integrating SQL/NoSQL databases and collaborating with engineering teams to enhance performance
  • Design, deploy, and optimize cloud infrastructure on Google Cloud Platform, including Kubernetes clusters, virtual machines, and cost-effective scalable architecture
  • Implement comprehensive MLOps workflows including model registry, deployment pipelines, monitoring systems for model drift, and CI/CD automation for ML-based backend services
  • Establish robust testing, monitoring, and security frameworks including unit/stress testing, vulnerability assessments, and customer usage analytics
  • Drive technical excellence through documentation, code reviews, standardized practices, and strategic technology stack recommendations
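
A minimal sketch of the FastAPI + Pydantic backend work described above: a typed inference endpoint sitting in front of an ML model. The route, schema fields, and stubbed scoring logic are illustrative assumptions:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class InspectionRequest(BaseModel):
    asset_id: str
    image_url: str


class InspectionResult(BaseModel):
    asset_id: str
    defect_score: float


@app.post("/v1/inspections", response_model=InspectionResult)
def run_inspection(req: InspectionRequest) -> InspectionResult:
    # A real handler would enqueue the job (Celery/Temporal) or call a model
    # server; the score here is a stub.
    return InspectionResult(asset_id=req.asset_id, defect_score=0.42)
```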

AWS, Backend Development, Docker, PostgreSQL, Python, SQL, Cloud Computing, GCP, Kafka, Kubernetes, Machine Learning, MLFlow, MongoDB, RabbitMQ, Airflow, FastAPI, Redis, NoSQL, CI/CD, RESTful APIs, Microservices

Posted about 7 hours ago
Apply

πŸ“ Ontario Canada, British Columbia Canada

🧭 Full-Time

πŸ’Έ 104200.0 - 130200.0 CAD per year

πŸ” Software Development

🏒 Company: MarqetaπŸ‘₯ 1001-5000πŸ’° Post-IPO Equity almost 4 years agoπŸ«‚ Last layoff about 2 years agoCryptocurrencyDebit CardsCredit CardsPaymentsFinTech

  • 5+ years experience managing technical programs or projects involving the engineering, delivery, and operations of online services
  • Proficient in agile software development practices
  • Familiarity with modern cloud-based services technologies
  • Understanding of modern SSDLC (Secure Software Development Life Cycle) practices including OWASP top 10 defense
  • Knowledge of fundamental modern practices for ongoing delivery of high-availability online services
  • Provide excellent technical program management driving a portfolio of technical programs and projects to deliver and evolve Marqeta's online services
  • Develop and leverage strong partnerships for transformation while remaining agile to respond to changing business needs
  • Proactively work with key stakeholders to identify the highest priority Initiatives
  • Define and execute communication plans to report each program across multiple stakeholders in Marqeta including executive-ready reporting
  • Manage risks, schedules, and blockers, facilitating problem-solving for the team
  • Validate assumptions, define success metrics, and use data to drive strategic improvements
  • Suggest improvements to TPM standards to enhance PMO service quality
  • Use scrum and agile practices to boost initiative delivery across engineering teams

AWS, Leadership, Project Management, Software Development, Agile, Cloud Computing, Cybersecurity, Data Analysis, Kafka, SCRUM, Snowflake, Cross-functional Team Leadership, Airflow, Azure, Data engineering, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, RESTful APIs, Linux, DevOps, Compliance, Excellent communication skills, Risk Management, Stakeholder management, Change Management

Posted about 23 hours ago
Apply

πŸ“ BRAZIL

πŸ” Software Development

🏒 Company: Wellhub

  • Knowledge of data pipeline tools and technologies (e.g., Airflow, EMR, Kafka)
  • Able to collaborate with different teams to understand data needs and develop effective data pipelines
  • Comfortable understanding big data concepts to ingest, process and make data available for data scientists, business analysts and product teams
  • Comfortable maintaining data consistency across the entire data ecosystem
  • Develop and maintain data models and structures that enable efficient querying and analysis
  • Design, develop, and maintain data pipelines to transform and process large volumes of data from various sources, while adding business context and semantics to the data
  • Implement automated data quality checks to guarantee data quality and consistency across the whole data life cycle
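
One way to picture the automated data quality checks mentioned above is a pipeline step that fails fast on a bad daily load; the table name, checks, and injected query helper below are hypothetical:

```python
def check_daily_partition(run_date: str, query) -> None:
    """Raise if the day's partition is empty or has null member ids.

    `query` is an injected helper that runs a scalar SQL query against the
    warehouse (hypothetical; it could wrap EMR, Trino, Athena, etc.).
    """
    row_count = query(
        f"SELECT COUNT(*) FROM gold.workout_events WHERE event_date = '{run_date}'"
    )
    null_ids = query(
        "SELECT COUNT(*) FROM gold.workout_events "
        f"WHERE event_date = '{run_date}' AND member_id IS NULL"
    )
    if row_count == 0:
        raise ValueError(f"no rows loaded for {run_date}")
    if null_ids > 0:
        raise ValueError(f"{null_ids} rows missing member_id for {run_date}")
```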

AWS, SQL, Cloud Computing, ETL, Kafka, Airflow, Data engineering, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply

πŸ“ Texas, Denver, CO

πŸ’Έ 148000.0 - 189000.0 USD per year

πŸ” SaaS

🏒 Company: Branch Metrics

  • 4+ years of relevant experience in data science, analytics, or related fields.
  • Degree in Statistics, Mathematics, Computer Science, or related field.
  • Proficiency with Python, SQL, Spark, Bazel, CLI (Bash/Zsh).
  • Expertise in Spark, Presto, Airflow, Docker, Kafka, Jupyter.
  • Strong knowledge of ML frameworks (scikit-learn, pandas, xgboost, lightgbm).
  • Experience deploying models to production on AWS infrastructure and experience with the basic AWS services.
  • Advanced statistical knowledge (regression, A/B testing, Multi-Armed Bandits, time-series anomaly detection).
  • Collaborate with stakeholders to identify data-driven business opportunities.
  • Perform data mining, analytics, and predictive modeling to optimize business outcomes.
  • Conduct extensive research and evaluate innovative approaches for new product initiatives.
  • Develop, deploy, and monitor custom models and algorithms.
  • Deliver end-to-end production-ready solutions through close collaboration with engineering and product teams.
  • Identify opportunities to measure and monitor key performance metrics, assessing the effectiveness of existing ML-based products.
  • Serve as a cross-functional advisor, proposing innovative solutions and guiding product and engineering teams toward the best approaches.
  • Anticipate and clearly articulate potential risks in ML-driven products.
  • Effectively integrate solutions into existing engineering infrastructure.
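
For the A/B testing side of the role, here is a small sketch of a two-proportion z-test written with SciPy; the conversion counts are made-up example numbers, not real experiment data:

```python
from math import sqrt

from scipy.stats import norm

# Conversions and exposures per variant (hypothetical experiment results).
conv_a, n_a = 480, 10_000
conv_b, n_b = 540, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"lift={p_b - p_a:.4f}  z={z:.2f}  p={p_value:.4f}")
```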

AWS, Docker, Python, SQL, Bash, Kafka, Machine Learning, Airflow, Regression testing, Pandas, Spark, RESTful APIs, Time Management, A/B testing

Posted 1 day ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 95000.0 - 146000.0 USD per year

πŸ” Software Development

🏒 Company: 1Password

  • Minimum of 2+ years of experience as an Analytics Engineer, Data Engineer, or in a similar role, with a proven track record of shipping canonical datasets
  • Minimum of 2+ years technical experience leveraging dbt and SQL for data transformation
  • Minimum of 2+ years building LookML models in Looker (or equivalent experience in other Business Intelligence tools with a semantic layer)
  • Proficiency in at least one functional/OOP language such as Python or R
  • Proficiency in version control (e.g., Git) and command-line tools
  • Familiarity with leveraging distributed data stores (e.g. S3, Trino, Hive, Spark)
  • Experience building multi-step ETL jobs coupled with orchestrating workflows (e.g. Airflow, Dagster)
  • Experience in writing unit tests to validate data products and version control (e.g. GitHub, Stash)
  • Experience solving ambiguous problem statements in an early stage environment
  • Collaborate with team members to collect business requirements, define successful analytics outcomes, and design & build data models
  • Full stack analytics engineering development, building models to consume, transform, and expose data to stakeholders and production systems
  • Drive a culture of experimental design, testing agenda, and best practices
  • Contribute to the culture of 1Password's Data team by influencing processes, tools, and systems that will allow us to make better decisions in a scalable way
  • Collaborate with Analytics, Business, Product, Engineering and Data Infra teams to develop roadmaps and measure success
  • Work closely with Data Engineering teams to capture, move, store, and transform raw data into highly actionable insights, and partner with business teams to turn those insights into action
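
A sketch of the kind of unit test for data products mentioned above: a pytest-style check on a toy pandas transformation standing in for a dbt model; the function and column names are hypothetical:

```python
import pandas as pd


def build_daily_active_users(events: pd.DataFrame) -> pd.DataFrame:
    """Toy transformation: one row per (event_date, user_id)."""
    return (
        events.dropna(subset=["user_id"])
        .drop_duplicates(subset=["event_date", "user_id"])
        .loc[:, ["event_date", "user_id"]]
    )


def test_daily_active_users_is_unique_and_non_null():
    events = pd.DataFrame(
        {
            "event_date": ["2024-06-01", "2024-06-01", "2024-06-01"],
            "user_id": ["u1", "u1", None],
        }
    )
    dau = build_daily_active_users(events)
    assert dau["user_id"].notna().all()
    assert not dau.duplicated(subset=["event_date", "user_id"]).any()
```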

Python, SQL, ETL, Git, Airflow, Data modeling

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 230000.0 USD per year

πŸ” Daily Fantasy Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and pushing end-to-end data engineering pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following: SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data Transformation services: Spark, Dataproc, etc
  • Scripting languages: SQL, Python, Go.
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Cloud Compute Engine, Cloud Functions, Kubernetes Engine etc.
  • Data Processing and Messaging Systems: Kafka, Pulsar, Flink
  • Code version control: Git
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer.
  • Monitoring and Observability platforms: Prometheus, Grafana, ELK stack, Datadog
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager.
  • Other platform tools such as Redis, FastAPI, and Streamlit.
  • Excellent organizational, communication, presentation, and collaboration skills, with experience working with both technical and non-technical teams across the organization
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching and educational opportunities
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.
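
To illustrate the streaming-ingestion work described above, here is a minimal Kafka consumer loop in Python using the kafka-python client; the topic, consumer group, broker address, and sink are hypothetical:

```python
import json

from kafka import KafkaConsumer  # kafka-python client


def run(sink) -> None:
    """Consume game events and hand each one to `sink` (e.g. a warehouse writer)."""
    consumer = KafkaConsumer(
        "game-events",
        bootstrap_servers=["localhost:9092"],
        group_id="core-data-platform",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        enable_auto_commit=False,
    )
    for message in consumer:
        sink(message.value)   # load into BigQuery/Postgres or a downstream system
        consumer.commit()     # commit offsets only after the sink succeeds
```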

Leadership, Python, SQL, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Airflow, Data engineering, Go, Postgres, REST API, Spark, CI/CD, Mentoring, DevOps, Terraform, Data visualization, Data modeling, Scripting

Posted 2 days ago
Apply

πŸ“ Mexico

πŸ” Software Development

🏒 Company: DaCodes

  • Experience working with Snowflake, Oracle, Docker, and DAG tools such as Dagster or Airflow.
  • Solid knowledge of SQL and Python.
  • Solid knowledge of database administration and monitoring, as well as the implementation and maintenance of a DWH according to data-handling best practices.
  • Prior experience in the design and management of cloud infrastructure, preferably Azure.
  • Strong analytical and problem-solving skills.
  • Ability to work in a team environment and communicate effectively.
  • Implement and maintain data infrastructure in Snowflake and Azure.
  • Develop and optimize Oracle databases, both on-premise and in the cloud.
  • Create and manage efficient and scalable data workflows, implementing and using DAG tools like Dagster and Airflow.
  • Monitor the performance of the data infrastructure and make adjustments as needed.
  • Work with key stakeholders on data traceability and consumption.
  • Ensure data integrity, availability, and security.
  • Solve technical problems and provide continuous support for operational data systems.
  • Work closely with data scientists and analysts to understand their data needs and develop appropriate solutions.
  • Participate in the planning and execution of data integration and migration projects.
  • Document processes and maintain updated records of the data infrastructure.
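
As a sketch of the DAG tooling mentioned above, here is a tiny Dagster example with two software-defined assets; the asset names and toy extract step are illustrative assumptions:

```python
from dagster import Definitions, asset


@asset
def raw_invoices() -> list[dict]:
    # Stand-in for an extract step (e.g. pulling rows from Oracle).
    return [{"invoice_id": 1, "amount": 120.0}]


@asset
def invoice_summary(raw_invoices: list[dict]) -> dict:
    # Dagster infers the dependency on raw_invoices from the argument name.
    return {"count": len(raw_invoices), "total": sum(r["amount"] for r in raw_invoices)}


defs = Definitions(assets=[raw_invoices, invoice_summary])
```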

Docker, Python, SQL, Cloud Computing, ETL, Microsoft Azure, Oracle, Snowflake, Airflow, Data engineering, RDBMS, Data modeling

Posted 3 days ago
Apply

πŸ“ Brazil

🧭 Full-Time

πŸ” Payments platform

🏒 Company: Alternative PaymentsπŸ‘₯ 11-50Financial ServicesOnline PortalsPayments

  • 5+ years in software or data engineering, with hands-on experience provisioning and maintaining cloud data warehouses (e.g., Snowflake, Amazon Redshift, Google BigQuery)
  • Proficiency with Infrastructure-as-Code tools (Terraform, CloudFormation, Pulumi) to automate data platform deployments
  • Strong SQL skills and experience building ETL pipelines in Python or Java/Scala
  • Familiarity with orchestration frameworks (Airflow, Prefect, Dagster) or transformation tools (dbt)
  • Architect and spin up our production and sandbox data warehouse environments using IaC
  • Build and deploy the first wave of ETL pipelines to ingest transactional, event and third-party data
  • Embed data quality tests and SLA tracking into every pipeline
  • Establish coding conventions, pipeline templates and best practices for all future data projects
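
A minimal sketch of an early ETL pipeline with an embedded quality gate, as described above, using Prefect for orchestration; the extract source, checks, and placeholder load step are hypothetical:

```python
from prefect import flow, task


@task
def extract() -> list[dict]:
    # Stand-in for pulling transactional records from an API or database.
    return [{"payment_id": "p1", "amount": 25.0}, {"payment_id": "p2", "amount": 40.0}]


@task
def quality_gate(rows: list[dict]) -> list[dict]:
    # Fail the run before loading if the batch is empty or contains bad amounts.
    assert rows, "empty extract"
    assert all(r["amount"] >= 0 for r in rows), "negative amounts found"
    return rows


@task
def load(rows: list[dict]) -> None:
    print(f"loading {len(rows)} rows into the warehouse")  # placeholder sink


@flow
def payments_etl() -> None:
    load(quality_gate(extract()))


if __name__ == "__main__":
    payments_etl()
```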

AWS, Python, SQL, Cloud Computing, ETL, Snowflake, Airflow, Data engineering, Communication Skills, Analytical Skills, Collaboration, CI/CD, Mentoring, DevOps, Terraform, Documentation, Data modeling

Posted 3 days ago
Apply
Shown 10 out of 132

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote Data Science Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search – filter job listings based on your country of residence;
  • AI-powered job processing – artificial intelligence analyzes thousands of listings, highlighting key details so you don't have to read long descriptions;
  • advanced filters – sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates – we monitor job relevance and remove outdated listings;
  • personalized notifications – get tailored job offers directly via email or Telegram;
  • resume builder – create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security – modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing β€” up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.