Remote Data Science Jobs

Data engineering
796 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

πŸ“ United States

🧭 Full-Time

πŸ’Έ 138000.0 - 150000.0 USD per year

πŸ” Education

🏒 Company: Gradient Learning

  • 5+ years of experience working in the K-12 space
  • 3+ years of experience in data product development
  • 3+ years of experience translating complex data into educator-friendly visualizations using Tableau
  • 3+ years of people management experience
  • 3+ years of experience with Snowflake or comparable cloud-based data warehousing platforms (strongly preferred)
  • Experience using AI or machine learning to enhance data analysis or deliver scalable, educator-facing insights (strongly preferred)
  • Familiarity with LTI standards, LMS platforms, and education data interoperability; direct experience with Canvas LMS (strongly preferred)
  • Knowledge of data privacy, security, and protection standards, particularly as they relate to PII and educational data (FERPA, COPPA, etc.) (preferred)
  • Design, Refine, and Lead the Data & Insights Product Strategy
  • Oversee Data & Insights Product Development and Delivery
  • Strengthen Data Infrastructure in Partnership with Information Systems
  • Lead the Data & Insights Product Delivery Team

SQL, Data Analysis, ETL, Machine Learning, People Management, Product Management, Snowflake, User Experience Design, Cross-functional Team Leadership, Tableau, Product Development, Data engineering, Communication Skills, Agile methodologies, RESTful APIs, Data visualization, Stakeholder management, Strategic thinking, Data modeling, Data analytics, Data management

Posted about 1 hour ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Information Security

  • 5+ years of experience in security engineering, with a primary focus on SIEM platforms.
  • Hands-on experience with at least two of the following SIEM platforms: Splunk, Microsoft Sentinel, Elastic, Google SecOps, CrowdStrike NG-SIEM, LogScale
  • 2+ years of experience with Cribl or similar observability pipeline tools (e.g., Logstash, Fluentd, Kafka).
  • Strong knowledge of log formats, data normalization, and event correlation.
  • Familiarity with detection engineering, threat modeling, and MITRE ATT&CK framework.
  • Proficiency with scripting (e.g., Python, PowerShell, Bash) and regular expressions.
  • Deep understanding of logging from cloud (AWS, Azure, GCP) and on-prem environments.
  • Architect, implement, and maintain SIEM solutions with a focus on modern platforms.
  • Design and manage log ingestion pipelines using tools such as Cribl Stream, Edge, or Search (or similar).
  • Optimize data routing, enrichment, and filtering to improve SIEM efficiency and cost control.
  • Collaborate with cybersecurity, DevOps, and cloud infrastructure teams to integrate log sources and telemetry data.
  • Develop custom parsers, dashboards, correlation rules, and alerting logic for security analytics and threat detection.
  • Maintain and enhance system reliability, scalability, and performance of logging infrastructure.
  • Provide expertise and guidance on log normalization, storage strategy, and data retention policies.
  • Lead incident response investigations and assist with root cause analysis leveraging SIEM insights.
  • Mentor junior engineers and contribute to strategic security monitoring initiatives.
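As a rough, hypothetical illustration of the parser and normalization work described above, the Python sketch below uses a regular expression to turn one assumed sshd log line into a flat, SIEM-friendly JSON event; the sample log format and field names are assumptions, not tied to any particular SIEM platform.

```python
import json
import re
from typing import Optional

# Hypothetical sshd "Failed password" line; real log sources and formats vary.
SAMPLE = "Jan 12 03:14:07 web-01 sshd[2201]: Failed password for root from 203.0.113.7 port 52144 ssh2"

# Regex capturing the fields we want to normalize.
PATTERN = re.compile(
    r"(?P<month>\w{3}) (?P<day>\s?\d+) (?P<time>[\d:]+) "
    r"(?P<host>\S+) sshd\[(?P<pid>\d+)\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+) port (?P<src_port>\d+)"
)

def normalize(line: str) -> Optional[dict]:
    """Map a raw syslog line to a flat event with assumed, SIEM-friendly field names."""
    match = PATTERN.search(line)
    if not match:
        return None  # unparsed lines would typically be routed to a catch-all index
    fields = match.groupdict()
    return {
        "event.category": "authentication",
        "event.outcome": "failure",
        "host.name": fields["host"],
        "user.name": fields["user"],
        "source.ip": fields["src_ip"],
        "source.port": int(fields["src_port"]),
        "process.pid": int(fields["pid"]),
    }

if __name__ == "__main__":
    print(json.dumps(normalize(SAMPLE), indent=2))
```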

AWS, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, API testing, Azure, Data engineering, CI/CD, RESTful APIs, Linux, DevOps, JSON, Ansible, Scripting

Posted about 2 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 185500.0 - 293750.0 USD per year

πŸ” Software Development

  • Strong technical expertise in designing and building scalable ML infrastructure.
  • Experience with distributed systems and cloud-based ML platforms.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Deep understanding of ML workflows, including data pipelines, model training, and deployment.
  • Passion for innovation and eagerness to implement the latest advancements in ML infrastructure.
  • Strong problem-solving skills and ability to optimize complex systems for performance and reliability.
  • Collaborative mindset with excellent communication skills to work across teams.
  • Ability to thrive in a fast-paced, dynamic environment with evolving technical challenges.
  • Design, implement, and optimize distributed systems and infrastructure components to support large-scale machine learning workflows, including data ingestion, feature engineering, model training, and serving.
  • Develop and maintain frameworks, libraries, and tools that streamline the end-to-end machine learning lifecycle, from data preparation and experimentation to model deployment and monitoring.
  • Architect and implement highly available, fault-tolerant, and secure systems that meet the performance and scalability requirements of production machine learning workloads.
  • Collaborate with machine learning researchers and data scientists to understand their requirements and translate them into scalable and efficient software solutions.
  • Stay current with advancements in machine learning infrastructure, distributed computing, and cloud technologies, integrating them into our platform to drive innovation.
  • Mentor junior engineers, conduct code reviews, and uphold engineering best practices to ensure the delivery of high-quality software solutions.
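As a loose sketch of the train-then-serve lifecycle these responsibilities describe, the example below separates an offline training step from an online scoring step; scikit-learn and joblib are illustrative stand-ins, since the posting does not name a stack.

```python
# Minimal train/persist/serve sketch; scikit-learn and joblib are illustrative stand-ins.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

MODEL_PATH = "model.joblib"  # in production this would live in an artifact store

def train() -> None:
    """Offline step: fit on (toy) features and persist the model artifact."""
    X = np.array([[0.1, 1.0], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]])
    y = np.array([0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)
    joblib.dump(model, MODEL_PATH)

def serve(features: list[float]) -> int:
    """Online step: load the persisted artifact and score one request."""
    model = joblib.load(MODEL_PATH)  # real services cache this instead of reloading per call
    return int(model.predict(np.array([features]))[0])

if __name__ == "__main__":
    train()
    print(serve([0.85, 0.15]))
```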

AWS, Docker, Python, Cloud Computing, Kubernetes, Machine Learning, Algorithms, Data engineering, Data science, CI/CD, RESTful APIs, Scala, Software Engineering

Posted about 4 hours ago
Apply
🔥 Data Analyst - Finance

πŸ“ USA

🧭 Full-Time

πŸ” Fintech

🏒 Company: Comun

  • Expert-level SQL knowledge with demonstrated ability to optimize complex queries (non-negotiable)
  • 3+ years of practical experience in data engineering or analytics roles working with financial data
  • 3+ years of experience in a similar role at a fintech or financial services company in the payments or lending space
  • Solid understanding of finance concepts and principles
  • Proven track record building data pipelines and ETL processes
  • Experience implementing cost modeling and optimization analytics
  • Problem-solving mindset with strong analytical skills
  • Excellent communication skills to explain complex technical and financial concepts
  • Design and implement scalable data pipelines to establish and maintain a solid fund flow process
  • Automate financial reconciliation processes and generate actionable reports
  • Develop and maintain revenue and cost models to identify growth opportunities and provide insights for strategic decision-making
  • Build analytical tools to identify and quantify cost optimization opportunities across the organization
  • Monitor vendor performance metrics and evaluate new vendor opportunities
  • Implement data solutions to detect financial anomalies and uncover efficiency opportunities that drive business value
  • Perform cohort-level performance analysis to develop a deeper understanding of customer unit economics
  • Collaborate with finance, data, growth, product, and engineering teams to develop robust financial data architecture
  • Contribute to our mission of financial inclusion by enabling data-informed product and pricing decisions
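Purely as a sketch of the reconciliation automation listed above, the following pandas comparison flags records missing from either the internal ledger or the processor settlement file, plus amount mismatches; the table names and columns are invented for illustration.

```python
import pandas as pd

# Hypothetical extracts; in practice these would come from the warehouse and processor reports.
ledger = pd.DataFrame({
    "txn_id": ["t1", "t2", "t3", "t4"],
    "amount": [100.00, 42.50, 19.99, 5.00],
})
settlements = pd.DataFrame({
    "txn_id": ["t1", "t2", "t4", "t5"],
    "amount": [100.00, 42.50, 5.25, 7.00],
})

# Outer join so records missing from either side are visible.
merged = ledger.merge(settlements, on="txn_id", how="outer",
                      suffixes=("_ledger", "_settled"), indicator=True)

missing_in_settlement = merged[merged["_merge"] == "left_only"]
missing_in_ledger = merged[merged["_merge"] == "right_only"]
both = merged[merged["_merge"] == "both"]
amount_mismatch = both[(both["amount_ledger"] - both["amount_settled"]).abs() > 0.01]

print("Missing in settlement:", missing_in_settlement["txn_id"].tolist())
print("Missing in ledger:", missing_in_ledger["txn_id"].tolist())
print("Amount mismatches:", amount_mismatch["txn_id"].tolist())
```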

AWS, PostgreSQL, Python, SQL, Data Analysis, ETL, Snowflake, Data engineering, FastAPI, Financial analysis, Data modeling, Finance

Posted about 4 hours ago
Apply
🔥 Data Engineer (Contract)

πŸ“ LatAm

🧭 Contract

🏢 Company: Able (Rental, Property Management, Real Estate)

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
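To make the lakehouse responsibilities above concrete, here is a minimal PySpark plus Delta Lake upsert sketch; it assumes the delta-spark package and a local path, and the table schema is illustrative rather than anything from the actual engagement.

```python
# Minimal PySpark + Delta Lake upsert sketch (assumes the delta-spark package is installed;
# the table path and schema are illustrative only).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-upsert-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/listings_delta"  # hypothetical location (S3/ADLS in a real pipeline)

# Initial load creates the Delta table.
spark.createDataFrame(
    [(1, "available"), (2, "rented")], ["listing_id", "status"]
).write.format("delta").mode("overwrite").save(path)

# Incremental batch: one update and one new record.
updates = spark.createDataFrame(
    [(2, "available"), (3, "rented")], ["listing_id", "status"]
)

# MERGE gives idempotent upserts backed by Delta's ACID guarantees.
(
    DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.listing_id = u.listing_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

spark.read.format("delta").load(path).show()
```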

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data engineering, Scala, Data modeling

Posted about 6 hours ago
Apply

πŸ“ USA

πŸ” SaaS

🏒 Company: DevRevπŸ‘₯ 251-500πŸ’° $100,825,173 Series A 10 months agoDeveloper PlatformCustomer ServiceCRMArtificial Intelligence (AI)Developer APIsSoftware

  • 3+ years in software development, AI/ML engineering, or technical consulting.
  • Strong proficiency in Python and/or Golang.
  • Familiarity with large language models (LLMs), prompt engineering, and frameworks like RAG and function calling.
  • Hands-on experience with AWS, GCP, or Azure, and modern DevOps practices (CI/CD, containers, observability).
  • Design & Deploy AI Agents
  • Integrate Systems
  • Optimize Performance
  • Own Requirements
  • Prototype & Iterate
  • Lead Execution
  • Advise Customers

AWS, Docker, Python, Software Development, Cloud Computing, Data Analysis, GCP, Kubernetes, Machine Learning, Algorithms, API testing, Azure, Data engineering, Communication Skills, CI/CD, Customer service, RESTful APIs, DevOps, SaaS

Posted about 6 hours ago
Apply
🔥 Data Engineer (m/f/d)

πŸ“ Germany

🧭 Full-Time

🏢 Company: Roadsurfer 👥 501-1000 💰 $5,330,478, almost 4 years ago (Leisure, Rental, Tourism, Recreational Vehicles)

  • Experience with Segment, Braze, or similar CDP/CEP platforms
  • Basic knowledge of data transformation tools
  • Familiarity with data governance practices, such as data ownership, naming conventions, and data lineage
  • Experience implementing data privacy measures such as consent tracking and anonymization
  • Familiarity with data quality metrics and monitoring techniques
  • Understanding of data privacy regulations (GDPR, CCPA)
  • Good communication skills, with the ability to work with cross-functional teams and stakeholders
  • Ensure reliability through automated tests, versioned models, and data lineage
  • Assist in implementing data governance policies to ensure data consistency, quality, and integrity across the CDP and CEP platforms
  • Support the automation of data validation and quality checks, including schema validation and data integrity monitoring
  • Help define and track data quality metrics and provide regular insights on data cleanliness and health
  • Assist in ensuring compliance with data privacy regulations (e.g., GDPR, CCPA), including implementing consent tracking and anonymization measures
  • Work with cross-functional teams to standardize data definitions, naming conventions, and ownership practices
  • Help maintain data cleanliness through automated data cleanup processes and identify areas for improvement
  • Support the analytics team by ensuring data is structured correctly for reporting and analysis
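As a tiny, dependency-free sketch of the automated schema and consent checks mentioned above, the snippet below validates assumed CDP event fields in plain Python; the field names are invented and are not Segment's or Braze's actual schema.

```python
# Toy schema/quality check for CDP events (field names are illustrative only).
REQUIRED_FIELDS = {"user_id": str, "event": str, "timestamp": str, "consent_given": bool}

def validate_event(event: dict) -> list[str]:
    """Return a list of data-quality issues found in one event."""
    issues = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            issues.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            issues.append(f"wrong type for {field}: {type(event[field]).__name__}")
    # Privacy guard: events sent without consent must be anonymized or discarded.
    if event.get("consent_given") is False:
        issues.append("no consent: event must be anonymized or discarded")
    return issues

events = [
    {"user_id": "u1", "event": "signup", "timestamp": "2024-05-01T10:00:00Z", "consent_given": True},
    {"user_id": "u2", "event": "booking", "consent_given": False},  # missing timestamp, no consent
]

for e in events:
    print(e.get("user_id"), "->", validate_event(e) or "ok")
```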

SQL, Apache Airflow, ETL, Data engineering, Postgres, RESTful APIs, Compliance, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted about 6 hours ago
Apply

πŸ“ Brazil

🧭 Full-Time

πŸ” Software Development

🏒 Company: Grupo QuintoAndar

  • Solid understanding of the engineering challenges of deploying machine learning systems to production;
  • Solid understanding of systems, software and data engineering best practices;
  • Proficiency with cloud-based services;
  • Proficiency in Python or another major programming language;
  • Have experience building backend systems, event-driven architectures, and REST/HTTP applications;
  • Have experience building solutions with LLMs (RAG, fine-tuning);
  • Have experience building AI agents and AI powered applications, familiarity with agentic frameworks (LangGraph, LangChain, AutoGen, CrewAI);
  • Experience leading teams and managing careers.
  • Lead a small team of Data Scientists and Machine Learning Engineers to build solutions based on AI/ML.
  • Be a technical reference to the team, including doing hands-on engineering work.
  • Shape the technical direction of our products, translating business requirements into solutions.
  • Discuss business requirements with the Product Manager and other stakeholders.
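As a deliberately naive sketch of the retrieval step behind the RAG experience asked for above, the following plain-Python example ranks documents by keyword overlap and assembles a grounded prompt; a real system would use embeddings, a vector store, and an actual LLM client.

```python
# Naive retrieval-augmented-generation skeleton: keyword-overlap retrieval plus prompt assembly.
# Deliberately simplified; production systems use vector embeddings and a real LLM client.
DOCUMENTS = [
    "Rental contracts can be signed digitally through the platform.",
    "Security deposits are returned within 30 days of move-out.",
    "Listings include photos, rent price, and neighborhood information.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to an LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

if __name__ == "__main__":
    q = "When are security deposits returned?"
    print(build_prompt(q, retrieve(q, DOCUMENTS)))
```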

AWS, Backend Development, Docker, Leadership, Project Management, Python, Software Development, SQL, Artificial Intelligence, Cloud Computing, Kubernetes, Machine Learning, Software Architecture, Algorithms, Data engineering, Data science, Data Structures, REST API, Communication Skills, CI/CD, Agile methodologies, RESTful APIs, Excellent communication skills, Problem-solving skills, Team management, Technical support, Data modeling

Posted about 15 hours ago
Apply
🔥 Engineering Manager (Data)

πŸ“ Romania

🧭 Full-Time

πŸ” Software Development

🏒 Company: Plain ConceptsπŸ‘₯ 251-500ConsultingAppsMobile AppsInformation TechnologyMobile

  • At least 3 years of experience as a Delivery Manager, Engineering Manager, or in a similar role on software, data-intensive, or analytics projects.
  • Proven experience managing client relationships and navigating stakeholder expectations.
  • Strong technical background in Data Engineering (e.g., Python, Spark, SQL) and Cloud Data Platforms (e.g., Azure Data Services, AWS, or similar).
  • Solid understanding of scalable software and data architectures, CI/CD practices for data pipelines, and cloud-native data solutions.
  • Experience with data pipelines, sensor integration, edge computing, or real-time analytics is a big plus.
  • Ability to read, write, and discuss technical documentation with confidence.
  • Strong analytical and consultative skills to identify impactful opportunities.
  • Agile mindset, always focused on delivering real value fast.
  • Conflict resolution skills and a proactive approach to identifying and mitigating risks.
  • Understanding the business and technical objectives of data-driven projects.
  • Leading multidisciplinary teams to deliver scalable and robust software and data solutions on time and within budget.
  • Maintaining proactive and transparent communication with clients, helping them understand the impact of data products.
  • Supporting the team during key client interactions and solution presentations.
  • Designing scalable architectures for data ingestion, processing, and analytics.
  • Collaborating with data engineers, analysts, and data scientists to align solutions with client needs.
  • Ensuring the quality and scalability of data solutions and deliverables across cloud environments.
  • Analyzing system performance and recommending improvements using data-driven insights.
  • Providing hands-on technical guidance and mentorship to your team and clients when needed

AWS, Python, SQL, Agile, Cloud Computing, Azure, Data engineering, Spark, Communication Skills, CI/CD, Client relationship management, Team management, Stakeholder management, Data analytics

Posted about 17 hours ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 140000.0 - 160000.0 USD per year

πŸ” Software Development

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • 5+ years of experience building scalable backend applications and APIs.
  • Proficiency in Go, Python, or Java, with a strong grasp of SQL and NoSQL databases (e.g., Bigtable, BigQuery, DynamoDB).
  • Experience working with cloud infrastructure, preferably AWS or GCP, and CI/CD pipelines.
  • Familiarity with containerization technologies such as Docker and Kubernetes.
  • Strong problem-solving and analytical skills, with the ability to communicate complex concepts clearly.
  • Design and implement ETL pipelines capable of processing large-scale datasets efficiently.
  • Build and maintain robust APIs for data retrieval, including support for complex query types.
  • Architect scalable data storage and retrieval systems using SQL/NoSQL technologies.
  • Transform raw data into structured, high-value data products to support business and operational decisions.
  • Collaborate with internal stakeholders to align data architecture with product and customer needs.
  • Document technical processes and mentor junior team members.
  • Ensure performance, security, and scalability across the data platform.

AWS, Backend Development, Docker, Python, SQL, DynamoDB, ETL, GCP, Java, Kubernetes, API testing, Data engineering, Go, NoSQL, CI/CD, RESTful APIs, Data modeling, Software Engineering, Data analytics

Posted about 17 hours ago
Apply
Shown 10 out of 796

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote Data Science Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform makes it easy to find remote IT jobs from home:

  • localized search – filter job listings based on your country of residence;
  • AI-powered job processing – artificial intelligence analyzes thousands of listings, highlighting key details so you don't have to read long descriptions;
  • advanced filters – sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates – we monitor job relevance and remove outdated listings;
  • personalized notifications – get tailored job offers directly via email or Telegram;
  • resume builder – create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security – modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing β€” up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.