Remote Data Science Jobs

Pandas
188 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ Poland, Estonia, Bulgaria, Serbia, Romania

πŸ” Software Development

🏒 Company: TeramindπŸ‘₯ 51-100Productivity ToolsSecurityCyber SecurityEnterprise SoftwareSoftware

  • 5+ years of hands-on experience in data science experiment design and working with generative AI tools and LLMs (e.g., Gemini, OpenAI, Claude, Llama).
  • Strong statistical analysis skills, including hypothesis testing, regression, clustering, and time-series analysis.
  • Proficiency in Python, including libraries for data analysis (e.g., Pandas, NumPy), machine learning (e.g., scikit-learn), and data visualization (e.g., Matplotlib, Tableau).
  • Proven experience in data science, with a focus on online behavioural analytics, user journey analysis, or clickstream data.
  • Familiarity with Advanced Prompting Strategies, agentic AI frameworks, techniques for retrieval-augmented generation (RAG) and context enrichment using behavioural data.
  • Experience with Knowledge Graphs, SQL and data extraction from large-scale databases or web logs.
  • Excellent problem-solving skills and the ability to translate complex data into clear, actionable insights.
  • Strong communication and collaboration skills for cross-functional teamwork and stakeholder engagement.
  • Advanced degree (Master's or Ph.D.) in Computer Science, Statistics, Mathematics, Behavioural Science, or a related field.
  • Work closely with AI Engineering and Product teams to collect, clean, and preprocess clickstream data from web and mobile applications, ensuring data quality and consistency.
  • Design and implement methods to extract behavioural signals from raw click data, including sessionization, event tracking, and feature engineering.
  • Apply advanced statistical analysis and machine learning techniques to identify user patterns, predict behaviours, and segment audiences.
  • Conduct exploratory data analysis (EDA) to uncover trends, anomalies, and key behavioural drivers.
  • Design and evaluate experiments (such as A/B tests) to assess the impact of changes to user interfaces or features on user behaviour.
  • Collaborate with cross-functional teams (product, UX, marketing) to translate business questions into data-driven analyses and actionable recommendations.
  • Develop and maintain dashboards and visualizations to communicate findings to both technical and non-technical stakeholders.
  • Design, build, and evaluate AI workloads utilising Generative AI models using frameworks such as LangChain, Model Context Protocol, and Hugging Face Transformers, with a focus on integrating and augmenting these models with user behavioural data for enhanced context and personalization.
  • Implement and work with agentic AI frameworks to orchestrate multi-step reasoning and autonomous workflows powered by LLMs, leveraging user clickstream and behavioural data as contextual inputs.
  • Stay updated with the latest developments in behavioural data science, analytics tools, Generative AI, and agentic AI methodologies.
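The sessionization work described in the responsibilities above can be sketched with pandas; the 30-minute inactivity threshold and the toy event table below are assumptions for illustration, not the employer's actual pipeline:

```python
import pandas as pd

# Hypothetical clickstream: one row per click event.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2"],
    "timestamp": pd.to_datetime([
        "2024-01-01 10:00:00",
        "2024-01-01 10:05:00",
        "2024-01-01 11:00:00",  # >30 min gap -> new session
        "2024-01-01 10:00:00",
        "2024-01-01 10:10:00",
    ]),
})

events = events.sort_values(["user_id", "timestamp"])

# A session break is a user's first event or a gap over 30 minutes.
gap = events.groupby("user_id")["timestamp"].diff()
new_session = gap.isna() | (gap > pd.Timedelta(minutes=30))
events["session_id"] = new_session.cumsum()
print(events)
```

The resulting `session_id` column can then feed session-level feature engineering (session length, events per session, and so on).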

AWS · Docker · Python · SQL · Data Analysis · GCP · Kubernetes · Machine Learning · NumPy · Tableau · Behavioral science · Data science · Pandas · Data visualization · A/B testing

Posted about 22 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 193600.0 - 296600.0 USD per year

πŸ” Software Development

🏒 Company: careers_gm

  • 7+ years of experience in machine learning, engineering, data science, or a related field of expertise
  • Knowledge of ISO 8800, ISO 24118, and other applicable industry standards and best practices for autonomous vehicles, aerospace, and/or robotics.
  • Experience setting the strategy for end-to-end (E2E) validation using techniques appropriate for validating AI models.
  • Python, R, Java, PySpark, PyTorch, TensorFlow, Scikit-learn, LangChain, SQL
  • Large Language Models (LLMs), Generative AI, RAG, Deep learning, Reinforcement Learning, Natural Language Processing (NLP), SVM, XGBoost, Random Forest, Decision Trees, Clustering
  • Microsoft Azure (Data Lake, Machine Learning, Databricks)
  • MLflow, Model Monitoring & Versioning, Docker & Kubernetes, GitHub, Jira
  • Tableau, PowerBI, Pandas, NumPy
  • Proven track record providing technical safety leadership in AI/ML and AV development
  • Referencing ISO 8800, ISO 24118 and AV industry best practices, develop the strategy for ensuring safe AI/ML and autonomous system development, deployment and maintenance.
  • Work with software, data science and systems engineering teams to ensure GM safely trains new machine learning models to solve complex business problems.
  • Ensure continuity of safety as we enhance existing machine learning models to increase performance and adapt to our changing business landscape.
  • Set the safety standard for how we prototype, test and deploy new AI solutions, including Generative AI, to solve business problems.
  • Set the strategy for testing and validation of data sets and develop an assurance plan.
  • Set the strategy for how we systematically break down operational design domain components and driving behavior components and how these are validated in aggregate and on a per behavior level.
  • Work with data science, systems engineering and software teams to set the strategy for how we establish safety launch targets across vehicle behaviors and in aggregate.
  • Set up an assurance process to validate that launch targets have been achieved.

Docker · Leadership · Python · SQL · Artificial Intelligence · Cloud Computing · Data Analysis · Java · Java EE · Kubernetes · Machine Learning · Microsoft Azure · MLflow · NumPy · PyTorch · Jira · Tableau · Algorithms · Data science · Pandas · TensorFlow · Communication Skills · Analytical Skills · Collaboration · Agile methodologies · RESTful APIs · Organizational skills · Written communication · Problem-solving skills · Teamwork · Risk Management · Data visualization · Strategic thinking · Data modeling · Debugging

Posted 1 day ago
Apply

πŸ“ Texas, Denver, CO

πŸ’Έ 148000.0 - 189000.0 USD per year

πŸ” SaaS

🏒 Company: Branch Metrics

  • 4+ years of relevant experience in data science, analytics, or related fields.
  • Degree in Statistics, Mathematics, Computer Science, or related field.
  • Proficiency with Python, SQL, Spark, Bazel, CLI (Bash/Zsh).
  • Expertise in Spark, Presto, Airflow, Docker, Kafka, Jupyter.
  • Strong knowledge of ML frameworks (scikit-learn, pandas, xgboost, lightgbm).
  • Experience deploying models to production on AWS infrastructure and familiarity with basic AWS services.
  • Advanced statistical knowledge (regression, A/B testing, Multi-Armed Bandits, time-series anomaly detection).
  • Collaborate with stakeholders to identify data-driven business opportunities.
  • Perform data mining, analytics, and predictive modeling to optimize business outcomes.
  • Conduct extensive research and evaluate innovative approaches for new product initiatives.
  • Develop, deploy, and monitor custom models and algorithms.
  • Deliver end-to-end production-ready solutions through close collaboration with engineering and product teams.
  • Identify opportunities to measure and monitor key performance metrics, assessing the effectiveness of existing ML-based products.
  • Serve as a cross-functional advisor, proposing innovative solutions and guiding product and engineering teams toward the best approaches.
  • Anticipate and clearly articulate potential risks in ML-driven products.
  • Effectively integrate solutions into existing engineering infrastructure.
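The A/B-testing knowledge listed in the requirements above typically comes down to comparisons like a two-proportion z-test; the conversion counts below are invented for illustration:

```python
import math

# Hypothetical A/B test: conversions / visitors per variant.
conv_a, n_a = 120, 2400   # control
conv_b, n_b = 156, 2400   # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b

# Pooled proportion under the null hypothesis of no difference.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF (via erf).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these made-up numbers the uplift is statistically significant at the conventional 0.05 level; in practice, sample sizes should be fixed in advance to avoid peeking bias.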

AWS · Docker · Python · SQL · Bash · Kafka · Machine Learning · Airflow · Regression testing · Pandas · Spark · RESTful APIs · Time Management · A/B testing

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 211536.0 - 287100.0 USD per year

πŸ” Software Development

🏒 Company: jobs

  • SQL and Python programming to query and validate the accuracy of datasets
  • Design and develop workflow orchestration tools
  • Python scripting to develop statistical and machine learning models for classification
  • Use agile software development principles to design, plan, and structure deployment of software products
  • Develop machine learning models to segment customer behavior, identify market concentration and volatility using Python and Spark ML
  • Building KPIs (Key Performance Indicators) and metrics, validating using statistical hypothesis testing
  • Expertise in Cloud Computing resources and maintaining data on cloud storage
  • Big Data processing for data cleaning
  • Deploy self-serving data visualization tools, automating, generating reports and consolidating visually on tableau dashboards
  • Develop data engineering pipelines and transformations
  • Lead, build and implement analytics functions for Honey features
  • Conduct impactful data analysis to improve customer experiences and inform product development
  • Collaborate with cross-functional support teams to build world-class products and design hypothesis-driven experiments
  • Gather and collate business performance and metrics to recommend improvements, automation, and data science directives for overall business performance
  • Present findings and recommendations to senior level/non-technical stakeholders
  • Maintain large datasets by performing batch scheduling and pipelining ETL operations
  • Perform ad-hoc exploratory analysis on datasets to generate insights and automate production ready solutions
  • Develop machine learning-based models to improve forecasting and predictive analytics
  • Implement innovative quantitative analyses, test new data wrangling techniques, and experiment with new visualization tools to deliver scalable analytics
  • Develop programming paradigms utilizing tools like Git, data structures, OOP, and network algorithms

Python · SQL · Cloud Computing · Data Analysis · ETL · Git · Machine Learning · NumPy · Tableau · Algorithms · Data engineering · Data Structures · Pandas · Spark · TensorFlow · Agile methodologies · Data visualization · Data modeling

Posted 2 days ago
Apply

πŸ“ LATAM

🧭 Full-Time

πŸ” Software Development

🏒 Company: NearsureπŸ‘₯ 501-1000Staffing AgencyOutsourcingSoftware

  • 5+ Years of experience working with the Python programming language, with the ability to work across AI, infrastructure, and tooling codebases.
  • 3+ Years of experience working in cloud environments (AWS or GCP).
  • 2+ Years of experience working with scripting languages (e.g., Bash, Go).
  • 1+ Years of experience working with containerization (Docker) and orchestration (Kubernetes), and with CI/CD systems and infrastructure automation tools (e.g., Terraform, Ansible).
  • 1+ Years of practical, hands-on experience with code-generation tools such as Codex, StarCoder, Claude, or GPT family.
  • Solid understanding of the full Software Development Lifecycle (planning, coding, testing, releasing, monitoring).
  • Practical, hands-on experience with large language models (LLMs).
  • Experience prototyping, benchmarking, and deploying AI solutions to improve developer workflows.
  • Strong understanding of software development automation and developer productivity enhancement techniques.
  • Familiarity with prompt engineering and retrieval-augmented generation (RAG) pipelines.
  • Ability to explain complex AI solutions and technical concepts to cross-functional and non-technical teams.
  • Research-driven mindset with curiosity for testing new AI ideas, tools, and approaches.
  • Strong problem-solving skills focused on identifying inefficiencies and applying AI to increase delivery velocity.
  • Advanced English Level is required for this role as you will work with US clients. Effective communication in English is essential to deliver the best solutions to our clients and expand your horizons.
  • Build AI-powered tools to enhance each stage of the SDLC: from code generation and code reviews to testing, documentation, and deployment.
  • Develop internal assistants and copilots to reduce cognitive load and empower developers to move faster with fewer manual steps.
  • Automate repetitive development and operational tasks (e.g., writing IaC, generating release notes, updating documentation).
  • Investigate GenAI solutions (e.g., GitHub Copilot, CodeLlama, GPT-Engineer) to improve CI/CD workflows and eliminate friction in delivery pipelines.
  • Design, benchmark, and deploy AI-driven solutions that reduce lead time, increase deployment frequency, and enhance reliability.
  • Explore custom model fine-tuning and retrieval-augmented generation (RAG) approaches when needed.
  • Prototype features like AI-driven deployment optimizations, cost-aware autoscaling, or predictive incident resolution.
  • Embed AI into DevOps processes to identify bottlenecks, automate root cause analysis, and surface continuous delivery insights.
  • Collaborate with SRE and developer experience teams to evolve our Internal Developer Platform (IDP) with intelligent automation.
  • Partner with engineering, DevOps, and product teams to identify key pain points and deliver impactful AI interventions.
  • Share findings through workshops, whitepapers, and internal demos to scale learnings across the organization.
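The retrieval-augmented generation (RAG) pipelines mentioned in the responsibilities above begin with a retrieval step. A minimal sketch follows using plain term-count cosine similarity; the documents and query are made up, and production pipelines would use embedding models and a vector store instead:

```python
from collections import Counter
import math

# Toy document store; real RAG systems index embeddings, not raw text.
docs = [
    "users who abandon checkout often revisit the pricing page",
    "session length correlates with feature adoption",
    "deployment frequency improved after pipeline automation",
]

def vectorize(text):
    """Bag-of-words term counts as a stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = "why do users abandon checkout"
q_vec = vectorize(query)

# Retrieve the most similar document and prepend it as LLM context.
best = max(docs, key=lambda d: cosine(q_vec, vectorize(d)))
prompt = f"Context: {best}\n\nQuestion: {query}"
print(prompt)
```

The augmented `prompt` would then be passed to an LLM; the "augmentation" is simply grounding the model's answer in retrieved context.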

AWS · Backend Development · Docker · Python · Software Development · Artificial Intelligence · Bash · Cloud Computing · Data Analysis · Flask · Frontend Development · Full Stack Development · GCP · Kubernetes · Machine Learning · NumPy · Algorithms · API testing · Data Structures · FastAPI · Go · WebRTC · Pandas · TensorFlow · CI/CD · RESTful APIs · DevOps · Terraform · JSON · Ansible · Software Engineering · Debugging

Posted 2 days ago
Apply

πŸ“ USA

🧭 Temporary

πŸ” Bioinformatics

🏒 Company: Personalis, Inc

  • Expertise in Python, particularly in using pandas data frames for data wrangling.
  • Familiarity with standard bioinformatics NGS data formats and annotations.
  • Proficiency in Unix/Linux environments and Bash scripting.
  • Understanding of different assays and modes of sequencing (WGS vs. Exome, RNA-seq, Tumor vs. Normal, Somatic vs. Germline, etc.).
  • Understanding of NGS biomarkers related to cancer (e.g. copy numbers, fusions, SNV and INDELs).
  • Work with our existing frameworks to execute custom report generation.
  • Work within a team scrum context to ensure timely delivery across multiple concurrent projects.
  • Perform detailed QC of the generated reports to ensure all of the content meets the customers' specifications prior to delivery.
  • Use established QC tools to systematically test the accuracy of generated custom reports, detecting hidden errors and irregularities in the data.
  • Collaborate with and provide feedback to the Custom Data Report (CDR) team to drive continuous improvement.
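The pandas data-wrangling expertise this role asks for often looks like simple QC gates over variant tables; the column names and thresholds below are illustrative assumptions, not Personalis's actual report schema:

```python
import pandas as pd

# Hypothetical variant call table (SNVs and INDELs) for QC filtering.
variants = pd.DataFrame({
    "gene": ["TP53", "KRAS", "EGFR", "BRAF"],
    "type": ["SNV", "INDEL", "SNV", "SNV"],
    "vaf": [0.32, 0.04, 0.18, 0.006],   # variant allele frequency
    "depth": [812, 95, 640, 1200],      # read depth at the locus
})

# Example QC gate: keep calls with adequate frequency and coverage.
passing = variants[(variants["vaf"] >= 0.01) & (variants["depth"] >= 100)]
print(passing["gene"].tolist())
```

Real report QC would layer many more checks (annotation consistency, assay-specific thresholds), but the boolean-mask idiom above is the core pandas pattern.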

Python · SQL · Bash · Git · Jira · Pandas · Confluence

Posted 2 days ago
Apply

πŸ“ 21 U.S. states

🧭 Full-Time

πŸ’Έ 151500.0 - 215500.0 USD per year

πŸ” Software Development

🏒 Company: UpworkπŸ‘₯ 501-1000πŸ’° over 8 years agoπŸ«‚ Last layoff about 2 years agoMarketplaceFreelanceCopywritingPeer to Peer

  • Strong software engineering background with deep experience in building data collection, transformation, and featurization pipelines at scale.
  • Proficiency in Python, including async programming and concurrency tools, as well as data-centric frameworks such as Pandas, Spark, or Apache Beam.
  • Familiarity with ML model development workflows and infrastructure, including dataset versioning, experiment tracking, and model evaluation.
  • Experience deploying and scaling AI systems in cloud environments such as AWS, GCP, or Azure.
  • Proven success operating in highly ambiguous environments such as research labs, startups, or fast-paced product teams.
  • A track record of working with or alongside high-caliber peers in top engineering teams, research groups, or startup ecosystems.
  • Growth mindset, strong communication skills, and a commitment to inclusive collaboration and continuous learning.
  • Design and implement systems to collect and curate high-quality training datasets for supervised, unsupervised, and reinforcement learning use cases.
  • Build scalable featurization and preprocessing pipelines to transform raw data into structured inputs for AI/ML model development.
  • Partner with ML engineers and researchers to define data requirements and production workflows that support LLM-based agents and autonomous AI systems.
  • Lead the development of infrastructure that enables experimentation, evaluation, and deployment of machine learning models in production environments.
  • Support orchestration and real-time inference pipelines using Python and modern cloud-native tools, ensuring low-latency and high availability.
  • Mentor engineers and foster a high-performance, collaborative engineering culture grounded in technical excellence and curiosity.
  • Drive cross-functional alignment with product, infrastructure, and research stakeholders, ensuring clarity on progress, goals, and architecture.
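The featurization pipelines this role centers on often reduce to per-entity aggregations over raw event logs; a minimal pandas sketch follows, where the event schema and derived features are assumptions for illustration:

```python
import pandas as pd

# Hypothetical raw event log to be transformed into model inputs.
raw = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event": ["view", "purchase", "view"],
    "value": [0.0, 19.99, 0.0],
})

# Aggregate raw events into one feature row per user.
features = raw.groupby("user_id").agg(
    n_events=("event", "size"),
    n_purchases=("event", lambda s: (s == "purchase").sum()),
    total_value=("value", "sum"),
).reset_index()
print(features)
```

At scale the same groupby-aggregate shape would be expressed in Spark or Apache Beam, with the output versioned alongside the datasets it feeds.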

AWS · Docker · Leadership · Python · SQL · Apache Airflow · Cloud Computing · Git · Kubernetes · Machine Learning · Algorithms · Data engineering · Data science · Data Structures · REST API · Pandas · Spark · Communication Skills · CI/CD · Mentoring · Teamwork · Software Engineering

Posted 3 days ago
Apply

πŸ“ Bratislava, Kyiv

πŸ” Software Development

🏒 Company: Altamira.ai

  • Programming: Python, TypeScript
  • Backend: FastAPI, Pydantic, Pandas, PostgreSQL, Redis
  • Frontend: Angular (preferred), React, or Vue
  • Data Analysis: Pandas, NumPy, SciPy
  • Data Visualization: D3.js, Seaborn, Plotly, Matplotlib
  • Security: OWASP best practices
  • DevOps & Cloud:
  • Containerization: Docker, Kubernetes
  • CI/CD pipelines
  • Cloud platforms: DigitalOcean, AWS, or GCP
  • Other: Linux (all code runs in Linux-based environments)
  • Develop and maintain full-stack solutions with a focus on data analysis and visualization.
  • Design and implement interactive dashboards and data exploration tools.
  • Optimize data processing workflows for performance and scalability.
  • Ensure best practices in security, UX, and software development.
  • Collaborate with data scientists, product managers, and engineers to refine insights delivery.

AWS · Backend Development · Docker · PostgreSQL · Python · Data Analysis · Frontend Development · Full Stack Development · GCP · Kubernetes · NumPy · TypeScript · FastAPI · Angular · Redis · Pandas · React · CI/CD · Linux · DevOps · Data visualization

Posted 3 days ago
Apply

πŸ“ USA

πŸ’Έ 175000.0 - 225000.0 USD per year

🏒 Company: Red Cell PartnersπŸ‘₯ 11-50Financial ServicesVenture CapitalFinance

  • ML Systems Expertise: Proven experience in developing, optimizing, and deploying ML systems in production environments.
  • Model Training and Pipeline Mastery: Strong background in building and managing end-to-end training pipelines for ML models.
  • LLM Fine-Tuning: Extensive knowledge and hands-on experience in fine-tuning large language models for specific use cases and optimizing them for targeted outcomes.
  • Framework Proficiency: Skilled in ML frameworks such as TensorFlow, PyTorch, or similar tools used in ML model development.
  • Programming Skills: Proficient in Python with a focus on writing efficient, clean, and maintainable code for ML applications.
  • Clear Communicator: Ability to distill complex ML concepts for both technical and non-technical audiences.
  • Educational Background: Bachelor's or Master's degree in Machine Learning, Computer Science, Data Engineering, or a related field.
  • Impactful ML Solutions: A track record of delivering and implementing machine learning solutions that have successfully driven value in real-world applications.
  • Architect, Build, and Optimize ML Systems: Develop and deploy robust ML models that deliver high-impact results for real-world applications.
  • Training Pipeline Development: Design and implement efficient, scalable pipelines to train and retrain ML models, ensuring they meet business needs.
  • Fine-Tuning Large Language Models (LLMs): Continuously fine-tune LLMs to align with specific enterprise requirements, enhancing accuracy, relevance, and performance.
  • Feedback Systems Design: Implement and refine feedback loops to iteratively improve the effectiveness of ML models over time.
  • Cross-Functional Collaboration: Work closely with product and business teams to understand and translate requirements into ML solutions that provide tangible outcomes.
  • Stay Current with ML Advancements: Keep up with the latest in ML research and best practices, applying insights to our ML infrastructure to ensure it remains at the cutting edge.
  • Mentorship and Knowledge Sharing: Guide and mentor junior team members, fostering a culture of continuous improvement and technical growth.
  • Technical Communication: Clearly and effectively communicate ML methodologies, results, and insights to non-technical stakeholders.

Docker · Python · SQL · Cloud Computing · Data Analysis · Git · Machine Learning · NumPy · PyTorch · Algorithms · Data Structures · Pandas · TensorFlow · Communication Skills · Analytical Skills · Problem Solving · RESTful APIs · Cross-functional collaboration

Posted 3 days ago
Apply

πŸ“ Canada

πŸ’Έ 80000.0 - 110000.0 USD per year

πŸ” E-commerce

🏒 Company: Constructor

  • Proficient in BI tooling (data analysis, building dashboards for engineers and non-technical folks).
  • Proficient at SQL (any variant) and well-versed in exploratory data analysis with Python (pandas & NumPy, data visualization libraries).
  • Nice to have: Practical familiarity with the big data stack (Spark, Presto/Athena, Hive).
  • A minimum of three years of professional experience or relevant academic experience.
  • English on C1 level (CEFR) or higher.
  • Develop analytical tools: Build tools and dashboards for our Account Executives to help them create better demos for prospective customers.
  • Tune our AI models: Work closely with Account Executives to fine-tune our AI models, ensuring they are optimized to deliver the most relevant and impactful results for each prospect's unique needs.
  • Seal the deal: Collaborate with Frontend Engineers and Account Executives to secure deals with prominent e-commerce brands.
  • Improve product: Take initiative and perform data exploration to understand user behaviour, suggest opportunities for improving our recommendation & search systems.
  • Champion team culture: Actively foster our team's sense of community by expanding our roster of Friday team-building games and promoting a culture of empathy, personal growth, and support for others.
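The exploratory data analysis this role requires often starts with a simple grouped aggregate; the dataset and column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical user-behaviour slice: conversion outcome per traffic source.
clicks = pd.DataFrame({
    "source": ["search", "search", "email", "email", "ads"],
    "converted": [1, 0, 1, 1, 0],
})

# Conversion rate by source, highest first.
rates = clicks.groupby("source")["converted"].mean().sort_values(ascending=False)
print(rates)
```

In a BI context, the same aggregate would typically be written in SQL and surfaced on a dashboard for non-technical stakeholders.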

Python · SQL · Business Intelligence · Data Analysis · Data Mining · ETL · NumPy · Tableau · Pandas · Spark · Communication Skills · Data visualization · Data modeling · Data analytics

Posted 3 days ago
Apply
Shown 10 out of 188

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote Data Science Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • Freedom from being tied to an office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search: filter job listings based on your country of residence;
  • AI-powered job processing: artificial intelligence analyzes thousands of listings, highlighting key details so you don't have to read long descriptions;
  • advanced filters: sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates: we monitor job relevance and remove outdated listings;
  • personalized notifications: get tailored job offers directly via email or Telegram;
  • resume builder: create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security: modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing: up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.