Remote Data Science Jobs

839 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ United States

🧭 Full-Time

πŸ” Information Security

  • 5+ years of experience in security engineering, with a primary focus on SIEM platforms.
  • Hands-on experience with at least two of the following SIEM platforms: Splunk, Microsoft Sentinel, Elastic, Google SecOps, CrowdStrike NG-SIEM, LogScale
  • 2+ years of experience with Cribl or similar observability pipeline tools (e.g., Logstash, Fluentd, Kafka).
  • Strong knowledge of log formats, data normalization, and event correlation.
  • Familiarity with detection engineering, threat modeling, and MITRE ATT&CK framework.
  • Proficiency with scripting (e.g., Python, PowerShell, Bash) and regular expressions.
  • Deep understanding of logging from cloud (AWS, Azure, GCP) and on-prem environments.
  • Architect, implement, and maintain SIEM solutions with a focus on modern platforms
  • Design and manage log ingestion pipelines using tools such as Cribl Stream, Edge, or Search (or similar).
  • Optimize data routing, enrichment, and filtering to improve SIEM efficiency and cost control.
  • Collaborate with cybersecurity, DevOps, and cloud infrastructure teams to integrate log sources and telemetry data.
  • Develop custom parsers, dashboards, correlation rules, and alerting logic for security analytics and threat detection.
  • Maintain and enhance system reliability, scalability, and performance of logging infrastructure.
  • Provide expertise and guidance on log normalization, storage strategy, and data retention policies.
  • Lead incident response investigations and assist with root cause analysis leveraging SIEM insights.
  • Mentor junior engineers and contribute to strategic security monitoring initiatives.

AWS, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, API testing, Azure, Data engineering, CI/CD, RESTful APIs, Linux, DevOps, JSON, Ansible, Scripting

Posted about 2 hours ago
Apply
🔥 Data Engineer (Contract)
Posted about 5 hours ago

πŸ“ LatAm

🧭 Contract

🏒 Company: AbleRentalProperty ManagementReal Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
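The schema evolution and backfilling strategies listed above can be illustrated without a Spark cluster. The sketch below mimics, in plain Python, the merge-on-write behavior that Delta Lake exposes via its mergeSchema option; the column names and types are hypothetical:

```python
# Engine-free sketch of "schema merge on write": later batches may add
# columns, and historical rows are backfilled with NULLs for them.

def merge_schemas(*schemas: dict) -> dict:
    """Union column->type mappings across batch schemas."""
    merged: dict = {}
    for schema in schemas:
        for column, dtype in schema.items():
            if column in merged and merged[column] != dtype:
                # Delta Lake likewise rejects incompatible type changes
                raise TypeError(f"type conflict on {column!r}: {merged[column]} vs {dtype}")
            merged.setdefault(column, dtype)
    return merged

def backfill(rows: list[dict], target_schema: dict) -> list[dict]:
    """Pad historical rows with None for columns added later."""
    return [{col: row.get(col) for col in target_schema} for row in rows]

v1 = {"order_id": "bigint", "amount": "double"}
v2 = {"order_id": "bigint", "amount": "double", "currency": "string"}
target = merge_schemas(v1, v2)
old_rows = [{"order_id": 1, "amount": 9.5}]
print(backfill(old_rows, target))  # [{'order_id': 1, 'amount': 9.5, 'currency': None}]
```

The equivalent in Databricks would be a write with `.option("mergeSchema", "true")` plus a backfill job over older partitions; the point of the sketch is the contract, not the engine.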

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data engineering, Scala, Data modeling

Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 216000.0 - 289000.0 USD per year

πŸ” Grocery

  • Expertise in Cloud Infrastructure Security (AWS is a MUST with GCP or Azure strongly desirable)
  • Strong experience in one or more of the following languages: Python, Ruby, Go, or Shell, and a habit of regularly committing code or contributing to open-source projects
  • Experience working with containerized environments and related orchestration techniques (Docker and/or Kubernetes)
  • Experience scaling infrastructure with code or deploying Terraform
  • Functional understanding of distributed systems and service oriented architectures
  • Strong system and networking fundamentals such as TCP/IP, kernel operations, memory and file system management, particularly on Linux platforms
  • Enjoy working collaboratively with internal customers and stakeholders and can navigate security/productivity trade-offs
  • Drive and improve the security posture of our cloud infrastructure across our AWS and GCP environments.
  • Build and deploy tools and services to automate enforcement of security baseline across our cloud infrastructure, including: IAM and configuration management, Container and system security and vulnerability management, PKI and secret management
  • Partner with our incident response team to design and implement detection and response capability on our cloud Infrastructure
  • Work closely with IT to harden and secure our corporate and endpoint infrastructure
  • Provide advisory and consulting service to engineering, product and IT teams to ensure their services are built with security in mind
  • Participate in the team's on-call rotation and help drive critical infrastructure incidents to resolution
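The IAM baseline enforcement described above often starts with static policy checks. Below is a simplified, hypothetical sketch that flags wildcard Allow statements in an AWS IAM policy document; real-world analysis also has to consider conditions, principals, and services like IAM Access Analyzer:

```python
import json

def overly_permissive(policy_doc: str) -> list[dict]:
    """Return Allow statements granting wildcard actions or resources.

    A deliberately simplified baseline check, not a full policy analyzer.
    """
    findings = []
    for stmt in json.loads(policy_doc).get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list in IAM policy JSON
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
})
print(len(overly_permissive(policy)))  # 1
```

Checks like this typically run in CI against Terraform plans or on a schedule against live accounts, feeding findings into the same SIEM pipeline the detection team uses.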

AWS, Docker, Python, Bash, Cloud Computing, Cybersecurity, GCP, Kubernetes, CI/CD, RESTful APIs, Linux, DevOps, Terraform, Scripting

Posted about 6 hours ago
Apply

πŸ“ United States of America

  • 10+ years in Revenue Operations, Sales Operations, Sales Finance, or a combination of all three
  • 10+ years of experience in software sales, specifically subscription or SaaS offerings
  • Expertise in Sales Forecasting tools and Process
  • Salesforce CRM experience
  • Experience with Pipeline Analytics and pipeline goal setting
  • Experience with reporting tools such as Tableau, Power BI, or similar
  • Experience in Go-to-market Strategy, compensation, territory design
  • Represent our Commercial Accounts Sales team across the Revenue Operations team to align/design process/content specific to the Commercial Accounts
  • Involved in Quota Setting and Territory Design
  • Facilitate Sales Forecasting and driving an Operating Cadence
  • Establish Pipeline analytics working closely with the Reporting Team
  • Establish Sales training working closely with the Enablement Team
  • Partner with Sales Leaders on the annual planning process to include Go-To-Market strategy, Investment opportunities, and Sales Compensation
  • Represent Sales in cross functional projects
  • Represent Sales in CPQ design
  • Actively involved in roll out of new tools

GCP, Salesforce, Tableau, CRM

Posted about 6 hours ago
Apply

πŸ“ USA

πŸ” SaaS

🏒 Company: DevRevπŸ‘₯ 251-500πŸ’° $100,825,173 Series A 10 months agoDeveloper PlatformCustomer ServiceCRMArtificial Intelligence (AI)Developer APIsSoftware

  • 3+ years in software development, AI/ML engineering, or technical consulting.
  • Strong proficiency in Python and/or Golang.
  • Familiarity with large language models (LLMs), prompt engineering, and frameworks like RAG and function calling.
  • Hands-on experience with AWS, GCP, or Azure, and modern DevOps practices (CI/CD, containers, observability).
  • Design & Deploy AI Agents
  • Integrate Systems
  • Optimize Performance
  • Own Requirements
  • Prototype & Iterate
  • Lead Execution
  • Advise Customers
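The function-calling framework named in the requirements boils down to a dispatch loop: the model returns a structured tool call, and the agent runtime executes it. A minimal sketch with a hard-coded model reply and a hypothetical ticket-lookup tool (a real agent would receive the JSON from an LLM API and would call actual backend services):

```python
import json

def get_ticket_status(ticket_id: str) -> str:
    # Hypothetical backend lookup; a real agent would query a CRM/ticketing API.
    return f"Ticket {ticket_id} is open"

# Registry of tools the agent is allowed to invoke
TOOLS = {"get_ticket_status": get_ticket_status}

def dispatch(model_reply: str) -> str:
    """Parse a tool-call JSON object and invoke the matching function."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]  # raise on unknown tools rather than guess
    return fn(**call["arguments"])

# Stand-in for a structured reply from the model
reply = json.dumps({"name": "get_ticket_status", "arguments": {"ticket_id": "TKT-7"}})
print(dispatch(reply))  # Ticket TKT-7 is open
```

Retrieval-augmented generation (RAG) fits into the same loop as just another tool: a retrieval function whose results are appended to the next model prompt.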

AWS, Docker, Python, Software Development, Cloud Computing, Data Analysis, GCP, Kubernetes, Machine Learning, Algorithms, API testing, Azure, Data engineering, Communication Skills, CI/CD, Customer service, RESTful APIs, DevOps, SaaS

Posted about 6 hours ago
Apply
🔥 Systems Engineer
Posted about 6 hours ago

πŸ“ North America

🧭 Full-Time

πŸ” IT, cybersecurity

🏒 Company: Axlora

  • 1-3 years of experience within the IT, cybersecurity, compliance, devops/software engineering/site reliability or related field
  • Bachelor's degree in Computer Science, Computer Engineering, IT, Systems Engineering, Cybersecurity, or a related field, OR demonstrated technical competence and interest in rapidly learning new technologies
  • Implement technical controls to meet compliance standards such as SOC 2, ISO 27001, GDPR, and NIST 800-171 across customer tech stacks using technology such as Vanta, AWS, GCP, Azure, Addigy, Huntress, Google Workspace, Tailscale, etc.
  • Be organized using our project management systems to ensure customers meet timelines
  • Solve ad-hoc customer requests in a quick and competent manner
  • Communicate with customers via Slack in a way that is succinct, friendly, and authoritative, and be proactive about taking work off their plates

AWS, Project Management, Cloud Computing, Cybersecurity, GCP, Azure, Communication Skills, Problem Solving, Customer service, DevOps, Compliance, Scripting

Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Buzz Solutions

  • 8+ years of industry experience with modern systems development, ideally end to end pipelines and applications development
  • Track record of shipping complex backend features end-to-end
  • Ability to translate customer requirements into technical solutions
  • Strong programming and computer science fundamentals and quality standards
  • Experience with Python and modern web frameworks (FastAPI) and Pydantic
  • Experience designing, implementing, and debugging web technologies and server architecture
  • Experience with modern python packaging and distribution (uv, poetry)
  • Deep understanding of distributed systems and scalable architecture
  • Experience building reusable, modular systems that enable rapid development and easy modification
  • Strong experience with data storage systems (PostgreSQL, Redis, BigQuery, MongoDB)
  • Expertise with queuing/streaming systems (RabbitMQ, Kafka, SQS)
  • Expertise with workflow orchestration frameworks (Celery, Temporal, Airflow) and DAG-based processing
  • Proficiency in utilizing and maintaining cloud infrastructure services (Google Cloud/AWS/Azure)
  • Experience with Kubernetes for container orchestration and deployment
  • Solid grasp of system design patterns and tradeoffs
  • Experience and in-depth understanding of AI/ML systems integration
  • Deep understanding of the ML lifecycle
  • Experience with big data technologies and data pipeline development
  • Experience containerizing and deploying ML applications (Docker) for training and inference workloads
  • Experience with real-time streaming and batch processing systems for ML model workflows
  • Experience with vector databases and search systems for similarity search and embeddings
  • Partner closely with engineering (software, data, and machine learning), product, and design leadership to define product-led growth strategy with an ownership-driven approach
  • Establish best practices, frameworks, and repeatable processes to measure the impact of every feature shipped, taking initiative to identify and solve problems proactively
  • Make effective tradeoffs considering business priorities, user experience, and sustainable technical foundation with a startup mindset focused on rapid iteration and results
  • Develop and lead team execution against both short-term and long-term roadmaps, demonstrating self-starter qualities and end-to-end accountability
  • Mentor and grow team members to be successful contributors while fostering an ownership culture and entrepreneurial thinking
  • Build and maintain backend systems and data pipelines for AI-based software platforms, integrating SQL/NoSQL databases and collaborating with engineering teams to enhance performance
  • Design, deploy, and optimize cloud infrastructure on Google Cloud Platform, including Kubernetes clusters, virtual machines, and cost-effective scalable architecture
  • Implement comprehensive MLOps workflows including model registry, deployment pipelines, monitoring systems for model drift, and CI/CD automation for ML-based backend services
  • Establish robust testing, monitoring, and security frameworks including unit/stress testing, vulnerability assessments, and customer usage analytics
  • Drive technical excellence through documentation, code reviews, standardized practices, and strategic technology stack recommendations
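The DAG-based processing that the orchestration frameworks above (Airflow, Temporal, Celery) provide reduces to running tasks in topological order: a task starts only after all of its upstream dependencies complete. A minimal sketch using Python's standard-library graphlib; the extract/transform/load task names are illustrative:

```python
from graphlib import TopologicalSorter

def run_pipeline(dag: dict[str, set[str]], tasks: dict) -> list[str]:
    """Execute task callables in dependency order; return the order used."""
    # graphlib maps each node to its set of predecessors (upstream tasks)
    order = list(TopologicalSorter(dag).static_order())
    for name in order:
        tasks[name]()
    return order

executed = []
tasks = {name: (lambda n=name: executed.append(n))
         for name in ("extract", "transform", "load")}
# "transform" depends on "extract"; "load" depends on "transform"
dag = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(dag, tasks))  # ['extract', 'transform', 'load']
```

Real orchestrators add what this sketch omits (retries, scheduling, parallel execution of independent branches, persisted state), but the dependency model is the same one you declare in an Airflow DAG file.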

AWS, Backend Development, Docker, PostgreSQL, Python, SQL, Cloud Computing, GCP, Kafka, Kubernetes, Machine Learning, MLFlow, MongoDB, RabbitMQ, Airflow, FastAPI, Redis, NoSQL, CI/CD, RESTful APIs, Microservices

Posted about 7 hours ago
Apply

πŸ“ United Kingdom

🧭 Full-Time

πŸ” Mental Health

🏒 Company: KoothπŸ‘₯ 101-250Mental HealthWellnessHealth Care

  • 3+ years of hands-on experience in full-stack software development in a product-oriented environment.
  • Proficient in TypeScript, Node.js, and either React or React Native (or able to ramp up quickly with a solid engineering foundation), as well as database technologies such as PostgreSQL and experience in delivering a web or native mobile application.
  • Deep enthusiasm for full-stack software engineering with strong problem-solving capabilities.
  • Solid understanding of modern system architecture and the ability to contribute to its evolution.
  • Commitment to quality, with experience shipping maintainable, scalable, and well-tested code.
  • Ownership mentality with a focus on pragmatic delivery and continuous improvement.
  • Skilled in agile practices, data-informed decision-making, and building reliability.
  • Excellent communication and collaboration skills, including mentoring and inspiring peers.
  • A team player who values collective success and nurtures a positive, inclusive engineering culture.
  • A proactive approach to solving technical challenges and influencing engineering direction.
  • Focus on pragmatic delivery, able to take ownership appropriately
  • Strong communication skills, builds great colleague relationships across disciplines.
  • Designing and building RESTful Node APIs, React frontends, and/or React Native mobile apps, contributing to system design and architectural evolution.
  • Leading by example in trunk-based development, automated testing, CI/CD, and infrastructure-as-code principles.
  • Taking ownership of performance, resilience, observability, maintainability, security, and accessibility.
  • Building and operating a suite of Node.js backend services, React-based web apps, and React Native mobile experiences that form the backbone of our mental health platform.
  • Taking end-to-end ownership of features, from idea through to production, with a strong sense of accountability and user impact.
  • Improving the systems you work on by applying thoughtful, pragmatic solutions to technical challenges.
  • Actively collaborating across disciplines and mentoring colleagues through pairing, code reviews, and knowledge-sharing.
  • Driving a shared understanding of user needs, commercial priorities, and how technical decisions influence business outcomes.
  • Contributing to and occasionally leading technical discussions and decisions.
  • Staying current with industry best practices in engineering, CI/CD, and architecture.
  • Supporting onboarding and professional growth of junior engineers and new hires.
  • Participating in the out-of-hours on-call rota and improving system reliability and incident response processes.
  • Continuously improving the systems you work on, applying a thoughtful and pragmatic approach to technical tradeoffs.

AWS, Backend Development, Docker, Node.js, PostgreSQL, Agile, Frontend Development, Full Stack Development, GCP, Kubernetes, React Native, TypeScript, CI/CD, RESTful APIs, Linux, Software Engineering

Posted about 7 hours ago
Apply

πŸ“ LATAM

πŸ” Telecommunications

🏒 Company: NearsureπŸ‘₯ 501-1000Staffing AgencyOutsourcingSoftware

  • 5+ Years of experience in Data Architecture, including designing robust data models (e.g., Splunk CIM, star/snowflake schemas) and data governance frameworks tailored to large-scale, high-volume telemetry data (preferably within telecommunications).
  • 3+ Years of hands-on experience in at least two of the following observability platforms: Splunk (including Splunk Search Processing Language (SPL) and the Common Information Model (CIM)), ELK Stack (Elasticsearch, complex Logstash pipelines, Kibana dashboards), and Grafana and Prometheus for implementing observability solutions.
  • 2+ Years of strong scripting skills (e.g., Python, Bash, PowerShell) for automation of telemetry system deployment, configuration management, and operational tasks.
  • Experience designing and implementing observability solutions with audit trails for user/system activities.
  • Familiarity with distributed tracing concepts and tools (e.g., OpenTelemetry, Grafana Tempo, Jaeger).
  • Understanding of telco network architectures (4G/LTE, 5G) and common data sources (CDRs, IPDRs, signaling data).
  • Proven experience designing scalable observability systems in telco or high-throughput environments.
  • Knowledge of security best practices for telemetry platforms (data encryption, RBAC).
  • Experience with cloud environments (AWS, Azure or GCP) and native cloud monitoring tools (CloudWatch, Azure Monitor or Google Operations Suite).
  • Experience designing and implementing data warehousing or data lake solutions for high-volume telemetry data.
  • Design, implement, and manage telemetry systems to enhance telecommunications operations.
  • Develop and optimize telemetry and observability platforms using tools like Splunk, ELK Stack, Grafana, and distributed tracing solutions.
  • Ensure network performance, service assurance, and security through advanced telemetry strategies.
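The distributed-tracing concepts mentioned in the requirements operate on flat span records linked by parent IDs. A small sketch (field names loosely modeled on OpenTelemetry conventions; the span values are hypothetical) that recovers end-to-end latency and parent/child structure from such records:

```python
# Each span records its own ID, its parent's ID (None for the root),
# and start/end timestamps. Values below are illustrative.

def critical_path_ms(spans: list[dict]) -> float:
    """End-to-end request latency, i.e., the duration of the root span."""
    root = next(s for s in spans if s["parent_id"] is None)
    return root["end_ms"] - root["start_ms"]

def children_of(spans: list[dict], span_id: str) -> list[str]:
    """Names of spans directly nested under the given span."""
    return [s["name"] for s in spans if s["parent_id"] == span_id]

spans = [
    {"span_id": "a1", "parent_id": None, "name": "GET /session", "start_ms": 0.0, "end_ms": 42.5},
    {"span_id": "b2", "parent_id": "a1", "name": "auth-check",   "start_ms": 1.0, "end_ms": 9.0},
    {"span_id": "c3", "parent_id": "a1", "name": "db-query",     "start_ms": 9.5, "end_ms": 40.0},
]
print(critical_path_ms(spans), children_of(spans, "a1"))
```

Backends like Grafana Tempo and Jaeger do exactly this reconstruction at scale; the trace and span IDs propagated between services are what let a telco correlate a single subscriber request across signaling, API, and database tiers.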

AWS, Python, Bash, Cloud Computing, GCP, Azure, Grafana, CI/CD, Data modeling

Posted about 16 hours ago
Apply

πŸ“ United States of America

πŸ’Έ 179156.0 - 211501.0 USD per year

πŸ” Biotech

🏒 Company: careers

  • 10+ years of experience in clinical development, clinical operations, or clinical quality assurance within pharmaceutical, biotech, or CRO environments.
  • Demonstrated experience leading large-scale process transformation and change management in a regulated (GxP) environment.
  • Strong knowledge of controlled document management frameworks, strategies and inspection readiness principles.
  • Proven ability to lead cross-functional initiatives, manage complexity, and influence across a matrixed organization.
  • Experienced in vendor operational oversight and working with external experts to bring in industry best practices.
  • Comfortable with ambiguity and building frameworks from the ground up; strong strategic and analytical thinking and problem-solving skills with demonstrated ability to bring structure to vaguely defined problems.
  • Excellent written and verbal communication skills; effective at stakeholder engagement with solid ability to drive decisions and change management.
  • Familiarity with Quality Management System (QMS) principles and digital learning platforms (e.g., LMS, Confluence, knowledge bases).
  • Experience aligning process design with digital platforms that support clinical trial execution (e.g., CTMS, eTMF, workflow automation tools).
  • Background in large-scale organizational transformation, change enablement, or process optimization initiatives.
  • Design and implement a comprehensive documentation framework for clinical trial processes, integrating changes from technology enablement and operating model updates.
  • Contribute to and implement R&D standards for document types including SOPs, work instructions, guidance documents and training content ensuring alignment with regulatory requirements and internal quality expectations with a focus on logical flows and linkages.
  • Solicit and identify operational dependencies impacting documentation design and implementation from deep clinical trial experts to shape process transformation strategies.
  • Direct partner vendors providing technical writing, business process mapping, and change management support. Ensure outputs are aligned with strategic goals and delivered on time.
  • Collaborate with business process owners (BPO), transformation leads, Quality and QMS teams, and learning and development partners to ensure documentation supports process clarity, compliance, and usability.
  • Lead the tracking and reconciliation between future-state and current-state process taxonomy. Develop and implement a systematic approach for tracking and mapping the implementation of process changes on trials newly starting up or migrating to new processes.
  • Partner with training and change management leads to ensure new or updated documentation enables effective process understanding, critical thinking, and behavior change that drive collective ownership of operational excellence across all roles.
  • Ensure all documentation supports GxP compliance and inspection readiness. Maintain a high standard of quality, traceability, and auditability.
  • Take a forward-thinking, "clean slate" approach to design future-state documentation and processes that are user-centric, intuitive, and connected across functions.
  • Establish accountability structures for business process owners to support document lifecycle management, including periodic review and updates to ensure ongoing relevance and compliance. Measure key performance metrics of documentation effectiveness in conjunction with BPOs (e.g. User Readability, Process Compliance, Approach Consistency and Speed to Contribution)

Leadership, Project Management, GCP, Communication Skills, Documentation, Compliance, Training, Cross-functional collaboration, Quality Assurance, Risk Management, Stakeholder management, Strategic thinking, Change Management, Confluence

Posted about 17 hours ago
Apply
Showing 10 of 839

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote Data Science Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search β€” filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don't have to read long descriptions;
  • advanced filters β€” sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates β€” we monitor job relevance and remove outdated listings;
  • personalized notifications β€” get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security β€” modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing β€” up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.