Remote Data Science Jobs

Kafka
466 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

📍 Poland, Ukraine, Cyprus

🧭 Full-Time

🔍 Software Development

🏢 Company: Competera 👥 51-100 💰 $3,000,000 Seed about 1 year ago. Industries: Artificial Intelligence (AI), Big Data, E-Commerce, Retail, Machine Learning, Analytics, Retail Technology, Information Technology, Enterprise Software, Software

  • 5+ years of experience in a data engineering role.
  • Strong knowledge of SQL, Spark, Python, Airflow, and binary file formats.
  • Contribute to the development of the new data platform.
  • Collaborate with platform and ML teams to create ETL pipelines that efficiently deliver clean and trustworthy data.
  • Engage in architectural decisions regarding the current and future state of the data platform.
  • Design and optimize data models based on business and engineering needs.

Python, SQL, ETL, Kafka, Airflow, Spark, Data modeling

Posted 2 days ago
Apply

📍 India

🧭 Full-Time

🔍 Software Development

  • 7+ years of applied ML experience.
  • Proficiency in Python, Java, or Golang is preferred.
  • Extensive experience in feature engineering and developing data-driven frameworks that enhance identity matching algorithms.
  • Strong background in the foundations of machine learning and the building blocks of modern deep learning.
  • Deep understanding of machine learning frameworks and libraries such as TensorFlow, PyTorch, or Scikit-learn.
  • Experience with big data technologies like Apache Spark or Hadoop, and familiarity with cloud platforms (AWS, Azure, Google Cloud) for scalable data processing.
  • Familiarity with MLOps concepts related to testing and maintaining models in production, such as retraining and monitoring.
  • Experienced with modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.), with demonstrated experience designing and coding against big-data components such as DynamoDB or similar.
  • Experience working in an agile team environment with changing priorities.
  • Experience working on AWS.
  • Design, implement, and refine machine learning models that improve the precision and recall of identity resolution algorithms.
  • Develop and optimize feature engineering methodologies to extract meaningful patterns from large and complex datasets that enhance identity matching and unification.
  • Develop and maintain scalable data infrastructure to support the deployment and training of machine learning models, ensuring that they run efficiently under varying loads.
  • Build and maintain scalable machine learning solutions in production
  • Train and validate both deep learning-based and statistical-based models considering use-case, complexity, performance, and robustness
  • Demonstrate end-to-end understanding of applications and develop a deep understanding of the “why” behind our models & systems
  • Partner with product managers, tech leads, and stakeholders to analyze business problems, clarify requirements and define the scope of the systems needed
  • Ensure high standards of operational excellence by implementing efficient processes, monitoring system performance, and proactively addressing potential issues.
  • Drive engineering best practices around code reviews, automated testing and monitoring

AWS, Python, DynamoDB, Hadoop, Kafka, Machine Learning, PyTorch, Algorithms, Data engineering, TensorFlow

Posted 2 days ago
Apply

📍 United Kingdom

🏢 Company: careers_gm

  • Proficiency in at least one programming language (e.g., Python, Go, Java) and familiarity with multiple language ecosystems.
  • Solid understanding of operating systems, networking, distributed systems, databases, and storage architectures.
  • Deep understanding of how code runs on underlying hardware, including operating systems, algorithms, and data structures. Ability to optimize or troubleshoot code by understanding its execution and the impact on system resources.
  • Experience handling production incidents, including root cause analysis, mitigation, and working through complex system failures.
  • Strong communication skills, with an ability to explain technical concepts to both engineering and business stakeholders. Commitment to collaborative problem-solving and shared ownership of services.
  • Proven experience in automating manual processes, building deployment pipelines, or managing configuration systems
  • Develop tools and software to automate operational processes, improve system reliability, and reduce manual intervention.
  • Lead, implement, and improve monitoring and observability frameworks, enabling proactive detection and resolution of incidents.
  • Participate in an on-call rotation to diagnose, troubleshoot, and mitigate production incidents, ensuring minimal downtime and swift resolution.
  • Work alongside developers to ensure the quality, scalability, and reliability of our services. Practice shared ownership of services in production, fostering a "You build it, you run it" culture.
  • Manage Service Level Indicators (SLIs), Service Level Objectives (SLOs), and Service Level Agreements (SLAs) to set reliability expectations effectively.
  • Strong understanding of common application reliability patterns, with hands-on experience implementing them.
  • Conduct deep-dive analyses of incidents and collaborate on post-incident reviews to derive learnings and prevent recurrence. Champion a culture of continuous improvement.
  • Evaluate system performance and advocate for optimisations that reduce infrastructure costs while maintaining service reliability.

AWS, Backend Development, Docker, PostgreSQL, Python, SQL, Cloud Computing, GCP, Java, Java EE, Jenkins, Kafka, Kubernetes, Spring Boot, Spring MVC, Zabbix, Algorithms, Azure, Data Structures, Go, Grafana, Java Spring, Prometheus, RDBMS, CI/CD, RESTful APIs, Linux, DevOps, Terraform, Microservices, Networking, Ansible, Scripting, Debugging

Posted 2 days ago
Apply

📍 Argentina, Colombia, Mexico

🧭 Full-Time

💸 4000.0 USD per month

🔍 Software Development

🏢 Company: Workana

  • 5+ years of experience developing with Erlang/OTP.
  • Experience with high-concurrency, low-latency backend systems.
  • Solid knowledge of distributed software architecture.
  • Experience with Redis, Kafka, or RabbitMQ.
  • Integration of protocols such as TCP, UDP, SCTP, WebSockets, and gRPC.
  • Experience with containers (Docker) and Linux environment administration.
  • Knowledge of monitoring and optimizing distributed systems.
  • Ability to solve complex problems in concurrent environments.
  • Design and develop robust, scalable, fault-tolerant backend systems using Erlang/OTP.
  • Apply architectural principles to ensure efficiency, concurrency, and resilience.
  • Design and implement messaging and event mechanisms with Redis, Kafka, and RabbitMQ.
  • Optimize process management in Erlang, ensuring efficient use of resources.
  • Implement architectures based on the actor model, message passing, and event-driven design.
  • Integrate protocols such as WebSockets, TCP, UDP, gRPC, or SCTP.
  • Maintain and improve environments running Docker and Linux.
  • Participate in distributed architectures with Kubernetes (nice to have).
  • Run load tests and performance diagnostics.
  • Collaborate with infrastructure and DevOps teams to improve deployments and availability.
  • Document architectures and technical decisions.

Backend Development, Docker, Erlang, Kafka, Kubernetes, RabbitMQ, Software Architecture, gRPC, Redis, CI/CD, Linux, Microservices

Posted 3 days ago
Apply

📍 United States

🧭 Full-Time

💸 72700.0 - 145400.0 USD per year

🔍 Software Development

🏢 Company: careers

  • 5+ years of proven experience as a full-stack developer, with a focus on performance, scalability, and integration.
  • 5+ years of experience in PHP, .NET Core, and Entity Framework.
  • Extensive experience with the following AWS services and technologies: S3, DynamoDB, OpenSearch, RDS, Kinesis Firehose, AWS Lambda, MQTT, Kafka.
  • Develop, maintain, and enhance our existing web application written primarily in PHP and C# .NET.
  • Design and implement APIs and microservices to support new features and integrations, focusing on scalability and performance.
  • Collaborate with product managers to translate design mockups into responsive and intuitive frontend UIs.
  • Ensure the reliability, security, and scalability of the application by leveraging cloud services such as AWS, Azure, and Google Cloud.
  • Implement best practices for code quality, testing, and deployment automation across both frontend and backend components.
  • Provide technical guidance and mentorship to junior members of the team.

AWS, Backend Development, Docker, PHP, Software Development, SQL, Agile, Cloud Computing, DynamoDB, Frontend Development, Full Stack Development, Git, HTML, CSS, JavaScript, Kafka, C#, API testing, .NET, Angular, React, Communication Skills, CI/CD, Problem Solving, RESTful APIs, Mentoring, Microservices, JSON

Posted 3 days ago
Apply

📍 United States

🧭 Full-Time

🏢 Company: Serv Recruitment Agency

  • Proven CTO or senior tech leadership in a high-growth company (ideally $500M+ valuation), scaling transaction volumes from thousands to tens of thousands per hour.
  • Hands-on experience with trading platforms (preferably futures), strongly preferred, including risk management, order execution, and real-time data systems.
  • Deep expertise in building high-performance, scalable architectures focused on high frequency transactions, high availability, and low latency.
  • Knowledge of C#, our platform’s core language.
  • Advanced knowledge of code architecture for high-transaction systems and API integrations.
  • Experience with cloud architectures (e.g., AWS, GCP, Azure) and containerization (e.g., Kubernetes).
  • Skilled in API design, third-party orchestration, and resolving integration bottlenecks.
  • Track record of recruiting and leading technical teams (engineers, DevOps, etc.), with flexibility to build globally or in the US.
  • Define and execute the technical strategy to scale the global near real-time trading platform, ensuring high availability and low latency.
  • Recruit, expand and lead an in-house digital, AI and technology team (engineers, DevOps, QA, etc.), choosing the structure and location (US or global) to support scaling.
  • Partner with the leadership team to integrate, modernize, and further enhance the security of our platform and tech landscape.
  • Design a scalable, cloud-based architecture using AWS, GCP, or Azure.
  • Implement event-driven architecture with Kafka, AWS or other tools for real-time processing.
  • Optimize microservices architecture, ensuring modularity, security, and performance.
  • Define and enforce API design best practices for high-performance integrations.
  • Implement high-availability and fault-tolerant solutions with multi-region cloud deployments.
  • Leverage containerization (Docker, Kubernetes, etc.) to enhance system orchestration and resilience.
  • Drive serverless computing adoption for event-driven processing.
  • Optimize distributed caching strategies.
  • Enforce real-time data processing.
  • Stress-test the platform, establish SLAs, and implement monitoring for low-latency, high-throughput operations.
  • Articulate the technical strategy to stakeholders, supporting valuation and partnership goals.

AWS, Backend Development, Docker, SQL, Cloud Computing, Kafka, Kubernetes, Software Architecture, C#, RESTful APIs, DevOps, Microservices

Posted 3 days ago
Apply

📍 India

🔍 Software Development

🏢 Company: YipitData 👥 251-500 💰 Debt Financing 10 months ago. Industries: Market Research, Analytics, Data Visualization

  • Bachelor's degree in Computer Science or a related major, and 5+ years of backend experience.
  • Solid computer science foundation and programming skills; familiar with common data structures and algorithms.
  • Excellent command of one of the following languages: Go/Python/C/C++/Java.
  • Familiar with at least one of the following open-source components: MySQL/PostgreSQL/Redis/Kafka/ElasticSearch/Message Queue/NoSQL.
  • Experience architecting and developing large-scale distributed systems.
  • Exposure to cloud infrastructure, such as Kubernetes/Docker and Azure/AWS/GCP.
  • Familiarity with ERP systems.
  • Implement connectors to fetch ERP data.
  • Implement or upgrade backend APIs
  • Take charge of the ERP system’s data storage.
  • Design technical solutions
  • Maintain existing services
  • Work with US/SG/China teams

AWS, Backend Development, Docker, PostgreSQL, Python, Software Development, SQL, Cloud Computing, ElasticSearch, GCP, Java, Kafka, Kubernetes, MySQL, C++, Algorithms, Data engineering, Data Structures, Go, REST API, Redis, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, RESTful APIs, Linux, DevOps, Microservices, Excellent communication skills, Data visualization, Data modeling, Scripting, Data analytics, Data management, Debugging

Posted 3 days ago
Apply
🔥 Sr Software Engineer
Posted 3 days ago

📍 United States

🧭 Full-Time

💸 120000.0 - 160000.0 USD per year

🔍 Software Development

🏢 Company: Lirio 👥 51-100 💰 $3,000,000 Debt Financing over 2 years ago. Industries: Artificial Intelligence (AI), Machine Learning, Information Technology, Health Care

  • 5+ years developing secure, scalable, enterprise systems using Spring Boot and Java
  • Proficiency in at least one other programming language (e.g., TypeScript, Python, or C#), and demonstrated ability to quickly learn new technology stacks and deliver production-ready solutions in a timely manner
  • Strong Kubernetes experience as a developer, deployer and supporter of workloads
  • Knowledge of distributed systems of micro-services running on cloud infrastructure
  • Experience with Reliability Engineering, SRE, custom metrics, and some observability platform
  • System design ability: can break ambiguous problem statement into concrete requirements and craft an architecture and design that satisfies them
  • Desire to innovate, grow, and make a difference in the world by working with modern technology and a great team to achieve worthwhile healthcare goals
  • Design, implement, test and deploy production application software as a strong technical contributor
  • Write exemplary clean and maintainable code with appropriate tests
  • Collaborate with other engineers by sharing knowledge and leading by example in terms of software craftsmanship and modeling a culture of collaboration and respect
  • Review code and design contributions from others, promoting readability and maintainability
  • Support and improve Lirio’s engineering practices including an emphasis on quality and security
  • Document architectural decisions, solution designs, processes, and best practices
  • Contribute to the quality culture including performing test planning and execution
  • Pursue technology and process innovations aimed at resiliency, increased security, developer experience, increased efficiency, and reduced cost
  • Assist in project planning, estimation, story refinement, and internal demos
  • Implement and support build & CI pipeline engineering efforts as needed
  • Provide production system support on a rotating schedule
  • Contribute to and pick up projects built with unfamiliar technologies in a timely and productive manner as needed or required
  • Pursue continuous learning through individual study, online courses, product documentation, and community resources to bring innovation to the technical organization

AWS, Backend Development, SQL, Java, Kafka, Kubernetes, Spring Boot, CI/CD, RESTful APIs, Microservices, Software Engineering

Posted 3 days ago
Apply

📍 United States

🧭 Full-Time

💸 150363.0 - 180870.0 USD per year

🔍 Software Development

🏢 Company: phData 👥 501-1000 💰 $2,499,997 Seed about 7 years ago. Industries: Information Services, Analytics, Information Technology

  • At least a Bachelor's degree or foreign equivalent in Computer Science, Computer Engineering, Electrical and Electronics Engineering, or a closely related technical field, and at least five (5) years of post-bachelor’s, progressive experience writing shell scripts; validating data; and engaging in data wrangling.
  • Experience must include at least three (3) years of experience debugging data; transforming data into Microsoft SQL server; developing processes to import data into HDFS using Sqoop; and using Java, UNIX Shell Scripts, and Python.
  • Experience must also include at least one (1) year of experience developing Hive scripts for data transformation on data lake projects; converting Hive scripts to Pyspark applications; automating in Hadoop; and implementing CI/CD pipelines.
  • Design, develop, test, and implement Big Data technical solutions.
  • Recommend the right technologies and solutions for a given use case, from the application layer to infrastructure.
  • Lead the delivery of compiling and installing database systems, integrating data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures.
  • Drive solution architecture and perform deployments of data pipelines and applications.
  • Author DDL and DML SQL spanning technical stacks.
  • Develop data transformation code and highly complex provisioning pipelines.
  • Ingest data from relational databases.
  • Execute automation strategy.

AWS, Python, SQL, ETL, Hadoop, Java, Kafka, Snowflake, Data engineering, Spark, CI/CD, Linux, Scala

Posted 3 days ago
Apply

📍 North America

🧭 Full-Time

🔍 Advertising

  • Extensive experience with big data processing, ideally at the scale of terabytes or more.
  • Strong technical leadership skills with a proven ability to define and drive long-term engineering strategies.
  • Hybrid expertise in data engineering and software development – not just someone who runs queries, but someone who has built scalable data systems and engineering solutions.
  • Hands-on experience with data warehouse technologies is highly desirable.
  • Bonus: Familiarity with tools like Trino/Presto, Snowflake, and other modern data warehouse platforms.
  • Track record of building and scaling robust data pipelines and systems in production environments.
  • An ability to think strategically, lead technically, and inspire the team toward delivering high-impact, scalable solutions.
  • Architect scalable low-latency backend systems and data pipelines.
  • Lead and mentor a team of talented engineers within the backend distributed systems team
  • Make a positive impact on the team's productivity and growth
  • Promote software development best-practices and conduct rigorous code reviews
  • Rigorously identify and solve technical challenges
  • Conduct interviews to attract and identify potential high performing candidates
  • Balance and prioritize projects to maximize efficiency and ensure company objectives are achieved

AWS, Backend Development, Docker, Leadership, Project Management, Python, Software Development, SQL, Cloud Computing, ElasticSearch, Kafka, Ruby on Rails, Software Architecture, Algorithms, Data engineering, Data Structures, Go, Redis, NoSQL, CI/CD, Problem Solving, RESTful APIs, Mentoring, Microservices, Data visualization, Team management, Data modeling, Data management

Posted 3 days ago
Apply
Shown 10 out of 466

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Remote Data Science Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.