Remote Working

Remote working from home provides convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today, fast and easy!

Spark

300 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Director | Data Science

📍 United States

🧭 Full-Time

🔍 Healthcare

🏢 Company: Machinify | 👥 51-100 | 💰 $10,000,000 Series A over 6 years ago | Artificial Intelligence (AI), Business Intelligence, Predictive Analytics, SaaS, Machine Learning, Analytics

  • 10+ years of data science experience, with at least 5 years in a leadership role, including a leadership role at a start-up. Proven track record of managing data teams and delivering complex, high-impact products from concept to deployment
  • Strong knowledge of data privacy regulations and best practices in data security
  • Exceptional team management abilities, with experience in building and leading high-performing teams
  • Ability to think strategically and execute methodically
  • Ability to drive change and inspire a distributed team
  • Strong problem-solving skills and a data-driven mindset
  • Ability to communicate effectively, collaborating with diverse groups to solve complex problems
  • Provide direction and guidance to a team of Senior and Staff Data Scientists, enabling them to do their best work
  • Collaborate with the leadership team to define key technical and business metrics and objectives
  • Translate objectives into internal team priorities and assignments
  • Drive sprints and work with cross-functional stakeholders to appropriately prioritize various initiatives to improve customer metrics
  • Hire, mentor and develop team members
  • Foster a culture of innovation, collaboration, and continuous improvement
  • Communicate technical concepts and strategies to technical and non-technical stakeholders effectively
  • Own the success of various models in the field by continuously monitoring KPIs and initiating projects to improve quality

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Keras, Machine Learning, MLFlow, Numpy, People Management, Cross-functional Team Leadership, Algorithms, Data engineering, Data science, Data Structures, Pandas, Spark, Tensorflow, Communication Skills, Problem Solving, Agile methodologies, RESTful APIs, Mentoring, Data visualization, Team management, Strategic thinking, Data modeling

Posted about 1 hour ago
Apply
🔥 Software Engineer 2

📍 UK

🔍 Cybersecurity

🏢 Company: Abnormal | 👥 501-1000 | 💰 $250,000,000 Series D 10 months ago | Artificial Intelligence (AI), Email, Information Technology, Cyber Security, Network Security

  • Streaming data systems: using Kafka, Spark, MapReduce, or similar to process large data sets
  • Experience building and operating distributed systems and services at high scale (~billions of transactions each day)
  • Working with external party APIs
  • 3-5 years of overall software engineering experience
  • Strong sense of best practices in developing software
  • Build out streaming infrastructure for our data integration platform
  • Be able to capture data from Slack, Teams, and other streaming data platforms for processing within our Data Ingestion Platform (DIP)
  • Work to integrate customers into the new streaming infrastructure, migrating from the older polling model where necessary
  • Work with Product Managers, Designers & Account TakeOver (ATO) detection team on product requirements and frontend implementation
  • Partner with our ATO Detection team
  • Understand the workflows and processes of the ATO Detection team. Be an effective liaison between ATO Infrastructure and ATO Detection, representing ATO Detection team needs and converting those needs into ATO Infrastructure team deliverables
  • Help build our group through excellent interview practices
  • Be a talent magnet - someone who through the interview process demonstrates their own strengths in a way that attracts candidates to Abnormal and to the ATO team and ensures that we close the candidates we want to close

Backend Development, Python, Software Development, Cybersecurity, Apache Kafka, API testing, Spark, Communication Skills, CI/CD, RESTful APIs, DevOps, Microservices, Software Engineering

Posted about 20 hours ago
Apply

πŸ“ Romania

🧭 Full-Time

πŸ” Software Development

🏒 Company: Plain ConceptsπŸ‘₯ 251-500ConsultingAppsMobile AppsInformation TechnologyMobile

  • At least 3 years of experience as a Delivery Manager, Engineering Manager, or similar role in software, data-intensive or analytics projects.
  • Proven experience managing client relationships and navigating stakeholder expectations.
  • Strong technical background in Data Engineering (e.g., Python, Spark, SQL) and Cloud Data Platforms (e.g., Azure Data Services, AWS, or similar).
  • Solid understanding of scalable software and data architectures, CI/CD practices for data pipelines, and cloud-native data solutions.
  • Experience with data pipelines, sensor integration, edge computing, or real-time analytics is a big plus.
  • Ability to read, write, and discuss technical documentation with confidence.
  • Strong analytical and consultative skills to identify impactful opportunities.
  • Agile mindset, always focused on delivering real value fast.
  • Conflict resolution skills and a proactive approach to identifying and mitigating risks.
  • Understanding the business and technical objectives of data-driven projects.
  • Leading multidisciplinary teams to deliver scalable and robust software and data solutions on time and within budget.
  • Maintaining proactive and transparent communication with clients, helping them understand the impact of data products.
  • Supporting the team during key client interactions and solution presentations.
  • Designing scalable architectures for data ingestion, processing, and analytics.
  • Collaborating with data engineers, analysts, and data scientists to align solutions with client needs.
  • Ensuring the quality and scalability of data solutions and deliverables across cloud environments.
  • Analyzing system performance and recommending improvements using data-driven insights.
  • Providing hands-on technical guidance and mentorship to your team and clients when needed.

AWS, Python, SQL, Agile, Cloud Computing, Azure, Data engineering, Spark, Communication Skills, CI/CD, Client relationship management, Team management, Stakeholder management, Data analytics

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 180000.0 - 220000.0 USD per year

πŸ” Software Development

🏒 Company: PreparedπŸ‘₯ 51-100πŸ’° $27,000,000 Series B 8 months agoEnterprise SoftwarePublic Safety

  • 5+ years of experience in data engineering, software engineering with a data focus, data science, or a related role
  • Knowledge of designing data pipelines from a variety of sources (e.g., streaming, flat files, APIs)
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL)
  • Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming, Flink, Pulsar, Redpanda)
  • Strong programming skills in common data-focused languages (e.g., Python, Scala)
  • Experience with data pipeline and workflow management tools (e.g., Apache Airflow, Prefect, Temporal)
  • Familiarity with AWS-based data solutions
  • Strong understanding of data warehousing concepts and technologies (Snowflake)
  • Experience documenting data dependency maps and data lineage
  • Strong communication and collaboration skills
  • Ability to work independently and take initiative
  • Proficiency in containerization and orchestration tools (e.g., Docker, Kubernetes)
  • Design, implement, and maintain scalable data pipelines and infrastructure
  • Collaborate with software engineers, product managers, customer success managers, and others across the business to understand data requirements
  • Optimize and manage our data storage solutions
  • Ensure data quality, reliability, and security across the data lifecycle
  • Develop and maintain ETL processes and frameworks
  • Work with stakeholders to define data availability SLAs
  • Create and manage data models to support business intelligence and analytics

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kubernetes, Snowflake, Apache Kafka, Data engineering, Spark, Scala, Data modeling

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

  • 7+ years of experience related to Java development (Kotlin preferred) in addition to data engineering and modeling complex data
  • Strong experience in SQL, data modeling, and manipulating and extracting large data sets.
  • Hands-on experience working with data warehouse technologies.
  • Experience building high-quality APIs and working with microservices (Spring Boot, REST).
  • Experience with cloud infrastructure and containerization (Docker, Kubernetes).
  • Proficiency with Git, CI/CD pipelines, and build tools (Gradle preferred).
  • Work with your engineering squad to design and build a robust platform that will handle terabytes of real-time and batch data flowing through internal and external systems.
  • Build high volume and low latency services that are reliable at scale.
  • Create and manage ETL/ELT workflows that transform our billions of raw data points daily into quickly accessible information across our databases and data warehouses
  • Develop big data solutions using commercial and open-source frameworks.
  • Collaborate with and explain complex technical issues to your technical peers and non-technical stakeholders.

Backend Development, Docker, SQL, Cloud Computing, Design Patterns, ETL, Git, Java, Kafka, Kotlin, Kubernetes, Spring Boot, Algorithms, API testing, Data engineering, Data Structures, REST API, Spark, CI/CD, RESTful APIs, Microservices, Data modeling

Posted 2 days ago
Apply

πŸ“ Texas, Denver, CO

πŸ’Έ 148000.0 - 189000.0 USD per year

πŸ” SaaS

🏒 Company: Branch Metrics

  • 4+ years of relevant experience in data science, analytics, or related fields.
  • Degree in Statistics, Mathematics, Computer Science, or related field.
  • Proficiency with Python, SQL, Spark, Bazel, CLI (Bash/Zsh).
  • Expertise in Spark, Presto, Airflow, Docker, Kafka, Jupyter.
  • Strong knowledge of ML frameworks (scikit-learn, pandas, xgboost, lightgbm).
  • Experience deploying models to production on AWS infrastructure and familiarity with core AWS services.
  • Advanced statistical knowledge (regression, A/B testing, Multi-Armed Bandits, time-series anomaly detection).
  • Collaborate with stakeholders to identify data-driven business opportunities.
  • Perform data mining, analytics, and predictive modeling to optimize business outcomes.
  • Conduct extensive research and evaluate innovative approaches for new product initiatives.
  • Develop, deploy, and monitor custom models and algorithms.
  • Deliver end-to-end production-ready solutions through close collaboration with engineering and product teams.
  • Identify opportunities to measure and monitor key performance metrics, assessing the effectiveness of existing ML-based products.
  • Serve as a cross-functional advisor, proposing innovative solutions and guiding product and engineering teams toward the best approaches.
  • Anticipate and clearly articulate potential risks in ML-driven products.
  • Effectively integrate solutions into existing engineering infrastructure.

AWS, Docker, Python, SQL, Bash, Kafka, Machine Learning, Airflow, Regression testing, Pandas, Spark, RESTful APIs, Time Management, A/B testing

Posted 2 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 211536.0 - 287100.0 USD per year

πŸ” Software Development

🏒 Company: jobs

  • SQL and Python programming to query and validate the accuracy of datasets
  • Design and develop workflow orchestration tools
  • Python scripting to develop statistical and machine learning models for classification
  • Use agile software development principles to design, plan, and structure the deployment of software products
  • Develop machine learning models to segment customer behavior, identify market concentration and volatility using Python and Spark ML
  • Building KPIs (Key Performance Indicators) and metrics, validating using statistical hypothesis testing
  • Expertise in Cloud Computing resources and maintaining data on cloud storage
  • Big Data processing for data cleaning
  • Deploy self-service data visualization tools, automating report generation and consolidating results visually in Tableau dashboards
  • Develop data engineering pipelines and transformations
  • Lead, build and implement analytics functions for Honey features
  • Conduct impactful data analysis to improve customer experiences and inform product development
  • Collaborate with cross-functional support teams to build world-class products and design hypothesis-driven experiments
  • Gather and collate business performance and metrics to recommend improvements, automation, and data science directives for overall business performance
  • Present findings and recommendations to senior level/non-technical stakeholders
  • Maintain large datasets by performing batch scheduling and pipelining ETL operations
  • Perform ad-hoc exploratory analysis on datasets to generate insights and automate production ready solutions
  • Develop machine learning-based models to improve forecasting and predictive analytics
  • Implement innovative quantitative analyses, test new data wrangling techniques, and experiment with new visualization tools to deliver scalable analytics
  • Develop and apply programming paradigms, utilizing tools and concepts such as Git, data structures, OOP, and network algorithms

Python, SQL, Cloud Computing, Data Analysis, ETL, Git, Machine Learning, Numpy, Tableau, Algorithms, Data engineering, Data Structures, Pandas, Spark, Tensorflow, Agile methodologies, Data visualization, Data modeling

Posted 3 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 160000.0 - 230000.0 USD per year

πŸ” Daily Fantasy Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data engineering pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following: SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data Transformation services: Spark, Dataproc, etc.
  • Scripting languages: SQL, Python, Go.
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Cloud Compute Engine, Cloud Functions, Kubernetes Engine etc.
  • Data Processing and Messaging Systems: Kafka, Pulsar, Flink
  • Code version control: Git
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer.
  • Monitoring and Observability platforms: Prometheus, Grafana, ELK stack, Datadog
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager.
  • Other platform tools such as Redis, FastAPI, and Streamlit.
  • Excellent organizational, communication, presentation, and collaboration skills, with experience working with technical and non-technical teams across the organization
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization.
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching and educational opportunities
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.

Leadership, Python, SQL, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Airflow, Data engineering, Go, Postgres, REST API, Spark, CI/CD, Mentoring, DevOps, Terraform, Data visualization, Data modeling, Scripting

Posted 3 days ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 158000.0 - 239000.0 USD per year

πŸ” Software Development

🏒 Company: 1Password

  • Minimum of 8 years of professional software engineering experience.
  • Minimum of 7 years of technical engineering experience building data processing applications (batch and streaming).
  • In-depth, hands-on experience with extensible data modeling and query optimization, working in Java, Scala, Python, and related technologies.
  • Experience in data modeling across external-facing product insights and business processes, such as revenue/sales operations, finance, and marketing.
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark.
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using Realtime technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge spanning infrastructure hardware and resources, from bare-metal hosts to containers to networking.
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience.
  • Build a data engineering strategy that supports a rapidly growing tech company and aligns with the priorities of our product strategy and internal business organizations' desire to leverage data for competitive advantage.
  • Build scalable data pipelines using best-in-class software engineering practices.
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements.
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security.
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services.
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions.
  • Influence and evangelize a high standard of code quality, system reliability, and performance.

AWS, Python, SQL, ETL, GCP, Java, Kubernetes, MySQL, Snowflake, Algorithms, Apache Kafka, Azure, Data engineering, Data Structures, Postgres, RDBMS, Spark, CI/CD, RESTful APIs, Mentoring, Scala, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Posted 4 days ago
Apply

πŸ“ Ukraine

🧭 Full-Time

πŸ” SaaS

🏒 Company: AdaptiqπŸ‘₯ 51-100ConsultingProfessional ServicesSoftware

  • A Master's or PhD in Computer Science, Physics, Applied Mathematics, or a related field, demonstrating a strong foundation in analytical thinking.
  • At least 5 years of professional experience across the end-to-end machine learning lifecycle (design, development, deployment, and monitoring).
  • At least 5 years of professional experience with Python development, including OOP, writing production-grade code, testing, and optimization.
  • At least 5 years of experience with data mining, statistical analysis, and effective data visualization techniques.
  • Deep familiarity with modern ML/DL methods and frameworks (e.g., PyTorch, XGBoost, scikit-learn, statsmodels).
  • Develop and Optimize Advanced ML Models: Build, improve, and deploy machine learning and statistical models for forecasting demand, analyzing price elasticities, and recommending optimal pricing strategies.
  • Lead End-to-End Data Science Projects: Own your projects fully, from conceptualization and experimentation through production deployment, monitoring, and iterative improvement.
  • Innovate with Generative and Predictive AI Solutions: Leverage state-of-the-art generative and predictive modeling techniques to automate complex pricing scenarios and adapt to rapidly changing market dynamics.

AWS, Python, SQL, Apache Hadoop, Data Analysis, Data Mining, Machine Learning, PyTorch, Spark, CI/CD, Data visualization

Posted 4 days ago
Apply
Showing 10 of 300

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We've developed a well-thought-out service for home job matching, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote job offers, both full-time and part-time, from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day completely free of charge. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming: software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative: graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales: digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring: teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content: creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators): virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting: bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of employment:

  • Full-time: the ideal choice for those who value stability and predictability;
  • Part-time: perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract: suited for professionals who want to work on projects for a set period;
  • Temporary: short-term work that can be either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship: a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • For beginners: ideal positions for those just starting their journey in online work from home;
  • For intermediate specialists: if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • For experts: roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and preferred position.
  3. Receive notifications about new job openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.