Remote Working

Remote working from home provides convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today – fast and easy!

Remote IT Jobs: Spark

191 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply
πŸ”₯ Senior Data Engineer
Posted about 9 hours ago

πŸ“ United States

πŸ’Έ 144000.0 - 180000.0 USD per year

πŸ” Software Development

🏒 Company: HungryrootπŸ‘₯ 101-250πŸ’° $40,000,000 Series C almost 4 years agoArtificial Intelligence (AI)Food and BeverageE-CommerceRetailConsumer GoodsSoftware

  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline
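
The role above centers on building Spark pipelines on the Databricks Platform. As a rough illustration of what such a batch job can look like, here is a minimal PySpark sketch; the S3 paths, column names, and business logic are invented for the example and are not taken from the posting.

```python
# A minimal PySpark batch pipeline: read raw order events, clean them,
# aggregate daily revenue per product, and write the result as Parquet.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_order_rollup").getOrCreate()

raw = (
    spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical source path
    .where(F.col("status") == "completed")              # keep only completed orders
    .dropDuplicates(["order_id"])                        # basic data-quality guard
)

daily_revenue = (
    raw.withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "product_id")
    .agg(
        F.sum("amount_usd").alias("revenue_usd"),
        F.countDistinct("order_id").alias("orders"),
    )
)

# Partitioning by date keeps downstream queries cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"          # hypothetical target path
)
```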

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Apply
πŸ”₯ Solutions Architect
Posted about 9 hours ago

πŸ“ United States, Latin America, India

πŸ” Software Development

  • 8+ years as a hands-on Solutions Architect and/or Data Engineer
  • Programming expertise in Java, Python and/or Scala
  • Core cloud data platforms including Snowflake, AWS, Azure, Databricks and GCP
  • SQL and the ability to write, debug, and optimize SQL queries
  • 4-year Bachelor's degree in Computer Science or a related field
  • Production experience in core data platforms: Snowflake, AWS, Azure, GCP, Hadoop, Databricks
  • Design and implement data solutions
  • Lead and/or mentor other engineers
  • Develop end-to-end technical solutions into production and help ensure performance, security, scalability, and robust data integration
  • Programming expertise in Java, Python and/or Scala
  • Client-facing written and verbal communication skills and experience
  • Create and deliver detailed presentations
  • Detailed solution documentation (e.g., POCs and roadmaps, sequence diagrams, class hierarchies, logical system views, etc.)
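
Several of the requirements above come down to writing, debugging, and optimizing SQL. The short sketch below, which uses Python's built-in sqlite3 module purely for illustration, shows how a query plan changes once an index exists; the table and data are made up.

```python
# A small, self-contained illustration of "write, debug, and optimize SQL":
# SQLite's EXPLAIN QUERY PLAN confirms that adding an index turns a full
# table scan into an index search. Table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

print("before index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(" ", row)   # expect a SCAN over the whole table

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

print("after index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(" ", row)   # expect a SEARCH using idx_orders_customer
```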

AWS, Leadership, Python, SQL, Cloud Computing, ETL, GCP, Java, Snowflake, Azure, Data engineering, REST API, Spark, Presentation skills, Documentation, Client relationship management, Scala, Data visualization, Mentorship, Data modeling

Apply

πŸ“ United States

πŸ’Έ 135000.0 - 155000.0 USD per year

πŸ” Software Development

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration and frameworks like Informatica and Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services.
  • Integrate third-party data sources and APIs into the data ecosystem.
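
The responsibilities above mention ETL built on AWS Glue, Lambda, and S3. Below is a rough sketch of how such pieces are often wired together: a Lambda handler that starts a Glue job when a new object lands in S3. The bucket, job name, and arguments are hypothetical, not details from the posting.

```python
# Sketch of an AWS Lambda handler that reacts to a new object landing in S3
# and starts a Glue ETL job to process it. Bucket, key, and job names are
# hypothetical; IAM permissions and error handling are omitted for brevity.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the new object's location to the Glue job as arguments.
        response = glue.start_job_run(
            JobName="curate-partner-feed",          # hypothetical Glue job
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",
                "--target_path": "s3://example-curated/partner_feed/",
            },
        )
        runs.append(response["JobRunId"])
    return {"started_runs": runs}
```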

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted about 11 hours ago
Apply

πŸ“ United States, Latin America, India

πŸ” Software Development

  • At least 6 years experience as a Machine Learning Engineer, Software Engineer, or Data Engineer
  • 4-year Bachelor's degree in Computer Science or a related field
  • Experience deploying machine learning models in a production setting
  • Expertise in Python, Scala, Java, or another modern programming language
  • The ability to build and operate robust data pipelines using a variety of data sources, programming languages, and toolsets
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Hands-on experience in one or more big data ecosystem products/languages such as Spark, Snowflake, Databricks, etc.
  • Familiarity with multiple data sources (e.g. JMS, Kafka, RDBMS, DWH, MySQL, Oracle, SAP)
  • Systems-level knowledge in network/cloud architecture, operating systems (e.g., Linux), and storage systems (e.g., AWS, Databricks, Cloudera)
  • Production experience in core data technologies (e.g. Spark, HDFS, Snowflake, Databricks, Redshift, & Amazon EMR)
  • Development of APIs and web server applications (e.g. Flask, Django, Spring)
  • Complete software development lifecycle experience, including design, documentation, implementation, testing, and deployment
  • Excellent communication and presentation skills; previous experience working with internal or external customers
  • Design and create environments for data scientists to build models and manipulate data
  • Work within customer systems to extract data and place it within an analytical environment
  • Learn and understand customer technology environments and systems
  • Define the deployment approach and infrastructure for models and be responsible for ensuring that businesses can use the models we develop
  • Demonstrate the business value of data by working with data scientists to manipulate and transform data into actionable insights
  • Reveal the true value of data by working with data scientists to manipulate and transform data into appropriate formats in order to deploy actionable machine learning models
  • Partner with data scientists to ensure solution deployability – at scale, in harmony with existing business systems and pipelines, and such that the solution can be maintained throughout its life cycle
  • Create operational testing strategies, validate and test models in QA, and support implementation, testing, and deployment
  • Ensure the quality of the delivered product
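
Since the role emphasizes deploying machine learning models behind web APIs (Flask is one of the frameworks named above), here is a minimal, illustrative Flask serving sketch; the model file and feature names are placeholders, not details from the posting.

```python
# Minimal model-serving sketch with Flask: load a trained model once at startup
# and expose a /predict endpoint. The model file and feature names are
# hypothetical; real deployments add validation, auth, logging, and monitoring.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = [[payload["feature_a"], payload["feature_b"], payload["feature_c"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```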

AWS, Docker, Python, Software Development, SQL, Cloud Computing, Data Analysis, ETL, Hadoop, Java, Keras, Kubernetes, Machine Learning, MLFlow, Snowflake, Software Architecture, Algorithms, API testing, Data engineering, Data science, REST API, Spark, Tensorflow, Communication Skills, Analytical Skills, CI/CD, Linux, DevOps, Presentation skills, Excellent communication skills, Scala, Data modeling, Debugging

Posted about 14 hours ago
Apply

πŸ“ United States, Latin America, India

πŸ” Software Development

  • 4+ years of experience as a Software Engineer, Data Engineer, or Data Analyst
  • Programming expertise in Java, Python and/or Scala
  • Core cloud data platforms including Snowflake, AWS, Azure, Databricks and GCP
  • SQL and the ability to write, debug, and optimize SQL queries
  • Client-facing written and verbal communication skills and experience
  • 4-year Bachelor's degree in Computer Science or a related field
  • Develop end-to-end technical solutions into production and help ensure performance, security, scalability, and robust data integration.
  • Create and deliver detailed presentations
  • Detailed solution documentation (e.g., POCs and roadmaps, sequence diagrams, class hierarchies, logical system views, etc.)

AWS, Python, SQL, Cloud Computing, Data Analysis, ETL, GCP, Java, Kafka, Snowflake, Airflow, Azure, Data engineering, Spark, Communication Skills, Scala, Data modeling, Software Engineering

Posted about 15 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 240000.0 - 265000.0 USD per year

πŸ” Software Development

🏒 Company: TRM LabsπŸ‘₯ 101-250πŸ’° $70,000,000 Series B over 2 years agoCryptocurrencyComplianceBlockchainBig Data

  • 7+ years of hands-on experience in architecting distributed system architecture, guiding projects from initial ideation through to successful production deployment.
  • Exceptional programming skills in Python, as well as adeptness in SQL or SparkSQL.
  • In-depth experience with data stores such as Iceberg, Trino, BigQuery, StarRocks, and Citus.
  • Proficiency in data pipeline and workflow orchestration tools like Airflow, DBT, etc.
  • Expertise in data processing technologies and streaming workflows including Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure within public cloud platforms, utilizing tools such as Docker, Terraform, Kubernetes, and Datadog.
  • Proven ability in loading, querying, and transforming extensive datasets.
  • Build highly reliable data services to integrate with dozens of blockchains.
  • Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real-time.
  • Design and architect intricate data models for optimal storage and retrieval to support sub-second latency for querying blockchain data.
  • Oversee the deployment and monitoring of large database clusters with an unwavering focus on performance and high availability.
  • Collaborate across departments, partnering with data scientists, backend engineers, and product managers to design and implement novel data models that enhance TRM's products.
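
Workflow orchestration with Airflow is listed among the requirements above. For orientation, a minimal Airflow DAG that chains extract, transform, and load tasks might look like the sketch below; the DAG id, schedule, and task bodies are placeholders rather than anything from the posting.

```python
# Sketch of an Airflow DAG (2.4+ style) that orchestrates a daily
# extract -> transform -> load sequence. Task logic is stubbed out.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull raw data from the source system")

def transform(**context):
    print("clean and reshape the extracted data")

def load(**context):
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="daily_etl_example",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```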

AWS, Docker, Python, SQL, Cloud Computing, ETL, Kafka, Kubernetes, Airflow, Data engineering, Postgres, Spark, Terraform, Data modeling

Posted about 17 hours ago
Apply

πŸ“ MedellΓ­n, Antioquia, Colombia

πŸ” Software Development

🏒 Company: Coactive AIπŸ‘₯ 11-50πŸ’° $30,000,000 Series B 10 months agoArtificial Intelligence (AI)Big DataMachine LearningInformation Technology

  • 5+ years of extensive experience in cloud infrastructure, particularly with AWS, and knowledge of GCP and Azure.
  • Hands-on expertise in Kubernetes, including deployment, scaling, and management, as well as related tools like Envoy and Keda.
  • Proficiency with data platforms and tools, including Kafka, Spark, and Databricks.
  • Strong Python programming skills for automation and infrastructure management.
  • Solid understanding of networking concepts and experience setting up and managing network infrastructure.
  • Experience with monitoring and logging tools, such as Datadog, to ensure observability and system reliability.
  • Proven experience with databases like MongoDB, Postgres, and Redis.
  • Ability to work independently in a fast-paced startup environment and solve complex problems with minimal guidance.
  • Design, build, and maintain highly scalable and reliable infrastructure on cloud platforms like AWS, GCP, and Azure.
  • Develop and optimize CI/CD pipelines to streamline deployment processes for microservices and data pipelines.
  • Manage and enhance real-time data workflows using technologies such as Kafka, Spark, and Databricks.
  • Implement robust monitoring and logging solutions to ensure system health and performance using tools like Datadog.
  • Establish and enforce cloud security best practices to ensure the integrity and safety of data and applications.
  • Automate infrastructure using tools like Terraform and orchestrate workloads with Kubernetes and related technologies.
  • Collaborate with ML and data engineering teams to create a seamless integration between infrastructure and data models.
  • Ensure efficient network setup and manage databases such as MongoDB, Postgres, and Redis.
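
Much of the role above is about automating and monitoring Kubernetes-based infrastructure from Python. A small illustrative sketch using the official Kubernetes Python client is shown below; the namespace and the health criterion are assumptions, not details from the posting.

```python
# Sketch of an infrastructure health check with the official Kubernetes Python
# client: list deployments in a namespace and report any whose ready replica
# count falls short of the desired count. The namespace is hypothetical.
from kubernetes import client, config

def find_unhealthy_deployments(namespace: str = "data-platform"):
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    unhealthy = []
    for deployment in apps.list_namespaced_deployment(namespace).items:
        desired = deployment.spec.replicas or 0
        ready = deployment.status.ready_replicas or 0
        if ready < desired:
            unhealthy.append((deployment.metadata.name, ready, desired))
    return unhealthy

if __name__ == "__main__":
    for name, ready, desired in find_unhealthy_deployments():
        print(f"{name}: {ready}/{desired} replicas ready")
```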

AWS, Docker, PostgreSQL, Python, Bash, Cloud Computing, GCP, Kafka, Kubernetes, MongoDB, Azure, Redis, Spark, CI/CD, RESTful APIs, Terraform, Microservices, Networking, Ansible

Posted 1 day ago
Apply

πŸ“ United States

πŸ” Software Development

🏒 Company: ge_externalsite

  • Exposure to industry standard data modeling tools (e.g., ERWin, ER Studio, etc.).
  • Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
  • Exposure to industry standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra, TAMR, etc.)
  • Hands-on experience in programming languages like Java, Python or Scala
  • Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
  • Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
  • Exposure to unstructured datasets and ability to handle XML, JSON file formats
  • Work independently as well as with a team to develop and support Ingestion jobs
  • Evaluate and understand various data sources (databases, APIs, flat files, etc.) to determine optimal ingestion strategies
  • Develop a comprehensive data ingestion architecture, including data pipelines, data transformation logic, and data quality checks, considering scalability and performance requirements.
  • Choose appropriate data ingestion tools and frameworks based on data volume, velocity, and complexity
  • Design and build data pipelines to extract, transform, and load data from source systems to target destinations, ensuring data integrity and consistency
  • Implement data quality checks and validation mechanisms throughout the ingestion process to identify and address data issues
  • Monitor and optimize data ingestion pipelines to ensure efficient data processing and timely delivery
  • Set up monitoring systems to track data ingestion performance, identify potential bottlenecks, and trigger alerts for issues
  • Work closely with data engineers, data analysts, and business stakeholders to understand data requirements and align ingestion strategies with business objectives.
  • Build technical data dictionaries and support business glossaries to analyze the datasets
  • Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
  • Build both logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
  • Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
  • Perform a variety of data loads & data transformations using multiple tools and technologies.
  • Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
  • Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
  • Analyze reference datasets and familiarize with Master Data Management (MDM) tools.
  • Analyze the impact of downstream systems and products
  • Derive solutions and make recommendations from deep dive data analysis.
  • Design and build Data Quality (DQ) rules needed
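
Several bullets above concern data profiling and Data Quality (DQ) rules. The sketch below shows what a handful of such checks can look like in pandas; the input file, column names, and thresholds are invented for illustration.

```python
# Sketch of simple data-quality (DQ) checks with pandas, in the spirit of the
# profiling and validation work described above. The file, column names, and
# thresholds are invented for illustration.
import pandas as pd

def run_dq_checks(path: str = "customers.csv") -> dict:
    df = pd.read_csv(path)

    checks = {
        "row_count_nonzero": len(df) > 0,
        "customer_id_unique": df["customer_id"].is_unique,
        "email_not_null": df["email"].notna().all(),
        "signup_date_parseable": pd.to_datetime(
            df["signup_date"], errors="coerce"
        ).notna().all(),
        "age_in_plausible_range": df["age"].between(0, 120).all(),
    }
    return checks

if __name__ == "__main__":
    for rule, passed in run_dq_checks().items():
        print(f"{rule}: {'PASS' if passed else 'FAIL'}")
```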

AWS, PostgreSQL, Python, SQL, Apache Airflow, Apache Hadoop, Data Analysis, Data Mining, Erwin, ETL, Hadoop HDFS, Java, Kafka, MySQL, Oracle, Snowflake, Cassandra, Clickhouse, Data engineering, Data Structures, REST API, NoSQL, Spark, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago
Apply

πŸ“ United States

πŸ” Medical device, pharmaceutical, clinical, or biotechnology

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

  • Proficiency in SQL and programming with R or Python
  • Experience with Google Cloud Platform (BigQuery, Storage, Compute Engine) is highly valued.
  • Strong problem-solving skills and the ability to work independently or as part of a team.
  • Excellent communication skills, able to convey complex statistical concepts to non-technical stakeholders.
  • Organize and merge data from diverse sources, ensuring data quality and integrity.
  • Identify and resolve bottlenecks in data processing and analysis, implementing solutions like automation and optimization.
  • Collaborate with clinical and technical teams to streamline data collection and entry processes.
  • Perform statistical analysis, data visualization, and generate reports for clinical studies and other projects.
  • Prepare summary statistics, tables, figures, and listings for presentations and publications.
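
To make the last bullet concrete, here is a small pandas sketch that produces per-group summary statistics of the kind used in tables and listings; the dataset and column names are hypothetical.

```python
# Sketch of a per-treatment-group descriptive-statistics table computed with
# pandas. The dataset and column names are hypothetical.
import pandas as pd

visits = pd.read_csv("study_visits.csv")  # hypothetical clinical dataset

summary = (
    visits.groupby("treatment_group")["systolic_bp"]
    .agg(n="count", mean="mean", sd="std", median="median", min="min", max="max")
    .round(1)
)

print(summary)
```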

Python, SQL, Apache Airflow, Data Mining, ETL, GCP, Algorithms, Data engineering, Pandas, Spark, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago
Apply
πŸ”₯ Data Engineer
Posted 4 days ago

πŸ“ United States

πŸ’Έ 112800.0 - 126900.0 USD per year

πŸ” Software Development

🏒 Company: Titan Cloud

  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • MySQL, MSSQL Database, Postgres, Python
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Develop and maintain data pipeline designs.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Design, build, and maintain the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player, able to work in an agile environment
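
The modeling bullets above mention partitioning, indexing, and query optimization. As a rough sketch (assuming PostgreSQL, which appears alongside Python in the listed stack), the DDL below creates a range-partitioned table and a supporting index from Python via psycopg2; the connection details, table, and column names are invented.

```python
# Sketch of partitioning and indexing in PostgreSQL, driven from Python with
# psycopg2. Connection details, table, and column names are hypothetical.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sensor_readings (
    site_id     BIGINT      NOT NULL,
    reading_ts  TIMESTAMPTZ NOT NULL,
    level_gal   NUMERIC(10, 2)
) PARTITION BY RANGE (reading_ts);

CREATE TABLE IF NOT EXISTS sensor_readings_2024
    PARTITION OF sensor_readings
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

CREATE INDEX IF NOT EXISTS idx_sensor_readings_site_ts
    ON sensor_readings (site_id, reading_ts);
"""

with psycopg2.connect("dbname=analytics user=etl password=secret host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```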

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting

Apply
Showing 10 out of 191 jobs

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We've developed a well-thought-out service for home job matching, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote work offers with full-time or part-time positions from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely free. If you wish to apply to more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming – software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative – graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales – digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring – teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content – creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) – virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting – bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time – the ideal choice for those who value stability and predictability;
  • Part-time – perfect for those looking for a side home job or seeking a balance between work and personal life;
  • Contract – suited for professionals who want to work on projects for a set period;
  • Temporary – short-term work that can be either full-time or part-time. These positions are often offered for seasonal or urgent tasks;
  • Internship – a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • For beginners – ideal positions for those just starting their journey in online work from home;
  • For intermediate specialists – if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • For experts – roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.