Remote Working

Remote working from home provides convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, becomes quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today — fast and easy!

Remote IT Jobs
Hadoop
72 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

📍 United States of America

💸 128250.0 - 266875.0 USD per year

🔍 Software Development

🏢 Company: careers

  • MS in Computer Science or a related field, with a strong understanding of fundamentals including Data Structures and Algorithms
  • Experience in algorithm design and ML/AI
  • Experience working with languages such as Java or Python
  • Familiarity with data mining, text processing and classification algorithms
  • Demonstrated problem-solving skills and initiative
  • Great communication skills, joy in helping people, ability to remain friendly and constructive under stress
  • 2+ years experience working with Applied ML
  • 5+ years of experience in the industry working with Backend Systems
  • Research and develop innovative algorithms for information retrieval, processing and ranking.
  • Build, enhance, optimize, and deploy tools, workflows, and systems to process Yahoo Mail data for extracting relevant information.
  • Be responsible for improving information extraction using machine learning and natural language processing techniques.
  • Participate in agile development to add incremental value to the business.
  • Collaborate with research scientists to build ML models for classification, extraction, and recommendation in an automated fashion.

Backend Development, Python, Software Development, SQL, Agile, Apache Hadoop, Data Analysis, Data Mining, Hadoop, Java, Machine Learning, PyTorch, Algorithms, Data Structures, Tensorflow, RESTful APIs, Data modeling

Posted about 1 hour ago
Apply
🔥 Data Architect
Posted about 22 hours ago

📍 Canada

🧭 Full-Time

🏢 Company: Top Hat

  • 5 years of experience with data modelling in an agile work environment.
  • Proficiency in multiple modelling techniques.
  • In-depth understanding of database storage principles.
  • Alignment with a culture of experimentation and A/B testing.
  • Experience gathering and analyzing product requirements.
  • Knowledge of data mining and segmentation techniques.
  • Expertise in SQL and MySQL.
  • Creation of ETL Specifications to satisfy product requirements.
  • Testing of Data products.
  • Familiarity with ERwin data modelling tool.
  • Familiarity with big data systems like Hadoop, Spark and dbt.
  • Familiarity with database management systems like PostgreSQL and MongoDB.
  • Familiarity with data visualization tools.
  • Proven analytical skills.
  • Problem-solving attitude.
  • Comfortable with agile practices.
  • Possesses an experimentation mindset.
  • BSc in Computer Science or relevant field.
  • Analyze requirements and translate them to conceptual and logical data models for our applications and the data warehouse.
  • Optimize and/or perform migration on existing database systems.
  • Improve system performance by conducting tests, troubleshooting and integrating new elements.
  • Develop and maintain a data dictionary and metadata management strategy.
  • Develop and maintain data governance policies, procedures, and standards to ensure the integrity and accuracy of the data.
  • Coordinate with the Data Science and Revenue Operation teams to identify future needs and requirements.
  • Provide operational support for downstream business units.

PostgreSQL, SQL, Agile, Data Analysis, Data Mining, Erwin, ETL, Hadoop, MongoDB, MySQL, Nosql, Spark, Analytical Skills, Data visualization, Data modeling

Posted about 22 hours ago
Apply

📍 United States

🧭 Full-Time

💸 166372.19 - 207965.24 USD per year

🔍 Software Development

  • Deep understanding of GSI and Reseller pre-sales motions, including how solution architects, pre-sales technical specialists and technical teams participate in discovery workshops, opportunity qualification, proof-of-concepts, and technical validation sessions. This includes understanding how technology choices impact project economics and delivery timelines.
  • Experience with GSI and Reseller sales methodologies, particularly how they identify and qualify opportunities through both client-direct and vendor-partner channels and how they manage complex, multi-stakeholder, multi-vendor, multi-tech option sales cycles.
  • Knowledge of how GSIs and Resellers approach technical enablement and skills development, including their internal training programs and certification paths, and how they maintain technical excellence across delivery teams while managing utilization targets.
  • Cloud data platforms and data lakes: Snowflake, Databricks, Redshift, BigQuery, Synapse, S3, ADLS, OneLake, Hadoop - Hands-on experience with modern cloud data platforms and data lake architectures
  • Data workloads including data warehousing, data lakes, AI, ML, Gen AI, data applications, data engineering
  • Open table formats: Iceberg, Delta Lake, Hudi
  • Data movement technologies: Fivetran, HVR, Matillion, Airbyte, Informatica, Talend, Datastage, ADF, AWS Glue. Hands-on experience with traditional (or more modern) ETL/ELT solutions
  • Databases: Oracle, SQL Server, PostgreSQL, MySQL, MongoDB, CosmosDB, SAP HANA.
  • Applications: Salesforce, Google Analytics, ServiceNow, Hubspot, etc.
  • ERP solutions: SAP, Oracle, Workday, etc.
  • Data transformation solutions: dbt Core, dbt Cloud, Coalesce, Informatica
  • Gen AI: approaches, concepts, and the Gen AI technology ecosystem for hyperscalers, cloud data platforms, and solution-specific ISVs
  • REST APIs: Experience programmatically interacting with REST APIs
  • Data Infrastructure, Data Security, Data Governance and Application Development: strong understanding of technical architectures and approaches
  • Programming languages: Python, SQL
  • Build and take ownership of ongoing relationships with GSI and Reseller pre-sales technical team members, lead architects, and other technical recommenders, creating multiple Fivetran technical evangelists and champions within each GSI and Reseller partner.
  • Interact daily with Fivetran’s top GSI and Reseller partners while tracking, measuring, and driving three primary KPIs: partners enabled, GTM activities supported, and technical content created.
  • Represent Fivetran as the partner technical liaison plus product, data and solution expert. Collaborate with the strategic GSI and Reseller Partner Sales Managers to enable new and existing GSI and Reseller partners to understand Fivetran's value, differentiation, messaging, and technical/business value.
  • Assist Fivetran Partner Sales Managers in communicating and demonstrating Fivetran’s value prop to existing and prospective partners to get them excited about partnering with Fivetran
  • Get hands-on with all aspects of Fivetran and the broader data ecosystem technologies listed above in building out live demos, hands-on labs, solution prototypes, and Fivetran/GSI/Reseller joint solutions.
  • Communicate customer requirements and competitor solutions. Provide a strong technical response to partner requests. Engage the customer SE team and communicate client requirements for individual client opportunities.

PostgreSQL, Python, SQL, Business Intelligence, Cloud Computing, ETL, Hadoop, MongoDB, MySQL, Oracle, Salesforce, SAP, Snowflake, Google Analytics, Data engineering, Analytical Skills, Mentoring, Presentation skills, Excellent communication skills, Relationship building, Strong communication skills

Posted 1 day ago
Apply

📍 Japan

🔍 Software infrastructure

🏢 Company: CIQ 👥 101-250 💰 $26,000,000 Series A almost 3 years ago · Information Technology · Software

  • Extensive knowledge of on-prem, hybrid, and multi-cloud infrastructure, with familiarity with Open-Source, HPC, enterprise performance computing, AI/ML, and NLP infrastructure
  • Deep understanding of the enterprise software ecosystem, particularly with Linux distributions like Rocky Linux, and experience in complex regulatory and compliance environments
  • Exceptional account management and business development skills, with a track record of closing complex, high-value deals in Japan’s business landscape
  • Expertise in navigating corporate structures and partnerships, particularly within the networks of various corporate families
  • Familiarity with Japan’s corporate and legal frameworks, especially regarding establishing local entities and employment structures
  • Strong executive presence with exceptional sales and presentation skills. Ability to confidently engage and influence executives, delivering compelling narratives that drive decision-making and close high-value deals
  • Great organizational skills and the ability to manage multiple projects while driving strategic growth for CIQ Japan
  • Fluency in both Japanese and English, with a deep cultural understanding of how to effectively manage business relationships in Japan
  • Develop and nurture relationships with key decision-makers at major Japanese companies, maintain a deep understanding of their business objectives and challenges to effectively position CIQ’s offerings
  • Identify and pursue new collaboration, growth, and revenue opportunities in the Japanese market, expanding the adoption of CIQ’s solutions, such as Rocky Linux.
  • Work closely with CIQ’s technical, product, and sales teams to align with the goals of Japanese entities, manage joint projects, and support key customer accounts and channel partnerships
  • Develop and execute account strategies that align with CIQ’s broader business objectives, formalizing account plans.
  • Ensure Japanese customers receive top-tier support, addressing technical and operational challenges, and ensure a smooth delivery of solutions like Rocky Linux and Fuzzball
  • Track and report on account performance, pipeline growth, project milestones, and customer satisfaction, using data-driven insights to inform strategy and decision-making

Leadership, Business Development, Cloud Computing, Hadoop, Strategic Management, CI/CD, Negotiation, Linux, Compliance, Account Management, Relationship management, Sales experience

Posted 2 days ago
Apply

📍 Worldwide

💸 90000.0 - 160000.0 USD per year

🔍 Software Development

🏢 Company: Automattic Careers

  • 5+ years of experience in data science, analytics, or a related role within a tech company, preferably SaaS, working with large-scale, complex, product-focused data sets to drive measurable business impact.
  • Strong expertise in experimentation, statistical analysis (e.g., hypothesis testing, t-tests, regression analysis, time series analysis, bootstrapping), machine learning, and predictive modeling.
  • Proficiency in Python and key data science libraries, including Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn.
  • Fluency using distributed SQL query engines like Hive, Presto, or Trino.
  • Familiarity with big data technologies (e.g., Hadoop, Spark).
  • Employ technical expertise in quantitative analysis, advanced statistics, experimentation, and data mining to develop data-driven strategies for products that democratize publishing and commerce.
  • Proactively identify, define, and test opportunities and levers to enhance the product, ensuring data-driven decision-making that drives measurable business and customer impact. Translate insights into actionable recommendations that influence roadmaps and strategic investments.
  • Contribute to the development and implementation of advanced data analysis techniques and statistical models across multiple business units.
  • Design and drive company-wide data initiatives, shaping the direction of data analytics and insights to support strategic decisions.
  • Mentor and guide data scientists and analysts.
  • Partner with Growth, Product, Engineering, Design, and senior leadership to inform, influence, and execute product and growth strategies, leveraging key insights to drive measurable, company-wide impact.
  • Communicate complex strategic insights and recommendations to non-technical stakeholders, facilitating data-driven decision-making at the highest levels of the company.

Python, SQL, Data Mining, Hadoop, Machine Learning, Numpy, Data science, Pandas, Spark, Data visualization, Data modeling, A/B testing

Posted 4 days ago
Apply

📍 São Paulo, Rio Grande do Sul, Rio de Janeiro, Belo Horizonte

🔍 Data field

🏢 Company: TELUS Digital Brazil

  • 3+ years in the Data field
  • Experience in the construction and optimization of data pipelines, architectures, and 'big data' datasets
  • Proficiency with Apache Spark with a general-purpose programming language such as Python or Scala
  • Ability to create processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL databases
  • Experience with data pipeline and workflow management tools
  • Be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams;
  • Create and maintain optimal data pipeline architecture;
  • Assemble large, complex data sets that meet business requirements;
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc;
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

AWS, Python, SQL, Cloud Computing, ETL, GCP, Hadoop, Kafka, Airflow, Azure, Data engineering, Scala

Posted 7 days ago
Apply
🔥 Data Engineer
Posted 7 days ago

📍 United States

💸 112800.0 - 126900.0 USD per year

🔍 Software Development

🏢 Company: Titan Cloud

  • 4+ years of relevant employment experience
  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Proficiency with MySQL, MSSQL, Postgres, and Python
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Develop and maintain data pipeline designs.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Design, build, and maintain the data architecture needed to support business decisions and data-driven applications, including collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platforms.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player able to work in an agile environment

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting

Posted 7 days ago
Apply

📍 United States

🧭 Full-Time

💸 177000.0 - 213000.0 USD per year

🔍 FinTech

🏢 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using dbt.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient dbt models and jobs to support analytics reporting and machine learning modeling.
  • Closely collaborate with the Analytics team on data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.

AWS, Python, SQL, Bash, Design Patterns, ETL, Hadoop, Java, Kafka, Snowflake, Airflow, Algorithms, Cassandra, Data engineering, Data Structures, Nosql, Spark, Communication Skills, CI/CD, RESTful APIs, Terraform, Written communication, Documentation, Data modeling, Debugging

Posted 8 days ago
Apply

📍 USA

🧭 Full-Time

💸 142000.0 - 199000.0 USD per year

🔍 Drug Discovery

🏢 Company: SandboxAQ 👥 101-250 💰 $25,000,000 Grant 5 months ago · Artificial Intelligence (AI) · SaaS · Information Technology · Cyber Security

  • PhD in computational physics, computational chemistry, or a related discipline.
  • 1-5 years of relevant experience, including hands-on experience with computational chemistry applied to drug discovery in biotech, pharma, or related industries.
  • A deep commitment to critical thinking—comfortable questioning methodologies, challenging assumptions, and refining approaches to improve predictive accuracy.
  • A collaborative mindset, working closely with colleagues to identify problems and communicate technical solutions clearly and effectively.
  • A hands-on mentality, eager to dive into complex problems and drive projects to completion.
  • Experience in structure-based drug design and familiarity with ligand-based drug design methods.
  • Experience with GPU-accelerated MD codes like OpenMM.
  • Proficiency in the Python data science stack (NumPy, Pandas, SciPy, etc.).
  • Familiarity with ligand-protein free energy of binding prediction methods such as Free Energy Perturbation (FEP) or similar.
  • Experience running computational chemistry/quantum simulations on high-performance computing (GPU) environments for corporate R&D, innovation labs, or academic research.
  • A passion for solving scientific problems in chemistry and biology through computational and data-driven methods, with a commitment to rigorous validation and scientific integrity.
  • Apply computational solutions to address unmet drug discovery challenges, questioning assumptions and rigorously validating results.
  • Work closely with the development team to enhance SandboxAQ’s unique technology and Large Quantitative Models (LQMs) for large-scale impact.
  • Apply SandboxAQ’s computational platform to provide high-value drug discovery solutions while critically evaluating its methodologies and results.
  • Translate insights from molecular dynamics, free energy perturbation, molecular docking, quantum, and ML methods into actionable, testable drug discovery hypotheses.
  • Collaborate closely with ML experts and cross-functional teams to prototype and scale innovative drug design solutions.
  • Develop and deploy computational methods and workflows to generate and evaluate hypotheses, guiding design decisions and influencing project direction.
  • Challenge conventional thinking by critically assessing and interpreting computational outputs, ensuring robust scientific conclusions.
  • Directly contribute to the discovery of innovative medicines by integrating computational chemistry techniques into multidisciplinary drug discovery teams.

Python, Data Analysis, Hadoop, Machine Learning, Numpy, Algorithms, Data Structures, RDBMS, REST API, Pandas

Posted 9 days ago
Apply

📍 Netherlands

🔍 Software Development

🏢 Company: Dataiku 👥 1001-5000 💰 $200,000,000 Series F over 2 years ago · Artificial Intelligence (AI) · Big Data · Data Integration · Analytics · Enterprise Software

  • At least 3 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software/product
  • Experience with cloud platforms such as AWS, Azure, and GCP
  • Experience with Docker and Kubernetes
  • Collaborative and helpful mindset with a focus on always working as a team
  • A strong competency in technical problem solving with demonstrated experience performing advanced log analysis, debugging, and reproducing errors
  • Proficiency working with Unix-based operating systems
  • Experience with relational databases (or data warehouses like Snowflake) and SQL
  • Ability to read and write Python or R code
  • Help EMEA and global customers solve their technical issues with Dataiku through a variety of communication channels
  • Communicate with our R&D team to solve complex issues and/or share feedback from our EMEA customers for future product improvement
  • Work with other customer-facing teams when escalating or rerouting issues to help ensure a proper, efficient, and timely resolution
  • Document knowledge in the form of technical articles and contribute to knowledge bases or forums within specific areas of expertise
  • Occasionally wear multiple hats and help out with other activities in a fast-paced and dynamic startup team environment

AWS, Docker, Python, SQL, GCP, Hadoop, Kubernetes, LDAP, Azure, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Written communication, Excellent communication skills, Troubleshooting, Active listening, JSON, Data visualization, Technical support, Scripting, Debugging, Customer support, English communication

Posted 10 days ago
Apply
Showing 10 of 72

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We’ve developed a well-thought-out service for home job matching, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Update

Our platform features over 40,000 remote work offers with full-time or part-time positions from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day completely free. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming — software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative — graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales — digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring — teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content — creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) — Virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting — bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time — the ideal choice for those who value stability and predictability;
  • Part-time — perfect for those looking for a side home job or seeking a balance between work and personal life;
  • Contract — suited for professionals who want to work on projects for a set period;
  • Temporary — short-term work that can be either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship — a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • for beginners — ideal positions for those just starting their journey in remote work from home;
  • for intermediate specialists — if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • for experts — roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and your preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.