Remote Working

Remote working from home offers convenience and freedom, a lifestyle embraced by millions of people around the world. With our platform, finding the right job, whether full-time or part-time, is quick and easy thanks to AI, precise filters, and daily updates. Sign up now and start your online career today!

Remote IT Jobs
Apache Hadoop
14 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

πŸ”₯ Data Engineer

πŸ“ United States

🧭 Full-Time

πŸ” Sustainable Agriculture

🏒 Company: Agrovision

  • Experience with RDBMS (e.g., Teradata, MS SQL Server, Oracle) in production environments is preferred
  • Hands-on experience in data engineering and databases/data warehouses
  • Familiarity with Big Data platforms (e.g., Hadoop, Spark, Hive, HBase, Map/Reduce)
  • Expert level understanding of Python (e.g., Pandas)
  • Proficient in shell scripting (e.g., Bash) and Python data application development (or similar)
  • Excellent collaboration and communication skills with teams
  • Strong analytical and problem-solving skills, essential for tackling complex challenges
  • Experience working with BI teams and tooling (e.g., Power BI), supporting analytics work and interfacing with Data Scientists
  • Collaborate with data scientists to ensure high-quality, accessible data for analytical and predictive modeling
  • Design and implement data pipelines (ETLs) tailored to meet business needs and digital/analytics solutions (see the Pandas sketch after this list)
  • Enhance data integrity, security, quality, and automation, addressing system gaps proactively
  • Support pipeline maintenance, troubleshoot issues, and optimize performance
  • Lead and contribute to defining detailed scalable data models for our global operations
  • Ensure data security standards are met and upheld by contributors, partners and regional teams through programmatic solutions and tooling
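
To give a concrete flavor of the pipeline work this role describes, here is a minimal, hypothetical Pandas-based ETL sketch. The file, column, and table names (harvest_export.csv, field_yields, the Postgres connection string) are invented for illustration and do not come from the posting.

```python
# Minimal ETL sketch: extract from a CSV source, transform, load into a warehouse.
# All names here are illustrative placeholders, not the employer's actual schema.
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    """Read raw harvest records from a CSV export."""
    return pd.read_csv(path, parse_dates=["harvest_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Clean and aggregate: drop incomplete rows, compute yield per hectare."""
    df = df.dropna(subset=["field_id", "yield_kg", "area_ha"])
    df["yield_per_ha"] = df["yield_kg"] / df["area_ha"]
    return df.groupby(["field_id", "harvest_date"], as_index=False)["yield_per_ha"].mean()

def load(df: pd.DataFrame, conn_str: str) -> None:
    """Append the curated table to the warehouse."""
    engine = create_engine(conn_str)
    df.to_sql("field_yields", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("harvest_export.csv")), "postgresql://user:pass@host/warehouse")
```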

Python, SQL, Apache Hadoop, Bash, ETL, Data engineering, Data science, RDBMS, Pandas, Spark, Communication Skills, Analytical Skills, Collaboration, Problem Solving, Data modeling

Posted about 18 hours ago
Apply

πŸ“ United States

πŸ” Software Development

🏒 Company: ge_externalsite

  • Exposure to industry-standard data modeling tools (e.g., ERWin, ER Studio, etc.)
  • Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
  • Exposure to industry-standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra, TAMR, etc.)
  • Hands-on experience in programming languages like Java, Python or Scala
  • Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
  • Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
  • Exposure to unstructured datasets and ability to handle XML, JSON file formats
  • Work independently as well as with a team to develop and support Ingestion jobs
  • Evaluate and understand various data sources (databases, APIs, flat files, etc.) to determine optimal ingestion strategies
  • Develop a comprehensive data ingestion architecture, including data pipelines, data transformation logic, and data quality checks, considering scalability and performance requirements.
  • Choose appropriate data ingestion tools and frameworks based on data volume, velocity, and complexity
  • Design and build data pipelines to extract, transform, and load data from source systems to target destinations, ensuring data integrity and consistency
  • Implement data quality checks and validation mechanisms throughout the ingestion process to identify and address data issues (a minimal validation sketch follows this list)
  • Monitor and optimize data ingestion pipelines to ensure efficient data processing and timely delivery
  • Set up monitoring systems to track data ingestion performance, identify potential bottlenecks, and trigger alerts for issues
  • Work closely with data engineers, data analysts, and business stakeholders to understand data requirements and align ingestion strategies with business objectives.
  • Build technical data dictionaries and support business glossaries to analyze the datasets
  • Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
  • Build both logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
  • Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
  • Perform a variety of data loads & data transformations using multiple tools and technologies.
  • Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
  • Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
  • Analyze reference datasets and familiarize with Master Data Management (MDM) tools.
  • Analyze the impact of downstream systems and products
  • Derive solutions and make recommendations from deep dive data analysis.
  • Design and build Data Quality (DQ) rules needed
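
The data quality checks mentioned above could look something like the following toy validation gate. The rule set, file paths, and column names are assumptions made up for the example, not this employer's actual tooling.

```python
# Illustrative data quality gate for an ingestion pipeline (hypothetical rules).
import pandas as pd

RULES = {
    "order_id": lambda s: s.notna() & ~s.duplicated(),   # required and unique
    "amount":   lambda s: s.between(0, 1_000_000),       # plausible value range
    "country":  lambda s: s.isin(["US", "CA", "MX"]),    # controlled vocabulary
}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows that violate any rule, tagged with the failed column."""
    failures = []
    for col, rule in RULES.items():
        bad = df[~rule(df[col])]
        if not bad.empty:
            failures.append(bad.assign(failed_check=col))
    return pd.concat(failures) if failures else df.iloc[0:0]

df = pd.read_json("landing/orders.json")      # raw landing-zone extract (placeholder path)
bad_rows = validate(df)
if not bad_rows.empty:
    bad_rows.to_csv("quarantine/orders_rejected.csv", index=False)  # quarantine for review
```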

AWS, PostgreSQL, Python, SQL, Apache Airflow, Apache Hadoop, Data Analysis, Data Mining, Erwin, ETL, Hadoop HDFS, Java, Kafka, MySQL, Oracle, Snowflake, Cassandra, Clickhouse, Data engineering, Data Structures, REST API, NoSQL, Spark, JSON, Data visualization, Data modeling, Data analytics, Data management

Posted 3 days ago
Apply

πŸ“ India

🏒 Company: BlackStone eIT πŸ‘₯ 251-500 | Augmented Reality, Robotics, Analytics, Project Management

  • 5+ years of experience in AI or machine learning roles.
  • Strong proficiency in programming languages such as Python, Java, or C++.
  • Expertise with machine learning frameworks like TensorFlow or PyTorch.
  • In-depth knowledge of AI algorithms and techniques, including machine learning, deep learning, and NLP.
  • Experience with big data technologies like Hadoop or Spark.
  • Familiarity with cloud services (AWS, Google Cloud, Azure) for deploying AI models.
  • Design, develop, and implement advanced artificial intelligence solutions that enhance our products and services (see the toy sketch after this list)
  • Work closely with data scientists, software engineers, and other stakeholders to transform business requirements into robust AI applications.
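
As a toy illustration of the framework skills this posting lists, here is a minimal PyTorch training loop on synthetic data. The architecture, data, and hyperparameters are placeholders, not a real product model.

```python
# Tiny PyTorch training loop on synthetic data (illustrative only).
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)            # synthetic features
y = torch.randint(0, 2, (256,))     # synthetic binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)     # forward pass and loss
    loss.backward()                 # backpropagate
    optimizer.step()                # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```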

AWS, Python, Apache Hadoop, Artificial Intelligence, Java, Machine Learning, Microsoft Azure, PyTorch, C++, Spark, TensorFlow

Posted 8 days ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Worth AI πŸ‘₯ 11-50 πŸ’° $12,000,000 Seed over 1 year ago | Artificial Intelligence (AI), Business Intelligence, Risk Management, FinTech

  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Proven experience as a Software Engineer, with a focus on infrastructure development and operations.
  • Strong programming skills in languages such as Python, Javascript, or Go.
  • Experience with cloud platforms (preferably AWS) and cost optimization strategies.
  • Familiarity with container orchestration (e.g., Kubernetes, Docker).
  • Expertise in Infrastructure as Code (IaC) tools, particularly Terraform (AWS CDK is a plus).
  • Design, scale, and maintain infrastructure to support Big Data workloads and real-time streaming systems such as Apache Spark, Hadoop, and Kafka.
  • Understanding of networking concepts, protocols, and security practices.
  • Proficiency in source control systems, especially Git.
  • Experience with CI/CD tools such as GitHub Actions and ArgoCD.
  • Familiarity with observability tools (Datadog, New Relic, etc.) for monitoring and logging.
  • Excellent problem-solving skills and the ability to work in a collaborative environment.
  • Strong communication skills to effectively share knowledge with team members.
  • Experience in the Risk, Underwriting, and/or Payments Industry is a plus.
  • Design and develop cloud infrastructure components and services to support our AI-driven platforms.
  • Collaborate with software engineers to integrate applications with underlying infrastructure.
  • Automate deployment processes and infrastructure management using Infrastructure as Code (IaC) practices (a minimal CDK sketch follows this list)
  • Implement monitoring and logging strategies to optimize system performance and availability.
  • Optimize infrastructure for cost efficiency, ensuring resources are utilized effectively without compromising performance.
  • Coordinate with security teams to ensure the infrastructure is compliant with best practices and standards.
  • Troubleshoot and resolve infrastructure-related issues efficiently.
  • Continuously evaluate, recommend, and implement changes to improve system reliability and performance.
  • Maintain documentation for infrastructure services and processes.
  • Support on-call rotation as needed for critical infrastructure issues.
  • Other duties as assigned
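
Since the posting notes AWS CDK as a plus, here is a hedged, minimal CDK v2 sketch in Python of the IaC style described. The stack, bucket, and queue names are invented; a real deployment would model far more resources.

```python
# Minimal AWS CDK v2 sketch (illustrative; resource names are placeholders).
from aws_cdk import App, Stack, aws_s3 as s3, aws_sqs as sqs
from constructs import Construct

class StreamingStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "RawEvents", versioned=True)   # landing zone for raw events
        sqs.Queue(self, "IngestQueue")                 # buffer between producers and workers

app = App()
StreamingStack(app, "streaming-infra")
app.synth()   # emit the CloudFormation template
```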

AWS, Docker, Python, SQL, Apache Hadoop, Cloud Computing, Git, Hadoop, JavaScript, Kafka, Kubernetes, Algorithms, Apache Kafka, Data Structures, Go, CI/CD, RESTful APIs, Linux, DevOps, Terraform, Microservices, Networking, Software Engineering

Posted 16 days ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: Global InfoTek, Inc.

  • 10-12 years of experience in cloud engineering
  • Working knowledge of AWS, Azure, or Google Cloud
  • Experience with programming languages like Python, Java, or C#
  • Design and implement cloud infrastructure
  • Engineer integration of applications into cloud and hybrid environments
  • Monitor cloud system performance and optimize resources (see the sketch below)
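
A small, hypothetical example of the performance monitoring mentioned in the last item, using boto3 to pull an EC2 CPU metric from CloudWatch. The region and instance ID are placeholders.

```python
# Pull one hour of EC2 CPU utilization from CloudWatch (placeholders throughout).
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                      # 5-minute buckets
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```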

AWS, Docker, Python, Apache Hadoop, Cybersecurity, ElasticSearch, Kubernetes, C#, Azure, CI/CD, Terraform, Networking, Ansible

Posted 27 days ago
Apply

πŸ“ USA

πŸ’Έ 176,000 - 207,000 USD per year

πŸ” Cybersecurity

🏒 Company: Abnormal Security πŸ‘₯ 501-1000 πŸ’° $250,000,000 Series D 7 months ago | Artificial Intelligence (AI), Email, Information Technology, Cyber Security, Network Security

  • 5+ years of experience as a data engineer or similar role, with hands-on experience in building data-focused solutions.
  • Expertise in ETL, data pipeline design, and data engineering tools and technologies (e.g., Apache Spark, Hadoop, Airflow, Kafka).
  • Experience with maintaining real-time and near real-time data pipelines or streaming services at high scale.
  • Experience with maintaining large scale distributed systems on cloud platforms such as AWS, GCP, or Azure.
  • Background in implementing data quality frameworks, including validation, monitoring, and anomaly detection.
  • Proven ability to collaborate effectively with cross-functional teams.
  • Excellent problem-solving skills and ability to work independently in a fast-paced environment.
  • Architect, design, build, and deploy backend ETL jobs and infrastructure that support a world-class Detection Engine.
  • Own projects that enable us to meet ambitious goals, including scaling components of Detection’s Data Pipeline by 10x.
  • Own real-time and near real-time streaming pipelines and online feature serving services (a minimal consumer sketch follows this list).
  • Collaborate closely with MLE and Data Science teams, distilling feedback and executing strategy.
  • Coach and mentor junior engineers through 1:1s, pair programming, code reviews, and design reviews.
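
For the streaming-pipeline ownership described above, a minimal consumer sketch with the kafka-python client might look like this. The topic, broker address, consumer group, and feature logic are all invented for illustration, not Abnormal's actual pipeline.

```python
# Minimal near-real-time Kafka consumer sketch (names are placeholders).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "detection-events",                          # hypothetical topic name
    bootstrap_servers=["broker:9092"],           # placeholder broker address
    group_id="feature-extractors",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    event = message.value
    # Derive a toy feature and hand it downstream (placeholder logic).
    features = {"sender_domain": event.get("sender", "").split("@")[-1]}
    print(message.offset, features)
```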

AWS, Apache Airflow, Apache Hadoop, ETL, GCP, Kafka, Azure, Data engineering

Posted about 1 month ago
Apply

πŸ“ United States, India

🧭 Contract, Part-Time, Full-Time

πŸ” Life sciences

🏒 Company: ValGenesis πŸ‘₯ 501-1000 πŸ’° $24,000,000 Private almost 4 years ago | Pharmaceutical, Medical Device, Software

  • Bachelor’s or Master’s in Computer Science, Data Science, or related field.
  • 8+ years in AI/ML solution development.
  • Proven software development experience in life sciences or regulated industries.
  • Strong analytical thinking and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Knowledge of life sciences validation processes and regulatory compliance.
  • Build scalable AI/ML models for document classification, intelligent search, and predictive analytics (a toy sketch follows this list).
  • Implement image processing solutions for visual inspections and anomaly detection.
  • Define AI architecture and select technologies from open-source and commercial offerings.
  • Deploy AI/ML solutions in cloud-based environments with a focus on high availability and security.
  • Mentor a team of AI/ML engineers, fostering collaborative research and development.
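
As a toy sketch of the document-classification work mentioned above, here is a tiny scikit-learn pipeline. The sample texts and labels are invented and far too small for a real model.

```python
# Toy document classifier: TF-IDF features + logistic regression (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "installation qualification protocol for bioreactor",
    "standard operating procedure for cleaning validation",
    "deviation report: temperature excursion in storage",
    "change control request for HPLC software upgrade",
]
labels = ["protocol", "sop", "deviation", "change_control"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["cleaning validation procedure for mixing tank"]))  # likely ['sop']
```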

AWS, Docker, PostgreSQL, Python, SQL, Apache Hadoop, Artificial Intelligence, Cloud Computing, Git, Image Processing, Jenkins, Kubernetes, Machine Learning, MongoDB, NumPy, OpenCV, PyTorch, Tableau, Azure, Pandas, Spark, TensorFlow, CI/CD, Compliance, Data visualization

Posted about 1 month ago
Apply

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Information Technology

  • 5+ years of experience in IT focusing on full stack development
  • 3+ years of software engineering for front-end and back-end applications
  • Experience with Apache Hadoop, Jenkins, and microservices architecture
  • Develop and deploy applications in AWS cloud environments
  • Create and manage CI/CD pipelines
  • Collaborate with functional teams on application development

AWS, Apache Hadoop, Java, JavaScript, Jenkins, Kotlin, Kubernetes, Spring Boot, TypeScript, Cassandra, gRPC, Prometheus, Tomcat, React, Selenium, Spark, CI/CD, Microservices, JSON, Scala

Posted about 2 months ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 200,000 - 255,000 USD per year

πŸ” Blockchain intelligence and financial crime prevention

🏒 Company: TRM Labs πŸ‘₯ 101-250 πŸ’° $70,000,000 Series B over 2 years ago | Cryptocurrency, Compliance, Blockchain, Big Data

  • Academic background in a quantitative field such as Computer Science, Mathematics, Engineering, or Physics.
  • Strong knowledge of algorithm design and data structures with practical application experience.
  • Experience optimizing large-scale distributed data processing systems like Apache Spark, Apache Hadoop, Dask, and graph databases.
  • Experience in converting academic research into products with a history of collaborating on feature releases.
  • Strong programming experience in Python and SQL.
  • Excellent communication skills for technical and non-technical audiences.
  • Delivery-oriented with the ability to lead feature development from start to finish.
  • Autonomous ownership of work, capable of moving swiftly and efficiently.
  • Knowledge of basic graph theory concepts.
  • Designing and implementing graph algorithms that analyze large cryptocurrency transaction networks at multi-blockchain scale (a toy sketch follows this list).
  • Researching new graph-native technology to evaluate benefit to data science and data engineering teams at TRM.
  • Collaborating with cryptocurrency investigators to identify key user stories and requirements for new graph algorithms and features.
  • Understanding and refining TRM’s risk models to assign risk scores to addresses.
  • Communicating complex implementation details to various audiences from investigators to data engineers.
  • Integrating with diverse data inputs ranging from raw blockchain data to model outputs.
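
To illustrate the graph-algorithm work described above, here is a toy risk-propagation sketch over an invented transaction graph using networkx. The addresses, weights, decay factor, and update rule are assumptions for the example, not TRM's actual risk model.

```python
# Toy risk propagation over a directed transaction graph (all data invented).
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("addr_sanctioned", "addr_a", 5.0),   # value transferred (toy units)
    ("addr_a", "addr_b", 3.0),
    ("addr_c", "addr_b", 1.0),
])

def propagate_risk(graph, seeds, decay=0.5, hops=3):
    """Spread risk from seed addresses along outgoing edges, attenuating per hop."""
    risk = {n: 1.0 if n in seeds else 0.0 for n in graph}
    for _ in range(hops):
        nxt = dict(risk)
        for u, v in graph.edges():
            nxt[v] = max(nxt[v], risk[u] * decay)
        risk = nxt
    return risk

print(propagate_risk(G, {"addr_sanctioned"}))
```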

Python, SQL, Apache Hadoop, Algorithms, Data engineering, Data science, Data Structures

Posted about 2 months ago
Apply

πŸ“ US

πŸ’Έ 177,309 - 310,291 USD per year

πŸ” Software Development

🏒 Company: Pinterest πŸ‘₯ 5001-10000 πŸ’° Post-IPO Equity over 2 years ago πŸ«‚ Last layoff about 2 years ago | Internet, Social Network, Software, Social Media, Social Bookmarking

  • 4+ years of industry experience applying machine learning methods (e.g., user modeling, personalization, recommender systems, search, ranking, natural language processing, reinforcement learning, and graph representation learning)
  • End-to-end hands-on experience with building data processing pipelines, large scale machine learning systems, and big data technologies (e.g., Hadoop/Spark)
  • MS/PhD in Computer Science, ML, NLP, Statistics, Information Sciences, related field, or equivalent experience.
  • Build cutting edge technology using the latest advances in deep learning and machine learning to personalize Pinterest
  • Partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search), while gaining knowledge of how ML works in different areas
  • Use data-driven methods and leverage the unique properties of our data to improve candidate retrieval (a toy retrieval sketch follows this list)
  • Work in a high-impact environment with quick experimentation and product launches
  • Keep up with industry trends in recommendation systems
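
As a toy sketch of embedding-based candidate retrieval, the following scores synthetic items against a synthetic user vector with NumPy. Real systems would use learned embeddings and an approximate nearest-neighbor index rather than a brute-force dot product.

```python
# Brute-force embedding retrieval on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(10_000, 64))   # 10k candidate items, 64-dim embeddings
user_embedding = rng.normal(size=64)              # stand-in for a learned user vector

scores = item_embeddings @ user_embedding          # similarity score for every candidate
top_k = np.argsort(-scores)[:10]                   # indices of the 10 best candidates
print(top_k, scores[top_k])
```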

Python, Software Development, Apache Hadoop, Machine Learning, NumPy, PyTorch, Algorithms, Data Structures, Spark, TensorFlow

Posted 2 months ago
Apply
Showing 10 of 14

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why do Job Seekers Choose Our Platform for Remote Work Opportunities?

We’ve developed a well-thought-out service for matching people with home-based jobs, making the search process easier and more efficient.

AI-powered Job Processing and Advanced Filters

Our algorithms process thousands of job postings daily, extracting only the key information from each listing. This allows you to skip lengthy texts and focus only on the offers that match your requirements.

With powerful skill filters, you can specify your core competencies to instantly receive a selection of job opportunities that align with your experience. 

Search by Country of Residence

For those looking for fully remote jobs in their own country, our platform offers the ability to customize the search based on your location. This is especially useful if you want to adhere to local laws, consider time zones, or work with employers familiar with local specifics.

If necessary, you can also work remotely with employers from other countries without being limited by geographical boundaries.

Regular Data Updates

Our platform features over 40,000 remote job offers, both full-time and part-time, from 7,000 companies. This wide range ensures you can find offers that suit your preferences, whether from startups or large corporations.

We regularly verify the validity of vacancy listings and automatically remove outdated or filled positions, ensuring that you only see active and relevant opportunities.

Job Alerts

Once you register, you can set up convenient notification methods, such as receiving tailored job listings directly to your email or via Telegram. This ensures you never miss out on a great opportunity.

Our job board allows you to apply for up to 5 vacancies per day absolutely for free. If you wish to apply for more, you can choose a suitable subscription plan with weekly, monthly, or annual payments.

Wide Range of Completely Remote Online Jobs

On our platform, you'll find fully remote work positions in the following fields:

  • IT and Programming – software development, website creation, mobile app development, system administration, testing, and support.
  • Design and Creative – graphic design, UX/UI design, video content creation, animation, 3D modeling, and illustrations.
  • Marketing and Sales – digital marketing, SMM, contextual advertising, SEO, product management, sales, and customer service.
  • Education and Online Tutoring – teaching foreign languages, school and university subjects, exam preparation, training, and coaching.
  • Content – creating written content for websites, blogs, and social media; translation, editing, and proofreading.
  • Administrative Roles (Assistants, Operators) – virtual assistants, work organization support, calendar management, and document workflow assistance.
  • Finance and Accounting – bookkeeping, reporting, financial consulting, and taxes.

Other careers include: online consulting, market research, project management, and technical support.

All Types of Employment

The platform offers online remote jobs with different types of work:

  • Full-time – the ideal choice for those who value stability and predictability;
  • Part-time – perfect for those looking for a side job from home or seeking a balance between work and personal life;
  • Contract – suited for professionals who want to work on projects for a set period;
  • Temporary – short-term work, either full-time or part-time, often offered for seasonal or urgent tasks;
  • Internship – a form of on-the-job training that allows you to gain practical experience in your chosen field.

Whether you're looking for stable full-time employment, the flexibility of freelancing, or a part-time side gig, you'll find plenty of options on Remoote.app.

Remote Working Opportunities for All Expertise Levels

We feature offers for people with all levels of expertise:

  • For beginners – ideal positions for those just starting their journey in working from home online;
  • For intermediate specialists – if you already have experience, you can explore positions requiring specific skills and knowledge in your field;
  • For experts – roles for highly skilled professionals ready to tackle complex tasks.

How to Start Your Online Job Search Through Our Platform?

To begin searching for home job opportunities, follow these three steps:

  1. Register and complete your profile. This process takes minimal time.
  2. Specify your skills, country of residence, and preferred position.
  3. Receive notifications about new vacancy openings and apply to suitable ones.

If you don't have a resume yet, use our online builder. It will help you create a professional document, highlighting your key skills and achievements. The AI will automatically optimize it to match job requirements, increasing your chances of a successful response. You can update your profile information at any time: modify your skills, add new preferences, or upload an updated resume.