Hadoop Job Salaries

Find salary information for remote positions requiring Hadoop skills. Make data-driven decisions about your career path.

Hadoop

Median high-range salary for jobs requiring Hadoop:

$190,570

This analysis is based on salary ranges collected from 15 job descriptions that match the search and allow working remotely. Choose a country to narrow down the search and view statistics exclusively for remote jobs available in that location.

The Median Salary Range is $164,621 - $190,570

  • 25% of job descriptions advertised a maximum salary above $211,194.
  • 5% of job descriptions advertised a maximum salary above $404,815.50.
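These figures are order statistics over the advertised ranges: the headline range takes the median of the minimums and the median of the maximums, while the two bullets are the 75th and 95th percentiles of the maximums. A minimal sketch, using hypothetical placeholder ranges (the 15 underlying postings are not reproduced here):

```python
# Compute the median salary range and upper-percentile thresholds from
# advertised (min, max) salary ranges. The ranges below are hypothetical
# placeholders, not the actual postings behind this page.
from statistics import median, quantiles

ranges = [
    (120_000, 150_000),
    (150_000, 190_000),
    (160_000, 213_000),
    (165_000, 211_000),
    (170_000, 405_000),
]

lows = sorted(lo for lo, _ in ranges)
highs = sorted(hi for _, hi in ranges)

# "The Median Salary Range is X - Y"
median_range = (median(lows), median(highs))

# "25% advertised a maximum above ..." = 75th percentile of the maximums;
# "5% advertised a maximum above ..." = 95th percentile.
p75 = quantiles(highs, n=4, method="inclusive")[2]
p95 = quantiles(highs, n=20, method="inclusive")[18]
```

Run against the real data, `median_range` would reproduce the $164,621 - $190,570 headline and `p75`/`p95` the two bullet thresholds.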

Skills and Salary

Specific skills can have a substantial impact on salary ranges for jobs that match this search. Certain in-demand skills are highly valued by employers and can significantly boost compensation, reflecting the particular requirements and challenges of these roles. Among the skills that correlate with higher salaries here are Machine Learning, Data modeling, and Communication Skills. Candidates who demonstrate these skills are more competitive in the job market, since employers prioritize people who can contribute directly to the organization's success, and mastering them can translate into higher earning potential and better advancement opportunities.

  1. Machine Learning

    40% of jobs mention Machine Learning as a required skill. The Median Salary Range for these jobs is $167,310.50 - $217,838.

    • 25% of job descriptions advertised a maximum salary above $303,900.
    • 5% of job descriptions advertised a maximum salary above $438,454.
  2. Data modeling

    40% of jobs mention Data modeling as a required skill. The Median Salary Range for these jobs is $151,550 - $201,785.

    • 25% of job descriptions advertised a maximum salary above $303,900.
    • 5% of job descriptions advertised a maximum salary above $438,454.
  3. Communication Skills

    40% of jobs mention Communication Skills as a required skill. The Median Salary Range for these jobs is $173,500 - $196,500.

    • 25% of job descriptions advertised a maximum salary above $303,900.
    • 5% of job descriptions advertised a maximum salary above $438,454.
  4. Python

    93% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $164,621 - $195,285.

    • 25% of job descriptions advertised a maximum salary above $213,000.
    • 5% of job descriptions advertised a maximum salary above $411,543.20.
  5. SQL

    93% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $164,621 - $195,285.

    • 25% of job descriptions advertised a maximum salary above $213,000.
    • 5% of job descriptions advertised a maximum salary above $411,543.20.
  6. ETL

    47% of jobs mention ETL as a required skill. The Median Salary Range for these jobs is $126,100 - $190,570.

    • 25% of job descriptions advertised a maximum salary above $205,776.
    • 5% of job descriptions advertised a maximum salary above $213,000.
  7. Data engineering

    80% of jobs mention Data engineering as a required skill. The Median Salary Range for these jobs is $164,621 - $190,285.

    • 25% of job descriptions advertised a maximum salary above $209,388.
    • 5% of job descriptions advertised a maximum salary above $424,998.60.
  8. Spark

    60% of jobs mention Spark as a required skill. The Median Salary Range for these jobs is $165,000 - $190,000.

    • 25% of job descriptions advertised a maximum salary above $217,225.
    • 5% of job descriptions advertised a maximum salary above $303,900.
  9. AWS

    67% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $135,550 - $182,500.

    • 25% of job descriptions advertised a maximum salary above $213,000.
    • 5% of job descriptions advertised a maximum salary above $438,454.
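Each per-skill entry above follows the same recipe: filter to postings tagged with the skill, then report the share of jobs and the medians of the range endpoints. A hypothetical sketch (the job data below is invented for illustration, and `skill_stats` is not a function from any real library):

```python
# Per-skill salary statistics over tagged postings. The jobs below are
# invented placeholders, not real listings.
from statistics import median

jobs = [
    {"skills": {"Python", "SQL", "Spark"}, "range": (165_000, 190_000)},
    {"skills": {"Python", "SQL", "AWS"}, "range": (135_550, 182_500)},
    {"skills": {"Spark", "AWS"}, "range": (177_000, 213_000)},
]

def skill_stats(skill):
    """Share of jobs mentioning `skill` and their median salary range."""
    matching = [job["range"] for job in jobs if skill in job["skills"]]
    share = len(matching) / len(jobs)
    lo = median(low for low, _ in matching)
    hi = median(high for _, high in matching)
    return share, (lo, hi)
```

With this toy data, `skill_stats("Spark")` would report that 2 of 3 jobs mention Spark, with the median range taken over those two postings' endpoints.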

Industries and Salary

Industry plays a crucial role in determining salary ranges for jobs that match this search, and some industries offer significantly higher compensation than others. Among the industries known for competitive salaries in these roles are Digital Advertising, Online Advertising, and Fintech; these sectors tend to have strong demand for skilled professionals and are willing to invest in talent to meet their growth objectives. Industry size, profitability, and market trends all influence salary levels within a sector, so it is worth weighing industry-specific factors when evaluating potential career paths and salary expectations.

  1. Digital Advertising

    7% of jobs are in the Digital Advertising industry. The Median Salary Range for these jobs is $217,000 - $303,900.

  2. Online Advertising

    7% of jobs are in the Online Advertising industry. The Median Salary Range for these jobs is $164,200 - $229,900.

  3. Fintech

    7% of jobs are in the Fintech industry. The Median Salary Range for these jobs is $177,000 - $213,000.

  4. Data Movement and Management

    7% of jobs are in the Data Movement and Management industry. The Median Salary Range for these jobs is $164,621 - $205,776.

  5. Data movement and analytics technology

    7% of jobs are in the Data movement and analytics technology industry. The Median Salary Range for these jobs is $164,621 - $205,776.

  6. Data Engineering

    13% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $153,050 - $184,075.

    • 25% of job descriptions advertised a maximum salary above $200,000.
  7. SMS cloud communications

    7% of jobs are in the SMS cloud communications industry. The Median Salary Range for these jobs is $170,000 - $180,000.

  8. Data Security and Compliance

    7% of jobs are in the Data Security and Compliance industry. The Median Salary Range for these jobs is $145,000 - $175,000.

  9. Software Development

    27% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $138,900 - $158,450.

    • 25% of job descriptions advertised a maximum salary above $314,227.
    • 5% of job descriptions advertised a maximum salary above $438,454.
  10. Benefits technology and services

    7% of jobs are in the Benefits technology and services industry. The Median Salary Range for these jobs is $100,000 - $130,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

17 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

πŸ”₯ Data Engineer
Posted 1 day ago

πŸ“ United States

πŸ’Έ 112,800 - 126,900 USD per year

πŸ” Software Development

🏒 Company: Titan Cloud

  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • MySQL, MSSQL Database, Postgres, Python
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Develop and maintain data pipeline designs.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Design, build, and maintain the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines.
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Be a team player, able to work in an agile environment.

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting


πŸ“ United States, Australia, Canada, South America

🧭 Full-Time

πŸ’Έ 177,000 - 213,000 USD per year

πŸ” FinTech

🏒 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using DBT.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling.
  • Closely collaborating with the Analytics team for data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.

AWS, Python, SQL, Bash, Design Patterns, ETL, Hadoop, Java, Kafka, Snowflake, Airflow, Algorithms, Cassandra, Data engineering, Data Structures, NoSQL, Spark, Communication Skills, CI/CD, RESTful APIs, Terraform, Written communication, Documentation, Data modeling, Debugging

Posted 2 days ago

πŸ“ United States

πŸ’Έ 117,400 - 190,570 USD per year

🏒 Company: healthfirst

  • 8+ years of overall IT experience
  • Enterprise experience with scripting languages, primarily Python and PySpark, building enterprise frameworks
  • Enterprise experience with data ingestion methodologies using ETL tools (Glue, DBT, or others)
  • Enterprise experience with data warehousing concepts and big data technologies such as EMR and Hadoop
  • Enterprise experience with cloud infrastructure such as AWS, GCP, or Azure
  • Strong SQL expertise across different relational and NoSQL databases.
  • Designs and implements standardized data management procedures around data staging, data ingestion, data preparation, data provisioning, and data destruction (e.g., scripts, programs, automation, etc.)
  • Ensures quality of technical solutions as data moves across multiple zones and environments
  • Provides insight into the changing data environment, data processing, data storage and utilization requirements for the company, and offer suggestions for solutions
  • Ensures managed analytic assets to support the company’s strategic goals by creating and verifying data acquisition requirements and strategy
  • Develops, constructs, tests, and maintains architectures
  • Aligns architecture with business requirements and uses programming language and tools
  • Identifies ways to improve data reliability, efficiency, and quality
  • Conducts research for industry and business questions
  • Deploys sophisticated analytics programs, machine learning, and statistical methods to efficiently implement solutions
  • Prepares data for predictive and prescriptive modeling and find hidden patterns using data
  • Uses data to discover tasks that can be automated
  • Creates data monitoring capabilities for each business process and works with data consumers on updates
  • Aligns data architecture to the solution architecture; contributes to overall solution architecture
  • Develops patterns for standardizing the environment technology stack
  • Helps maintain the integrity and security of company data
  • Additional duties as assigned or required

AWS, Python, SQL, ETL, Hadoop, Data engineering, CI/CD, DevOps, Data modeling, Scripting

Posted 3 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 217,000 - 303,900 USD per year

πŸ” Digital Advertising

🏒 Company: Reddit πŸ‘₯ 1001-5000 πŸ’° $410,000,000 Series F over 3 years ago πŸ«‚ Last layoff almost 2 years ago | News, Content, Social Network, Social Media

  • M.S.: 10+ years of industry data science experience, emphasizing experimentation and causal inference.
  • Ph.D.: 6+ years of industry data science experience, emphasizing experimentation and causal inference.
  • Master's or Ph.D. in Statistics, Economics, Computer Science, or a related quantitative field
  • Expertise in experimental design, A/B testing, and causal inference
  • Proficiency in statistical programming (Python/R) and SQL
  • Demonstrated ability to apply statistical principles of experimentation (hypothesis testing, p-values, etc.)
  • Experience with large-scale data analysis and manipulation
  • Strong technical communication skills for both technical and non-technical audiences
  • Ability to thrive in fast-paced, ambiguous environments and drive action
  • Desire to mentor and elevate data science practices
  • Experience with digital advertising and marketplace dynamics (preferred)
  • Experience with advertising technology (preferred)
  • Lead the design, implementation, and analysis of sophisticated A/B tests and experiments, leveraging innovative techniques like Bayesian approaches and causal inference to optimize complex ad strategies
  • Extract critical insights through in-depth analysis, developing automated tools and actionable recommendations to drive impactful decisions Define and refine key metrics to empower product teams with a deeper understanding of feature performance
  • Partner with product and engineering to shape experiment roadmaps and drive data-informed product development
  • Provide technical leadership, mentor junior data scientists, and establish best practices for experimentation
  • Drive impactful results by collaborating effectively with product, engineering, sales, and marketing teams

AWS, Python, SQL, Apache Airflow, Data Analysis, Hadoop, Machine Learning, NumPy, Cross-functional Team Leadership, Product Development, Algorithms, Data engineering, Data science, Regression testing, Pandas, Spark, Communication Skills, Analytical Skills, Mentoring, Data visualization, Data modeling, A/B testing

Posted 5 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 126,100 - 168,150 USD per year

πŸ” Data Engineering

🏒 Company: firstamericancareers

  • 5+ years of development experience with any of the following languages: Python or Scala, and SQL (we use SQL & Python), with cloud experience (Azure preferred, or AWS).
  • Hands-on data security and cloud security methodologies. Experience in configuration and management of data security to meet compliance and CISO security requirements.
  • Experience creating and maintaining data intensive distributed solutions (especially involving data warehouse, data lake, data analytics) in a cloud environment.
  • Hands-on experience in modern Data Analytics architectures encompassing data warehouse, data lake etc. designed and engineered in a cloud environment.
  • Proven professional working experience in Event Streaming Platforms and data pipeline orchestration tools like Apache Kafka, Fivetran, Apache Airflow, or similar tools
  • Proven professional working experience in any of the following: Databricks, Snowflake, BigQuery, Spark in any flavor, HIVE, Hadoop, Cloudera or RedShift.
  • Experience developing in a containerized local environment like Docker, Rancher, or Kubernetes preferred
  • Data Modeling
  • Build high-performing cloud data solutions to meet our analytical and BI reporting needs.
  • Design, implement, test, deploy, and maintain distributed, stable, secure, and scalable data intensive engineering solutions and pipelines in support of data and analytics projects on the cloud, including integrating new sources of data into our central data warehouse, and moving data out to applications and other destinations.
  • Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability, etc.
  • Build and enhance a shared data lake that powers decision-making and model building.
  • Partner with teams across the business to understand their needs and develop end-to-end data solutions.
  • Collaborate with analysts and data scientists to perform exploratory analysis and troubleshoot issues.
  • Manage and model data using visualization tools to provide the company with a collaborative data analytics platform.
  • Build tools and processes to help make the correct data accessible to the right people.
  • Participate in active rotational support role for production during or after business hours supporting business continuity.
  • Engage in collaboration and decision making with other engineers.
  • Design schema and data pipelines to extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
  • Create, maintain, and optimize database structures to efficiently store and retrieve large volumes of data.
  • Evaluate data trends and model simple to complex data solutions that meet day-to-day business demand and plan for future business and technological growth.
  • Implement data cleansing processes and oversee data quality to maintain accuracy.
  • Function as a key member of the team to drive development, delivery, and continuous improvement of the cloud-based enterprise data warehouse architecture.

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Hadoop, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, Spark, Scala, Data visualization, Data modeling, Data analytics

Posted 7 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 180,000 - 200,000 USD per year

πŸ” Data Engineering

🏒 Company: InMarket πŸ‘₯ 251-500 πŸ’° $11,500,000 Debt Financing almost 4 years ago | Digital Marketing, Advertising, Mobile Advertising, Marketing

  • Strong SQL experience
  • Expert in a data pipelining framework (Airflow, Luigi, etc.)
  • Experience building ETL pipelines with Python, SQL, and Spark
  • Strong software engineering skills in Java or Python
  • Experience optimizing data warehouses on cloud platforms
  • Understanding of Big Data Technologies (Hadoop, Spark)
  • Knowledge of Kubernetes, Docker, and CD/CI best practices
  • B.S. or M.S. in Computer Science or a related field
  • Design and implement ETL pipelines in Apache Airflow, Big Query, Python, and Spark
  • Promote Data Engineering best practices
  • Architect and plan complex cross team projects
  • Provide technical guidance to engineers
  • Communicate analyses effectively to stakeholders
  • Identify areas for process improvement

Docker, Python, SQL, Apache Airflow, GCP, Hadoop, Kubernetes, Spark

Posted 11 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 145,000 - 175,000 USD per year

πŸ” Data Security and Compliance

🏒 Company: BigID πŸ‘₯ 251-500 πŸ’° $60,000,000 Series E 12 months ago | Artificial Intelligence (AI), Big Data, Risk Management, Cyber Security, Software

  • 3+ years in pre-sales engineering or consulting
  • Experience with RDBMS and NoSQL data stores
  • Familiarity with cloud technologies like AWS
  • Collaborate with Sales as a technical expert
  • Consult on best practices and solution design
  • Ensure customer realizes value from BigID Platform

AWS, Docker, DynamoDB, Hadoop, MongoDB, Cassandra, RDBMS, Linux

Posted 17 days ago
πŸ”₯ Data Engineer
Posted 22 days ago

πŸ“ United States

πŸ’Έ 165,000 - 190,000 USD per year

πŸ” Software Development

🏒 Company: ThalamusGME

  • 5+ years of experience in data engineering
  • 5+ years of demonstrated experience with Spark and PySpark
  • 5+ years of demonstrated experience with a variety of data formats: JSON, XML, YAML, and other structured and unstructured data
  • 5+ years of experience with Python best practices for coding and documentation
  • 5+ years of experience with BI visualization (Tableau reporting), databases (Postgres, SQL Server, and Snowflake), and distributed databases (Hadoop, Databricks)
  • Strong SQL knowledge
  • Working in Azure or AWS
  • Collaborate with application engineers, data scientists, product managers, and technical support (CX team)
  • Implement data pipelines from RDBMS, application logs and unstructured data sources
  • Implement data aggregation, data cubes, verification, and cleansing solutions
  • Work with devops engineers to design scalable cloud solutions
  • Write readable, maintainable code for agile development in a highly collaborative workspace
  • Plan new data acquisition, storage and maintenance solutions
  • Work with application engineers to create efficient, reliable data products for ingestion into other applications
  • Develop data engineering model and best practices for a growing data-centric organization

AWS, PostgreSQL, Python, SQL, Hadoop, Snowflake, Tableau, Data engineering, Spark, JSON, Data visualization


πŸ“ United States

🧭 Regular

πŸ’Έ 212,964 - 438,454 USD per year

πŸ” Software Development

🏒 Company: Pinterest πŸ‘₯ 5001-10000 πŸ’° Post-IPO Equity over 2 years ago πŸ«‚ Last layoff about 2 years ago | Internet, Social Network, Software, Social Media, Social Bookmarking

  • Experience leading eng teams on large-scale ML recommendation or ads systems.
  • Domain expertise on ML. Experience in areas such as user modeling, NLP and recommendation systems is a bonus.
  • Ability to drive the roadmap and directions of scalable production quality systems end-to-end.
  • 5+ years of experience managing an ML engineering team working on production ML systems. 10+ years of industry experience.
  • Bachelor’s or Master’s degree in a relevant field such as computer science, or equivalent experience.
  • Lead a team of experienced ML & backend engineers to build cutting edge user understanding models and systems, which are widely incorporated in Pinterest products across Discovery (Homefeed, Search, Related Pins), Ads, Shopping and Growth.
  • Partner closely with vertical teams across Pinterest to experiment new ML models / systems and deliver end-to-end metric impact.
  • Be a thought leader on user modeling and recommender systems, set and execute technical vision, and improve state-of-the-art technology.
  • Partner with stakeholders to expand impact across the company, including product management, data scientists and design.
  • Hire, mentor and grow managers, leaders and engineers on the team. Build a culture of excellence and expertise.

AWS, Backend Development, Docker, Leadership, Project Management, Python, SQL, Apache Airflow, Artificial Intelligence, Cloud Computing, Data Analysis, Hadoop, Kubernetes, Machine Learning, MLflow, People Management, Cross-functional Team Leadership, Algorithms, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, CI/CD, Mentoring, Terraform, Data modeling, Software Engineering

Posted about 1 month ago
πŸ”₯ Data Scientist, Ads
Posted about 1 month ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 164,200 - 229,900 USD per year

πŸ” Online Advertising

🏒 Company: Reddit πŸ‘₯ 1001-5000 πŸ’° $410,000,000 Series F over 3 years ago πŸ«‚ Last layoff almost 2 years ago | News, Content, Social Network, Social Media

  • MS or PhD in Computer Science, Statistics, Mathematics, or a related field required.
  • 3+ years of experience in data science, machine learning, or a related field.
  • Strong understanding of statistical modeling, machine learning algorithms, causal inference, and experimental design.
  • Experience with large-scale data processing using tools such as Spark, Hadoop, or Hive; knowledge of BigQuery is a plus.
  • Proficiency in Python or R and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch.
  • Experience with SQL and relational databases.
  • Excellent communication and presentation skills.
  • Passion for Reddit and the online advertising industry.
  • Design, develop, and apply data science solutions to improve advertiser experience and performance of Reddit's ad platform.
  • Analyze large-scale datasets to identify trends and insights for enhancing advertising effectiveness.
  • Collaborate with product managers and engineers to define product requirements and implement data science solutions.
  • Develop machine learning models to enhance anomaly detection, prediction, and pattern recognition.
  • Communicate findings and recommendations to various stakeholders.
  • Stay up-to-date on advancements in machine learning and data science.

Python, SQL, Hadoop, Machine Learning, PyTorch, Data science, Spark, TensorFlow

Shown 10 out of 17