Scala Job Salaries

Find salary information for remote positions requiring Scala skills. Make data-driven decisions about your career path.

Scala

Median high-range salary for jobs requiring Scala:

$189,000

This analysis is based on salary ranges collected from 33 job descriptions that match the search and allow working remotely. Choose a country to narrow down the search and view statistics exclusively for remote jobs available in that location.

The Median Salary Range is $150,000 - $189,000; a short sketch after the bullets below shows how figures like these can be derived from the advertised ranges.

  • 25% of job descriptions advertised a maximum salary above $259,000.
  • 5% of job descriptions advertised a maximum salary above $358,635.
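
The following is a rough sketch of how figures like these can be computed, assuming the advertised ranges are held as simple (minimum, maximum) pairs. The SalaryStats object, the sample figures, and the nearest-rank percentile definition are illustrative assumptions, not Remoote.app's actual method.

```scala
// Minimal sketch (assumed, not Remoote.app's code): summary statistics over
// advertised salary ranges given as (min, max) pairs.
object SalaryStats {
  // Median of a sequence of values.
  def median(values: Seq[Double]): Double = {
    val s = values.sorted
    val n = s.size
    if (n % 2 == 1) s(n / 2) else (s(n / 2 - 1) + s(n / 2)) / 2.0
  }

  // Nearest-rank percentile (one common definition among several).
  def percentile(values: Seq[Double], p: Double): Double = {
    val s = values.sorted
    val idx = math.ceil(p / 100.0 * s.size).toInt - 1
    s(math.max(0, math.min(idx, s.size - 1)))
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical advertised ranges in USD; the real input is the 33 postings above.
    val ranges = Seq((150000.0, 189000.0), (120000.0, 160000.0), (185500.0, 259000.0))
    val (mins, maxs) = (ranges.map(_._1), ranges.map(_._2))
    println(s"Median Salary Range: ${median(mins)} - ${median(maxs)}")
    println(s"25% of maximums exceed: ${percentile(maxs, 75)}") // 75th percentile
    println(s"5% of maximums exceed: ${percentile(maxs, 95)}")  // 95th percentile
  }
}
```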

Skills and Salary

Specific skills can have a substantial impact on salary ranges for jobs that match these search preferences. In these postings, the skills that correlate most strongly with higher advertised salaries are Java, Algorithms, and Docker. Employers tend to prioritize candidates who can demonstrate these in-demand skills, which can translate into stronger offers and better career advancement opportunities. A short sketch after the list below shows how such a per-skill breakdown can be computed.

  1. Java

    33% of jobs mention Java as a required skill. The Median Salary Range for these jobs is $185,500 - $259,000.

    • 25% of job descriptions advertised a maximum salary above $314,937.50.
    • 5% of job descriptions advertised a maximum salary above $702,255.
  2. Algorithms

    33% of jobs mention Algorithms as a required skill. The Median Salary Range for these jobs is $185,500 - $250,000.

    • 25% of job descriptions advertised a maximum salary above $287,812.50.
    • 5% of job descriptions advertised a maximum salary above $362,945.
  3. Docker

    33% of jobs mention Docker as a required skill. The Median Salary Range for these jobs is $155,000 - $210,600.

    • 25% of job descriptions advertised a maximum salary above $286,312.50.
    • 5% of job descriptions advertised a maximum salary above $362,945.
  4. Kubernetes

    42% of jobs mention Kubernetes as a required skill. The Median Salary Range for these jobs is $152,500 - $207,800.

    • 25% of job descriptions advertised a maximum salary above $293,750.
    • 5% of job descriptions advertised a maximum salary above $356,480.
  5. Python

    61% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $165,000 - $197,000.

    • 25% of job descriptions advertised a maximum salary above $264,500.
    • 5% of job descriptions advertised a maximum salary above $343,550.
  6. AWS

    48% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $156,000 - $189,500.

    • 25% of job descriptions advertised a maximum salary above $267,000.
    • 5% of job descriptions advertised a maximum salary above $352,170.
  7. Data engineering

    39% of jobs mention Data engineering as a required skill. The Median Salary Range for these jobs is $160,000 - $182,000.

    • 25% of job descriptions advertised a maximum salary above $231,000.
    • 5% of job descriptions advertised a maximum salary above $656,062.50.
  8. SQL

    55% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $152,000 - $181,000.

    • 25% of job descriptions advertised a maximum salary above $264,000.
    • 5% of job descriptions advertised a maximum salary above $560,800.
  9. Spark

    42% of jobs mention Spark as a required skill. The Median Salary Range for these jobs is $152,000 - $181,000.

    • 25% of job descriptions advertised a maximum salary above $218,000.
    • 5% of job descriptions advertised a maximum salary above $649,020.
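
The per-skill breakdown above follows the same pattern for each subset of postings: collect the jobs that mention a skill, then take the median of the advertised minimums and maximums within that group. Below is a minimal sketch of that grouping; the Job case class, its fields, and the sample data are hypothetical stand-ins for the real postings.

```scala
// Illustrative sketch of the per-skill breakdown (assumed structure, not the
// site's actual data model): group postings by mentioned skill and report each
// skill's median advertised range.
case class Job(skills: Set[String], minSalary: Double, maxSalary: Double)

object SkillBreakdown {
  def median(values: Seq[Double]): Double = {
    val s = values.sorted
    if (s.size % 2 == 1) s(s.size / 2) else (s(s.size / 2 - 1) + s(s.size / 2)) / 2.0
  }

  // Skill -> (median of advertised minimums, median of advertised maximums).
  def medianRangeBySkill(jobs: Seq[Job]): Map[String, (Double, Double)] =
    jobs
      .flatMap(job => job.skills.map(skill => skill -> job))
      .groupMap(_._1)(_._2) // Scala 2.13+ / 3.x
      .view
      .mapValues(js => (median(js.map(_.minSalary)), median(js.map(_.maxSalary))))
      .toMap

  def main(args: Array[String]): Unit = {
    // Hypothetical postings; the real input would be the 33 postings behind these statistics.
    val jobs = Seq(
      Job(Set("Scala", "Java"), 185500.0, 259000.0),
      Job(Set("Scala", "Python"), 165000.0, 197000.0),
      Job(Set("Scala", "Spark", "Python"), 152000.0, 181000.0)
    )
    medianRangeBySkill(jobs).foreach { case (skill, (lo, hi)) =>
      println(f"$skill%s: $$$lo%,.0f - $$$hi%,.0f")
    }
  }
}
```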

Industries and Salary

Industry plays a significant role in determining salary ranges for jobs that match these search preferences. Among these postings, the highest advertised compensation appears in Entertainment, Digital Advertising, and Software Development. Industry size, profitability, and market trends all influence pay levels within a sector, so consider industry-specific factors when evaluating potential career paths and salary expectations.

  1. Entertainment

    3% of jobs are in the Entertainment industry. The Median Salary Range for these jobs is $170,000 - $720,000.

  2. Digital Advertising

    3% of jobs are in the Digital Advertising industry. The Median Salary Range for these jobs is $260,800 - $365,100.

  3. Software Development

    36% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $152,000 - $214,300.

    • 25% of job descriptions advertised a maximum salary above $264,500.
    • 5% of job descriptions advertised a maximum salary above $319,175.
  4. Crypto and Web3

    3% of jobs are in the Crypto and Web3 industry. The Median Salary Range for these jobs is $152,000 - $190,000.

  5. AdTech

    3% of jobs are in the AdTech industry. The Median Salary Range for these jobs is $160,000 - $182,000.

  6. Cloud-native application analytics and security

    3% of jobs are in the Cloud-native application analytics and security industry. The Median Salary Range for these jobs is $155,000 - $180,000.

  7. Data Engineering

    3% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $126,100 - $168,150.

  8. Finance, Software Development

    3% of jobs are in the Finance, Software Development industry. The Median Salary Range for these jobs is $140,000 - $160,000.

  9. Financial Services

    3% of jobs are in the Financial Services industry. The Median Salary Range for these jobs is $140,000 - $160,000.

  10. AI Consultancy

    3% of jobs are in the AI Consultancy industry. The Median Salary Range for these jobs is $100,000 - $120,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

37 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.


πŸ“ Canada

πŸ’Έ 98,400 - 137,800 CAD per year

πŸ” Software Development

  • A degree in Computer Science or Engineering, and 5-8 years of experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, Java, Go, and shell script
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience with various types of data stores, query engines and frameworks, e.g. PostgreSQL, MySQL, S3, Redshift/Spectrum, Presto/Athena, Spark
  • Experience working with message queues such as Kafka and Kinesis
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data
  • Experience with data warehousing and data modeling best practices
  • Work within a cross-functional team (including analysts, product managers, and other developers) to deliver data products and services to our internal stakeholders
  • Conduct directed research and technical analysis of new candidate technologies that fill a development team’s business or technical need
  • Provide technical advice, act as a role model for your teammates, flawlessly execute complicated plans, and navigate many levels of the organization
  • Contribute enhancements to development, build, deployment, and monitoring processes with an emphasis on security, reliability and performance
  • Implement our technical roadmap as we scale our services and build new data products
  • Participate in code reviews, attend regular team meetings, and apply software development best practices
  • Take ownership of your work, and work autonomously when necessary
  • Recognize opportunities to improve efficiency in our data systems and processes, increase data quality, and enable consistent and reliable results
  • Participate in the design and implementation of our next generation data platform to empower Hootsuite with data
  • Participate in the development of the technical hiring process and interview scripts with an aim of attracting and hiring the best developers

AWS, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, Data Mining, ETL, Java, Kafka, MySQL, Software Architecture, Algorithms, API testing, Data engineering, Data Structures, Go, Serverless, Spark, CI/CD, RESTful APIs, Microservices, Scala, Data visualization, Data modeling, Data management

Posted about 5 hours ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 216,000 - 264,000 USD per year

πŸ” Healthcare

🏒 Company: Machinify (πŸ‘₯ 51-100, πŸ’° $10,000,000 Series A over 6 years ago); Artificial Intelligence (AI), Business Intelligence, Predictive Analytics, SaaS, Machine Learning, Analytics

  • 10+ years of backend focused experience in the field of application programming
  • Strong working experience with Java or Scala
  • Experience reading and understanding complex enterprise-grade code, quickly contributing to it, and suggesting improvements.
  • Working experience writing SQL queries
  • Additional Python and C# or other backend languages are a plus
  • Strong CS foundation (data structures, asynchronous programming)
  • Excellence in test-writing discipline
  • Critical thinking and problem-solving skills, and the ability to work in a high-growth environment
  • Comfortable navigating ambiguity
  • BS or MS in Computer Science (or equivalent experience)
  • Contribute to backend server-side development to ensure our application is extensible, scalable, and secure
  • Recognize and prioritize between Customer deliverables & Tech debt to develop a sustainable software suite of products.
  • Enjoy designing and architecting complex frameworks for applying ML techniques to large data volumes and simplifying labor-intensive processes
  • Deliver resilient enterprise software solutions

AWS, Backend Development, Docker, Leadership, Project Management, SQL, Design Patterns, Git, Java, Kubernetes, Machine Learning, Software Architecture, Spring Boot, Algorithms, Data Structures, Java Spring, REST API, Communication Skills, CI/CD, Problem Solving, RESTful APIs, Microservices, Critical thinking, Scala, Software Engineering

Posted 1 day ago

πŸ“ United States

πŸ’Έ 144,000 - 180,000 USD per year

πŸ” Software Development

🏒 Company: Hungryroot (πŸ‘₯ 101-250, πŸ’° $40,000,000 Series C almost 4 years ago); Artificial Intelligence (AI), Food and Beverage, E-Commerce, Retail, Consumer Goods, Software

  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago

πŸ“ United States

πŸ’Έ 135,000 - 155,000 USD per year

πŸ” Software Development

🏒 Company: Jobgether (πŸ‘₯ 11-50, πŸ’° $1,493,585 Seed about 2 years ago); Internet

  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration and frameworks like Informatica and Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services.
  • Integrate third-party data sources and APIs into the data ecosystem.

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted 2 days ago

πŸ“ Canada

🧭 Full-Time

πŸ’Έ 98,400 - 137,800 CAD per year

πŸ” Data Technology

  • A degree in Computer Science or Engineering, and senior-level experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact.
  • Experience planning and leading a team using Scrum agile methodology ensuring timely delivery and continuous improvement.
  • Experience liaising with various business stakeholders to understand their data requirements and convey the technical solutions.
  • Experience with data warehousing and data modeling best practices.
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data.
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, SQL and shell script.
  • Experience with various types of data stores, query engines and data frameworks, e.g. PostgreSQL, MySQL, S3, Redshift, Presto/Athena, Spark and dbt.
  • Experience working with message queues such as Kafka and Kinesis
  • Experience with ETL and pipeline orchestration such as Airflow, AWS Glue
  • Experience with JIRA in managing sprints and roadmaps
  • Lead development and maintenance of scalable and efficient data pipeline architecture
  • Work within cross-functional teams, including Data Science, Analytics, Software Development, and business units, to deliver data products and services.
  • Collaborate with business stakeholders and translate requirements into scalable data solutions.
  • Monitor and communicate project statuses while mitigating risk and resolving issues.
  • Work closely with the Senior Manager to align team priorities with business objectives.
  • Assess and prioritize the team's work, appropriately delegating to others and encouraging team ownership.
  • Proactively share information, actively solicit feedback, and facilitate communication, within teams and other departments.
  • Design, write, test, and deploy high quality scalable code.
  • Maintain high standards of security, reliability, scalability, performance, and quality in all delivered projects.
  • Contribute to shape our technical roadmap as we scale our services and build our next generation data platform.
  • Build, support and lead a high performance, cohesive team of developers, in close partnership with the Senior Manager, Data Analytics.
  • Participate in the hiring process, with an aim of attracting and hiring the best developers.
  • Facilitate ongoing development conversations with your team to support their learning and career growth.

AWS, Backend Development, Leadership, PostgreSQL, Python, Software Development, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Kubernetes, MySQL, SCRUM, Jira, Cross-functional Team Leadership, Algorithms, Amazon Web Services, API testing, Data engineering, Data Structures, REST API, Serverless, Strategic Management, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Agile methodologies, Mentoring, Linux, DevOps, Organizational skills, Time Management, Written communication, Problem-solving skills, Scala, Risk Management, Data visualization, Team management, Technical support, Data modeling, Data analytics

Posted 6 days ago

πŸ“ Canada

πŸ’Έ 98,400 - 137,800 CAD per year

πŸ” Data Technology

🏒 Company: Hootsuite (πŸ‘₯ 1001-5000, πŸ’° $50,000,000 Debt Financing almost 7 years ago, πŸ«‚ Last layoff about 2 years ago); Digital Marketing, Social Media Marketing, Social Media Management, Apps

  • A degree in Computer Science or Engineering, and senior-level experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact.
  • Experience planning and leading a team using Scrum agile methodology ensuring timely delivery and continuous improvement.
  • Experience liaising with various business stakeholders to understand their data requirements and convey the technical solutions.
  • Experience with data warehousing and data modeling best practices.
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data.
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, SQL and shell script.
  • Experience with various types of data stores, query engines and data frameworks, e.g. PostgreSQL, MySQL, S3, Redshift, Presto/Athena, Spark and dbt.
  • Experience working with message queues such as Kafka and Kinesis
  • Experience with ETL and pipeline orchestration such as Airflow, AWS Glue
  • Experience with JIRA in managing sprints and roadmaps
  • Lead development and maintenance of scalable and efficient data pipeline architecture
  • Work within cross-functional teams, including Data Science, Analytics, Software Development, and business units, to deliver data products and services.
  • Collaborate with business stakeholders and translate requirements into scalable data solutions.
  • Monitor and communicate project statuses while mitigating risk and resolving issues.
  • Work closely with the Senior Manager to align team priorities with business objectives.
  • Assess and prioritize the team's work, appropriately delegating to others and encouraging team ownership.
  • Proactively share information, actively solicit feedback, and facilitate communication, within teams and other departments.
  • Design, write, test, and deploy high quality scalable code.
  • Maintain high standards of security, reliability, scalability, performance, and quality in all delivered projects.
  • Contribute to shape our technical roadmap as we scale our services and build our next generation data platform.
  • Build, support and lead a high performance, cohesive team of developers, in close partnership with the Senior Manager, Data Analytics.
  • Participate in the hiring process, with an aim of attracting and hiring the best developers.
  • Facilitate ongoing development conversations with your team to support their learning and career growth.

AWS, Leadership, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, ETL, MySQL, SCRUM, Cross-functional Team Leadership, Algorithms, Apache Kafka, Data engineering, Data Structures, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Mentoring, DevOps, Written communication, Coaching, Scala, Data visualization, Team management, Data modeling, Data analytics, Data management

Posted 6 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ’Έ 230,000 - 322,000 USD per year

πŸ” Software Development

  • 7+ years of contributing high-quality code to production systems that operate at scale.
  • 5+ years of experience building control systems, PID controllers, multi-armed bandits, reinforcement learning algorithms, or bid/pricing optimization systems.
  • Experience leading large engineering teams and collaborating with cross-functional partners is required.
  • Experience designing optimization algorithms in an ad serving platform and/or other marketplaces is preferred.
  • Experience with state of the art control systems, reinforcement learning algorithms is a strong plus.
  • Building Reddit-scale optimizations to improve advertiser outcomes using cutting-edge techniques in the industry.
  • Leverage live auction data and model predictions to adjust campaign bids in real time.
  • Incorporate knowledge of the Reddit ads marketplace into budget pacing algorithms powered by control & reinforcement learning systems
  • Lead the team on designing new bid & budget optimization products and algorithms as well as conducting rigorous A/B experiments to evaluate the business impact.
  • Actively participate and work with other leads to set the long term direction for the team, plan and oversee engineering designs and project execution.

AWS, Docker, Leadership, PostgreSQL, Python, SQL, Cloud Computing, Data Analysis, ElasticSearch, GCP, Java, Kubernetes, Machine Learning, PyTorch, Cross-functional Team Leadership, Algorithms, Cassandra, Data Structures, REST API, Redis, Tensorflow, Scala, Data modeling, A/B testing

Posted 7 days ago

πŸ“ United States

πŸ’Έ 220,000 - 250,000 USD per year

πŸ” Machine Learning

🏒 Company: Inspiren (πŸ‘₯ 11-50, πŸ’° $2,720,602 over 2 years ago); Machine Learning, Analytics, Information Technology, Health Care

  • 8+ years of professional experience in machine learning, software engineering, or a similar domain
  • 5+ years with machine learning algorithms, model development, and model deployment
  • 5+ years of experience with computer vision
  • 1+ years of experience with generative AI
  • 5+ years of demonstrated ability to provide technical leadership, mentor team members, and drive consensus among diverse stakeholders
  • 3+ years in advanced algorithm design and analysis
  • Proficiency in Python, R, or Scala languages for data science
  • Experience with TensorFlow, Keras, or PyTorch for neural network modeling
  • Ability to optimize machine learning models for performance
  • Proven track record of deploying machine learning models into production
  • Excellent communication and presentation skills, with the ability to explain complex ideas clearly and concisely
  • Proven ability to collaborate effectively in a cross-functional environment
  • Develop strategic goals for machine learning that are in sync with current industry trends and Inspiren’s business objectives. Provide guidance on resource distribution for projects and identify risks and opportunities within the machine learning landscape to inform decision-making.
  • Oversee the design of innovative machine learning models and algorithms, and refine existing ones to enhance performance and accuracy. Collaborate with cross-functional teams to successfully integrate these algorithms into our product offerings.
  • Mentor and nurture the professional growth of senior machine learning experts. Foster an environment that emphasizes continuous learning and innovation among the team.
  • Drive innovative research and development in machine learning to keep our technology at the forefront. Implement the latest machine learning technologies to enhance product capabilities and maintain competitive edge.

Python, Data Analysis, Image Processing, Keras, Machine Learning, Numpy, PyTorch, Algorithms, Data Structures, Pandas, Tensorflow, Scala, Data modeling

Posted 8 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 185,500 - 293,750 USD per year

πŸ” Software Development

🏒 Company: Upwork (πŸ‘₯ 501-1000, πŸ’° about 8 years ago, πŸ«‚ Last layoff almost 2 years ago); Marketplace, Freelance, Copywriting, Peer to Peer

  • Strong technical expertise in designing and building scalable ML infrastructure.
  • Experience with distributed systems and cloud-based ML platforms.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Deep understanding of ML workflows, including data pipelines, model training, and deployment.
  • Passion for innovation and eagerness to implement the latest advancements in ML infrastructure.
  • Strong problem-solving skills and ability to optimize complex systems for performance and reliability.
  • Collaborative mindset with excellent communication skills to work across teams.
  • Ability to thrive in a fast-paced, dynamic environment with evolving technical challenges.
  • Design, implement, and optimize distributed systems and infrastructure components to support large-scale machine learning workflows, including data ingestion, feature engineering, model training, and serving.
  • Develop and maintain frameworks, libraries, and tools that streamline the end-to-end machine learning lifecycle, from data preparation and experimentation to model deployment and monitoring.
  • Architect and implement highly available, fault-tolerant, and secure systems that meet the performance and scalability requirements of production machine learning workloads.
  • Collaborate with machine learning researchers and data scientists to understand their requirements and translate them into scalable and efficient software solutions.
  • Stay current with advancements in machine learning infrastructure, distributed computing, and cloud technologies, integrating them into our platform to drive innovation.
  • Mentor junior engineers, conduct code reviews, and uphold engineering best practices to ensure the delivery of high-quality software solutions.

AWS, Docker, Leadership, Python, Software Development, SQL, Cloud Computing, Java, Kubeflow, Kubernetes, Machine Learning, MLFlow, Algorithms, Data engineering, Data Structures, REST API, Collaboration, CI/CD, Problem Solving, Mentoring, Linux, DevOps, Terraform, Excellent communication skills, Scala, Data modeling

Posted 8 days ago
πŸ”₯ Sr. Data Engineer
Posted 12 days ago

πŸ“ United States

🧭 Full-Time

πŸ’Έ 126,100 - 168,150 USD per year

πŸ” Data Engineering

🏒 Company: firstamericancareers

  • 5+ years of development experience with any of the following software languages: Python or Scala, and SQL (we use SQL & Python) with cloud experience (Azure preferred or AWS).
  • Hands-on data security and cloud security methodologies. Experience in configuration and management of data security to meet compliance and CISO security requirements.
  • Experience creating and maintaining data intensive distributed solutions (especially involving data warehouse, data lake, data analytics) in a cloud environment.
  • Hands-on experience in modern Data Analytics architectures encompassing data warehouse, data lake etc. designed and engineered in a cloud environment.
  • Proven professional working experience in Event Streaming Platforms and data pipeline orchestration tools like Apache Kafka, Fivetran, Apache Airflow, or similar tools
  • Proven professional working experience in any of the following: Databricks, Snowflake, BigQuery, Spark in any flavor, HIVE, Hadoop, Cloudera or RedShift.
  • Experience developing in a containerized local environment like Docker, Rancher, or Kubernetes preferred
  • Data Modeling
  • Build high-performing cloud data solutions to meet our analytical and BI reporting needs.
  • Design, implement, test, deploy, and maintain distributed, stable, secure, and scalable data intensive engineering solutions and pipelines in support of data and analytics projects on the cloud, including integrating new sources of data into our central data warehouse, and moving data out to applications and other destinations.
  • Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability, etc.
  • Build and enhance a shared data lake that powers decision-making and model building.
  • Partner with teams across the business to understand their needs and develop end-to-end data solutions.
  • Collaborate with analysts and data scientists to perform exploratory analysis and troubleshoot issues.
  • Manage and model data using visualization tools to provide the company with a collaborative data analytics platform.
  • Build tools and processes to help make the correct data accessible to the right people.
  • Participate in active rotational support role for production during or after business hours supporting business continuity.
  • Engage in collaboration and decision making with other engineers.
  • Design schema and data pipelines to extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
  • Create, maintain, and optimize database structures to efficiently store and retrieve large volumes of data.
  • Evaluate data trends and model simple to complex data solutions that meet day-to-day business demand and plan for future business and technological growth.
  • Implement data cleansing processes and oversee data quality to maintain accuracy.
  • Function as a key member of the team to drive development, delivery, and continuous improvement of the cloud-based enterprise data warehouse architecture.

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Hadoop, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, Spark, Scala, Data visualization, Data modeling, Data analytics

Shown 10 out of 37