Spark Job Salaries

Find salary information for remote positions requiring Spark skills. Make data-driven decisions about your career path.

Spark

Median high-range salary for jobs requiring Spark:

$220,000

This analysis is based on salary ranges collected from 73 job descriptions that match the search and allow working remotely.

The Median Salary Range is $169,000 - $220,000

  • 25% of job descriptions advertised a maximum salary above $256,250.
  • 5% of job descriptions advertised a maximum salary above $318,543.65.

Skills and Salary

Specific skills can have a substantial impact on salary ranges for jobs matching these search preferences. Among the skills that correlate most strongly with higher advertised salaries are Kafka, Kubernetes, and Algorithms. Employers often prioritize candidates who bring these in-demand skills, so demonstrating them can strengthen your competitiveness and increase both earning potential and advancement opportunities.

  1. Kafka

    27% of jobs mention Kafka as a required skill. The Median Salary Range for these jobs is $195,000 - $251,400.

    • 25% of job descriptions advertised a maximum salary above $270,000.
    • 5% of job descriptions advertised a maximum salary above $372,400.
  2. Kubernetes

    30% of jobs mention Kubernetes as a required skill. The Median Salary Range for these jobs is $181,600 - $250,000.

    • 25% of job descriptions advertised a maximum salary above $275,000.
    • 5% of job descriptions advertised a maximum salary above $332,040.
  3. Algorithms

    32% of jobs mention Algorithms as a required skill. The Median Salary Range for these jobs is $177,000 - $240,000.

    • 25% of job descriptions advertised a maximum salary above $290,000.
    • 5% of job descriptions advertised a maximum salary above $367,655.
  4. Machine Learning

    51% of jobs mention Machine Learning as a required skill. The Median Salary Range for these jobs is $172,000 - $235,900.

    • 25% of job descriptions advertised a maximum salary above $276,250.
    • 5% of job descriptions advertised a maximum salary above $369,845.
  5. AWS

    64% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $169,000 - $224,000.

    • 25% of job descriptions advertised a maximum salary above $258,200.
    • 5% of job descriptions advertised a maximum salary above $308,300.
  6. Python

    84% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $169,000 - $220,000.

    • 25% of job descriptions advertised a maximum salary above $253,350.
    • 5% of job descriptions advertised a maximum salary above $309,030.95.
  7. Communication Skills

    41% of jobs mention Communication Skills as a required skill. The Median Salary Range for these jobs is $175,500 - $220,000.

    • 25% of job descriptions advertised a maximum salary above $250,000.
    • 5% of job descriptions advertised a maximum salary above $303,900.
  8. Data engineering

    47% of jobs mention Data engineering as a required skill. The Median Salary Range for these jobs is $164,940 - $218,000.

    • 25% of job descriptions advertised a maximum salary above $240,000.
    • 5% of job descriptions advertised a maximum salary above $303,120.
  9. SQL

    64% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $164,200 - $215,000.

    • 25% of job descriptions advertised a maximum salary above $247,500.
    • 5% of job descriptions advertised a maximum salary above $300,585.
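The per-skill breakdown above amounts to filtering jobs by required skill and recomputing the same medians. A minimal sketch of that aggregation, with hypothetical jobs and skill tags rather than the site's actual data:

```python
# Sketch: per-skill share of jobs and median salary range, as in the
# breakdown above. Jobs and numbers are illustrative placeholders.
from statistics import median

jobs = [
    {"skills": {"Python", "Kafka"}, "min": 195_000, "max": 251_400},
    {"skills": {"Python", "SQL"},   "min": 164_200, "max": 215_000},
    {"skills": {"Kafka", "AWS"},    "min": 200_000, "max": 270_000},
    {"skills": {"Python", "AWS"},   "min": 169_000, "max": 224_000},
]

def salary_range_for(skill, jobs):
    """Median (min, max) advertised range among jobs requiring `skill`."""
    matching = [j for j in jobs if skill in j["skills"]]
    share = len(matching) / len(jobs)      # e.g. "27% of jobs mention ..."
    lo = median(j["min"] for j in matching)
    hi = median(j["max"] for j in matching)
    return share, (lo, hi)

share, rng = salary_range_for("Kafka", jobs)
print(f"{share:.0%} of jobs mention Kafka; median range {rng}")
```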

Industries and Salary

Industry plays a significant role in determining salary ranges for jobs matching these search preferences. Among the industries advertising the most competitive salaries for these roles are Digital Advertising, Data and AI, and Blockchain intelligence and financial services. Factors such as industry size, profitability, and market trends influence salary levels within each sector, so it's worth weighing industry-specific factors when evaluating potential career paths and salary expectations.

  1. Digital Advertising

    3% of jobs are in the Digital Advertising industry. The Median Salary Range for these jobs is $238,900 - $334,500.

    • 25% of job descriptions advertised a maximum salary above $365,100.
  2. Data and AI

    3% of jobs are in the Data and AI industry. The Median Salary Range for these jobs is $135,740 - $267,900.

    • 25% of job descriptions advertised a maximum salary above $300,000.
  3. Blockchain intelligence and financial services

    3% of jobs are in the Blockchain intelligence and financial services industry. The Median Salary Range for these jobs is $220,000 - $262,500.

    • 25% of job descriptions advertised a maximum salary above $270,000.
  4. Fintech

    3% of jobs are in the Fintech industry. The Median Salary Range for these jobs is $204,500 - $261,500.

    • 25% of job descriptions advertised a maximum salary above $310,000.
  5. Online Advertising

    3% of jobs are in the Online Advertising industry. The Median Salary Range for these jobs is $177,500 - $248,500.

    • 25% of job descriptions advertised a maximum salary above $267,100.
  6. Cybersecurity

    3% of jobs are in the Cybersecurity industry. The Median Salary Range for these jobs is $174,000 - $242,500.

    • 25% of job descriptions advertised a maximum salary above $295,000.
  7. Software Development

    21% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $169,000 - $240,000.

    • 25% of job descriptions advertised a maximum salary above $265,700.
    • 5% of job descriptions advertised a maximum salary above $309,718.25.
  8. Game Development

    3% of jobs are in the Game Development industry. The Median Salary Range for these jobs is $175,000 - $230,000.

    • 25% of job descriptions advertised a maximum salary above $270,000.
  9. Healthcare

    5% of jobs are in the Healthcare industry. The Median Salary Range for these jobs is $170,000 - $197,500.

    • 25% of job descriptions advertised a maximum salary above $215,000.
    • 5% of job descriptions advertised a maximum salary above $220,000.
  10. Data Engineering

    3% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $153,050 - $184,075.

    • 25% of job descriptions advertised a maximum salary above $200,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

78 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Data Engineer
Posted 1 day ago

📍 United States

💸 112,800 - 126,900 USD per year

🔍 Software Development

🏢 Company: Titan Cloud

  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • MySQL, MSSQL Database, Postgres, Python
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Developing and maintaining a data pipeline design.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Designing, building, and maintaining the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player able to work in an agile environment.

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting


📍 Canada

💸 98,400 - 137,800 CAD per year

🔍 Data Technology

🏢 Company: Hootsuite (1,001-5,000 employees; $50,000,000 debt financing almost 7 years ago; last layoff about 2 years ago). Industries: Digital Marketing, Social Media Marketing, Social Media Management, Apps

  • A degree in Computer Science or Engineering, and senior-level experience in developing and maintaining software or an equivalent level of education or work experience, and a track record of substantial contributions to software projects with high business impact.
  • Experience planning and leading a team using Scrum agile methodology ensuring timely delivery and continuous improvement.
  • Experience liaising with various business stakeholders to understand their data requirements and convey the technical solutions.
  • Experience with data warehousing and data modeling best practices.
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data.
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, SQL and shell script.
  • Experience with various types of data stores, query engines and data frameworks, e.g. PostgreSQL, MySQL, S3, Redshift, Presto/Athena, Spark and dbt.
  • Experience working with message queues such as Kafka and Kinesis
  • Experience with ETL and pipeline orchestration such as Airflow, AWS Glue
  • Experience with JIRA in managing sprints and roadmaps
  • Lead development and maintenance of scalable and efficient data pipeline architecture
  • Work within cross-functional teams, including Data Science, Analytics, Software Development, and business units, to deliver data products and services.
  • Collaborate with business stakeholders and translate requirements into scalable data solutions.
  • Monitor and communicate project statuses while mitigating risk and resolving issues.
  • Work closely with the Senior Manager to align team priorities with business objectives.
  • Assess and prioritize the team's work, appropriately delegating to others and encouraging team ownership.
  • Proactively share information, actively solicit feedback, and facilitate communication, within teams and other departments.
  • Design, write, test, and deploy high quality scalable code.
  • Maintain high standards of security, reliability, scalability, performance, and quality in all delivered projects.
  • Contribute to shape our technical roadmap as we scale our services and build our next generation data platform.
  • Build, support and lead a high performance, cohesive team of developers, in close partnership with the Senior Manager, Data Analytics.
  • Participate in the hiring process, with an aim of attracting and hiring the best developers.
  • Facilitate ongoing development conversations with your team to support their learning and career growth.

AWS, Leadership, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, ETL, MySQL, SCRUM, Cross-functional Team Leadership, Algorithms, Apache Kafka, Data engineering, Data Structures, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Mentoring, DevOps, Written communication, Coaching, Scala, Data visualization, Team management, Data modeling, Data analytics, Data management

Posted 1 day ago

📍 United States, Australia, Canada, South America

🧭 Full-Time

💸 177,000 - 213,000 USD per year

🔍 FinTech

🏢 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using DBT.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling.
  • Closely collaborating with the Analytics team for data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.

AWS, Python, SQL, Bash, Design Patterns, ETL, Hadoop, Java, Kafka, Snowflake, Airflow, Algorithms, Cassandra, Data engineering, Data Structures, NoSQL, Spark, Communication Skills, CI/CD, RESTful APIs, Terraform, Written communication, Documentation, Data modeling, Debugging

Posted 2 days ago

📍 United States

💸 169,000 - 240,000 USD per year

🔍 Software Development

  • 4+ years of experience designing, developing and launching backend systems at scale using languages like Python or Kotlin.
  • A track record of developing highly available distributed systems using technologies like AWS, MySQL and Kubernetes.
  • Experience building and managing Workflow Orchestration frameworks like Airflow, Flyte, Prefect, Temporal, Luigi, etc.
  • Experience with or working knowledge for efficiently scaling frameworks like Spark/Flink for extremely large scale datasets on Kubernetes.
  • Experience defining a technical plan for the delivery of a significant feature or system component with an elegant, simple and extensible design. You write high quality code that is easily understood and used by others.
  • Proficient at making significant changes in a large code base, and have developed a suite of tools and practices that enable you and your team to do so safely.
  • Experience demonstrates that you take ownership of your growth, proactively seeking feedback from your team, your manager, and your stakeholders.
  • Strong verbal and written communication skills that support effective collaboration with our global engineering team.
  • This position requires either equivalent practical experience or a Bachelor's degree in a related field.
  • Be responsible for owning and delivering quarterly goals for your team, leading engineers on your team through ambiguity to solve open-ended problems, and ensuring that everyone is supported throughout delivery.
  • Support your peers and stakeholders in the product development lifecycle by collaborating with product management, design & analytics by participating in ideation, articulating technical constraints, and partnering on decisions that properly consider risks and trade-offs.
  • Proactively identify project, process, technology or business issues, advocate for them, and lead in solving them.
  • Support the operations and availability of your team’s artifacts by creating and monitoring metrics, escalating when needed, and supporting “keep the lights on” & on-call efforts.
  • Foster a culture of quality and ownership on your team by setting or improving code review and design standards for your team, and advocating for them beyond your team through your writing and tech talks.
  • Help develop talent on your team by providing feedback and guidance, and leading by example.

AWS, Backend Development, Docker, Leadership, Python, SQL, Apache Airflow, Kotlin, Kubernetes, MySQL, Software Architecture, Algorithms, Data engineering, Data Structures, REST API, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, DevOps, Written communication, Microservices, Team management, Software Engineering

Posted 2 days ago

📍 United States, Canada

🧭 Full-Time

💸 105,825 - 136,950 CAD per year

🔍 Data Engineering

🏢 Company: Samsara (1,001-5,000 employees; secondary market sale over 4 years ago; last layoff almost 5 years ago). Industries: Cloud Data Services, Business Intelligence, Internet of Things, SaaS, Software

  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline
  • 6+ years experience in a data engineering and data science-focused role
  • ​​Proficiency in data manipulation and processing in SQL and Python
  • Expertise building data pipelines with new API endpoints from their documentation
  • Proficiency in building ETL pipelines to handle large volumes of data
  • Demonstrated experience in designing data models at scale
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data
  • Access, manipulate, and integrate external datasets with internal data
  • Building analytical and statistical models to identify patterns, anomalies, and root causes
  • Leverage SQL and Python to shape and aggregate data
  • Incorporate generative AI tools (ChatGPT Enterprise) into production data pipelines and automated workflows
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top quality analytic products
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices

Python, SQL, ETL, Tableau, API testing, Data engineering, Data science, Spark, Communication Skills, Analytical Skills, Data visualization, Data modeling

Posted 3 days ago

📍 United States, Canada

💸 120,000 - 235,900 USD per year

🔍 Data/AI

  • 6+ years of relevant experience in technical pre-sales, technical enablement, or data/ai technical-adjacent roles.
  • Experience delivering large-scale training and enablement solutions in a Tech or Data/AI company, targeted at a technical audience.
  • An understanding of the processes and nuances associated with a technical platform-as-a-service sales and delivery motion.
  • Possess or be willing to develop proficiency in foundational Data/AI concepts, the Lakehouse Architecture and Databricks product.
  • Experience supporting and enabling technical audiences in creating complex proofs-of-concept and technical solutions to support customer needs.
  • Exceptional communication, storytelling, and presentation skills, coupled with a strong executive presence and the ability to effectively influence and engage large audiences
  • Develop and execute Field Engineering enablement programming for the BU.
  • Partner closely with leadership, Sales Enablement and other stakeholders to discover, validate, prioritize, and scale technical enablement initiatives.
  • Drive global Field Engineering enablement strategy through innovative programs, covering analysis, design, development, implementation, and reporting oversight.
  • Align programs with BU-strategic priorities to contribute to the global roadmap.
  • Collaborate cross-functionally to keep up-to-date with a fast-evolving Databricks Platform, product, messaging, capabilities, and new processes.
  • Lead, facilitate, and coordinate enablement sessions, workshops, and other launches in the region for a technical audience.

SQL, Artificial Intelligence, Data Analysis, ETL, Machine Learning, MLFlow, Data engineering, Spark, Communication Skills, Presentation skills, Sales experience, Data visualization

Posted 3 days ago

📍 United States

🧭 Full-Time

💸 217,000 - 303,900 USD per year

🔍 Digital Advertising

🏢 Company: Reddit (1,001-5,000 employees; $410,000,000 Series F over 3 years ago; last layoff almost 2 years ago). Industries: News, Content, Social Network, Social Media

  • M.S.: 10+ years of industry data science experience, emphasizing experimentation and causal inference.
  • Ph.D.: 6+ years of industry data science experience, emphasizing experimentation and causal inference.
  • Master's or Ph.D. in Statistics, Economics, Computer Science, or a related quantitative field
  • Expertise in experimental design, A/B testing, and causal inference
  • Proficiency in statistical programming (Python/R) and SQL
  • Demonstrated ability to apply statistical principles of experimentation (hypothesis testing, p-values, etc.)
  • Experience with large-scale data analysis and manipulation
  • Strong technical communication skills for both technical and non-technical audiences
  • Ability to thrive in fast-paced, ambiguous environments and drive action
  • Desire to mentor and elevate data science practices
  • Experience with digital advertising and marketplace dynamics (preferred)
  • Experience with advertising technology (preferred)
  • Lead the design, implementation, and analysis of sophisticated A/B tests and experiments, leveraging innovative techniques like Bayesian approaches and causal inference to optimize complex ad strategies
  • Extract critical insights through in-depth analysis, developing automated tools and actionable recommendations to drive impactful decisions Define and refine key metrics to empower product teams with a deeper understanding of feature performance
  • Partner with product and engineering to shape experiment roadmaps and drive data-informed product development
  • Provide technical leadership, mentor junior data scientists, and establish best practices for experimentation
  • Drive impactful results by collaborating effectively with product, engineering, sales, and marketing teams

AWS, Python, SQL, Apache Airflow, Data Analysis, Hadoop, Machine Learning, Numpy, Cross-functional Team Leadership, Product Development, Algorithms, Data engineering, Data science, Regression testing, Pandas, Spark, Communication Skills, Analytical Skills, Mentoring, Data visualization, Data modeling, A/B testing

Posted 5 days ago

📍 United States

💸 185,000 - 295,000 USD per year

🔍 Cybersecurity

🏢 Company: crowdstrikecareers

  • 6+ years of experience in leading and building SAAS and hybrid-cloud application development, at large organizations or innovative startups.
  • 3+ years of experience with Agentic AI platforms such as Microsoft Copilot Studio, Amazon Bedrock, Google Vertex AI, OpenAI, etc. Experience with AI-based conversational UI.
  • Strong programming skills in one of Python, Go, Scala or Java, with experience in building distributed systems.
  • Demonstrated experience in platform development (APIs, Databases, Serverless architecture) of cloud applications.
  • Proven expertise with algorithms, distributed systems design and the software development lifecycle.
  • Experience designing hybrid cloud applications with considerations for load balancing, network infrastructure, and microservices architecture.
  • Proven ability to work effectively with remote teams.
  • Lead Agentic AI solutions, including sophisticated AI agents and fine-tuning and integrating models.
  • Play a key role in the design, development, and deployment of AI applications using LLMs, Agentic frameworks, and other related technologies. Scalability, accuracy, and reliability will be key success criteria.
  • Collaborate with enterprise teams to integrate LLM models and deliver on committed roadmaps.
  • Mentor team members to build and grow expertise in this domain.
  • Define benchmarks with metrics to evaluate the performance of agents and Agentic frameworks
  • Stay up to date with the latest advancements in AI, LLM, Agentic frameworks and apply this knowledge to improve existing systems and develop new ones
  • Tackle highly complex and unique challenges requiring in-depth evaluation across multiple areas or the enterprise, delivering solutions aligned with technical vision and business objectives
  • Troubleshoot, isolate, and fix issues found during various stages of software development, including production.
  • Leverage advanced technology experience to further the organization's tactical and strategic business objectives.
  • Act as an advisor to leadership to influence AI development strategies

AWS, Backend Development, Docker, Leadership, PostgreSQL, Python, SQL, Artificial Intelligence, Cloud Computing, Cybersecurity, Data Analysis, Java, Keras, Kubernetes, Machine Learning, Numpy, PyTorch, Software Architecture, Algorithms, API testing, Data science, Data Structures, Go, Serverless, Pandas, Spark, Tensorflow, Communication Skills, CI/CD, RESTful APIs, Mentoring, DevOps, Microservices, Scala, SaaS

Posted 5 days ago
🔥 Sr. Data Engineer
Posted 7 days ago

📍 United States

🧭 Full-Time

💸 126,100 - 168,150 USD per year

🔍 Data Engineering

🏢 Company: firstamericancareers

  • 5+ years of development experience with any of the following software languages: Python or Scala, and SQL (we use SQL & Python) with cloud experience (Azure preferred or AWS).
  • Hands-on data security and cloud security methodologies. Experience in configuration and management of data security to meet compliance and CISO security requirements.
  • Experience creating and maintaining data intensive distributed solutions (especially involving data warehouse, data lake, data analytics) in a cloud environment.
  • Hands-on experience in modern Data Analytics architectures encompassing data warehouse, data lake etc. designed and engineered in a cloud environment.
  • Proven professional working experience in Event Streaming Platforms and data pipeline orchestration tools like Apache Kafka, Fivetran, Apache Airflow, or similar tools
  • Proven professional working experience in any of the following: Databricks, Snowflake, BigQuery, Spark in any flavor, HIVE, Hadoop, Cloudera or RedShift.
  • Experience developing in a containerized local environment like Docker, Rancher, or Kubernetes preferred
  • Data Modeling
  • Build high-performing cloud data solutions to meet our analytical and BI reporting needs.
  • Design, implement, test, deploy, and maintain distributed, stable, secure, and scalable data intensive engineering solutions and pipelines in support of data and analytics projects on the cloud, including integrating new sources of data into our central data warehouse, and moving data out to applications and other destinations.
  • Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability, etc.
  • Build and enhance a shared data lake that powers decision-making and model building.
  • Partner with teams across the business to understand their needs and develop end-to-end data solutions.
  • Collaborate with analysts and data scientists to perform exploratory analysis and troubleshoot issues.
  • Manage and model data using visualization tools to provide the company with a collaborative data analytics platform.
  • Build tools and processes to help make the correct data accessible to the right people.
  • Participate in active rotational support role for production during or after business hours supporting business continuity.
  • Engage in collaboration and decision making with other engineers.
  • Design schema and data pipelines to extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
  • Create, maintain, and optimize database structures to efficiently store and retrieve large volumes of data.
  • Evaluate data trends and model simple to complex data solutions that meet day-to-day business demand and plan for future business and technological growth.
  • Implement data cleansing processes and oversee data quality to maintain accuracy.
  • Function as a key member of the team to drive development, delivery, and continuous improvement of the cloud-based enterprise data warehouse architecture.

AWS, Docker, Python, SQL, Agile, Apache Airflow, Cloud Computing, ETL, Hadoop, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, Spark, Scala, Data visualization, Data modeling, Data analytics


📍 United States, Canada

🧭 Full-Time

💸 70,168 - 176,880 USD per year

🔍 Video Gaming

🏢 Company: thatgamecompany (101-250 employees; funding round about 3 years ago). Industries: Developer Tools, Video Games, Console Games, Family, MMO Games, Social Network, Mobile, Online Games

  • 3+ Years of Experience in applied Data Science or Machine Learning
  • Knowledge of Machine learning and Statistical methods (Classical ML, Deep Learning, NLP, and Anomaly Detection) and the ability to identify the most suitable solution for the problem
  • Experience building and deploying complex and scalable machine learning models in production environments, ideally in Kubernetes
  • Experience writing clean, efficient code in Python, Java, or Scala
  • Experience writing optimized SQL Queries to build and analyze datasets
Complete a project from start to finish. That includes requirements gathering, experimentation, model development, deployment, monitoring, documentation, support, and communication

Python, SQL, Data Analysis, ETL, GCP, Kubernetes, Machine Learning, Numpy, Algorithms, Data science, Data Structures, REST API, Pandas, Spark, Tensorflow, Communication Skills, CI/CD, Problem Solving, Scala, Data visualization

Posted 8 days ago
Showing 10 of 78 jobs.