Data Engineering Job Salaries

Find salary information for remote positions requiring Data Engineering skills. Make data-driven decisions about your career path.


Median high-range salary for jobs requiring Data Engineering:

$204,500

This analysis is based on salary ranges collected from 186 job descriptions that match the search and allow working remotely. Choose a country to narrow down the search and view statistics exclusively for remote jobs available in that location.

The Median Salary Range is $160,000 - $204,500

  • 25% of job descriptions advertised a maximum salary above $236,200.
  • 5% of job descriptions advertised a maximum salary above $300,000.

Skills and Salary

Specific skills can have a substantial impact on salary ranges for jobs matching these search preferences. Among the most sought-after skills that correlate with higher salaries here are Machine Learning, Communication Skills, and CI/CD. Employers often prioritize candidates who possess these skills, as they can contribute directly to the organization's success, so mastering them can boost both competitiveness in the job market and earning potential.

  1. Machine Learning

    41% of jobs mention Machine Learning as a required skill. The Median Salary Range for these jobs is $164,880 - $225,000.

    • 25% of job descriptions advertised a maximum salary above $264,500.
    • 5% of job descriptions advertised a maximum salary above $302,730.
  2. Communication Skills

    31% of jobs mention Communication Skills as a required skill. The Median Salary Range for these jobs is $170,000 - $213,000.

    • 25% of job descriptions advertised a maximum salary above $248,375.
    • 5% of job descriptions advertised a maximum salary above $305,915.
  3. CI/CD

    31% of jobs mention CI/CD as a required skill. The Median Salary Range for these jobs is $157,500 - $213,000.

    • 25% of job descriptions advertised a maximum salary above $250,625.
    • 5% of job descriptions advertised a maximum salary above $302,362.50.
  4. Cloud Computing

    30% of jobs mention Cloud Computing as a required skill. The Median Salary Range for these jobs is $163,273 - $210,000.

    • 25% of job descriptions advertised a maximum salary above $250,000.
    • 5% of job descriptions advertised a maximum salary above $293,750.
  5. AWS

    53% of jobs mention AWS as a required skill. The Median Salary Range for these jobs is $160,000 - $205,500.

    • 25% of job descriptions advertised a maximum salary above $247,000.
    • 5% of job descriptions advertised a maximum salary above $302,340.
  6. Python

    76% of jobs mention Python as a required skill. The Median Salary Range for these jobs is $160,000 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $230,000.
    • 5% of job descriptions advertised a maximum salary above $300,000.
  7. SQL

    73% of jobs mention SQL as a required skill. The Median Salary Range for these jobs is $155,000 - $200,000.

    • 25% of job descriptions advertised a maximum salary above $230,000.
    • 5% of job descriptions advertised a maximum salary above $302,925.
  8. ETL

    42% of jobs mention ETL as a required skill. The Median Salary Range for these jobs is $150,000 - $198,550.

    • 25% of job descriptions advertised a maximum salary above $221,600.
    • 5% of job descriptions advertised a maximum salary above $273,000.
  9. Data modeling

    41% of jobs mention Data modeling as a required skill. The Median Salary Range for these jobs is $152,250 - $198,550.

    • 25% of job descriptions advertised a maximum salary above $229,000.
    • 5% of job descriptions advertised a maximum salary above $298,125.

Industries and Salary

Industry plays a crucial role in determining salary ranges for jobs matching these search preferences. Industries known for competitive salaries in these roles include Health tech, Healthcare Technology, and Software Development; these sectors have strong demand for skilled professionals and are willing to invest in talent to meet their growth objectives. Factors such as industry size, profitability, and market trends also influence salary levels, so consider industry-specific factors when evaluating potential career paths and salary expectations.

  1. Health tech

    2% of jobs are in the Health tech industry. The Median Salary Range for these jobs is $200,000 - $250,000.

    • 25% of job descriptions advertised a maximum salary above $325,000.
    • 5% of job descriptions advertised a maximum salary above $350,000.
  2. Healthcare Technology

    2% of jobs are in the Healthcare Technology industry. The Median Salary Range for these jobs is $170,000 - $225,000.

    • 25% of job descriptions advertised a maximum salary above $267,600.
    • 5% of job descriptions advertised a maximum salary above $281,800.
  3. Software Development

    30% of jobs are in the Software Development industry. The Median Salary Range for these jobs is $160,000 - $214,000.

    • 25% of job descriptions advertised a maximum salary above $253,700.
    • 5% of job descriptions advertised a maximum salary above $310,500.
  4. Fintech

    2% of jobs are in the Fintech industry. The Median Salary Range for these jobs is $147,500 - $213,000.

    • 25% of job descriptions advertised a maximum salary above $223,875.
    • 5% of job descriptions advertised a maximum salary above $227,500.
  5. Mental Health Technology

    2% of jobs are in the Mental Health Technology industry. The Median Salary Range for these jobs is $170,000 - $200,000.

  6. Healthcare

    5% of jobs are in the Healthcare industry. The Median Salary Range for these jobs is $150,000 - $192,500.

    • 25% of job descriptions advertised a maximum salary above $210,000.
    • 5% of job descriptions advertised a maximum salary above $260,590.
  7. Mental Health

    3% of jobs are in the Mental Health industry. The Median Salary Range for these jobs is $126,650 - $179,000.

    • 25% of job descriptions advertised a maximum salary above $197,100.
  8. Data Engineering

    2% of jobs are in the Data Engineering industry. The Median Salary Range for these jobs is $128,050 - $169,075.

    • 25% of job descriptions advertised a maximum salary above $195,000.
    • 5% of job descriptions advertised a maximum salary above $220,000.
  9. Legal Services

    2% of jobs are in the Legal Services industry. The Median Salary Range for these jobs is $106,000 - $160,000.

    • 25% of job descriptions advertised a maximum salary above $189,250.
    • 5% of job descriptions advertised a maximum salary above $199,000.
  10. EdTech

    2% of jobs are in the EdTech industry. The Median Salary Range for these jobs is $145,000 - $155,000.

Disclaimer: This analysis is based on salary ranges advertised in job descriptions found on Remoote.app. While it provides valuable insights into potential compensation, it's important to understand that advertised salary ranges may not always reflect the actual salaries paid to employees. Furthermore, not all companies disclose salary ranges, which can impact the accuracy of this analysis. Several factors can influence the final compensation package, including:

  • Negotiation: Salary ranges often serve as a starting point for negotiation. Your experience, skills, and qualifications can influence the final offer you receive.
  • Benefits: Salaries are just one component of total compensation. Some companies may offer competitive benefits packages that include health insurance, paid time off, retirement plans, and other perks. The value of these benefits can significantly affect your overall compensation.
  • Cost of Living: The cost of living in a particular location can impact salary expectations. Some areas may require higher salaries to maintain a similar standard of living compared to others.

Jobs

211 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ United States

πŸ’Έ 138,000 - 238,050 USD per year

πŸ” Life Science

  • Minimum Bachelor's degree in Cybersecurity, Data analytics, Information Systems, or a related field.
  • A minimum of 8 years of demonstrable experience in cyber metrics and reporting, business analytics, requirements analysis, forecasting, industry research, planning, and/or management consulting, enabling data storytelling and visualization.
  • Experience in SQL data modeling, SQL query development and understanding of ETL is required.
  • Understanding of designing and developing cybersecurity dashboards using tools like Tableau, Power BI, or similar.
  • Strong experience with requirements analysis and testing
  • Awareness of relevant industry trends and standard processes in cybersecurity metrics and reporting.
  • Ability to collect, analyze, and interpret large datasets to derive relevant insights and trends.
  • Ability to effectively collect and communicate complex information to customers.
  • Proactive and self-motivated approach to work, with excellent problem-solving and analytical skills.
  • Superb communication and storytelling skills, with the ability to shape messages for various audiences, including delivering insights to executive-level audiences.
  • Analytical approach, with the ability to analyze data and identify insights.
  • Strong collaboration and customer management skills with a track record of successful delivery across multi-functional teams.
  • Ability to work effectively at all levels of the organization, from executive committee to individual employees.
  • Demonstrated Project Management, Scrum, and Agile skills are required
  • Shape, develop and implement an effective cybersecurity metrics & reporting requirements capability
  • Partner with customers to define report requirements, ensuring clarity and feasibility
  • Document detailed reporting specifications, including data sources, metrics, and presentation format.
  • Validate report designs, data, and developed reports against business requirements.
  • Conduct detailed testing on reports, focusing on data accuracy, visual integrity, and usability.
  • Work with visualization and data & analytics teams to resolve any identified issues.
  • Conduct data discovery tied to requirements, including exploratory data analysis, data quality assessment.
  • Analyze requirements and data sources to identify patterns, key insights, action items, and storytelling potential.
  • Translate data into value while assessing effective data visualization, experience design and trending analytics linked to business and cybersecurity objectives.
  • Describe the key elements of a successful data story: knowing one’s audience, defining a goal, maintaining engagement, and being explicit about the takeaways.
  • Measure how a data story achieves its intended goals and where it can be optimized.
  • Identify, design and implement requirements to inform reporting, dashboards and strategy.
  • Identify and create metrics to measure performance through automated dashboards.

SQL, Business Intelligence, Cybersecurity, Data Analysis, ETL, Tableau, Data engineering, Data visualization, Data modeling, Data analytics

Posted about 6 hours ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 175,500 - 277,500 USD per year

πŸ” Software Development

  • Proven expertise in full-stack development with deep specialization in frontend technologies (HTML, CSS, JavaScript, React, Angular, or Vue.js).
  • Experience leading engineering teams of 10+ people, with a focus on mentorship, collaboration, and technical leadership.
  • A track record of driving product innovation and successfully transforming complex ideas into scalable solutions.
  • Strong problem-solving, communication, and cross-functional collaboration skills in a fast-paced, iterative environment.
  • Hands-on experience applying AI/Gen AI concepts, integrating ML models into applications, and working with LLMs. Experience with agentic platforms is a plus!
  • Full-Stack Leadership: Leverage your frontend expertise (React, Angular, or Vue.js) and robust backend knowledge to build and maintain scalable, high-performance services.
  • Architectural Excellence: Design resilient, cloud-native architectures, including microservices and data pipelines, for high availability and scalability using AWS, GCP, or Azure.
  • Product Innovation & Rapid Prototyping: Drive product transformation by identifying opportunities for innovation, creating prototypes, and iterating quickly toward customer-ready solutions.
  • Software Lifecycle Management: Implement CI/CD pipelines for model training, deployment, monitoring, and versioning, ensuring seamless production rollouts and model observability.
  • AI & Machine Learning Integration: Collaborate with ML experts to embed AI/Gen AI capabilities, including LLMs, into our products to enhance intelligence and user value.
  • Tech Trend Awareness: Stay ahead of emerging AI/ML trends (e.g., prompt engineering, transformer architectures) and promote continuous learning within the team.
  • Team Leadership & Growth: Build and lead a high-performing technical team, fostering collaboration, mentorship, and engineering excellence.
  • Cross-functional collaboration: Work closely with product, data science, design, and ML teams to deliver features based on A/B testing, user feedback, and analytics.

AWS, Backend Development, Leadership, Cloud Computing, Design Patterns, Frontend Development, Full Stack Development, GCP, Machine Learning, React.js, Software Architecture, Vue.js, Cross-functional Team Leadership, API testing, Azure, Data engineering, Angular, Communication Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, Microservices, Team management, Prototyping, Software Engineering

Posted about 10 hours ago
Apply

πŸ“ United States

πŸ’Έ 123,000 - 235,900 USD per year

πŸ” Data/AI

🏒 Company: Databricks · πŸ‘₯ 1001-5000 · πŸ’° $684,559,082 Series I, over 1 year ago · Artificial Intelligence (AI), Machine Learning, Analytics, Information Technology

  • 7+ years of experience in technical pre-sales, technical enablement, technical program management, or consulting with a focus on data, AI, or cloud technologies.
  • Experience building, delivering, and scaling technical enablement programs for highly skilled technical teams.
  • Proven ability to create, manage, and execute large-scale enablement programs, balancing technical rigor with structured program management.
  • Exceptional communication and presentation skills, with the ability to engage technical and executive audiences.
  • Strong stakeholder management and collaboration skills, with the ability to align multiple teams toward a common goal.
  • Experience in technical pre-sales roles, building proofs-of-concept, or implementing technical solutions for customers (Preferred)
  • Databricks certification or experience with Apache Sparkβ„’, MLflow, Delta Lake, and other open-source technologies (Preferred)
  • Design, implement, and scale enablement solutions that foster domain specialization, hands-on expertise, and technical mastery.
  • Introduce innovative multi-signal validation methods that assess expertise through real-world application and structured learning.
  • Facilitate enablement sessions, workshops, and hands-on activities that reinforce applied problem-solving and deep technical skills.
  • Develop and maintain technical content, including reference architectures, solution guides, and POC templates.
  • Measure impact and iterate on enablement programs, leveraging feedback and performance data to drive improvements.
  • Collaborate with technical field teams, enablement leaders, and stakeholders to continuously refine and scale high-impact training programs.
  • Drive adoption of enablement programs and strategies among senior leaders by proposing solutions that align with business priorities, address key challenges, and incorporate industry trends.

AWS, Project Management, Python, SQL, Cloud Computing, Data Analysis, ETL, GCP, Machine Learning, MLflow, Apache Kafka, Azure, Data engineering, REST API, Communication Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Presentation skills, Training, Data visualization, Stakeholder management, Strategic thinking, Data modeling, Customer Success

Posted about 11 hours ago
Apply
πŸ”₯ Data Engineer
Posted 1 day ago

πŸ“ United States

πŸ’Έ 112,800 - 126,900 USD per year

πŸ” Software Development

🏒 Company: Titan Cloud

  • 4+ years of work experience with ETL, Data Modeling, Data Analysis, and Data Architecture.
  • Experience operating very large data warehouses or data lakes.
  • Experience with building data pipelines and applications to stream and process datasets at low latencies.
  • MySQL, MSSQL Database, Postgres, Python
  • Design, implement, and maintain standardized data models that align with business needs and analytical use cases.
  • Optimize data structures and schemas for efficient querying, scalability, and performance across various storage and compute platforms.
  • Provide guidance and best practices for data storage, partitioning, indexing, and query optimization.
  • Develop and maintain a data pipeline design.
  • Build robust and scalable ETL/ELT data pipelines to transform raw data into structured datasets optimized for analysis.
  • Collaborate with data scientists to streamline feature engineering and improve the accessibility of high-value data assets.
  • Design, build, and maintain the data architecture needed to support business decisions and data-driven applications. This includes collecting, storing, processing, and analyzing large amounts of data using AWS, Azure, and local tools and services.
  • Develop and enforce data governance standards to ensure consistency, accuracy, and reliability of data across the organization.
  • Ensure data quality, integrity, and completeness in all pipelines by implementing automated validation and monitoring mechanisms.
  • Implement data cataloging, metadata management, and lineage tracking to enhance data discoverability and usability.
  • Work with Engineering to manage and optimize data warehouse and data lake architectures, ensuring efficient storage and retrieval of structured and semi-structured data.
  • Evaluate and integrate emerging cloud-based data technologies to improve performance, scalability, and cost efficiency.
  • Assist with designing and implementing automated tools for collecting and transferring data from multiple source systems to the AWS and Azure cloud platform.
  • Work with DevOps Engineers to integrate any new code into existing pipelines
  • Collaborate with teams in troubleshooting functional and performance issues.
  • Must be a team player, able to work in an agile environment

AWS, PostgreSQL, Python, SQL, Agile, Apache Airflow, Cloud Computing, Data Analysis, ETL, Hadoop, MySQL, Data engineering, Data science, REST API, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Terraform, Attention to detail, Organizational skills, Microservices, Teamwork, Data visualization, Data modeling, Scripting

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 108,000 - 162,000 USD per year

πŸ” Insurance

🏒 Company: Openly · πŸ‘₯ 251-500 · πŸ’° $100,000,000 Series D, over 1 year ago · Life Insurance, Property Insurance, Insurance, Commercial Insurance, Auto Insurance

  • 1 to 2 years of data engineering and data management experience.
  • Scripting skills in Python.
  • Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management.
  • Experience with Google Cloud data store and data orchestration technologies and concepts.
  • Hands-on experience and understanding of the entire data pipeline architecture: Data replication tools, staging data, data transformation, data movement, and cloud based data platforms.
  • Understanding of a modern next generation data warehouse platform, such as the Lakehouse and multi-data layered warehouse.
  • Proficiency with SQL optimization and development.
  • Ability to understand data architecture and modeling as it relates to business goals and objectives.
  • Ability to gain an understanding of data requirements, translate them into source to target data mappings, and build a working solution.
  • Experience with terraform preferred but not required.
  • Design, create, and maintain data solutions. This includes data pipelines and data structures.
  • Work with data users, data science, and business intelligence personnel, to create data solutions to be used in various projects.
  • Translate concepts into code to enhance our data management frameworks and services, striving to provide a high-quality data product to our data users.
  • Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
  • Collaborate with teammates to derive design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, processes, etc.
  • Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros
  • Assist in educating others on different aspects of data (e.g. data management best practices, data pipelining best practices, SQL tuning)
  • Build and share your knowledge within the data engineer team and with others in the company (e.g. tech all-hands, tech learning hour, domain meetings, code sync meetings, etc.)

Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, GCP, Kafka, Kubernetes, Data engineering, Go, REST API, CI/CD, Terraform, Data modeling, Scripting, Data management

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 137,000 - 270,000 USD per year

πŸ” Software Development

🏒 Company: MongoDB · πŸ‘₯ 1001-5000 · πŸ’° Post-IPO Equity, about 7 years ago · Database, Open Source, Cloud Computing, SaaS, Software

  • 10+ years software engineering, primarily focused on backend systems
  • Strong systems programming background; Go and Java experience a plus (but not required)
  • Extensive experience designing and operating distributed systems at scale; Kubernetes experience is a plus (but not required)
  • Experience with large-scale data processing or storage systems (e.g. data lake technologies, distributed databases, disaggregated storage, etc.)
  • Knowledge of modern observability practices and tools
  • Proven ability to lead technical initiatives as an individual contributor
  • Experience mentoring senior engineers and driving technical excellence within a team
  • Lead initiatives to improve system observability, stability, and resource management
  • Design and implement advanced autoscaling solutions for query execution and data processing
  • Reduce incident rates through holistic improvements to system resilience
  • Identify opportunities to improve operating costs in storage and query systems
  • Guide architectural decisions and conduct design reviews across two engineering teams
  • Mentor senior engineers in distributed systems design and operational excellence
  • Collaborate with Product Management on technical roadmap development
  • Drive cross-team technical initiatives and standards
  • Participate in on-call rotation and provide senior oversight for incident response and postmortem retrospectives
  • Design and implement improvements to our distributed query execution engine
  • Optimize data archival pipelines for increased throughput, durability and reliability
  • Design and implement solutions for single-tenant isolation requirements

Backend Development, Leadership, SQL, Cloud Computing, Java, Kubernetes, MongoDB, Data engineering, Go, REST API, NoSQL, Mentoring, Linux, Microservices, Scripting, Software Engineering, Data management

Posted 1 day ago
Apply

πŸ“ United States

🧭 Full-Time

πŸ’Έ 126,650 - 179,000 USD per year

πŸ” Mental Healthcare

🏒 Company: Headway · πŸ‘₯ 201-500 · πŸ’° $125,000,000 Series C, over 1 year ago · Mental Health Care

  • 6+ years of full-time experience using data to surface key insights and recommendations, solve real-world problems, and support decision making within growth, product, revenue, or operations teams
  • 4+ years of full-time experience at a high-growth technology startup
  • Proficient in SQL, developing and documenting data models, and experience managing a modern BI tool like Metabase or Looker
  • Become a trusted strategic partner to our operations and business teams (growth, revenue, and customer success), and work with them to define analytics requirements, identify important questions, and help answer them with data.
  • Support the operational needs of business stakeholders by delivering high-quality output from data modeling and reverse ETL.
  • Collaborate with Data and Product Analysts, Data Engineers, Data Scientists, and stakeholders across the company to model data in a scalable way and present it in an impactful and digestible way that leads to better business decisions.
  • Tell compelling stories through data, articulating technical information to a non-technical audience

Python, SQL, Data Analysis, ETL, Snowflake, Data engineering, REST API, Data visualization, Data modeling, Data analytics

Posted 1 day ago
Apply

πŸ“ United States

πŸ’Έ 70,000 - 105,000 USD per year

πŸ” Software Development

🏒 Company: VUHL

  • Relevant experience in data engineering or a related discipline.
  • Demonstrated ability to code effectively and a solid understanding of software engineering principles.
  • Experience using SQL or other query language to manage and process data.
  • Experience using Python to build ETL pipelines
  • Experience working with data from various sources and in various formats, including flat files, REST APIs, Excel files, JSON, XML, etc.
  • Experience with Snowflake, SQL Server, or related database technologies.
  • Experience using orchestration tools like Dagster (preferred), Apache Airflow, or similar.
  • Preference for Agile product delivery.
  • Familiarity with GIT, Change Management, and application lifecycle management tools.
  • Ability to influence others without positional control.
  • Create and deliver functional ETL pipelines and other data solutions using core technologies like SQL, Python, Snowflake, Dagster, and SSIS in an agile development environment. Apply sound database design principles and adhere to Clean Code practices.
  • Engage in whole team planning, retrospectives, and communication. Interact with Architects and Product Owners to translate requirements into actionable business logic.
  • Participate in proposing and adopting Engineering standards related to architectural considerations and non-functional requirements such as security, reliability, and stability. Ensure proper management and visibility of borrower data and the life of a loan. Contribute to data governance initiatives.
  • Actively contribute to strengthening the team and culture by taking on various duties as needed, excluding licensed activities.

Python, SQL, Agile, Apache Airflow, ETL, Git, Snowflake, Data engineering, REST API, JSON, Data modeling, Software Engineering, Data management

Posted 1 day ago
Apply

πŸ“ Canada

πŸ’Έ 98,400 - 137,800 CAD per year

πŸ” Data Technology

🏒 Company: Hootsuite · πŸ‘₯ 1001-5000 · πŸ’° $50,000,000 Debt Financing, almost 7 years ago · πŸ«‚ Last layoff about 2 years ago · Digital Marketing, Social Media Marketing, Social Media Management, Apps

  • A degree in Computer Science or Engineering and senior-level experience developing and maintaining software (or an equivalent level of education or work experience), plus a track record of substantial contributions to software projects with high business impact.
  • Experience planning and leading a team using Scrum agile methodology, ensuring timely delivery and continuous improvement.
  • Experience liaising with various business stakeholders to understand their data requirements and convey the technical solutions.
  • Experience with data warehousing and data modeling best practices.
  • Passionate interest in data engineering and infrastructure; ingestion, storage and compute in relational, NoSQL, and serverless architectures
  • Experience developing data pipelines and integrations for high volume, velocity and variety of data.
  • Experience writing clean code that performs well at scale; ideally experienced with languages like Python, Scala, SQL and shell script.
  • Experience with various types of data stores, query engines and data frameworks, e.g. PostgreSQL, MySQL, S3, Redshift, Presto/Athena, Spark and dbt.
  • Experience working with message queues such as Kafka and Kinesis
  • Experience with ETL and pipeline orchestration such as Airflow, AWS Glue
  • Experience with JIRA in managing sprints and roadmaps
  • Lead development and maintenance of scalable and efficient data pipeline architecture
  • Work within cross-functional teams, including Data Science, Analytics, Software Development, and business units, to deliver data products and services.
  • Collaborate with business stakeholders and translate requirements into scalable data solutions.
  • Monitor and communicate project statuses while mitigating risk and resolving issues.
  • Work closely with the Senior Manager to align team priorities with business objectives.
  • Assess and prioritize the team's work, appropriately delegating to others and encouraging team ownership.
  • Proactively share information, actively solicit feedback, and facilitate communication, within teams and other departments.
  • Design, write, test, and deploy high quality scalable code.
  • Maintain high standards of security, reliability, scalability, performance, and quality in all delivered projects.
  • Contribute to shape our technical roadmap as we scale our services and build our next generation data platform.
  • Build, support and lead a high performance, cohesive team of developers, in close partnership with the Senior Manager, Data Analytics.
  • Participate in the hiring process, with an aim of attracting and hiring the best developers.
  • Facilitate ongoing development conversations with your team to support their learning and career growth.

AWS, Leadership, PostgreSQL, Python, Software Development, SQL, Agile, Apache Airflow, ETL, MySQL, SCRUM, Cross-functional Team Leadership, Algorithms, Apache Kafka, Data engineering, Data Structures, Spark, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Mentoring, DevOps, Written communication, Coaching, Scala, Data visualization, Team management, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply

πŸ“ United States, Australia, Canada, South America

🧭 Full-Time

πŸ’Έ 177,000 - 213,000 USD per year

πŸ” FinTech

🏒 Company: Flex

  • A minimum of 6 years of industry experience in the data infrastructure/data engineering domain.
  • A minimum of 6 years of experience with Python and SQL.
  • A minimum of 3 years of industry experience using DBT.
  • A minimum of 3 years of industry experience using Snowflake and its basic features.
  • Familiarity with AWS services, with industry experience using Lambda, Step Functions, Glue, RDS, EKS, DMS, EMR, etc.
  • Industry experience with different big data platforms and tools such as Snowflake, Kafka, Hadoop, Hive, Spark, Cassandra, Airflow, etc.
  • Industry experience working with relational and NoSQL databases in a production environment.
  • Strong fundamentals in data structures, algorithms, and design patterns.
  • Design, implement, and maintain high-quality data infrastructure services, including but not limited to Data Lake, Kafka, Amazon Kinesis, and data access layers.
  • Develop robust and efficient DBT models and jobs to support analytics reporting and machine learning modeling.
  • Closely collaborating with the Analytics team for data modeling, reporting, and data ingestion.
  • Create scalable real-time streaming pipelines and offline ETL pipelines.
  • Design, implement, and manage a data warehouse that provides secure access to large datasets.
  • Continuously improve data operations by automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
  • Create engineering documentation for design, runbooks, and best practices.

AWS, Python, SQL, Bash, Design Patterns, ETL, Hadoop, Java, Kafka, Snowflake, Airflow, Algorithms, Cassandra, Data engineering, Data Structures, NoSQL, Spark, Communication Skills, CI/CD, RESTful APIs, Terraform, Written communication, Documentation, Data modeling, Debugging

Posted 2 days ago
Apply
Shown 10 out of 211