
Lead Data Engineer

Posted over 1 year ago


Related Jobs


πŸ“ India

🧭 Full-Time

πŸ” Fintech

🏒 Company: Tide

  • 4+ years of development experience with Snowflake or a similar data warehouse technology.
  • Experience with dbt and modern data stack technologies including Snowflake, Apache Airflow, and Fivetran.
  • Extensive experience in writing advanced SQL statements and performance tuning.
  • Experience in data ingestion techniques using tools like Fivetran.
  • Experience in data modeling and optimization of existing/new data models.
  • Experience in data mining, ETL, and working with large-scale datasets.
  • Experience architecting analytical databases in a Data Mesh architecture is a plus.
  • Strong technical documentation skills and good communication skills in English.
  • Developing end-to-end ETL/ELT pipelines in collaboration with Data Analysts from business functions.
  • Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis.
  • Mentoring junior engineers in the team and being a 'go-to' expert for data technologies.
  • Troubleshooting and resolving technical issues; improving data pipeline delivery.
  • Translating business requirements into technical requirements.
  • Owning the end-to-end delivery of data models and reports.
  • Performing exploratory data analysis to ensure data quality.
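The requirements above emphasise advanced SQL and ELT transformation on a warehouse such as Snowflake. As a purely illustrative sketch (using Python's built-in sqlite3 in place of a warehouse; table and column names are hypothetical, not from the listing), here is the window-function-style deduplication an ELT model layer routinely performs:

```python
import sqlite3

# Toy ELT transform: keep only the latest-loaded record per user,
# via ROW_NUMBER() -- the kind of "advanced SQL" a dbt model on
# Snowflake would express. Schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INT, amount REAL, loaded_at TEXT);
    INSERT INTO raw_events VALUES
        (1, 10.0, '2024-01-01'),
        (1, 12.0, '2024-01-02'),
        (2,  7.5, '2024-01-01');
""")
rows = conn.execute("""
    SELECT user_id, amount FROM (
        SELECT user_id, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id
                   ORDER BY loaded_at DESC
               ) AS rn
        FROM raw_events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(rows)  # latest record per user: [(1, 12.0), (2, 7.5)]
```

In a production stack the same logic would live in a dbt model and be scheduled by Airflow rather than run inline.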

SQL, Apache Airflow, ETL, Snowflake, Data modeling

Posted 6 days ago

πŸ“ UK

🧭 Full-Time

πŸ” Software Development

🏒 Company: Aker Systems · πŸ‘₯ 101-250 · πŸ’° funding over 4 years ago · Cloud Data Services, Business Intelligence, Analytics, Software

  • Data pipeline development using data processing technologies and frameworks
  • Agile or other rapid application development methods
  • Data modelling and understanding of different data structures and their benefits and limitations under particular use cases
  • Experience in Public Cloud services, such as AWS. Practical experience with core services such as EC2, RDS, Lambda, Athena & Glue would be even better!
  • Configuring and tuning Relational and NoSQL databases, including both query processing and query planning, or other data processing infrastructure
  • Programming or scripting languages, such as Python
  • Test Driven Development with appropriate tools and frameworks
  • Code, test, and document new or modified data pipelines that meet functional / non-functional business requirements
  • Conduct logical and physical database design
  • Expand and grow data platform capabilities to solve new data and analytics problems
  • Conduct data analysis, identifying feasible solutions and enhancements to data processing challenges
  • Ensure that data models are consistent with the data architecture (e.g. entity names, relationships and definitions)
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
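The list above calls for Test Driven Development applied to data pipelines. One common way to make pipeline code testable is to isolate each transform as a pure function and write the test first; a minimal sketch (function and field names are hypothetical, not from the listing):

```python
# TDD-style sketch: a pure transform function is trivial to unit-test
# before it is wired into any pipeline framework.
def normalise_record(raw: dict) -> dict:
    """Coerce types, trim/lowercase the name, and drop unknown keys."""
    return {
        "id": int(raw["id"]),
        "name": raw["name"].strip().lower(),
        "amount": float(raw.get("amount", 0) or 0),
    }

def test_normalise_record():
    out = normalise_record({"id": "7", "name": "  Alice ",
                            "amount": "3.5", "junk": 1})
    assert out == {"id": 7, "name": "alice", "amount": 3.5}

test_normalise_record()
```

Keeping I/O (database reads, S3 writes) outside such functions is what makes the TDD loop fast in practice.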

AWS, Python, SQL, Agile, Amazon RDS, Data Analysis, Data engineering, NoSQL, CI/CD, Linux, Terraform, Data modeling, Scripting

Posted 8 days ago

πŸ“ Europe

🧭 Full-Time

πŸ” Supply Chain Risk Analytics

🏒 Company: Everstream Analytics · πŸ‘₯ 251-500 · πŸ’° $50,000,000 Series B almost 2 years ago · Productivity Tools, Artificial Intelligence (AI), Logistics, Machine Learning, Risk Management, Analytics, Supply Chain Management, Procurement

  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, OpenSearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar.
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.
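The responsibilities above centre on real-time processing with Apache Kafka/MSK. The per-message logic of such a consumer often reduces to windowed aggregation; a toy, framework-free sketch of that shape (the window size and event values are illustrative, and a real consumer would read from a Kafka topic rather than a list):

```python
from collections import deque

# Sliding-window average over an event stream -- the kind of
# per-message aggregation a real-time consumer might maintain.
def rolling_means(events, window=3):
    buf = deque(maxlen=window)  # oldest value drops out automatically
    out = []
    for value in events:
        buf.append(value)
        out.append(sum(buf) / len(buf))
    return out

print(rolling_means([10, 20, 30, 40]))  # [10.0, 15.0, 20.0, 30.0]
```

In production the same logic would typically be expressed with a stream-processing library's windowing primitives instead of a hand-rolled deque.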

AWS, PostgreSQL, Python, SQL, ETL, Apache Kafka, NoSQL, Spark, Data modeling

Posted 23 days ago

πŸ“ United States of America

🧭 Full-Time

πŸ’Έ 140,000 - 170,000 USD per year

πŸ” Insurance

🏒 Company: joinroot

  • 4+ years as a software engineer.
  • 2+ years leading software teams.
  • Expertise in Python, Terraform, SQL, and Spark.
  • Expertise in Cloud Architecture.
  • Experience with telematics or sensor data collection systems.
  • Proven leadership of projects across multiple teams and functional domains.
  • Excellent communication skills with engineering colleagues and senior business leaders.
  • Partner with Marketing, Product, Data Science, Analytics, and Insurance experts to set the strategy for the quarters to come.
  • Identify and socialize important technical initiatives that increase the effectiveness of products, systems, and teams.
  • Coach and guide engineers in planning experiments and projects aligned with strategic objectives.
  • Contribute code each development cycle to advance the team’s impact.
  • Lead incident response to improve system resiliency.
  • Coordinate with Staff Engineers to establish and evangelize standards and best practices.

Leadership, Python, SQL, Strategy, Data science, Spark, Communication Skills, Collaboration, Terraform

Posted 2 months ago
πŸ”₯ Lead Data Engineer
Posted 3 months ago

🧭 Full-Time

πŸ’Έ 190,000 - 245,000 USD per year

πŸ” App and access management

🏒 Company: Lumos · πŸ‘₯ 51-100 · πŸ’° $35,000,000 Series B 9 months ago · Security, Information Technology, Identity Management, Collaboration, Software

  • Extensive experience designing and implementing medallion architectures (bronze, silver, gold layers) or similar data warehouse paradigms.
  • Skilled in optimizing data pipelines for both batch and real-time processing.
  • Proficiency in deploying data pipelines using CI/CD tools and integrating automated data quality checks.
  • Expertise in advanced SQL, ETL processes, and data transformation techniques.
  • Strong programming skills in Python.
  • Ability to work closely with AI engineers, data scientists, product engineers, product managers, and other stakeholders.
  • Architect, build, and maintain cutting-edge data pipelines for AI products and in-product analytics.
  • Ensure scalability, reliability, and quality of analytics data infrastructure.
  • Integrate usage, spend, compliance, and access data to drive business insights.
  • Focus on testing, automation, and best practices to transform complex data into actionable intelligence.
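The first requirement above names medallion architecture (bronze, silver, gold layers). As a purely illustrative sketch of the layering idea in plain Python (a warehouse implementation would express each layer as tables or models; all names and data here are hypothetical):

```python
# Medallion-style flow: bronze holds raw ingests untouched, silver
# cleans and deduplicates, gold aggregates for analytics consumers.
bronze = [
    {"user": "a", "spend": "10"},
    {"user": "a", "spend": "10"},   # duplicate ingest, dropped in silver
    {"user": "b", "spend": "5"},
]

def to_silver(rows):
    seen, clean = set(), []
    for r in rows:
        key = (r["user"], r["spend"])
        if key not in seen:
            seen.add(key)
            clean.append({"user": r["user"], "spend": float(r["spend"])})
    return clean

def to_gold(rows):
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["spend"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # {'a': 10.0, 'b': 5.0}
```

The design point the layering buys is that each stage is independently testable and re-runnable, which is what the CI/CD and data-quality-check requirements above rely on.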

πŸ“ United States

πŸ” Data Management

🏒 Company: Demyst · πŸ‘₯ 51-100 · πŸ’° funding about 2 years ago · Big Data, Financial Services, Broadcasting, Data Integration, Analytics, Information Technology, FinTech, Software

  • Bachelor's degree or higher in Computer Science, Data Engineering, or related fields. Equivalent work experience is also highly valued.
  • 5-10 years of experience in data engineering, software engineering, or client deployment roles, with at least 3 years in a leadership capacity.
  • Strong leadership skills, including the ability to mentor and motivate a team, lead through change, and drive outcomes.
  • Expertise in designing, building, and optimizing ETL/ELT data pipelines using Python, JavaScript, Golang, Scala, or similar languages.
  • Experience in managing large-scale data processing environments, including Databricks and Spark.
  • Proven experience with Apache Airflow to orchestrate data pipelines and manage workflow automation.
  • Deep knowledge of cloud services, particularly AWS (EC2/ECS, Lambda, S3), and their role in data engineering.
  • Hands-on experience with both SQL and NoSQL databases, with a deep understanding of data modeling and architecture.
  • Strong ability to collaborate with clients and cross-functional teams, delivering technical solutions that meet business needs.
  • Proven experience in unit testing, integration testing, and engineering best practices to ensure high-quality code.
  • Familiarity with agile project management tools (JIRA, Confluence, etc.) and methodologies.
  • Experience with data visualization and analytics tools such as Jupyter Lab, Metabase, Tableau.
  • Strong communicator and problem solver, comfortable working in distributed teams.
  • Lead the configuration, deployment, and maintenance of data solutions on the Demyst platform to support client use cases.
  • Supervise and mentor the local and distributed data engineering team, ensuring best practices in data architecture, pipeline development, and deployment.
  • Recruit, train, and evaluate technical talent, fostering a high-performing, collaborative team culture.
  • Contribute hands-on to coding, code reviews, and technical decision-making, ensuring scalability and performance.
  • Design, build, and optimize data pipelines, leveraging tools like Apache Airflow to automate workflows and manage large datasets effectively.
  • Work closely with clients to advise on data engineering best practices, including data cleansing, transformation, and storage strategies.
  • Implement solutions for data ingestion from various sources, ensuring the consistency, accuracy, and availability of data.
  • Lead critical client projects, managing engineering resources, project timelines, and client engagement.
  • Provide technical guidance and support for complex enterprise data integrations with third-party systems (e.g., AI platforms, data providers, decision engines).
  • Ensure compliance with data governance and security protocols when handling sensitive client data.
  • Develop and maintain documentation for solutions and business processes related to data engineering workflows.
  • Other duties as required.

AWS, Leadership, Project Management, Python, SQL, Agile, Apache Airflow, ETL, JavaScript, Jira, Tableau, Strategy, Data engineering, Go, NoSQL, Spark

Posted 4 months ago

Related Articles

Posted 6 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 6 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 6 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 6 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 6 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.