
Lead Data Engineer

Posted 2 months ago

πŸ’Ž Seniority level: Lead, Extensive experience

πŸ’Έ Salary: 190,000 - 245,000 USD per year

πŸ” Industry: App and access management

🏒 Company: Lumos πŸ‘₯ 51-100 πŸ’° $35,000,000 Series B 8 months ago · Security, Information Technology, Identity Management, Collaboration, Software

⏳ Experience: Extensive experience

Requirements:
  • Extensive experience designing and implementing medallion architectures or similar data warehouse paradigms.
  • Skilled in optimizing data pipelines for batch and real-time processing.
  • Proficiency in deploying data pipelines using CI/CD tools and integrating automated data quality checks.
  • Expertise in advanced SQL, ETL processes, and data transformation techniques.
  • Strong programming skills in Python.
  • Demonstrated ability to collaborate with AI engineers, data scientists, and product teams.
Responsibilities:
  • Architect, build, and maintain cutting-edge data pipelines that empower AI products, in-product analytics, and internal reporting.
  • Ensure scalability, reliability, and quality of analytics data infrastructure.
  • Enable seamless integration of usage, spend, compliance, and access data to drive business insights and deliver value.
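
For illustration only (not part of the listing): the bronze/silver/gold layers of a medallion architecture, combined with the automated data quality checks this role calls for, can be sketched in plain Python. All record fields, checks, and layer functions below are hypothetical.

```python
# Hypothetical medallion-style flow.
# Bronze: raw records as ingested; Silver: cleaned/validated; Gold: aggregated.

def to_silver(bronze_rows):
    """Clean raw rows, dropping records that fail basic quality checks."""
    silver = []
    for row in bronze_rows:
        if row.get("user_id") is None:  # quality check: required key present
            continue
        amount = row.get("spend", 0)
        if not isinstance(amount, (int, float)) or amount < 0:  # quality check: valid spend
            continue
        silver.append({"user_id": row["user_id"], "spend": float(amount)})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned rows into a reporting-ready summary per user."""
    totals = {}
    for row in silver_rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["spend"]
    return totals

bronze = [
    {"user_id": "a", "spend": 10},
    {"user_id": "a", "spend": 5},
    {"user_id": None, "spend": 3},   # fails quality check: missing key
    {"user_id": "b", "spend": -1},   # fails quality check: negative spend
]
gold = to_gold(to_silver(bronze))    # {"a": 15.0}
```

In a real pipeline each layer would be a persisted table and the checks would run in CI/CD, but the layering idea is the same.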

Related Jobs


πŸ“ United States

πŸ” Defense and Financial Technology

🏒 Company: 540

Requirements:
  • Bachelor's Degree.
  • 8+ years of related experience.
  • Well-versed in Python.
  • Experience building and managing data pipelines.
  • Proficient in data analytics tools such as Databricks.
  • Experience building dashboards using PowerBI and/or similar tools.
  • Experience working via the terminal / command line.
  • Experience consuming data via APIs.
  • Hands-on experience using Jira and Confluence.

Responsibilities:
  • Working directly with government leadership managing teams, customers, and data requirements.
  • Assisting Audit teams with monthly data ingestions from Army systems.
  • Management of data initiatives and small projects from start to finish.
  • Working with Army FM&C Lead to prioritize Advana data product requirements.
  • Developing recurring and ad hoc financial datasets.
  • Developing Advana datasets and analytical products to enable Army reporting on all Financial System data.
  • Reviewing data pipeline code via GitLab to ensure it meets team and code standards.
  • Overseeing overall architecture and technical direction for FM&C data projects.

AWS, Python

Posted 3 days ago

πŸ“ Brazil

🧭 Full-Time

πŸ” Digital Engineering and Modernization

🏒 Company: Encora πŸ‘₯ 10001-10001 πŸ’° $200,000,000 Private over 5 years ago · Big Data, Cloud Computing, Software

Requirements:
  • Experience in data modeling.
  • Experience developing and maintaining data pipelines.
  • Proficiency in SQL.
  • Proficiency in Python.
  • Experience with AWS Redshift.
  • Experience with Apache Airflow.
  • Familiarity with BI tools.

Responsibilities:
  • Develop and maintain efficient and scalable data pipelines.
  • Model and transform data to meet analysis and reporting needs.
  • Collaborate closely with the customer, including BI and software engineering.
  • Lead other BI or DE team members.
  • Create and maintain detailed technical documentation.
  • Develop dashboards in AWS Quicksight with support from a BI Analyst.

Python, SQL, Apache Airflow, Business Intelligence, Data modeling

Posted 4 days ago

πŸ“ Romania, UK, Netherlands, Belgium

🧭 Full-Time

πŸ” Digital consultancy

🏒 Company: Qodea

Requirements:
  • Strong experience as a Senior/Principal Cloud Data Engineer with a solid track record of data migration.
  • Experience working on projects within large enterprises.
  • Technical leadership experience on projects, contributing to decision making.
  • Experience with Python and Scala or Java coding.
  • Familiarity with big data processing tools like Hadoop or Spark.
  • GCP experience, including services like BigQuery and Cloud Functions.
  • Experience with Terraform.
  • Prior experience in customer-facing consultancy roles is desirable.

Responsibilities:
  • Lead client engagements and project delivery.
  • Consult, design, and coordinate architecture to modernise infrastructure for performance, scalability, latency, and reliability.
  • Identify, scope, and participate in the design and delivery of cloud data platform solutions.
  • Deliver highly scalable big data architecture solutions using Google Cloud Technology.
  • Document and share technical best practices/insights with engineering colleagues and the Data Engineering community.
  • Mentor and develop engineers within the Qodea Data Team and within customers' engineering teams.

Leadership, Python, SQL, Agile, GCP, Hadoop, Java, Data engineering, Spark, Terraform

Posted about 1 month ago
πŸ”₯ Lead Data Engineer I
Posted about 1 month ago

πŸ“ United States of America

🧭 Full-Time

πŸ’Έ Salary: 140,000 - 170,000 USD per year

πŸ” Insurance

🏒 Company: joinroot

Requirements:
  • 4+ years as a software engineer.
  • 2+ years leading software teams.
  • Expertise in Python, Terraform, SQL, and Spark.
  • Expertise in Cloud Architecture.
  • Experience with telematics or sensor data collection systems.
  • Proven leadership of projects across multiple teams and functional domains.
  • Excellent communication skills with engineering colleagues and senior business leaders.

Responsibilities:
  • Partner with Marketing, Product, Data Science, Analytics, and Insurance experts to set the strategy for the quarters to come.
  • Identify and socialize important technical initiatives that increase the effectiveness of products, systems, and teams.
  • Coach and guide engineers in planning experiments and projects aligned with strategic objectives.
  • Contribute code each development cycle to advance the team’s impact.
  • Lead incident response to improve system resiliency.
  • Coordinate with Staff Engineers to establish and evangelize standards and best practices.

Leadership, Python, SQL, Strategy, Data science, Spark, Communication Skills, Collaboration, Terraform

Posted about 1 month ago

πŸ“ Latam

🧭 Full-Time

πŸ’Έ Salary: 100,000 - 120,000 USD per year

πŸ” Staff augmentation

🏒 Company: Nearsure πŸ‘₯ 501-1000 · Staffing Agency, Outsourcing, Software

Requirements:
  • Bachelor's Degree in Computer Science, Engineering, or a related field.
  • 5+ Years of experience working with Microsoft SQL and data engineering.
  • 5+ Years of experience managing data warehouse environments working with star schema architecture or data lake environments.
  • 3+ Years of experience working with Python.
  • 3+ Years of experience working with Power BI.
  • 3+ Years of experience working with ETL processes in SSIS.
  • 2+ Years of experience working with Azure Data Factory AND/OR Azure Synapse Analytics.
  • 1+ Years of experience working with Power BI Report Builder or SSRS.
  • Microsoft Certification DP-600: Fabric Analytics Engineer Associate.
  • Experience with Azure DevOps for code deployment.
  • Advanced English Level is required.

Responsibilities:
  • Design, develop, and maintain scalable data architectures using SQL, stored procedures, and ETL processes.
  • Ensure robust data pipelines and efficient data flow across systems.
  • Create and manage interactive and insightful dashboards using Power BI.
  • Collaborate with data analysts to translate business requirements into actionable data insights.
  • Oversee the management and optimization of data warehouse environments, ensuring data integrity and performance.
  • Utilize Azure DevOps for code deployment and continuous integration.
  • Ensure seamless integration of data solutions within the existing infrastructure.
  • Develop technical roadmaps and prototypes for data engineering projects.
  • Stay up to date with the latest trends and best practices in data engineering.
  • Convene with various stakeholders to collect, document, and prioritize needs.
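
As a hypothetical illustration of the star schema architecture this role requires (all table and column names below are invented), a minimal fact table joined to its dimensions can be sketched with Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical minimal star schema: one fact table keyed to two dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales VALUES (20240101, 1, 100.0), (20240201, 1, 50.0), (20240101, 2, 25.0);
""")

# A typical star-schema query: join the fact to a dimension and aggregate.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
# rows: [('gadget', 25.0), ('widget', 150.0)]
```

In a warehouse like Azure Synapse the same shape applies at scale: a central fact table with foreign keys out to denormalized dimensions, queried with joins and aggregations.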

Python, SQL, ETL, Azure, Data engineering, Data visualization

Posted 3 months ago
πŸ”₯ Lead Data Engineer
Posted 3 months ago

πŸ“ India

🧭 Full-Time

πŸ” Digital engineering and modernization

🏒 Company: Encora πŸ‘₯ 10001-10001 πŸ’° $200,000,000 Private over 5 years ago · Big Data, Cloud Computing, Software

Requirements:
  • 7-10 years of strong development experience performing ETL and/or data pipeline implementations.
  • Expert in programming languages, preferably Python.
  • Expert in delivering end-to-end analytic solutions using AWS services (EMR, Airflow, S3, Athena, Kinesis, Redshift).
  • Experience in batch technologies like Hadoop, Hive, Athena, Presto.
  • Strong SQL skills, including query optimization, schema design, complex analytics.
  • Expert in data modeling and metadata management (e.g., AWS Glue Catalog).
  • Experience with deployment tools such as GitHub Actions, Jenkins, and AWS CodePipeline.
  • Experience with data quality tools such as Deequ or Great Expectations is nice to have.

Responsibilities:
  • Collaborate and partner with Business Analyst teams located in US and EMEA regions.
  • Interface across our Business Analyst and Data Science teams.
  • Play a key role in integrating new data sources into our data & analytical ecosystem over AWS cloud.
  • Implement data lake solutions while addressing common data concerns, such as data quality, data governance.
  • Set the standard for technical excellence as we move and build our data ecosystem in the cloud.
  • Understand stakeholders' common data problems and deliver scalable solutions.

AWS, Leadership, Python, SQL, Data Analysis, ETL, Hadoop, Jenkins, Airflow, Data science

Posted 3 months ago

πŸ“ United States

πŸ” Data Management

🏒 Company: Demyst πŸ‘₯ 51-100 πŸ’° about 2 years ago · Big Data, Financial Services, Broadcasting, Data Integration, Analytics, Information Technology, FinTech, Software

Requirements:
  • Bachelor's degree or higher in Computer Science, Data Engineering, or related fields. Equivalent work experience is also highly valued.
  • 5-10 years of experience in data engineering, software engineering, or client deployment roles, with at least 3 years in a leadership capacity.
  • Strong leadership skills, including the ability to mentor and motivate a team, lead through change, and drive outcomes.
  • Expertise in designing, building, and optimizing ETL/ELT data pipelines using Python, JavaScript, Golang, Scala, or similar languages.
  • Experience in managing large-scale data processing environments, including Databricks and Spark.
  • Proven experience with Apache Airflow to orchestrate data pipelines and manage workflow automation.
  • Deep knowledge of cloud services, particularly AWS (EC2/ECS, Lambda, S3), and their role in data engineering.
  • Hands-on experience with both SQL and NoSQL databases, with a deep understanding of data modeling and architecture.
  • Strong ability to collaborate with clients and cross-functional teams, delivering technical solutions that meet business needs.
  • Proven experience in unit testing, integration testing, and engineering best practices to ensure high-quality code.
  • Familiarity with agile project management tools (JIRA, Confluence, etc.) and methodologies.
  • Experience with data visualization and analytics tools such as Jupyter Lab, Metabase, Tableau.
  • Strong communicator and problem solver, comfortable working in distributed teams.

Responsibilities:
  • Lead the configuration, deployment, and maintenance of data solutions on the Demyst platform to support client use cases.
  • Supervise and mentor the local and distributed data engineering team, ensuring best practices in data architecture, pipeline development, and deployment.
  • Recruit, train, and evaluate technical talent, fostering a high-performing, collaborative team culture.
  • Contribute hands-on to coding, code reviews, and technical decision-making, ensuring scalability and performance.
  • Design, build, and optimize data pipelines, leveraging tools like Apache Airflow to automate workflows and manage large datasets effectively.
  • Work closely with clients to advise on data engineering best practices, including data cleansing, transformation, and storage strategies.
  • Implement solutions for data ingestion from various sources, ensuring the consistency, accuracy, and availability of data.
  • Lead critical client projects, managing engineering resources, project timelines, and client engagement.
  • Provide technical guidance and support for complex enterprise data integrations with third-party systems (e.g., AI platforms, data providers, decision engines).
  • Ensure compliance with data governance and security protocols when handling sensitive client data.
  • Develop and maintain documentation for solutions and business processes related to data engineering workflows.
  • Other duties as required.
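
As a hedged aside on the unit-testing practice this listing emphasizes (the record fields and normalization rules below are hypothetical): writing pipeline transforms as pure functions lets them be tested without any pipeline infrastructure.

```python
# Hypothetical example: a pipeline transform written as a pure function,
# so it can be unit-tested in isolation.

def transform(records):
    """Normalize raw client records: trim/case names, standardize country codes."""
    country_map = {"united states": "US", "u.s.": "US", "brazil": "BR"}
    out = []
    for r in records:
        name = r["name"].strip().title()
        country = country_map.get(r["country"].strip().lower(),
                                  r["country"].strip().upper())
        out.append({"name": name, "country": country})
    return out

# A unit test for the transform, runnable with no database or orchestrator.
def test_transform():
    raw = [{"name": "  ada lovelace ", "country": "United States"},
           {"name": "alan turing", "country": "uk"}]
    assert transform(raw) == [
        {"name": "Ada Lovelace", "country": "US"},
        {"name": "Alan Turing", "country": "UK"},
    ]

test_transform()
```

The same function could then be wired into an Airflow task or Spark job unchanged, which is what makes this style a common engineering best practice.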

AWS, Leadership, Project Management, Python, SQL, Agile, Apache Airflow, ETL, Javascript, Jira, Tableau, Strategy, Airflow, Data engineering, Go, Nosql, Spark

Posted 3 months ago

Related Articles

Posted 5 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 5 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 5 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 5 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 5 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.