
Senior Data Engineer

Posted 20 days ago


💎 Seniority level: Senior, 5+ years

📍 Location: Canada, United Kingdom, India, IST

🔍 Industry: Software Development

🏢 Company: Loopio Inc.

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Python, SQL, Data Analysis, ETL, Jenkins, Machine Learning, Airflow, Data engineering, NoSQL, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Data visualization, Data modeling

Requirements:
  • 5+ years of experience in data engineering in a high-growth agile software development environment
  • Strong understanding of database concepts, modeling, SQL, query optimization
  • Ability to learn fast and translate data into actionable results
  • Experience developing in Python and Pyspark
  • Hands-on experience with AWS services (RDS, S3, Redshift, Glue, QuickSight, Athena, ECS)
  • Strong understanding of relational databases (RDS, MySQL) and NoSQL
  • Experience with ETL & Data warehousing, building fact & dimensional data models
  • Experience with data processing frameworks such as Spark / Databricks
  • Experience in developing Big Data solutions (migration, storage, processing)
  • Experience with CI/CD tools (Jenkins) and pipeline orchestration tools (Databricks Jobs, Airflow)
  • Experience working with data visualization and BI platforms (QuickSight, Tableau, Sisense, etc.)
  • Experience working with Clickstream data (Amplitude, Pendo, etc)
  • Experience building and supporting large-scale systems in a production environment
  • Strong communication, collaboration, and analytical skills
  • Demonstrated ability to work with a high degree of ambiguity, and leadership within a team (mentorship, ownership, innovation)
  • Ability to clearly communicate technical roadmap, challenges, and mitigation
Responsibilities:
  • Be responsible for building, evolving and scaling data platforms and ETL pipelines, with an eye towards the growth of our business and the reliability of our data
  • Promote data-driven decision-making across the organization through data expertise
  • Build advanced automation tooling for data orchestration, evaluation, testing, monitoring, administration, and data operations.
  • Integrate various data sources into our Data lake, including clickstream, relational, and unstructured data
  • Develop and maintain a feature store for use in analytics & modeling
  • Partner with data scientists to create predictive models that help drive insights and decisions, both in Loopio’s product and for internal teams (RevOps, Marketing, CX)
  • Work closely with stakeholders within and across teams to understand the data needs of the business and produce processes that enable a better product and support data-driven decision-making
  • Build scalable data pipelines using Databricks, AWS (Redshift, S3, RDS), and other cloud technologies
  • Build and support Loopio’s data warehouse (Redshift) and data lake (Databricks Delta Lake)
  • Orchestrate pipelines using workflow frameworks/tooling (a minimal Airflow sketch follows this list)
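
To make the orchestration bullet concrete, here is a minimal sketch of an Airflow DAG wiring an extract step into a Redshift load. It is illustrative only, not Loopio's actual pipeline: the DAG id, task names, and logic are invented, and it assumes Airflow 2.4+.

```python
# Hypothetical example only; names and logic are invented, Airflow 2.4+ assumed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_clickstream(**context):
    # Placeholder: pull raw clickstream events from S3 for the run date.
    print(f"extracting events for {context['ds']}")


def load_to_redshift(**context):
    # Placeholder: COPY the transformed partition into Redshift.
    print(f"loading partition {context['ds']} into Redshift")


with DAG(
    dag_id="clickstream_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract = PythonOperator(task_id="extract", python_callable=extract_clickstream)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    extract >> load  # run the extract task before the load task
```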

Related Jobs


📍 Germany, Italy, Netherlands, Portugal, Romania, Spain, UK

🧭 Full-Time

🔍 Wellness

  • You have a proven track record of designing and building robust, scalable, and maintainable data models and corresponding pipelines from business requirements.
  • You are skilled at engaging with engineering and product teams to elicit requirements.
  • You are comfortable with big data concepts, ensuring data is efficiently ingested, processed, and made available for data scientists, business analysts, and product teams.
  • You are experienced in maintaining data consistency across the entire data ecosystem.
  • You have experience maintaining and debugging data pipelines in production environments with high criticality, ensuring reliability and performance.
  • Develop and maintain efficient and scalable data models and structures to support analytical workloads.
  • Design, develop, and maintain data pipelines that transform and process large volumes of data while embedding business context and semantics.
  • Implement automated data quality checks to ensure consistency, accuracy, and reliability of data (a pandas sketch follows this list)
  • Ensure correct adoption and usage of Wellhub’s data by data practitioners across the company
  • Live the mission: inspire and empower others by genuinely caring for your own wellbeing and that of your colleagues. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.
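
As a concrete illustration of the automated data quality checks mentioned above, here is a minimal pandas sketch; the table, key, and column names are invented.

```python
# Hypothetical illustration of an automated data quality check.
import pandas as pd


def check_quality(df: pd.DataFrame, key: str, required: list[str]) -> list[str]:
    """Return human-readable failures; an empty list means all checks pass."""
    failures = []
    if df[key].duplicated().any():
        failures.append(f"duplicate values in key column '{key}'")
    for col in required:
        if df[col].isna().any():
            failures.append(f"nulls in required column '{col}'")
    return failures


# Invented sample data: a duplicate key and a missing value both get flagged.
events = pd.DataFrame({"user_id": [1, 2, 2], "plan": ["gold", None, "basic"]})
print(check_quality(events, key="user_id", required=["plan"]))
```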

SQL, Apache Airflow, Kubernetes, Apache Kafka, Data engineering, Spark, Data modeling

Posted 3 days ago
🔥 Senior Data Engineer

📍 Worldwide

🔍 Hospitality

🏢 Company: Lighthouse

  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up-to-date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations and have the ability to implement them
  • Great communication: Regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing, or with event sourcing technologies like Apache Kafka (a Beam sketch follows this list).
  • Familiarity with monitoring tools like Grafana & Prometheus.
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to enable them to bring their research to production-grade data solutions, using technologies like Airflow, dbt, or MLflow (but not limited to those)
  • As a part of a platform team, you will communicate effectively with teams across the entire engineering organisation, to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.
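
As a small illustration of the Apache Beam experience called for above, here is a minimal pipeline that aggregates events per type on the local DirectRunner; the data and transform labels are invented.

```python
# Hypothetical Beam pipeline; runs locally on the DirectRunner by default.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create events" >> beam.Create([("book", 1), ("cancel", 1), ("book", 1)])
        | "Sum per type" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)  # swap for a BigQuery sink in production
    )
```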

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 5 days ago

📍 Worldwide

🧭 Full-Time

NOT STATED
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them (a CI sketch follows this list).
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains—ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.
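
One possible shape for the CI/CD best practices mentioned above is a merge gate that builds and tests only the dbt models changed relative to production. This is a sketch under the assumption that dbt is installed and a production manifest has been saved to ./prod-artifacts; it is not Pleo's actual setup.

```python
# Hypothetical CI gate: build and test only dbt models changed vs. production.
# Assumes dbt is installed and prior run artifacts exist in ./prod-artifacts.
import subprocess
import sys

result = subprocess.run(
    ["dbt", "build", "--select", "state:modified+", "--state", "prod-artifacts"]
)
sys.exit(result.returncode)  # a non-zero exit code fails the CI job
```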

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 6 days ago

📍 United States, Canada

🧭 Full-Time

🔍 Software Development

  • Strong hands-on experience with Python and core Python Data Processing tools such as pandas, numpy, scipy, scikit
  • Experience with cloud tools and environments like Docker, Kubernetes, GCP, and/or Azure
  • Experience with Spark/PySpark
  • Experience with Data Lineage and Data Cataloging
  • Relational and non-relational database experience
  • Experience with Data Warehouses and Lakes, such as Bigquery, Databricks, or Snowflake
  • Experience in designing and building data pipelines that scale
  • Strong communication skills, with the ability to convey technical solutions to both technical and non-technical stakeholders
  • Experience working effectively in a fast-paced, agile environment as part of a collaborative team
  • Ability to work independently and as part of a team
  • Willingness and enthusiasm to learn new technologies and tackle challenging problems
  • Experience in Infrastructure as Code tools like Terraform
  • Advanced SQL expertise, including experience with complex queries, query optimization, and working with various database systems
  • Work with business stakeholders to understand their goals, challenges, and decisions
  • Assist with building solutions that standardize their data approach to common problems across the company
  • Incorporate observability and testing best practices into projects
  • Assist in the development of processes to ensure their data is trusted and well-documented
  • Effectively work with data analysts on refining the data model used for reporting and analytical purposes
  • Improve the availability and consistency of data points crucial for analysis
  • Standing up a reporting system in BigQuery from scratch, including data replication, infrastructure setup, dbt model creation, and integration with reporting endpoints (a BigQuery sketch follows this list)
  • Revamping orchestration and execution to reduce critical data delivery times
  • Database archiving to move data from a live database to cold storage
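
To ground the BigQuery reporting bullet, here is a minimal sketch using the official Python client; the dataset and table names are invented, and application-default credentials are assumed.

```python
# Hypothetical BigQuery query; dataset/table names are invented.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials
sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `analytics.events`  -- invented table
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.user_id, row.events)
```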

AWS, SQL, Cloud Computing, Data Analysis, ETL, Data engineering, Data visualization, Data modeling

Posted 13 days ago
🔥 Senior Data Engineer

📍 Canada

🧭 Full-Time

🔍 Fintech

🏢 Company: Coinme · 👥 51-100 · 💰 $772,801 Seed over 2 years ago · Cryptocurrency, Blockchain, Bitcoin, FinTech, Virtual Currency

  • 7+ years of experience with ETL, SQL, Power BI, Tableau, or similar technologies
  • Strong understanding of data modeling, database design, and SQL
  • Experience working with Apache Kafka or Amazon MSK (a consumer sketch follows this list)
  • Extensive experience delivering solutions on Snowflake or other cloud-based data warehouses
  • Proficiency in Python/R and familiarity with modern data engineering practices
  • Strong analytical and problem-solving skills
  • Experience with machine learning (ML)
  • Design, develop, and maintain scalable data pipelines.
  • Implement data ingestion frameworks.
  • Optimize data pipelines for performance.
  • Develop and deliver data assets.
  • Evaluate and improve existing data solutions.
  • Experience in data quality management.
  • Collaborate with engineers and product managers.
  • Lead the deployment and maintenance of data solutions.
  • Champion best practices in data development.
  • Conduct code reviews and provide mentorship.
  • Create and maintain process documentation.
  • Monitor data pipelines for performance.
  • Implement logging, monitoring, and alerting systems.
  • Drive the team’s Agile process.
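
As a concrete companion to the Kafka requirement above, here is a minimal kafka-python consumer sketch; the topic, broker address, and message shape are placeholders, not Coinme's systems.

```python
# Hypothetical kafka-python consumer; topic and broker are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",  # invented topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)  # hand off to the pipeline's transform step here
```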

Python, SQL, Agile, ETL, Machine Learning, Snowflake, Tableau, Apache Kafka, Data engineering, Data visualization, Data modeling

Posted 15 days ago

📍 United Kingdom

🧭 Full-Time

🔍 Cybersecurity

🏢 Company: Immersive

  • Proficient in Python programming, with experience using Plotly graphing libraries and building applications; experience with Flask and SQLAlchemy is a plus
  • Experience maintaining data pipelines, managing infrastructure as code, and implementing data model changes
  • Experience following software engineering best practices like version control and continuous integration
  • Strong proficiency using SQL in cloud data warehouses (e.g., BigQuery, Redshift, Snowflake), with comfort in performance optimization, data partitioning, and window functions
  • Experience with dbt for data transformation layer
  • Experience with IaC tooling such as Terraform or CloudFormation
  • Experience with BI tooling such as Power BI or Looker
  • Experience with AWS, Azure or GCP
  • Design, build, and maintain high-quality Python applications for customer-facing reporting (a Plotly sketch follows this list)
  • Maintain and develop data pipelines to ensure data quality and consistency
  • Collaborate closely with analytics engineers to implement data model changes
  • Apply domain knowledge to enable the rest of the business to access the data they need to make informed business decisions
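
To illustrate the Plotly-based reporting described above, here is a minimal sketch that renders a chart to HTML; the categories and counts are invented, and in a real application the figure would be served from a Flask view.

```python
# Hypothetical Plotly chart; data is invented for illustration.
import plotly.express as px

fig = px.bar(
    x=["phishing", "forensics", "cloud"],  # invented lab categories
    y=[42, 17, 29],
    labels={"x": "lab category", "y": "completions"},
    title="Lab completions by category",
)
fig.write_html("report.html")  # embed in a Flask view in production
```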

AWS, Python, SQL, Flask, Git, Snowflake, Data engineering, CI/CD, RESTful APIs, Terraform, Data visualization, Data modeling, Software Engineering

Posted 17 days ago
🔥 Senior Data Engineer

📍 Germany, Spain, United Kingdom, Austria

🧭 Full-Time

🔍 Software Development

🏢 Company: LocalStack · 👥 11-50 · 💰 $25,000,000 Series A 5 months ago · Cloud Computing, Information Technology, Software

  • Ability and experience working with non-technical stakeholders to gather requirements
  • Ability to define technical initiatives required to satisfy business requirements
  • Excellent knowledge of Python
  • Experience in designing real-time data ingestion solutions with massive volumes of data
  • Experience with AWS services commonly used in Data Engineering (like S3, ECS, Glue, EMR)
  • Experience with relational databases and data warehouses, data orchestration and ingestion tools, SQL, and BI tools
  • Experience working remotely / in async settings
  • Experience owning initiatives at the IC level
  • Experience providing guidance to junior engineers
  • Maintain, monitor, and optimize data ingestion pipelines for our current data platform.
  • Lead the development of our future data platform based on evolving business needs.
  • Shape the data team roadmap and contribute to long-term strategic planning.
  • Take full ownership of data ingestion from external sources, ensuring smooth functionality (a boto3 sketch follows this list).
  • Design and implement a robust data modelling and data lake solution architecture.
  • Provide technical leadership and mentorship to the data engineering team.
  • Collaborate with engineering teams to define and refine ingestion pipeline requirements.
  • Work with stakeholders to gather business questions and data needs.
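
As a minimal sketch of the S3-based ingestion ownership described above; the bucket and key are placeholders, not LocalStack's data platform.

```python
# Hypothetical S3 ingestion step using boto3; bucket/key are placeholders.
import json

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="raw-events", Key="2024/01/01/events.json")
events = json.loads(obj["Body"].read())
print(f"ingested {len(events)} events")
```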

AWS, Docker, Leadership, Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Written communication, Data visualization, Team management, Stakeholder management, Data modeling

Posted 27 days ago
🔥 Senior Data Engineer

📍 Canada

🧭 Full-Time

🔍 Data Engineering

🏢 Company: Vantage · 👥 1001-5000 · Cryptocurrency, Financial Services, FinTech, Trading Platform

  • 5+ years of experience in data engineering, big data, or distributed systems.
  • Strong expertise in Python, SQL (or equivalent big data processing frameworks).
  • Proficiency in ETL/ELT pipelines using Apache Airflow, or similar orchestration tools.
  • Experience working with real-time streaming data (Kafka, Kinesis, or Pub/Sub).
  • Strong understanding of data modelling, data warehousing, and distributed systems.
  • Familiarity with privacy-compliant data processing (GDPR, CCPA) for advertising/retail media use cases.
  • Design, develop, and optimize data pipelines, ETL/ELT workflows, and data warehouses to support large-scale retail media analytics.
  • Handle real-time and batch processing at scale (a streaming sketch follows this list)
  • Work closely with data scientists, analysts, software engineers, and product teams to ensure seamless data integration and access.
  • Implement robust monitoring, validation, and security controls to maintain high data reliability.
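
Here is a minimal PySpark Structured Streaming sketch in the spirit of the real-time bullet above; the broker and topic are placeholders, and it assumes the Spark-Kafka connector package is on the classpath.

```python
# Hypothetical streaming job; assumes the spark-sql-kafka connector is available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("retail-media-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "impressions")  # invented topic
    .load()
)
query = (
    stream.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("console")  # swap for a warehouse sink in production
    .start()
)
query.awaitTermination()
```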

Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data modeling

Posted 27 days ago

📍 United States, Canada

🧭 Full-Time

💸 105,825 - 136,950 CAD per year

🔍 Data Engineering

🏢 Company: Samsara · 👥 1001-5000 · 💰 Secondary Market over 4 years ago · 🫂 Last layoff almost 5 years ago · Cloud Data Services, Business Intelligence, Internet of Things, SaaS, Software

  • BS degree in Computer Science, Statistics, Engineering, or a related quantitative discipline
  • 6+ years of experience in a data engineering and data science-focused role
  • Proficiency in data manipulation and processing in SQL and Python
  • Expertise in building data pipelines against new API endpoints, working from their documentation
  • Proficiency in building ETL pipelines to handle large volumes of data
  • Demonstrated experience in designing data models at scale
  • Build and maintain highly reliable computed tables, incorporating data from various sources, including unstructured and highly sensitive data
  • Access, manipulate, and integrate external datasets with internal data
  • Build analytical and statistical models to identify patterns, anomalies, and root causes
  • Leverage SQL and Python to shape and aggregate data (a pandas sketch follows this list)
  • Incorporate generative AI tools (ChatGPT Enterprise) into production data pipelines and automated workflows
  • Collaborate closely with data scientists, data analysts, and Tableau developers to ship top quality analytic products
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices
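
As a small illustration of the shape-and-aggregate bullet above, a pandas sketch with invented telemetry data:

```python
# Hypothetical aggregation; the telemetry data is invented.
import pandas as pd

telemetry = pd.DataFrame(
    {"vehicle_id": ["v1", "v1", "v2"], "speed_kph": [88.0, 92.0, 61.0]}
)
summary = telemetry.groupby("vehicle_id")["speed_kph"].agg(["mean", "max"])
print(summary)  # per-vehicle mean and max speed
```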

Python, SQL, ETL, Tableau, API testing, Data engineering, Data science, Spark, Communication Skills, Analytical Skills, Data visualization, Data modeling

Posted 29 days ago
🔥 Senior Data Engineer

📍 India

🏢 Company: BlackStone eIT · 👥 251-500 · Augmented Reality, Robotics, Analytics, Project Management

  • 5+ years of experience in data engineering or a similar role.
  • Proficiency in SQL and experience with relational databases.
  • Hands-on experience with data pipeline tools and ETL processes.
  • Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus.
  • Experience with cloud-based data solutions is an advantage.
  • Design, implement, and maintain scalable data architectures and pipelines.
  • Develop ETL processes to facilitate smooth data transfer and transformation from various sources (an ETL sketch follows this list).
  • Collaborate with data scientists and analysts to fulfill data needs for analytics and reporting.
  • Optimize database performance and maintain data integrity across systems.
  • Conduct data quality checks and resolve discrepancies.
  • Mentor junior data engineers and provide technical guidance.
  • Stay current with emerging technologies and best practices in data engineering.
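
To make the ETL bullet concrete, here is a minimal standard-library sketch of an extract-transform-load step; the schema and values are invented.

```python
# Hypothetical ETL step with sqlite3; schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)", [(1, 1250), (2, 830)])
# Transform + load: derive a reporting table with amounts in dollars.
conn.execute(
    """CREATE TABLE fact_orders AS
       SELECT id, amount_cents / 100.0 AS amount_usd FROM raw_orders"""
)
print(conn.execute("SELECT * FROM fact_orders").fetchall())
```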

SQL, Apache Hadoop, Cloud Computing, Data Analysis, ETL, Data engineering, RDBMS, Spark, Data visualization, Data modeling, Data management

Posted 29 days ago