📍 Canada, United Kingdom, India
🧭 Full-Time
🔍 Software Development
🏢 Company: Loopio Inc.
- 5+ years of experience in data engineering in a high-growth agile software development environment
- Strong understanding of database concepts, modeling, SQL, query optimization
- Ability to learn fast and translate data into actionable results
- Experience developing in Python and PySpark
- Hands-on experience with AWS services (RDS, S3, Redshift, Glue, QuickSight, Athena, ECS)
- Strong understanding of relational databases (e.g., MySQL on RDS) and NoSQL databases
- Experience with ETL & Data warehousing, building fact & dimensional data models
- Experience with data processing frameworks such as Spark / Databricks
- Experience in developing Big Data solutions (migration, storage, processing)
- Experience with CI/CD tools (Jenkins) and pipeline orchestration tools (Databricks Jobs, Airflow)
- Experience working with data visualization and BI platforms (QuickSight, Tableau, Sisense, etc.)
- Experience working with clickstream data (Amplitude, Pendo, etc.)
- Experience building and supporting large-scale systems in a production environment
- Strong communication, collaboration, and analytical skills
- Demonstrated ability to work with a high degree of ambiguity and to lead within a team (mentorship, ownership, innovation)
- Ability to clearly communicate technical roadmap, challenges, and mitigation
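As an illustrative aside (not part of the posting itself), the fact and dimensional modeling called out above typically means designing star schemas and the analytical SQL that runs against them. A minimal, stdlib-only sketch with hypothetical table names:

```python
import sqlite3

# Minimal star-schema sketch: one dimension table and one fact table.
# All table and column names here are hypothetical illustrations.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL
);
CREATE TABLE fact_order (
    order_id     INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount       REAL NOT NULL
);
""")

# Load a few rows; in practice an ETL job would populate these tables.
cur.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])
cur.executemany("INSERT INTO fact_order VALUES (?, ?, ?)",
                [(10, 1, 99.0), (11, 1, 1.0), (12, 2, 50.0)])

# A typical analytical query against the star schema: revenue per customer.
cur.execute("""
SELECT d.customer_name, SUM(f.amount) AS revenue
FROM fact_order f
JOIN dim_customer d USING (customer_key)
GROUP BY d.customer_name
ORDER BY revenue DESC
""")
revenue = cur.fetchall()
print(revenue)  # [('Acme', 100.0), ('Globex', 50.0)]
```

The same shape scales up directly: swap SQLite for Redshift and the `executemany` loads for Glue or Spark jobs, and the fact/dimension split stays the same.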
- Build, evolve, and scale data platforms and ETL pipelines, with an eye toward the growth of our business and the reliability of our data
- Promote data-driven decision-making across the organization through data expertise
- Build advanced automation tooling for data orchestration, evaluation, testing, monitoring, administration, and data operations
- Integrate various data sources into our Data lake, including clickstream, relational, and unstructured data
- Develop and maintain a feature store for use in analytics and modeling
- Partner with data scientists to create predictive models to help drive insights and decisions, both in Loopio’s product and internal teams (RevOps, Marketing, CX)
- Work closely with stakeholders within and across teams to understand the data needs of the business and produce processes that enable a better product and support data-driven decision-making
- Build scalable data pipelines using Databricks, AWS (Redshift, S3, RDS), and other cloud technologies
- Build and support Loopio’s data warehouse (Redshift) and data lake (Databricks Delta Lake)
- Orchestrate pipelines using workflow frameworks/tooling
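To give a flavor of the orchestration responsibility above (a hedged sketch, not Loopio's actual stack): frameworks like Airflow or Databricks Jobs declare pipelines as tasks with upstream dependencies and run them in topological order. A stdlib-only analogue of that pattern, with a hypothetical extract → transform → validate → load chain:

```python
from graphlib import TopologicalSorter

# Record the order in which tasks actually run.
results = []

# Hypothetical pipeline steps; real tasks would call Spark jobs, SQL, etc.
def extract():   results.append("extract")
def transform(): results.append("transform")
def validate():  results.append("validate")
def load():      results.append("load")

# DAG as a mapping: task name -> set of upstream dependencies,
# mirroring how Airflow wires tasks with >> operators.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}
tasks = {"extract": extract, "transform": transform,
         "validate": validate, "load": load}

# static_order() yields tasks only after all their dependencies.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()

print(results)  # ['extract', 'transform', 'validate', 'load']
```

A real orchestrator adds scheduling, retries, and alerting on top of this dependency-ordered execution, but the DAG-of-tasks model is the same.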
AWS, Python, SQL, Data Analysis, ETL, Jenkins, Machine Learning, Airflow, Data Engineering, NoSQL, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Data Visualization, Data Modeling
Posted 20 days ago