Senior Analytics Engineer

Posted 2024-09-29

💎 Seniority level: Senior, 5+ years

📍 Location: Europe, APAC, Americas

🔍 Industry: Technology

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: Leadership, Python, SQL, ETL, Git, Snowflake, Strategy, Data engineering, Communication Skills, Collaboration, CI/CD

Requirements:
  • Experience: 5+ years in data engineering or analytics engineering roles, with a proven track record of leading complex data projects and initiatives.
  • Technical Expertise: Deep expertise in SQL, DBT, and data modeling, with a strong understanding of data pipeline design, ETL processes, and data warehousing (a sketch follows this list).
  • Software Engineering Skills: Proficiency in software engineering principles, including CI/CD pipelines, version control (e.g., Git), and scripting languages (e.g., Python).
  • Data Tools Proficiency: Hands-on experience with tools like Snowflake, DBT, and Looker. Familiarity with additional tools and platforms (e.g., AWS, Kubernetes) is a plus.
  • Problem-Solving: Strong analytical and problem-solving skills, with the ability to diagnose and resolve complex technical issues related to data infrastructure.
  • Leadership: Demonstrated ability to mentor and lead junior engineers, with a focus on fostering a collaborative and high-performance team environment.
  • Communication: Excellent communication skills, with the ability to clearly and concisely convey complex technical concepts to both technical and non-technical stakeholders.
  • Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
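
To make the SQL/DBT expectations above concrete, here is a minimal sketch of a dbt-style model; it is purely illustrative, and every table and column name (stg_orders, stg_customers, and so on) is an assumption, not a reference to any real project.

```sql
-- models/marts/fct_orders.sql (hypothetical dbt model)
-- Joins staged orders to staged customers to build a simple fact table.
with orders as (
    select * from {{ ref('stg_orders') }}
),

customers as (
    select * from {{ ref('stg_customers') }}
)

select
    orders.order_id,
    orders.customer_id,
    customers.customer_region,
    orders.order_total,
    orders.ordered_at
from orders
left join customers
    on orders.customer_id = customers.customer_id
```
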
Responsibilities:
  • Data Pipeline Leadership: Design, develop, and maintain highly scalable and efficient data pipelines, ensuring timely and accurate collection, transformation, and integration of data from various sources.
  • Advanced Data Modeling: Architect and implement robust data models and data warehousing solutions that enable efficient storage, retrieval, and analysis of large, complex datasets.
  • Cross-Functional Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data requirements, translating them into actionable data models and insights.
  • Data Quality Assurance: Implement and oversee rigorous data validation, cleansing, and error-handling mechanisms to maintain high data quality and reliability (see the example after this list).
  • Performance Optimization: Continuously monitor and optimize data pipeline performance, identifying and resolving bottlenecks and inefficiencies to maintain optimal system responsiveness.
  • Mentorship and Leadership: Provide guidance and mentorship to junior analytics engineers, fostering a collaborative and learning-oriented environment.
  • Strategic Contribution: Contribute to the strategic direction of data initiatives, staying abreast of industry best practices, emerging technologies, and trends in data engineering and analytics.
  • Documentation & Knowledge Sharing: Build and maintain user-facing documentation for key processes, metrics, and data models to enhance the data-driven culture within the organization.

Related Jobs

📍 Brazil

🧭 Full-Time

🔍 Real Estate

🏢 Company: Grupo QuintoAndar

  • Experience in building multidimensional data models (Star and/or Snowflake schema); a sketch follows this list.
  • Strong SQL skills and comfort with complex SQL.
  • Familiarity with ELT/ETL pipelines.
  • Experience with Python.
  • Knowledge of columnar storage solutions (e.g., Amazon Redshift, Apache Parquet).
  • 3 or more years of experience building pipelines.
  • Proficient in English and/or Spanish.
  • An open mind, free of any prejudice.
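
As a minimal illustration of the star-schema requirement above, the sketch below defines one fact table referencing two dimension tables. All table and column names are invented for the example.

```sql
-- Hypothetical star schema: a rentals fact table with two dimensions.
create table dim_property (
    property_key  integer primary key,
    city          varchar,
    property_type varchar
);

create table dim_date (
    date_key  integer primary key,
    full_date date,
    year      integer,
    month     integer
);

create table fct_rentals (
    rental_key   integer primary key,
    property_key integer references dim_property (property_key),
    date_key     integer references dim_date (date_key),
    rent_amount  numeric(12, 2)
);
```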

  • Communicate with different stakeholders to understand their primary needs.
  • Create and edit data pipelines, considering business logic, aggregation, and data quality.
  • Investigate inconsistencies and trace sources of differences (data troubleshooting).
  • Maintain and promote code and pipeline standards through code reviews.
  • Create robust data models that run reliably in a production environment.
  • Ensure self-service analytics for business areas.

🪄 Skills: Python, SQL, ETL, Snowflake, Communication Skills, Collaboration

Posted 2024-10-24

📍 Brazil

🧭 Full-Time

🔍 Real Estate

🏢 Company: Grupo QuintoAndar

  • Experience in building multidimensional data models (Star and/or Snowflake schema).
  • Strong SQL skills and comfort with complex SQL.
  • Familiarity with ELT/ETL pipelines.
  • Experience with Python.
  • Knowledge of columnar storage solutions (e.g., Amazon Redshift, Apache Parquet).
  • 3 or more years of experience building pipelines.
  • Proficient in English and/or Spanish.
  • An open mind, free of any prejudice.

  • Communicate with different stakeholders to understand their primary needs.
  • Create and edit data pipelines, considering business logic and checking data quality (see the sketch after this list).
  • Investigate inconsistencies and trace the source of differences.
  • Maintain and promote code and pipeline standards through code review.
  • Create robust data models that run reliably in a production environment.
  • Ensure self-service analytics for business areas.
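
As a sketch of the pipeline work above, a single transformation step might deduplicate raw rows and then aggregate with explicit business logic. The raw.rentals table and its columns are assumptions made for illustration.

```sql
-- Hypothetical pipeline step: daily rental metrics per city,
-- with a simple deduplication guard on the raw source.
with deduplicated as (
    select distinct rental_id, city, rent_amount, rented_at
    from raw.rentals
)

select
    city,
    cast(rented_at as date) as rental_date,
    count(*)                as rentals,
    sum(rent_amount)        as total_rent
from deduplicated
group by city, cast(rented_at as date)
```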

🪄 Skills: Python, SQL, ETL, Snowflake, Airflow, Spark, Communication Skills, Collaboration

Posted 2024-10-21

📍 USA, UK, Philippines, Poland, South Africa

🧭 Permanent

🔍 Finance and technology (remittances)

🏢 Company: Zepz

  • Experience with DBT to design and implement data models.
  • Experience using a modern data warehouse (e.g., Databricks, Snowflake, BigQuery) for scalable and efficient data processing.
  • Proficiency in SQL and experience with database technologies (e.g., PostgreSQL, MySQL, Redshift).
  • You work confidently with the tools of modern software engineering: the command line, version control, testing, and code reviews.
  • You have previously used orchestration tools such as Airflow, DBT Cloud, or Fivetran.
  • Familiarity with data visualization tools (e.g., Mode, Tableau, Looker).
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and their data services.
  • You’re comfortable with, or interested in, reading Python scripts and extracting the data transformation logic they contain.
  • You see yourself as a problem-solver who wants to understand the business problem and communicate the commercial impact alongside the technical solution.
  • Strong communication skills and the ability to work effectively in a distributed team environment.
  • You are comfortable collaborating across multiple time zones.
  • You have an open mind with respect to diversity and inclusivity.

  • Developing, testing, and implementing data models to ensure data integrity and performance using DBT (a test sketch follows this list).
  • Collaborating with cross-functional teams to understand data needs and develop solutions to meet those needs.
  • Contributing to and following best practices for data consumption, including educating data consumers on data quality, availability and interpretation.
  • Optimizing existing data processes and pipelines to improve efficiency and reduce latency.
  • Identifying opportunities to reduce complexity and increase efficiency across data models and our data warehouse.
  • Troubleshooting and resolving data issues, ensuring data quality and reliability.
  • Ensuring the data and output are of high quality: tested, automated, scalable, and documented.
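
For the DBT testing responsibility above, one lightweight pattern is a singular dbt test: a SQL file that selects rows violating an expectation, failing the test if any are returned. The model and column names below (fct_transactions, dim_customers) are assumptions for illustration.

```sql
-- tests/assert_no_orphan_transactions.sql (hypothetical singular dbt test)
-- Fails if any transaction references a sender missing from the
-- customers model.
select t.transaction_id
from {{ ref('fct_transactions') }} as t
left join {{ ref('dim_customers') }} as c
    on t.sender_id = c.customer_id
where c.customer_id is null
```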

🪄 Skills: AWS, PostgreSQL, Python, SQL, GCP, MySQL, Snowflake, Tableau, Airflow, Azure, Communication Skills, Problem Solving

Posted 2024-10-17