Data Engineer

Posted 2024-11-15

πŸ’Ž Seniority level: Middle, 3+ years

πŸ” Industry: Financial Services

🏒 Company: Reach Financial

⏳ Experience: 3+ years

Requirements:
  • 3+ years of hands-on data engineering experience working with dbt (Core or Cloud) and Snowflake.
  • Extensive knowledge of SQL and NoSQL data stores.
  • Production experience with managing a Snowflake deployment.
  • Production experience using dbt Cloud or other transformation management tools.
  • Experience in operational database management, including backup, recovery, and data modeling.
  • Self-motivated and focused with a commitment to delivering value.
Responsibilities:
  • Develop and manage data pipelines, models, schemas, and data artifacts as a core engineer.
  • Work alongside Lead and Senior Data Engineers to enhance existing practices and infrastructure.
  • Partner with data scientists and stakeholders to understand and translate data requirements into solutions.
  • Stay updated on the latest data engineering tools and evaluate new solutions.
  • Collaborate with Platform engineers for seamless integration and deployment.
  • Monitor and troubleshoot data pipelines, proactively resolving issues.
  • Develop and maintain documentation for data pipelines and processes.
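
A minimal sketch of the dbt-plus-Snowflake workflow this role describes, invoking dbt Core programmatically from Python. It assumes dbt-core 1.5+ with a Snowflake target already configured in profiles.yml; the project directory and model selector below are hypothetical.

```python
# Run and test one dbt model (plus everything downstream of it) from Python.
# Assumes dbt-core >= 1.5 and a Snowflake target in profiles.yml;
# the project directory and selector are hypothetical.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

result: dbtRunnerResult = runner.invoke(
    ["run", "--project-dir", "analytics", "--select", "stg_payments+"]
)
if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")

# Run the schema tests attached to the same selection.
tests = runner.invoke(
    ["test", "--project-dir", "analytics", "--select", "stg_payments+"]
)
print("tests passed" if tests.success else "tests failed")
```
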
Related Jobs

πŸ“ Costa Rica, LATAM

🧭 Full-Time

πŸ” IT solutions and consulting

  • 5+ years of Data Engineering experience, including 2+ years designing and building Databricks data pipelines, is REQUIRED.
  • 2+ years of hands-on Python/PySpark/Spark SQL and/or Scala experience is REQUIRED.
  • 2+ years of experience with Big Data pipelines or DAG tools is REQUIRED.
  • 2+ years of Spark experience, especially Databricks Spark and Delta Lake, is REQUIRED.
  • 2+ years of hands-on experience implementing Big Data solutions in a cloud ecosystem is REQUIRED.
  • 2+ years of SQL experience, specifically writing complex queries, is HIGHLY DESIRED.
  • Experience with source control (git) on the command line is REQUIRED.

  • Scope and execute projects together with team leadership.
  • Work with the team to understand platform capabilities and how to improve them.
  • Design, develop, enhance, and maintain complex data pipeline products.
  • Support analytics, data science, and engineering teams and address their challenges.
  • Commit to continuous learning and developing technical maturity across the company.
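
To make the Databricks and Delta Lake requirements above concrete, here is a minimal PySpark batch sketch. It assumes the pyspark and delta-spark packages; on Databricks the session comes preconfigured, so the two .config lines can be dropped. Paths and column names are hypothetical.

```python
# Minimal batch pipeline: read raw CSV, clean it, write a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("orders_batch")
    # These two configs enable Delta Lake outside Databricks.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

raw = spark.read.option("header", True).csv("/data/raw/orders")

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

# Overwrite the Delta table; downstream jobs read from the same path.
cleaned.write.format("delta").mode("overwrite").save("/data/delta/orders")
spark.stop()
```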

Leadership · Python · SQL · Git · Kafka · Airflow · Azure · Data engineering · Spark

Posted 2024-11-22

πŸ“ Ontario

πŸ” Customer engagement platform

🏒 Company: Braze

  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development.
  • Proven expertise in designing and optimizing data pipelines and architectures.
  • Strong proficiency in advanced SQL and data modeling techniques.
  • A track record of leading impactful data projects from conception to deployment.
  • Effective collaboration skills with cross-functional teams and stakeholders.
  • In-depth understanding of technical architecture and data flow in a cloud-based environment.
  • Ability to mentor and guide junior team members on best practices for data engineering and development.
  • Passion for building scalable data solutions that enhance customer experiences and drive business growth.
  • Strong analytical and problem-solving skills, with a keen eye for detail and accuracy.
  • Extensive experience working with and aggregating large event-level data.
  • Familiarity with data governance principles and ensuring compliance with industry regulations.
  • Preferable experience with Kubernetes for container orchestration and Airflow for workflow management.

  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt.
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage.
  • Collaborate closely with data scientists, analysts, and other engineers to design and implement data solutions that drive customer engagement and retention.
  • Optimize and manage data flows and integrations across various platforms and applications.
  • Ensure data quality, consistency, and governance by implementing best practices and monitoring systems.
  • Work extensively with large-scale event-level data, aggregating and processing it to support business intelligence and analytics.
  • Implement and maintain data products using advanced techniques and tools.
  • Collaborate with cross-functional teams including engineering, product management, sales, marketing, and customer success to deliver valuable data solutions.
  • Continuously evaluate and integrate new data technologies and tools to enhance our data infrastructure and capabilities.
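
Airflow, named in the qualifications above, would typically orchestrate the ingest-and-transform pattern these responsibilities describe. A minimal TaskFlow sketch, assuming Airflow 2.4+; the task bodies are hypothetical placeholders.

```python
# Minimal Airflow 2.x TaskFlow DAG: daily extract -> load.
from datetime import datetime
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def events_pipeline():
    @task
    def extract() -> str:
        # In practice: pull event-level data from an API or object store.
        return "s3://bucket/events/batch.jsonl"  # hypothetical path

    @task
    def load(path: str) -> None:
        # In practice: COPY the file into a warehouse staging table.
        print(f"loading {path} into staging")

    load(extract())


events_pipeline()
```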

SQL · Business Intelligence · ETL · Snowflake · Data engineering · Collaboration · Compliance

Posted 2024-11-22

πŸ“ United States of America

πŸ’Έ 90000 - 154000 USD per year

🏒 Company: VSPVisionCareers

  • Bachelor’s degree in computer science, data science, statistics, economics, or a related area.
  • Excellent written and verbal communication skills.
  • 6+ years of experience in development teams focusing on analytics.
  • 6+ years of hands-on experience in data preparation and SQL.
  • Knowledge of data architectures like event-driven architecture and real-time data.
  • Familiarity with DataOps practices and multiple data integration platforms.

  • Design, build, and optimize data pipelines for analytics.
  • Collaborate with multi-disciplinary teams for data integration.
  • Analyze data requirements to develop scalable pipeline solutions.
  • Profile data for accuracy and completeness in data gathering.
  • Drive automation of data tasks to enhance productivity.
  • Participate in architecture and design reviews.
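
As a sketch of the data-profiling responsibility above: a short pandas example that reports completeness per column and asserts two basic accuracy rules. The file path and column names are hypothetical.

```python
# Profile a dataset for completeness and basic accuracy.
import pandas as pd

df = pd.read_csv("claims.csv")  # hypothetical input

profile = pd.DataFrame({
    "null_rate": df.isna().mean(),    # completeness per column
    "distinct": df.nunique(),         # cardinality
    "dtype": df.dtypes.astype(str),
})
print(profile)

# Simple accuracy assertions; failures surface bad upstream data early.
assert df["claim_id"].is_unique, "duplicate claim_id values"
assert (df["amount"] >= 0).all(), "negative claim amounts"
```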

AWS · SQL · Agile · ETL · Oracle · SCRUM · Snowflake · Apache Kafka · Data science · Data Structures · Communication Skills · Collaboration

Posted 2024-11-22

πŸ“ United States

πŸ” Consumer insights

  • Strong PL/SQL and SQL development skills.
  • Proficient in Python and Java.
  • 3-5 years of experience in Data Engineering with Oracle and MS SQL.
  • Experience with cloud services like Snowflake and Azure.
  • Familiar with data orchestration tools such as Azure Data Factory and Databricks.
  • Understanding of data privacy regulations.

  • Design, implement and maintain scalable data pipelines and architecture.
  • Unit test and document solutions that meet product quality standards.
  • Identify and resolve performance bottlenecks in data processing workflows.
  • Implement data quality checks to ensure accuracy and consistency.
  • Collaborate with cross-functional teams to address data needs.
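
The unit-testing and data-quality bullets above can be illustrated with a small pytest example; normalize_email is a hypothetical stand-in for real pipeline logic.

```python
# Unit tests for a pipeline transformation, runnable with `pytest`.
import pytest


def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address; reject obviously bad input."""
    cleaned = raw.strip().lower()
    if "@" not in cleaned:
        raise ValueError(f"not an email: {raw!r}")
    return cleaned


def test_strips_and_lowercases():
    assert normalize_email("  Jane.Doe@Example.COM ") == "jane.doe@example.com"


def test_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_email("not-an-email")
```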

Python · SQL · ETL · Git · Java · Oracle · QA · Snowflake · Jira · Azure · Data engineering · CI/CD

Posted 2024-11-21
πŸ”₯ Data Engineer

🧭 Full-Time

πŸ’Έ 100000 - 135000 USD per year

πŸ” Entertainment and media

  • 3+ years of experience in data engineering, with a foundational understanding of data modeling, ETL/ELT principles, and data warehousing.
  • Experience with cloud-based storage and data warehouse platforms such as AWS S3, GCP BigQuery, and Snowflake.
  • Proficiency in building data pipelines using Python/SQL.
  • Experience with workflow orchestration tools like Airflow, or willingness to learn.
  • General understanding of CI/CD principles and processes.

  • Engage with business leaders, engineers, and product managers to define and meet data requirements.
  • Work closely with technology teams to execute ETL/ELT processes leveraging cloud-native principles.
  • Participate in the design, construction, and scaling of data pipelines integrating data from various sources.
  • Support process optimizations by automating workflows and enhancing data delivery.
  • Apply design patterns to ensure performance, cost-efficiency, security, and scalability.
  • Contribute to development sprints, demonstrations, and release processes.
  • Cultivate relationships with IT support teams for smooth deployment.
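
As a sketch of the pipeline work above: a minimal extract-and-load step that writes a batch of records to S3 as JSON lines with boto3. The bucket, key, and records are hypothetical, and AWS credentials are assumed to come from the environment.

```python
# Write one extracted batch to S3 as JSON lines.
import json
import boto3

records = [  # hypothetical extracted rows
    {"user_id": 1, "event": "play", "ts": "2024-11-21T12:00:00Z"},
    {"user_id": 2, "event": "pause", "ts": "2024-11-21T12:00:05Z"},
]

body = "\n".join(json.dumps(r) for r in records).encode("utf-8")

boto3.client("s3").put_object(
    Bucket="media-data-lake",                 # hypothetical bucket
    Key="events/2024-11-21/batch-001.jsonl",  # hypothetical key
    Body=body,
)
```
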
Posted 2024-11-21

🏒 Company: Globaldev Group

Posted 2024-11-21

πŸ“ US

🧭 Full-Time

πŸ’Έ 206700 - 289400 USD per year

πŸ” Social media / Online community

  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems, building clean, maintainable, object-oriented code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, and Scala.
  • Experience with data modeling, ETL concepts, and manipulating large structured and unstructured data.
  • Experience with data workflows (e.g., Airflow) and data visualization tools (e.g., Looker, Tableau).
  • Deep understanding of technical and functional designs for relational and MPP databases.
  • Proven track record of collaboration and excellent communication skills.
  • Experience in mentoring junior data scientists and analytics engineers.

  • Act as the analytics engineering lead within the Ads DS team and contribute to data quality and automation initiatives across data science.
  • Ensure high-quality data through ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines and workflows for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal use across Data Science and cross-functional teams.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide mentorship and coaching to data analysts and act as a thought partner for data teams.
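
The event-level aggregation work above might look like the following minimal pandas sketch, rolling raw events up into a daily reporting table; the frame and column names are hypothetical.

```python
# Aggregate event-level rows into a daily per-ad reporting table.
import pandas as pd

events = pd.DataFrame({  # hypothetical event-level data
    "ts": pd.to_datetime(["2024-11-20 09:00", "2024-11-20 10:30",
                          "2024-11-21 08:15"]),
    "ad_id": ["a1", "a1", "a2"],
    "clicks": [3, 1, 7],
})

daily = (
    events.assign(day=events["ts"].dt.date)
          .groupby(["day", "ad_id"], as_index=False)
          .agg(total_clicks=("clicks", "sum"), events=("clicks", "size"))
)
print(daily)
```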

Leadership · Python · SQL · Data Analysis · ETL · Tableau · Strategy · Airflow · Data engineering · Data science · Spark · Communication Skills · Collaboration · Mentoring · Coaching

Posted 2024-11-21

πŸ“ Canada

πŸ” Artificial Intelligence

  • Strong background in AWS DevOps and data engineering.
  • Expertise with AWS and SageMaker is essential.
  • Experience with Snowflake for analytics and data warehousing is highly desirable.

  • Manage and optimize the data infrastructure.
  • Focus on both data engineering and DevOps responsibilities.
  • Deploy machine learning models to AWS using SageMaker.
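
As a sketch of the SageMaker deployment responsibility above: a minimal example using the sagemaker Python SDK to stand up a real-time endpoint from a trained scikit-learn artifact. The S3 path, IAM role, and entry script are all hypothetical.

```python
# Deploy a trained scikit-learn model to a SageMaker real-time endpoint.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://ml-artifacts/churn/model.tar.gz",    # hypothetical artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
    entry_point="inference.py",                           # hypothetical script
    framework_version="1.2-1",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```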

AWS · Machine Learning · Snowflake · Data engineering · DevOps

Posted 2024-11-21

πŸ“ Poland

🧭 Full-Time

πŸ” Software development

🏒 Company: Sunscrapers sp. z o.o.

  • At least 5 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English, at least C1.
  • Strong professional experience with Python and SQL.
  • Hands-on experience with DBT and Snowflake.
  • Experience in building data pipelines with Airflow or alternative solutions.
  • Strong understanding of various data modeling techniques like Kimball Star Schema.
  • Great analytical skills and attention to detail.
  • Creative problem-solving skills.
  • Great customer service and troubleshooting skills.

  • Model datasets and schemas for consistency and easy access.
  • Design and implement data transformations and data marts.
  • Integrate third-party systems and external data sources into the data warehouse.
  • Build data flows for fetching, aggregation, and data modeling using batch pipelines.
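
As an illustration of the modeling and transformation work above: a minimal snowflake-connector-python sketch that builds a hypothetical star-schema fact table from a staging table. Every object name is invented, and credentials are read from the environment.

```python
# Build a star-schema fact table in Snowflake from staged data.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",   # hypothetical warehouse
    database="ANALYTICS",       # hypothetical database
    schema="MARTS",             # hypothetical schema
)

try:
    conn.cursor().execute("""
        CREATE OR REPLACE TABLE fct_orders AS
        SELECT o.order_id,
               o.customer_id,   -- foreign key to dim_customers
               o.order_date,    -- foreign key to dim_dates
               o.amount
        FROM staging.raw_orders AS o
        WHERE o.order_id IS NOT NULL
    """)
finally:
    conn.close()
```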

Python · SQL · Snowflake · Airflow · Analytical Skills · Customer service · DevOps · Attention to detail

Posted 2024-11-21
πŸ”₯ Data Engineer

πŸ“ Poland

πŸ” Healthcare

🏒 Company: Sunscrapers sp. z o.o.

  • At least 3 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English, at least C1.
  • Strong professional experience with Apache Spark.
  • Hands-on experience managing production Spark clusters in Databricks.
  • Experience in CI/CD of data jobs in Spark.
  • Great analytical skills, attention to detail, and creative problem-solving skills.
  • Great customer service and troubleshooting skills.

  • Design and manage batch data pipelines, including file ingestion, transformation, and Delta Lake/table management.
  • Implement scalable architectures for batch and streaming workflows.
  • Leverage Microsoft equivalents of BigQuery for efficient querying and data storage.
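
The batch-and-streaming responsibility above can be sketched with Spark Structured Streaming: ingest JSON files as they arrive and append them to a Delta table. Paths are hypothetical, and the session is assumed to have Delta Lake configured, as on any Databricks runtime.

```python
# Stream incoming JSON files into a Delta table with checkpointed progress.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_stream").getOrCreate()

stream = (
    spark.readStream.format("json")
         .schema("event_id STRING, user_id STRING, ts TIMESTAMP")
         .load("/data/incoming/events")   # hypothetical landing directory
)

query = (
    stream.writeStream.format("delta")
          .option("checkpointLocation", "/data/checkpoints/events")
          .outputMode("append")
          .start("/data/delta/events")    # hypothetical Delta path
)
query.awaitTermination()
```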

Spark · Analytical Skills · CI/CD · Customer service · Attention to detail

Posted 2024-11-21