
Data Engineer

Posted 2024-08-07

💎 Seniority level: Entry-level and senior positions available

📍 Location: United States

💸 Salary: 75000 - 150000 USD per year

🔍 Industry: Sports analytics

🏢 Company: Zelus Analytics (👥 51-100, 💰 $3.6m Series A on 2023-10-17; Analytics, Sports)

🗣️ Languages: English

⏳ Experience: Entry-level and senior positions available

🪄 Skills: Python, SQL, ETL, Algorithms, Data engineering, Linux

Requirements:
  • Academic and/or industry experience in back-end software design and development.
  • Experience with ETL architecture and development in a cloud-based environment.
  • Fluency in SQL and knowledge of database and data warehousing technologies.
  • Proficiency with Python, Scala, or other data-oriented programming languages.
  • Experience with automated data quality validation across large datasets.
  • Familiarity with Linux servers in a virtualized/distributed environment.
  • Strong software-engineering and problem-solving skills.
Responsibilities:
  • Design, develop, document, and maintain the schemas and ETL pipelines for internal sports databases and data warehouses.
  • Implement and test data collection, mapping, and storage procedures for secure access to various data sources.
  • Develop algorithms for quality assurance and data preparation for analysis and modeling.
  • Profile and optimize automated data processing tasks.
  • Coordinate with data providers about raw data feed changes.
  • Deploy and maintain monitoring tools.
  • Collaborate effectively in a distributed work environment.
  • Fulfill other related duties, including platform support.

Related Jobs

πŸ“ North America, Latin America, Europe

πŸ” Data Consultancy

Requirements:
  • Bachelor’s degree in engineering, computer science or equivalent area.
  • 5+ years in related technical roles, including data management, database development, ETL, and data warehouses.
  • Experience with data ingestion technologies and architectural decisions for high-throughput frameworks.
  • Familiarity with Snowflake, SAP, AWS, Azure, GCP, and relevant ETL tools.

Responsibilities:
  • Develop database architectures and data warehouses.
  • Ensure optimal data delivery architecture throughout ongoing customer projects.
  • Lead technical teams in data system optimization and development.

🪄 Skills: AWS, Leadership, Python, SQL, Agile, ETL, Oracle, SAP, Snowflake, Data engineering, Spark, Collaboration, Problem Solving

Posted 2024-11-23

πŸ“ US

πŸ’Έ 84000 - 120000 USD per year

πŸ” Consumer insights

Requirements:
  • Strong PL/SQL and SQL development skills.
  • Proficient in multiple data engineering languages such as Python and Java.
  • 3-5 years of experience in data engineering specific to Oracle and MS SQL.
  • Experience with data warehousing technologies and cloud-based services like Snowflake.
  • Experience with cloud platforms such as Azure and infrastructure knowledge.
  • Familiarity with data orchestration tools like Azure Data Factory and DataBricks workflows.
  • Understanding of data privacy regulations and best practices.
  • Experience working on a remote team.
  • Familiarity with CI/CD processes and tools like Git, Jira.

Responsibilities:
  • Design, implement, and maintain scalable data pipelines and architecture.
  • Unit test and document solutions to meet product quality standards.
  • Identify and resolve performance bottlenecks in data pipelines.
  • Implement data quality checks and validation processes.
  • Collaborate with cross-functional teams to address data needs.
  • Ensure technology solutions support customer and organizational needs.
  • Define and document technical requirements.

🪄 Skills: Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data engineering, CI/CD

Posted 2024-11-23

πŸ“ Ontario

πŸ” Customer engagement platform

🏒 Company: Braze

Requirements:
  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development.
  • Proven expertise in designing and optimizing data pipelines and architectures.
  • Strong proficiency in advanced SQL and data modeling techniques.
  • A track record of leading impactful data projects from conception to deployment.
  • Effective collaboration skills with cross-functional teams and stakeholders.
  • In-depth understanding of technical architecture and data flow in a cloud-based environment.
  • Ability to mentor and guide junior team members on best practices for data engineering and development.
  • Passion for building scalable data solutions that enhance customer experiences and drive business growth.
  • Strong analytical and problem-solving skills, with a keen eye for detail and accuracy.
  • Extensive experience working with and aggregating large event-level data.
  • Familiarity with data governance principles and ensuring compliance with industry regulations.
  • Preferable experience with Kubernetes for container orchestration and Airflow for workflow management.

Responsibilities:
  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt.
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage.
  • Collaborate closely with data scientists, analysts, and other engineers to design and implement data solutions that drive customer engagement and retention.
  • Optimize and manage data flows and integrations across various platforms and applications.
  • Ensure data quality, consistency, and governance by implementing best practices and monitoring systems.
  • Work extensively with large-scale event-level data, aggregating and processing it to support business intelligence and analytics.
  • Implement and maintain data products using advanced techniques and tools.
  • Collaborate with cross-functional teams including engineering, product management, sales, marketing, and customer success to deliver valuable data solutions.
  • Continuously evaluate and integrate new data technologies and tools to enhance our data infrastructure and capabilities.

🪄 Skills: SQL, Business Intelligence, ETL, Snowflake, Data engineering, Collaboration, Compliance

Posted 2024-11-22

πŸ“ United States of America

πŸ’Έ 90000 - 154000 USD per year

🏢 Company: VSP Vision

Requirements:
  • Bachelor’s degree in computer science, data science, statistics, economics or related area.
  • Excellent written and verbal communication skills.
  • 6+ years of experience in development teams focusing on analytics.
  • 6+ years of hands-on experience in data preparation and SQL.
  • Knowledge of data architectures like event-driven architecture and real-time data.
  • Familiarity with DataOps practices and multiple data integration platforms.

Responsibilities:
  • Design, build, and optimize data pipelines for analytics.
  • Collaborate with multi-disciplinary teams for data integration.
  • Analyze data requirements to develop scalable pipeline solutions.
  • Profile data for accuracy and completeness in data gathering.
  • Drive automation of data tasks to enhance productivity.
  • Participate in architecture and design reviews.

🪄 Skills: AWS, SQL, Agile, ETL, Oracle, SCRUM, Snowflake, Apache Kafka, Data science, Data Structures, Communication Skills, Collaboration

Posted 2024-11-22

πŸ“ United States

πŸ” Consumer insights

Requirements:
  • Strong PL/SQL and SQL development skills.
  • Proficient in Python and Java.
  • 3-5 years of experience in Data Engineering with Oracle and MS SQL.
  • Experience with cloud services like Snowflake and Azure.
  • Familiar with data orchestration tools such as Azure Data Factory and DataBricks.
  • Understanding of data privacy regulations.

Responsibilities:
  • Design, implement, and maintain scalable data pipelines and architecture.
  • Unit test and document solutions that meet product quality standards.
  • Identify and resolve performance bottlenecks in data processing workflows.
  • Implement data quality checks to ensure accuracy and consistency.
  • Collaborate with cross-functional teams to address data needs.

🪄 Skills: Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data engineering, CI/CD

Posted 2024-11-21

πŸ“ US

🧭 Full-Time

πŸ’Έ 206700 - 289400 USD per year

πŸ” Social media / Online community

Requirements:
  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems, building clean, maintainable, object-oriented code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and manipulating large structured and unstructured data.
  • Experience with data workflows (e.g., Airflow) and data visualization tools (e.g., Looker, Tableau).
  • Deep understanding of technical and functional designs for relational and MPP databases.
  • Proven track record of collaboration and excellent communication skills.
  • Experience in mentoring junior data scientists and analytics engineers.

Responsibilities:
  • Act as the analytics engineering lead within the Ads DS team and contribute to data science data quality and automation initiatives.
  • Ensure high-quality data through ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines and workflows for data ingestion, processing, and transformation.
  • Create user-friendly tools for internal use across Data Science and cross-functional teams.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide mentorship and coaching to data analysts and act as a thought partner for data teams.

🪄 Skills: Leadership, Python, SQL, Data Analysis, ETL, Tableau, Strategy, Airflow, Data engineering, Data science, Spark, Communication Skills, Collaboration, Mentoring, Coaching

Posted 2024-11-21

πŸ“ US

πŸ” Consumer insights

Requirements:
  • Strong PL/SQL and SQL development skills.
  • Proficient in multiple data engineering languages such as Python and Java.
  • Minimum 3-5 years of experience in Data engineering with Oracle and MS SQL.
  • Experience with data warehousing concepts and cloud-based technologies like Snowflake.
  • Experience with cloud platforms such as Azure.
  • Knowledge of data orchestration tools like Azure Data Factory and DataBricks workflows.
  • Understanding of data privacy regulations and best practices.
  • Experience working with remote teams.
  • Familiarity with tools like Git, Jira.
  • Bachelor's degree in Computer Science or Computer Engineering.

Responsibilities:
  • Design, implement, and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines to ensure efficient data delivery.
  • Implement data quality checks and validation processes.
  • Work with Data Architect to implement best practices for data governance, quality, and security.
  • Collaborate with cross-functional teams to address data needs.

🪄 Skills: Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data engineering, CI/CD

Posted 2024-11-21

πŸ“ United States

🧭 Full-Time

πŸ’Έ 95000 - 110000 USD per year

πŸ” Healthcare

🏒 Company: Wider Circle

Requirements:
  • Degree in Computer Science, Information Systems, or equivalent education or work experience.
  • 3+ years of experience with AWS or similar technologies (S3, Redshift, RDS, EMR).
  • 3+ years of strong SQL and Python skills.
  • Experience building test automation suites for test and production environments.
  • Experience using APIs for data extraction and updating.
  • Experience with Git and version control.

Responsibilities:
  • Develop and maintain data quality and accuracy dashboards and scorecards to track data quality and model performance.
  • Develop, maintain, and enhance a comprehensive data quality framework that defines data standards, quality and accuracy expectations, and validation processes.
  • Enhance data quality through rapid testing, feedback, and insights.
  • Partner with Engineering & Product to predict data quality issues and production flaws.
  • Conceptualize data architecture (visually) and implement practically into logical structures.
  • Perform testing of data after ingesting and database loading.
  • Manage internal SLAs for data quality and frequency.
  • Provide expert support for solving complex problems of data integration across multiple data sets.
  • Update and evolve data ecosystem to streamline processes for maximum efficiency.

🪄 Skills: AWS, Python, SQL, Git, Product Development, Data engineering

Posted 2024-11-17

πŸ“ United States

🧭 Full-Time

πŸ’Έ 124300 - 186500 USD per year

πŸ” Technology / Cloud Services

🏒 Company: SMX

Requirements:
  • 2+ years of experience in relevant fields.
  • Expertise in complex SQL.
  • Knowledge of AWS technologies.
  • Solid understanding of RDBMS concepts, including Postgres, RedShift, and SQL Server.
  • Experience with logical data modeling and database/query optimization.
  • Familiarity with AWS data migration tools (DMS).
  • Scripting knowledge, especially in Python and Lambda.
  • Experience with version control and CI/CD tools such as Git, TFS, and Azure DevOps.
  • Awareness of network authentication and authorization protocols (Kerberos, SAML/OAUTH).
  • Some knowledge of networks.
  • Ability to obtain and maintain a Public Trust clearance; US Citizenship is required.
  • Strong collaboration and communication skills.

Responsibilities:
  • Assist the Data Architect and customer in collecting requirements and documenting tasks for the data loading platform, focusing on performance and data quality.
  • Implement data loading and quality control activities based on project requirements and customer tickets.
  • Create CI/CD pipelines for infrastructure and data loading related to data warehouse maintenance.
  • Code and implement unique data migration requirements using AWS technologies like DMS and Lambda/Python.
  • Resolve identity and access management issues for various data sets in Postgres, Redshift, and SQL Server.

🪄 Skills: AWS, Python, SQL, ETL, Git, OAuth, Azure, Postgres, RDBMS, CI/CD, DevOps

Posted 2024-11-15

πŸ“ Latin America, United States, Canada

πŸ” Life insurance

Requirements:
  • The ideal candidate will be independent and a great communicator.
  • Attention to detail is critical.
  • Must possess problem-solving skills.

Responsibilities:
  • Develop and maintain enterprise data and analytics systems for a US client.
  • Optimize performance by building and supporting decision-making tools.
  • Collaborate closely with software engineering, AI/ML, Cybersecurity, and DevOps/SysOps teams.
  • Support end-to-end data pipelines using Python or Scala.
  • Participate in Agile framework-related tasks.

🪄 Skills: Python, Software Development, Agile, Data engineering, DevOps, Attention to detail

Posted 2024-11-15