
Senior Data Engineer

Posted 5 months ago

💎 Seniority level: Senior, 5+ years

📍 Location: New York, US

💸 Salary: 196,000 - 217,000 USD per year

🔍 Industry: Digital Economy, Cryptocurrency

🏢 Company: Uniswap Labs

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, PostgreSQL, Python, SQL, Data Analysis, ETL, GCP, Hadoop, Kafka, Machine Learning, MySQL, Snowflake, Tableau, Azure, Data engineering, Data science, Spark, Analytical Skills, Collaboration

Requirements:
  • Bachelor's or Master's degree in Computer Science, Engineering, Statistics or related field.
  • 5+ years of experience in data engineering, data analytics, or related field.
  • Strong proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
  • Experience with big data technologies (e.g., Hadoop, Spark) and data warehousing (e.g., BigQuery, Snowflake).
  • Proficiency in programming languages like Python or Scala.
  • Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
  • Strong understanding of ETL processes, data modeling, data warehousing.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and interpersonal skills.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines for large volumes of data.
  • Implement data integration solutions to ensure quality and integrity.
  • Optimize storage and retrieval processes for performance.
  • Develop data models, dashboards, and reports for decision-making.
  • Collaborate with stakeholders to understand data needs.
  • Conduct exploratory data analysis for business insights.
  • Implement data validation and monitoring for accuracy.
  • Establish data governance policies and conduct audits.
  • Work with cross-functional teams on data requirements.
  • Communicate technical details to non-technical stakeholders.
  • Mentor junior engineers and analysts.
  • Stay updated on trends in data engineering and analytics.
  • Evaluate and implement new tools for data processing and visualization.
  • Drive best practices in data engineering across the organization.

Related Jobs


📍 US, Europe

🧭 Full-Time

💸 175,000 - 205,000 USD per year

🔍 Cloud computing and AI services

🏢 Company: CoreWeave (💰 $642,000,000 Secondary Market about 1 year ago; Cloud Computing, Machine Learning, Information Technology, Cloud Infrastructure)

Requirements:
  • 5+ years of experience with Kubernetes and Helm, with a deep understanding of container orchestration.
  • Hands-on experience administering and optimizing clustered computing technologies on Kubernetes, such as Spark, Trino, Flink, Ray, Kafka, StarRocks or similar.
  • 5+ years of programming experience in C++, C#, Java, or Python.
  • 3+ years of experience scripting in Python or Bash for automation and tooling.
  • Strong understanding of data storage technologies, distributed computing, and big data processing pipelines.
  • Proficiency in data security best practices and managing access in complex systems.

Responsibilities:
  • Architect, deploy, and scale data storage and processing infrastructure to support analytics and data science workloads.
  • Manage and maintain data lake and clustered computing services, ensuring reliability, security, and scalability.
  • Build and optimize frameworks and tools to simplify the usage of big data technologies.
  • Collaborate with cross-functional teams to align data infrastructure with business goals and requirements.
  • Ensure data governance and security best practices across all platforms.
  • Monitor, troubleshoot, and optimize system performance and resource utilization.

Python, Bash, Kubernetes, Apache Kafka

Posted 7 days ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years experience building and optimizing 'big data' data pipelines, architectures and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing the data lifecycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.

Responsibilities:
  • Suggest efficiencies and implement internal process improvements to automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems, with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build and maintain highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and the overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 9 days ago

📍 US

💸 103,200 - 128,950 USD per year

🔍 Genetics and healthcare

🏢 Company: Natera (👥 1001-5000; 💰 $250,000,000 Post-IPO Equity over 1 year ago; 🫂 last layoff almost 2 years ago; Women's, Biotechnology, Medical, Genetics, Health Diagnostics)

Requirements:
  • BS degree in computer science or a comparable program, or equivalent experience.
  • 8+ years of overall software development experience, ideally in complex data management applications.
  • Experience with SQL and NoSQL databases, including DynamoDB, Cassandra, Postgres, and Snowflake.
  • Proficiency in data technologies such as Hive, Hbase, Spark, EMR, Glue.
  • Ability to manipulate and extract value from large datasets.
  • Knowledge of data management fundamentals and distributed systems.

Responsibilities:
  • Work with other engineers and product managers to make design and implementation decisions.
  • Define requirements in collaboration with stakeholders and users to create reliable applications.
  • Implement best practices in development processes.
  • Write specifications, design software components, fix defects, and create unit tests.
  • Review design proposals and perform code reviews.
  • Develop solutions for the Clinicogenomics platform utilizing AWS cloud services.

AWS, Python, SQL, Agile, DynamoDB, Snowflake, Data engineering, Postgres, Spark, Data modeling, Data management

Posted 19 days ago
🔥 Senior Data Engineer

📍 United States

🧭 Full-Time

🔍 Construction technology

🏢 Company: EquipmentShare

Requirements:
  • 7+ years of relevant data platform development experience.
  • Proficient with SQL and a high-order object-oriented programming language (e.g., Python).
  • Experience in designing and building distributed data architectures.
  • Experience with production-grade data pipelines using tools like Airflow, dbt, DataHub, MLFlow.
  • Experience with distributed data platforms like Kafka, Spark, Flink.
  • Familiarity with event data streaming at scale.
  • Proven ability to learn and apply new technologies quickly.
  • Experience in building observability and monitoring into data products.

Responsibilities:
  • Collaborate with Product Managers, Designers, Engineers, Data Scientists, and Data Analysts.
  • Design, build, and maintain the data platform for automation and self-service.
  • Develop data product framework for analytics features.
  • Create and manage CI/CD pipelines and automated deployment processes.
  • Implement data monitoring and alerting capabilities.
  • Document architecture and processes for collaboration.
  • Mentor peers to enhance their skills.

AWS, Python, SQL, Apache Airflow, Kafka, MLFlow, Snowflake, Spark, CI/CD

Posted about 1 month ago
🔥 Senior Data Engineer

📍 United States, United Kingdom, Spain, Estonia

🔍 Identity verification

🏢 Company: Veriff (👥 501-1000; 💰 $100,000,000 Series C almost 3 years ago; 🫂 last layoff over 1 year ago; Artificial Intelligence (AI), Fraud Detection, Information Technology, Cyber Security, Identity Management)

Requirements:
  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi.
  • Experience with data from diverse sources including RDBMS and APIs.

Responsibilities:
  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights.
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.

Python, SQL, Apache Airflow, ETL, Data engineering, JSON, Data modeling

Posted about 1 month ago
🔥 Senior Data Engineer

📍 USA

🧭 Full-Time

💸 165,000 - 210,000 USD per year

🔍 E-commerce and AI technologies

🏢 Company: Wizard (👥 11-50; Customer Service, Manufacturing)

Requirements:
  • 5+ years of professional experience in software development with a focus on data engineering.
  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
  • Proficiency in Python with software engineering best practices.
  • Strong expertise in building ETL pipelines using tools like Apache Spark.
  • Hands-on experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB.
  • Proficiency in real-time stream processing systems such as Kafka or AWS Kinesis.
  • Experience with cloud platforms (AWS, GCP, Azure) and technologies like Delta Lake and Parquet files.

Responsibilities:
  • Develop and maintain scalable data infrastructure for batch and real-time processing.
  • Build and optimize ETL pipelines for efficient data flow.
  • Collaborate with data scientists and cross-functional teams for accurate monitoring.
  • Design backend data solutions for microservices architecture.
  • Implement and manage integrations with third-party e-commerce platforms.

AWS, Python, DynamoDB, ElasticSearch, ETL, GCP, Git, Hadoop, Kafka, MongoDB, RabbitMQ, Azure, Cassandra, Redis

Posted about 1 month ago
🔥 Senior Data Engineer

📍 United States

🧭 Full-Time

🏢 Company: Avalore, LLC

Requirements:
  • Master's or PhD in statistics, mathematics, computer science, or related field.
  • 8+ years of experience as a Data Engineer within the Intelligence Community (IC).
  • Outstanding communication skills, influencing abilities, and client focus.
  • Professional proficiency in English is required.
  • Current, active Top Secret security clearance.
  • Applicants must be currently authorized to work in the United States on a full-time basis.

Responsibilities:
  • Develop and document data pipelines for ingest, transformation, and preparation of data for AI applications.
  • Design scalable technologies such as streaming and transformation, joining disparate data sets for predictive analytics.
  • Develop API interfaces for accessibility.
  • Lead technical efforts and guide development teams.

Python, SQL, Apache Airflow, Artificial Intelligence, ETL, Machine Learning, API testing, Data engineering

Posted about 1 month ago
🔥 Senior Data Engineer

📍 USA

🧭 Full-Time

💸 190,000 - 220,000 USD per year

🔍 B2B data / Data as a Service (DaaS)

🏢 Company: People Data Labs (👥 101-250; 💰 $45,000,000 Series B about 3 years ago; Database, Artificial Intelligence (AI), Developer APIs, Machine Learning, Analytics, B2B, Software)

Requirements:
  • 5-7+ years of industry experience with strategic technical problem-solving.
  • Strong software development fundamentals.
  • Experience with Python.
  • Expertise in Apache Spark (Java, Scala, or Python-based).
  • Proficiency in SQL.
  • Experience building scalable data processing systems.
  • Familiarity with data pipeline orchestration tools (e.g., Airflow, dbt).
  • Knowledge of modern data design and storage patterns.
  • Experience working in Databricks.
  • Familiarity with cloud computing services (e.g., AWS, GCP, Azure).
  • Experience in data warehousing technologies.
  • Understanding of modern data storage formats and tools.

Responsibilities:
  • Build infrastructure for ingestion, transformation, and loading of data using Spark, SQL, AWS, and Databricks.
  • Create an entity resolution framework for merging billions of entities into clean datasets.
  • Develop CI/CD pipelines and anomaly detection systems to enhance data quality.
  • Provide solutions to undefined data engineering problems.
  • Assist Engineering and Product teams with data-related technical issues.

AWS, Python, SQL, Kafka, Airflow, Data engineering, Pandas, CI/CD

Posted about 1 month ago

📍 US

🧭 Full-Time

🔍 Cloud integration technology

🏢 Company: Cleo (US)

Requirements:
  • 5-7+ years of experience in data engineering focusing on AI/ML models.
  • Hands-on expertise in data transformation and building data pipelines.
  • Leadership experience in mentoring data engineering teams.
  • Strong experience with cloud platforms and big data technologies.

Responsibilities:
  • Lead the design and build of scalable, reliable, and efficient data pipelines.
  • Set data infrastructure strategy for data warehouses and lakes.
  • Perform hands-on data transformation for AI/ML models.
  • Build data structures and manage metadata.
  • Implement data quality controls.
  • Collaborate with cross-functional teams to meet data requirements.
  • Optimize ETL processes for AI/ML.
  • Ensure data pipelines support model training needs.
  • Define data governance practices.

Leadership, Artificial Intelligence, ETL, Machine Learning, Strategy, Data engineering, Data Structures, Mentoring

Posted 2 months ago

📍 ANY STATE

🔍 Data and technology

Requirements:
  • 5+ years of experience making contributions in the form of code.
  • Experience with algorithms and data structures and knowing when to apply them.
  • Experience with machine learning techniques to develop better predictive and clustering models.
  • Experience working with high-scale systems.
  • Experience creating powerful machine learning tools for experimentation and productionization at scale.
  • Experience in data engineering and warehousing to develop ingestion engines, ETL pipelines, and organizing data for consumption.

Responsibilities:
  • Be a senior member of the team by contributing to the architecture, design, and implementation of EMS systems.
  • Mentor junior engineers and promote their growth.
  • Lead technical projects and manage planning, execution, and success of complex technical projects.
  • Collaborate with other engineering, product, and data science teams to ensure optimal product development.

Python, SQL, ETL, GCP, Kubeflow, Machine Learning, Algorithms, Data engineering, Data science, Data Structures, Tensorflow, Collaboration, Scala

Posted 2 months ago