
Senior Data Engineer

Posted 6 days ago


πŸ’Ž Seniority level: Senior, Proven experience with building robust data pipelines

πŸ“ Location: United States, Europe, EST, GMT

πŸ” Industry: Blockchain Analytics

πŸ—£οΈ Languages: English

⏳ Experience: Proven experience with building robust data pipelines

πŸͺ„ Skills: Python, SQL, ETL, Kubernetes, Snowflake, Clickhouse, Data engineering

Requirements:
  • Experience with orchestration tools like DBT and Prefect
  • Proficiency with data warehouses like Trino, Snowflake, or Clickhouse
  • Deployment experience in public cloud platforms
Responsibilities:
  • Build data pipelines that power popular datasets on Dune
  • Support analysts in creating new datasets
  • Orchestrate robust transformation pipelines (see the sketch below)
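
The requirements above name DBT and Prefect for orchestration. As a rough illustration of what an orchestrated transformation pipeline can look like with that pair, here is a minimal Prefect 2.x flow that shells out to the dbt CLI; the model selector, retry settings, and project layout are assumptions for the sketch, not details from the posting.

```python
# Minimal sketch, assuming Prefect 2.x and a dbt project on the local path.
# The "decoded_events" selector and retry settings are hypothetical.
import subprocess

from prefect import flow, task


@task(retries=2, retry_delay_seconds=60)
def run_dbt_models(select: str) -> None:
    # Shell out to the dbt CLI; a production setup might use a dedicated
    # dbt integration or run the command inside a container instead.
    subprocess.run(["dbt", "run", "--select", select], check=True)


@flow(name="transformation-pipeline")
def transformation_pipeline() -> None:
    # One orchestrated transformation step; real pipelines chain many of these.
    run_dbt_models("decoded_events")


if __name__ == "__main__":
    transformation_pipeline()
```

`check=True` makes a failed dbt run raise, so the task's retry logic actually fires.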

Related Jobs


πŸ“ Madrid, Barcelona

🏒 Company: Clarity AI

Requirements:
  • 5+ years in data architecture, data engineering, or a related role, with hands-on experience in data modeling, schema design, and cloud-based data systems
  • Strong Python and SQL skills and proficiency with distributed data stores
  • Proficiency in schema design, data modeling, and building data products
  • Proficient in at least one major programming language, preferably Python, with a software engineering mindset for writing clean, maintainable code
  • Deep understanding of architectural principles in microservices, distributed systems, and data pipeline design
  • Familiarity with containerized environments, public/private API integrations, and security best practices
  • Strong communication and interpersonal skills, with experience working cross-functionally
  • Proven ability to guide teams and drive complex projects to successful completion
  • Self-starter, able to take ownership and initiative, with high energy and stamina
  • Decisive and action-oriented, able to make rapid decisions even with incomplete information
  • Highly motivated, independent, and deeply passionate about sustainability and impact
  • Excellent oral and written English communication skills (minimum C1 level, proficient user)
Responsibilities:
  • Transforming raw data into intuitive, high-quality data models
  • Implementing and monitoring data quality checks
  • Collaborating across functions
  • Working closely with engineering and product teams
  • Acting as a bridge between technical and non-technical stakeholders
  • Leading initiatives to improve data practices
  • Guiding the team in experimenting with new tools and technologies
  • Continuously evolving the data architecture
  • Applying a pragmatic approach to performance metrics and scaling decisions
  • Implementing performance metrics to monitor system health
  • Maintaining comprehensive documentation of data systems, processes, and best practices

πŸͺ„ Skills: Docker, Python, SQL, Apache Airflow, Cloud Computing, Kubernetes, Algorithms, Data engineering, Data Structures, Postgres, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 1 day ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” B2B SaaS

🏒 Company: Sanity

Requirements:
  • 4+ years of experience building data pipelines at scale
  • Deep expertise in SQL, Python, and Node.js/TypeScript
  • Production experience with Airflow and RudderStack
  • Track record of building reliable data infrastructure
Responsibilities:
  • Design, develop, and maintain scalable ETL/ELT pipelines (see the sketch after this list)
  • Collaborate to implement and scale product telemetry
  • Establish best practices for data ingestion and transformation
  • Monitor and optimize data pipeline performance
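
Since this listing centers on Airflow-based ETL/ELT for product telemetry, a minimal sketch of such a DAG follows, assuming Airflow 2.x with the TaskFlow API; the DAG id, schedule, and record shape are invented for the example, not taken from the posting.

```python
# Minimal sketch, assuming Airflow 2.x and the TaskFlow API.
# DAG id, schedule, and record shape are hypothetical.
import pendulum
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=pendulum.datetime(2024, 1, 1, tz="UTC"), catchup=False)
def product_telemetry_elt():
    @task
    def extract() -> list:
        # Pull raw telemetry events from an upstream source (stubbed here).
        return [{"event": "page_view", "ts": "2024-01-01T00:00:00Z"}]

    @task
    def load(rows: list) -> None:
        # Load into the warehouse; in practice this would go through a
        # provider hook rather than a print statement.
        print(f"loading {len(rows)} rows")

    # Passing the extract output into load() wires up the task dependency.
    load(extract())


product_telemetry_elt()
```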

πŸͺ„ Skills: Node.js, Python, SQL, Apache Airflow, ETL, TypeScript

Posted 3 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” E-commerce

Requirements:
  • Bachelor's or Master's degree in Computer Science or a related field
  • 5+ years of experience in data engineering
  • Strong proficiency in SQL and database technologies
  • Experience with data pipeline orchestration tools
  • Proficiency in programming languages like Python and Scala
  • Hands-on experience with AWS cloud data services
  • Familiarity with big data frameworks like Apache Spark
  • Knowledge of data modeling and warehousing
  • Experience implementing CI/CD for data pipelines
  • Experience with real-time data processing architectures
Responsibilities:
  • Design, develop, and maintain ETL/ELT pipelines
  • Optimize data architecture and storage solutions
  • Work with AWS for scalable data solutions
  • Ensure data quality, integrity, and security
  • Collaborate with cross-functional teams
  • Monitor and troubleshoot data workflows
  • Create APIs for analytical information

πŸͺ„ Skills: AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kafka, MySQL, Snowflake, CI/CD, Scala

Posted 3 days ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: Docker πŸ‘₯ 251-500 | πŸ’° $105,000,000 Series C almost 3 years ago | Developer Tools, Developer Platform, Information Technology, Software

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture

πŸͺ„ Skills: Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 4 days ago

πŸ“ Poland

🧭 Full-Time

πŸ” Software Development

🏒 Company: N-iX πŸ‘₯ 1001-5000 | IT Services and IT Consulting

Requirements:
  • Minimum of 3-4 years as a data engineer or in a related field
  • Advanced experience in Python, particularly in delivering production-grade data pipelines and troubleshooting code-based bugs
  • Structured approach to data insights
  • Familiarity with cloud platforms (preferably Azure)
  • Experience with Databricks, Snowflake, or similar data platforms
  • Knowledge of relational databases, with proficiency in SQL
  • Experience using Apache Spark
  • Experience in creating and maintaining structured documentation
  • Proficiency in utilizing testing frameworks to ensure code reliability and maintainability
  • Experience with GitLab or equivalent tools
  • English proficiency: B2 level or higher
Responsibilities:
  • Design, build, and maintain data pipelines using Python
  • Collaborate with an international team to develop scalable data solutions
  • Conduct in-depth analysis and debugging of system bugs (Tier 2)
  • Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
  • Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
  • Write integration tests to ensure the quality and reliability of data services (see the sketch after this list)
  • Work with GitLab to manage code and collaborate with team members
  • Utilize Databricks for data processing and management
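
As a rough illustration of the integration-testing responsibility above, here is a pytest-style test for a small pandas transformation step; the function, column names, and sample rows are invented for the example rather than taken from the posting.

```python
# Minimal sketch of a pytest-style test for one pipeline step, using pandas.
# The function, column names, and sample data are hypothetical.
import pandas as pd


def deduplicate_events(df: pd.DataFrame) -> pd.DataFrame:
    # Keep only the most recently ingested record per event_id.
    return df.sort_values("ingested_at").drop_duplicates("event_id", keep="last")


def test_deduplicate_events_keeps_latest_record():
    raw = pd.DataFrame(
        {
            "event_id": [1, 1, 2],
            "ingested_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
            "value": [10, 20, 30],
        }
    )
    result = deduplicate_events(raw)

    assert len(result) == 2
    assert result.loc[result["event_id"] == 1, "value"].item() == 20
```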

πŸͺ„ Skills: Docker, Python, SQL, Cloud Computing, Data Analysis, ETL, Git, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, RDBMS, REST API, Pandas, CI/CD, Documentation, Microservices, Debugging

Posted 6 days ago

πŸ“ Poland, Ukraine, Abroad

🧭 Contract

πŸ” Data Engineering

🏒 Company: N-iX πŸ‘₯ 1001-5000 | IT Services and IT Consulting

Requirements:
  • Experience with Databricks
  • 4+ years of experience developing database systems (MS SQL/T-SQL)
  • Experience in creating and maintaining Azure Pipelines
Responsibilities:
  • Developing robust data pipelines with DBT
  • Implementing business logic in the Data Warehouse
  • Converting business requirements into data models
  • Managing pipelines
  • Tuning load and query performance

πŸͺ„ Skills: SQL, Apache Airflow, Git, Microsoft SQL Server, Azure

Posted 6 days ago

πŸ“ United States

🧭 Full-Time

πŸ” Health and Wellness Solutions

🏒 Company: Panasonic Well

Requirements:
  • 5+ years of technology industry experience
  • Proficiency in building data pipelines in Python and/or Kotlin
  • Deep understanding of relational and non-relational database solutions
  • Experience with large-scale data pipeline construction
  • Familiarity with PCI, CCPA, and GDPR compliance
Responsibilities:
  • Design, develop, and optimize automated data pipelines
  • Identify improvements for data reliability and quality
  • Own and evolve data architecture with a focus on privacy (see the sketch after this list)
  • Drive continuous improvement in data workflows
  • Collaborate with Data Scientists, AI Engineers, and Product Managers
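
The privacy-focused architecture and PCI/CCPA/GDPR familiarity called out above often come down to pseudonymizing direct identifiers before data lands in the warehouse. A minimal, standard-library sketch follows; the field names and salt handling are invented for the example.

```python
# Minimal sketch: salted hashing of a direct identifier during ingestion.
# Field names and the hard-coded salt are hypothetical; a real pipeline
# would source the salt/key from a secrets manager and rotate it.
import hashlib


def pseudonymize(value: str, salt: str) -> str:
    # Salted SHA-256 so the raw identifier never reaches downstream tables.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


record = {"user_email": "person@example.com", "daily_steps": 9500}
record["user_email"] = pseudonymize(record["user_email"], salt="example-salt")
print(record)
```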

πŸͺ„ Skills: Python, ETL, Kafka, Kotlin, Snowflake, Data engineering, Compliance, Data modeling

Posted 7 days ago

πŸ“ United States, Canada

🧭 Full-Time

πŸ” Software Development

🏒 Company: BioRender πŸ‘₯ 101-250 | πŸ’° $15,319,133 Series A almost 2 years ago | Life Science, Graphic Design, Software

Requirements:
  • 7+ years of relevant data engineering industry experience
  • Expertise working with data warehousing platforms (AWS Redshift or Snowflake preferred) and data lake / lakehouse architectures
  • Experience with data streaming platforms (AWS Kinesis / Firehose preferred)
  • Expertise with SQL and programming languages commonly used in data platforms (Python, Spark, etc.)
  • Experience with data pipeline orchestration (e.g., Airflow) and data pipeline integrations (e.g., Airbyte, Stitch)
Responsibilities:
  • Build and maintain the right architecture and tooling to support our data science, analytics, product, and machine learning initiatives
  • Solve complex architectural problems
  • Translate deeply technical designs into business-appropriate representations, and analyze business needs and requirements to ensure that data services directly support the strategy and growth of the business

πŸͺ„ Skills: AWS, Python, SQL, Apache Airflow, Snowflake, Data engineering, Spark, Data modeling

Posted 7 days ago

πŸ“ Germany

🧭 Full-Time

πŸ” Insurtech

🏒 Company: Getsafe

Requirements:
  • 4+ years of experience in creating data pipelines using SQL/Python/Airflow
  • Experience designing Data Marts and Data Warehouses
  • Experience with cloud infrastructure, including Terraform
Responsibilities:
  • Analyze, design, develop, and deliver Data Warehouse solutions
  • Create ETL/ELT pipelines using Python and Airflow
  • Design, develop, maintain, and support the Data Warehouse & BI platform

πŸͺ„ Skills: Python, SQL, Apache Airflow, ETL, Terraform

Posted 8 days ago

πŸ“ UK

🧭 Full-Time

πŸ” Technology, Data Engineering

🏒 Company: Aker Systems πŸ‘₯ 101-250 | πŸ’° over 4 years ago | Cloud Data Services, Business Intelligence, Analytics, Software

Requirements:
  • Data pipeline development using processing technologies
  • Experience in public cloud services, especially AWS
  • Configuring and tuning relational and NoSQL databases
  • Programming with Python
Responsibilities:
  • Code, test, and document data pipelines
  • Conduct database design
  • Expand data platform capabilities
  • Perform data analysis and root cause analysis

πŸͺ„ Skills: AWS, Python, Data modeling

Posted 12 days ago