Senior Data Engineer

Posted 2 months ago

πŸ’Ž Seniority level: Senior, 4+ years

πŸ’Έ Salary: 160,000 - 200,000 USD per year

πŸ” Industry: Financial media, digital assets

πŸ—£οΈ Languages: English

⏳ Experience: 4+ years

πŸͺ„ Skills: Data modeling

Requirements:
  • Significant knowledge of the crypto industry; crypto-native.
  • 4+ years of hands-on experience with data modeling, schema design, and data operations.
  • Experience building backend systems at scale with a focus on data processing.
  • Proficient in Python, Go, Rust, and/or TypeScript.
  • Strong expertise in SQL and related data technologies (Parquet, Postgres, ClickHouse).
  • Experience creating large-scale data warehouses (100M+ rows/day).
  • Familiar with DevOps tools and cloud solutions like Docker, Kubernetes, AWS, or GCP.
Responsibilities:
  • Own Data Sourcing Pipelines: Architect data warehousing, ingestion, and sourcing strategy.
  • Design and Implement ETL Solutions: Implement ETL approaches and model internal schemas.
  • Grow Shared Knowledge: Provide guidance on implementing architectures.
  • Drive Operational Efficiency: Improve data workflows and automate tasks.
  • Cross-Functional Collaboration: Lead initiatives with Research, Product, and Engineering teams.

Related Jobs

πŸ“ US

πŸ’Έ 103,200 - 128,950 USD per year

πŸ” Genetics and healthcare

🏒 Company: Natera Β· πŸ‘₯ 1001-5000 Β· πŸ’° $250,000,000 Post-IPO Equity over 1 year ago Β· πŸ«‚ Last layoff almost 2 years ago Β· Tags: Women's, Biotechnology, Medical, Genetics, Health Diagnostics

  • BS degree in computer science or a comparable program or equivalent experience.
  • 8+ years of overall software development experience, ideally in complex data management applications.
  • Experience with SQL and NoSQL databases, including DynamoDB, Cassandra, Postgres, and Snowflake.
  • Proficiency in data technologies such as Hive, HBase, Spark, EMR, and Glue.
  • Ability to manipulate and extract value from large datasets.
  • Knowledge of data management fundamentals and distributed systems.

  • Work with other engineers and product managers to make design and implementation decisions.
  • Define requirements in collaboration with stakeholders and users to create reliable applications.
  • Implement best practices in development processes.
  • Write specifications, design software components, fix defects, and create unit tests.
  • Review design proposals and perform code reviews.
  • Develop solutions for the Clinicogenomics platform utilizing AWS cloud services.

AWS, Python, SQL, Agile, DynamoDB, Snowflake, Data engineering, Postgres, Spark, Data modeling, Data management

Posted 2 days ago

πŸ“ Colombia, Spain, Ecuador, Venezuela, Argentina

πŸ” HR Tech

🏒 Company: Jobgether Β· πŸ‘₯ 11-50 Β· πŸ’° $1,493,585 Seed almost 2 years ago Β· Tags: Internet

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • Minimum of 4 years of experience in data engineering.
  • Proficiency in Python with at least 4 years of experience.
  • Hands-on experience with big data technologies like Hadoop, Spark, or Kafka.
  • Proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases like MongoDB.
  • Experience with AWS cloud platforms.
  • Strong understanding of data modeling, schema design, and data warehousing concepts.
  • Excellent analytical and troubleshooting skills.
  • Fluency in English and Spanish.

  • Design, build, and maintain scalable data pipelines and ETL processes for large volumes of data.
  • Develop and optimize data scraping solutions for efficient job data extraction.
  • Collaborate with data scientists on AI-driven matching algorithms.
  • Ensure data integrity and reliability through validation mechanisms.
  • Analyze and optimize system performance, addressing challenges as they arise.
  • Work with teams to efficiently deploy machine learning models.
  • Stay updated on emerging technologies and best practices.
  • Collaborate with various teams to understand and implement data requirements.
  • Maintain documentation for systems and processes, ensuring compliance.

AWS, Docker, PostgreSQL, Python, ETL, Hadoop, Kafka, Kubernetes, Machine Learning, MongoDB, MySQL, Spark, Data modeling

Posted 5 days ago

πŸ“ United States, United Kingdom, Spain, Estonia

πŸ” Identity verification

🏒 Company: Veriff Β· πŸ‘₯ 501-1000 Β· πŸ’° $100,000,000 Series C almost 3 years ago Β· πŸ«‚ Last layoff over 1 year ago Β· Tags: Artificial Intelligence (AI), Fraud Detection, Information Technology, Cyber Security, Identity Management

  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi.
  • Experience with data from diverse sources including RDBMS and APIs.

  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights.
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.

Python, SQL, Apache Airflow, ETL, Data engineering, JSON, Data modeling

Posted 21 days ago

πŸ“ United Kingdom

πŸ” Esports, gaming, tournaments, leagues, events

🏒 Company: ESL FACEIT Group Β· πŸ‘₯ 501-1000 Β· πŸ«‚ Last layoff 10 months ago Β· Tags: Video Games, Gaming, Digital Entertainment, eSports

  • Experience shaping architecture for mature data platforms.
  • Hands-on building of resilient data pipelines (Airflow, Kafka, etc.) at scale.
  • CI/CD expertise (GitHub Actions, Jenkins) in data engineering.
  • Infrastructure management using IaC (Terraform).
  • Knowledge of data modeling in cloud warehouses (BigQuery, Snowflake).
  • Familiarity with database design principles.
  • Skills in operational procedures and data observability tools.

  • Serve as a technical leader and understand customer needs.
  • Partner with stakeholders and promote data platform adoption.
  • Contribute to technical strategy and manage delivery.
  • Set high standards for documentation, testing, and code quality.
  • Drive efficiencies in code, infrastructure and data models.
  • Inspire and guide team members through code reviews and design sessions.

AWS, Leadership, Python, SQL, GCP, Jenkins, Kafka, Snowflake, Strategy, Airflow, Data engineering, Data Structures, Prometheus, CI/CD, DevOps, Terraform, Documentation, Data modeling

Posted 29 days ago

πŸ”₯ Senior Data Engineer

πŸ“ Spain

πŸ’Έ 80,000 - 110,000 EUR per year

πŸ” Financial services

  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficient in SQL and dbt for data transformations.
  • Fluent in Python or other modern programming languages.
  • Experience with infrastructure as code languages, like Terraform.
  • Experienced in data pipelines, data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS and/or other cloud providers like Azure or GCP.
  • Strong cross-team communication and collaboration skills.
  • Ability to thrive in ambiguous situations.

  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate closely with tech leads, managers, and cross-functional teams to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review other engineers' work, providing constructive feedback.
  • Act as a technical resource and mentor for engineers inside and outside the team.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation as required.

AWS, Python, SQL, GCP, Azure, Data engineering, Collaboration, Terraform, Data modeling

Posted about 1 month ago

πŸ”₯ Senior Data Engineer

πŸ“ United States, Canada

πŸ” Advanced analytics consulting

🏒 Company: Tiger Analytics Β· πŸ‘₯ 1001-5000 Β· Tags: Advertising, Consulting, Big Data, News, Machine Learning, Analytics

  • Bachelor’s degree in Computer Science or similar field.
  • 8+ years of experience in a Data Engineer role.
  • Experience with relational SQL and NoSQL databases such as MySQL and Postgres.
  • Strong analytical skills and advanced SQL knowledge.
  • Development of ETL pipelines using Python & SQL.
  • Good experience with Customer Data Platforms (CDP).
  • Experience in SQL optimization and performance tuning.
  • Data modeling and building high-volume ETL pipelines.
  • Working experience with any cloud platform.
  • Experience with Google Tag Manager and Power BI is a plus.
  • Experience with object-oriented scripting languages: Python, Java, Scala, etc.
  • Experience extracting/querying/joining large data sets at scale.
  • Strong communication and organizational skills.

  • Designing, building, and maintaining scalable data pipelines on cloud infrastructure.
  • Working closely with cross-functional teams.
  • Supporting data analytics, machine learning, and business intelligence initiatives.

Python, SQL, Business Intelligence, ETL, Java, MySQL, Postgres, NoSQL, Analytical Skills, Organizational skills, Data modeling

Posted about 2 months ago

πŸ”₯ Senior Data Engineer

πŸ“ Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

πŸ” FinTech

🏒 Company: Bitso

  • Proven English fluency.
  • 3+ years of professional experience with analytics, ETL, and data systems.
  • 3+ years with SQL databases, data lakes, big data, and cloud infrastructure.
  • 3+ years of experience with Spark.
  • BS or Master's degree in Computer Science or a similar field.
  • Strong proficiency in SQL, Python, and AWS.
  • Strong data modeling skills.

  • Build processes required for optimal extraction, transformation, and loading of data from various sources using SQL, Python, Spark.
  • Identify, design, and implement internal process improvements while optimizing data delivery and redesigning infrastructure for scalability.
  • Ensure data integrity, quality, and security.
  • Work with stakeholders to assist with data-related technical issues and support their data needs.
  • Manage data separation and security across multiple data sources.

AWS, Python, SQL, Business Intelligence, Machine Learning, Data engineering, Data Structures, Spark, Communication Skills, Data modeling

Posted about 2 months ago

πŸ“ Portugal

πŸ” Healthcare technology

NOT STATED

  • Assist in developing data engineering solutions that drive company initiatives.
  • Implement data pipelines to manage and process large datasets effectively.
  • Collaborate with cross-functional teams to enhance product offerings.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, ETL, Data engineering, Data modeling

Posted about 2 months ago

πŸ“ Colombia

🧭 Full-Time

πŸ” Data engineering and analytics consultancy

🏒 Company: Aimpoint Digital

  • Degree in Computer Science, Engineering, or Mathematics, or equivalent experience.
  • 3+ years working with relational databases and query languages.
  • 3+ years building data pipelines in production across various data types.
  • 3+ years data modeling experience.
  • 3+ years writing clean code in Python, Scala, Java, or similar languages.
  • Experience with cloud data warehouses and ETL/ELT tools preferred.
  • DevOps and container technology experience preferred.
  • Consulting experience strongly preferred.

  • Become a trusted advisor working with clients, including data owners and C-level executives.
  • Work independently as part of a small team to solve complex data engineering use cases across various industries.
  • Design and develop the analytical layer, building cloud data warehouses, data lakes, and ETL/ELT pipelines.
  • Support deployment of data science and machine learning projects into production.
  • Work with modern data tools like Snowflake, Databricks, and Fivetran.

Python, SQL, Business Development, ETL, Git, Java, Snowflake, Data engineering, Spark, Communication Skills, CI/CD, Stakeholder management, Data modeling

Posted about 2 months ago

πŸ“ United States

πŸ” Advertising software for Connected TV

🏒 Company: MNTN Β· πŸ‘₯ 251-500 Β· πŸ’° $2,000,000 Seed almost 2 years ago Β· Tags: Advertising, Real Time, Marketing, Software

  • 5+ years of experience in data engineering, analysis, and modeling of complex data.
  • Experience with distributed processing engines such as Spark.
  • Strong experience with programming languages like Python and familiarity with algorithms.
  • Experience in SQL, data modeling, and manipulating large data sets.
  • Hands-on experience with data warehousing and building data pipelines.
  • Familiarity with software processes and tools such as Git, CI/CD, Linux, and Airflow.
  • Experience with cloud computing environments like AWS, Azure, or GCP.
  • Strong written and verbal communication skills for conveying technical topics.

  • Become the expert on MNTN data pipelines, infrastructure, and processes.
  • Design architecture with observability to maintain high quality data pipelines.
  • Create and manage ETL/ELT workflows for transforming large data sets.
  • Organize data and metrics for ad buying features and client performance.
  • Organize visualizations, reporting, and alerting for performance and trends.
  • Investigate critical incidents and ensure issues are resolved.

AWS, Python, SQL, Cloud Computing, ETL, GCP, Git, Airflow, Algorithms, Azure, Data engineering, Go, Spark, Communication Skills, CI/CD, Linux, Data modeling

Posted 2 months ago