
Staff Software Engineer, Data Incrementalization

Posted 4 months ago


💎 Seniority level: Staff, 5+ years

📍 Location: United States (EST, PST)

🧭 Full-Time

💸 Salary: $240,000 - $270,000 USD per year

🔍 Industry: Blockchain intelligence and financial services

🏢 Company: TRM Labs · 👥 101-250 · 💰 $70,000,000 Series B about 2 years ago · Cryptocurrency, Compliance, Blockchain, Big Data

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: Docker, Python, SQL, ETL, Kafka, Kubernetes, Snowflake, Airflow, Algorithms, Data engineering, Data Structures, Postgres, Spark, Collaboration, Terraform

Requirements:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of experience building distributed system architecture, with a particular focus on incremental updates, from inception to production.
  • Strong programming skills in Python and SQL.
  • Deep technical expertise in advanced data structures and algorithms for incrementally updating data stores (e.g., graphs, trees, hash maps).
  • Comprehensive knowledge across all facets of data engineering, including implementing and managing incremental updates in data stores such as BigQuery, Snowflake, Redshift, Athena, Hive, and Postgres.
  • Experience orchestrating data pipelines and workflows focused on incremental processing, using tools such as Airflow, dbt, Luigi, Azkaban, and Storm.
  • Proficiency in developing and optimizing data processing technologies and streaming workflows for incremental updates (e.g., Spark, Kafka, Flink).
  • Ability to deploy and monitor scalable, incremental update systems in public cloud environments (e.g., Docker, Terraform, Kubernetes, Datadog).
  • Expertise in loading, querying, and transforming large datasets with a focus on efficiency and incremental growth.
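The posting does not define what it means by "incremental updates," but the common pattern behind requirements like these is watermark-based extraction: process only rows changed since the last run instead of rescanning the full table. A minimal, hypothetical sketch (all names here are illustrative, not from TRM Labs):

```python
from datetime import datetime, timezone

def extract_incremental(rows, watermark):
    """Return only rows updated after the last saved watermark,
    plus the new watermark to persist for the next run."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

# Toy source table with update timestamps.
rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]

# First run: everything is newer than the initial watermark.
fresh, wm = extract_incremental(rows, datetime(2023, 12, 31, tzinfo=timezone.utc))

# Second run with the saved watermark: nothing to reprocess.
again, _ = extract_incremental(rows, wm)
```

The cost-efficiency angle the role emphasizes comes from the second run touching zero rows rather than rescanning the table.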
Responsibilities:
  • Design and build our Cloud Data Warehouse with a focus on incremental updates to improve cost efficiency and scalability.
  • Research innovative methods to incrementally optimize data processing, storage, and retrieval in support of efficient data analytics and insights.
  • Develop and maintain ETL pipelines that transform and incrementally process petabytes of structured and unstructured data to enable data-driven decision-making.
  • Collaborate with cross-functional teams to design and implement new data models and tools that accelerate innovation through incremental updates.
  • Continuously monitor and optimize the Data Platform's performance for cost efficiency, scalability, and reliability.
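ETL pipelines that apply incremental batches typically rely on idempotent merge (upsert) semantics, analogous to a warehouse MERGE statement. A rough sketch of that idea, using a plain dict as a stand-in for a keyed warehouse table (the tombstone convention is an assumption for illustration, not TRM Labs' actual design):

```python
def merge_incremental(store, batch):
    """Apply an incremental batch to a keyed store (a dict standing in
    for a warehouse table). Rows marked _deleted=True are tombstones."""
    for row in batch:
        if row.get("_deleted"):
            store.pop(row["id"], None)   # propagate deletes
        else:
            store[row["id"]] = row       # insert new keys, update changed ones
    return store

state = {}
merge_incremental(state, [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}])
merge_incremental(state, [{"id": 1, "v": "a2"}, {"id": 2, "_deleted": True}])
# state now holds only the latest version of row 1
```

Because each batch overwrites by key, replaying a batch leaves the store unchanged, which is what makes incremental pipelines safe to retry.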
Apply
