Staff Software Engineer (Data Platform)

Posted about 2 months ago

💎 Seniority level: Staff, 4+ years

📍 Location: United States, India, United Kingdom

🔍 Industry: B2B Technology

🏢 Company: Demandbase (👥 501-1000, 💰 $175,000,000 Debt Financing almost 2 years ago) · Sales Automation, Advertising, Big Data, SaaS, Analytics, B2B, Marketing, Marketing Automation, Software

🗣️ Languages: English

⏳ Experience: 4+ years

🪄 Skills: Python, SQL, Agile, Data Analysis, ETL, Java, JavaScript, Product Development, Data Engineering, Spark, Communication Skills, Problem Solving, Data Modeling

Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Mathematics, or Statistics from a top engineering institution.
  • 4+ years of data engineering experience building enterprise data/analytics solutions.
  • Practical experience with complex analytics projects and advanced SQL for data analysis.
  • Strong practical experience with databases, advanced SQL, and Python/R.
  • Good understanding of data strategies and data model design.
Responsibilities:
  • Design, model, and implement data analysis and analytics solutions.
  • Contribute hands-on to data projects involving high-level design, analysis, experiments, data architecture, and data modeling.
  • Support ETL pipeline modules through effective data transformation, data cleaning, reporting, and statistical analysis.
  • Apply analysis techniques such as segmentation, regression, clustering, and data profiling to analyze trends and report KPIs (a short sketch follows this list).
  • Collaborate with cross-functional teams in an Agile setting to build a scalable, high-availability data analytics platform.
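
For illustration, a minimal sketch of the segmentation/clustering work described above, assuming pandas and scikit-learn; the column names and data are hypothetical placeholders, not taken from the posting.

```python
# Minimal sketch: customer segmentation via k-means clustering.
# Columns (account_id, monthly_spend, engagement_score) are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "account_id": [1, 2, 3, 4, 5, 6],
    "monthly_spend": [120.0, 950.0, 80.0, 1100.0, 60.0, 990.0],
    "engagement_score": [0.2, 0.9, 0.1, 0.8, 0.15, 0.85],
})

# Standardize features so both dimensions contribute equally.
features = StandardScaler().fit_transform(
    df[["monthly_spend", "engagement_score"]]
)

# Fit k-means with 2 segments and attach labels for reporting.
df["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(df.groupby("segment")["monthly_spend"].mean())
```

In practice the feature frame would be pulled from the warehouse via SQL rather than defined inline, and the segment count chosen by inspecting cluster quality.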

Related Jobs

📍 US

🧭 Full-Time

💸 200,000 - 255,000 USD per year

🔍 Financial services, Blockchain

Requirements:
  • A Bachelor's degree (or equivalent) in Computer Science or a related field.
  • A proven track record with 8+ years of hands-on experience architecting distributed systems.
  • Exceptional programming skills in Python and adeptness in SQL or SparkSQL.
  • In-depth experience with data stores such as Iceberg, Trino, BigQuery, and StarRocks.
  • Proficiency in data pipeline and workflow orchestration tools like Airflow and DBT.
  • Expertise in data processing technologies and streaming workflows, including Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure within public cloud platforms using Docker, Terraform, Kubernetes, and Datadog.
  • Proven ability in loading, querying, and transforming extensive datasets.
Responsibilities:
  • Build highly reliable data services to integrate with dozens of blockchains.
  • Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real time (a streaming sketch follows this list).
  • Design and architect intricate data models for optimal storage and retrieval, supporting sub-second latency for querying blockchain data.
  • Oversee the deployment and monitoring of large database clusters with a focus on performance and high availability.
  • Collaborate across departments to design and implement novel data models that enhance TRM’s products.
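
As a rough illustration of the streaming ETL described above, a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and appends them to Parquet. It assumes the spark-sql-kafka package is on the classpath; the topic name, schema, and paths are hypothetical, not taken from the posting.

```python
# Minimal sketch of a streaming ETL stage: Kafka in, parsed Parquet out.
# Topic (tx_events), schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (SparkSession.builder
         .appName("blockchain-etl-sketch")
         .getOrCreate())

schema = StructType([
    StructField("chain", StringType()),
    StructField("tx_hash", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "tx_events")
       .load())

# Kafka delivers bytes; decode the value and parse the JSON payload.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("tx"))
          .select("tx.*"))

# Append parsed transactions to Parquet, with checkpointing for recovery.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/tmp/tx_parquet")
         .option("checkpointLocation", "/tmp/tx_checkpoint")
         .outputMode("append")
         .start())
query.awaitTermination()
```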

Docker, Python, SQL, Blockchain, ETL, Kafka, Kubernetes, Airflow, Data Engineering, Spark, Collaboration, Terraform, Documentation

Posted about 2 months ago

📍 US

🔍 Blockchain intelligence and financial services

Requirements:
  • Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 5+ years of hands-on experience architecting distributed systems.
  • Exceptional programming skills in Python.
  • Adeptness in SQL or SparkSQL.
  • In-depth experience with data stores such as ClickHouse, ElasticSearch, Postgres, Redis, and Neo4j.
  • Proficiency in data pipeline and workflow orchestration tools like Airflow, DBT, Luigi, Azkaban, and Storm (a minimal Airflow sketch follows this list).
  • Expertise in data processing technologies and streaming workflows, including Spark, Kafka, and Flink.
  • Competence in deploying and monitoring infrastructure within public cloud platforms, utilizing tools like Docker, Terraform, Kubernetes, and Datadog.
Responsibilities:
  • Build highly reliable data services to integrate with various blockchains.
  • Develop complex ETL pipelines for processing structured and unstructured data in real time.
  • Design intricate data models to support optimal storage and retrieval with sub-second latency.
  • Oversee large database cluster deployment and monitoring with a focus on performance.
  • Collaborate with data scientists, engineers, and product managers to enhance TRM's products.
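
To make the orchestration requirement concrete, a minimal Airflow DAG sketch (assuming Airflow 2.4+) chaining an extract task into a transform task; the DAG id, schedule, and task bodies are hypothetical placeholders, not from the posting.

```python
# Minimal Airflow DAG sketch: extract -> transform, on a daily schedule.
# The DAG id and the two task functions are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting raw records")

def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming records")

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Run transform only after extract succeeds.
    extract_task >> transform_task
```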

Docker, Python, SQL, Blockchain, ElasticSearch, ETL, Kafka, Kubernetes, Airflow, ClickHouse, Data Engineering, Postgres, Redis, Spark, Collaboration, Terraform, Documentation

Posted about 2 months ago

📍 United States

💸 200,000 - 255,000 USD per year

🔍 Blockchain intelligence and financial services

🏢 Company: TRM Labs (👥 101-250, 💰 $70,000,000 Series B about 2 years ago) · Cryptocurrency, Compliance, Blockchain, Big Data

Requirements:
  • A Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 8+ years of hands-on experience architecting distributed systems.
  • Exceptional programming skills in Python and proficiency in SQL or SparkSQL.
  • In-depth experience with data stores such as Iceberg, Trino, BigQuery, and StarRocks (a query sketch follows this list).
  • Proficiency in tools like Airflow and DBT for data pipeline orchestration.
  • Expertise in technologies like Spark, Kafka, and Flink.
  • Competence in deploying and monitoring cloud infrastructure with tools like Docker and Terraform.
  • Proven ability in managing extensive datasets.
Responsibilities:
  • Build highly reliable data services to integrate with blockchains.
  • Develop complex ETL pipelines for real-time data processing.
  • Design intricate data models for optimal storage and retrieval.
  • Oversee deployment and monitoring of large database clusters.
  • Collaborate with data scientists and engineers to enhance products.
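
As an illustration of the Iceberg/Trino stack named above, a minimal sketch using the trino Python client to run an aggregate query; the host, catalog, schema, and table names are hypothetical placeholders, not from the posting.

```python
# Minimal sketch: query an Iceberg table through Trino from Python.
# Host, catalog, schema, and table names are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # hypothetical coordinator host
    port=8080,
    user="data_platform",
    catalog="iceberg",
    schema="analytics",
)

cur = conn.cursor()
# Aggregate daily row counts from a hypothetical events table.
cur.execute(
    "SELECT date_trunc('day', event_time) AS day, count(*) AS n "
    "FROM events GROUP BY 1 ORDER BY 1 DESC LIMIT 7"
)
for day, n in cur.fetchall():
    print(day, n)
```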

Docker, Python, SQL, Blockchain, ETL, Kafka, Kubernetes, Airflow, Data Engineering, Spark, Collaboration, Terraform

Posted 5 months ago