
Senior Software Engineer, Data Platform

Posted 28 days ago


💎 Seniority level: Senior, 5+ years

📍 Location: United States, Canada

💸 Salary: 140,000 - 160,000 USD per year

🔍 Industry: Fraud Prevention and AML Compliance

🏢 Company: Sardine 👥 101-250 💰 $70,000,000 Series C about 1 month ago · Cryptocurrency · Fraud Detection · FinTech · Software

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Docker, Python, SQL, DynamoDB, Elasticsearch, ETL, GCP, Kubernetes, NoSQL, CI/CD

Requirements:
  • 5+ years of experience in backend or data engineering roles
  • Strong knowledge of database systems (SQL and NoSQL)
  • Expertise in a modern programming language (Go, Python, Java)
  • Familiarity with cloud platforms (AWS, GCP, Azure)
  • Experience with containerization (Docker, Kubernetes)
Responsibilities:
  • Design and implement ETL pipelines for large datasets (a minimal sketch follows this list)
  • Develop and optimize APIs for data retrieval
  • Architect and manage scalable storage solutions
  • Collaborate on data product development
  • Perform data analysis for client value
  • Document processes and mentor junior engineers
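
The posting itself contains no code. Purely as an illustration of the ETL responsibility above, here is a minimal, hypothetical extract-transform-load sketch in Python; the tables (raw_events, clean_events), columns, and cents-to-dollars normalization are invented for this example, and an in-memory SQLite database stands in for real storage so the script runs as-is.

    # Hypothetical batch ETL sketch: extract raw rows, normalize them,
    # and load them into a clean table. All table and column names are
    # invented; an in-memory SQLite database stands in for real storage.
    import sqlite3

    def extract(conn: sqlite3.Connection) -> list[tuple]:
        # A real pipeline would track a high-water mark instead of a full scan.
        return conn.execute(
            "SELECT id, amount_cents, currency FROM raw_events"
        ).fetchall()

    def transform(rows: list[tuple]) -> list[tuple]:
        # Normalize minor units to whole currency units and upper-case codes.
        return [(rid, cents / 100.0, cur.upper()) for rid, cents, cur in rows]

    def load(conn: sqlite3.Connection, rows: list[tuple]) -> None:
        conn.executemany(
            "INSERT OR REPLACE INTO clean_events (id, amount, currency) VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE raw_events (id INTEGER, amount_cents INTEGER, currency TEXT)")
        conn.execute("CREATE TABLE clean_events (id INTEGER PRIMARY KEY, amount REAL, currency TEXT)")
        conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", [(1, 1999, "usd"), (2, 250, "cad")])
        load(conn, transform(extract(conn)))
        print(conn.execute("SELECT * FROM clean_events").fetchall())

At the scale this role describes, the same extract/transform/load split would run on a warehouse or distributed engine rather than SQLite, but the structure carries over.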
Apply

Related Jobs


📍 United States, Canada

🧭 Full-Time

💸 147,500 - 227,500 USD per year

🔍 Financial Technology

Requirements:
  • 4+ years in software engineering for data systems.
  • Experience building scalable infrastructure to support batch, micro-batch, or streaming processing.
  • Experience in business domains such as payment systems, credit cards, bank transfers, or blockchains.
  • Experience in data governance and provenance.
  • Knowledge of the internals of open-source data technologies.
  • Ability to tackle complex and ambiguous problems.
  • Self-starter who takes ownership and enjoys moving at a fast pace.
  • Excellent communication skills, with the ability to collaborate across multiple remote teams, share ideas, and present concepts effectively.
Responsibilities:
  • Design, build, and operate data platform services (warehousing, orchestration, and catalogs).
  • Continuously enhance platform operations by improving monitoring, performance, reliability, and resource optimization.
  • Design, build, and maintain the data ingestion framework that sources the data required for various analytical and reporting needs, including on-chain data, internal system data, and partner data (a minimal orchestration sketch follows this list).
  • Be a domain expert in data warehousing, modeling, pipelines, and quality. Work closely with multiple stakeholders, including Product, Engineering, Data Science, Security, and Compliance teams, on data contract modeling, data lifecycle management, governance, and regulatory/legal compliance.
  • Provide ML data platform capabilities for AI/Data Science teams to perform data preparation, model training and management, and experiment execution.
  • Develop and maintain core services and libraries that enhance critical platform functionalities, such as cataloging data assets and lineage, tracking data versioning and quality, managing auto-backfilling, and implementing access controls on data assets.
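
This listing names Apache Airflow among its skills. As a purely illustrative sketch of the ingestion-framework responsibility above, a minimal hypothetical Airflow DAG might wire up a daily fetch-validate-publish job; the DAG id, task names, and callable bodies are all invented, and the `schedule` argument assumes Airflow 2.4 or later.

    # Hypothetical Airflow DAG for a daily ingestion job. The dag_id,
    # task names, and callable bodies are invented; assumes Airflow 2.4+
    # (where the `schedule` argument replaced `schedule_interval`).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def fetch_partner_data(**_):
        print("pull yesterday's partner files into staging")

    def validate(**_):
        print("run schema and row-count checks on the staged data")

    def publish(**_):
        print("merge validated rows into the warehouse table")

    with DAG(
        dag_id="partner_data_ingestion",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        fetch = PythonOperator(task_id="fetch", python_callable=fetch_partner_data)
        check = PythonOperator(task_id="validate", python_callable=validate)
        merge = PythonOperator(task_id="publish", python_callable=publish)

        # Linear dependency: each day's publish only runs on validated data.
        fetch >> check >> merge

A real ingestion framework covering on-chain, internal, and partner sources would likely fan these tasks out per source, but the fetch-validate-publish chain is the core pattern.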

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Blockchain, Cloud Computing, ETL, Java, Kafka, Kubernetes, Machine Learning, Snowflake, Algorithms, Data engineering, Data science, Data Structures, REST API, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Posted 7 days ago
Apply

📍 United States

💸 200,000 - 255,000 USD per year

🔍 Blockchain intelligence and financial services

🏢 Company: TRM Labs 👥 101-250 💰 $70,000,000 Series B over 2 years ago · Cryptocurrency · Compliance · Blockchain · Big Data

Requirements:
  • A Bachelor's degree (or equivalent) in Computer Science or a related field.
  • 8+ years of hands-on experience in architecting distributed systems.
  • Exceptional programming skills in Python and proficiency in SQL or SparkSQL.
  • In-depth experience with data stores and query engines such as Iceberg, Trino, BigQuery, and StarRocks.
  • Proficiency in tools like Airflow and dbt for data pipeline orchestration.
  • Expertise in technologies like Spark, Kafka, and Flink.
  • Competence in deploying and monitoring cloud infrastructure with tools like Docker and Terraform.
  • Proven ability to manage extensive datasets.
Responsibilities:
  • Build highly reliable data services to integrate with blockchains.
  • Develop complex ETL pipelines for real-time data processing (see the streaming sketch after this list).
  • Design intricate data models for optimal storage and retrieval.
  • Oversee deployment and monitoring of large database clusters.
  • Collaborate with data scientists and engineers to enhance products.
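
Spark and Kafka appear in this listing's skills. As an illustrative, hypothetical sketch of the real-time ETL responsibility above (not this company's actual code), a PySpark Structured Streaming job could parse transactions from a Kafka topic; the topic name, broker address, and schema fields are invented, and running it requires the spark-sql-kafka connector package.

    # Hypothetical streaming ETL sketch: read JSON transactions from a
    # Kafka topic and parse them into typed columns. Topic name, broker
    # address, and schema are invented. Requires launching with the Kafka
    # connector, e.g.:
    #   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 stream.py
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("tx_stream").getOrCreate()

    # Expected shape of each Kafka message value (JSON).
    schema = StructType([
        StructField("tx_hash", StringType()),
        StructField("value_eth", DoubleType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "onchain-transactions")
        .load()
    )

    # Kafka values arrive as bytes; decode and parse into typed columns.
    parsed = (
        raw.select(from_json(col("value").cast("string"), schema).alias("tx"))
        .select("tx.*")
    )

    # Console sink for the sketch; a real job would write to a warehouse table.
    query = parsed.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()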

Docker, Python, SQL, Blockchain, ETL, Kafka, Kubernetes, Airflow, Data engineering, Spark, Collaboration, Terraform

Posted 7 months ago
Apply