Senior Data Engineer - AI-Optimized Data Platforms

Aimpoint Digital
Data and Analytics Consultancy
US and UK | Full-Time | Senior
Salary not disclosed

Job Details

Required Skills
AWS, Docker, Python, SQL, GCP, Git, Java, Kubernetes, Snowflake, Azure, Spark, CI/CD, DevOps, Scala, dbt, Databricks

Requirements

  • Degree in Computer Science, Engineering, or Mathematics, or equivalent experience
  • Experienced at partnering with business stakeholders, explaining technical concepts clearly, and shaping solutions around real business outcomes
  • Passionate about modern data engineering and AI trends, including LLM-powered analytics, semantic layers, vectorized data access, and metadata-driven architectures
  • Strong written and verbal communication skills required
  • 3+ years working with relational databases and query languages
  • 3+ years building data pipelines in production, with the ability to work across structured, semi-structured, and unstructured data
  • 3+ years data modeling (e.g. star schema, entity-relationship)
  • 3+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar languages
  • 2+ years' experience with dbt Core and/or dbt Cloud preferred
  • Experience enabling or accelerating data platform engineering workflows with AI tools such as Codex, Claude, Copilot, Snowflake Cortex Code, and/or Databricks Genie Code preferred
  • Comfortable working independently on an individual workstream, owning end-to-end delivery from design through production
  • Expertise in software engineering concepts and best practices
  • DevOps experience preferred
  • Experience working with cloud data warehouses (Databricks, Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse) preferred
  • Experience working with cloud ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.) preferred
  • Experience working with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) preferred
  • Experience working with Apache Spark preferred
  • Experience preparing data for analytics and following a data science workflow to drive business results preferred
  • Consulting experience strongly preferred
  • Willingness to travel
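As an illustration (not part of the posting itself): the dimensional-modeling requirement above, e.g. a star schema, can be sketched in plain Python. All table and column names here are hypothetical, chosen only to show the pattern of splitting raw records into a dimension table with surrogate keys and a fact table that references them.

```python
# Hypothetical example: building a tiny star schema from raw order events.
# "raw_orders", "dim_customer", and the fact columns are illustrative names.

raw_orders = [
    {"order_id": 1, "customer": "Acme", "region": "US", "amount": 120.0},
    {"order_id": 2, "customer": "Acme", "region": "US", "amount": 75.5},
    {"order_id": 3, "customer": "Globex", "region": "UK", "amount": 300.0},
]

def build_star_schema(rows):
    """Split raw rows into a customer dimension and an order fact table."""
    surrogate_keys = {}   # natural key (customer, region) -> surrogate key
    dim_rows = []
    fact_rows = []
    for row in rows:
        key = (row["customer"], row["region"])
        if key not in surrogate_keys:
            surrogate_keys[key] = len(surrogate_keys) + 1
            dim_rows.append({"customer_sk": surrogate_keys[key],
                             "customer": row["customer"],
                             "region": row["region"]})
        fact_rows.append({"order_id": row["order_id"],
                          "customer_sk": surrogate_keys[key],
                          "amount": row["amount"]})
    return dim_rows, fact_rows

dims, facts = build_star_schema(raw_orders)
```

In a real engagement this shape would typically live in dbt models or warehouse SQL rather than application code; the point is the separation of descriptive attributes (dimension) from measures (fact).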

Responsibilities

  • Become a trusted data and AI advisor to clients, translating business questions into AI-ready data architectures
  • Work independently as part of a small team to solve complex data engineering use cases across various industries
  • Design and implement AI-optimized data platforms, including cloud data warehouses, lakehouses, ETL/ELT pipelines, orchestration jobs, and analytic layers
  • Build and evolve semantic and analytical layers that power tools like Snowflake Cortex, Databricks Genie, BI platforms, and emerging AI Copilots
  • Use modern platforms and tooling such as Snowflake, Databricks, dbt, Fivetran, and cloud-native orchestration frameworks
  • Engineer modern ELT/ETL pipelines that handle structured, semi-structured, and unstructured data to support AI and analytics use cases
  • Design modern data models with an emphasis on metrics layers, knowledge graphs, and semantic consistency for AI consumption
  • Write production-ready code in SQL, Python, and Spark, following software engineering best practices and tooling such as Git and CI/CD
  • Apply AI-assisted data engineering techniques for data exploration, quality checks, schema generation, documentation, lineage, and transformation acceleration
  • Contribute to the evolution of the AI-forward data engineering and infrastructure practice, including internal accelerators, patterns, and client-ready architectures
  • Collaborate with analytics, data science, and ML project teams to productionize AI-enabled analytics, features, and inference pipelines
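As an illustration (again, not part of the posting): one of the responsibilities above, automated data quality checks and schema handling, can be sketched as a minimal Python snippet. The function and column names are hypothetical; the sketch simply infers a column-to-type schema from sample records and flags rows that deviate from it.

```python
# Hypothetical sketch of an automated quality check: infer a simple schema
# from sample records, then validate incoming rows against it.

def infer_schema(rows):
    """Map each column name to the Python type first seen in the sample rows."""
    schema = {}
    for row in rows:
        for col, val in row.items():
            schema.setdefault(col, type(val))
    return schema

def validate(row, schema):
    """Return a list of (column, problem) tuples; an empty list means the row passes."""
    problems = []
    for col, expected in schema.items():
        if col not in row:
            problems.append((col, "missing"))
        elif not isinstance(row[col], expected):
            problems.append((col, f"expected {expected.__name__}"))
    return problems

sample = [{"id": 1, "name": "widget", "price": 9.99}]
schema = infer_schema(sample)
bad = validate({"id": "2", "name": "gadget"}, schema)   # wrong type, missing column
```

Production versions of this idea usually lean on dedicated tooling (dbt tests, warehouse constraints, or data-quality frameworks) rather than hand-rolled checks, but the underlying contract is the same.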