Finance Staff Data Engineer, AI Native
United States · Full-Time · Staff
Salary not disclosed
Job Details
- Experience: 8+ years
- Required Skills: AWS, Python, SQL, Artificial Intelligence, Airflow, Spark, CI/CD, RESTful APIs, DevOps, Terraform, Data modeling, dbt, Databricks, LLM, Networking
Requirements
- 8+ years of experience building and operating large-scale distributed data systems in production environments.
- Strong expertise with cloud data platforms, ideally Databricks, and with AWS infrastructure and services.
- Advanced proficiency in Python, SQL, and Spark for large-scale data processing.
- Strong experience with CI/CD pipelines, Terraform, and modern DevOps practices.
- Solid understanding of dbt, data modeling, and analytical data architecture.
- Experience with data orchestration tools such as Airflow.
- Deep knowledge of data ingestion challenges including networking, APIs, and cross-cloud integration.
- Strong systems design skills with experience in scalable and event-driven architectures.
- Proven ability to use AI/LLM tools (e.g., Cursor, Claude, GitHub Copilot-style tools) to enhance engineering productivity.
- Experience implementing data quality controls, validation frameworks, and observability systems.
- Ability to independently scope ambiguous technical problems and drive them to completion.
- Strong communication skills across technical and non-technical stakeholders.
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or equivalent experience.
Responsibilities
- Architect and evolve scalable data ingestion, transformation, and egress frameworks for financial data systems.
- Design and maintain robust pipelines ensuring high data quality, reliability, and observability across all workflows.
- Build and enhance CI/CD pipelines, improving testing, deployment automation, and developer velocity.
- Develop and optimize data infrastructure across AWS, Databricks, and related cloud environments.
- Define and enforce data security, governance, and SOX compliance controls across systems.
- Implement distributed data processing systems using Spark and modern cloud data architectures.
- Improve developer experience through tooling, automation, and AI-assisted engineering workflows.
- Establish ingestion standards, monitoring systems, and recovery mechanisms to ensure system resilience.
- Collaborate with analytics engineers and finance stakeholders to support downstream reporting and modeling.
- Leverage AI/LLM tools to accelerate development, debugging, and system optimization while maintaining quality ownership.
- Identify architectural risks, dependencies, and scalability challenges, providing clear technical direction.
- Mentor engineers and contribute to raising engineering standards across the team.