Senior AI Data Engineer
Cortica Healthcare
Remote in CA, TX, NC, WA, ID, NV, AZ, CO, KS, AR, LA, AL, GA, FL, SC, TN, VA, MD, NJ, DE, IL, WI, MI, OH, MA, PA, NH, CT · Full-Time · Senior
Salary: $160,000 – $200,000 USD per year
Job Details
- Experience
- 5+ years of hands-on data engineering experience, including building and operating production data pipelines. 2+ years’ experience with AI-first development workflows. 4+ years’ experience with AWS big data services (S3, Glue, Lambda, Redshift) and/or their Azure equivalents. 1+ year of experience with Snowflake. 2+ years of experience with orchestration frameworks. 2+ years of Salesforce experience, including Apex development and configuration.
- Required Skills
- GraphQL, Node.js, Python, SQL, Flask, Kafka, Microsoft Power BI, Salesforce, Snowflake, Azure, RESTful APIs, AWS Lambda
Requirements
- You have 5+ years of hands-on data engineering experience, including building and operating production data pipelines.
- You have expert-level Python skills for ETL, pipeline orchestration, and automation.
- You possess deep SQL proficiency — query optimization, data modeling, stored procedures.
- You bring 2+ years’ experience working with AI-first development workflows.
- You bring 4+ years’ experience with AWS big data services (S3, Glue, Lambda, Redshift) and/or their Azure equivalents.
- You have 1+ year of experience with Snowflake.
- You have 2+ years of experience with orchestration frameworks.
- You have 2+ years of Salesforce experience, including Apex development and configuration.
- You’re experienced with Kimball dimensional modeling — you've built star schemas and conformed dimensions in production.
- You have Power BI (or equivalent BI tool) experience — data model design and report development.
- You have API integration experience — REST, GraphQL, event streaming (Kafka, Kinesis, or similar).
- You possess application development literacy — comfortable building lightweight web tooling (Python/Flask, Node, or similar) to complement data products.
Responsibilities
- Engage stakeholders directly to gather, clarify, and document project requirements.
- Translate requirements into architected data solutions: choose the right storage, pipeline, modeling, and delivery approach for each problem.
- Own testing end-to-end — unit tests, data quality checks, reconciliation, and integration tests before anything reaches production.
- Deploy solutions to production and monitor post-deployment health, iterating rapidly based on real-world feedback.
- Run parallel AI coding sessions (Claude Code, Cursor, Codex) across different facets of a pipeline simultaneously — orchestrate, verify, and integrate the outputs.
- Design and build complex, reliable data pipelines ingesting from AWS, Azure, Salesforce, MuleSoft, and multiple third-party APIs into our AWS Data Lake and Snowflake warehouse.
- Implement and evolve data models using Kimball methodology to support financial, operational, and clinical analytics.
- Build and support Power BI data models and reports; empower analytics team members to self-serve on a reliable data foundation.
- Build lightweight internal data applications and tooling where needed: data entry interfaces, operational dashboards, and automation scripts that bridge the gap between data pipelines and end users.
- Ensure data security and HIPAA compliance in all pipeline and application work. Partner with IT to enforce data governance standards.