Associate Architect - Data Engineering

Remote (USA/Canada) · Full-Time · Mid-Level
Salary not disclosed

Job Details

Experience
5+ years
Required Skills
AWS, PostgreSQL, Python, SQL, Amazon RDS, Microsoft Power BI, MongoDB, MySQL, Tableau, AWS Lambda, PySpark

Requirements

  • 5+ years of experience in designing and implementing data warehouses and data lakes/lakehouses on AWS
  • Hands-on experience with AtScale or similar semantic layer tools
  • Proven success working with globally distributed teams
  • Deep working knowledge across key AWS Data & Analytics services
  • Building large-scale data lake architectures on Amazon S3 and open table formats
  • Implementing governance and cataloging through AWS Lake Formation
  • Developing ETL and metadata frameworks using AWS Glue
  • Leveraging AWS Lambda for serverless data processing
  • Running distributed data workloads on Amazon EMR
  • Enabling real-time data pipelines with AWS Kinesis (Data Streams and Firehose)
  • Orchestrating pipelines using AWS Step Functions, Amazon MWAA, or similar services
  • Designing and optimizing schemas and query performance on Amazon Redshift, including Spectrum and Serverless features
  • Querying large datasets interactively using Amazon Athena
  • Managing operational databases using Amazon RDS across engines such as PostgreSQL, MySQL, and Aurora
  • Integrating and migrating data using AWS DMS, Glue Connectors, EventBridge, SNS, and SQS
  • Strong understanding of semantic modeling, including logical data models, virtual cubes, and centralized metric definitions
  • Experience optimizing performance using query pushdown, caching, and aggregate awareness over platforms like Redshift and Athena
  • Ability to integrate semantic layers with BI tools (QuickSight, Tableau, Power BI) and enforce row/column-level security
  • Strong programming capability in Python and PySpark for large-scale data processing
  • Proficiency in writing complex SQL queries, analytical functions, and performance tuning for large datasets
  • Familiarity with NoSQL databases such as Amazon DynamoDB, MongoDB, or DocumentDB
  • Strong understanding of partitioning, indexing, scaling approaches, and query optimization techniques
  • Proven experience in architecting and implementing data pipelines using native AWS services
  • Solid understanding of data modeling concepts, including dimensional, normalized, and lakehouse patterns
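As an illustration of the SQL proficiency called for above (complex queries with analytical functions), here is a minimal sketch using Python's built-in sqlite3 module. The `sales` table and its columns are hypothetical, and a production warehouse would target Redshift or Athena rather than SQLite, but the window-function pattern is the same.

```python
import sqlite3

# In-memory database standing in for a warehouse table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("east", "2024-01", 100.0),
        ("east", "2024-02", 150.0),
        ("west", "2024-01", 80.0),
        ("west", "2024-02", 120.0),
    ],
)

# Analytical (window) function: running revenue total per region,
# ordered by month -- the same SQL pattern used on Redshift/Athena at scale.
rows = conn.execute(
    """
    SELECT region, month, revenue,
           SUM(revenue) OVER (
               PARTITION BY region ORDER BY month
           ) AS running_total
    FROM sales
    ORDER BY region, month
    """
).fetchall()

for row in rows:
    print(row)
```

Partitioning the window by `region` keeps each running total independent, which mirrors how partitioned aggregations are expressed over much larger datasets in a warehouse.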
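The serverless-processing requirement above typically means event-driven AWS Lambda functions. A minimal handler sketch follows, assuming an S3 "ObjectCreated" notification event; the event shape matches what S3 sends, but the object key is made up, and the boto3 reads a real pipeline would perform are omitted so the sketch runs standalone.

```python
import json

def handler(event, context):
    # Minimal Lambda handler sketch: pull the object keys out of an
    # S3 "ObjectCreated" notification event and return them.
    # (A real pipeline would fetch and transform each object via boto3.)
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
    ]
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}

# Example invocation with a hand-built event (hypothetical key):
event = {"Records": [{"s3": {"object": {"key": "raw/2024/01/data.json"}}}]}
result = handler(event, None)
print(result)
```

Keeping the handler a plain function of `(event, context)` makes it trivially unit-testable outside AWS, which is the usual practice for Lambda-based ETL steps.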