AWS Data Engineer

Posted 3 months ago


💎 Seniority level: Senior, 8+ years

📍 Location: United States

🏢 Company: ARFA Solutions, LLC

🗣️ Languages: English

⏳ Experience: 8+ years

🪄 Skills: AWS, Python, ETL, Kafka, Spark, Mentoring, Documentation, Microservices, Compliance, Scala

Requirements:
  • 8+ years of experience as a Data Engineer.
  • Experience with Scala, Spark, and Apache Flink streaming.
  • AWS cloud development.
  • Experience with Kafka and/or other AWS streaming technologies.
  • Proficiency in Python and PySpark.
  • Strong proficiency in Scala.
Responsibilities:
  • Designing, creating, and maintaining Scala-based data tasks to transform raw data into meaningful data models.
  • Participating in all architectural development tasks related to the data pipeline.
  • Writing code in accordance with the data model requirements.
  • Collaborating with cross-functional teams.
  • Mentoring team members in Scala development and management.
  • Documenting Scala development processes, including knowledge documents and build and run books.
  • Working with business analysts to understand requirements.
  • Testing to ensure designs are in compliance with specifications.
  • Debugging and resolving technical issues.
  • Making recommendations for improvements to existing infrastructure.
  • Continually engaging in professional development.
  • Developing tracking documentation.
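The core of the role is transforming raw data into meaningful data models. A minimal sketch of that raw-to-model pattern (the role itself is Scala-focused; Python is used here for brevity, and the record shape, field names, and input format are all hypothetical):

```python
from dataclasses import dataclass

# Hypothetical target data model; field names are illustrative.
@dataclass
class MeterReading:
    meter_id: str
    kwh: float

def to_model(raw_lines):
    """Parse CSV-like raw lines of the form "meter_id,kwh" into the
    data model, silently dropping malformed records."""
    readings = []
    for line in raw_lines:
        parts = line.split(",")
        if len(parts) != 2:
            continue  # wrong field count: skip
        meter_id, kwh = parts
        try:
            readings.append(MeterReading(meter_id, float(kwh)))
        except ValueError:
            continue  # non-numeric reading: skip
    return readings
```

In a real pipeline the same parse-validate-construct shape would typically run inside a Spark or Flink job rather than a plain loop.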
Related Jobs

šŸ“ United States

šŸ” Utilities

šŸ¢ Company: ScalepexšŸ‘„ 11-50Staffing AgencyFinanceProfessional Services

Requirements:
  • Minimum of 5 years of experience in data engineering.
  • Proficiency in AWS services such as Step Functions, Lambda, Glue, S3, DynamoDB, and Redshift.
  • Strong programming skills in Python, with experience using PySpark and Pandas for large-scale data processing.
  • Hands-on experience with distributed systems and scalable architectures.
  • Knowledge of ETL/ELT processes for integrating diverse datasets.
  • Familiarity with utilities-specific datasets is highly desirable.
  • Strong analytical skills to work with unstructured datasets.
  • Knowledge of data governance practices.
Responsibilities:
  • Design and build scalable data pipelines using AWS services to process and transform large datasets from utility systems.
  • Orchestrate workflows using AWS Step Functions.
  • Implement ETL/ELT processes to clean, transform, and integrate data.
  • Leverage distributed systems experience to ensure reliability and performance.
  • Utilize AWS Lambda for serverless application development.
  • Design data models for analytics tailored to utilities use cases.
  • Continuously monitor and optimize data pipeline performance.
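The clean, transform, and integrate steps listed above can be sketched as plain functions. This is only an illustration of the ETL shape, not the actual pipeline: in practice each step would run as a PySpark job or Lambda orchestrated by Step Functions, and every name and record shape below is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical output record for utility usage analytics.
@dataclass
class EnrichedRow:
    customer_id: str
    region: str
    mwh: float

def clean(rows):
    """Clean: drop readings that are missing or non-positive.
    Each input row is a (customer_id, kwh) pair."""
    return [(cid, kwh) for cid, kwh in rows if kwh is not None and kwh > 0]

def integrate(rows, regions):
    """Transform kWh to MWh and integrate with a customer-region lookup,
    dropping customers with no region on file."""
    return [
        EnrichedRow(cid, regions[cid], kwh / 1000.0)
        for cid, kwh in rows
        if cid in regions
    ]
```

At utility scale the same logic would be expressed as DataFrame filters and joins so it can be distributed across a cluster.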

🪄 Skills: AWS, Python, DynamoDB, ETL, Data engineering, Serverless, Pandas, Data modeling

Posted about 2 months ago