📍 United States
🔍 Advanced analytics consulting
🏢 Company: Tiger Analytics
- 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
- Hands-on experience in designing and building data pipelines.
- Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Strong experience with Databricks and PySpark for data processing and analytics.
- Solid understanding of data modeling and database design principles, with strong SQL and Spark SQL skills.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
- Strong problem-solving skills and attention to detail.
- Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
- Build and maintain orchestration workflows using Apache Airflow for automating data pipeline execution, scheduling, and monitoring (a brief illustrative sketch follows this list).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
- Optimize data pipelines for performance, reliability, and cost-effectiveness utilizing AWS best practices.
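For illustration, a minimal sketch of the kind of Airflow-orchestrated pipeline these responsibilities describe: an AWS Glue (Spark) transform over raw files in S3, followed by a load into Amazon Redshift. All names here are placeholders (DAG id, bucket, Glue job, schema/table, connection ids), not an actual Tiger Analytics pipeline.

```python
# Hypothetical sketch only: every name below (DAG id, bucket, Glue job,
# schema/table, connection ids) is a placeholder, not a real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="daily_sales_pipeline",        # placeholder pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # run once per day
    catchup=False,
) as dag:
    # Run an existing AWS Glue (Spark) job that transforms raw files landed in S3.
    transform = GlueJobOperator(
        task_id="transform_raw_events",
        job_name="transform_raw_events",  # assumes this Glue job is already defined
        region_name="us-east-1",
    )

    # COPY the curated Parquet output from S3 into Redshift for analytics and reporting.
    load = S3ToRedshiftOperator(
        task_id="load_to_redshift",
        schema="analytics",
        table="sales_events",
        s3_bucket="example-curated-bucket",
        s3_key="sales_events/",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )

    # The load only starts after the transform succeeds; Airflow handles the
    # scheduling, retries, and monitoring of both tasks.
    transform >> load
```

The same pattern extends to Databricks workloads (via the Databricks provider's operators) and to additional validation or alerting tasks in the same DAG.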
Skills: AWS, SQL, Apache Airflow, Business Intelligence, Git, Snowflake, Data Engineering, Spark, Communication Skills, CI/CD, Problem Solving
Posted 2024-10-24