Senior Data Engineer

Posted 3 days ago

💎 Seniority level: Senior, 8+ years

πŸ“ Location: India, PST, NOT STATED

πŸ” Industry: Data Engineering

🏢 Company: Aryng (👥 11-50; Consulting, Training, Analytics)

πŸ—£οΈ Languages: English

⏳ Experience: 8+ years

🪄 Skills: AWS, Python, SQL, GCP, Hadoop, Kafka, Snowflake, Tableau, Airflow, Azure, Data engineering, Spark

Requirements:
  • 8+ years of data engineering experience.
  • 4+ years implementing and managing data solutions on cloud platforms such as GCP, AWS, or Azure.
  • Strong proficiency in Python and SQL.
  • Experience with BigQuery, Snowflake, Redshift, and dbt.
  • Understanding of data warehousing, data lakes, and cloud concepts.
  • Excellent communication, presentation, and problem-solving skills.
  • B.S. in Computer Science or a related field.
  • Consulting background is a plus.
Responsibilities:
  • Implement asynchronous data ingestion and high-volume stream data processing (see the sketch after this list).
  • Conduct real-time data analytics using various data engineering techniques.
  • Implement application components using cloud technologies.
  • Assist in defining data pipelines and identifying bottlenecks in data management methodologies.
  • Utilize cutting-edge cloud platform solutions from GCP, AWS, and Azure.
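
As an illustrative aside (not part of the posting), here is a minimal Python sketch of the kind of asynchronous stream ingestion described above, using the kafka-python client; the topic name, broker address, and event fields are all hypothetical:

```python
import json

from kafka import KafkaConsumer  # kafka-python client

# Consume JSON events from a hypothetical 'events' topic and keep a
# running per-user count, a toy stand-in for high-volume stream
# processing and real-time analytics.
consumer = KafkaConsumer(
    "events",                            # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts = {}
for message in consumer:
    event = message.value
    counts[event["user_id"]] = counts.get(event["user_id"], 0) + 1
```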

Related Jobs

πŸ“ India

πŸ” Fintech

🏢 Company: Oportun (👥 1001-5000; 💰 $235,000,000 Post-IPO Debt 3 months ago; 🫂 last layoff about 1 year ago; Debit Cards, Consumer Lending, Financial Services, FinTech)

Requirements:
  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • 5+ years of experience in data engineering, focusing on data architecture and ETL.
  • Proficiency in Python/PySpark and Java/Scala.
  • Expertise in big data technologies like Hadoop, Spark, and Kafka.
  • In-depth knowledge of SQL and experience with databases like PostgreSQL, MySQL, or NoSQL.
  • Experience in building complex end-to-end data pipelines.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Strong leadership and communication skills.
Responsibilities:
  • Lead the design and implementation of scalable data architectures.
  • Develop data pipelines and optimize them for performance (see the sketch after this list).
  • Oversee management of databases and data warehouses.
  • Establish data quality standards and governance practices.
  • Provide mentorship and technical leadership to junior engineers.
  • Collaborate with cross-functional teams to deliver data solutions.
  • Implement monitoring systems for data pipeline performance.
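
As an illustrative aside (not part of the posting), a small PySpark sketch of a batch pipeline with a simple performance-minded design choice (date-partitioned output); the paths, columns, and app name are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Hypothetical source table of raw orders.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Aggregate revenue per day; partitioning the output by date is a
# common optimization for downstream reads.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```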

AWS, Leadership, PostgreSQL, Python, SQL, Apache Airflow, ETL, GCP, Hadoop, Java, Kafka, MySQL, Azure, Data engineering, NoSQL, Spark, Scala, Mentorship

Posted 14 days ago

πŸ“ South Africa, Mauritius, Kenya, Nigeria

πŸ” Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years experience building and optimizing 'big data' data pipelines, architectures and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements, automating manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines and ensure data consistency (see the sketch after this list).
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance as well as the overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.
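
As an illustrative aside (not part of the posting), a minimal pytest-style sketch of a unit test for a data-pipeline transformation; the transform, columns, and data are hypothetical:

```python
import pandas as pd


def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the latest record per order_id (hypothetical transform)."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates("order_id", keep="last")
          .reset_index(drop=True)
    )


def test_deduplicate_orders_keeps_latest():
    df = pd.DataFrame({
        "order_id": [1, 1, 2],
        "updated_at": ["2024-01-01", "2024-01-02", "2024-01-01"],
        "amount": [10, 15, 20],
    })
    result = deduplicate_orders(df)
    # Two unique orders remain, and order 1 keeps its latest amount.
    assert len(result) == 2
    assert result.loc[result["order_id"] == 1, "amount"].item() == 15
```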

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 25 days ago