
Senior Data Engineer

Posted 2024-09-12


πŸ’Ž Seniority level: Senior, 5+ years

πŸ“ Location: India, PST, IST

πŸ” Industry: Data Engineering

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years

πŸͺ„ Skills: AWS, Project Management, Python, SQL, Agile, GCP, Snowflake, Azure, Data engineering

Requirements:
  • 5+ years of data engineering experience is a must.
  • 2+ years implementing and managing data engineering solutions using cloud platforms (GCP/AWS/Azure) or on-premise distributed servers.
  • 2+ years' experience in Python.
  • Strong command of SQL and its core concepts.
  • Experience with BigQuery, Snowflake, Redshift, and DBT.
  • Strong understanding of data warehousing, data lake, and cloud concepts.
  • Excellent communication and presentation skills.
  • Excellent problem-solving skills, highly proactive and self-driven.
  • Consulting background is a big plus.
  • Must have a B.S. in computer science, software engineering, computer engineering, electrical engineering, or a related area of study.
Responsibilities:
  • Implement asynchronous data ingestion, high-volume stream data processing, and real-time data analytics using various data engineering techniques (a minimal sketch follows this list).
  • Implement application components using cloud technologies and infrastructure.
  • Assist in defining data pipelines and identify bottlenecks to enable the adoption of data management methodologies.
  • Implement cutting-edge cloud platform solutions using the latest tools and platforms offered by GCP, AWS, and Azure.
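
The "asynchronous data ingestion" duty is the most code-shaped item in this list, so here is a minimal Python sketch of the pattern, assuming asyncio with the aiohttp client; the feed URLs are hypothetical placeholders, not anything named in the posting.

    import asyncio

    import aiohttp

    # Hypothetical feed endpoints; any list of HTTP-accessible sources fits here.
    FEED_URLS = [
        "https://example.com/feeds/orders",
        "https://example.com/feeds/events",
    ]

    async def fetch_feed(session: aiohttp.ClientSession, url: str) -> dict:
        # Await the response instead of blocking, so other fetches can proceed.
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
            resp.raise_for_status()
            return await resp.json()

    async def ingest_all(urls: list[str]) -> list[dict]:
        # One shared session; gather() runs every fetch concurrently.
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch_feed(session, u) for u in urls))

    if __name__ == "__main__":
        payloads = asyncio.run(ingest_all(FEED_URLS))
        print(f"ingested {len(payloads)} feeds")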

Related Jobs


πŸ“ India

🧭 Full-Time

πŸ” Data Engineering

🏒 Company: Aryng

  • 8+ years of data engineering experience.
  • 4+ years of experience with cloud solutions: GCP, AWS, Azure, or on-premise distributed servers.
  • Proficiency in Python (4+ years) and strong SQL skills.
  • Experience with Big Query, Snowflake, Redshift, DBT.
  • Understanding of data warehousing, data lakes, and cloud concepts.
  • Excellent communication and problem-solving skills.
  • Bachelor's degree in relevant fields.

  • Implement asynchronous data ingestion and high-volume stream data processing (see the Kafka consumer sketch after this list).
  • Develop real-time data analytics using various Data Engineering techniques.
  • Define and optimize data pipelines, identifying bottlenecks.
  • Utilize GCP, AWS, and Azure cloud technologies for cutting-edge solutions.
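
The skill tags for this listing include Kafka, so a hedged sketch of the high-volume stream consumption it describes might look like the following, using the confluent-kafka client; the broker address, topic, and group id are assumptions for illustration.

    import json

    from confluent_kafka import Consumer

    # Hypothetical cluster details; substitute your own brokers and topic.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "stream-analytics",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])

    try:
        while True:
            msg = consumer.poll(1.0)  # wait up to 1s for the next record
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())
            # A real pipeline would transform and land the record here.
            print(event.get("type"))
    finally:
        consumer.close()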

AWS, Project Management, Python, SQL, Agile, GCP, Hadoop, Kafka, Snowflake, Airflow, Azure, Data engineering, Spark, Problem Solving

Posted 2024-10-25

πŸ“ India

πŸ” Data Engineering

🏒 Company: Aryng

  • 8+ years of data engineering experience.
  • 4+ years using cloud solutions such as GCP, AWS, or Azure.
  • 4+ years' experience in Python.
  • Strong knowledge of SQL and data concepts.
  • Experience with BigQuery, Snowflake, Redshift, and DBT.
  • Understanding of data warehousing, data lakes, and cloud concepts.
  • Excellent communication and presentation skills.
  • Strong problem-solving skills with a proactive approach.
  • A B.S. in computer science, software engineering, computer engineering, electrical engineering, or a related field is required.

  • Implement asynchronous data ingestion and high-volume stream data processing.
  • Perform real-time data analytics using various Data Engineering techniques.
  • Implement application components using Cloud technologies and infrastructure.
  • Assist in defining data pipelines and identify bottlenecks for data management.
  • Apply cutting-edge cloud platform solutions using GCP, AWS, and Azure (a BigQuery load sketch follows this list).
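
Given the BigQuery requirement above, one routine shape of this work is batch-loading files from object storage into a warehouse table. A minimal sketch with the google-cloud-bigquery client follows; the project, dataset, and bucket names are placeholders.

    from google.cloud import bigquery

    # Placeholder identifiers; replace with a real project, dataset, and bucket.
    TABLE_ID = "my-project.analytics.orders"
    SOURCE_URI = "gs://my-bucket/orders/*.parquet"

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # Start a load job from Cloud Storage and block until it completes.
    load_job = client.load_table_from_uri(SOURCE_URI, TABLE_ID, job_config=job_config)
    load_job.result()

    print(f"{client.get_table(TABLE_ID).num_rows} rows now in {TABLE_ID}")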

AWS, Project Management, Python, SQL, Agile, GCP, Snowflake, Azure, Data engineering

Posted 2024-10-25

πŸ“ India

🧭 Full-Time

πŸ” Data engineering

🏒 Company: Aryng

  • 8+ years of data engineering experience.
  • 4+ years managing data engineering solutions using Cloud (GCP/AWS/Azure) or on-premise.
  • 4+ years' experience in Python.
  • Strong in SQL and its concepts.
  • Experience with BigQuery, Snowflake, Redshift, DBT.
  • Understanding of data warehousing, data lake, and cloud concepts.
  • Excellent communication and presentation skills.
  • Excellent problem-solving skills.
  • B.S. in computer science or related field.

  • Implement asynchronous data ingestion, high-volume stream data processing, and real-time data analytics using various data engineering techniques.
  • Assist in defining data pipelines and identify bottlenecks to enable effective data management (see the stage-timing sketch after this list).
  • Implement application components using cloud technologies and infrastructure.
  • Implement cutting-edge cloud platform solutions using tools and platforms offered by GCP, AWS, and Azure.
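
"Identify bottlenecks" is the vaguest duty here; a library-free first step is simply timing each pipeline stage. The sketch below is a generic illustration, with stand-in stages rather than anything from the posting.

    import time
    from contextlib import contextmanager

    timings: dict[str, float] = {}

    @contextmanager
    def stage(name: str):
        # Record wall-clock duration for one named pipeline stage.
        start = time.perf_counter()
        try:
            yield
        finally:
            timings[name] = time.perf_counter() - start

    # Stand-in stages; real ones would extract, transform, and load data.
    with stage("extract"):
        rows = list(range(1_000_000))
    with stage("transform"):
        rows = [r * 2 for r in rows]
    with stage("load"):
        total = sum(rows)

    # The slowest stage is the first bottleneck candidate to investigate.
    for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
        print(f"{name:10s} {seconds:8.3f}s")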

AWS, Project Management, Python, SQL, Agile, GCP, Snowflake, Azure, Data engineering

Posted 2024-10-25

πŸ“ APAC

πŸ” Cryptocurrency derivatives

🏒 Company: BitMEX

  • 4+ years' experience in the data engineering field, with demonstrated design and technical implementation of data warehouses.
  • Experience with OLAP databases and understanding of data structuring/modeling for trade-offs between storage/performance and usability.
  • Experience building, deploying, and troubleshooting reliable and consistent data pipelines.
  • Familiarity with AWS Redshift, Glue Data Catalog, S3, PostgreSQL, Parquet, Iceberg, Trino, and their management using Terraform & Kubernetes.

  • Design and maintain enhancements to our data warehouse, data lake, and data pipelines.
  • Increase reliability and consistency of data systems.
  • Improve queryability of large historical datasets using industry-standard tools (see the Trino sketch after this list).
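
Since this listing names Trino and Iceberg, "improving queryability of large historical datasets" plausibly means partition-pruned SQL over the lake. A sketch with the trino Python client follows; the host, catalog, schema, and table are assumptions, not BitMEX's actual deployment.

    import trino

    # Hypothetical cluster coordinates; adjust for the actual deployment.
    conn = trino.dbapi.connect(
        host="trino.example.internal",
        port=8080,
        user="analytics",
        catalog="iceberg",
        schema="markets",
    )

    cur = conn.cursor()
    # Filtering on a partition column keeps scans of deep history cheap.
    cur.execute("""
        SELECT symbol, count(*) AS trades, sum(size) AS volume
        FROM executions
        WHERE trade_date = DATE '2024-10-01'
        GROUP BY symbol
        ORDER BY volume DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)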

AWS, PostgreSQL, Kubernetes, Airflow, Data engineering, Terraform

Posted 2024-10-22

πŸ“ India

πŸ” Data and cloud engineering services

🏒 Company: Enable Data Incorporated

  • Bachelor's or Master's degree in computer science, engineering, or a related field.
  • 10+ years of experience as a Data Engineer, Software Engineer, or similar role, with a focus on building cloud-based data solutions.
  • Strong experience with cloud platforms such as Azure or AWS.
  • Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
  • Experience in designing and implementing data processing pipelines using Spark and Databricks.
  • Strong knowledge of SQL and experience with relational and NoSQL databases.
  • Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
  • Good understanding of data modeling and schema design principles.
  • Experience with data governance and compliance frameworks.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and collaboration skills to work effectively in a cross-functional team.
  • Relevant certifications in cloud platforms, Spark, or Databricks are a plus.

  • Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
  • Gather and analyze data requirements from business stakeholders and identify opportunities for data-driven insights.
  • Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks (a minimal PySpark sketch follows this list).
  • Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
  • Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
  • Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
  • Provide technical guidance and expertise to junior data engineers and developers.
  • Stay up-to-date with emerging trends and technologies in cloud computing, big data, and data engineering.
  • Contribute to the continuous improvement of data engineering processes, tools, and best practices.
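
A minimal PySpark sketch of the pipeline work this role describes, under assumed input and output paths; on Databricks the SparkSession is typically provided by the runtime rather than built explicitly.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-daily").getOrCreate()

    # Placeholder paths; on Databricks these would usually be cloud-storage URIs.
    raw = spark.read.json("/data/raw/orders/")

    daily = (
        raw
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date", "region")
        .agg(
            F.count("*").alias("orders"),
            F.sum("amount").alias("revenue"),
        )
    )

    # Partitioning the output by date keeps downstream reads selective.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "/data/marts/orders_daily/"
    )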

AWS, SQL, Apache Airflow, Cloud Computing, ETL, Azure, Data engineering, NoSQL, Spark, Collaboration, Compliance

Posted 2024-10-01

πŸ“ United States, India, United Kingdom

🧭 Full-Time

πŸ’Έ 150,000 - 180,000 USD per year

πŸ” B2B technology

  • Four-year degree in Computer Science or related field, or equivalent experience.
  • Experience designing frameworks and writing efficient data pipelines, including batch jobs and real-time streams.
  • Understanding of data strategies, data analysis, and data model design.
  • Experience with the Spark Ecosystem (YARN, Executors, Livy, etc.).
  • Experience in large scale data streaming, particularly Kafka or similar technologies.
  • Experience with data orchestration frameworks, particularly Airflow or similar.
  • Experience with columnar data stores, particularly Parquet and ClickHouse.
  • Strong SDLC principles (CI/CD, Unit Testing, git, etc.).
  • General understanding of AWS EMR, EC2, S3.

  • Help build the next generation unified data platform.
  • Solve complex data warehousing problems.
  • Ensure quality, discoverability, and accessibility of data.
  • Build batch and streaming data pipelines for ingestion, normalization, and analysis (see the Airflow DAG sketch after this list).
  • Develop standard design and access patterns.
  • Lead the unification of data from multiple products.
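
Because Airflow appears in this listing's requirements and tags, a hedged sketch of one batch pipeline as an Airflow 2.x DAG follows; the dag id, schedule, and task bodies are placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw events from the source system")

    def transform():
        print("normalize events into the warehouse schema")

    def load():
        print("write normalized rows to the warehouse")

    with DAG(
        dag_id="events_batch",  # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Linear dependency chain: extract, then transform, then load.
        t_extract >> t_transform >> t_load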

Git, Airflow, ClickHouse, Spark

Posted 2024-07-11