Data Engineer

Posted 2024-10-23

💎 Seniority level: Proven experience as a Data Engineer or in a similar role.

📍 Location: Philippines

🏢 Company: Sourcefit

⏳ Experience: Proven experience as a Data Engineer or in a similar role.

🪄 Skills: PostgreSQL, Python, SQL, Business Intelligence, ETL, Hadoop, Java, Kafka, Machine Learning, MySQL, Azure, Spark, Collaboration

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Data Engineer or in a similar role.
  • Strong knowledge of SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
  • Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Extensive experience with Azure cloud services, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
  • Familiarity with data warehousing solutions, particularly Microsoft Fabric.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Experience in data analytics and AI, including data visualization, statistical analysis, predictive modeling, and machine learning.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines and systems.
  • Assemble large, complex data sets that meet business requirements.
  • Identify, design, and implement internal process improvements.
  • Build infrastructure for optimal ETL of data from various sources using SQL and Azure technologies.
  • Develop and maintain data architecture for analytics, business intelligence, and AI.
  • Collaborate with data scientists and analysts to support data infrastructure needs.
  • Ensure data quality and integrity through cleaning, validation, and analysis.
  • Monitor and troubleshoot data pipeline performance and reliability.

Related Jobs

πŸ“ Philippines

🧭 Contract

πŸ’Έ 1690 - 1940 INR per hour

πŸ” B2B technology

  • 7+ years of experience with SQL, particularly in Redshift and BigQuery.
  • Experience with Python for data processing and automation.
  • Proficient with data visualization tools like Tableau or Looker.
  • Skilled in leveraging DBT for data transformations and source control.
  • Strong knowledge of data pipeline automation and ETL/ELT processes.
  • Experience in establishing database architecture and data governance.
  • Analytical skills to interpret data into actionable insights.
  • Understanding of statistical analysis, predictive modeling, and machine learning.

  • Develop, maintain, and optimize data pipelines and ETL/ELT processes using Redshift, BigQuery, and other data systems.
  • Design and manage data architectures for efficient cross-channel integration.
  • Utilize APIs and automation tools for data ingestion and transformation.
  • Build and maintain robust data models to improve reporting accuracy.
  • Collaborate with teams to align data structures with business needs.
  • Clean, preprocess, and transform raw data for consistency.
  • Monitor and troubleshoot data pipelines for high quality.
  • Conduct exploratory data analysis to uncover trends.
  • Support campaign optimization with predictive models and forecasting.
  • Create interactive dashboards and reports for real-time insights.

Python, SQL, Agile, Data Analysis, ETL, Machine Learning, Tableau, Data Engineering, Data Structures

Posted 2024-11-09

🔥 Data Engineer
Posted 2024-11-08

πŸ“ Philippines, India

πŸ” Employee wellness services

  • Expertise in data engineering.
  • Proficient in programming.
  • Willingness to work hands-on with data.

  • The Data Engineering team transforms raw data into meaningful information.
  • Responsible for managing data systems.
  • Collaborates with business analysts to provide usable data.
  • Engages in critical and collaborative debate to solve data-related problems.

Python, SQL, Data Analysis, Data Engineering, Communication Skills, Analytical Skills, Collaboration

πŸ“ Philippines

🏒 Company: Activate Talent

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Experience as a Data Engineer with a focus on Active Pooling and data pipeline development.
  • Strong proficiency in SQL and programming languages such as Python or Java.
  • Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and modern data stack tools.
  • Familiarity with ETL processes and data integration methodologies.
  • Analytical mindset with the ability to troubleshoot and resolve data issues effectively.
  • Excellent communication skills for collaboration with cross-functional teams.
  • Strong organizational skills and attention to detail.

  • Design, build, and maintain data pipelines that support Active Pooling initiatives and improve data accessibility.
  • Collaborate with stakeholders to identify data needs and deliver effective data solutions.
  • Utilize cloud services and technologies to architect scalable data architectures.
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Analyze and optimize ETL processes to improve data processing efficiency.
  • Stay current with industry trends and best practices in data engineering and cloud technologies.
  • Provide support for data-related issues and contribute to troubleshooting efforts.
  • Document data engineering processes and architectures appropriately.

AWS, Python, SQL, ETL, GCP, Java, Azure, Data Engineering, Communication Skills, Collaboration, Documentation

Posted 2024-10-23

πŸ“ APAC

πŸ” Cryptocurrency derivatives

🏒 Company: BitMEX

  • 4+ years of experience in data engineering, with demonstrated design and technical implementation of data warehouses.
  • Experience with OLAP databases and understanding of data structuring/modeling for trade-offs between storage/performance and usability.
  • Experience building, deploying, and troubleshooting reliable and consistent data pipelines.
  • Familiarity with AWS Redshift, Glue Data Catalog, S3, PostgreSQL, Parquet, Iceberg, Trino, and their management using Terraform & Kubernetes.

  • Design and maintain enhancements to our data warehouse, data lake, and data pipelines.
  • Increase reliability and consistency of data systems.
  • Improve the queryability of large historical datasets using industry-standard tools.

AWS, PostgreSQL, Kubernetes, Airflow, Data Engineering, Terraform

Posted 2024-10-22

🔥 Data Engineer
Posted 2024-08-26

πŸ“ Americas, EMEA, APAC

πŸ” Crypto and blockchain technology

  • 4+ years of work experience in relevant fields such as Data Engineer, DWH Engineer, or Software Engineer.
  • Experience with data warehouse technologies (e.g., Presto, Athena, Glue) and relevant data modeling best practices.
  • Experience building data pipelines/ETL and familiarity with design principles; knowledge of Apache Airflow is a plus.
  • Excellent SQL and data manipulation skills using frameworks like Spark/PySpark or similar.
  • Proficiency in a major programming language such as Scala, Python, or Golang.
  • Experience with business requirements gathering for data sourcing.

  • Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems.
  • Augment data platform with data pipelines from select external systems.
  • Ensure high data quality and auditability for the pipelines built.
  • Drive data systems to approach real-time processing.
  • Support the design and deployment of a distributed data store as the central source of truth.
  • Build data connections to internal IT systems.
  • Develop and customize self-service tools for data consumers.
  • Evaluate new technologies and create prototypes for continuous improvements in data engineering.

Python, SQL, ETL, Airflow, Data Engineering, Spark
