
Data Engineer

Posted 2 days ago


💎 Seniority level: Senior, 3+ years

📍 Location: Philippines

🔍 Industry: E-Commerce and Marketing

🏢 Company: Podean

🗣️ Languages: English

⏳ Experience: 3+ years

🪄 Skills: Python, SQL, DynamoDB, ETL

Requirements:
  • 3+ years of experience in a data engineering or similar role, focusing on API integration.
  • Proficiency in Python, Java, or another programming language suitable for API integration and data engineering.
  • Expertise in SQL and experience with data warehouses (e.g., Redshift, Snowflake, BigQuery).
  • Hands-on experience with workflow orchestration tools.
  • Proven track record of building scalable data pipelines and systems.
  • Strong problem-solving abilities and attention to detail.
  • Excellent communication skills and a collaborative mindset.
  • Ability to manage multiple projects in a fast-paced environment.
Responsibilities:
  • Develop and maintain integrations with marketplace APIs such as Amazon Selling Partner API.
  • Handle API authentication, rate limits, pagination, and error handling.
  • Design, build, and optimize ETL/ELT pipelines for ingesting and processing data from multiple marketplaces.
  • Automate data workflows to ensure reliable and timely updates.
  • Design and implement data models to support analytical and operational use cases.
  • Utilize data storage solutions such as AWS S3, Redshift, DynamoDB, or Google BigQuery.
  • Monitor and optimize API calls for efficient large-scale data operations.
  • Collaborate with data analysts and product teams to deliver actionable insights.
  • Communicate technical concepts to non-technical stakeholders.
  • Manage API keys, tokens, and access credentials securely.
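The pagination, rate-limit, and error-handling duties above follow a common pattern for marketplace APIs. The sketch below shows one way to structure it, assuming a hypothetical page shape (`{"items": [...], "next_token": ...}`) and a throttling exception convention; it is illustrative, not the actual Amazon Selling Partner API contract.

```python
import time
from typing import Callable, Iterator, Optional

def fetch_all_pages(
    fetch_page: Callable[[Optional[str]], dict],
    max_retries: int = 3,
    backoff_seconds: float = 1.0,
) -> Iterator[dict]:
    """Iterate over every item in a paginated API, retrying on throttling.

    `fetch_page(token)` is assumed to return a dict shaped like
    {"items": [...], "next_token": str | None} and to raise RuntimeError
    when the rate limit is hit -- both are illustrative conventions,
    not a real marketplace API contract.
    """
    token = None
    while True:
        for attempt in range(max_retries):
            try:
                page = fetch_page(token)
                break
            except RuntimeError:
                # Exponential backoff before retrying a throttled call.
                time.sleep(backoff_seconds * (2 ** attempt))
        else:
            raise RuntimeError("page fetch failed after retries")
        yield from page["items"]
        token = page.get("next_token")
        if token is None:
            return
```

In practice the real client would also distinguish retryable errors (throttling, timeouts) from permanent ones (bad credentials, invalid request), which should fail fast instead of retrying.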

Related Jobs

🔥 Senior Data Engineer
Posted 8 days ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years’ experience building and optimizing ‘big data’ pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.

  • Suggest efficiencies and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and keep data infrastructure systems maintained overall.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practice is implemented and maintained on databases.
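The unit-testing and data-consistency duties listed above usually amount to small, assertable checks run after each transform. A minimal sketch, assuming a toy deduplication step with illustrative field names (`order_id`, `updated_at`):

```python
def deduplicate_orders(rows):
    """Keep the most recent record per order_id -- a toy transform
    standing in for one step of an ETL pipeline (field names are
    illustrative, not a specific schema)."""
    latest = {}
    for row in rows:
        key = row["order_id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    # Stable ordering makes output easy to assert on in unit tests.
    return sorted(latest.values(), key=lambda r: r["order_id"])

def check_consistency(rows):
    """A minimal data-quality gate: unique keys, no missing timestamps."""
    ids = [r["order_id"] for r in rows]
    assert len(ids) == len(set(ids)), "duplicate order_id after transform"
    assert all(r.get("updated_at") is not None for r in rows), "missing timestamp"
```

Checks like `check_consistency` can run both in unit tests and as a post-load gate in the pipeline itself, so inconsistent data fails loudly rather than reaching reports.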

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

🔥 Data Engineer
Posted 3 months ago

📍 Philippines

🧭 Full-Time

🏢 Company: Sourcefit (👥 51-100, 💰 funding about 1 year ago; Staffing Agency, Consulting, Human Resources, Information Technology)

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Data Engineer or in a similar role.
  • Strong knowledge of SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
  • Experience with big data tools and frameworks (e.g., Hadoop, Spark, Kafka).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Extensive experience with Azure cloud services, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
  • Familiarity with data warehousing solutions, particularly Microsoft Fabric.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Experience in data analytics and AI, including data visualization, statistical analysis, predictive modeling, and machine learning.

  • Design, develop, and maintain scalable data pipelines and systems.
  • Assemble large, complex data sets that meet business requirements.
  • Identify, design, and implement internal process improvements.
  • Build infrastructure for optimal ETL of data from various sources using SQL and Azure technologies.
  • Develop and maintain data architecture for analytics, business intelligence, and AI.
  • Collaborate with data scientists and analysts to support data infrastructure needs.
  • Ensure data quality and integrity through cleaning, validation, and analysis.
  • Monitor and troubleshoot data pipeline performance and reliability.
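The monitoring-and-troubleshooting bullet above is often implemented as a lightweight hook around each pipeline step, logging duration and row counts so regressions show up quickly. A sketch under that assumption (the step names and list-of-rows interface are illustrative):

```python
import functools
import logging
import time

def monitored(step_name):
    """Wrap an ETL step to log its duration and input/output row counts.

    A lightweight monitoring hook of the kind the bullet above describes;
    the step name and list-in/list-out interface are illustrative.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(rows):
            start = time.perf_counter()
            out = fn(rows)
            elapsed = time.perf_counter() - start
            logging.info(
                "%s: %d -> %d rows in %.3fs",
                step_name, len(rows), len(out), elapsed,
            )
            return out
        return wrapper
    return decorator
```

A production version would typically emit these metrics to a monitoring backend rather than the log, but the shape (wrap each step, record counts and timing) is the same.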

PostgreSQL, Python, SQL, Business Intelligence, ETL, Hadoop, Java, Kafka, Machine Learning, MySQL, Azure, Spark, Collaboration, Scala


📍 Philippines

🏢 Company: Activate Talent

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Experience as a Data Engineer with a focus on Active Pooling and data pipeline development.
  • Strong proficiency in SQL and programming languages such as Python or Java.
  • Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and modern data stack tools.
  • Familiarity with ETL processes and data integration methodologies.
  • Analytical mindset with the ability to troubleshoot and resolve data issues effectively.
  • Excellent communication skills for collaboration with cross-functional teams.
  • Strong organizational skills and attention to detail.

  • Design, build, and maintain data pipelines that support Active Pooling initiatives and improve data accessibility.
  • Collaborate with stakeholders to identify data needs and deliver effective data solutions.
  • Utilize cloud services and technologies to architect scalable data architectures.
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Analyze and optimize ETL processes to improve data processing efficiency.
  • Stay current with industry trends and best practices in data engineering and cloud technologies.
  • Provide support for data-related issues and contribute to troubleshooting efforts.
  • Document data engineering processes and architectures appropriately.

AWS, Python, SQL, ETL, GCP, Java, Azure, Data engineering, Communication Skills, Collaboration, Documentation

Posted 3 months ago