
Data Engineer

Posted 3 months ago · Inactive


💎 Seniority level: Senior, 3+ years

📍 Location: Philippines

🔍 Industry: E-commerce marketing

🏢 Company: Podean

🗣️ Languages: English

⏳ Experience: 3+ years

🪄 Skills: Python, SQL, DynamoDB, ETL

Requirements:
  • 3+ years of experience in a data engineering or similar role, with a focus on API integration.
  • Proficiency in Python, Java, or another programming language suitable for API integration and data engineering.
  • Expertise in SQL and experience with data warehouses (e.g., Redshift, Snowflake, BigQuery).
  • Hands-on experience with workflow orchestration tools.
  • Proven track record of building scalable data pipelines and systems.
  • Strong problem-solving abilities and attention to detail.
  • Excellent communication skills and a collaborative mindset.
  • Ability to manage multiple projects in a fast-paced environment.
Responsibilities:
  • Develop and maintain integrations with marketplace APIs such as the Amazon Selling Partner API.
  • Handle API authentication, rate limits, pagination, and error handling (see the Python sketch after this list).
  • Design, build, and optimize ETL/ELT pipelines for ingesting and processing data from multiple marketplaces.
  • Automate data workflows to ensure reliable and timely updates.
  • Design and implement data models to support analytical and operational use cases.
  • Utilize data storage solutions such as AWS S3, Redshift, DynamoDB, or Google BigQuery.
  • Monitor and optimize API calls to handle large-scale data operations efficiently.
  • Work closely with data analysts and product teams to deliver actionable insights and solutions.
  • Manage API keys, tokens, and access credentials securely.
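
A rough sense of what the integration bullets above involve, as a minimal Python sketch: a paginated fetch with rate-limit backoff. The endpoint, the nextToken pagination field, and the token handling are hypothetical stand-ins, not the actual Amazon Selling Partner API contract.

```python
import time

import requests

BASE_URL = "https://api.example-marketplace.com/orders"  # hypothetical endpoint
ACCESS_TOKEN = "..."  # in practice, pulled from a secrets manager, never hard-coded

def fetch_all_orders():
    """Page through an orders endpoint, backing off on HTTP 429 (rate limit)."""
    params = {}
    while True:
        resp = requests.get(
            BASE_URL,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params=params,
            timeout=30,
        )
        if resp.status_code == 429:  # rate-limited: honor Retry-After, then retry
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()  # surface any other HTTP error
        payload = resp.json()
        yield from payload.get("orders", [])
        next_token = payload.get("nextToken")  # hypothetical pagination field
        if not next_token:
            break
        params = {"nextToken": next_token}
```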

Related Jobs


πŸ“ Worldwide

πŸ” Hospitality

🏒 Company: Lighthouse

  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up-to-date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in the code you submit and review
  • Ship large features independently, generate architecture recommendations and have the ability to implement them
  • Great communication: Regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing, or event-sourcing technologies like Apache Kafka (a Beam sketch follows this list).
  • Familiarity with monitoring tools like Grafana & Prometheus.
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to enable them to bring their research to production-grade data solutions, using technologies such as Airflow, dbt, or MLflow (but not limited to these).
  • As part of a platform team, you will communicate effectively with teams across the entire engineering organisation to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.
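
For the Apache Beam bullet above, a minimal pipeline built with the Beam Python SDK sketches the shape of the work. The file paths and the assumed "date" field are illustrative; on GCP the same pipeline would typically read gs:// URIs and run on the Dataflow runner.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

INPUT_PATH = "events.jsonl"   # illustrative; gs://bucket/events-*.jsonl on GCP
OUTPUT_PATH = "daily_counts"  # illustrative output prefix

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText(INPUT_PATH)
        | "Parse" >> beam.Map(json.loads)
        | "KeyByDay" >> beam.Map(lambda e: (e["date"], 1))  # assumes a 'date' field
        | "CountPerDay" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText(OUTPUT_PATH)
    )
```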

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 4 days ago

πŸ“ Worldwide

🧭 Full-Time

Requirements: not stated
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains, ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago
🔥 Data Engineer
Posted 7 days ago

📍 Worldwide

🧭 Full-Time

💸 145,000 - 160,000 USD per year

  • Proficiency in managing MongoDB databases, including performance tuning and maintenance.
  • Experience with cloud-based data warehousing, particularly using BigQuery.
  • Familiarity with DBT for data transformation and modeling.
  • Exposure to tools like Segment for data collection and integration.
  • Basic knowledge of integrating third-party data sources to build a comprehensive data ecosystem.
  • Overseeing our production MongoDB database to ensure optimal performance, reliability, and security.
  • Assisting in the management and optimization of data pipelines into BigQuery, ensuring data is organized and accessible for downstream users (see the sketch after this list).
  • Utilizing DBT to transform raw data into structured formats, making it useful for analysis and reporting.
  • Collaborating on the integration of data from Segment and various third-party sources to create a unified, clean data ecosystem.
  • Working closely with BI, Marketing, and Data Science teams to understand data requirements and ensure our infrastructure meets their needs.
  • Participating in code reviews, learning new tools, and contributing to the refinement of data processes and best practices.
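
The MongoDB-to-BigQuery flow described above can start as small as the sketch below, using pymongo and google-cloud-bigquery. The connection string and dataset/table names are placeholders, and a production pipeline would extract incrementally and declare an explicit schema rather than rely on autodetect.

```python
from google.cloud import bigquery
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")  # placeholder connection string
bq = bigquery.Client()

def sync_orders():
    """Pull a batch of documents from MongoDB and append them to BigQuery."""
    docs = []
    for doc in mongo["shop"]["orders"].find().limit(1000):  # placeholder db/collection
        doc["_id"] = str(doc["_id"])  # ObjectId is not JSON-serializable
        docs.append(doc)
    job = bq.load_table_from_json(
        docs,
        "my_project.raw.orders",  # placeholder dataset.table
        job_config=bigquery.LoadJobConfig(
            write_disposition="WRITE_APPEND",
            autodetect=True,
        ),
    )
    job.result()  # block until the load job finishes
```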

SQL, ETL, MongoDB, Data engineering, Data modeling


πŸ“ Thailand, Philippines

πŸ” Financial Technology

🏒 Company: EnvissoπŸ‘₯ 11-50CreditComplianceTransaction ProcessingFinancial Services

  • 5+ years of work experience in data engineering.
  • Strong skills in SQL and Python.
  • Experience designing, building and maintaining data models and data pipelines.
  • Experience working with cloud based architecture.
  • Great communication skills with a diverse team of varying technical ability.
  • Create and maintain scalable data pipelines to ingest, transform and serve global payments and risk data.
  • Manage and maintain the data platform, including data pipelines and environments.
  • Collaborate with cross-functional teams of data scientists, software engineers, product managers and business leads, to understand requirements and deliver appropriate solutions.
  • Take ownership of a data area, building subject matter expertise and cultivating trust with stakeholders.
  • Mentor junior members, and grow a strong data culture across the team and organisation.

Python, SQL, Cloud Computing, ETL, Data engineering, Communication Skills, Data modeling

Posted 22 days ago
🔥 Sr. Data Engineer
Posted about 1 month ago

📍 Philippines

🧭 Full-Time

💸 1,211,200 - 2,266,220 PHP per year

🔍 Retail

🏢 Company: BARK

  • Solid background and a minimum of 5 years' experience with SQL, Python, and Ruby on Rails, with the ability to troubleshoot existing code and script new features
  • Extensive experience working with enterprise data warehouses; experience with Redshift and BigQuery is strongly preferred.
  • Experience working with Business Intelligence platforms, preferably building KPI dashboards; Tableau and/or Looker preferred
  • Experience in retail and/or consumer packaged goods companies strongly preferred
  • Support the foundation of the existing data platforms, which include but are not limited to Periscope, Redshift, Tableau, and ad-hoc reporting as needed.
  • Analyze key SQL queries that support revenue recognition, and assist the finance teams with their closing processes
  • Collaborate with cross-functional teams (Finance, Accounting, Planning) to ensure data is accurate, standardized/normalized, and accessible
  • Serve as a resident expert of data integrations, reviewing data as it travels through the BARK tech stack, and review any and all ETL processes in place
  • Build new reporting to support revenue, gross margin and planning reporting, particularly during period closing with strict deadlines
  • Support an existing Ruby on Rails platform, as well as integrations with Shopify to the data warehouse, for purposes of revenue recognition

Python, SQL, Business Intelligence, ETL, Ruby on Rails, Tableau, Data engineering, RESTful APIs, Accounting, Data visualization, Data modeling, Finance

🔥 Junior Data Engineer
Posted about 1 month ago

📍 Worldwide

🧭 Full-Time

💸 90,000 - 105,000 CAD per year

🔍 Blockchain

🏢 Company: Figment · 👥 11-50 · Hospitality, Travel Accommodations, Art

  • At least 1 year of IT experience (including co-op and internship experience)
  • Proficiency in SQL and at least one mature programming language (ideally Python)
  • Strong communication skills
  • Strong ability to write clear, concise and accurate documentation.
  • Strong experience with investigating and resolving data quality issues.
  • Strong skills in data analysis and visualization.
  • Ensure accuracy and reliability in data reporting and analysis.
  • Thrive in a fast-paced environment, adapting to new challenges as they arise.
  • Must have a passion for and desire to work in and learn this space, because the role involves working with blockchain data and block explorers daily.
  • Develop and maintain dashboards and reports.
  • Query databases, review and process data to support data-driven decision-making.
  • Investigate new chains and their corresponding block explorers to determine how to collect data from them (see the JSON-RPC sketch after this list).
  • Write instructions for our Data Entry Team and assist Engineering Manager with coordination of Data Entry Team.
  • Review ingested data to flag issues and drive the investigation of any data quality issues.
  • When not working on the manual and hybrid processes required for the above, automate them to free up growing capacity for other opportunities.
  • Work with internal teams like Engineering, Product, Finance, and Customer Success to deliver tailored data solutions.
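
As a taste of the block-explorer investigation mentioned above: many chains expose a JSON-RPC interface, so data collection often starts with calls like the one below. A minimal sketch against an Ethereum-style node; the RPC URL is a placeholder.

```python
import requests

RPC_URL = "https://rpc.example.org"  # placeholder node endpoint

def latest_block_number() -> int:
    """Fetch the current block height via the standard Ethereum JSON-RPC method."""
    resp = requests.post(
        RPC_URL,
        json={"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["result"], 16)  # result is a hex string like '0x10d4f'
```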

Python, SQL, Cloud Computing, Data Analysis, Git, Snowflake, Data engineering, Troubleshooting, Data visualization

🔥 Senior Data Engineer
Posted about 1 month ago

📍 Worldwide

🧭 Full-Time

💸 167,471 USD per year

🔍 Software Development

🏢 Company: Float.com

  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and JavaScript/TypeScript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink).
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
  • Lead technical viability discussions.
  • Develop and test proof-of-concepts for this project.
  • Analyse existing data.
  • Evaluate our data streaming pipeline.
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability (see the consumer sketch after this list).
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.
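
For the streaming-architecture responsibilities above, a minimal consumer sketch using the kafka-python client. The topic, brokers, and group id are placeholders; a production consumer would add schema management, batching, and error recovery.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "schedule-events",                     # placeholder topic
    bootstrap_servers=["localhost:9092"],  # placeholder brokers
    group_id="recommendation-engine",      # placeholder consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Real processing would update recommendation state here.
    print(f"partition={message.partition} offset={message.offset} event={message.value}")
```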

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

🔥 Data Engineer
Posted about 2 months ago

📍 Worldwide

🧭 Full-Time

🔍 Decentralized Computing

🏢 Company: io.net · 👥 11-50 · 💰 $30,000,000 Series A about 1 year ago · Cloud Computing, Information Technology, Cloud Infrastructure, GPU

  • Strong programming skills in Python or Java.
  • Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
  • Knowledge of data pipeline tools like Apache Airflow, Spark, or similar.
  • Familiarity with cloud-based data warehouses (e.g., Redshift, Snowflake).
  • Design and build scalable ETL pipelines to handle large volumes of data (an Airflow sketch follows this list).
  • Develop and maintain data models and optimize database schemas.
  • Work with real-time data processing frameworks like Kafka.
  • Ensure data quality, consistency, and reliability across systems.
  • Collaborate with backend engineers and data scientists to deliver insights.
  • Monitor and troubleshoot data workflows to ensure high availability.
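
A minimal Airflow DAG suggests how the pipeline bullets above are typically wired together. The DAG id, schedule, and task bodies are assumptions; real tasks would hold the extraction, transformation, and load logic.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Stub: pull raw data from source systems."""

def transform():
    """Stub: clean and model the extracted data."""

def load():
    """Stub: write results to the warehouse."""

with DAG(
    dag_id="etl_daily",               # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # assumed cadence; 'schedule_interval' on older Airflow
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```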

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Data engineering, Data modeling


πŸ“ Philippines

🧭 Full-Time

πŸ’Έ 90000.0 - 115000.0 PHP per month

πŸ” IGaming

🏒 Company: ConnectOSπŸ‘₯ 251-500ComplianceConsultingHuman ResourcesBusiness DevelopmentSecurityLegal

  • 3+ years of experience in data engineering
  • Strong proficiency in SQL and Python
  • Hands-on experience with GCP, Azure, or AWS
  • Experience implementing data quality checks and creating basic dashboards
  • Basic knowledge of CI/CD
  • Design, develop, and maintain robust data pipelines to process and analyze web traffic and other business-critical data.
  • Implement systems to monitor data quality, including building mechanisms for identifying discrepancies and inconsistencies.
  • Develop and configure alerting systems to promptly notify teams of data issues or pipeline failures (see the alerting sketch after this list).
  • Optimize data pipelines and workflows for performance, scalability, and efficiency.
  • Collaborate with other internal stakeholders to understand data requirements and deliver actionable insights.
  • Manage and maintain data storage solutions, including databases and data lakes.
  • Implement data governance best practices to ensure data security, compliance, and integrity.
  • Research and integrate new tools and technologies to enhance data processing and analysis capabilities.
  • Write and maintain technical documentation for data pipelines, processes, and tools.
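
The data-quality and alerting bullets above can start as simply as the sketch below: run a check and notify a webhook when it fails. The threshold and webhook URL are placeholders, and in practice the row counts would come from the warehouse.

```python
import requests

WEBHOOK_URL = "https://hooks.example.com/data-alerts"  # placeholder alert channel

def check_row_count(rows_today: int, rows_yesterday: int, tolerance: float = 0.5) -> None:
    """Alert if today's row count drops sharply versus yesterday's."""
    if rows_yesterday and rows_today < rows_yesterday * tolerance:
        requests.post(
            WEBHOOK_URL,
            json={
                "text": f"Data quality alert: row count fell from {rows_yesterday} to {rows_today}"
            },
            timeout=10,
        )
```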

AWS, Python, SQL, ETL, GCP, Azure, Data engineering, CI/CD

Posted about 2 months ago

πŸ“ Philippines

🧭 Full-Time

πŸ’Έ 100000.0 - 180000.0 PHP per month

πŸ” Airline Technology

🏒 Company: ConnectOSπŸ‘₯ 251-500ComplianceConsultingHuman ResourcesBusiness DevelopmentSecurityLegal

  • 2+ years of experience working with airline technology, specifically loyalty CRM, ancillary management, and PSS components.
  • In-depth understanding of airline data management and processing, particularly for loyalty programs and ancillary services.
  • Strong foundation in data engineering principles, including data transformation (cleansing, enrichment, aggregation), integration (scheduling, orchestration), and data modeling (schema design, warehousing).
  • Proficient in Python, Node.js, and SQL, and experienced with Apache Airflow, Apache Spark, Amazon Redshift, and Docker for data processing and workflow automation.
  • Expertise in building and maintaining ETL pipelines and managing NoSQL databases such as MongoDB and DynamoDB.
  • Be instrumental in the development of the client's post-flight platform, particularly data extraction, transformation, and understanding parts of the product.
  • Build ETL pipelines to manage loyalty and ancillary transactions (e.g., points accrual, redemptions, tier upgrades); a pandas sketch follows this list.
  • Manage and optimize the client's AI data models by monitoring customer usage.
  • Enable data to be used in downstream use cases like customer segmentation, personalization, and targeted marketing.
  • Work with ML engineers to optimize and re-train models based on insights, usage and business aims.
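
A toy pandas transformation suggests the shape of the loyalty ETL described above; all column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical raw loyalty transactions.
raw = pd.DataFrame(
    {
        "member_id": ["A1", "A1", "B2"],
        "event": ["accrual", "redemption", "accrual"],
        "points": [500, -200, 1200],
    }
)

# Aggregate to a per-member balance: the kind of model that downstream
# segmentation and personalization use cases would consume.
balances = (
    raw.groupby("member_id", as_index=False)["points"]
    .sum()
    .rename(columns={"points": "points_balance"})
)
print(balances)
```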

Docker, Node.js, Python, SQL, Apache Airflow, DynamoDB, ETL, MongoDB, Data engineering, Data modeling

Posted 2 months ago