Senior Data Engineer

Posted 11 days ago

💎 Seniority level: Senior, 5+ years

📍 Location: Singapore

🔍 Industry: Tech-enabled services

🏢 Company: Sleek · 👥 251-500 · 💰 $5,000,000 Debt Financing (4 months ago) · Accounting · Service Industry · Legal · Professional Services

⏳ Experience: 5+ years

🪄 Skills: AWS, Node.js, PostgreSQL, Python, ETL, GCP, Git, Hadoop, MongoDB, MySQL, Snowflake, Cassandra, Data engineering, Spark, CI/CD, Data modeling

Requirements:
  • 5+ years in data engineering, software engineering, or a related field.
  • Proficiency in working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra); a minimal sketch follows this list.
  • Familiarity with big data frameworks like Hadoop, Hive, Spark, BigQuery, etc.
  • Strong expertise in programming languages such as Python, Node.js, etc.
  • Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
  • Expertise in modern data warehouses such as BigQuery, Snowflake, Redshift, etc.
  • Expertise in version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent problem-solving abilities, attention to detail, and strong communication skills.
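
For the database requirement above, here is a minimal sketch of working with a relational store and a NoSQL store from Python. It assumes psycopg2 and pymongo are installed; the connection details, table, and collection names are hypothetical placeholders, not Sleek's systems.

    # Minimal sketch: read recently updated rows from PostgreSQL and
    # upsert them into MongoDB. All connection details are hypothetical.
    import psycopg2
    from pymongo import MongoClient

    def sync_customers():
        # Relational source: plain SQL over psycopg2.
        pg = psycopg2.connect(host="localhost", dbname="appdb", user="etl", password="secret")
        with pg, pg.cursor() as cur:
            cur.execute(
                "SELECT id, name, email FROM customers "
                "WHERE updated_at > now() - interval '1 day'"
            )
            rows = cur.fetchall()

        # NoSQL target: one document per customer, keyed by id.
        coll = MongoClient("mongodb://localhost:27017")["analytics"]["customers"]
        for cid, name, email in rows:
            coll.update_one({"_id": cid}, {"$set": {"name": name, "email": email}}, upsert=True)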
Responsibilities:
  • Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data (see the sketch after this list).
  • Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
  • Architect, deploy, and maintain cloud-based data platforms (e.g., AWS, GCP).
  • Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions.
  • Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
  • Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
  • Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering practices.
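
As a rough illustration of the pipeline and data-quality responsibilities above, here is a minimal extract-validate-load sketch in Python. It assumes psycopg2, two hypothetical Postgres DSNs standing in for a source system and a warehouse, and a unique constraint on fact_orders.order_id; it sketches the pattern only, not Sleek's actual pipeline.

    # Minimal ETL sketch with a validation gate between extract and load.
    import psycopg2

    SOURCE_DSN = "dbname=appdb user=etl"         # hypothetical source
    WAREHOUSE_DSN = "dbname=warehouse user=etl"  # hypothetical target

    def extract(conn):
        with conn.cursor() as cur:
            cur.execute("SELECT order_id, amount_cents, currency FROM orders")
            return cur.fetchall()

    def validate(rows):
        # Data-quality gate: fail the whole batch on any malformed row
        # instead of loading partial or inconsistent data downstream.
        bad = [r for r in rows if r[0] is None or r[1] < 0 or r[2] not in ("SGD", "USD")]
        if bad:
            raise ValueError(f"{len(bad)} rows failed validation; aborting load")
        return rows

    def load(conn, rows):
        with conn, conn.cursor() as cur:  # one transaction per batch
            cur.executemany(
                "INSERT INTO fact_orders (order_id, amount_cents, currency) "
                "VALUES (%s, %s, %s) ON CONFLICT (order_id) DO NOTHING",
                rows,
            )

    if __name__ == "__main__":
        src = psycopg2.connect(SOURCE_DSN)
        wh = psycopg2.connect(WAREHOUSE_DSN)
        load(wh, validate(extract(src)))

The same shape ports to a managed warehouse by swapping the load step for the BigQuery or Snowflake client; a monitoring framework would hook into the ValueError path.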

Related Jobs

🔥 Senior Data Engineer
Posted 25 days ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years' experience building and optimizing ‘big data’ pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding of or experience with AWS Glue and PySpark highly desirable (see the PySpark sketch after this list).
  • Experience managing the data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems, with support from the Senior Data Engineer.
  • Test the CI/CD process to ensure optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build and maintain highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines and ensure data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and perform overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.
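
Since this posting calls out PySpark and unit-tested pipelines, here is a minimal sketch of a small transformation with a test, runnable locally after installing pyspark and pytest. The dedupe_events function and all table and column names are hypothetical, not this employer's code.

    # Minimal PySpark sketch: a deduplicating transform plus a unit test.
    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    def dedupe_events(df: DataFrame) -> DataFrame:
        # Keep one row per event_id and normalize the country code.
        return (
            df.dropDuplicates(["event_id"])
              .withColumn("country", F.upper(F.col("country")))
        )

    def test_dedupe_events():
        spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
        raw = spark.createDataFrame([(1, "za"), (1, "za"), (2, "ke")], ["event_id", "country"])
        out = dedupe_events(raw).orderBy("event_id").collect()
        assert [(r.event_id, r.country) for r in out] == [(1, "ZA"), (2, "KE")]
        spark.stop()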

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD
