
Senior Data Consultant

Posted 2 months ago

πŸ’Ž Seniority level: Senior, Minimum 5 years

πŸ“ Location: France

πŸ” Industry: Data

🏒 Company: FINN APP PTE LTD

πŸ—£οΈ Languages: English

⏳ Experience: Minimum 5 years

πŸͺ„ Skills: Python, SQL, ETL, GCP, Strategy, Data Structures, NoSQL

Requirements:
  • Minimum 5 years of experience in a Senior Data position.
  • Proficiency in data analysis tools such as SQL and Python.
  • Familiarity with Google Cloud Platform (GCP) and BigQuery for data warehousing.
  • Experience in setting up ELT pipelines from scratch.
  • Good written and spoken English skills.
  • Strong attention to detail.
  • Optional: Experience building ELT pipelines for NoSQL databases (e.g., Firestore).
Responsibilities:
  • Evaluate current data strategies to identify optimizations.
  • Recommend enhancements to improve data utilization and efficiency.
  • Assist in navigating data-related challenges.
  • Leverage data-driven insights to support business objectives.
  • Work on various projects related to ETL and data monitoring.

Related Jobs

πŸ”₯ Senior Data Consultant
Posted about 2 months ago

πŸ“ South Africa

🧭 Full-Time

πŸ” IT leadership, cyber security, cloud solutions, and business intelligence

🏒 Company: Cyberlogic (πŸ‘₯ 51-100; Cloud Data Services, Consulting, IT Management, Technical Support, Cloud Security, Information Technology, Network Security, Software)

  • 8-10 years of experience in data engineering or a related role with a strong background in building and optimizing data pipelines.
  • Expertise in PySpark for large-scale data processing and distributed computing.
  • Strong experience with Azure data technologies (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.).
  • Proficiency in SQL and experience working with relational and NoSQL databases.
  • Experience with cloud-based data platforms and services, especially in Azure.
  • Solid understanding of ETL processes and data modeling techniques.
  • Strong problem-solving and troubleshooting skills.
  • Experience working with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices.
  • Strong communication skills and the ability to work effectively within a collaborative, small-team environment.
  • A proactive, self-starter attitude with a passion for data engineering and technology.

  • Design, build, and maintain scalable data pipelines for the collection, transformation, and storage of data.
  • Work with large, complex datasets to ensure efficient processing and integration.
  • Develop data engineering solutions using Azure technologies and PySpark for distributed data processing.
  • Implement ETL processes and automate data workflows to support analytics, reporting, and business intelligence initiatives.
  • Collaborate with data scientists, business analysts, and stakeholders to understand data requirements and deliver actionable insights.
  • Ensure data quality, consistency, and reliability across all pipelines and datasets.
  • Optimize data models, storage, and processing workflows for performance and scalability.
  • Contribute to the architecture and design decisions for cloud-based data platforms.
  • Mentor junior team members and foster a collaborative and innovative team culture.
  • Troubleshoot and resolve issues related to data pipelines, ensuring high availability and performance.

πŸͺ„ Skills: Docker, SQL, Agile, Business Intelligence, ETL, Git, Hadoop, Kafka, Kubernetes, Snowflake, Azure, Data engineering, NoSQL, Spark, Communication Skills, CI/CD, Terraform
