
Data Engineer

Posted 3 months ago


πŸ“ Location: Kenya, EMEA, GMT, GMT+3

πŸ” Industry: Emergency response technology

🏒 Company: Flare πŸ‘₯ 101-250 πŸ’° $15,516,604 Series C over 4 years ago (Employment, Human Resources, Financial Services, SaaS, Employee Benefits, Information Technology, FinTech, Software)

πŸ—£οΈ Languages: English

πŸͺ„ Skills: AWS, Node.js, Python, SQL, Data Analysis, Django, Flask, JavaScript, Machine Learning, PyTorch, Snowflake, Data engineering, TensorFlow, RESTful APIs, Data visualization

Requirements:
  • Proficient in SQL, AWS Athena, Quicksight, and other data visualization/analytics platforms.
  • Experience with modern data engines and tools (e.g., Snowflake, Redshift, BigQuery, or similar).
  • Strong programming skills in Python, JavaScript, R, and other languages.
  • Experience with backend technologies such as Node.js, Flask, or Django.
  • Ability to create and manage RESTful APIs for data systems.
  • Proven ability to create visually compelling and insightful dashboards and UIs.
  • Strong sense of design and user experience in data presentation.
  • Solid understanding of machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
  • Experience deploying and managing AI models in production environments.
  • Strong analytical and critical-thinking skills to solve complex data challenges.
  • Collaborative mindset and excellent communication and leadership abilities.
  • Strong problem-solving skills.
  • Ability to work in an agile development environment.
Responsibilities:
  • Design, build, and optimize data pipelines and data models for robust, scalable analytics.
  • Work with large datasets to extract meaningful insights and support data-driven decision-making.
  • Develop, deploy, and manage advanced and secure data systems using SQL, various AWS data pipelines & platforms, and other tools.
  • Integrate AI/ML capabilities into data pipelines and analytics platforms.
  • Prototype and implement machine learning models that solve real-world business problems.
  • Build and maintain data dashboards, reports, and visualizations using AWS Quicksight or similar tools.
  • Design and develop user-friendly, interactive UIs for presenting complex datasets and models.
  • Collaborate with stakeholders to ensure dashboards and tools meet business needs.
  • Design and implement secure end-to-end solutions that integrate backend data systems with front-end applications or data displays.
  • Develop and deploy interfaces for advanced data visualization and interaction.
  • Work closely with cross-functional teams to align on project goals and deliverables.
  • Solve complex technical challenges related to data accuracy, integration, performance, and scalability.
  • Advocate for best practices in data governance, security, and architecture.

Related Jobs


πŸ“ Germany, Italy, Netherlands, Portugal, Romania, Spain, UK

🧭 Full-Time

πŸ” Wellness

Requirements:
  • You have a proven track record of designing and building robust, scalable, and maintainable data models and corresponding pipelines from business requirements.
  • You are skilled at engaging with engineering and product teams to elicit requirements.
  • You are comfortable with big data concepts, ensuring data is efficiently ingested, processed, and made available for data scientists, business analysts, and product teams.
  • You are experienced in maintaining data consistency across the entire data ecosystem.
  • You have experience maintaining and debugging high-criticality data pipelines in production environments, ensuring reliability and performance.
Responsibilities:
  • Develop and maintain efficient and scalable data models and structures to support analytical workloads.
  • Design, develop, and maintain data pipelines that transform and process large volumes of data while embedding business context and semantics.
  • Implement automated data quality checks to ensure the consistency, accuracy, and reliability of data.
  • Ensure correct adoption and usage of Wellhub’s data by data practitioners across the company.
  • Live the mission: inspire and empower others by genuinely caring for your own wellbeing and that of your colleagues. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.

SQL, Apache Airflow, Kubernetes, Apache Kafka, Data engineering, Spark, Data modeling

Posted 2 days ago

πŸ“ Portugal

🧭 Full-Time

🏒 Company: Wellhub

Requirements:
  • Proven track record of designing and building robust, scalable, and maintainable data models and corresponding pipelines from business requirements.
  • Skilled at engaging with engineering and product teams to elicit requirements.
  • Comfortable with big data concepts, ensuring data is efficiently ingested, processed, and made available for data scientists, business analysts, and product teams.
  • Experienced in maintaining data consistency across the entire data ecosystem.
  • Experience maintaining and debugging high-criticality data pipelines in production environments, ensuring reliability and performance.
  • Motivated to contribute to a data-driven culture, taking pride in seeing the impact of your work across the company.
Responsibilities:
  • Develop and maintain efficient and scalable data models and structures to support analytical workloads.
  • Design, develop, and maintain data pipelines that transform and process large volumes of data while embedding business context and semantics.
  • Implement automated data quality checks to ensure the consistency, accuracy, and reliability of data.
  • Ensure correct adoption and usage of Wellhub’s data by data practitioners across the company.
  • Live the mission: inspire and empower others by genuinely caring for your own wellbeing and that of your colleagues. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.

SQL, Apache Airflow, ETL, Kubernetes, Apache Kafka, Data engineering, Spark, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago

πŸ“ Worldwide

πŸ” Hospitality

🏒 Company: Lighthouse

Requirements:
  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred).
  • You stay up to date with industry trends, emerging technologies, and best practices in data engineering.
  • You improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed.
  • You ship large features independently, generate architecture recommendations, and have the ability to implement them.
  • Great communication: you regularly achieve consensus amongst teams.
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing, or with event-sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana and Prometheus.
Responsibilities:
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve the developer experience on the data front.
  • Work with data science and analytics teams to bring their research to production-grade data solutions, using technologies such as Airflow, dbt, or MLflow (among others).
  • As part of a platform team, communicate effectively with teams across the entire engineering organisation to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 4 days ago

πŸ“ Worldwide

🧭 Full-Time

🏒 Company: Pleo
Responsibilities:
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains, ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago

πŸ“ Germany, Austria, Italy, Spain, Portugal

πŸ” Financial and Real Estate

🏒 Company: PriceHubble πŸ‘₯ 101-250 πŸ’° Non-equity Assistance over 3 years ago (Artificial Intelligence (AI), PropTech, Big Data, Machine Learning, Analytics, Real Estate)

Requirements:
  • 3+ years of experience building and maintaining production data pipelines.
  • Excellent English communication skills, both spoken and written, to effectively collaborate with cross-functional teams and mentor other engineers; clear writing is key in our remote-first setup.
  • Proficient in working with geospatial data and leveraging geospatial features.
Responsibilities:
  • Work with backend engineers and data scientists to turn raw data into trusted insights, handling everything from scraping and ingestion to transformation and monitoring.
  • Navigate cost-value trade-offs to make decisions that deliver value to customers at an appropriate cost.
  • Develop solutions that work in over 10 countries, taking local specifics into account.
  • Lead a project from concept to launch with a temporary team of engineers.
  • Raise the bar and drive the team to deliver high-quality products, services, and processes.
  • Improve the performance, data quality, and cost-efficiency of our data pipelines at scale.
  • Maintain and monitor the data systems your team owns.

AWS, Docker, Leadership, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Git, Kubernetes, Apache Kafka, Data engineering, Data science, Spark, CI/CD, Problem Solving, RESTful APIs, Mentoring, Linux, Excellent communication skills, Teamwork, Cross-functional collaboration, Data visualization, Data modeling, Data management, English communication

Posted 6 days ago

πŸ“ Poland, Ukraine, Cyprus

🧭 Full-Time

πŸ” Software Development

🏒 Company: Competera πŸ‘₯ 51-100 πŸ’° $3,000,000 Seed about 1 year ago (Artificial Intelligence (AI), Big Data, E-Commerce, Retail, Machine Learning, Analytics, Retail Technology, Information Technology, Enterprise Software, Software)

Requirements:
  • 5+ years of experience in a data engineering role.
  • Strong knowledge of SQL, Spark, Python, Airflow, and binary file formats.
Responsibilities:
  • Contribute to the development of the new data platform.
  • Collaborate with platform and ML teams to create ETL pipelines that efficiently deliver clean and trustworthy data.
  • Engage in architectural decisions regarding the current and future state of the data platform.
  • Design and optimize data models based on business and engineering needs.

Python, SQL, ETL, Kafka, Airflow, Spark, Data modeling

Posted 6 days ago
πŸ”₯ Data Engineer
Posted 7 days ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ $145,000 - $160,000 USD per year

Requirements:
  • Proficiency in managing MongoDB databases, including performance tuning and maintenance.
  • Experience with cloud-based data warehousing, particularly using BigQuery.
  • Familiarity with dbt for data transformation and modeling.
  • Exposure to tools like Segment for data collection and integration.
  • Basic knowledge of integrating third-party data sources to build a comprehensive data ecosystem.
Responsibilities:
  • Oversee our production MongoDB database to ensure optimal performance, reliability, and security.
  • Assist in the management and optimization of data pipelines into BigQuery, ensuring data is organized and accessible for downstream users.
  • Use dbt to transform raw data into structured formats, making it useful for analysis and reporting.
  • Collaborate on the integration of data from Segment and various third-party sources to create a unified, clean data ecosystem.
  • Work closely with BI, Marketing, and Data Science teams to understand data requirements and ensure our infrastructure meets their needs.
  • Participate in code reviews, learn new tools, and contribute to the refinement of data processes and best practices.

SQL, ETL, MongoDB, Data engineering, Data modeling


πŸ“ Poland, Romania, Ukraine

πŸ” Cybersecurity

🏒 Company: Point Wild πŸ‘₯ 101-250 (Security, Software)

Requirements:
  • Experience building and maintaining ETL/ELT pipelines for large-scale data ingestion and transformation.
  • Strong knowledge of AWS services for ML infrastructure, model deployment, and automation.
  • Experience setting up CI/CD workflows for ML models, including versioning, monitoring, and automated retraining.
  • Comfortable writing efficient Python and SQL scripts for data processing and model deployment.
  • Able to balance quick PoC enablement with long-term scalability in AI deployments.
Responsibilities:
  • Design and maintain ETL/ELT pipelines to ingest, clean, and transform data from multiple product lines.
  • Stand up and manage AWS-based ML infrastructure (e.g., S3 data lakes, AWS Glue, EMR, AWS Batch, Lambda, SageMaker).
  • Own CI/CD for ML models, including environment setup, model versioning, containerization, and monitoring.
  • Ensure AI teams have reliable access to data, scalable training environments, and efficient deployment pipelines.
  • Help move AI proofs of concept from experimentation to fully productionized, scalable deployments.

AWS, Docker, Python, SQL, Apache Airflow, Cloud Computing, ETL, Data engineering, CI/CD, DevOps

Posted 10 days ago

πŸ“ Portugal

πŸ” E-commerce

🏒 Company: Constructor

Requirements:
  • Proficient in BI tools (both data analysis and building dashboards for engineers and non-technical folks).
  • Proficient in SQL (any variant) and well-versed in exploratory data analysis with Python (pandas and numpy, data visualization libraries).
  • Practical familiarity with the big data stack (Spark, Hive, Databricks).
Responsibilities:
  • Enhance the dashboard experience for merchandisers by building analytics that provide actionable insights to improve e-commerce KPIs.
  • Perform data exploration and research user behavior.
  • Implement end-to-end data pipelines to support real-time analytics for essential business metrics.
  • Take part in product research and development, iterating with prototypes and customer product interviews.

Python, SQL, Data Analysis, ETL, Git, Clickhouse, Data engineering, REST API, Pandas, Spark, Communication Skills, Analytical Skills, Agile methodologies, Data visualization, Data modeling

Posted 13 days ago
πŸ”₯ Data Engineer
Posted 14 days ago

πŸ“ Croatia

🧭 Full-Time

🏒 Company: Inspiration Commerce Group

Requirements:
  • You have a solid track record of working in a data engineering role within a high-growth environment (we move quickly, and want to ensure you're comfortable with being uncomfortable).
  • You have expert-level SQL knowledge: you can write and optimize complex queries and tune database performance for large-scale systems.
  • You are comfortable working with modern database technologies, cloud-based data solutions, ETL/ELT tools, and the data services within cloud platforms.
Responsibilities:
  • Assist in designing, implementing, and overseeing scalable data pipelines and architectures across the organization.
  • Build and maintain reliable ETL processes to ensure efficient data ingestion, transformation, and storage.
  • Streamline and manage data flows and integrations across multiple platforms and applications.
  • Work with large-scale event-level data, aggregating and processing it to drive business intelligence and analytics efforts.
  • Continuously assess and adopt new data technologies and tools to improve our data infrastructure and capabilities.

PostgreSQL, Python, SQL, Cloud Computing, ETL, Data engineering, Analytical Skills, RESTful APIs, Data modeling
