
Data Engineer

Posted 3 months ago


📍 Location: United Kingdom, England

🔍 Industry: Consultancy

🏢 Company: The Dot Collective 👥 11-50 · Cloud Computing · Analytics · Information Technology

🪄 Skills: Python, SQL, Agile, SCRUM, Spark, Collaboration

Requirements:
  • Good knowledge of distributed computing with Spark.
  • Understanding of cloud architecture principles and best practices.
  • Hands-on experience in designing, deploying, and managing cloud resources.
  • Excellent Python and SQL skills.
  • Experience in cloud automation and orchestration using tools such as CloudFormation or Terraform.
  • Agile ways of working.
Responsibilities:
  • Implement cloud-native data platforms.
  • Engineer scalable and reliable data pipelines.
  • Monitor and perform tuning of cloud-based applications and services.
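
As an illustration of the first two responsibilities, here is a minimal PySpark pipeline sketch; the paths, column names, and aggregation are hypothetical, not taken from the job description:

```python
# Minimal PySpark sketch: read raw events, aggregate daily, write partitioned output.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

events = spark.read.parquet("s3://raw-bucket/events/")  # hypothetical source

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partitioning by date keeps downstream scans cheap and the pipeline scalable.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-bucket/daily_events/"  # hypothetical sink
)
```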

Related Jobs

🔥 Lead Data Engineer
Posted 10 days ago

📍 Europe

🧭 Full-Time

🔍 Supply Chain Risk Analytics

🏢 Company: Everstream Analytics 👥 251-500 💰 $50,000,000 Series B almost 2 years ago · Productivity Tools · Artificial Intelligence (AI) · Logistics · Machine Learning · Risk Management · Analytics · Supply Chain Management · Procurement

Requirements:
  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, OpenSearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
Responsibilities:
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar.
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.
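
A hedged sketch of the real-time processing this role describes, using the kafka-python client; the topic, broker endpoint, and message fields are hypothetical:

```python
# Hypothetical topic, broker endpoint, and message schema; kafka-python client.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "shipment-events",                  # hypothetical topic
    bootstrap_servers="broker:9092",    # hypothetical MSK endpoint
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Real logic would enrich the event and persist it; this just flags one field.
    if event.get("risk_score", 0) > 0.8:
        print(f"high-risk shipment: {event.get('shipment_id')}")
```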

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, Apache Kafka, NoSQL, Spark, Data modeling


📍 Copenhagen, London, Stockholm, Berlin, Madrid, Montreal, Lisbon, and 35 other countries

🧭 Full-Time

🔍 Financial Technology

Requirements:
  • Strong background in building and managing data infrastructure at scale.
  • Expertise in Python, AWS, dbt, Airflow, and Kubernetes.
  • Ability to translate business and product requirements into technical data solutions.
  • Experience in mentoring and fostering collaboration within teams.
  • Curiosity and enthusiasm for experimenting with new technologies to solve complex problems.
  • Hands-on experience with modern data tools and contributing to strategic decision-making.
Responsibilities:
  • Partnering with product and business teams to develop data strategies that enable new features and improve user experience.
  • Driving key strategic projects across the organisation, dipping in and out as needed to provide leadership and hands-on support.
  • Supporting multiple teams across Pleo in delivering impactful data and analytics solutions.
  • Building data products that directly support Pleo's product roadmap and business goals.
  • Collaborating with the VP of Data and other data leaders to set the vision for Pleo’s data strategy and ensure alignment with company objectives.
  • Enhancing our data infrastructure and pipelines to improve scalability, performance, and data quality.
  • Experimenting with and implementing innovative technologies to keep Pleo’s data stack at the forefront of the industry.
  • Mentoring engineers, analysts, and data scientists to foster growth and build a world-class data team.
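
A minimal sketch of the Airflow-plus-dbt pattern this stack implies (assumes Airflow 2.4+; the DAG id, schedule, and dbt project path are hypothetical):

```python
# Hypothetical DAG id, schedule, and dbt project path; assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # hypothetical path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test  # only test models that built successfully
```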

🪄 Skills: AWS, Python, Apache Airflow, Kubernetes, Data engineering

Posted 17 days ago
🔥 Senior Data Engineer
Posted 21 days ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years’ experience building and optimizing ‘big data’ pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding of or experience with Glue and PySpark is highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements that automate manual work.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems, with support from the Senior Data Engineer.
  • Test CI/CD processes to keep data pipelines optimal.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and run unit tests on data pipelines and ensure data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance as well as upkeep of overall maintenance of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.
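
A small sketch of the unit-testing responsibility above, using pandas and pytest; the transform and its columns are hypothetical:

```python
# Hypothetical transform and columns; run with pytest.
import pandas as pd

def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent record per id."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates(subset="id", keep="last")
          .reset_index(drop=True)
    )

def test_dedupe_latest_keeps_newest_row():
    df = pd.DataFrame({
        "id": [1, 1, 2],
        "updated_at": ["2024-01-01", "2024-02-01", "2024-01-15"],
        "value": ["old", "new", "only"],
    })
    out = dedupe_latest(df)
    assert len(out) == 2
    assert out.loc[out["id"] == 1, "value"].item() == "new"
```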

🪄 Skills: AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD


📍 UK

🧭 Full-Time

🔍 Technology solutions

Requirements:
  • 5+ years of commercial experience in Data Engineering.
  • Experience in a senior technical leadership role.
  • Strong expertise in modern data warehouse architectures, with experience in Snowflake preferred.
  • Advanced knowledge of Python for data processing and automation.
  • Expert-level SQL skills for complex data transformations and optimization.
  • Extensive experience in designing and implementing ETL/ELT pipelines.
  • Deep expertise in Azure cloud services and data platform components.
  • Strong background in Infrastructure as Code - Terraform.
  • Proven experience with GIT and CI/CD tools (Azure DevOps).
  • Track record of successfully implementing enterprise-scale data solutions.
  • Expertise in data visualization tools, particularly Power BI.
Responsibilities:
  • Lead and shape the newly created Data Engineers team.
  • Architect and implement enterprise-scale data solutions.
  • Drive technical excellence and establish best practices for ETL/ELT processes.
  • Mentor team members and provide technical leadership across multiple workstreams.
  • Collaborate with stakeholders to translate business requirements into scalable data solutions.
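
A hedged sketch of an in-warehouse ELT step run from Python with the snowflake-connector-python package; the account, credentials, and table names are hypothetical:

```python
# Hypothetical account, credentials, and table names; snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="ELT_SERVICE",
    password="...",            # in practice, pull from a secrets manager
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)
cur = conn.cursor()
try:
    # ELT pattern: transform inside the warehouse rather than in flight.
    cur.execute("""
        CREATE OR REPLACE TABLE MARTS.DAILY_ORDERS AS
        SELECT order_date, COUNT(*) AS order_count
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```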

🪄 Skills: Python, SQL, ETL, Git, Snowflake, Azure, Data engineering, CI/CD, Terraform

Posted about 1 month ago
🔥 Senior/Staff Data Engineer
Posted about 2 months ago

📍 UK

🔍 Advertising

Requirements:
  • Ability to take an ambiguously defined task and break it down into actionable steps.
  • Ability to follow through complex projects to completion, both by independent implementation and by coordinating others.
  • Deep understanding of algorithm and software design, concurrency, and data structures.
  • Experience in implementing probabilistic or machine learning algorithms.
  • Experience in designing scalable distributed systems.
  • High GPA from a well-respected Computer Science program or equivalent experience.
Responsibilities:
  • Design modular and scalable real-time data pipelines to handle huge datasets.
  • Suggest, implement, and coordinate architectural improvements for big data ML pipelines.
  • Understand and implement custom ML algorithms in a low latency environment.
  • Work on microservice architectures that run training, inference, and monitoring on thousands of ML models concurrently.
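
A minimal sketch, using only the standard library, of fanning inference across many models concurrently as the last responsibility describes; the model ids and scoring stub are hypothetical:

```python
# Hypothetical model ids and scoring stub; standard library only.
from concurrent.futures import ThreadPoolExecutor, as_completed

def predict(model_id: str, features: list[float]) -> tuple[str, float]:
    # Placeholder: a real system would call a loaded model or a model service.
    score = sum(features) / (len(features) or 1)
    return model_id, score

model_ids = [f"model-{i}" for i in range(1000)]
features = [0.2, 0.5, 0.3]

results = {}
with ThreadPoolExecutor(max_workers=32) as pool:
    futures = {pool.submit(predict, m, features): m for m in model_ids}
    for fut in as_completed(futures):
        model_id, score = fut.result()
        results[model_id] = score
```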

🪄 Skills: AWS, Python, Apache Airflow, Machine Learning, Algorithms, Data engineering, Data Structures

🔥 Senior Data Engineer
Posted about 2 months ago

📍 United States, United Kingdom, Spain, Estonia

🔍 Identity verification

🏢 Company: Veriff 👥 501-1000 💰 $100,000,000 Series C about 3 years ago 🫂 Last layoff over 1 year ago · Artificial Intelligence (AI) · Fraud Detection · Information Technology · Cyber Security · Identity Management

Requirements:
  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi.
  • Experience with data from diverse sources including RDBMS and APIs.
Responsibilities:
  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights.
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.
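
A hedged sketch of querying Redshift from Python via psycopg2 (Redshift speaks the PostgreSQL wire protocol); the cluster endpoint and dimensional tables are hypothetical:

```python
# Hypothetical cluster endpoint and dimensional tables; psycopg2 over the
# PostgreSQL wire protocol, which Redshift supports.
import psycopg2

conn = psycopg2.connect(
    host="cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",   # in practice, use IAM auth or a secrets manager
)
with conn, conn.cursor() as cur:
    # Dimensional-model rollup: a fact table joined to its date dimension.
    cur.execute("""
        SELECT d.calendar_month, COUNT(*) AS verifications
        FROM fact_verification f
        JOIN dim_date d ON f.date_key = d.date_key
        GROUP BY d.calendar_month
        ORDER BY d.calendar_month
    """)
    for month, n in cur.fetchall():
        print(month, n)
```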

🪄 Skills: Python, SQL, Apache Airflow, ETL, Data engineering, JSON, Data modeling


📍 Ireland, United Kingdom

🔍 IT, Digital Transformation

🏢 Company: Tekenable 👥 51-100 · Information Technology · Enterprise Software · Software

Requirements:
  • Experience with the Azure Intelligent Data Platform, including Data Lakes, Data Factory, Azure Synapse, Azure SQL, and Power BI.
  • Knowledge of Microsoft Fabric.
  • Proficiency in SQL and Python.
  • Understanding of data integration and ETL processes.
  • Ability to work with large datasets and optimize data systems for performance and scalability.
  • Experience working with JSON, CSV, XML, Open API, RESTful API integration and OData v4.0.
  • Strong knowledge of SQL and experience with relational databases.
  • Experience with big data technologies like Hadoop, Spark, or Kafka.
  • Familiarity with cloud platforms such as Azure.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines.
  • Collaborate with data analysts to understand their requirements.
  • Implement data integration solutions to meet business needs.
  • Ensure data quality and integrity through testing and validation.
  • Optimize data systems for performance and scalability.
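
A small sketch of the OData v4.0 / RESTful ingestion work mentioned above, using the requests library; the endpoint and payload shape are hypothetical:

```python
# Hypothetical endpoint and payload shape; follows OData v4.0 paging conventions.
import requests

def fetch_all(base_url: str) -> list[dict]:
    records, url = [], base_url
    while url:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload.get("value", []))   # OData result array
        url = payload.get("@odata.nextLink")       # OData v4 pagination link
    return records

rows = fetch_all("https://api.example.com/odata/Orders")  # hypothetical endpoint
print(f"fetched {len(rows)} records")
```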

🪄 Skills: Python, SQL, ETL, Hadoop, Kafka, Azure, Spark, JSON

Posted about 2 months ago
🔥 Data Engineer
Posted about 2 months ago

📍 London

🧭 Full-Time

🔍 Financial technology

🏢 Company: Flagstone Group LTD

Requirements:
  • Knowledge of data architecture, cloud technologies, and programming languages relevant to data engineering.
  • A collaborative spirit, curiosity, a problem-solving mindset, attention to detail, resilience, and a customer-centric focus.
  • At least 2 years of experience as a Databricks Data Engineer, well-versed in applying software engineering principles to data engineering practices.
  • Commitment to staying up to date with industry trends and advances that enable innovative solutions and enhance operational efficiency.
Responsibilities:
  • Create and maintain robust cloud infrastructure that ensures reliability, efficiency, and scalability.
  • Define goals, identify data sources, design processing plans, orchestrate data flow, ensure data quality, and provide accessible interfaces for business empowerment.
  • Integrate people, processes, and technology to enhance the quality and speed of data delivery.
  • Engage with stakeholders to understand their needs and ensure data solutions align with business objectives.
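
A minimal sketch of a Databricks-style incremental load into a Delta table; it assumes a Databricks runtime that provides `spark`, and the paths and table names are hypothetical:

```python
# Hypothetical landing path and table name; assumes a Databricks runtime where
# `spark` is predefined and Delta Lake is available.
from pyspark.sql import functions as F

incoming = (
    spark.read.json("/mnt/raw/deposits/2024-06-01/")
    .withColumn("ingested_at", F.current_timestamp())
)

(
    incoming.write
    .format("delta")
    .mode("append")
    .saveAsTable("bronze.deposits")
)
```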

🪄 Skills: Python, SQL, Cloud Computing, ETL, Data engineering

🔥 Principal/Lead Data Engineer
Posted about 2 months ago

📍 Romania, UK, Netherlands, Belgium

🧭 Full-Time

🔍 Digital consultancy, Cloud services

🏢 Company: Qodea

Requirements:
  • Strong experience as a Senior / Principal Cloud Data Engineer, with a solid track record of migrating large volumes of data.
  • Experience working on projects within large enterprise organisations.
  • Experience in performing a technical leadership role on projects.
  • A track record of being involved in a wide range of projects with various tools and technologies.
  • Strong communication and stakeholder management skills.
  • Significant experience in coding in Python and Scala or Java.
  • Experience with big data processing tools such as Hadoop or Spark.
  • Cloud experience, specifically with GCP and its services.
  • Experience with Terraform and working with agile software engineering teams.
  • Prior experience in a customer-facing consultancy role would be highly desirable.
Responsibilities:
  • Lead client engagements and act as team lead on client-facing delivery projects.
  • Consult, design, coordinate architecture to modernise infrastructure for performance, scalability, latency, and reliability.
  • Identify, scope, and participate in the design and delivery of cloud data platform solutions.
  • Deliver highly scalable big data architecture solutions using Google Cloud Technology.
  • Create and maintain appropriate standards and best practices around Google Cloud SQL, BigQuery, and other data technologies.
  • Design and execute a platform modernization approach for customers' data environments.
  • Document and share technical best practices/insights with engineering colleagues and the Data Engineering community.
  • Mentor and develop engineers within the Qodea Data Team and within our customers' engineering teams.
  • Act as the point of escalation with client-facing problems that need solving.
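
A hedged sketch of a BigQuery rollup with the google-cloud-bigquery client, in the spirit of the GCP work above; the project, dataset, and SQL are hypothetical:

```python
# Hypothetical project, dataset, and SQL; google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY day
    ORDER BY day
"""
for row in client.query(query).result():
    print(row.day, row.events)
```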

🪄 Skills: Leadership, Python, SQL, Agile, GCP, Hadoop, Java, Data engineering, Spark, Terraform

🔥 Principal Data Engineer
Posted 2 months ago

📍 UK

🧭 Full-Time

🔍 Data infrastructure and enterprise technology

🏢 Company: Aker Systems 👥 101-250 💰 over 4 years ago · Cloud Data Services · Business Intelligence · Analytics · Software

Requirements:
  • Bachelor's degree.
  • Data pipeline development using data processing technologies and frameworks.
  • Agile or other rapid application development methods.
  • Data modeling and understanding of different data structures and their benefits and limitations under particular use cases.
  • Experience in Public Cloud services, such as AWS, with knowledge of core services like EC2, RDS, Lambda, Athena & Glue preferred.
  • Configuring and tuning Relational and NoSQL databases.
  • Programming or scripting languages, such as Python.
  • Test Driven Development with appropriate tools and frameworks.
Responsibilities:
  • Code, test, and document new or modified data pipelines that meet functional/non-functional business requirements.
  • Conduct logical and physical database design.
  • Expand and grow data platform capabilities to solve new data and analytics problems.
  • Conduct data analysis, identifying feasible solutions and enhancements to data processing challenges.
  • Ensure that data models are consistent with the data architecture.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
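
A small sketch of launching an Athena query with boto3, matching the AWS services named above; the database, SQL, and output bucket are hypothetical:

```python
# Hypothetical database, SQL, and output bucket; boto3 Athena client.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
)
print("execution id:", response["QueryExecutionId"])
```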

🪄 Skills: AWS, Python, Agile, Data Analysis, Data Structures, NoSQL, Linux, Data modeling
