Senior Data Engineer

Posted 3 days ago

💎 Seniority level: Senior, 5+ years

📍 Location: Poland, Spain, United Kingdom

🔍 Industry: Beauty marketplace

🏢 Company: Booksy · 👥 501-1000 · 💰 Debt Financing 4 months ago · Tags: Mobile Payments, Marketplace, SaaS, Payments, Mobile Apps, Wellness, Software

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: GCP, Data engineering, CI/CD, Data modeling

Requirements:
  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar (see the pipeline sketch after these lists).
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
Responsibilities:
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.
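
For orientation, here is a minimal sketch of the kind of GCP pipeline this stack implies: a streaming Apache Beam job (the SDK behind Dataflow) reading JSON events from Pub/Sub and appending them to BigQuery. It assumes the apache-beam[gcp] package; the project, topic, table, and schema names are hypothetical.

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Hypothetical resource names, for illustration only.
    TOPIC = "projects/example-project/topics/booking-events"
    TABLE = "example-project:analytics.booking_events"

    def run():
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
                | "ParseJson" >> beam.Map(json.loads)
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    TABLE,
                    schema="event_id:STRING,created_at:TIMESTAMP,amount:FLOAT",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                )
            )

    if __name__ == "__main__":
        run()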

Related Jobs


๐Ÿ“ Spain

๐Ÿ’ธ 80000.0 - 110000.0 EUR per year

๐Ÿ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficiency in SQL and DBT for data transformations.
  • Fluency in Python or other modern programming languages.
  • Experience with infrastructure as code languages like Terraform.
  • Knowledge of data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS or other cloud platforms like Azure or GCP.
  • Ability to provide constructive code reviews.
  • Strong communication and collaboration skills.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate with teams across engineering, analytics, and product to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review teammates' work and offer feedback.
  • Serve as a technical mentor for other engineers.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation.

AWS, Python, SQL, Terraform, Data modeling

Posted 2 days ago

๐Ÿ“ South Africa, Mauritius, Kenya, Nigeria

๐Ÿ” Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years of related work experience.
  • Minimum of 2 years of experience building and optimizing 'big data' pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding of, or experience with, Glue and PySpark is highly desirable (see the PySpark sketch after these lists).
  • Experience managing the data life cycle.
  • Proficiency in manipulating, processing, and structuring large, disconnected data sets for analytical use.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency management, and workload management.
  • Good understanding of data management principles, including data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD processes to ensure optimal data pipelines.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and governance, and handle overall maintenance of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, the data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.
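
As a rough illustration of the Glue/PySpark work referenced above, a minimal batch job that reads raw CSV files, de-duplicates and filters them, and writes partitioned Parquet. Bucket paths and column names are hypothetical; it assumes a local Spark runtime or a Glue-provided one.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Hypothetical input location.
    raw = spark.read.csv("s3://example-bucket/raw/orders/", header=True, inferSchema=True)

    cleaned = (
        raw
        .dropDuplicates(["order_id"])                       # de-duplicate on the key
        .withColumn("order_date", F.to_date("created_at"))  # derive a partition column
        .filter(F.col("amount") > 0)                        # drop invalid rows
    )

    # Hypothetical output location, partitioned for downstream analytics.
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )

    spark.stop()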

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 25 days ago

๐Ÿ“ Poland

๐Ÿ” Financial services

๐Ÿข Company: Capco๐Ÿ‘ฅ 101-250Electric VehicleProduct DesignMechanical EngineeringManufacturing

Requirements:
  • Strong experience with cloud providers, especially GCP
  • Hands-on experience using Python; Scala and Java are nice to have
  • Experience with data and cloud technologies such as Hadoop, Hive, Spark, PySpark, and Dataproc
  • Hands-on experience with schema design for structured and semi-structured data
  • Experience with messaging and streaming technologies such as Kafka and Spark Streaming (see the streaming sketch after these lists)
  • Strong experience in SQL
  • Understanding of containerisation (Docker, Kubernetes)
  • Experience designing, building, and maintaining CI/CD pipelines
  • Enthusiasm to pick up new technologies as needed
Responsibilities:
  • Work alongside clients to interpret requirements and define industry-leading solutions
  • Design and develop robust, well-tested data pipelines
  • Demonstrate engineering and SDLC best practices, and help clients adhere to them
  • Lead and mentor a team of junior and mid-level engineers
  • Contribute to security designs and have advanced knowledge of key security technologies
  • Support internal Capco capabilities by sharing insight, experience, and credentials
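
A minimal sketch of the Kafka plus Spark Streaming combination listed above, using PySpark's Structured Streaming API to count events per minute. It assumes the spark-sql-kafka connector is on the classpath; the broker, topic, and checkpoint path are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Hypothetical broker and topic.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "payment-events")
        .load()
    )

    # Kafka delivers raw bytes; cast the payload and bucket counts per minute.
    counts = (
        events
        .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    query = (
        counts.writeStream
        .outputMode("complete")
        .format("console")
        .option("checkpointLocation", "/tmp/checkpoints/payment-events")
        .start()
    )
    query.awaitTermination()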

Docker, Python, SQL, ETL, GCP, Git, Hadoop, Kafka, Kubernetes, Snowflake, Airflow, Spark, CI/CD

Posted about 2 months ago

๐Ÿ“ Spain

๐Ÿ’ธ 80000 - 110000 EUR per year

๐Ÿ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficient in SQL and DBT for data transformations.
  • Fluent in Python or other modern programming languages.
  • Experience with infrastructure as code languages, like Terraform.
  • Experienced in data pipelines, data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS and/or other cloud providers like Azure or GCP.
  • Strong cross-team communication and collaboration skills.
  • Ability to thrive in ambiguous situations.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate closely with tech leads, managers, and cross-functional teams to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review other engineers' work, providing constructive feedback.
  • Act as a technical resource and mentor for engineers inside and outside the team.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation as required.

AWS, Python, SQL, GCP, Azure, Data engineering, Collaboration, Terraform, Data modeling

Posted 2 months ago

๐Ÿ“ Paris, New York, San Francisco, Sydney, Madrid, London, Berlin

๐Ÿ” Communication technology

Requirements:
  • Passionate about data engineering.
  • Experience in designing and developing data infrastructure.
  • Technical skills to solve complex challenges.
Responsibilities:
  • Play a crucial role in designing, developing, and maintaining data infrastructure.
  • Collaborate with teams across the company to solve complex challenges.
  • Improve operational efficiency and help lead the business toward strategic goals.
  • Contribute to engineering efforts that enhance the customer journey.

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Data engineering

Posted 3 months ago

๐Ÿ“ Poland

๐Ÿงญ Full-Time

๐Ÿ” Enterprise security products

๐Ÿข Company: Intuition Machines, Inc.๐Ÿ‘ฅ 51-100InternetEducationInternet of ThingsMachine LearningSoftware

Requirements:
  • Thoughtful, conscientious, and self-directed.
  • Experience with data engineering services on major cloud providers.
  • Minimum of 3 years in a data role involving data store design, feature engineering, and building reliable data pipelines.
  • Proven ability to independently decide on data processing strategies.
  • At least 2 years of professional software development experience outside data engineering.
  • Significant coding experience in Python.
  • Experience in building/maintaining distributed data pipelines.
  • Experience with Kafka infrastructure and applications (see the consumer sketch after these lists).
  • Deep understanding of SQL and NoSQL databases (preferably ClickHouse).
  • Familiarity with public cloud providers (AWS or Azure).
  • Experience with CI/CD and orchestration platforms: Kubernetes and containerization.
Responsibilities:
  • Maintain, extend, and improve existing data/ML workflows, and implement new workflows for high-velocity data.
  • Provide systems for ML engineers and researchers to build datasets on demand.
  • Influence data storage and processing strategies.
  • Collaborate with ML, frontend, and backend teams to enhance the data platform.
  • Reduce deployment time for dashboards and ML models.
  • Establish best practices and develop pipelines for efficient dataset usage.
  • Handle large datasets under performance constraints.
  • Iterate quickly to ensure deployment of products or features to millions of users.
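
To ground the Kafka and ClickHouse requirements above, a minimal consumer sketch that batches high-velocity events into ClickHouse. It assumes the confluent-kafka and clickhouse-driver packages; the broker, topic, table, and field names are hypothetical.

    import json
    from confluent_kafka import Consumer
    from clickhouse_driver import Client

    # Hypothetical connection details.
    consumer = Consumer({
        "bootstrap.servers": "broker:9092",
        "group.id": "events-loader",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["security-events"])
    clickhouse = Client(host="clickhouse.internal")

    BATCH_SIZE = 500
    batch = []
    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None or msg.error():
                continue
            event = json.loads(msg.value())
            batch.append((event["event_id"], event["kind"], event["ts"]))
            if len(batch) >= BATCH_SIZE:
                # ClickHouse favours large batched inserts over row-at-a-time writes.
                clickhouse.execute(
                    "INSERT INTO events (event_id, kind, ts) VALUES", batch
                )
                consumer.commit()
                batch = []
    finally:
        consumer.close()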

AWS, Python, Software Development, SQL, Kafka, Kubernetes, Strategy, Azure, ClickHouse, Data engineering, NoSQL, CI/CD

Posted 4 months ago

๐Ÿ“ Central EU or Americas

๐Ÿงญ Full-Time

๐Ÿ” Real estate investment

๐Ÿข Company: Roofstock๐Ÿ‘ฅ 501-1000๐Ÿ’ฐ $240,000,000 Series E almost 3 years ago๐Ÿซ‚ Last layoff almost 2 years agoRental PropertyPropTechMarketplaceReal EstateFinTech

Requirements:
  • BS or MS in a technical field such as computer science or engineering.
  • 8+ years of technical experience working with data.
  • 5+ years of experience building scalable data services and applications using SQL, Python, and Java/Kotlin.
  • Deep understanding of microservices architecture and RESTful API development (see the API sketch after these lists).
  • Experience with AWS services, including messaging, and familiarity with real-time data processing frameworks.
  • Significant experience building and deploying data-related infrastructure and robust data pipelines.
  • Strong understanding of data architecture and related challenges.
  • Experience with complex problems and distributed systems focusing on scalability and performance.
  • Strong communication and interpersonal skills.
  • Able to work independently while collaborating with cross-functional teams.
Responsibilities:
  • Improve and maintain the data services platform.
  • Deliver high-quality data services promptly, ensuring data governance and integrity while meeting objectives and maintaining SLAs.
  • Develop effective architectures and produce key code components contributing to technical solutions.
  • Integrate a diverse network of third-party tools into a cohesive, scalable platform.
  • Continuously enhance system performance and reliability by diagnosing and resolving operational issues.
  • Ensure rigorous testing of the team's work through automated methods.
  • Support data infrastructure and collaborate with the data team on scalable data pipelines.
  • Work within an Agile/Scrum framework with cross-functional teams to deliver value.
  • Influence the enterprise data platform architecture and standards.
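
As a sketch of the microservices and RESTful API side of this role, a minimal FastAPI data service exposing one read endpoint. The framework choice, route, and listing fields are illustrative assumptions, not the company's actual stack; run it with uvicorn.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Listing(BaseModel):
        address: str
        monthly_rent: float

    # In-memory stand-in for a real datastore; illustration only.
    LISTINGS = {
        "r-100": Listing(address="123 Example St", monthly_rent=1850.0),
    }

    @app.get("/listings/{listing_id}", response_model=Listing)
    def get_listing(listing_id: str) -> Listing:
        if listing_id not in LISTINGS:
            raise HTTPException(status_code=404, detail="listing not found")
        return LISTINGS[listing_id]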

AWS, Docker, Python, SQL, Agile, ETL, SCRUM, Snowflake, Airflow, Data engineering, gRPC, RESTful APIs, Microservices

Posted 6 months ago