Apply

Senior Data Engineer

Posted about 1 month ago

πŸ’Ž Seniority level: Senior, 5+ years

πŸ“ Location: Poland, Spain, United Kingdom

πŸ” Industry: Beauty marketplace

🏒 Company: Booksy πŸ‘₯ 501-1000 πŸ’° Debt Financing 5 months ago | Mobile Payments, Marketplace, SaaS, Payments, Mobile Apps, Wellness, Software

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years

πŸͺ„ Skills: GCP, Data engineering, CI/CD, Data modeling

Requirements:
  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar.
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
Responsibilities:
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.
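
A minimal sketch of the kind of GCP pipeline this role describes: streaming events from Pub/Sub into BigQuery with Apache Beam (runnable on Dataflow). The project, topic, table, and schema names are illustrative assumptions, not details from the posting.

```python
# Hypothetical Pub/Sub -> BigQuery streaming pipeline; all resource
# names are placeholders, not from the job posting.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(
        streaming=True, project="example-project", region="europe-west1"
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/bookings"
            )
            | "ParseJson" >> beam.Map(json.loads)  # json.loads accepts bytes
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.bookings",
                schema="booking_id:STRING,created_at:TIMESTAMP,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```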
Apply

Related Jobs

Apply

πŸ“ Germany, Spain, United Kingdom, Austria

πŸ” Software Development

🏒 Company: LocalStack πŸ‘₯ 11-50 πŸ’° $25,000,000 Series A 4 months ago | Cloud Computing, Information Technology, Software

Requirements:
  • Ability and experience working with non-technical stakeholders to gather requirements
  • Ability to define technical initiatives required to satisfy business requirements
  • Excellent knowledge of Python
  • Experience in designing real-time data ingestion solutions with massive volumes of data
  • (preferred) Experience with AWS services commonly used in Data Engineering (like S3, ECS, Glue, EMR)
  • Experience with relational databases and data warehouses, data orchestration and ingestion tools, SQL, and BI tools
  • (preferred) Experience working remotely / in async settings
  • Experience owning initiatives at the IC level
  • Experience providing guidance to junior engineers
Responsibilities:
  • Maintain, monitor, and optimize data ingestion pipelines for our current data platform.
  • Lead the development of our future data platform based on evolving business needs.
  • Shape the data team roadmap and contribute to long-term strategic planning.
  • Take full ownership of data ingestion from external sources, ensuring smooth functionality.
  • Design and implement a robust data modelling and data lake solution architecture.
  • Provide technical leadership and mentorship to the data engineering team.
  • Collaborate with engineering teams to define and refine ingestion pipeline requirements.
  • Work with stakeholders to gather business questions and data needs.
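
As a rough illustration of the real-time ingestion work above, here is a minimal Python sketch that batches high-volume Kafka events into S3 objects with boto3. The topic, broker address, bucket, and batch size are assumptions for illustration only.

```python
# Hypothetical Kafka -> S3 batching consumer; names are placeholders.
import json
import time
import uuid

import boto3
from kafka import KafkaConsumer

BATCH_SIZE = 10_000  # assumed; tune to volume and latency requirements

s3 = boto3.client("s3")
consumer = KafkaConsumer(
    "usage-events",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # One object per batch keeps S3 PUT counts bounded at high volume.
        key = f"ingest/{time.strftime('%Y/%m/%d')}/{uuid.uuid4()}.json"
        s3.put_object(Bucket="raw-events", Key=key, Body=json.dumps(batch).encode())
        batch = []
```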

AWS, Docker, Leadership, Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Written communication, Data visualization, Team management, Stakeholder management, Data modeling

Posted 2 days ago
Apply

πŸ“ Madrid, Barcelona

πŸ” Software Development

🏒 Company: Clarity AI

Requirements:
  • 5+ years in data architecture, data engineering, or a related role, with hands-on experience in data modeling, schema design, and cloud-based data systems
  • Strong Python and SQL skills and proficiency with distributed data stores
  • Proficiency in schema design, data modeling, and building data products
  • Proficient in at least one major programming language, preferably Python, with a software engineering mindset for writing clean, maintainable code
  • Deep understanding of architectural principles in microservices, distributed systems, and data pipeline design
  • Familiarity with containerized environments, public/private API integrations, and security best practices
  • Strong communication and interpersonal skills, with experience working cross-functionally
  • Proven ability to guide teams and drive complex projects to successful completion
  • Self-starter, able to take ownership and initiative, with high energy and stamina
  • Decisive and action-oriented, able to make rapid decisions even when information is incomplete
  • Highly motivated, independent, and deeply passionate about sustainability and impact
  • Excellent oral and written English communication skills (minimum C1 level, proficient user)
Responsibilities:
  • Transforming raw data into intuitive, high-quality data models that support analytical and reporting needs across the organization
  • Implementing and monitoring data quality checks to ensure the accuracy, completeness, and reliability of data across all systems
  • Collaborating across functions
  • Working closely with engineering and product teams to understand business requirements, translating them into scalable data solutions
  • Acting as a bridge between technical and non-technical stakeholders, ensuring alignment with strategic goals and effective communication of technical designs
  • Leading initiatives to improve data practices, from schema design to data governance, ensuring data quality, consistency, and security
  • Guiding the team in experimenting with new tools and technologies thoughtfully, focusing on understanding both the benefits and limitations of each option
  • Continuously evolving the data architecture for optimal performance, balancing scalability with cost-efficiency and reliability
  • Applying a pragmatic approach to performance metrics and scaling decisions, ensuring that the system remains performant without unnecessary complexity
  • Implementing performance metrics to monitor system health, proposing improvements where necessary
  • Maintaining comprehensive documentation of data systems, processes, and best practices to facilitate knowledge sharing and compliance
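
A minimal sketch of the data quality checks described above, assuming a pandas-based workflow; the column names, ranges, and input file are illustrative, not from the posting.

```python
# Hypothetical completeness/validity checks before publishing a dataset.
import pandas as pd


def check_quality(df: pd.DataFrame) -> list[str]:
    """Return human-readable descriptions of any failed checks."""
    failures = []
    if df["company_id"].isna().any():
        failures.append("company_id contains nulls")
    if df["company_id"].duplicated().any():
        failures.append("company_id is not unique")
    if not df["esg_score"].between(0, 100).all():
        failures.append("esg_score outside the expected 0-100 range")
    return failures


df = pd.read_parquet("companies.parquet")  # assumed input location
problems = check_quality(df)
if problems:
    raise ValueError("Data quality checks failed: " + "; ".join(problems))
```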

Docker, Python, SQL, Apache Airflow, Cloud Computing, Kubernetes, Algorithms, Data engineering, Data Structures, Postgres, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 16 days ago
Apply

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: Docker πŸ‘₯ 251-500 πŸ’° $105,000,000 Series C almost 3 years ago | Developer Tools, Developer Platform, Information Technology, Software

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
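
A minimal sketch of the Python-and-SQL ETL work described above, assuming Snowflake (the posting equally allows BigQuery); connection parameters, the stage, and table names are placeholders.

```python
# Hypothetical ETL step: load staged events, then rebuild a reporting table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="REPORTING",
)
try:
    cur = conn.cursor()
    # Load raw JSON events staged in cloud storage into a landing table.
    cur.execute("""
        COPY INTO raw_events FROM @events_stage
        FILE_FORMAT = (TYPE = JSON) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Transform with SQL: aggregate daily active users for reporting.
    cur.execute("""
        CREATE OR REPLACE TABLE daily_active_users AS
        SELECT DATE(event_ts) AS day, COUNT(DISTINCT user_id) AS dau
        FROM raw_events
        GROUP BY 1
    """)
finally:
    conn.close()
```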

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 19 days ago
Apply

πŸ“ Poland

🧭 Full-Time

πŸ” Software Development

🏒 Company: N-iX πŸ‘₯ 1001-5000 | IT Services and IT Consulting

Requirements:
  • Minimum of 3-4 years as a data engineer, or in a relevant field
  • Advanced experience in Python, particularly in delivering production-grade data pipelines and troubleshooting code-based bugs
  • Familiarity with cloud platforms (preferably Azure)
  • Experience with Databricks, Snowflake, or similar data platforms
  • Knowledge of relational databases, with proficiency in SQL
  • Experience using Apache Spark
  • Experience in creating and maintaining structured documentation
  • Proficiency in utilizing testing frameworks to ensure code reliability and maintainability
  • Experience with Gitlab or equivalent tools
  • B2 level or higher English proficiency
  • Strong collaboration abilities, experience in an international team environment, willing to learn new skills and tools, adaptive and exploring mindset
Responsibilities:
  • Design, build, and maintain data pipelines using Python
  • Collaborate with an international team to develop scalable data solutions
  • Conduct in-depth analysis and debugging of system bugs (Tier 2)
  • Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
  • Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
  • Write integration tests to ensure the quality and reliability of data services
  • Utilize Databricks for data processing and management
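
A minimal sketch of the integration-testing responsibility above, using pytest; `run_pipeline`, its module, and the sample data are assumed names for illustration.

```python
# Hypothetical integration test for a pipeline stage.
import pandas as pd
import pytest

from pipeline import run_pipeline  # assumed module under test


@pytest.fixture
def sample_events() -> pd.DataFrame:
    return pd.DataFrame({"tenant_id": ["a", "a", "b"], "amount": [10.0, 5.0, 7.5]})


def test_pipeline_aggregates_per_tenant(sample_events):
    result = run_pipeline(sample_events)
    # Every tenant in the input appears exactly once in the output.
    assert sorted(result["tenant_id"]) == ["a", "b"]
    # Totals are preserved end to end.
    assert result["amount"].sum() == pytest.approx(22.5)
```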

Docker, Python, SQL, Cloud Computing, Data Analysis, ETL, Git, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, RDBMS, REST API, Pandas, CI/CD, Documentation, Microservices, Debugging

Posted 21 days ago
Apply

πŸ“ UK

🧭 Full-Time

πŸ” Technology, Data Engineering

🏒 Company: Aker Systems πŸ‘₯ 101-250 πŸ’° over 4 years ago | Cloud Data Services, Business Intelligence, Analytics, Software

Requirements:
  • Data pipeline development using processing technologies
  • Experience in Public Cloud services, especially AWS
  • Configuring and tuning Relational and NoSQL databases
  • Programming with Python
Responsibilities:
  • Code, test, and document data pipelines
  • Conduct database design
  • Expand data platform capabilities
  • Perform data analysis and root cause analysis

AWS, Python, Data modeling

Posted 27 days ago
Apply

πŸ“ Poland

🧭 Full-Time

πŸ” Data Engineering

🏒 Company: Softeta

Requirements:
  • Advanced degree in computer science, mathematics, data science, or other related fields
  • 5 years of experience working as a data engineer
  • Proficiency with Airflow, dbt, Dataflow, or similar products
  • Strong knowledge of data structures and data modeling
  • CI/CD pipeline and MLOps experience is very advantageous
  • Experience with very large data sets is advantageous
  • Experience with cloud data platforms is essential; specific experience with GCP / BigQuery is advantageous
Responsibilities:
  • Create and maintain pipeline architectures in Airflow and dbt
  • Assemble large and/or complex datasets for business requirements
  • Improve our own processes and infrastructure for scale, delivery and automation
  • Maintain and improve our data warehouse structure so that it is fit for purpose
  • Adjust methods, queries and techniques to suit our very large data environment
  • Adopt best-practice coding and review processes
  • Communicate technical details and edge cases in the data to specialist and non-specialist stakeholders
  • Notice, investigate, resolve and communicate about anomalies in the data
  • Develop and maintain brief, relevant documentation for data products
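
A minimal sketch of an Airflow DAG orchestrating dbt, in the spirit of the pipeline-architecture bullet above; the DAG id, schedule, and shell commands are assumptions.

```python
# Hypothetical daily DAG: ingest, then run and test dbt models.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(task_id="ingest", bash_command="python ingest.py")
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run")
    test = BashOperator(task_id="dbt_test", bash_command="dbt test")

    ingest >> transform >> test
```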

SQL, GCP, Airflow, Data engineering, CI/CD, Data modeling

Posted 27 days ago
Apply
πŸ”₯ Senior Data Engineer
Posted about 1 month ago

πŸ“ Spain

πŸ’Έ 80,000 - 110,000 EUR per year

πŸ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficiency in SQL and DBT for data transformations.
  • Fluency in Python or other modern programming languages.
  • Experience with infrastructure as code languages like Terraform.
  • Knowledge of data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS or other cloud platforms like Azure or GCP.
  • Ability to provide constructive code reviews.
  • Strong communication and collaboration skills.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate with teams across engineering, analytics, and product to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review teammates' work and offer feedback.
  • Serve as a technical mentor for other engineers.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation.

AWS, Python, SQL, Terraform, Data modeling

Posted about 1 month ago
Apply
πŸ”₯ Senior Data Engineer
Posted about 2 months ago

πŸ“ Worldwide

πŸ” Event technology

Requirements:
  • Experience in data engineering and building data pipelines.
  • Proficiency in programming languages like Python, Java, or Scala.
  • Familiarity with cloud platforms and data architecture design.
Responsibilities:
  • Design and develop data solutions to enhance the functionality of the platform.
  • Implement efficient data pipelines and ETL processes.
  • Collaborate with cross-functional teams to define data requirements.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted about 2 months ago
Apply
πŸ”₯ Senior Data Engineer
Posted about 2 months ago

πŸ“ Philippines, Spain, Germany, France, Italy

πŸ” Fintech, Healthcare, EdTech, Construction, Hospitality

🏒 Company: Intellectsoft πŸ‘₯ 251-500 | Augmented Reality, Artificial Intelligence (AI), DevOps, Blockchain, Internet of Things, UX Design, Web Development, Mobile Apps, Quality Assurance, Software

Requirements:
  • Proficiency in SQL for data manipulation and querying large datasets.
  • Strong experience with Python for data processing and scripting.
  • Expertise in PySpark for distributed data processing and big data workflows.
  • Hands-on experience with Airflow for workflow orchestration and automation.
  • Deep understanding of Database Management Systems (DBMS), including design, optimization, and maintenance.
  • Solid knowledge of data modeling, ETL pipelines, and data integration.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
Responsibilities:
  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Build and optimize large-scale data processing frameworks using PySpark.
  • Create workflows and automate processes using Apache Airflow.
  • Manage, monitor, and enhance database performance and integrity.
  • Collaborate with cross-functional teams, including data analysts, scientists, and stakeholders, to understand data needs.
  • Ensure data quality, reliability, and compliance with industry standards.
  • Troubleshoot, debug, and optimize data pipelines and workflows.
  • Continuously evaluate and integrate new tools and technologies to enhance data infrastructure.
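
A minimal sketch of the PySpark batch processing described above; the input/output paths and column names are illustrative assumptions.

```python
# Hypothetical PySpark job: aggregate completed orders into a daily table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

orders = spark.read.parquet("s3://raw/orders/")  # assumed input path
daily = (
    orders
    .where(F.col("status") == "completed")
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(
        F.count("*").alias("orders"),
        F.sum("amount").alias("revenue"),
    )
)
daily.write.mode("overwrite").partitionBy("day").parquet("s3://curated/orders_daily/")
```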

AWS, Python, SQL, Apache Airflow, ETL, GCP, Azure, Data modeling

Posted about 2 months ago
Apply
πŸ”₯ Senior Data Engineer
Posted about 2 months ago

πŸ“ South Africa, Mauritius, Kenya, Nigeria

πŸ” Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years of experience building and optimizing β€˜big data’ data pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable β€˜big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD processes for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build and maintain highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance as well as upkeep of overall maintenance of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practice is implemented and maintained on database.
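
A minimal sketch of the automated-monitoring responsibility above: a row-count consistency check between a source PostgreSQL database and the warehouse. Connection strings and table names are assumptions.

```python
# Hypothetical source-vs-warehouse consistency check.
import psycopg2


def row_count(dsn: str, table: str) -> int:
    # Table names here are trusted constants, not user input.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]


source = row_count("postgresql://source-db/app", "public.orders")
target = row_count("postgresql://warehouse/analytics", "staging.orders")

if source != target:
    # In production this would raise an alert rather than an exception.
    raise RuntimeError(f"Row count mismatch: source={source}, target={target}")
```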

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted about 2 months ago
Apply