Senior Data Engineer

Posted 1 day ago

πŸ’Ž Seniority level: Senior

πŸ“ Location: Germany, Spain, United Kingdom, Austria

πŸ” Industry: Software Development

🏒 Company: LocalStack · πŸ‘₯ 11-50 · πŸ’° $25,000,000 Series A 4 months ago · Cloud Computing · Information Technology · Software

πŸ—£οΈ Languages: English

πŸͺ„ Skills: AWS, Docker, Leadership, Python, SQL, Apache Airflow, ETL, Kafka, Data engineering, Data Structures, REST API, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, Mentoring, Written communication, Data visualization, Team management, Stakeholder management, Data modeling

Requirements:
  • Ability and experience working with non-technical stakeholders to gather requirements
  • Ability to define the technical initiatives required to satisfy business requirements
  • Excellent knowledge of Python
  • Experience designing real-time data ingestion solutions for massive volumes of data
  • (preferred) Experience with AWS services commonly used in Data Engineering (e.g., S3, ECS, Glue, EMR)
  • Experience with relational databases and data warehouses, data orchestration and ingestion tools, SQL, and BI tools
  • (preferred) Experience working remotely in async settings
  • Experience owning initiatives at the IC level
  • Experience providing guidance to junior engineers
Responsibilities:
  • Maintain, monitor, and optimize data ingestion pipelines for our current data platform.
  • Lead the development of our future data platform based on evolving business needs.
  • Shape the data team roadmap and contribute to long-term strategic planning.
  • Take full ownership of data ingestion from external sources, ensuring smooth functionality.
  • Design and implement a robust data modeling and data lake solution architecture.
  • Provide technical leadership and mentorship to the data engineering team.
  • Collaborate with engineering teams to define and refine ingestion pipeline requirements.
  • Work with stakeholders to gather business questions and data needs.
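
The responsibilities above center on owning ingestion from external sources. As a rough illustration only, here is a minimal Python sketch of one such pipeline step, pulling a batch from a REST source and landing it in S3; the endpoint, bucket, and key layout are invented for the example, not LocalStack's actual stack.

```python
import json
from datetime import datetime, timezone

import boto3
import requests

# Hypothetical source endpoint and landing bucket, for illustration only.
SOURCE_URL = "https://api.example.com/v1/events"
BUCKET = "raw-ingestion-bucket"


def ingest_batch() -> str:
    """Fetch one batch from an external REST source and land it in S3 as raw JSON."""
    resp = requests.get(SOURCE_URL, timeout=30)
    resp.raise_for_status()  # fail loudly so pipeline monitoring catches broken sources
    records = resp.json()

    # Partition raw landings by UTC date so downstream jobs can reprocess per day.
    now = datetime.now(timezone.utc)
    key = f"events/dt={now:%Y-%m-%d}/batch-{now:%H%M%S}.json"

    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
    )
    return key
```
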
Related Jobs

πŸ“ Madrid, Barcelona

πŸ” Software Development

🏒 Company: Clarity AI

Requirements:
  • 5+ years in data architecture, data engineering, or a related role, with hands-on experience in data modeling, schema design, and cloud-based data systems
  • Strong Python and SQL skills and proficiency with distributed data stores
  • Proficiency in schema design, data modeling, and building data products
  • Proficient in at least one major programming language, preferably Python, with a software engineering mindset for writing clean, maintainable code
  • Deep understanding of architectural principles in microservices, distributed systems, and data pipeline design
  • Familiarity with containerized environments, public/private API integrations, and security best practices
  • Strong communication and interpersonal skills, with experience working cross-functionally
  • Proven ability to guide teams and drive complex projects to successful completion
  • Self-starter, able to take ownership and initiative, with high energy and stamina
  • Decisive and action-oriented, able to make rapid decisions even with incomplete information
  • Highly motivated, independent, and deeply passionate about sustainability and impact
  • Excellent oral and written English communication skills (minimum C1 level, proficient user)
Responsibilities:
  • Transforming raw data into intuitive, high-quality data models that support analytical and reporting needs across the organization
  • Implementing and monitoring data quality checks to ensure the accuracy, completeness, and reliability of data across all systems
  • Collaborating across functions
  • Working closely with engineering and product teams to understand business requirements, translating them into scalable data solutions
  • Acting as a bridge between technical and non-technical stakeholders, ensuring alignment with strategic goals and effective communication of technical designs
  • Leading initiatives to improve data practices, from schema design to data governance, ensuring data quality, consistency, and security
  • Guiding the team in experimenting with new tools and technologies thoughtfully, focusing on understanding both the benefits and limitations of each option
  • Continuously evolving the data architecture for optimal performance, balancing scalability with cost-efficiency and reliability
  • Applying a pragmatic approach to performance metrics and scaling decisions, ensuring that the system remains performant without unnecessary complexity
  • Implementing performance metrics to monitor system health, proposing improvements where necessary
  • Maintaining comprehensive documentation of data systems, processes, and best practices to facilitate knowledge sharing and compliance
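
As a small illustration of the data quality checks listed above, the sketch below runs a null-rate check through any Python DB-API connection; the threshold, table, and column names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str


def null_rate_check(conn, table: str, column: str, max_null_rate: float = 0.01) -> CheckResult:
    """Flag a column whose share of NULLs exceeds a threshold.

    `table` and `column` must be trusted identifiers, never user input,
    since they are interpolated directly into the SQL text.
    """
    cur = conn.cursor()
    cur.execute(
        f"SELECT AVG(CASE WHEN {column} IS NULL THEN 1.0 ELSE 0.0 END) FROM {table}"
    )
    null_rate = cur.fetchone()[0] or 0.0  # AVG is NULL on an empty table
    return CheckResult(
        name=f"null_rate:{table}.{column}",
        passed=null_rate <= max_null_rate,
        detail=f"null rate {null_rate:.4f} (max {max_null_rate})",
    )
```

A check like this would typically run after each load, with failures routed to monitoring rather than silently ignored.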

Docker, Python, SQL, Apache Airflow, Cloud Computing, Kubernetes, Algorithms, Data engineering, Data Structures, Postgres, REST API, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Agile methodologies, Microservices, Data visualization, Data modeling, Data analytics, Data management

Posted 16 days ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: Docker · πŸ‘₯ 251-500 · πŸ’° $105,000,000 Series C almost 3 years ago · Developer Tools · Developer Platform · Information Technology · Software

Requirements:
  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
Responsibilities:
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
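
The requirements above combine Python, SQL, and a warehouse such as Snowflake or BigQuery. As a sketch only, flavoured for BigQuery, one common ETL/ELT pattern is to push the transformation into the warehouse as a single SQL job; the dataset and table names below are placeholders.

```python
from google.cloud import bigquery

# Placeholder dataset and table names, not any real schema.
TRANSFORM_SQL = """
CREATE OR REPLACE TABLE analytics.daily_active_users AS
SELECT DATE(event_ts) AS day, COUNT(DISTINCT user_id) AS dau
FROM raw.events
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY day
"""


def run_transform() -> None:
    """Run one SQL transformation as a warehouse job; a scheduler such as Airflow would call this."""
    client = bigquery.Client()
    client.query(TRANSFORM_SQL).result()  # block until the job finishes so failures surface
```
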

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 19 days ago

πŸ“ UK

🧭 Full-Time

πŸ” Technology, Data Engineering

🏒 Company: Aker Systems · πŸ‘₯ 101-250 · πŸ’° over 4 years ago · Cloud Data Services · Business Intelligence · Analytics · Software

Requirements:
  • Data pipeline development using data processing technologies
  • Experience in Public Cloud services, especially AWS
  • Configuring and tuning Relational and NoSQL databases
  • Programming with Python
Responsibilities:
  • Code, test, and document data pipelines
  • Conduct database design
  • Expand data platform capabilities
  • Perform data analysis and root cause analysis

AWS, Python, Data modeling

Posted 27 days ago
πŸ”₯ Senior Data Engineer

πŸ“ Spain

πŸ’Έ 80,000 - 110,000 EUR per year

πŸ” Financial services

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficiency in SQL and DBT for data transformations.
  • Fluency in Python or other modern programming languages.
  • Experience with infrastructure as code languages like Terraform.
  • Knowledge of data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS or other cloud platforms like Azure or GCP.
  • Ability to provide constructive code reviews.
  • Strong communication and collaboration skills.
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate with teams across engineering, analytics, and product to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review teammates' work and offer feedback.
  • Serve as a technical mentor for other engineers.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation.
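
Given the SQL/dbt and orchestration skills listed above, one common (and here purely illustrative) pattern is wrapping a dbt run in a Python task so a scheduler can own retries and alerting; the model selector is hypothetical.

```python
import subprocess


def run_dbt_models(select: str) -> None:
    """Invoke the dbt CLI for one model selection; check=True propagates failures to the scheduler."""
    subprocess.run(["dbt", "run", "--select", select], check=True)


# e.g. run_dbt_models("staging+")  # hypothetical selector: a model named `staging` and everything downstream
```
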

AWS, Python, SQL, Terraform, Data modeling

Posted about 1 month ago
πŸ”₯ Senior Data Engineer

πŸ“ Poland, Spain, United Kingdom

πŸ” Beauty marketplace

🏒 Company: Booksy · πŸ‘₯ 501-1000 · πŸ’° Debt Financing 5 months ago · Mobile Payments · Marketplace · SaaS · Payments · Mobile Apps · Wellness · Software

Requirements:
  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar.
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
Responsibilities:
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.
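
Since the stack above names Pub/Sub, here is a minimal sketch of publishing one event from Python with the google-cloud-pubsub client; the project and topic IDs are invented.

```python
import json

from google.cloud import pubsub_v1

# Invented identifiers, for illustration only.
PROJECT_ID = "example-project"
TOPIC_ID = "booking-events"


def publish_event(event: dict) -> str:
    """Publish one JSON event; a Dataflow job or warehouse subscription would consume it."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
    return future.result()  # blocks until the server-assigned message ID is returned
```
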

GCP, Data engineering, CI/CD, Data modeling

Posted about 1 month ago
πŸ”₯ Senior Data Engineer

πŸ“ Worldwide

πŸ” Event technology

Requirements:
  • Experience in data engineering and building data pipelines.
  • Proficiency in programming languages like Python, Java, or Scala.
  • Familiarity with cloud platforms and data architecture design.
Responsibilities:
  • Design and develop data solutions to enhance the functionality of the platform.
  • Implement efficient data pipelines and ETL processes.
  • Collaborate with cross-functional teams to define data requirements.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted about 2 months ago
πŸ”₯ Senior Data Engineer

πŸ“ South Africa, Mauritius, Kenya, Nigeria

πŸ” Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years' experience building and optimizing β€˜big data’ data pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable β€˜big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements, such as automating manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems, with support from the Senior Data Engineer.
  • Test CI/CD processes for optimal data pipelines.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance as well as upkeep of overall maintenance of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practice is implemented and maintained on database.
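
One of the responsibilities above is developing unit tests for data pipelines. A small pytest-style sketch of testing a transformation step in isolation follows; the transform itself is a stand-in written for the example.

```python
def dedupe_latest(rows: list[dict]) -> list[dict]:
    """Stand-in transform: keep the newest row per id (rows carry 'id' and 'updated_at')."""
    latest: dict = {}
    for row in rows:
        if row["id"] not in latest or row["updated_at"] > latest[row["id"]]["updated_at"]:
            latest[row["id"]] = row
    return list(latest.values())


def test_dedupe_keeps_newest_row():
    rows = [
        {"id": 1, "updated_at": "2024-01-01", "v": "old"},
        {"id": 1, "updated_at": "2024-02-01", "v": "new"},
        {"id": 2, "updated_at": "2024-01-15", "v": "only"},
    ]
    out = {r["id"]: r for r in dedupe_latest(rows)}
    assert out[1]["v"] == "new"
    assert len(out) == 2


def test_dedupe_handles_empty_input():
    assert dedupe_latest([]) == []
```
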

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted about 2 months ago