Senior Data Engineer

Posted 25 days ago

πŸ’Ž Seniority level: Senior, 5+ years

πŸ“ Location: Spain

πŸ’Έ Salary: 80,000 - 110,000 EUR per year

πŸ” Industry: Financial services

πŸ—£οΈ Languages: English

⏳ Experience: 5+ years

πŸͺ„ Skills: AWS, Python, SQL, GCP, Azure, Data engineering, Collaboration, Terraform, Data modeling

Requirements:
  • 5+ years of professional experience in Data Engineering or similar roles.
  • Proficient in SQL and dbt for data transformations (see the sketch after this list).
  • Fluent in Python or other modern programming languages.
  • Experience with infrastructure-as-code tools such as Terraform.
  • Experienced in data pipelines, data modeling, data warehouse technologies, and cloud infrastructures.
  • Experience with AWS and/or other cloud providers like Azure or GCP.
  • Strong cross-team communication and collaboration skills.
  • Ability to thrive in ambiguous situations.
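
Illustrative only, not part of the posting: a minimal sketch of the dbt-style SQL transformation skill named above, using Python's built-in sqlite3 as a stand-in for the real warehouse and dbt runner; all table and column names are hypothetical.

```python
# Minimal sketch: a dbt-style staging transformation, with sqlite3
# standing in for the warehouse. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT);
    INSERT INTO raw_orders VALUES (1, 120.0, 'paid'), (2, 35.5, 'refunded');
""")
# A dbt model is essentially a SELECT materialized as a table or view.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT id, amount FROM raw_orders WHERE status = 'paid'
""")
print(conn.execute("SELECT * FROM stg_orders").fetchall())  # [(1, 120.0)]
```
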
Responsibilities:
  • Work with engineering managers and tech leads to identify and plan projects based on team goals.
  • Collaborate closely with tech leads, managers, and cross-functional teams to deliver technology for analytical use cases.
  • Write high-quality, understandable code.
  • Review other engineers' work, providing constructive feedback.
  • Act as a technical resource and mentor for engineers inside and outside the team.
  • Promote a respectful and supportive team environment.
  • Participate in on-call rotation as required.

Related Jobs

πŸ“ Colombia, Spain, Ecuador, Venezuela, Argentina

πŸ” HR Tech

🏒 Company: Jobgether · πŸ‘₯ 11-50 · πŸ’° $1,493,585 Seed almost 2 years ago · Internet

Requirements:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • Minimum of 5 years of experience in data engineering.
  • 5 years of experience in Python programming.
  • Hands-on experience with big data technologies like Hadoop, Spark, or Kafka (see the sketch after this list).
  • Proficiency with MySQL, PostgreSQL, and MongoDB.
  • Experience with AWS cloud platforms.
  • Strong understanding of data modeling and warehousing concepts.
  • Excellent analytical and problem-solving skills.
  • Fluency in English and Spanish.
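
Illustrative only, not part of the posting: a minimal Kafka producer sketch using the kafka-python package, assuming a broker at localhost:9092; the topic name and payload are hypothetical.

```python
# Minimal sketch: publish JSON records to Kafka with kafka-python.
# Assumes a broker at localhost:9092; topic and payload are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("scraped-listings", {"id": 1, "status": "extracted"})
producer.flush()  # block until the record is acknowledged
```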

Responsibilities:
  • Design, build, and maintain scalable data pipelines and ETL processes.
  • Develop and optimize data scraping and extraction solutions.
  • Collaborate with data scientists to implement AI-driven algorithms.
  • Ensure data integrity and reliability with validation mechanisms.
  • Analyze and optimize system performance.
  • Deploy machine learning models into production.
  • Stay updated on emerging technologies in data engineering.
  • Work with cross-functional teams to define data requirements.
  • Develop and maintain comprehensive documentation.

AWS, Docker, PostgreSQL, Python, ETL, Hadoop, Kafka, Kubernetes, Machine Learning, MongoDB, MySQL, Spark, CI/CD, Data modeling

Posted 3 days ago

πŸ“ United States, United Kingdom, Spain, Estonia

πŸ” Identity verification

🏒 Company: Veriff · πŸ‘₯ 501-1000 · πŸ’° $100,000,000 Series C almost 3 years ago · πŸ«‚ Last layoff over 1 year ago · Artificial Intelligence (AI), Fraud Detection, Information Technology, Cyber Security, Identity Management

Requirements:
  • Expert-level knowledge of SQL, particularly with Redshift.
  • Strong experience in data modeling with an understanding of dimensional data modeling best practices.
  • Proficiency in data transformation frameworks like dbt.
  • Solid programming skills in languages used in data engineering, such as Python or R.
  • Familiarity with orchestration frameworks like Apache Airflow or Luigi (see the sketch after this list).
  • Experience with data from diverse sources including RDBMS and APIs.
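
Illustrative only, not part of the posting: a minimal Airflow 2.x DAG sketch for the orchestration requirement above; the dag_id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal sketch: a two-task Airflow 2.x DAG. The dag_id, schedule,
# and task bodies are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("build dimensional models")

with DAG(dag_id="warehouse_refresh", start_date=datetime(2024, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```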

Responsibilities:
  • Collaborate with business stakeholders to design, document, and implement robust data models.
  • Build and optimize data pipelines to transform raw data into actionable insights.
  • Fine-tune query performance and ensure efficient use of data warehouse infrastructure.
  • Ensure data reliability and quality through rigorous testing and monitoring.
  • Assist in migrating from batch processing to real-time streaming systems.
  • Expand support for various use cases including business intelligence and analytics.

Python, SQL, Apache Airflow, ETL, Data engineering, JSON, Data modeling

Posted 10 days ago
πŸ”₯ Senior Data Engineer
Posted about 1 month ago

πŸ“ Belgium, Spain

πŸ” Hospitality industry

🏒 Company: Lighthouse · πŸ‘₯ 501-1000 · πŸ’° $370,000,000 Series C about 1 month ago · Hospitality, Business Intelligence, SaaS, Analytics, Information Technology, Hotel, Software

Requirements:
  • 5+ years of professional experience using Python, Java, or Scala for data processing (Python preferred).
  • Experience writing data processing pipelines and working with cloud platforms like AWS, GCP, or Azure.
  • Experience with data pipeline orchestration tools like Apache Airflow (preferred), Dagster, or Prefect.
  • Deep understanding of data warehousing strategies.
  • Experience with transformation tools like dbt to manage data transformation in your data pipelines.
  • Some experience managing infrastructure with IaC tools like Terraform.
  • Stay updated with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance, both in code you submit and code you review.
  • Ship large features independently and generate architecture recommendations, with the ability to implement them.
  • Strong communicator who can describe complex topics simply to a variety of technical and non-technical stakeholders.

Responsibilities:
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack (see the sketch after this list).
  • Ingest, process, and store structured and unstructured data from various sources into our data-lakes and data warehouses.
  • Optimise data pipelines for cost, performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Mentor and provide technical guidance to other engineers working with data.
  • Partner with Product, Engineering & Data Science teams to operationalise new solutions.
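
Illustrative only, not part of the posting: a minimal sketch of the Google Cloud pipeline work named above, running an aggregation through the google-cloud-bigquery client, assuming default credentials are configured; the project, dataset, and column names are hypothetical.

```python
# Minimal sketch: run an aggregation on BigQuery with the official
# google-cloud-bigquery client. Assumes default credentials; the
# project, dataset, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT hotel_id, DATE(checkin_ts) AS day, COUNT(*) AS bookings
    FROM `my-project.raw.bookings`
    GROUP BY hotel_id, day
"""
for row in client.query(query).result():
    print(row.hotel_id, row.day, row.bookings)
```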

Python, Apache Airflow, GCP, Java, Kafka, Kubernetes, Airflow, Data engineering, Grafana, Prometheus, Spark, CI/CD, Terraform, Documentation, Compliance, Scala

πŸ”₯ Senior Data Engineer
Posted about 2 months ago

πŸ“ Any European country

🧭 Full-Time

πŸ” Software development

🏒 Company: Janea Systems

Requirements:
  • Proven experience as a data engineer, preferably with at least 3 years of relevant experience.
  • Experience designing cloud native solutions and implementations with Kubernetes.
  • Experience with Airflow or similar pipeline orchestration tools.
  • Strong Python programming skills.
  • Experience collaborating with Data Science and Engineering teams in production environments.
  • Solid understanding of SQL and relational data modeling schemas.
  • Experience with Databricks or Spark preferred (see the sketch after this list).
  • Familiarity with modern data stack design and data lifecycle management.
  • Experience with distributed systems, microservices architecture, and cloud platforms like AWS, Azure, Google Cloud.
  • Excellent problem-solving skills and strong communication skills.
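
Illustrative only, not part of the posting: a minimal PySpark batch aggregation sketch for the Spark/Databricks preference above; the input path and column names are hypothetical.

```python
# Minimal sketch: a PySpark batch aggregation. The input path and
# column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()
events = spark.read.json("/data/events/")  # hypothetical input path
daily = (events
         .withColumn("day", F.to_date("event_ts"))  # hypothetical column
         .groupBy("day")
         .count())
daily.write.mode("overwrite").parquet("/data/daily_counts/")
```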

Responsibilities:
  • Develop and maintain data pipelines using Databricks, Airflow, or similar orchestration systems.
  • Design and implement cloud-native solutions using Kubernetes for high availability.
  • Gather product data requirements and implement solutions to ingest and process data for applications.
  • Collaborate with Data Science and Engineering teams to optimize production-ready applications.
  • Curate data from various sources for data scientists and maintain documentation.
  • Design modern data stack for data scientists and ML engineers.

AWS, Python, Software Development, SQL, Kubernetes, Airflow, Azure, Data science, Spark, Collaboration

πŸ”₯ Lead/Senior Data Engineer
Posted about 2 months ago

πŸ“ UK, EU

πŸ” Consultancy

🏒 Company: The Dot Collective · πŸ‘₯ 11-50 · Cloud Computing, Analytics, Information Technology

Requirements:
  • Advanced knowledge of distributed computing with Spark.
  • Extensive experience with AWS data offerings such as S3, Glue, and Lambda (see the sketch after this list).
  • Ability to build CI/CD processes, including Infrastructure as Code (e.g., Terraform).
  • Expert Python and SQL skills.
  • Agile ways of working.
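
Illustrative only, not part of the posting: a minimal boto3 sketch for the AWS requirement above, paging through objects under an S3 prefix, assuming AWS credentials are configured; the bucket and prefix are hypothetical.

```python
# Minimal sketch: page through objects under an S3 prefix with boto3.
# Assumes AWS credentials are configured; bucket/prefix are hypothetical.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="data-lake", Prefix="raw/2024/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```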

Responsibilities:
  • Leading a team of data engineers.
  • Designing and implementing cloud-native data platforms.
  • Owning and managing the technical roadmap.
  • Engineering well-tested, scalable, and reliable data pipelines.

AWS, Python, SQL, Agile, SCRUM, Spark, Collaboration, Agile methodologies

πŸ“ England, Portugal, Spain

πŸ” Automotive

🏒 Company: Carwow

Requirements:
  • Have significant experience in software or data engineering, preferably in a senior or lead role.
  • Are highly proficient in writing Python and SQL.
  • Have extensive experience building and optimizing complex ETL/ELT data pipelines.
  • Have used data transformation tools like dbt.
  • Are skilled in managing and optimizing script dependencies with tools like Airflow.
  • Have substantial experience designing and maintaining data warehouses using Snowflake or similar technologies (see the sketch after this list).
  • Experience leading and mentoring junior data engineers, and guiding teams through complex projects.
  • Ability to contribute to the strategic direction of data engineering practices and technologies within the organisation.
  • Nice to have: experience with Terraform, Ruby, data visualization tools (e.g., Looker, Tableau, Power BI), Amplitude, DevOps, Heroku, Kafka, AWS/GCP, etc.
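
Illustrative only, not part of the posting: a minimal sketch of querying Snowflake with the official snowflake-connector-python package; the account, credentials, warehouse, and table names are hypothetical placeholders.

```python
# Minimal sketch: query Snowflake with snowflake-connector-python.
# Account, credentials, warehouse, and table are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ANALYTICS_WH", database="PROD", schema="MARTS",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT listing_id, price FROM listings LIMIT 10")
    for listing_id, price in cur:
        print(listing_id, price)
finally:
    conn.close()
```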

Responsibilities:
  • Leading the design, development, and maintenance of robust ETL/ELT data pipelines.
  • Writing, optimizing, and reviewing advanced SQL queries for data extraction and transformation.
  • Implementing and managing sophisticated data workflows and dependencies using tools like Airflow.
  • Designing, building, and maintaining advanced data models and data warehouses using Snowflake or similar technologies.
  • Collaborating with cross-functional teams to understand complex data requirements and deliver efficient solutions.
  • Ensuring high data quality, integrity, and security across the data lifecycle.
  • Continuously improving data engineering processes and infrastructure, and implementing best practices.

Python, SQL, ETL, Ruby, Ruby on Rails, Snowflake, Airflow, Data engineering, Collaboration

Posted 3 months ago

πŸ“ Central EU or Americas

🧭 Full-Time

πŸ” Real estate investment

🏒 Company: Roofstock · πŸ‘₯ 501-1000 · πŸ’° $240,000,000 Series E almost 3 years ago · πŸ«‚ Last layoff almost 2 years ago · Rental Property, PropTech, Marketplace, Real Estate, FinTech

Requirements:
  • BS or MS in a technical field: computer science, engineering, or similar.
  • 8+ years of technical experience working with data.
  • 5+ years of strong experience building scalable data services and applications using SQL, Python, and Java/Kotlin.
  • Deep understanding of microservices architecture and RESTful API development (see the sketch after this list).
  • Experience with AWS services including messaging and familiarity with real-time data processing frameworks.
  • Significant experience building and deploying data-related infrastructure and robust data pipelines.
  • Strong understanding of data architecture and related challenges.
  • Experience with complex problems and distributed systems focusing on scalability and performance.
  • Strong communication and interpersonal skills.
  • Independent worker able to collaborate with cross-functional teams.
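
Illustrative only, not part of the posting: a minimal FastAPI sketch for the RESTful API requirement above; the route and model are hypothetical, and a real service would query the data platform instead of returning a constant.

```python
# Minimal sketch: a REST endpoint with FastAPI. The route and model
# are hypothetical; a real service would query the data platform.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Property(BaseModel):
    id: int
    address: str

@app.get("/properties/{property_id}", response_model=Property)
def get_property(property_id: int) -> Property:
    return Property(id=property_id, address="123 Main St")

# Run with: uvicorn service:app --reload
```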

Responsibilities:
  • Improve and maintain the data services platform.
  • Deliver high-quality data services promptly, ensuring data governance and integrity while meeting objectives and maintaining SLAs.
  • Develop effective architectures and produce key code components contributing to technical solutions.
  • Integrate a diverse network of third-party tools into a cohesive, scalable platform.
  • Continuously enhance system performance and reliability by diagnosing and resolving operational issues.
  • Ensure rigorous testing of the team's work through automated methods.
  • Support data infrastructure and collaborate with the data team on scalable data pipelines.
  • Work within an Agile/Scrum framework with cross-functional teams to deliver value.
  • Influence the enterprise data platform architecture and standards.

AWS, Docker, Python, SQL, Agile, ETL, SCRUM, Snowflake, Airflow, Data engineering, gRPC, RESTful APIs, Microservices

Posted 4 months ago