Apply

Data Engineer

Posted 1 day ago

📍 Location: Italy

💸 Salary: 40,000 - 60,000 EUR per year

🔍 Industry: Fintech

🏢 Company: Qomodo

🗣️ Languages: English

🪄 Skills: PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Data engineering, Data modeling

Requirements:
  • Experience in the design and development of scalable data pipelines
  • Excellent knowledge of SQL and relational databases (we use PostgreSQL)
  • You like Python and you're on friendly terms with PySpark!
  • Familiarity with workflow orchestration tools (we use Airflow and Glue Workflow)
  • Knowledge of cloud services for data management (we mainly use AWS Glue and Athena)
  • Experience with ETL/ELT tools and data modeling practices
  • Understanding of best practices for data governance, quality and data security
Responsibilities:
  • Model data in a way that makes it easy for the business to extract insights
  • Create robust and scalable pipelines to support analysis, reporting, and decision-making (a minimal sketch follows below)
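
To make the stack concrete, here is a minimal sketch of the kind of Airflow-orchestrated PostgreSQL pipeline this role describes. The connection id, table names, and schedule are hypothetical, and operator import paths vary slightly across Airflow versions:

```python
# Hypothetical daily ETL: extract today's payments from PostgreSQL,
# filter them, and load the result into a reporting table.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook


def extract_and_load():
    # "warehouse" is a placeholder Airflow connection id.
    hook = PostgresHook(postgres_conn_id="warehouse")
    rows = hook.get_records(
        "SELECT id, amount, paid_at FROM payments WHERE paid_at >= CURRENT_DATE"
    )
    settled = [r for r in rows if r[1] is not None]  # toy transformation
    hook.insert_rows(table="daily_settled_payments", rows=settled)


with DAG(
    dag_id="daily_payments_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # "schedule_interval" on older Airflow versions
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```
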
Apply

Related Jobs

Apply

📍 Worldwide

🧭 Full-Time

🔍 AI

🏢 Company: ElevenLabs · 👥 101-250 · 💰 $180,000,000 Series C about 1 month ago · Artificial Intelligence (AI), Developer APIs, Content Creators, Generative AI

  • A track record of partnering with RevOps and Finance teams to translate business challenges into data-driven solutions, ensuring alignment on key performance metrics.
  • Proficiency with tools across the modern data stack (Python, SQL, BI tools, dbt)
  • Familiarity with Salesforce, Gong, Stripe, and NetSuite APIs
  • Develop robust ETL processes that integrate data from various sources (CRM, ERP, marketing platforms, financial systems) to ensure that RevOps and Finance have reliable, timely data.
  • Implement automated data validation and cleansing processes to maintain high-quality datasets, reducing errors that could impact financial reporting or revenue forecasting (see the sketch below).
  • Create and maintain data models that drive key performance indicators (KPIs) for revenue operations and finance.
  • Streamline regular reporting tasks by automating data extractions and report generation, ensuring stakeholders have access to real-time insights.
  • Maintain thorough documentation of data pipelines, models, and analytical methodologies to facilitate transparency and ensure consistency across teams.
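
Since the listing leans on automated validation and cleansing, here is a hedged pandas sketch of such a step; the column names and rules are assumptions, not the company's actual schema:

```python
# Illustrative cleansing step for revenue data; columns are hypothetical.
import pandas as pd


def validate_revenue_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that would corrupt revenue reporting and normalize the rest."""
    df = df.dropna(subset=["account_id", "amount", "closed_at"])
    # Amounts must be numeric and non-negative.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df[df["amount"] >= 0]
    # Timestamps must parse; invalid ones become NaT and are dropped.
    df["closed_at"] = pd.to_datetime(df["closed_at"], errors="coerce", utc=True)
    return df.dropna(subset=["closed_at"])
```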

Python, SQL, Apache Airflow, Data Analysis, ETL, Data engineering, RESTful APIs, Data visualization, CRM, Data modeling, Finance, Data management

Posted 15 days ago
Apply
🔥 Senior Data Engineer
Posted 21 days ago

📍 Worldwide

🧭 Full-Time

💸 167,471 USD per year

🔍 Software Development

🏢 Company: Float.com

  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and JavaScript/TypeScript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink); a consumer sketch follows this list.
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
  • Lead technical viability discussions
  • Develop and test proofs of concept for this project.
  • Conduct a comprehensive analysis of existing data to uncover patterns, identify optimization opportunities, and support the squad’s next deliveries.
  • Evaluate our data streaming pipeline.
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability.
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.
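
For the stream-processing requirement, a minimal consumer sketch using the kafka-python client; the topic name, fields, and threshold are hypothetical, and the production pipeline (Kafka, Debezium, Flink) would be considerably richer:

```python
# Toy pattern detection over a stream of scheduling events.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "schedule-events",  # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Flag allocations that exceed a person's weekly capacity.
    if event.get("allocated_hours", 0) > event.get("capacity_hours", 40):
        print(f"overallocation detected for {event.get('person_id')}")
```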

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted 21 days ago
Apply
🔥 Senior Data Engineer
Posted 24 days ago

📍 Europe, APAC, Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Docker · 👥 251-500 · 💰 $105,000,000 Series C almost 3 years ago · Developer Tools, Developer Platform, Information Technology, Software

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL (see the sketch below)
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
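
As an illustration of "ETL scripts using Python and SQL" against Snowflake, a hedged sketch; the account, credentials, and table names are placeholders:

```python
# Incremental load expressed in SQL, executed from Python via the
# Snowflake connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="...",         # use a secrets manager in practice
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    conn.cursor().execute(
        """
        INSERT INTO events_clean
        SELECT id, user_id, event_type, event_ts
        FROM raw_events
        WHERE event_ts > (SELECT COALESCE(MAX(event_ts), '1970-01-01') FROM events_clean)
        """
    )
finally:
    conn.close()
```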

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted 24 days ago
Apply
🔥 Data Engineer
Posted about 1 month ago

📍 Worldwide

🧭 Full-Time

🔍 Decentralized Computing

🏢 Company: io.net · 👥 11-50 · 💰 $30,000,000 Series A about 1 year ago · Cloud Computing, Information Technology, Cloud Infrastructure, GPU

  • Strong programming skills in Python or Java.
  • Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
  • Knowledge of data pipeline tools like Apache Airflow, Spark, or similar.
  • Familiarity with cloud-based data warehouses (e.g., Redshift, Snowflake).
  • Design and build scalable ETL pipelines to handle large volumes of data.
  • Develop and maintain data models and optimize database schemas (a DDL sketch follows this list).
  • Work with real-time data processing frameworks like Kafka.
  • Ensure data quality, consistency, and reliability across systems.
  • Collaborate with backend engineers and data scientists to deliver insights.
  • Monitor and troubleshoot data workflows to ensure high availability.
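
For the schema-design responsibility, a small sketch of applying DDL from Python with psycopg2; the table, index, and connection string are illustrative only:

```python
import psycopg2  # pip install psycopg2-binary

DDL = """
CREATE TABLE IF NOT EXISTS gpu_usage (
    node_id     TEXT        NOT NULL,
    sampled_at  TIMESTAMPTZ NOT NULL,
    utilization NUMERIC(5, 2),
    PRIMARY KEY (node_id, sampled_at)
);
-- Index to speed up the common "recent activity" query.
CREATE INDEX IF NOT EXISTS idx_gpu_usage_recent
    ON gpu_usage (sampled_at DESC);
"""

with psycopg2.connect("dbname=metrics user=etl") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
```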

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Data engineering, Data modeling

Posted about 1 month ago
Apply
🔥 Digital Health Data Engineer
Posted about 1 month ago

📍 Italy, Poland, Spain, Hungary, Sweden

🧭 Contract

🔍 Digital Health

🏢 Company: Axiom Software Solutions Limited

  • 5 years of industry experience with a bachelor's degree, or 3 years with a master's, in a relevant field
  • Proficient in Python
  • Experience with SQL, PySpark, Dask (a PySpark sketch follows this list)
  • Knowledge of AWS, Azure, GCP
  • Familiarity with machine learning for large datasets
  • Design, build, and maintain data pipelines
  • Utilize large language models for digital health applications
  • Provide Python expertise and drive coding best practices
  • Manage and optimize cloud infrastructure
  • Implement generative AI technologies
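
A minimal PySpark sketch of the distributed processing this role lists; the S3 paths and columns are assumptions:

```python
# Daily roll-up of (hypothetical) patient vitals with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vitals-rollup").getOrCreate()

vitals = spark.read.parquet("s3://example-bucket/vitals/")  # placeholder path
daily = (
    vitals
    .withColumn("day", F.to_date("measured_at"))
    .groupBy("patient_id", "day")
    .agg(F.avg("heart_rate").alias("avg_heart_rate"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/vitals_daily/")
```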

AWS, Docker, PostgreSQL, Python, SQL, Cloud Computing, Machine Learning, Tableau, Azure, Data visualization

Posted about 1 month ago
Apply

📍 Europe

🧭 Full-Time

🔍 DeFi, Staking

🏢 Company: P2P.org

  • Strong knowledge of Python and SQL (preferably BigQuery, ClickHouse).
  • Experience with Airflow is mandatory.
  • Experience with Kubernetes is a plus.
  • General understanding and experience with GCP (Cloud SQL, VM, Storage).
  • Friendly and willing to help colleagues.
  • English language proficiency at B2 level or higher.
  • Perform technical and business tasks from analysts related to core tools.
  • Participate in code reviews of analysts and identify suboptimal processes.
  • Monitor load and alerts from services (an alerting sketch follows this list).
  • Interact with DevOps team on services and support tasks.
  • Maintain security and compliance standards.
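
One way to cover the "monitor load and alerts" duty is an Airflow failure callback that posts to a webhook; this is a sketch under that assumption, with a placeholder URL:

```python
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.empty import EmptyOperator

WEBHOOK_URL = "https://hooks.example.com/data-alerts"  # placeholder


def notify_failure(context):
    # Called by Airflow with the task context when a task fails.
    ti = context["task_instance"]
    requests.post(
        WEBHOOK_URL,
        json={"dag": ti.dag_id, "task": ti.task_id, "when": str(context["logical_date"])},
        timeout=10,
    )


with DAG(
    dag_id="core_tools_sync",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
    on_failure_callback=notify_failure,
) as dag:
    EmptyOperator(task_id="placeholder_work")
```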

Python, SQL, Apache Airflow, GCP, Kubernetes, ClickHouse

Posted about 1 month ago
Apply
🔥 Principal Data Engineer (m/f/d)
Posted about 2 months ago

📍 Europe

🧭 Full-Time

🔍 Supply Chain Risk Analytics

🏢 Company: Everstream Analytics · 👥 251-500 · 💰 $50,000,000 Series B almost 2 years ago · Productivity Tools, Artificial Intelligence (AI), Logistics, Machine Learning, Risk Management, Analytics, Supply Chain Management, Procurement

  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, OpenSearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar (see the sketch below).
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.
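
For the lakehouse responsibility, a hedged sketch using the AWS SDK for pandas (awswrangler), which writes partitioned Parquet to S3 and registers the table in the Glue catalog; bucket, database, and table names are placeholders:

```python
import awswrangler as wr  # pip install awswrangler
import pandas as pd

shipments = pd.DataFrame(
    {"shipment_id": [1, 2], "lane": ["CN-DE", "US-MX"], "risk_score": [0.12, 0.87]}
)

# Partitioned Parquet on S3, catalogued in Glue so Athena can query it.
wr.s3.to_parquet(
    df=shipments,
    path="s3://example-lake/shipments/",  # placeholder bucket
    dataset=True,
    database="risk_analytics",            # placeholder Glue database
    table="shipments",
    partition_cols=["lane"],
)
```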

AWS, PostgreSQL, Python, SQL, ETL, Apache Kafka, NoSQL, Spark, Data modeling

Posted about 2 months ago
Apply
🔥 Senior Data Engineer
Posted about 2 months ago

📍 Worldwide

🔍 Event technology

  • Experience in data engineering and building data pipelines.
  • Proficiency in programming languages like Python, Java, or Scala.
  • Familiarity with cloud platforms and data architecture design.
  • Design and develop data solutions to enhance the functionality of the platform.
  • Implement efficient data pipelines and ETL processes.
  • Collaborate with cross-functional teams to define data requirements.

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted about 2 months ago
Apply
🔥 Senior Data Engineer
Posted about 2 months ago

📍 Philippines, Spain, Germany, France, Italy

🔍 Fintech, Healthcare, EdTech, Construction, Hospitality

🏢 Company: Intellectsoft · 👥 251-500 · Augmented Reality, Artificial Intelligence (AI), DevOps, Blockchain, Internet of Things, UX Design, Web Development, Mobile Apps, Quality Assurance, Software

  • Proficiency in SQL for data manipulation and querying large datasets.
  • Strong experience with Python for data processing and scripting.
  • Expertise in pySpark for distributed data processing and big data workflows.
  • Hands-on experience with Airflow for workflow orchestration and automation.
  • Deep understanding of Database Management Systems (DBMS), including design, optimization, and maintenance.
  • Solid knowledge of data modeling, ETL pipelines, and data integration.
  • Familiarity with cloud platforms such as AWS, GCP, or Azure.
  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Build and optimize large-scale data processing frameworks using PySpark.
  • Create workflows and automate processes using Apache Airflow (a sketch follows this list).
  • Manage, monitor, and enhance database performance and integrity.
  • Collaborate with cross-functional teams, including data analysts, scientists, and stakeholders, to understand data needs.
  • Ensure data quality, reliability, and compliance with industry standards.
  • Troubleshoot, debug, and optimize data pipelines and workflows.
  • Continuously evaluate and integrate new tools and technologies to enhance data infrastructure.
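
A sketch of the Airflow-plus-PySpark pairing this listing emphasizes, using the Spark provider's SparkSubmitOperator; the connection id and script path are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="nightly_spark_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    SparkSubmitOperator(
        task_id="run_transform",
        conn_id="spark_default",               # placeholder Spark connection
        application="/opt/jobs/transform.py",  # placeholder PySpark script
        application_args=["--date", "{{ ds }}"],
    )
```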

AWS, Python, SQL, Apache Airflow, ETL, GCP, Azure, Data modeling

Posted about 2 months ago
Apply
🔥 Senior Data Engineer
Posted 2 months ago

📍 South Africa, Mauritius, Kenya, Nigeria

🔍 Technology, Marketplaces

  • BSc degree in Computer Science, Information Systems, Engineering, or related technical field or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years experience building and optimizing ‘big data’ data pipelines, architectures and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable ‘big data’ datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
  • Suggest and implement internal process improvements that automate manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD process for optimal data pipelines.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines and ensure data consistency (a pytest sketch follows this list).
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance as well as upkeep of overall maintenance of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practice is implemented and maintained on database.
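
For the unit-testing responsibility, a minimal pytest sketch against a pure transformation function; the function and data are illustrative:

```python
# Run with: pytest test_dedupe.py
import pandas as pd


def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the most recent row per order_id."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates(subset="order_id", keep="last")
          .reset_index(drop=True)
    )


def test_dedupe_latest_keeps_newest_row():
    df = pd.DataFrame(
        {
            "order_id": [1, 1, 2],
            "status": ["pending", "shipped", "pending"],
            "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
        }
    )
    out = dedupe_latest(df)
    assert out.loc[out.order_id == 1, "status"].item() == "shipped"
    assert len(out) == 2
```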

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 2 months ago
Apply