
Data Engineer

Posted 2 months ago


💎 Seniority level: Solid track record

📍 Location: Croatia

🔍 Industry: Software Development

🏢 Company: Inspiration Commerce Group

🗣️ Languages: English

⏳ Experience: Solid track record

🪄 Skills: PostgreSQL, Python, SQL, Cloud Computing, ETL, Data engineering, Analytical Skills, RESTful APIs, Data modeling

Requirements:
  • Expert-level SQL knowledge: you can write and optimize complex queries and tune database performance for large-scale systems (a sketch follows the responsibilities below).
  • Comfortable working with modern database technologies, cloud-based data solutions, ETL/ELT tools, and the data services offered by cloud platforms.
Responsibilities:
  • Assist in designing, implementing, and overseeing scalable data pipelines and architectures across the organization.
  • Build and maintain reliable ETL processes to ensure efficient data ingestion, transformation, and storage.
  • Streamline and manage data flows and integrations across multiple platforms and applications.
  • Work with large-scale event-level data, aggregating and processing it to drive business intelligence and analytics efforts.
  • Continuously assess and adopt new data technologies and tools to improve our data infrastructure and capabilities.
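
Neither the stack nor the schema is named in the posting, so as a rough illustration of the SQL-optimization expectation above, here is a minimal sketch using psycopg2 against a hypothetical Postgres `events` table: it inspects a slow aggregate with EXPLAIN ANALYZE, then adds an index to support it. The table, columns, and connection string are assumptions, not part of the listing.

```python
# Hedged sketch: inspect and speed up a hypothetical aggregate query in Postgres.
# The "events" table, its columns, and the DSN are illustrative assumptions.
import psycopg2

QUERY = """
    SELECT account_id, date_trunc('day', created_at) AS day, count(*) AS events
    FROM events
    WHERE created_at >= now() - interval '30 days'
    GROUP BY account_id, day
"""

with psycopg2.connect("dbname=analytics") as conn:
    with conn.cursor() as cur:
        # Check the current plan; a sequential scan here usually signals a missing index.
        cur.execute("EXPLAIN ANALYZE " + QUERY)
        for (line,) in cur.fetchall():
            print(line)

        # A composite index covering the filter and group keys lets Postgres
        # satisfy the query with an index scan instead of scanning the whole table.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS idx_events_created_account "
            "ON events (created_at, account_id)"
        )
    conn.commit()
```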

Related Jobs


πŸ“ Europe

🧭 Full-Time

πŸ” Fintech

NOT STATED
  • Design our data platform, from data ingestion and validation to transport, storage, and exposure to consumers.
  • Collaborate with other data engineering teams focused on delivering analytics services, sharing technical knowledge and defining best practices, and guiding them in the configuration of core systems such as data warehousing and orchestration.
  • Work hand in hand with machine learning engineers to design and scale automation and intelligence features into the Pennylane application.
  • Partner with software engineers to ensure data quality and integrity, acting as a role model to establish data as a core part of our production systems.
  • Drive optimizations on our core components to maintain an effective balance between scalability and costs.
  • Shape an engineering culture with high standards in the data team by defining best practices, architecture patterns and security measures.

AWS, Python, SQL, Apache Airflow, ETL, Data engineering, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago

πŸ“ Europe

πŸ” Fintech

  • Experience with AWS services
  • Experience with infrastructure as code, batch processing, scheduling, and data warehousing (a DAG sketch follows this list)
  • Understanding of data engineering best practices
  • Design our data platform, from data ingestion and validation to transport, storage, and exposure to consumers.
  • Collaborate with other data engineering teams focused on delivering analytics services, sharing technical knowledge and defining best practices, and guiding them in the configuration of core systems such as data warehousing and orchestration.
  • Work hand in hand with machine learning engineers to design and scale automation and intelligence features into the Pennylane application.
  • Partner with software engineers to ensure data quality and integrity, acting as a role model to establish data as a core part of our production systems.
  • Drive optimizations on our core components to maintain an effective balance between scalability and costs.
  • Shape an engineering culture with high standards in the data team by defining best practices, architecture patterns and security measures.
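
The listing names batch processing and scheduling (and Apache Airflow appears in the skill tags) without describing a concrete workflow, so here is a minimal, hypothetical Airflow DAG showing the kind of daily extract-transform-load scheduling with retries such a role typically involves; the task names and callables are placeholders, not Pennylane's actual pipeline.

```python
# Minimal hypothetical Airflow DAG: a daily batch pipeline with retries.
# Task names and bodies are placeholders; only the scheduling pattern matters here.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    print("pull raw data from the source system")


def transform(**_):
    print("clean and aggregate the extracted data")


def load(**_):
    print("write the results to the warehouse")


default_args = {
    "retries": 3,                        # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_batch_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",          # one run per day
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```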

AWS, PostgreSQL, Python, SQL, Apache Airflow, Data Analysis, ETL, Kafka, Snowflake, Algorithms, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling, Scripting, Data analytics, Data management

Posted 6 days ago

πŸ“ EMEA countries

🧭 Full-Time

πŸ” Mobile Games and Apps

🏒 Company: Voodoo

  • Extensive experience in data or backend engineering, including at least two years building real-time data pipelines.
  • Proficiency with stream processing frameworks like Flink, Spark Structured Streaming, Beam, or similar.
  • Strong programming experience in Java, Scala, or Python, with a focus on distributed systems.
  • Deep understanding of event streaming and messaging platforms such as GCP Pub/Sub, AWS Kinesis, Apache Pulsar, or Kafka, including performance tuning, delivery guarantees, and schema management.
  • Solid experience operating data services in Kubernetes, including Helm, resource tuning, and service discovery.
  • Experience with Protobuf/Avro, and best practices around schema evolution in streaming environments.
  • Familiarity with CI/CD workflows and infrastructure-as-code (e.g., Terraform, ArgoCD, CircleCI).
  • Strong debugging skills and a bias for building reliable, self-healing systems.
  • Design, implement, and optimize real-time data pipelines handling billions of events per day with strict SLAs.
  • Architect data flows for bidstream data, auction logs, impression tracking and user behavior data.
  • Build scalable and reliable event ingestion and processing systems using Kafka, Flink, Spark Structured Streaming, or similar technologies (a streaming sketch follows this list).
  • Operate data infrastructure on Kubernetes, managing deployments, autoscaling, resource limits, and high availability.
  • Collaborate with backend engineers to integrate OpenRTB signals into our data platform in near real-time.
  • Ensure high-throughput, low-latency processing, and system resilience in our streaming infrastructure.
  • Design and manage event schemas (Avro, Protobuf), schema evolution strategies, and metadata tracking.
  • Implement observability, alerting, and performance monitoring for critical data services.
  • Contribute to decisions on data modeling and data retention strategies for real-time use cases.
  • Mentor other engineers and advocate for best practices in streaming architecture, reliability, and performance.
  • Continuously evaluate new tools, trends, and techniques to evolve our modern streaming stack.
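
As a rough sketch of the streaming responsibility flagged above (the topic name, schema, and sink are assumptions, not Voodoo's actual pipeline), a Spark Structured Streaming job reading impression events from Kafka and producing one-minute counts might look like this:

```python
# Hedged sketch: consume JSON impression events from Kafka and count them per minute.
# Topic, fields, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("impression-counts").getOrCreate()

schema = StructType([
    StructField("event_time", TimestampType()),
    StructField("campaign_id", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "impressions")
    .load()
)

events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")

counts = (
    events
    .withWatermark("event_time", "5 minutes")           # bound state for late events
    .groupBy(F.window("event_time", "1 minute"), "campaign_id")
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")                                  # swap for a real sink in production
    .option("checkpointLocation", "/tmp/impression-counts-ckpt")
    .start()
)
query.awaitTermination()
```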

Backend Development, Python, SQL, GCP, Java, Kafka, Kubernetes, Algorithms, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Linux, Terraform, Microservices, Scala, Data modeling, Debugging

Posted 7 days ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ 140000.0 - 175000.0 USD per year

πŸ” Software Development

🏒 Company: FigmentπŸ‘₯ 11-50HospitalityTravel AccommodationsArt

  • Extensive experience with data engineering, including building and managing data pipelines and ETL processes.
  • Proficiency in the Python programming language and SQL.
  • Experience developing highly concurrent and performant applications ensuring scalability and efficient resource utilization in distributed or multi-threaded systems.
  • Experience implementing robust microservices following best practices in error handling, logging, and testing for production-grade systems.
  • Experience with using CI/CD pipelines for automated data infrastructure provisioning and application deployment.
  • Experience with a data orchestration tool such as Dagster or Airflow.
  • Experience designing and orchestrating complex DAGs to manage dependencies, triggers, and retries for data workflows, ensuring reliable and efficient pipeline execution (a sketch appears after the responsibilities below).
  • Experience with the data transformation tool DBT.
  • Experience designing and implementing complex data transformations using advanced DBT models, materializations, and configurations to streamline data workflows and improve performance.
  • Experience optimizing and troubleshooting DBT pipelines for scale, ensuring that transformations run efficiently in production environments and handle large datasets without issues.
  • Experience with cloud data warehousing platforms (e.g. Snowflake)
  • Experience architecting and optimizing Snowflake environments for performance, including designing partitioning strategies, clustering keys, and storage optimizations for cost-effective scaling.
  • An understanding of security and governance policies within Snowflake, including data encryption, access control, and audit logging, to meet compliance and security best practices.
  • Implement and maintain reliable data pipelines and data storage solutions.
  • Implement data modeling and integrate technologies according to project needs.
  • Manage specific data pipelines and oversee the technical aspects of data operations.
  • Ensure data processes are optimized and align with business requirements.
  • Identify areas for process improvement and suggest tools and technologies to enhance efficiency.
  • Continuously improve data infrastructure automation, ensuring reliable and efficient data processing.
  • Develop and maintain data pipelines and ETL processes using technologies such as Dagster and DBT to ensure efficient data flow and processing.
  • Automate data ingestion, transformation, and loading processes to support blockchain data analytics and reporting.
  • Utilize Snowflake data warehousing solutions to manage and optimize data storage and retrieval.
  • Collaborate with Engineering Leadership and Product teams to articulate data strategies and progress.
  • Promote best practices in data engineering, cloud infrastructure, networking, and security.
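
The posting asks for experience designing DAGs with dependencies and retries in Dagster or Airflow; as a small sketch under that assumption (asset names and logic are invented, not Figment's pipeline), a Dagster asset graph with a retry policy could look like this:

```python
# Hedged Dagster sketch: two dependent assets with a retry policy.
# Asset names and bodies are placeholders for illustration only.
from dagster import Definitions, RetryPolicy, asset


@asset(retry_policy=RetryPolicy(max_retries=3, delay=30))
def raw_ledger_events():
    """Ingest raw events; retried up to three times on transient failures."""
    return [{"account": "a1", "amount": 10}, {"account": "a1", "amount": 5}]


@asset
def account_totals(raw_ledger_events):
    """Downstream asset: Dagster infers the dependency from the parameter name."""
    totals = {}
    for event in raw_ledger_events:
        totals[event["account"]] = totals.get(event["account"], 0) + event["amount"]
    return totals


defs = Definitions(assets=[raw_ledger_events, account_totals])
```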

Python, SQL, Cloud Computing, ETL, Snowflake, Data engineering, CI/CD, RESTful APIs, Microservices, Data modeling

Posted 9 days ago
🔥 Data Engineer
Posted 11 days ago

πŸ“ EU

🧭 Full-Time

πŸ” Decentralized Finance

🏒 Company: P2P. org

  • Strong knowledge of Python and SQL (any dialect; BigQuery, ClickHouse, or Postgres preferred)
  • Production experience with Airflow, ClickHouse or BigQuery, DBT, and git
  • General understanding and experience with GCP or AWS
  • English level: B2+
  • Perform technical and business tasks from analysts related to our core tools
  • Participate in code reviews for analysts and identify suboptimal processes
  • Monitor load and alerts from our services
  • Take care of the Data Platform
  • Write DBT models for Core Datamarts

Python, SQL, GCP, Git, Airflow, Clickhouse, Data engineering, Data modeling

Posted 11 days ago

πŸ“ European Union

🧭 Full-Time

πŸ” Software Development

🏒 Company: SPACE44

  • Minimum 6 years of experience as a Data Engineer working with large-scale data architectures
  • Strong expertise in SQL for data manipulation, optimization, and analytics
  • Advanced proficiency in Python for ETL, scripting, and data automation
  • In-depth experience with Apache Spark for big data processing and distributed computing
  • Proven experience with Kafka for real-time data streaming and event processing
  • Hands-on experience with cloud-native ETL tools such as AWS Glue and Azure Data Factory
  • Familiarity with data warehousing platforms like Snowflake or BigQuery
  • Knowledge of NoSQL databases like MongoDB or Cassandra
  • Comfortable working in modern cloud environments and Agile remote teams
  • BSc in Computer Science, Data Engineering, or a relevant field
  • Design, build, and manage ETL/ELT pipelines using tools like AWS Glue, Azure Data Factory, and Apache Airflow
  • Develop scalable real-time and batch data processing solutions using Apache Spark and Kafka (a batch sketch follows this list)
  • Write optimized, production-grade SQL queries and perform performance tuning across data systems
  • Build and maintain data lakes and warehouses, including platforms like Snowflake and BigQuery
  • Work with structured and semi-structured data across relational and NoSQL databases (e.g., MongoDB, Cassandra)
  • Collaborate with data analysts, engineers, and product teams to define data models and architecture
  • Ensure data integrity, quality, and lineage throughout the pipeline
  • Automate workflows, testing, and deployment for data systems in cloud-native environments
  • Monitor and troubleshoot pipeline performance and reliability
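
For the batch side of the Spark responsibility noted above, a minimal PySpark sketch (bucket paths, columns, and partitioning are assumptions for illustration) might aggregate raw JSON into a partitioned Parquet table like this:

```python
# Hedged PySpark batch sketch: aggregate raw JSON into partitioned Parquet.
# Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

orders = spark.read.json("s3://example-raw/orders/2024-06-01/")

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "country")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Partitioning by date keeps downstream scans cheap and supports idempotent reloads.
(
    daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated/daily_orders/")
)
```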

AWS, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, MongoDB, Snowflake, Cassandra, Data engineering, Data modeling, Scripting

Posted 12 days ago
🔥 Big Data Engineer
Posted 13 days ago

πŸ“ UK, US, Europe

🧭 Full-Time

🏒 Company: RNRS Solutions

NOT STATED
  • Build, optimize, and maintain scalable Big Data pipelines.
  • Design and implement real-time data processing solutions.
  • Work with large-scale distributed systems to manage structured and unstructured data.
  • Develop ETL processes and improve data ingestion workflows.
  • Collaborate with software engineers, analysts, and data scientists.

AWS, PostgreSQL, Apache Airflow, ETL, Hadoop, Java, Kafka, Kubernetes, Data engineering, Scala

Posted 13 days ago

πŸ“ Any country

🧭 Full-Time

🏒 Company: Ruby LabsπŸ‘₯ 11-50Media and EntertainmentMobile AppsSoftware

  • Proficiency in Python for data pipeline development, automation, and tooling.
  • Strong SQL skills and experience working with cloud data warehouses (BigQuery preferred).
  • Experience with workflow orchestration tools such as Airflow.
  • Familiarity with data quality frameworks (e.g., Great Expectations, dbt tests) and anomaly detection methods.
  • Experience building monitoring and alerting systems for data pipelines and data quality.
  • Ability to write clear, maintainable, and actionable technical documentation.
  • Develop and maintain ETL/ELT data pipelines to ingest, transform, and deliver data into the data warehouse.
  • Design and implement monitoring and alerting systems to proactively detect pipeline failures, anomalies, and data quality issues.
  • Establish data quality validation checks and anomaly detection mechanisms to ensure accuracy and trust in data (a sketch follows this list).
  • Define and maintain data structures, schemas, and partitioning strategies for efficient and scalable data storage.
  • Create and maintain comprehensive documentation of data pipelines, workflows, data models, and data lineage.
  • Troubleshoot and resolve issues related to data pipelines, performance, and quality.
  • Collaborate with stakeholders to understand data requirements and translate them into reliable engineering solutions.
  • Contribute to the continuous improvement of the data platform’s observability, reliability, and maintainability.
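
The monitoring and anomaly-detection duties above could take many forms; as one plain-Python sketch (thresholds, counts, and the alert hook are assumptions, not Ruby Labs' setup), a daily row-count check against a trailing average might look like this:

```python
# Hedged sketch: flag a pipeline run whose row count deviates sharply
# from the trailing 7-day average. Thresholds and the alert hook are assumptions.
from statistics import mean


def check_row_count(today_count: int, recent_counts: list[int], max_deviation: float = 0.5) -> bool:
    """Return True if today's count is within +/- max_deviation of the recent average."""
    if not recent_counts:
        return True  # nothing to compare against yet
    baseline = mean(recent_counts)
    deviation = abs(today_count - baseline) / baseline
    return deviation <= max_deviation


def alert(message: str) -> None:
    # Placeholder: in practice this would page on-call or post to a channel.
    print(f"ALERT: {message}")


recent = [10_200, 9_800, 10_050, 9_950, 10_400, 10_100, 9_900]  # last 7 loads
today = 4_300

if not check_row_count(today, recent):
    alert(f"daily load row count {today} deviates from trailing average {mean(recent):.0f}")
```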

Python, SQL, Cloud Computing, ETL, Airflow, Data engineering, Data Structures, Problem Solving, Documentation, Data modeling

Posted 16 days ago

πŸ“ North Macedonia, Serbia, Croatia, South Africa, Romania

🧭 Full-Time

πŸ’Έ 4500.0 - 6500.0 USD per month

πŸ” Software Development

🏒 Company: JobRack

  • 4+ years of experience working with Python
  • Strong knowledge of data engineering best practices, including experience with Snowflake, BigQuery, or data pipelines in general
  • Ability to work independently and take full ownership of tasks
  • Strong problem-solving skills and the ability to think beyond just writing code
  • Great communication skills - ability to work effectively with non-technical team members
  • Build and optimize ETL pipelines using Python
  • Ensure the system is scalable, secure, and well-structured
  • Collaborate with the wider team to understand requirements and deliver solutions that work seamlessly
  • Take ownership of backend-related challenges and implement effective solutions
  • Stay up to date with AI data engineering advancements and explore ways to integrate them into our platform

Backend Development, Python, ETL, Snowflake, Data engineering

Posted 18 days ago

πŸ“ EMEA countries

πŸ” Mobile Games and Apps

  • Extensive experience in data engineering or platform engineering roles.
  • Strong programming skills in Python and Java.
  • Strong experience with modern data stacks (e.g., Spark, Kafka, DBT, Airflow, Lakehouse).
  • Deep understanding of distributed systems, data architecture, and performance tuning.
  • Experience with cloud platforms (AWS, GCP, or Azure) and Infrastructure-as-Code tools (Terraform, CloudFormation, etc.).
  • Solid experience operating data services in Kubernetes, including Helm, resource tuning, and service discovery.
  • Strong understanding of data modeling, data governance, and security best practices.
  • Knowledge of CI/CD principles and DevOps practices in a data environment.
  • Design, develop, and maintain scalable, secure, and high-performance data platforms.
  • Build and manage data pipelines (ETL/ELT) using tools such as Apache Airflow, DBT, SQLMesh or similar.
  • Architect and optimize lakehouse solutions (e.g., Iceberg).
  • Lead the design and implementation of data infrastructure components (streaming, batch processing, orchestration, lineage, observability).
  • Ensure data quality, governance, and compliance (GDPR, HIPAA, etc.) across all data processes.
  • Automate infrastructure provisioning and CI/CD pipelines for data platform components using tools like Terraform, CircleCI, or similar.
  • Collaborate cross-functionally with data scientists, analytics teams, and product engineers to understand data needs and deliver scalable solutions.
  • Mentor experienced data engineers and set best practices for code quality, testing, and platform reliability.
  • Monitor and troubleshoot performance issues in real-time data flows and long-running batch jobs.
  • Stay ahead of trends in data engineering, proactively recommending new technologies and approaches to keep our stack modern and efficient.

AWS, Python, SQL, ETL, GCP, Java, Kafka, Kubernetes, Airflow, Azure, Data engineering, Spark, CI/CD, Terraform, Data modeling

Posted about 2 months ago