Data Engineer

Posted 2024-07-18

💎 Seniority level: Senior, 4+ years

📍 Location: Africa, United Kingdom, Europe, Middle East

🔍 Industry: Sports and Digital Entertainment

🗣️ Languages: English

⏳ Experience: 4+ years

🪄 Skills: Python, Agile, ETL, Kafka, Azure, Data engineering, Spark

Requirements:
  • 4+ years of experience in a data engineering or similar role.
  • Excellent programming skills in Python and Spark (PySpark/Databricks).
  • 2+ years’ experience working with Databricks and Azure data services.
  • Experience with other cloud data environments (AWS, Google Cloud, Hadoop, etc.) is a plus.
  • Experience with Customer Data Platforms is advantageous.
  • Active management of data quality including monitoring and alerting.
  • Familiarity with the application lifecycle from development to production.
  • Remote working experience is required; hyper-growth startup experience is a plus.
Responsibilities:
  • Design and manage a scalable Data Lake/ETL infrastructure and data pipeline (a minimal PySpark sketch follows this list).
  • Ensure fault-tolerant systems and processes with data integrity and automated monitoring.
  • Continuously improve data processes for performance and scalability.
  • Ensure data privacy and security standards are met.
  • Maintain up-to-date documentation for the Data Platform stack.
  • Collaborate with business stakeholders and engineering to deliver data products.
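
The first two responsibilities describe a read-validate-write batch job on the Databricks/Azure stack named in the requirements. A minimal PySpark sketch of that shape, with an illustrative integrity gate; the paths, column names, and threshold are hypothetical, not from the posting:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Hypothetical lake locations -- substitute the real storage accounts.
    RAW_PATH = "abfss://raw@lake.dfs.core.windows.net/events/"
    CURATED_PATH = "abfss://curated@lake.dfs.core.windows.net/events/"

    raw = spark.read.json(RAW_PATH)

    # Integrity gate: fail fast (and alert) instead of writing bad data.
    null_ids = raw.filter(F.col("event_id").isNull()).count()
    if null_ids > 0:
        raise ValueError(f"{null_ids} rows missing event_id; aborting load")

    curated = (raw
               .withColumn("event_date", F.to_date("event_ts"))
               .dropDuplicates(["event_id"]))

    curated.write.mode("overwrite").partitionBy("event_date").parquet(CURATED_PATH)

A production job would route the failure into the automated monitoring the posting asks for rather than only raising, but the shape is the same.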
Related Jobs

🔥 Senior Data Engineer
Posted 2024-11-21

📍 Poland

🧭 Full-Time

🔍 Software development

🏢 Company: Sunscrapers sp. z o.o.

  • At least 5 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Python and SQL.
  • Hands-on experience with DBT and Snowflake.
  • Experience in building data pipelines with Airflow or alternative solutions.
  • Strong understanding of data modeling techniques such as the Kimball star schema.
  • Great analytical skills and attention to detail.
  • Creative problem-solving skills.
  • Great customer service and troubleshooting skills.

  • Model datasets and schemas for consistency and easy access.
  • Design and implement data transformations and data marts.
  • Integrate third-party systems and external data sources into the data warehouse.
  • Build data flows for fetching, aggregating, and modeling data in batch pipelines (see the Airflow sketch after this list).
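
The last responsibility maps naturally onto the Airflow and DBT tools the requirements name. A minimal sketch, assuming Airflow 2.4+ and a hypothetical dbt project with a "marts" selector (task names and scripts are illustrative):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily batch: fetch raw data, then build dbt models in Snowflake.
    with DAG(
        dag_id="daily_marts",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        fetch = BashOperator(
            task_id="fetch_sources",
            bash_command="python fetch_sources.py",  # illustrative ingestion step
        )
        build = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --select marts",  # builds the data marts
        )
        fetch >> build  # run dbt only after the sources have landed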

🪄 Skills: Python, SQL, Snowflake, Airflow, Analytical Skills, Customer service, DevOps, Attention to detail

📍 Poland

🔍 Healthcare

🏢 Company: Sunscrapers sp. z o.o.

  • At least 3 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Apache Spark.
  • Hands-on experience managing production Spark clusters in Databricks.
  • Experience in CI/CD of data jobs in Spark.
  • Great analytical skills, attention to detail, and creative problem-solving skills.
  • Great customer service and troubleshooting skills.

  • Design and manage batch data pipelines, including file ingestion, transformation, and Delta Lake/table management (a minimal upsert sketch follows this list).
  • Implement scalable architectures for batch and streaming workflows.
  • Leverage Microsoft's equivalents of BigQuery for efficient querying and data storage.
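
File ingestion into a managed Delta table typically ends in an upsert. A minimal sketch using the delta-spark merge API; the paths and the patient_id key are invented for illustration:

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

    # Hypothetical landing zone holding the newly arrived batch of files.
    updates = spark.read.parquet("/mnt/landing/patients/")

    # Upsert the batch into the Delta table by primary key.
    target = DeltaTable.forPath(spark, "/mnt/delta/patients")
    (target.alias("t")
           .merge(updates.alias("u"), "t.patient_id = u.patient_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())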

🪄 Skills: Spark, Analytical Skills, CI/CD, Customer service, Attention to detail

Posted 2024-11-21
🔥 Senior Data Engineer
Posted 2024-11-21

📍 Belgium, Spain

🔍 Hospitality industry

🏢 Company: Lighthouse

  • 5+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • Experience with writing data processing pipelines and with cloud platforms like AWS, GCP, or Azure
  • Experience with data pipeline orchestration tools like Apache Airflow (preferred), Dagster or Prefect
  • Deep understanding of data warehousing strategies
  • Experience with transformation tools like dbt to manage data transformation in your data pipelines
  • Some experience in managing infrastructure with IaC tools like Terraform
  • Stay updated with industry trends, emerging technologies, and best practices in data engineering
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations with the ability to implement them
  • Strong communicator who can describe complex topics simply to a variety of technical and non-technical stakeholders.

  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Ingest, process, and store structured and unstructured data from various sources into our data-lakes and data warehouses.
  • Optimise data pipelines for cost, performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance (a minimal metrics sketch follows this list).
  • Mentor and provide technical guidance to other engineers working with data.
  • Partner with Product, Engineering & Data Science teams to operationalise new solutions.
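
For the monitoring responsibility, the tag list below mentions Prometheus and Grafana. A minimal sketch of instrumenting a batch job with prometheus_client; the metric names and the ingestion stub are illustrative, not from the posting:

    import time
    from prometheus_client import Counter, Gauge, start_http_server

    ROWS = Counter("pipeline_rows_total", "Rows processed", ["source"])
    LAST_SUCCESS = Gauge("pipeline_last_success_ts", "Unix time of last good run")

    def run_batch(source, rows):
        # ... ingest and transform `rows` here (elided) ...
        ROWS.labels(source=source).inc(len(rows))
        LAST_SUCCESS.set(time.time())

    if __name__ == "__main__":
        start_http_server(9100)  # exposes /metrics for Prometheus to scrape
        run_batch("bookings", [{"id": 1}])

An alert on pipeline_last_success_ts going stale catches silent failures that per-run error handling misses.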

🪄 Skills: Python, Apache Airflow, GCP, Java, Kafka, Kubernetes, Data engineering, Grafana, Prometheus, Spark, CI/CD, Terraform, Documentation, Compliance

🔥 Data Engineer
Posted 2024-11-20

📍 Argentina, Spain, England, United Kingdom, Lisbon, Portugal

🧭 Full-Time

🔍 Web3

🏢 Company: Reown

  • 5+ years working in the analytics stack within a fast-paced environment.
  • 3+ years production experience with SQL templating engines like DBT.
  • Experience with distributed query engines (BigQuery, Athena, Spark), data warehouses, and BI tools.
  • Strong understanding of software engineering principles, coding standards, design patterns, version control (e.g., Git), testing methodologies, and CI/CD processes.
  • Experience with AWS/GCP/Azure services for deployment and management.
  • Familiarity with GitHub, CI/CD pipelines, GitHub Actions, and Terraform.
  • Ability to write Python scripts for ETL processes and data manipulation.
  • Proficient with libraries like pandas for analysis and transformation.
  • Experience handling various data formats (e.g., CSV, JSON, Parquet).
  • Strong problem-solving skills and communication abilities to discuss technical concepts.

  • Write complex SQL queries that extract and combine data from on-chain and off-chain logs for analytics.
  • Create dashboards and tools for team data discoverability and KR tracking.
  • Perform deep-dive analyses into specific topics for internal stakeholders.
  • Help design, implement, and evolve Reown's on-chain data infrastructure.
  • Build, maintain, and monitor end-to-end data pipelines for new datasets and features.
  • Write health-checks and alerts to ensure data correctness, consistency, and freshness (a minimal freshness check follows this list).
  • Meet with product managers and stakeholders to understand data needs and detect new product opportunities.
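
A freshness check is the simplest of the health-checks listed above. A minimal sketch against BigQuery (named in the requirements); the project, table, column, and one-hour threshold are invented:

    from datetime import datetime, timedelta, timezone
    from google.cloud import bigquery

    client = bigquery.Client()

    # Fails if no on-chain rows have landed in the last hour.
    sql = """
        SELECT MAX(block_timestamp) AS latest
        FROM `project.onchain.transactions`
    """
    latest = next(iter(client.query(sql).result())).latest
    if latest < datetime.now(timezone.utc) - timedelta(hours=1):
        raise RuntimeError(f"transactions stale: latest row at {latest}")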

🪄 Skills: AWS, Python, SQL, Data Analysis, Design Patterns, ETL, GCP, Git, Tableau, Azure, Clickhouse, Pandas, Spark, Communication Skills, CI/CD, Terraform, Written communication

🔥 Data Engineer
Posted 2024-11-19

📍 Germany

🧭 Permanent/Contract

🏢 Company: Axiom Software Solutions Limited

  • 4-6 years of experience in data architecture and engineering roles.
  • 3+ years of hands-on experience with relational, dimensional, and/or analytic technologies.
  • Knowledge of SageMaker/Jupyter and data lakes.
  • Experience with data warehouse, data lake, and big data platforms.
  • Implement business and IT data requirements through new data strategies and designs.
  • Identify architecture, infrastructure, and interfaces for automated data loads and security concerns.

  • Collaborate with stakeholders to understand and document data requirements, business rules, and objectives for the data platform.
  • Design and develop conceptual, logical, and physical data models that accurately represent the organization’s data assets.
  • Ensure designs meet objectives for reliability, scalability, supportability, user experience, security, governance, and performance.
  • Implement data engineering practices for data integrity, performance, and scalability.
  • Work closely with architects to integrate data engineering into overall platform architecture.
  • Communicate design decisions and recommendations effectively with stakeholders.

🪄 Skills: ETL, Machine Learning, Data engineering, RDBMS, NoSQL

📍 Latin America

🧭 Contract

💸 1,300 - 2,300 USD per month

🔍 Talent-as-a-Service

🏢 Company: GoFasti

  • Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent work experience.
  • 4+ years working as a Data Engineer or similar role focusing on data infrastructure.
  • Experience with AWS (Redshift, RDS) and Google Cloud (BigQuery, Firebase).
  • Proficiency in at least one programming language (Python, Java, Scala).
  • Hands-on experience with AWS Kinesis for real-time data ingestion (a minimal consumer sketch follows this list).
  • Familiarity with data pipeline orchestration tools such as dbt and Airflow.
  • Strong knowledge of data warehousing concepts and ETL best practices.
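
For the Kinesis requirement above, a minimal consumer sketch with boto3, reading one shard of a hypothetical stream; the stream name, region, and downstream write are illustrative:

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Start from the oldest record still retained in the shard.
    iterator = kinesis.get_shard_iterator(
        StreamName="events",
        ShardId="shardId-000000000000",
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        event = json.loads(record["Data"])
        # ... write `event` to the centralized data store (elided) ...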

  • Enhance and maintain the infrastructure powering analytics and data products.
  • Ingest new data from files, APIs, and external databases into a centralized data store.
  • Continuously monitor, optimize, and troubleshoot data pipelines and infrastructure.
  • Work closely with business stakeholders to understand data needs and translate requirements into solutions.
  • Implement and enforce data governance policies for security, integrity, and compliance.
  • Drive efforts to ensure high data quality by identifying and resolving pipeline issues.
  • Participate in architecture discussions and engage in peer code reviews.

🪄 Skills: AWS, Python, ETL, Airflow, Firebase, Attention to detail, Compliance

Posted 2024-11-19
🔥 Big Data Engineer
Posted 2024-11-15

📍 Spain

🧭 Full-Time

🔍 Technology

🏢 Company: Plain Concepts

  • At least 4 years of experience in software/data engineering.
  • Experience in designing architectures.
  • Strong proficiency in Python or Scala and Spark for processing large data volumes.
  • Solid experience with cloud platforms (Azure or AWS).
  • Experience in creating CI/CD data pipelines.
  • Familiarity with testing methodologies, unit and integration (a minimal unit-test sketch follows this list).
  • Knowledge of SQL and NoSQL databases.
  • Good command of English (mandatory).
  • Desirable experience with BI tools like Power BI, and technologies like Databricks, Snowflake, Fabric, and IaC.
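
For the testing requirement, a minimal pytest-style unit test of a small Spark transformation; the function and columns are invented for illustration:

    from pyspark.sql import SparkSession

    def dedupe_orders(df):
        # Transformation under test: keep one row per order_id.
        return df.dropDuplicates(["order_id"])

    def test_dedupe_orders():
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        df = spark.createDataFrame(
            [(1, "a"), (1, "a"), (2, "b")], ["order_id", "item"]
        )
        assert dedupe_orders(df).count() == 2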

  • Engage in projects from initial client interactions to understand business needs and propose suitable technical solutions.
  • Develop projects from scratch with minimal supervision, collaborating with the team.
  • Participate in architecture design and decision-making in a constructive environment.
  • Contribute to good practices for clean and reusable code.
  • Develop ETLs using Spark in Python or Scala.
  • Execute projects in cloud environments like Azure or AWS.
  • Build scalable pipelines using various technologies.

🪄 Skills: AWS, Python, SQL, Agile, Azure, NoSQL, Spark, CI/CD

📍 North America, South America, Europe

💸 100,000 - 500,000 USD per year

🔍 Web3, blockchain

🏢 Company: Edge & Node

  • A self-motivated team member with keen attention to detail.
  • Proactive collaboration with team members and a willingness to adapt to a growing environment.
  • Familiarity and experience with Rust, particularly focusing on data transformation and ingestion.
  • A strong understanding of blockchain data structures and ingestion interfaces.
  • Experience in real-time data handling, including knowledge of reorg handling.
  • Familiarity with blockchain clients like Geth and Reth is a plus.
  • Adaptability to a dynamic and fully-remote work environment.
  • Rigorous approach to software development that reflects a commitment to excellence.

  • Develop and maintain data ingestion adapters for various blockchain networks and web3 protocols.
  • Implement data ingestion strategies for both historical and recent data.
  • Apply strategies for handling block reorgs (a minimal sketch follows this list).
  • Optimize the latency of block ingestion at the chain head.
  • Write interfaces with file storage protocols such as IPFS and Arweave.
  • Collaborate with upstream data sources, such as chain clients and tracing frameworks, and monitor the latest upstream developments.
  • Perform data quality checks, cross-checking data across multiple sources and investigating any discrepancies that arise.
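
Reorg handling is the most algorithmic item in this list: a new block must extend the current tip, and a parent-hash mismatch means already-ingested blocks have to be unwound back to the fork point. A minimal Python sketch of that logic (the role itself is Rust-focused; the in-memory store and all names are illustrative):

    # Illustrative reorg handling for block ingestion.
    class Store:
        def __init__(self, genesis):
            self.blocks = [genesis]  # list index == block height

        def tip(self):
            return self.blocks[-1]

        def rollback(self, to_height):
            del self.blocks[to_height + 1:]

    def ingest(store, block, canonical_hash_at):
        if block["parent"] == store.tip()["hash"]:
            store.blocks.append(block)  # normal case: block extends our tip
            return
        # Parent mismatch => reorg. Walk back to the last height where our
        # stored hash still matches the canonical chain, drop everything above.
        height = store.tip()["height"]
        while store.blocks[height]["hash"] != canonical_hash_at(height):
            height -= 1
        store.rollback(to_height=height)
        store.blocks.append(block)  # simplified: a real adapter re-ingests from the fork point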

🪄 Skills: Software Development, Blockchain, Data Structures, Rust, Collaboration, Attention to detail

Posted 2024-11-15

📍 Hungary

🧭 Full-Time

🔍 ICT and telecommunications

  • A degree in computer science or a related field.
  • Proven experience building data platform solutions, especially ETL processes.
  • Confident working with at least one cloud data provider and with modern database architectures.
  • Fundamental data processing knowledge (e.g., SQL, various databases).
  • Able to work independently and responsibly.
  • Good communicator, with English at B2/C1 level or above.
  • Data visualization experience is a plus, but not required.

  • Data integration development: work independently on complex data integration processes built on current cloud technologies.
  • Full development lifecycle support: participate from analysis and technical concept creation, through design, solution selection, and implementation, to quality assurance.
  • Teamwork: collaborate closely with other departments and with colleagues in Germany.

🪄 Skills: SQL, ETL, GCP, Azure

Posted 2024-11-14

📍 United Kingdom

🧭 Contract

🏢 Company: Axiom Software Solutions Limited

  • Expertise in L2/L3 IP protocols such as HSRP, OSPF, BGP, MPLS, and VRF.
  • Deep knowledge of configuration and troubleshooting of IP protocols.
  • Experience in multi-vendor environments specialized in Viptela SD-WAN, Cisco SD-Access.
  • CCNP-ENCOR or CCIE Enterprise Infrastructure certified.
  • Good understanding of Data Center infrastructure and connectivity.
  • Knowledge of Vulnerability Management and Lifecycle Management.

  • Extensive support for DC operations.
  • Expert troubleshooting of network-related issues.
  • Interaction with customers to understand network and service delivery requirements.
  • Handling deployment and implementation of network solutions.
  • Preparing high-level and low-level design documents.
  • Providing consultation on network design with SD-WAN and SD-Access.

🪄 Skills: Cisco, Project Coordination, Communication Skills, Collaboration, Problem Solving, Attention to detail, Organizational skills, Presentation skills, Time Management, Written communication, Documentation

Posted 2024-11-14