
Data Engineer

Posted 4 months ago


💎 Seniority level: Senior, Minimum 5 years of hands-on experience

📍 Location: Poland

🔍 Industry: Consulting

🏢 Company: Infosys Consulting - Europe

🗣️ Languages: English

⏳ Experience: Minimum 5 years of hands-on experience

🪄 Skills: AWS, Docker, Leadership, PostgreSQL, Python, SQL, Agile, Business Intelligence, DynamoDB, ETL, Git, Hadoop, Java, Jenkins, Kafka, Kubernetes, Machine Learning, MongoDB, MySQL, Oracle, Strategy, Azure, Cassandra, Data engineering, Data science, NoSQL, Spark, Communication Skills, Collaboration, CI/CD, Scala, Data modeling

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Data Engineer or in a similar role on large-scale data implementations.
  • Strong experience in SQL and relational database systems (MySQL, PostgreSQL, Oracle).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Minimum 5 years of hands-on experience with ETL tools like Apache NiFi or Talend.
  • Familiarity with big data technologies like Hadoop and Spark.
  • Minimum 3 years with cloud-based data services (AWS, Azure, Google Cloud).
  • Knowledge of data modeling, database design, and architecture best practices.
  • Experience with version control (e.g., Git) and agile practices.
Responsibilities:
  • Develop, construct, test, and maintain scalable data pipelines for large data sets.
  • Integrate data from differing source systems into the data lake or warehouse.
  • Implement ETL processes and ensure data quality and integrity (see the ETL sketch after this list).
  • Design and implement database and data warehousing solutions.
  • Work with cloud platforms to set up data infrastructure.
  • Collaborate with teams and document workflows.
  • Implement data governance and compliance measures.
  • Monitor performance and continuously improve processes.
  • Automate tasks and develop tools for data management.
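By way of illustration, a minimal sketch of the kind of ETL pipeline described above, using the Python/SQL stack the listing names; the connection strings, table names, and quality rule are hypothetical:

```python
# Minimal ETL sketch: extract from a source database, validate, load to a warehouse.
# Connection strings, table names, and the quality rule are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@source-db/sales")         # hypothetical
warehouse = create_engine("postgresql://user:pass@warehouse/analytics")  # hypothetical

# Extract
orders = pd.read_sql("SELECT order_id, amount, created_at FROM orders", source)

# Transform: enforce simple data-quality rules before loading
orders = orders.dropna(subset=["order_id"])
orders = orders[orders["amount"] >= 0]
orders["created_at"] = pd.to_datetime(orders["created_at"], utc=True)

# Load
orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)
```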

Related Jobs

🔥 Senior Data Engineer
Posted 4 days ago

📍 Europe, APAC, Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Docker (👥 251-500, 💰 $105,000,000 Series C almost 3 years ago; Developer Tools, Developer Platform, Information Technology, Software)

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling


📍 Argentina, Brazil, Bulgaria, Colombia, Poland, Romania

🧭 Contract

🔍 Data Engineering

🏢 Company: N-iX (👥 1001-5000, IT Services and IT Consulting)

  • Bachelor's or Master's in Computer Science or related field
  • 3+ years experience in developing data services for HPC
  • Expertise with HDF5 in parallel I/O operations
  • Strong proficiency in C++, Python, GoLang, or Fortran
  • In-depth knowledge of HDF5 APIs
  • Experience with MPI I/O, POSIX I/O or similar frameworks
  • Skills in profiling I/O operations
  • Proficiency in SQL and any RDBMS
  • Design and implement the data service module using HDF5
  • Develop parallel and concurrent I/O mechanisms (a minimal sketch follows this list)
  • Ensure integration with HPC workflows
  • Optimize I/O for CPU/GPU workflows
  • Implement caching and compression strategies
  • Design data structures for simulation outputs
  • Ensure data integrity during concurrent operations
  • Develop test cases for performance validation
  • Conduct benchmarking for scalability
  • Document architecture and APIs
  • Provide technical support for data integration
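For context on the parallel I/O items above, a minimal sketch of a concurrent HDF5 write via h5py's MPI driver; the file name, dataset, and sizes are invented, and an MPI-enabled h5py build is assumed:

```python
# Minimal parallel HDF5 sketch: each MPI rank writes its own slice of one dataset.
# File name, dataset name, and sizes are hypothetical; requires h5py built
# against parallel HDF5.
from mpi4py import MPI
import numpy as np
import h5py

comm = MPI.COMM_WORLD
n_local = 1_000_000  # elements produced per rank (hypothetical)

with h5py.File("simulation.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("results", shape=(comm.size * n_local,), dtype="f8")
    start = comm.rank * n_local
    # Non-overlapping slices let ranks write concurrently without coordination
    dset[start:start + n_local] = np.random.default_rng(comm.rank).random(n_local)
```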

Python, SQL, Fortran, C++, Data engineering, Data Structures

Posted 6 days ago

📍 Poland

🧭 Full-Time

🔍 Software Development

🏢 Company: N-iX (👥 1001-5000, IT Services and IT Consulting)

  • Minimum of 3-4 years as a data engineer, or in a relevant field
  • Advanced experience in Python, particularly in delivering production-grade data pipelines and troubleshooting code-based bugs.
  • Structured approach to data insights
  • Familiarity with cloud platforms (preferably Azure)
  • Experience with Databricks, Snowflake, or similar data platforms
  • Knowledge of relational databases, with proficiency in SQL
  • Experience using Apache Spark
  • Experience in creating and maintaining structured documentation
  • Proficiency in utilizing testing frameworks to ensure code reliability and maintainability
  • Experience with Gitlab or equivalent tools
  • English Proficiency: B2 level or higher
  • Design, build, and maintain data pipelines using Python
  • Collaborate with an international team to develop scalable data solutions
  • Conduct in-depth analysis and debugging of system bugs (Tier 2)
  • Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
  • Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
  • Write integration tests to ensure the quality and reliability of data services (see the test sketch after this list)
  • Work with Gitlab to manage code and collaborate with team members
  • Utilize Databricks for data processing and management
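As an illustration of the integration-testing item, a minimal pytest sketch against an in-memory SQLite database; `load_orders` and its schema are hypothetical stand-ins for a real pipeline step:

```python
# Minimal integration-test sketch for a pipeline step (pytest + SQLAlchemy).
# load_orders and its schema are hypothetical stand-ins.
import pandas as pd
import pytest
from sqlalchemy import create_engine

def load_orders(df: pd.DataFrame, engine) -> None:
    """Pipeline step under test: write the orders frame to the target table."""
    df.to_sql("orders", engine, if_exists="replace", index=False)

@pytest.fixture
def engine():
    # In-memory database keeps the test self-contained and fast
    return create_engine("sqlite:///:memory:")

def test_load_orders_round_trip(engine):
    df = pd.DataFrame({"order_id": [1, 2], "amount": [9.5, 12.0]})
    load_orders(df, engine)
    out = pd.read_sql("SELECT * FROM orders ORDER BY order_id", engine)
    assert len(out) == 2
    assert out["amount"].sum() == pytest.approx(21.5)
```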

Docker, Python, SQL, Cloud Computing, Data Analysis, ETL, Git, Kubernetes, Snowflake, Apache Kafka, Azure, Data engineering, RDBMS, REST API, Pandas, CI/CD, Documentation, Microservices, Debugging

Posted 6 days ago

📍 Poland, Ukraine, Abroad

🧭 Contract

🔍 Data Engineering

🏢 Company: N-iX (👥 1001-5000, IT Services and IT Consulting)

  • Experience with Databricks
  • 4+ years of experience in developing database systems (MS SQL/T-SQL)
  • Experience in creation and maintenance of Azure Pipelines
  • Developing robust data pipelines with DBT
  • Implementation of business logic in Data Warehouse
  • Conversion of business requirements into data models
  • Pipelines management
  • Data load and query performance tuning

SQL, Apache Airflow, Git, Microsoft SQL Server, Azure

Posted 6 days ago

📍 Poland, Ukraine

🧭 Full-Time

🔍 Data Engineering

🏢 Company: N-iX (👥 1001-5000, IT Services and IT Consulting)

  • 5+ years of experience in data modeling
  • Expert knowledge of Kimball data modeling principles
  • Strong proficiency in English
  • Experience integrating and modeling data from multiple sources
  • Lead the design and development of robust data models for a data platform
  • Integrate and model data from 30+ diverse sources
  • Apply Kimball principles for data modeling
  • Collaborate with teams to define data requirements
  • Optimize and evolve the data platform
  • Mentor junior engineers

AWS, SQL, ETL, Snowflake, Data modeling

Posted 6 days ago

📍 Italy, Poland, Spain, Hungary, Sweden

🧭 Contract

🔍 Digital Health

🏢 Company: Axiom Software Solutions Limited

  • 5 years of industry experience with a bachelor's degree, or 3 years with a master's, in a relevant field
  • Proficient in Python
  • Experience with SQL, PySpark, Dask
  • Knowledge of AWS, Azure, GCP
  • Familiarity with machine learning for large datasets
  • Design, build, and maintain data pipelines
  • Utilize large language models for digital health applications
  • Provide Python expertise and drive coding best practices
  • Manage and optimize cloud infrastructure
  • Implement generative AI technologies

AWS, Docker, PostgreSQL, Python, SQL, Cloud Computing, Machine Learning, Tableau, Azure, Data visualization

Posted 12 days ago
🔥 Sr Data Engineer
Posted 12 days ago

📍 US, Europe and India

🔍 Software Development

  • Extensive experience in developing data and analytics applications in geographically distributed teams
  • Hands-on experience in using modern architectures and frameworks, structured, semi-structured and unstructured data, and programming with Python
  • Hands-on SQL knowledge and experience with relational databases such as MySQL, PostgreSQL, and others
  • Hands-on ETL knowledge and experience
  • Knowledge of commercial data platforms (Databricks, Snowflake) or cloud data warehouses (Redshift, BigQuery)
  • Knowledge of data catalog and MDM tooling (Atlan, Alation, Informatica, Collibra)
  • CI/CD pipelines for continuous deployment (e.g., CloudFormation templates)
  • Knowledge of how machine learning / A.I. workloads are implemented in batch and streaming, including preparing datasets, training models, and using pre-trained models
  • Exposure to software engineering processes that can be applied to Data Ecosystems
  • Excellent analytical and troubleshooting skills
  • Excellent communication skills
  • Excellent English (both verbal and written)
  • B.S. in Computer Science or equivalent
  • Design and develop our best-in-class cloud platform, working on all parts of the code stack from front-end, REST and asynchronous APIs, back-end application logic, SQL/NoSQL databases and integrations with external systems
  • Develop solutions across the data and analytics stack, from ETL to streaming data (a minimal consumer sketch follows this list)
  • Design and develop reusable libraries
  • Enhance strong processes in Data Ecosystem
  • Write unit and integration tests
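To make the streaming side concrete, a minimal consumption sketch with the kafka-python client; the topic, broker address, and scoring stub are invented:

```python
# Minimal streaming sketch: consume JSON events from Kafka and score each one.
# Topic, broker address, and the scoring stub are hypothetical.
import json
from kafka import KafkaConsumer

def score(event: dict) -> float:
    # Stand-in for applying a pre-trained model to a single event
    return float(event.get("amount", 0.0)) * 0.1

consumer = KafkaConsumer(
    "orders",                            # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.offset, score(message.value))
```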

Python, SQL, Apache Airflow, Cloud Computing, ETL, Machine Learning, Snowflake, Algorithms, Apache Kafka, Data engineering, Data Structures, Communication Skills, Analytical Skills, CI/CD, RESTful APIs, DevOps, Microservices, Excellent communication skills, Data visualization, Data modeling, Data analytics, Data management

🔥 Senior Data Engineer
Posted 12 days ago

📍 Poland

🧭 Full-Time

🔍 Data Engineering

🏢 Company: Softeta

  • Advanced degree in computer science or related fields
  • 5+ years of experience as a data engineer
  • Proficiency with Airflow, DBT, Dataflow, or similar products
  • Strong knowledge of data structures and data modeling
  • CI/CD pipeline and MLOPs experience
  • Experience with large data sets
  • Experience with GCP / BigQuery
  • Create and maintain pipeline architectures in Airflow and DBT (see the DAG sketch after this list)
  • Assemble large and/or complex datasets for business requirements
  • Improve processes and infrastructure for scale, delivery and automation
  • Maintain and improve data warehouse structure
  • Adjust methods and techniques for large data environments
  • Adopt best-practice coding and review processes
  • Communicate technical details to stakeholders
  • Investigate and resolve anomalies in data
  • Develop and maintain documentation for data products
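For reference, a minimal Airflow 2.x-style DAG of the shape such a role maintains; the DAG id, task names, and callables are hypothetical:

```python
# Minimal Airflow DAG sketch: one daily extract-then-load dependency chain.
# DAG id, task names, and the callables are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")

def load():
    print("writing to the warehouse")

with DAG(
    dag_id="daily_orders",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```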

SQL, GCP, Airflow, Data engineering, CI/CD, Data modeling


📍 Europe

🧭 Full-Time

🔍 DeFi, Staking

🏢 Company: P2P.org

  • Strong knowledge of Python and SQL, preferably BigQuery and ClickHouse (see the query sketch after this list).
  • Airflow is mandatory.
  • Experience with Kubernetes is a plus.
  • General understanding and experience with GCP (Cloud SQL, VM, Storage).
  • Friendly and willing to help colleagues.
  • English language proficiency at B2 level or higher.
  • Perform technical and business tasks requested by analysts for the core tools.
  • Review analysts' code and identify suboptimal processes.
  • Monitor load and alerts from services.
  • Interact with DevOps team on services and support tasks.
  • Maintain security and compliance standards.
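As a small illustration of the Python-plus-BigQuery combination, a parameterized query sketch with the google-cloud-bigquery client; the project, dataset, and table are invented:

```python
# Minimal BigQuery sketch: run a parameterized query and iterate the rows.
# Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project
query = """
    SELECT validator, SUM(reward) AS total_reward
    FROM `my-project.staking.rewards`
    WHERE day >= @since
    GROUP BY validator
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01")]
    ),
)
for row in job.result():
    print(row.validator, row.total_reward)
```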

Python, SQL, Apache Airflow, GCP, Kubernetes, ClickHouse

Posted 14 days ago
🔥 Senior Data Engineer
Posted 23 days ago

📍 Poland, Spain, United Kingdom

🔍 Beauty marketplace

🏢 Company: Booksy (👥 501-1000, 💰 Debt Financing 5 months ago; Mobile Payments, Marketplace, SaaS, Payments, Mobile Apps, Wellness, Software)

  • 5+ years of experience in backend and data engineering, with strong system design skills.
  • Practical proficiency in cloud technologies (ideally GCP), with expertise in tools like BigQuery, Dataflow, Pub/Sub, or similar.
  • Hands-on experience with CI/CD tools (e.g., GitLab CI) and infrastructure as code.
  • Strong focus on data quality, governance, and building scalable, automated workflows.
  • Experience designing self-service data platforms and infrastructure.
  • Proven ability to mentor and support others, fostering data literacy across teams.
  • Design and implement robust data solutions.
  • Enable teams to make informed, data-driven decisions.
  • Ensure data is accessible, reliable, and well-governed.
  • Play a key role in driving growth, innovation, and operational excellence.

GCP, Data engineering, CI/CD, Data modeling
