Data Engineer

Posted 2024-10-21

💎 Seniority level: Senior, Minimum 5 years of hands-on experience

📍 Location: Poland

🔍 Industry: Consulting

🏢 Company: Infosys Consulting - Europe

🗣️ Languages: English

⏳ Experience: Minimum 5 years of hands-on experience

🪄 Skills: AWS, Docker, Leadership, PostgreSQL, Python, SQL, Agile, Business Intelligence, DynamoDB, ETL, Git, Hadoop, Java, Jenkins, Kafka, Kubernetes, Machine Learning, MongoDB, MySQL, Oracle, Strategy, Azure, Cassandra, Data engineering, Data science, NoSQL, Spark, Communication Skills, Collaboration, CI/CD

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Data Engineer or a similar role in large-scale data implementations.
  • Strong experience in SQL and relational database systems (MySQL, PostgreSQL, Oracle).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Minimum 5 years of hands-on experience with ETL tools such as Apache NiFi or Talend.
  • Familiarity with big data technologies like Hadoop and Spark.
  • Minimum 3 years with cloud-based data services (AWS, Azure, Google Cloud).
  • Knowledge of data modeling, database design, and architecture best practices.
  • Experience with version control (e.g., Git) and agile practices.
Responsibilities:
  • Develop, construct, test, and maintain scalable data pipelines for large data sets.
  • Integrate data from different source systems into the data lake or warehouse.
  • Implement ETL processes and ensure data quality and integrity (see the sketch after this list).
  • Design and implement database and data warehousing solutions.
  • Work with cloud platforms to set up data infrastructure.
  • Collaborate with teams and document workflows.
  • Implement data governance and compliance measures.
  • Monitor performance and continuously improve processes.
  • Automate tasks and develop tools for data management.
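
For illustration, a minimal Python sketch of the ETL-with-quality-checks work described above. The file, column, and table names are hypothetical, and pandas/SQLAlchemy stand in for whichever ETL stack is actually used:

```python
# Illustrative extract -> transform -> validate -> load flow.
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    """Read a raw CSV export from a source system."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names, drop exact duplicates, parse dates."""
    df = df.rename(columns=str.lower).drop_duplicates()
    df["order_date"] = pd.to_datetime(df["order_date"])  # hypothetical column
    return df

def validate(df: pd.DataFrame) -> None:
    """Fail fast on basic data-quality violations before loading."""
    assert df["order_id"].notna().all(), "NULL order_id found"
    assert df["order_id"].is_unique, "duplicate order_id found"

def load(df: pd.DataFrame, dsn: str) -> None:
    """Append the cleaned batch into a warehouse staging table."""
    df.to_sql("stg_orders", create_engine(dsn), if_exists="append", index=False)

if __name__ == "__main__":
    batch = transform(extract("orders_export.csv"))
    validate(batch)
    load(batch, "postgresql://user:password@warehouse-host/dwh")
```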

Related Jobs

🔥 Senior Data Engineer
Posted 2024-11-21

📍 Poland

🧭 Full-Time

🔍 Software development

🏢 Company: Sunscrapers sp. z o.o.

Requirements:
  • At least 5 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Python and SQL.
  • Hands-on experience with dbt and Snowflake.
  • Experience in building data pipelines with Airflow or alternative solutions.
  • Strong understanding of data modeling techniques such as the Kimball star schema.
  • Great analytical skills and attention to detail.
  • Creative problem-solving skills.
  • Great customer service and troubleshooting skills.

Responsibilities:
  • Modeling datasets and schemas for consistency and easy access.
  • Designing and implementing data transformations and data marts.
  • Integrating third-party systems and external data sources into the data warehouse.
  • Building data flows for fetching, aggregating, and modeling data using batch pipelines (see the sketch below).
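
A minimal sketch of the kind of batch flow described, using Airflow's TaskFlow API (assuming Airflow 2.x); the DAG id, sample data, and aggregation are invented, and the load step only hints at a Snowflake write:

```python
# Minimal Airflow 2.x TaskFlow sketch: fetch -> aggregate -> load, daily.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_pipeline():
    @task
    def fetch() -> list[dict]:
        # A real task would pull from a source API or database.
        return [{"customer": "a", "amount": 10.0}, {"customer": "a", "amount": 5.0}]

    @task
    def aggregate(rows: list[dict]) -> dict:
        totals: dict[str, float] = {}
        for row in rows:
            totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
        return totals

    @task
    def load(totals: dict) -> None:
        # A real task would write to Snowflake (e.g. via the Snowflake provider).
        print(totals)

    load(aggregate(fetch()))

daily_orders_pipeline()
```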

🪄 Skills: Python, SQL, Snowflake, Airflow, Analytical Skills, Customer service, DevOps, Attention to detail

🔥 Data Engineer
Posted 2024-11-21

📍 Poland

🔍 Healthcare

🏢 Company: Sunscrapers sp. z o.o.

Requirements:
  • At least 3 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Apache Spark.
  • Hands-on experience managing production Spark clusters in Databricks.
  • Experience with CI/CD for Spark data jobs.
  • Great analytical skills, attention to detail, and creative problem-solving skills.
  • Great customer service and troubleshooting skills.

Responsibilities:
  • Design and manage batch data pipelines, including file ingestion, transformation, and Delta Lake/table management (see the sketch below).
  • Implement scalable architectures for batch and streaming workflows.
  • Leverage Microsoft equivalents of BigQuery for efficient querying and data storage.
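
A sketch of one batch ingestion step of the kind described, targeting a Delta table on Databricks; the paths, column names, and table name are hypothetical:

```python
# Batch ingestion sketch: read landed files, clean, append to a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/landing/events/")

cleaned = (
    raw.dropDuplicates(["event_id"])                     # de-duplicate on key
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_id").isNotNull())            # basic quality gate
)

# Databricks ships Delta Lake natively; append into a managed table.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.events")
```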

🪄 Skills: Spark, Analytical Skills, CI/CD, Customer service, Attention to detail


📍 North America, South America, Europe

💸 100000 - 500000 USD per year

🔍 Web3, blockchain

🏢 Company: Edge & Node

Requirements:
  • A self-motivated team member with keen attention to detail.
  • Proactive collaboration with team members and a willingness to adapt to a growing environment.
  • Familiarity and experience with Rust, particularly focusing on data transformation and ingestion.
  • A strong understanding of blockchain data structures and ingestion interfaces.
  • Experience in real-time data handling, including knowledge of reorg handling.
  • Familiarity with blockchain clients like Geth and Reth is a plus.
  • Adaptability to a dynamic, fully remote work environment.
  • Rigorous approach to software development that reflects a commitment to excellence.

Responsibilities:
  • Develop and maintain data ingestion adapters for various blockchain networks and web3 protocols.
  • Implement data ingestion strategies for both historical and recent data.
  • Apply strategies for handling block reorgs (see the sketch below).
  • Optimize the latency of block ingestion at the chain head.
  • Write interfaces with file storage protocols such as IPFS and Arweave.
  • Collaborate with upstream data sources, such as chain clients and tracing frameworks, and monitor the latest upstream developments.
  • Perform data quality checks, cross-checking data across multiple sources and investigating any discrepancies that arise.
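
A conceptual sketch of reorg handling: on a new block whose parent is not our tip, orphaned blocks are rolled back to the fork point before re-ingesting the canonical branch. The role itself calls for Rust; Python is used here only for brevity, and `Block` and `fetch_by_hash` are invented stand-ins for a real chain-client interface:

```python
# Conceptual block-reorg handling (illustrative only).
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    hash: str
    parent_hash: str

def handle_new_block(chain: list[Block], new: Block, fetch_by_hash) -> None:
    """Append `new` to `chain`, rolling back orphaned blocks on a reorg."""
    ancestor = new
    # Walk the new block's ancestry back until it meets our chain's tip.
    while chain and chain[-1].hash != ancestor.parent_hash:
        if chain[-1].number >= ancestor.number - 1:
            chain.pop()          # orphaned block: its ingested data is undone
        else:
            ancestor = fetch_by_hash(ancestor.parent_hash)
    # Canonical blocks between the fork point and `new` would be re-ingested here.
    chain.append(new)
```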

🪄 Skills: Software Development, Blockchain, Data Structures, Rust, Collaboration, Attention to detail

Posted 2024-11-15
🔥 Lead Data Engineer
Posted 2024-11-07

📍 North America, Latin America, Europe

🔍 Data consulting

Requirements:
  • Bachelor’s degree in engineering, computer science, or an equivalent area.
  • 5+ years in related technical roles such as data management, database development, and ETL.
  • Expertise in evaluating and integrating data ingestion technologies.
  • Experience in designing and developing data warehouses with various platforms.
  • Proficiency in building ETL/ELT ingestion pipelines with tools like DataStage or Informatica.
  • Cloud experience on AWS; Azure and GCP experience is a plus.
  • Proficiency in Python scripting; Scala is also required.

Responsibilities:
  • Designing and developing Snowflake Data Cloud solutions.
  • Creating data ingestion pipelines and working on data architecture.
  • Ensuring data governance and security throughout customer projects.
  • Leading technical teams and collaborating with clients on data initiatives.

🪄 Skills: AWS, Leadership, Python, SQL, Agile, ETL, Oracle, Snowflake, Data engineering, Spark, Collaboration


📍 Poland, Bulgaria, Portugal

🔍 Art market and blockchain technology

🏢 Company: Dev.Pro

Requirements:
  • 4+ years of experience in data engineering, encompassing data extraction, transformation, and migration.
  • Advanced experience with data extraction from unstructured files and legacy systems.
  • Proven expertise in migrating data from file-based storage systems to Google Cloud Platform.
  • Proficiency with relational databases, specifically MariaDB or MySQL, and cloud-native solutions such as Google Cloud Storage and BigQuery.
  • Strong programming skills in Python, focusing on data manipulation and automation.
  • Extensive experience with ETL/ELT pipeline development and workflow orchestration tools.
  • Hands-on experience with batch and real-time data processing frameworks.
  • In-depth understanding of data modeling, warehousing, and scalable data architectures.
  • Practical experience in developing data mastering tools for data cleaning.
  • Expertise in RDBMS functionalities and ability to handle PII data.

Responsibilities:
  • Take full responsibility for the data warehouse and pipeline, including planning, coding, reviews, and delivery to production.
  • Migrate data from existing file storage systems to Google Cloud Platform (see the sketch below).
  • Design, develop, and maintain ETL/ELT pipelines for data migration and integration.
  • Collaborate with the team to re-implement custom data mastering tools for improved data cleaning.
  • Evaluate existing technology stack and provide recommendations for improvements.
  • Develop a new scraper system to extract and aggregate data from external sources.
  • Ensure integrity, consistency, and quality of data through optimized processes.
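
A sketch of one file-to-GCP migration step of the kind described, using the official google-cloud-storage and google-cloud-bigquery clients; the bucket, dataset, and table names are hypothetical:

```python
# Stage a legacy file in GCS, then load it into BigQuery.
from google.cloud import bigquery, storage

def migrate_file(local_path: str) -> None:
    # 1) Stage the legacy file in Google Cloud Storage.
    bucket = storage.Client().bucket("legacy-archive")
    bucket.blob(f"raw/{local_path}").upload_from_filename(local_path)

    # 2) Load it into BigQuery, letting the service infer the schema.
    client = bigquery.Client()
    job = client.load_table_from_uri(
        f"gs://legacy-archive/raw/{local_path}",
        "analytics.raw_legacy",  # dataset.table
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            autodetect=True,
            write_disposition="WRITE_APPEND",
        ),
    )
    job.result()  # block until the load job finishes
```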

🪄 Skills: Python, Software Development, Apache Airflow, Blockchain, Elasticsearch, ETL, JavaScript, Machine Learning, MongoDB, MySQL, Tableau, Algorithms, Cassandra, Data engineering, Grafana, Prometheus, RDBMS, NoSQL

Posted 2024-11-07

📍 Poland

🧭 Contract

💸 100000 - 140000 USD per year

🔍 Retail AI solutions

🏢 Company: Focal Systems

Requirements:
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering with focus on data transformation and integration.
  • Proficiency in MySQL, Redis, Google BigQuery, MongoDB.
  • Strong skills in data profiling, cleansing, and transformation techniques.
  • Proficiency in Python and SQL for data manipulation.
  • Experience with ETL tools for large-scale data processing.
  • Demonstrated ability with diverse data formats like CSV, JSON, XML, APIs.
  • Advanced SQL knowledge for transformations and query optimization.
  • Expertise in data modeling and schema design.
  • Strong analytical skills for resolving data inconsistencies.
  • Experience with data mapping and reconciliation.
  • Proficiency in writing data transformation scripts.
  • Excellent attention to detail.

Responsibilities:
  • Partner with the sales team for customer integration and rollout.
  • Design and implement data transformation processes.
  • Analyze complex data structures and formats.
  • Develop and maintain ETL pipelines for data ingestion.
  • Perform data quality assessments and implement cleaning procedures.
  • Optimize transformation queries for performance.
  • Collaborate with cross-functional teams for data requirements.
  • Create and maintain documentation for data processes.
  • Implement data validation checks (see the sketch below).
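
A sketch of a reusable validation check of the kind described, using pandas; the rule set, column names, and file names are hypothetical:

```python
# Rule-driven data validation: return the rows that fail any check.
import pandas as pd

RULES = {
    "sku":      lambda s: s.str.match(r"^[A-Z0-9-]+$", na=False),
    "quantity": lambda s: s.ge(0),
    "price":    lambda s: s.gt(0),
}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Collect rows violating any rule, tagged with the failed column."""
    failures = []
    for column, rule in RULES.items():
        bad = df[~rule(df[column])]
        if not bad.empty:
            failures.append(bad.assign(failed_check=column))
    return pd.concat(failures) if failures else df.iloc[0:0]

inventory = pd.read_json("inventory_feed.json")   # feeds also arrive as CSV/XML
problems = validate(inventory)
problems.to_csv("validation_failures.csv", index=False)
```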

🪄 Skills: Python, SQL, ETL, Git, MongoDB, MySQL, Data engineering, Redis

Posted 2024-11-07
🔥 Senior Data Engineer
Posted 2024-11-07

📍 Any European country

🧭 Full-Time

🔍 Software development

🏢 Company: Janea Systems

Requirements:
  • Proven experience as a data engineer, preferably with at least 3 years of relevant experience.
  • Experience designing cloud native solutions and implementations with Kubernetes.
  • Experience with Airflow or similar pipeline orchestration tools.
  • Strong Python programming skills.
  • Experience collaborating with Data Science and Engineering teams in production environments.
  • Solid understanding of SQL and relational data modeling schemas.
  • Preference for experience with Databricks or Spark.
  • Familiarity with modern data stack design and data lifecycle management.
  • Experience with distributed systems, microservices architecture, and cloud platforms like AWS, Azure, Google Cloud.
  • Excellent problem-solving skills and strong communication skills.

Responsibilities:
  • Develop and maintain data pipelines using Databricks, Airflow, or similar orchestration systems.
  • Design and implement cloud-native solutions using Kubernetes for high availability.
  • Gather product data requirements and implement solutions to ingest and process data for applications.
  • Collaborate with Data Science and Engineering teams to optimize production-ready applications.
  • Curate data from various sources for data scientists and maintain documentation.
  • Design modern data stack for data scientists and ML engineers.

🪄 Skills: AWS, Python, Software Development, SQL, Kubernetes, Airflow, Azure, Data science, Spark, Collaboration


📍 Poland

🔍 Financial services industry

🏢 Company: Capco

Requirements:
  • Extensive experience with Databricks, including ETL processes and data migration.
  • Experience with additional cloud platforms like AWS, Azure, or GCP.
  • Strong knowledge of data warehousing concepts, data modeling, and SQL.
  • Proficiency in programming languages such as Python, SQL, and scripting languages.
  • Knowledge of data governance frameworks and data security principles.
  • Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
  • Bachelor's or Master's degree in Computer Science or a related field.

Responsibilities:
  • Design, develop, and implement robust data architecture solutions utilizing modern data platforms like Databricks.
  • Ensure scalable, reliable, and secure data environments that meet business requirements and support advanced analytics.
  • Lead the migration of data from traditional RDBMS systems to Databricks environments (see the sketch below).
  • Architect and design scalable data pipelines and infrastructure to support the organization's data needs.
  • Develop and manage ETL processes using Databricks to ensure efficient data extraction, transformation, and loading.
  • Optimize ETL workflows to enhance performance and maintain data integrity.
  • Monitor and optimize performance of data systems to ensure reliability, scalability, and cost-effectiveness.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Define best practices for data engineering and ensure adherence to them.
  • Evaluate and implement new technologies to improve data pipeline efficiency.
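
A sketch of one table's lift from an RDBMS into Databricks of the kind described, via Spark's JDBC reader into a Delta table; connection details, bounds, and table names are hypothetical:

```python
# Parallel JDBC extract from a legacy RDBMS, landed as a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://legacy-db:5432/erp")
    .option("dbtable", "public.customers")
    .option("user", "etl_user")
    .option("password", "***")
    .option("numPartitions", 8)             # parallelize the extract
    .option("partitionColumn", "customer_id")
    .option("lowerBound", 1)
    .option("upperBound", 10_000_000)
    .load()
)

# Land the table as Delta; downstream transformations then run in Databricks.
source.write.format("delta").mode("overwrite").saveAsTable("bronze.customers")
```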

🪄 Skills: AWS, Docker, Leadership, Python, SQL, ETL, GCP, Kubernetes, Azure, Data engineering, RDBMS, Analytical Skills

Posted 2024-11-07

📍 UK, EU

🔍 Consultancy

🏢 Company: The Dot Collective

Requirements:
  • Advanced knowledge of distributed computing with Spark.
  • Extensive experience with AWS data offerings such as S3, Glue, Lambda.
  • Ability to build CI/CD processes, including Infrastructure as Code (e.g., Terraform).
  • Expert Python and SQL skills.
  • Agile ways of working.

Responsibilities:
  • Leading a team of data engineers.
  • Designing and implementing cloud-native data platforms.
  • Owning and managing technical roadmap.
  • Engineering well-tested, scalable, and reliable data pipelines (see the sketch below).
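
A sketch of how the AWS offerings named above (S3, Glue, Lambda) typically fit together: a Lambda handler that starts a Glue job when a new object lands in S3. boto3's start_job_run is a real Glue API call; the Glue job name and argument key are hypothetical:

```python
# Lambda handler: trigger a Glue job for each newly landed S3 object.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # `event` follows S3's standard notification format.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="transform-landing-zone",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
```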

🪄 Skills: AWS, Python, SQL, Agile, SCRUM, Spark, Collaboration, Agile methodologies

Posted 2024-11-07
🔥 Data Engineer
Posted 2024-10-18

📍 Poland

🧭 Full-Time

🔍 Real estate and investment management

Requirements:
  • Bachelor's or master's degree in BI, computer science, econometrics, operations research, or statistics preferred, or equivalent working experience.
  • Minimum 1 year of work experience as a data engineer, BI analyst, business analyst, or data analyst required.
  • Fluent in English.
  • Business analysis, consulting/advisory and/or project or client management experience.
  • Working experience in BI tools (e.g. Tableau).
  • Querying databases (SQL).
  • Experience with ETL tools (e.g., Alteryx, Azure Data Factory) is desired.
  • Working experience with an analytical toolkit (e.g., SPSS, SAS, R, Python) would be a plus.
  • Strong analytical mindset with the ability to interpret, visualize and tell the story behind the data.
  • Ability to deliver high-quality work in a timely manner under pressure.
  • Excellent written and verbal communication skills.

Responsibilities:
  • Understand business/client requirements to provide relevant solutions.
  • Manage and oversee projects conducted by the team.
  • Design, develop and deploy BI solutions.
  • Source data from internal MS SQL Server and external sources, leveraging ETL tools.
  • Setup and support data governance and data integration processes.
  • Advise on optimal BI/analytics strategy and its execution.
  • Perform analyses leveraging analytical techniques and big data environments.
  • Communicate with internal and external stakeholders and manage their expectations.
  • Develop expertise and share it with team members, supporting junior members.

🪄 Skills: SQL, Business Analysis, Business Intelligence, ETL, Tableau, Azure, Data engineering, Communication Skills
