Data Engineer

Posted 2024-11-07

πŸ“ Location: Poland

πŸ” Industry: Financial services

🏒 Company: Softeta

πŸͺ„ Skills: Python, SQL, Agile, ETL, GCP, SCRUM, Tableau, CI/CD, Terraform

Requirements:
  • Proficiency in Google Cloud Platform (GCP) and experience with GCP services including BigQuery, Dataform, Dataflow, and Cloud Functions (see the sketch after this list).
  • Advanced programming skills in Python.
  • Competence in SQL for data manipulation and querying.
  • Familiarity with version control systems such as GitLab or GitHub.
  • Experience with Tableau or Looker.
  • Knowledge of Terraform as an Infrastructure as Code tool.
  • Understanding of ETL/ELT processes, data modeling, and data quality management.
  • Basic understanding of Continuous Integration/Continuous Deployment (CI/CD) practices.
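
For illustration, a minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client library; the `my_project.sales.transactions` table is hypothetical, and ambient GCP credentials are assumed:

```python
# Minimal BigQuery query from Python (hypothetical dataset/table names).
# Assumes `pip install google-cloud-bigquery` and ambient GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default project from credentials

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my_project.sales.transactions`
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""

# Iterating the QueryJob waits for the query and streams the result rows.
for row in client.query(query):
    print(row["customer_id"], row["total_spend"])
```
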
Responsibilities:
  • Utilize the data platform to build reports and analytics.
  • Help establish data products and incorporate them into our reports and marketplace.
  • Collaborate with architects, product owners, and other developers.

Related Jobs

πŸ“ Poland

🧭 Full-Time

πŸ” Software development

🏒 Company: Sunscrapers sp. z o.o.

Requirements:
  • At least 5 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Python and SQL.
  • Hands-on experience with dbt and Snowflake.
  • Experience building data pipelines with Airflow or alternative solutions.
  • Strong understanding of data modeling techniques such as the Kimball star schema (see the sketch after this list).
  • Great analytical skills and attention to detail.
  • Creative problem-solving skills.
  • Great customer service and troubleshooting skills.
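
As a rough illustration of the dimensional-modeling idea behind a Kimball star schema, a minimal pandas sketch that splits a flat orders extract into one dimension table and one fact table; all column names are hypothetical:

```python
# Splitting a flat extract into a star schema: one dimension + one fact table.
# A minimal sketch with hypothetical column names, not a production model.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer_id": [10, 10, 20],
    "customer_name": ["Ada", "Ada", "Grace"],
    "amount": [25.0, 40.0, 15.0],
})

# Dimension: one row per customer, descriptive attributes only.
dim_customer = (
    orders[["customer_id", "customer_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
)

# Fact: one row per order, measures plus foreign keys into the dimensions.
fact_orders = orders[["order_id", "customer_id", "amount"]]

print(dim_customer)
print(fact_orders)
```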

Responsibilities:
  • Model datasets and schemas for consistency and easy access.
  • Design and implement data transformations and data marts.
  • Integrate third-party systems and external data sources into the data warehouse.
  • Build data flows for fetching, aggregating, and modeling data using batch pipelines.

πŸͺ„ Skills: Python, SQL, Snowflake, Airflow, Analytical Skills, Customer service, DevOps, Attention to detail

Posted 2024-11-21

πŸ“ Poland

πŸ” Healthcare

🏒 Company: Sunscrapers sp. z o.o.

Requirements:
  • At least 3 years of professional experience as a data engineer.
  • Undergraduate or graduate degree in Computer Science, Engineering, Mathematics, or similar.
  • Excellent command of spoken and written English (at least C1).
  • Strong professional experience with Apache Spark.
  • Hands-on experience managing production Spark clusters in Databricks.
  • Experience with CI/CD of data jobs in Spark.
  • Great analytical skills, attention to detail, and creative problem-solving skills.
  • Great customer service and troubleshooting skills.

Responsibilities:
  • Design and manage batch data pipelines, including file ingestion, transformation, and Delta Lake/table management (see the sketch after this list).
  • Implement scalable architectures for batch and streaming workflows.
  • Leverage Microsoft equivalents of BigQuery for efficient querying and data storage.
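
A minimal PySpark sketch of the batch pattern described above: ingest files, transform, and write out a table. Paths and columns are hypothetical; on Databricks the final write would typically target the Delta format rather than Parquet:

```python
# Batch pipeline sketch: ingest CSV files, transform, write out a table.
# Hypothetical paths/columns; assumes a local or cluster SparkSession.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("/data/incoming/orders/*.csv")

daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("total_amount"))
)

# On Databricks this would usually be .format("delta") into a managed table.
daily.write.mode("overwrite").parquet("/data/marts/daily_orders")
```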

πŸͺ„ Skills: Spark, Analytical Skills, CI/CD, Customer service, Attention to detail

Posted 2024-11-21

πŸ“ North America, South America, Europe

πŸ’Έ 100000 - 500000 USD per year

πŸ” Web3, blockchain

🏒 Company: Edge & Node

Requirements:
  • A self-motivated team member with keen attention to detail.
  • Proactive collaboration with team members and a willingness to adapt to a growing environment.
  • Familiarity and experience with Rust, particularly for data transformation and ingestion.
  • A strong understanding of blockchain data structures and ingestion interfaces.
  • Experience in real-time data handling, including knowledge of reorg handling.
  • Familiarity with blockchain clients like Geth and Reth is a plus.
  • Adaptability to a dynamic and fully-remote work environment.
  • A rigorous approach to software development that reflects a commitment to excellence.

Responsibilities:
  • Develop and maintain data ingestion adapters for various blockchain networks and web3 protocols.
  • Implement data ingestion strategies for both historical and recent data.
  • Apply strategies for handling block reorgs (see the sketch after this list).
  • Optimize the latency of block ingestion at the chain head.
  • Write interfaces with file storage protocols such as IPFS and Arweave.
  • Collaborate with upstream data sources, such as chain clients and tracing frameworks, and monitor the latest upstream developments.
  • Perform data quality checks, cross-checking data across multiple sources and investigating any discrepancies that arise.
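
The role is Rust-focused, but to keep all sketches on this page in one language, here is a minimal Python illustration of the reorg-handling idea: keep the chain of applied blocks, and when a new block's parent hash does not match the current tip, roll back orphaned blocks to the fork point before appending. The `Block` type and hash values are hypothetical:

```python
# Reorg-handling sketch: the ingester keeps the chain of blocks it has
# applied; when a new block's parent hash does not match the current tip,
# it rolls back orphaned blocks to the fork point before appending.
# Hypothetical Block type; a real ingester would persist this state.
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    hash: str
    parent_hash: str

def ingest(chain: list[Block], block: Block) -> None:
    # Pop blocks from the old branch until the new block's parent is the tip.
    while chain and chain[-1].hash != block.parent_hash:
        orphaned = chain.pop()
        print(f"rolling back orphaned block {orphaned.number} ({orphaned.hash})")
        # ...undo any downstream effects of `orphaned` here...
    chain.append(block)

chain: list[Block] = []
ingest(chain, Block(1, "0xa1", "0x00"))   # genesis child
ingest(chain, Block(2, "0xb2", "0xa1"))   # extends the tip
ingest(chain, Block(2, "0xc2", "0xa1"))   # competing block: one-deep reorg
print([b.hash for b in chain])            # ['0xa1', '0xc2']
```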

πŸͺ„ Skills: Software Development, Blockchain, Data Structures, Rust, Collaboration, Attention to detail

Posted 2024-11-15

πŸ“ North America, Latin America, Europe

πŸ” Data consulting

Requirements:
  • Bachelor’s degree in engineering, computer science, or an equivalent field.
  • 5+ years in related technical roles such as data management, database development, and ETL.
  • Expertise in evaluating and integrating data ingestion technologies.
  • Experience in designing and developing data warehouses on various platforms.
  • Proficiency in building ETL/ELT ingestion pipelines with tools like DataStage or Informatica.
  • Cloud experience on AWS; Azure and GCP experience is a plus.
  • Proficiency in Python scripting; Scala is required.

Responsibilities:
  • Designing and developing Snowflake Data Cloud solutions.
  • Creating data ingestion pipelines and working on data architecture.
  • Ensuring data governance and security throughout customer projects.
  • Leading technical teams and collaborating with clients on data initiatives.

πŸͺ„ Skills: AWS, Leadership, Python, SQL, Agile, ETL, Oracle, Snowflake, Data engineering, Spark, Collaboration

Posted 2024-11-07

πŸ“ Poland

🧭 Contract

πŸ’Έ 100000 - 140000 USD per year

πŸ” Retail AI solutions

🏒 Company: Focal Systems

Requirements:
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering with a focus on data transformation and integration.
  • Proficiency in MySQL, Redis, Google BigQuery, and MongoDB.
  • Strong skills in data profiling, cleansing, and transformation techniques.
  • Proficiency in Python and SQL for data manipulation.
  • Experience with ETL tools for large-scale data processing.
  • Demonstrated ability to work with diverse data formats such as CSV, JSON, XML, and APIs.
  • Advanced SQL knowledge for transformations and query optimization.
  • Expertise in data modeling and schema design.
  • Strong analytical skills for resolving data inconsistencies.
  • Experience with data mapping and reconciliation.
  • Proficiency in writing data transformation scripts.
  • Excellent attention to detail.

Responsibilities:
  • Partner with the sales team on customer integration and rollout.
  • Design and implement data transformation processes.
  • Analyze complex data structures and formats.
  • Develop and maintain ETL pipelines for data ingestion.
  • Perform data quality assessments and implement cleaning procedures.
  • Optimize transformation queries for performance.
  • Collaborate with cross-functional teams on data requirements.
  • Create and maintain documentation for data processes.
  • Implement data validation checks (see the sketch after this list).
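
A minimal pandas sketch of the kind of data validation check described above: assert basic invariants on an incoming extract before loading it. The columns and rules are hypothetical:

```python
# Data validation sketch: check basic invariants on an extract before load.
# Hypothetical columns and rules; real pipelines would log/quarantine rows.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [25.0, -40.0, 15.0, None],
})

problems = []
if orders["order_id"].duplicated().any():
    problems.append("duplicate order_id values")
if (orders["amount"] < 0).any():
    problems.append("negative amounts")
if orders["amount"].isna().any():
    problems.append("missing amounts")

if problems:
    raise ValueError("validation failed: " + "; ".join(problems))
```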

πŸͺ„ Skills: Python, SQL, ETL, Git, MongoDB, MySQL, Data engineering, Redis

Posted 2024-11-07

πŸ“ Any European country

🧭 Full-Time

πŸ” Software development

🏒 Company: Janea Systems

Requirements:
  • Proven experience as a data engineer, preferably 3+ years of relevant experience.
  • Experience designing cloud-native solutions and implementations with Kubernetes.
  • Experience with Airflow or similar pipeline orchestration tools.
  • Strong Python programming skills.
  • Experience collaborating with Data Science and Engineering teams in production environments.
  • Solid understanding of SQL and relational data modeling schemas.
  • Experience with Databricks or Spark preferred.
  • Familiarity with modern data stack design and data lifecycle management.
  • Experience with distributed systems, microservices architecture, and cloud platforms such as AWS, Azure, and Google Cloud.
  • Excellent problem-solving and communication skills.

Responsibilities:
  • Develop and maintain data pipelines using Databricks, Airflow, or similar orchestration systems (see the sketch after this list).
  • Design and implement cloud-native solutions using Kubernetes for high availability.
  • Gather product data requirements and implement solutions to ingest and process data for applications.
  • Collaborate with Data Science and Engineering teams to optimize production-ready applications.
  • Curate data from various sources for data scientists and maintain documentation.
  • Design a modern data stack for data scientists and ML engineers.
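
A minimal sketch of the shape such an Airflow pipeline usually takes, written against Airflow 2.4+ (where the `schedule` argument replaced `schedule_interval`); the task bodies are hypothetical placeholders:

```python
# Minimal Airflow 2.x DAG sketch: three dependent tasks run daily.
# Hypothetical task bodies; assumes `pip install apache-airflow`.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and aggregate the raw data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```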

πŸͺ„ Skills: AWS, Python, Software Development, SQL, Kubernetes, Airflow, Azure, Data science, Spark, Collaboration

Posted 2024-11-07

πŸ“ Poland

πŸ” Financial services industry

🏒 Company: Capco

Requirements:
  • Extensive experience with Databricks, including ETL processes and data migration.
  • Experience with additional cloud platforms like AWS, Azure, or GCP.
  • Strong knowledge of data warehousing concepts, data modeling, and SQL.
  • Proficiency in programming languages such as Python, SQL, and scripting languages.
  • Knowledge of data governance frameworks and data security principles.
  • Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
  • Bachelor's or Master's degree in Computer Science or a related field.

Responsibilities:
  • Design, develop, and implement robust data architecture solutions utilizing modern data platforms like Databricks.
  • Ensure scalable, reliable, and secure data environments that meet business requirements and support advanced analytics.
  • Lead the migration of data from traditional RDBMS systems to Databricks environments.
  • Architect and design scalable data pipelines and infrastructure to support the organization's data needs.
  • Develop and manage ETL processes using Databricks to ensure efficient data extraction, transformation, and loading.
  • Optimize ETL workflows to enhance performance and maintain data integrity.
  • Monitor and optimize performance of data systems to ensure reliability, scalability, and cost-effectiveness.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Define best practices for data engineering and ensure adherence to them.
  • Evaluate and implement new technologies to improve data pipeline efficiency.

πŸͺ„ Skills: AWS, Docker, Leadership, Python, SQL, ETL, GCP, Kubernetes, Azure, Data engineering, RDBMS, Analytical Skills

Posted 2024-11-07

πŸ“ UK, EU

πŸ” Consultancy

🏒 Company: The Dot Collective

Requirements:
  • Advanced knowledge of distributed computing with Spark.
  • Extensive experience with AWS data offerings such as S3, Glue, and Lambda.
  • Ability to build CI/CD processes, including Infrastructure as Code (e.g., Terraform).
  • Expert Python and SQL skills.
  • Agile ways of working.

Responsibilities:
  • Leading a team of data engineers.
  • Designing and implementing cloud-native data platforms.
  • Owning and managing the technical roadmap.
  • Engineering well-tested, scalable, and reliable data pipelines.

πŸͺ„ Skills: AWS, Python, SQL, Agile, SCRUM, Spark, Collaboration, Agile methodologies

Posted 2024-11-07

πŸ“ Cyprus, Malta, USA, Thailand, Indonesia, Hong Kong, Japan, Australia, Poland, Israel, Turkey, Latvia

🧭 Full-Time

πŸ” Social discovery technology

🏒 Company: Social Discovery Group

Requirements:
  • 3+ years of professional experience as a Data Engineer.
  • Confident knowledge of MS SQL, including window functions, subqueries, and various joins (see the sketch after this list).
  • Excellent knowledge of Python.
  • Basic query optimization skills.
  • Experience with Airflow.
  • Nice to have: experience with Google Cloud Platform (BigQuery, Storage, Pub/Sub).
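
The SQL dialect differs, but the window-function pattern is the same across engines; here is a minimal, runnable illustration using Python's built-in sqlite3 module (window functions need SQLite 3.25+). The table and values are hypothetical:

```python
# Window-function sketch: rank each employee's salary within their team.
# Uses Python's built-in sqlite3; requires SQLite 3.25+ for window functions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE salaries (team TEXT, name TEXT, salary INTEGER);
    INSERT INTO salaries VALUES
        ('data', 'Ada', 120), ('data', 'Grace', 110),
        ('web',  'Alan', 100), ('web',  'Edsger', 130);
""")

rows = conn.execute("""
    SELECT team, name, salary,
           ROW_NUMBER() OVER (PARTITION BY team ORDER BY salary DESC) AS rnk
    FROM salaries
""").fetchall()

for team, name, salary, rnk in rows:
    print(team, name, salary, rnk)
```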

Responsibilities:
  • Design, develop, and maintain SQL data warehouses, including creation and optimization of stored procedures.
  • Build and enhance reports using SSRS and create dynamic dashboards with Superset for actionable insights.
  • Develop and manage efficient data pipelines using Airflow to ensure smooth data integration and automation.

πŸͺ„ Skills: Python, SQL, Apache Airflow, GCP, Data engineering

Posted 2024-10-23
πŸ”₯ Data Engineer

πŸ“ Poland

πŸ” Consulting

🏒 Company: Infosys Consulting - Europe

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience as a Data Engineer or in a similar role on large-scale data implementations.
  • Strong experience with SQL and relational database systems (MySQL, PostgreSQL, Oracle).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Minimum of 5 years of hands-on experience with ETL tools like Apache NiFi or Talend.
  • Familiarity with big data technologies like Hadoop and Spark.
  • Minimum of 3 years with cloud-based data services (AWS, Azure, Google Cloud).
  • Knowledge of data modeling, database design, and architecture best practices.
  • Experience with version control (e.g., Git) and agile practices.

Responsibilities:
  • Develop, construct, test, and maintain scalable data pipelines for large data sets.
  • Integrate data from different source systems into the data lake or warehouse.
  • Implement ETL processes and ensure data quality and integrity.
  • Design and implement database and data warehousing solutions.
  • Work with cloud platforms to set up data infrastructure.
  • Collaborate with teams and document workflows.
  • Implement data governance and compliance measures.
  • Monitor performance and continuously improve processes.
  • Automate tasks and develop tools for data management.

πŸͺ„ Skills: AWS, Docker, Leadership, PostgreSQL, Python, SQL, Agile, Business Intelligence, DynamoDB, ETL, Git, Hadoop, Java, Jenkins, Kafka, Kubernetes, Machine Learning, MongoDB, MySQL, Oracle, Strategy, Azure, Cassandra, Data engineering, Data science, NoSQL, Spark, Communication Skills, Collaboration, CI/CD

Posted 2024-10-21