
Senior Data Engineer

Posted 10 days ago


πŸ’Ž Seniority level: Senior

πŸ“ Location: Malta

πŸ” Industry: IGaming

🏒 Company: The Mill Adventure · πŸ‘₯ 11-50 · Internet, Gaming, Information Technology, Software

πŸ—£οΈ Languages: English

⏳ Experience: Proven experience as a Senior Software Engineer

πŸͺ„ Skills: AWS, Node.js, DynamoDB, ETL, JavaScript, TypeScript, Data engineering, CI/CD

Requirements:
  • Knowledge of JavaScript, TypeScript, and the Node.js ecosystem.
  • Previous experience working with AWS solutions including DynamoDB, Kinesis, Lambda, Quicksight, S3, etc.
  • Experience using automated testing frameworks.
  • An analytical mind and proactive attitude.
  • Ability to work independently.
  • Self-management and communication skills.
  • Proven experience as a Senior Software Engineer working on back-end systems.
  • Experience with event-driven architectures.
  • Experience developing highly available and fault-tolerant systems.
Responsibilities:
  • Ensure timely and accurate data send-outs to meet stringent business and compliance requirements.
  • Maintain reporting and API integrations with iGaming regulatory bodies for various jurisdictions.
  • Incorporate data from the gaming platform into reports and analyses.
  • Review platform changes affecting data and adapt aggregation jobs.
  • Redesign datasets to enhance performance and reduce operational costs.
  • Maintain and improve transformation logic in ETL aggregation jobs.
  • Update ETL orchestration tools to align with industry standards.
  • Collaborate with BI team to design datasets for reporting.
  • Improve automation code for reporting templates, analyses, and dashboards.
  • Maintain AWS Lambda data ingestion and validation service.
  • Address ad hoc data requests and assist with data investigations.
  • Handle affiliate provider integration and improvements.
  • Participate in data migrations and assist with ingesting external data into the data lake.

Related Jobs


πŸ“ South Africa, Mauritius, Kenya, Nigeria

πŸ” Technology, Marketplaces

Requirements:
  • BSc degree in Computer Science, Information Systems, Engineering, or a related technical field, or equivalent work experience.
  • 3+ years related work experience.
  • Minimum of 2 years' experience building and optimizing 'big data' pipelines and architectures, and maintaining data sets.
  • Experienced in Python.
  • Experienced in SQL (PostgreSQL, MS SQL).
  • Experienced in using cloud services: AWS, Azure or GCP.
  • Proficiency in version control, CI/CD and GitHub.
  • Understanding/experience in Glue and PySpark highly desirable.
  • Experience in managing data life cycle.
  • Proficiency in manipulating, processing and architecting large disconnected data sets for analytical requirements.
  • Ability to maintain and optimise processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Good understanding of data management principles - data quality assurance and governance.
  • Strong analytical skills related to working with unstructured datasets.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' datastores.
  • Strong attention to detail.
  • Good communication and interpersonal skills.
Responsibilities:
  • Suggest efficiencies and implement internal process improvements, automating manual processes.
  • Implement enhancements and new features across data systems.
  • Improve and streamline processes within data systems with support from the Senior Data Engineer.
  • Test CI/CD processes to ensure optimal data pipelines.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build highly efficient ETL processes.
  • Develop and conduct unit tests on data pipelines as well as ensuring data consistency.
  • Develop and maintain automated monitoring solutions.
  • Support reporting and analytics infrastructure.
  • Maintain data quality and data governance, and handle overall upkeep of data infrastructure systems.
  • Maintain data warehouse and data lake metadata, data catalogue, and user documentation for internal business users.
  • Ensure best practices are implemented and maintained across databases.

AWS, PostgreSQL, Python, SQL, ETL, Git, CI/CD

Posted 21 days ago

πŸ“ UK, EU

πŸ” Consultancy

🏒 Company: The Dot Collective · πŸ‘₯ 11-50 · Cloud Computing, Analytics, Information Technology

Requirements:
  • Advanced knowledge of distributed computing with Spark.
  • Extensive experience with AWS data offerings such as S3, Glue, Lambda.
  • Ability to build CI/CD processes, including Infrastructure as Code (e.g. Terraform).
  • Expert Python and SQL skills.
  • Agile ways of working.
Responsibilities:
  • Leading a team of data engineers.
  • Designing and implementing cloud-native data platforms.
  • Owning and managing the technical roadmap.
  • Engineering well-tested, scalable, and reliable data pipelines.

AWS, Python, SQL, Agile, SCRUM, Spark, Collaboration, Agile methodologies

Posted 3 months ago