Senior Data Engineer

Posted about 2 months ago

💎 Seniority level: Senior, 5 years

📍 Location: Sofia, Sofia City Province, Bulgaria

🔍 Industry: Software Development

🏢 Company: Dreamix Ltd.

🗣️ Languages: English

⏳ Experience: 5 years

🪄 Skills: AWS, PostgreSQL, Python, ETL, Hadoop, Kafka, Oracle, Azure, Spark, Data modeling

Requirements:
  • A minimum of 5 years of relevant experience in data engineering
  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • Strong proficiency in Python for scripting and data processing
  • Familiarity with big data technologies such as Hadoop, Spark, and Kafka
  • Experience with cloud platforms (AWS, Azure, or Google Cloud) and their data services
  • Strong understanding of data modeling, database design, and data warehousing concepts, with hands-on experience in databases such as SQL Server, Oracle, or PostgreSQL
  • Excellent problem-solving and communication skills
  • Ability to work independently and collaboratively in a fast-paced environment
Responsibilities:
  • Design, develop, and maintain scalable data pipelines for processing and analyzing large volumes of data
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and ensure data integrity and quality
  • Utilize your expertise in Python for scripting and coding tasks related to data processing and analysis
  • Understand and implement business rules in Python for data transformation
  • Implement ETL processes to integrate data from various sources into data warehouse or data lake solutions
  • Optimize big data storage and processing
  • Troubleshoot and resolve data-related issues, ensuring the reliability and performance of our data infrastructure
  • Follow emerging trends and technologies in the data engineering space and make recommendations for continuous improvement
  • Optimize and tune data workflows for maximum efficiency and scalability
  • Implement data security best practices to protect sensitive information and ensure compliance with data protection regulations
  • Develop and maintain API integrations to facilitate seamless data exchange between systems and applications
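As an illustration of the Python-based business-rule transformation this role describes, here is a minimal sketch; the record shape, field names, and the 30-day staleness rule are hypothetical assumptions, not Dreamix's actual pipeline logic:

```python
from datetime import date, datetime

# Hypothetical business rule: flag orders older than 30 days as "stale".
# Record shape and field names are illustrative assumptions.
def transform(record: dict, today: date) -> dict:
    ordered_on = datetime.strptime(record["ordered_on"], "%Y-%m-%d").date()
    age_days = (today - ordered_on).days
    return {**record, "age_days": age_days, "stale": age_days > 30}

rows = [{"id": 1, "ordered_on": "2024-01-02"}]
cleaned = [transform(r, date(2024, 2, 15)) for r in rows]
```

In a real pipeline this step would typically run inside an orchestrated ETL job rather than over an in-memory list.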

Related Jobs


πŸ“ Worldwide

πŸ” Hospitality

🏒 Company: Lighthouse

  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up-to-date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations and have the ability to implement them
  • Great communication: Regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture
  • Experience with Apache Beam or Apache Spark for distributed data processing or event sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana & Prometheus.
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to enable them to bring their research to production-grade data solutions, using technologies like Airflow, dbt, or MLflow (but not limited to these)
  • As a part of a platform team, you will communicate effectively with teams across the entire engineering organisation, to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 4 days ago

πŸ“ Worldwide

🧭 Full-Time

NOT STATED
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domainsβ€”ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.
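The testing and data-quality duties above are in the spirit of dbt's generic not_null and unique tests; a minimal pure-Python sketch of such checks (table contents and column names are hypothetical):

```python
# Minimal data-quality checks in the spirit of dbt's generic tests.
# Row contents and column names are hypothetical.
def check_not_null(rows, column):
    """Return indices of rows where the column is NULL."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return indices of rows whose column value duplicates an earlier row."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        v = r.get(column)
        if v in seen:
            dupes.append(i)
        seen.add(v)
    return dupes

rows = [{"user_id": 1}, {"user_id": 2}, {"user_id": 2}, {"user_id": None}]
null_violations = check_not_null(rows, "user_id")
dupe_violations = check_unique(rows, "user_id")
```

In dbt itself these checks are declared in YAML on the model and compiled to SQL; the Python version only illustrates the semantics.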

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 5 days ago
🔥 Senior Data Engineer
Posted about 1 month ago

📍 Worldwide

🧭 Full-Time

💸 167,471 USD per year

🔍 Software Development

🏢 Company: Float.com

  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and JavaScript/TypeScript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink).
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
  • Lead technical viability discussions
  • Develop and test proof-of-concepts for this project
  • Analyse existing data
  • Evaluate our data streaming pipeline
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability.
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.
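The scheduling and optimization focus above can be grounded with the textbook greedy interval-scheduling algorithm: repeatedly pick the booking that finishes earliest. This is a standard technique, not Float's actual recommendation engine, and the booking slots are made up:

```python
# Greedy interval scheduling: select the maximum number of
# non-overlapping bookings by always taking the earliest-finishing one.
def max_bookings(intervals):
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # does not overlap the last chosen booking
            chosen.append((start, end))
            last_end = end
    return chosen

slots = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
plan = max_bookings(slots)
```

Real resource recommendation would add per-person availability, skills, and soft constraints, typically via a constraint solver rather than a single greedy pass.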

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted about 1 month ago
🔥 Senior Data Engineer
Posted about 1 month ago

📍 Europe, APAC, Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Docker (👥 251-500, 💰 $105,000,000 Series C about 3 years ago; Developer Tools, Developer Platform, Information Technology, Software)

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture
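A minimal sketch of the "ETL scripts using Python and SQL" requirement above: SQLite stands in for the actual warehouse (Snowflake/BigQuery), and the table schema and event rows are hypothetical:

```python
import sqlite3

# Hypothetical ETL load step: insert cleaned event rows into a warehouse
# table. SQLite stands in for Snowflake/BigQuery; schema is illustrative.
events = [("signup", "2024-05-01"), ("login", "2024-05-02")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, event_date TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", events)
(count,) = conn.execute("SELECT COUNT(*) FROM events").fetchone()
```

Against a real warehouse the same pattern would use that warehouse's connector and bulk-load path (e.g. staged file loads) instead of row-by-row inserts.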

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted about 1 month ago
🔥 Senior Data Engineer
Posted about 2 months ago

📍 United States, EU

🧭 Full-Time

💸 200,000 - 250,000 USD per year

🔍 Crypto, Blockchain

🏢 Company: Phantom (👥 51-100, 💰 $109,000,000 Series B about 3 years ago; Cryptocurrency, Ethereum, Bitcoin, FinTech)

  • 5+ years of experience building data infrastructure
  • Experience in startup environments
  • Deep expertise in Snowflake and dbt
  • Strong background in data modeling and architecture
  • Experience implementing data quality frameworks
  • Expert-level SQL skills and proficiency in Python
  • Design and implement robust data architecture
  • Drive data quality initiatives
  • Lead sophisticated data modeling efforts
  • Build and scale A/B testing frameworks
  • Collaborate with stakeholders for data solutions
  • Mentor teams on data best practices
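The A/B testing work above ultimately rests on standard significance tests such as the two-proportion z-test, sketched here with stdlib math only; the conversion counts are hypothetical:

```python
import math

# Two-proportion z-test for an A/B experiment (standard formula;
# conversion counts below are hypothetical).
def ab_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

A production framework adds sequential-testing corrections, guardrail metrics, and assignment bookkeeping on top of this core calculation.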

Python, SQL, Snowflake, Data modeling, A/B testing

Posted about 2 months ago

πŸ“ Worldwide

πŸ” Event Technology

NOT STATED
NOT STATED

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Kubernetes, Algorithms, Apache Kafka, Data engineering, Data Structures, CI/CD, RESTful APIs, Microservices, Data visualization, Data modeling

Posted 3 months ago