Data Engineer

Posted 14 days ago

💎 Seniority level: Solid track record

📍 Location: Croatia

🏢 Company: Inspiration Commerce Group

🗣️ Languages: English

⏳ Experience: Solid track record

🪄 Skills: PostgreSQL, Python, SQL, Cloud Computing, ETL, Data engineering, Analytical Skills, RESTful APIs, Data modeling

Requirements:
  • You have a solid track record in a data engineering role within a high-growth environment (we move quickly, and want to ensure you're comfortable with being uncomfortable)
  • You have expert-level SQL knowledge — you can write and optimize complex queries and tune database performance for large-scale systems.
  • You are comfortable working with modern database technologies, cloud-based data solutions, ETL/ELT tools, and the data services within cloud platforms.
Responsibilities:
  • Assist in designing, implementing, and overseeing scalable data pipelines and architectures across the organization.
  • Build and maintain reliable ETL processes to ensure efficient data ingestion, transformation, and storage.
  • Streamline and manage data flows and integrations across multiple platforms and applications.
  • Work with large-scale event-level data, aggregating and processing it to drive business intelligence and analytics efforts.
  • Continuously assess and adopt new data technologies and tools to improve our data infrastructure and capabilities.
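The core duties above — ingestion, transformation, and storage of event-level data — can be sketched in miniature. Below is a minimal batch ETL pipeline using only Python's standard library; every table, column, and function name is a hypothetical illustration, not anything taken from the posting:

```python
import sqlite3

def run_etl(raw_events, db_path=":memory:"):
    """Minimal ETL sketch: ingest raw events, aggregate, store results."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, amount REAL)")
    # Extract + light validation: drop malformed records before loading
    clean = [e for e in raw_events if e.get("user_id") and e.get("amount") is not None]
    conn.executemany("INSERT INTO events VALUES (:user_id, :amount)", clean)
    # Transform: aggregate event-level data per user into a reporting table
    conn.execute("""
        CREATE TABLE user_totals AS
        SELECT user_id, SUM(amount) AS total, COUNT(*) AS n
        FROM events GROUP BY user_id
    """)
    conn.commit()
    return {row[0]: (row[1], row[2])
            for row in conn.execute("SELECT user_id, total, n FROM user_totals")}

totals = run_etl([
    {"user_id": "a", "amount": 10.0},
    {"user_id": "a", "amount": 5.0},
    {"user_id": "b", "amount": 2.0},
    {"user_id": None, "amount": 1.0},  # malformed record, dropped
])
# totals == {"a": (15.0, 2), "b": (2.0, 1)}
```

A production version would swap SQLite for a cloud warehouse and add scheduling, incremental loads, and monitoring on top of the same shape.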

Related Jobs


πŸ“ Worldwide

πŸ” Hospitality

🏒 Company: Lighthouse

  • 4+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • You stay up-to-date with industry trends, emerging technologies, and best practices in data engineering.
  • Improve, manage, and teach standards for code maintainability and performance in code you submit and review
  • Ship large features independently, generate architecture recommendations, and implement them
  • Great communication: regularly achieve consensus amongst teams
  • Familiarity with GCP, Kubernetes (GKE preferred), CI/CD tools (GitLab CI preferred), and the concept of Lambda Architecture.
  • Experience with Apache Beam or Apache Spark for distributed data processing or event sourcing technologies like Apache Kafka.
  • Familiarity with monitoring tools like Grafana & Prometheus.
  • Design and develop scalable, reliable data pipelines using the Google Cloud stack.
  • Optimise data pipelines for performance and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Collaborate with the DevOps team to automate deployments and improve developer experience on the data front.
  • Work with data science and analytics teams to enable them to bring their research to production-grade data solutions, using technologies like Airflow, dbt, or MLflow (but not limited to these)
  • As a part of a platform team, you will communicate effectively with teams across the entire engineering organisation, to provide them with reliable foundational data models and data tools.
  • Mentor and provide technical guidance to other engineers working with data.
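The Beam/Spark requirement above comes down to keyed, windowed aggregation of event streams. A pure-Python sketch of that core logic follows — the event fields and window size are hypothetical, and a real pipeline would distribute this work across workers:

```python
from collections import defaultdict

def window_counts(events, window_secs=60):
    """Count events per (key, fixed window) — the core computation a
    Beam/Spark job parallelizes. Event shape is a hypothetical example."""
    counts = defaultdict(int)
    for ev in events:
        # Assign each event to the fixed window containing its timestamp
        window_start = (ev["ts"] // window_secs) * window_secs
        counts[(ev["key"], window_start)] += 1
    return dict(counts)

events = [
    {"key": "page_view", "ts": 5},
    {"key": "page_view", "ts": 59},
    {"key": "page_view", "ts": 61},
    {"key": "click", "ts": 10},
]
result = window_counts(events)
# result == {("page_view", 0): 2, ("page_view", 60): 1, ("click", 0): 1}
```

Beam's windowed `GroupByKey` performs essentially this computation, parallelized and fault-tolerant.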

Python, SQL, Apache Airflow, ETL, GCP, Kubernetes, Apache Kafka, Data engineering, CI/CD, Mentoring, Terraform, Scala, Data modeling

Posted 5 days ago

πŸ“ Worldwide

🧭 Full-Time

NOT STATED
  • Own the design and implementation of cross-domain data models that support key business metrics and use cases.
  • Partner with analysts and data engineers to translate business logic into performant, well-documented dbt models.
  • Champion best practices in testing, documentation, CI/CD, and version control, and guide others in applying them.
  • Act as a technical mentor to other analytics engineers, supporting their development and reviewing their code.
  • Collaborate with central data platform and embedded teams to improve data quality, metric consistency, and lineage tracking.
  • Drive alignment on model architecture across domains, ensuring models are reusable, auditable, and trusted.
  • Identify and lead initiatives to reduce technical debt and modernise legacy reporting pipelines.
  • Contribute to the long-term vision of analytics engineering at Pleo and help shape our roadmap for scalability and impact.

SQL, Data Analysis, ETL, Data engineering, CI/CD, Mentoring, Documentation, Data visualization, Data modeling, Data analytics, Data management

Posted 6 days ago
🔥 Data Engineer
Posted 8 days ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ 145000.0 - 160000.0 USD per year

  • Proficiency in managing MongoDB databases, including performance tuning and maintenance.
  • Experience with cloud-based data warehousing, particularly using BigQuery.
  • Familiarity with DBT for data transformation and modeling.
  • Exposure to tools like Segment for data collection and integration.
  • Basic knowledge of integrating third-party data sources to build a comprehensive data ecosystem.
  • Overseeing our production MongoDB database to ensure optimal performance, reliability, and security.
  • Assisting in the management and optimization of data pipelines into BigQuery, ensuring data is organized and accessible for downstream users.
  • Utilizing DBT to transform raw data into structured formats, making it useful for analysis and reporting.
  • Collaborating on the integration of data from Segment and various third-party sources to create a unified, clean data ecosystem.
  • Working closely with BI, Marketing, and Data Science teams to understand data requirements and ensure our infrastructure meets their needs.
  • Participating in code reviews, learning new tools, and contributing to the refinement of data processes and best practices.

SQL, ETL, MongoDB, Data engineering, Data modeling

Posted 8 days ago
🔥 Junior Data Engineer
Posted about 1 month ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ 90000.0 - 105000.0 CAD per year

πŸ” Blockchain

🏒 Company: FigmentπŸ‘₯ 11-50HospitalityTravel AccommodationsArt

  • At least 1 year of IT experience (including co-op and internship experience)
  • Proficiency in SQL and at least one mature programming language (ideally Python)
  • Strong communication skills
  • Strong ability to write clear, concise and accurate documentation.
  • Strong experience with investigating and resolving data quality issues.
  • Strong skills in data analysis and visualization.
  • Ensure accuracy and reliability in data reporting and analysis.
  • Thrive in a fast-paced environment, adapting to new challenges as they arise.
  • Must have a passion for and desire to learn this space, as you will be working with blockchain data and block explorers on a daily basis.
  • Develop and maintain dashboards and reports.
  • Query databases, review and process data to support data-driven decision-making.
  • Investigate new chains and their corresponding block explorers to figure out ways of collecting data from them.
  • Write instructions for our Data Entry Team and assist the Engineering Manager with coordinating that team.
  • Review ingested data to flag issues and drive the investigation of any data quality issues.
  • When not working on the manual and hybrid processes required for the above, automate them to build growing capacity for other opportunities.
  • Work with internal teams like Engineering, Product, Finance, and Customer Success to deliver tailored data solutions.
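The data-quality review duties in this junior listing — flagging issues in ingested rows — might look like the following sketch. The row schema and field names are invented for illustration, since real block-explorer schemas vary per chain:

```python
def flag_quality_issues(rows, required=("tx_hash", "block_height")):
    """Flag rows with missing required fields or non-positive block heights.
    Field names here are hypothetical examples, not a real chain schema."""
    issues = []
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) in (None, ""):
                issues.append((i, f"missing {field}"))
        height = row.get("block_height")
        if isinstance(height, int) and height <= 0:
            issues.append((i, "non-positive block_height"))
    return issues

rows = [
    {"tx_hash": "0xabc", "block_height": 100},
    {"tx_hash": "", "block_height": 101},      # missing hash → flagged
    {"tx_hash": "0xdef", "block_height": -1},  # impossible height → flagged
]
issues = flag_quality_issues(rows)
# issues == [(1, "missing tx_hash"), (2, "non-positive block_height")]
```

In practice such checks would run inside the ingestion pipeline and feed alerts or a quarantine table rather than a returned list.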

Python, SQL, Cloud Computing, Data Analysis, Git, Snowflake, Data engineering, Troubleshooting, Data visualization

Posted about 1 month ago
🔥 Senior Data Engineer
Posted about 1 month ago

πŸ“ Worldwide

🧭 Full-Time

πŸ’Έ 167471.0 USD per year

πŸ” Software Development

🏒 Company: Float.com

  • Expertise in ML, expert systems, and advanced algorithms (e.g., pattern matching, optimization) with applied experience in Scheduling, Recommendations, or Personalization.
  • Proficient in Python or Java and comfortable with SQL and JavaScript/TypeScript.
  • Experience with large-scale data pipelines and stream processing (e.g., Kafka, Debezium, Flink).
  • Skilled in data integration, cleaning, and validation.
  • Familiar with vector and graph databases (e.g., Neo4j).
  • Lead technical viability discussions:
  • Develop and test proof-of-concepts for this project.
  • Analyse existing data:
  • Evaluate our data streaming pipeline:
  • Lead technical discussions related to optimization, pattern detection, and AI, serving as the primary point of contact for these areas within Float.
  • Develop and implement advanced algorithms to enhance the Resource Recommendation Engine and other product features, initially focused on pattern detection and optimization.
  • Design, implement, and maintain our streaming data architecture to support real-time data processing and analytics, ensuring data integrity and reliability.
  • Establish best practices and standards for optimization, AI, and data engineering development within the organization.
  • Mentor and train team members on optimization, AI, and data engineering concepts and techniques, fostering a culture of continuous learning and innovation.
  • Stay updated with the latest trends and related technologies, and proactively identify opportunities to incorporate them into Float's solutions.

Python, SQL, Kafka, Machine Learning, Algorithms, Data engineering

Posted about 1 month ago
🔥 Senior Data Engineer
Posted about 1 month ago

πŸ“ Europe, APAC, Americas

🧭 Full-Time

πŸ” Software Development

🏒 Company: DockerπŸ‘₯ 251-500πŸ’° $105,000,000 Series C about 3 years agoDeveloper ToolsDeveloper PlatformInformation TechnologySoftware

  • 4+ years of relevant industry experience
  • Experience with data modeling and building scalable pipelines
  • Proficiency with Snowflake or BigQuery
  • Experience with data governance and security controls
  • Experience creating ETL scripts using Python and SQL
  • Familiarity with a cloud ecosystem: AWS/Azure/Google Cloud
  • Experience with Tableau or Looker
  • Manage and develop ETL jobs, warehouse, and event collection tools
  • Build and manage the Central Data Model for reporting
  • Integrate emerging methodologies and technologies
  • Build data pipelines for ML and AI projects
  • Contribute to SOC2 compliance across the data platform
  • Document technical architecture

Python, SQL, ETL, Snowflake, Airflow, Data engineering, Data visualization, Data modeling

Posted about 1 month ago
🔥 Senior Data Engineer
Posted about 2 months ago

πŸ“ United States, EU

🧭 Full-Time

πŸ’Έ 200000.0 - 250000.0 USD per year

πŸ” Crypto, Blockchain

🏒 Company: PhantomπŸ‘₯ 51-100πŸ’° $109,000,000 Series B about 3 years agoCryptocurrencyEthereumBitcoinFinTech

  • 5+ years of experience building data infrastructure
  • Experience in startup environments
  • Deep expertise in Snowflake and dbt
  • Strong background in data modeling and architecture
  • Experience implementing data quality frameworks
  • Expert-level SQL skills and proficiency in Python
  • Design and implement robust data architecture
  • Drive data quality initiatives
  • Lead sophisticated data modeling efforts
  • Build and scale A/B testing frameworks
  • Collaborate with stakeholders for data solutions
  • Mentor teams on data best practices
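"Build and scale A/B testing frameworks" ultimately rests on a significance test over variant conversion counts. A minimal two-proportion z-test sketch using only the standard library — the sample counts below are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversion rates: returns (z, two-sided p)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 120/2400, variant 165/2400
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
```

A real framework layers assignment, sequential-testing corrections, and guardrail metrics on top of this statistical core.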

Python, SQL, Snowflake, Data modeling, A/B testing

Posted about 2 months ago
🔥 Data Engineer
Posted about 2 months ago

πŸ“ Worldwide

🧭 Full-Time

πŸ” Decentralized Computing

🏒 Company: io.netπŸ‘₯ 11-50πŸ’° $30,000,000 Series A about 1 year agoCloud ComputingInformation TechnologyCloud InfrastructureGPU

  • Strong programming skills in Python or Java.
  • Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
  • Knowledge of data pipeline tools like Apache Airflow, Spark, or similar.
  • Familiarity with cloud-based data warehouses (e.g., Redshift, Snowflake).
  • Design and build scalable ETL pipelines to handle large volumes of data.
  • Develop and maintain data models and optimize database schemas.
  • Work with real-time data processing frameworks like Kafka.
  • Ensure data quality, consistency, and reliability across systems.
  • Collaborate with backend engineers and data scientists to deliver insights.
  • Monitor and troubleshoot data workflows to ensure high availability.

AWS, PostgreSQL, Python, SQL, Apache Airflow, Cloud Computing, ETL, Kafka, Data engineering, Data modeling

Posted about 2 months ago
🔥 Sr Data Engineer
Posted about 2 months ago

πŸ“ United States, Europe, India

πŸ” SaaS

  • Extensive experience in developing data and analytics applications in geographically distributed teams
  • Hands-on experience in using modern architectures and frameworks, structured, semi-structured and unstructured data, and programming with Python
  • Hands-on SQL knowledge and experience with relational databases such as MySQL, PostgreSQL, and others
  • Hands-on ETL knowledge and experience
  • Knowledge of commercial data platforms (Databricks, Snowflake) or cloud data warehouses (Redshift, BigQuery)
  • Knowledge of data catalog and MDM tooling (Atlan, Alation, Informatica, Collibra)
  • CI/CD pipelines for continuous deployment (e.g., CloudFormation templates)
  • Knowledge of how machine learning / AI workloads are implemented in batch and streaming, including preparing datasets, training models, and using pre-trained models
  • Exposure to software engineering processes that can be applied to Data Ecosystems
  • Excellent analytical and troubleshooting skills
  • Excellent communication skills
  • Excellent English (both verbal and written)
  • B.S. in Computer Science or equivalent
  • Design and develop our best-in-class cloud platform, working on all parts of the code stack from front-end, REST and asynchronous APIs, back-end application logic, SQL/NoSQL databases and integrations with external systems
  • Develop solutions across the data and analytics stack from ETL and Streaming data
  • Design and develop reusable libraries
  • Enhance strong processes in Data Ecosystem
  • Write unit and integration tests

Python, SQL, Apache Airflow, Cloud Computing, ETL, Machine Learning, Snowflake, Algorithms, Apache Kafka, Data engineering, Data Structures, Communication Skills, Analytical Skills, CI/CD, RESTful APIs, DevOps, Microservices, Excellent communication skills, Data visualization, Data modeling, Data analytics, Data management

Posted about 2 months ago

πŸ“ Europe

🧭 Full-Time

πŸ” Supply Chain Risk Analytics

🏒 Company: Everstream AnalyticsπŸ‘₯ 251-500πŸ’° $50,000,000 Series B about 2 years agoProductivity ToolsArtificial Intelligence (AI)LogisticsMachine LearningRisk ManagementAnalyticsSupply Chain ManagementProcurement

  • Deep understanding of Python, including data manipulation and analysis libraries like Pandas and NumPy.
  • Extensive experience in data engineering, including ETL, data warehousing, and data pipelines.
  • Strong knowledge of AWS services, such as RDS, Lake Formation, Glue, Spark, etc.
  • Experience with real-time data processing frameworks like Apache Kafka/MSK.
  • Proficiency in SQL and NoSQL databases, including PostgreSQL, Opensearch, and Athena.
  • Ability to design efficient and scalable data models.
  • Strong analytical skills to identify and solve complex data problems.
  • Excellent communication and collaboration skills to work effectively with cross-functional teams.
  • Manage and grow a remote team of data engineers based in Europe.
  • Collaborate with Platform and Data Architecture teams to deliver robust, scalable, and maintainable data pipelines.
  • Lead and own data engineering projects, including data ingestion, transformation, and storage.
  • Develop and optimize real-time data processing pipelines using technologies like Apache Kafka/MSK or similar.
  • Design and implement data lakehouses and ETL pipelines using AWS services like Glue or similar.
  • Create efficient data models and optimize database queries for optimal performance.
  • Work closely with data scientists, product managers, and engineers to understand data requirements and translate them into technical solutions.
  • Mentor junior data engineers and share your expertise. Establish and promote best practices.

AWS, PostgreSQL, Python, SQL, ETL, Apache Kafka, NoSQL, Spark, Data modeling

Posted 2 months ago