
Data Engineer

Posted 1 day ago


πŸ’Ž Seniority level: Junior, 2+ years

πŸ” Industry: AI

🏒 Company: Prolific

πŸ—£οΈ Languages: English

⏳ Experience: 2+ years

Requirements:
  • 2+ years of hands-on experience deploying production-quality code, with proficiency in Python for data processing and related packages.
  • Deep understanding of SQL and analytical data warehouses (Snowflake, Redshift preferred) with proven experience implementing ETL/ELT best practices at scale.
  • Hands-on experience with data pipeline tools (Airflow, dbt) and a strong ability to optimise for performance and reliability.
  • Ability to design and develop robust data APIs and services that expose data to applications, bridging analytical and operational systems.
  • Strong data modelling skills and familiarity with the Kimball methodology to create efficient, scalable data structures.
  • Commitment to continuously improving product quality, security, and performance through rigorous testing and code reviews.
  • Meticulous approach to creating and maintaining architecture and systems documentation.
  • Ability to work across teams to understand and address diverse data needs while maintaining data integrity.
  • Desire to continually keep up with advancements in data engineering practices and technologies.
  • Exceptional analytical skills to troubleshoot complex data issues and implement effective solutions.
  • Capability to ship medium features independently while contributing to the team's overall objectives.
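The Kimball modelling requirement above centres on star schemas: a fact table of events joined to conformed dimension tables. A minimal, hypothetical sketch using SQLite — the table and column names are invented for illustration, not Prolific's actual model:

```python
import sqlite3

# A minimal Kimball-style star schema: one fact table referencing two
# dimension tables. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_participant (
        participant_key INTEGER PRIMARY KEY,
        country TEXT,
        signup_date TEXT
    );
    CREATE TABLE dim_study (
        study_key INTEGER PRIMARY KEY,
        study_name TEXT,
        reward_usd REAL
    );
    CREATE TABLE fact_submission (
        participant_key INTEGER REFERENCES dim_participant,
        study_key INTEGER REFERENCES dim_study,
        submitted_at TEXT,
        duration_sec INTEGER
    );
""")

conn.execute("INSERT INTO dim_participant VALUES (1, 'GB', '2024-01-05')")
conn.execute("INSERT INTO dim_study VALUES (10, 'survey_a', 2.5)")
conn.execute("INSERT INTO fact_submission VALUES (1, 10, '2024-02-01', 540)")

# Typical analytical query: join the fact to its dimensions.
row = conn.execute("""
    SELECT p.country, s.study_name, f.duration_sec
    FROM fact_submission f
    JOIN dim_participant p USING (participant_key)
    JOIN dim_study s USING (study_key)
""").fetchone()
print(row)  # ('GB', 'survey_a', 540)
```

The point of the shape: measures live once in the fact table, while descriptive attributes live in small dimensions that many facts can share.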
Responsibilities:
  • Build and maintain robust data pipelines from internal databases and SaaS applications ensuring timely and accurate data delivery.
  • Maintain our data warehouse with high-quality, well-structured data that supports analytics and business operations.
  • Design and implement scalable data infrastructure that accommodates our growing data volume and complexity.
  • Create and maintain APIs and microservices that expose data to applications, enabling seamless integration between data systems and business applications.
  • Establish processes and tools to monitor data quality, identify issues, and implement fixes promptly.
  • Create and maintain comprehensive documentation of data flows, models, and systems for knowledge sharing.
  • Work closely with analytics, research, and product teams to ensure their data needs are addressed effectively.
  • Implement and advocate for data engineering best practices across the organisation.
  • Plan and execute system expansion as needed to support the company's growth and evolving analytical needs.
  • Continuously optimise data pipelines and warehouse performance to improve efficiency and reduce costs.
  • Ensure all data systems adhere to security best practices and compliance requirements.
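The data-quality monitoring responsibility above usually starts with simple programmatic checks run over each batch. A hedged sketch — the rules and field names are assumptions for illustration, not the actual checks used:

```python
from datetime import datetime

def check_batch(rows):
    """Return a list of data-quality issues found in a batch of records.

    Three common rules: required id present, no duplicate ids, and
    timestamps parseable as ISO 8601.
    """
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("id") is None:
            issues.append(f"row {i}: missing id")
            continue
        if row["id"] in seen_ids:
            issues.append(f"row {i}: duplicate id {row['id']}")
        seen_ids.add(row["id"])
        try:
            datetime.fromisoformat(row.get("created_at", ""))
        except ValueError:
            issues.append(f"row {i}: bad timestamp {row.get('created_at')!r}")
    return issues

batch = [
    {"id": 1, "created_at": "2024-02-01T10:00:00"},
    {"id": 1, "created_at": "2024-02-01T10:05:00"},  # duplicate id
    {"id": 2, "created_at": "not-a-date"},            # bad timestamp
]
print(check_batch(batch))
```

In practice these checks would run inside the pipeline (e.g. as dbt tests or an Airflow task) and raise alerts rather than print, but the rule structure is the same.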

Related Jobs

πŸ”₯ Senior Data Engineer
Posted about 6 hours ago

πŸ’Έ 135,300 - 202,950 USD per year

πŸ” Software Development

  • 4+ years of professional experience in data warehousing, data architecture, and/or data engineering environments, especially using Spark, Hadoop, Hive, etc., with a solid understanding of streaming pipelines.
  • Experience with Airflow, the AWS cloud tech stack, data movement tools such as Fivetran, and Scala/Python.
  • You have built large-scale data products and understand the tradeoffs made when building these features.
  • BS / MS in Computer Science, Engineering, Mathematics, or a related field
  • Contribute to the design/architecture of new initiatives such as real-time streaming pipelines and tooling around data governance, and build job orchestration abstractions to manage resources on AWS
  • Collaborate with the team to build tools for data science/marketing teams
  • Design integration pipelines for new data sources and improve existing pipelines to perform efficiently at scale
  • Provide technical guidance to the team
  • Leverage best practices in continuous integration and deployment to our cloud-based infrastructure
  • Optimize data access and consumption for our business and product colleagues
πŸ”₯ Data Engineer [FDE Team]
Posted about 8 hours ago

🧭 Full-Time

πŸ” Software Development

🏒 Company: Sweed (πŸ‘₯ 101-250; Cannabis, E-Commerce, Web Development, Marketing, Information Technology, Software)

  • Strong SQL skills – ability to write and optimize complex queries.
  • Proficiency in Python
  • Experience handling and cleaning large, messy datasets.
  • Basic web development knowledge – understanding of front-end and back-end principles.
FDEs focus on Implementation, Architecture, and Support, balancing client-facing responsibilities with internal tool development and system maintenance.

πŸ“ United States

πŸ’Έ 64,000 - 120,000 USD per year

  • Strong PL/SQL, SQL development skills
  • Proficient in multiple languages used in data engineering such as Python, Java
  • Minimum 3-5 years of experience in data engineering working with Oracle and MS SQL
  • Experience with data warehousing concepts and technologies including cloud-based services (e.g. Snowflake)
  • Experience with cloud platforms like Azure and knowledge of infrastructure
  • Experience with data orchestration tools (e.g. Azure Data Factory, DataBricks workflows)
  • Understanding of data privacy regulations and best practices
  • Experience working with remote teams
  • Experience working on a team with a CI/CD process
  • Familiarity using tools like Git, Jira
  • Bachelor's degree in Computer Science or Computer Engineering
  • Design, implement and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines due to data, queries and processing workflows to ensure efficient and timely data delivery.
  • Implement data quality checks and validation processes to ensure accuracy, completeness, and consistency of data delivery.
  • Work with Data Architect and implement best practices for data governance, quality and security.
  • Collaborate with cross-functional teams to identify and address data needs.
  • Ensure technology solutions support the needs of the customer and/or organization.
  • Define and document technical requirements.

Python, SQL, ETL, Git, Java, Oracle, Snowflake, Azure, Data engineering, CI/CD, RESTful APIs

Posted about 19 hours ago

πŸ” AI and data analytics consulting

🏒 Company: Unit8 SA

  • A proficient software engineer who knows the fundamentals of computer science and has mastered at least one widely adopted programming language (Python, Java, C#, C++).
  • Know how to write distributed services and work with high-volume heterogeneous data, preferably with distributed systems such as Spark.
  • Knowledgeable about data governance, data access, and data storage techniques.
  • Experience with cloud technologies is a strong plus.
  • Good presentation and communication skills, with the ability to explain complex analytical concepts to people from other fields.
  • Strong client-facing skills: comfortable interacting with clients (business & technical audience), delivering presentations, problem-solving mindset.
  • Design, build, maintain, and troubleshoot data pipelines and processing systems that are relied on for both production and analytics applications, using a variety of open-source and closed-source technologies.
  • Implement best practices in CI/CD and help drive optimization, testing, and tooling to improve data quality.
  • Contribute to the implementation and engineering of systems at different scales: from small proof-of-concepts to larger end-to-end data systems.
  • Work with our customers to understand their challenges, design and implement solutions.
  • Collaborate with other software engineers, ML experts, and business stakeholders, taking learning and leadership opportunities that will arise every single day.
  • Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
Posted about 23 hours ago
πŸ”₯ Data Engineer
Posted 1 day ago

πŸ“ United States

🧭 Full-Time

πŸ” Sustainable Agriculture

🏒 Company: Agrovision

  • Experience with RDBMS (e.g., Teradata, MS SQL Server, Oracle) in production environments is preferred
  • Hands-on experience in data engineering and databases/data warehouses
  • Familiarity with Big Data platforms (e.g., Hadoop, Spark, Hive, HBase, Map/Reduce)
  • Expert-level understanding of Python (e.g., Pandas)
  • Proficient in shell scripting (e.g., Bash) and Python data application development (or similar)
  • Excellent collaboration and communication skills with teams
  • Strong analytical and problem-solving skills, essential for tackling complex challenges
  • Experience working with BI teams and tooling (e.g. PowerBI), supporting analytics work and interfacing with Data Scientists
  • Collaborate with data scientists to ensure high-quality, accessible data for analytical and predictive modeling
  • Design and implement data pipelines (ETLs) tailored to meet business needs and digital/analytics solutions
  • Enhance data integrity, security, quality, and automation, addressing system gaps proactively
  • Support pipeline maintenance, troubleshoot issues, and optimize performance
  • Lead and contribute to defining detailed scalable data models for our global operations
  • Ensure data security standards are met and upheld by contributors, partners and regional teams through programmatic solutions and tooling

Python, SQL, Apache Hadoop, Bash, ETL, Data engineering, Data science, RDBMS, Pandas, Spark, Communication Skills, Analytical Skills, Collaboration, Problem Solving, Data modeling


πŸ“ LATAM

🧭 Full-Time

πŸ” Financial Services

🏒 Company: South Geeks (πŸ‘₯ 101-250; Web Development, Software Engineering, Enterprise Software, Software)

  • Proficiency in administering one or more platforms: Snowflake (required), DBT (required), GitHub, Workato (and/or Make), or other tools (preferred).
  • Data engineering experience with a focus on DBT and Snowflake.
  • Strong desire to expand skills across multiple platforms and contribute to both administrative and engineering functions.
  • Comfort with dynamic, multi-role environments that blend administration with engineering work.
  • Collaborative mindset, able to thrive in a team-first, cross-functional setting.
  • Administer one or more platforms, focusing on Snowflake, DBT, and Fivetran, plus other tools: GitHub, Workato (and/or Make), and High Touch.
  • Participate in cross-training to administer multiple platforms, ensuring seamless coverage across the team.
  • Collaborate on data engineering projects, using DBT and Snowflake as part of the stack.
  • Take part in development opportunities and engineering work to broaden your expertise and career path.
  • Focus on team-based service delivery, rather than individual responsibilities, ensuring all systems are effectively managed by the group.

SQL, Snowflake, Data engineering

Posted 1 day ago

πŸ” Software Development

  • Experience with AWS Big Data services
  • Experience with Snowflake
  • Experience with dbt
  • Experience with Python
  • Experience with distributed systems
  • Design and build scalable data systems that support advanced analytics and business intelligence.
  • Develop and maintain data pipelines.
  • Implement data management best practices.
  • Work closely with Product Managers, Data Analysts, and Software Developers to support data-driven decision-making.
Posted 2 days ago

πŸ“ Lithuania

πŸ’Έ 4,000 - 6,000 EUR per month

πŸ” Software Development

🏒 Company: Softeta

  • 4+ years of experience as a Data Engineer
  • Experience with Azure (Certifications are a Plus)
  • Experience with Databricks, Azure Data Lake, Data Factory and Apache Airflow
  • CI/CD or infrastructure as code
  • Knowledge of Medallion Architecture or multi-hop architecture
  • Experience developing and administering ETL processes in the Cloud (Azure, AWS or GCP) environment
  • Strong programming skills in Python and SQL
  • Strong problem-solving and analytical skills
  • Design, develop, and maintain data pipelines and ETL processes
  • Data modeling, data cleansing
  • Automating data processing workflows using tools such as Airflow or other workflow management tools
  • Optimizing the performance of databases, including designing and implementing data structures and using indexes appropriately
  • Implement data quality and data governance processes
  • Being a data advocate and helping unlock business value by using data
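The Medallion Architecture named in the requirements above layers data as raw (bronze), cleaned and deduplicated (silver), and aggregated for consumption (gold). A toy sketch in plain Python to show the idea — in practice these layers would be Databricks/Delta tables, and the records here are invented:

```python
# Toy medallion pipeline: bronze keeps raw records as-is, silver cleans
# and deduplicates, gold aggregates for consumption.
bronze = [
    {"order_id": "1", "amount": "10.5", "country": "LT"},
    {"order_id": "1", "amount": "10.5", "country": "LT"},   # duplicate
    {"order_id": "2", "amount": "bad", "country": "LT"},    # unparseable
    {"order_id": "3", "amount": "4.0", "country": "LV"},
]

def to_silver(rows):
    """Clean and deduplicate bronze records."""
    silver, seen = [], set()
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "amount": amount,
                       "country": r["country"]})
    return silver

def to_gold(rows):
    """Aggregate silver records into revenue per country."""
    gold = {}
    for r in rows:
        gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]
    return gold

silver = to_silver(bronze)
print(to_gold(silver))  # {'LT': 10.5, 'LV': 4.0}
```

The design point: each layer is rebuildable from the one below it, so bad transformations can be fixed and replayed without re-ingesting source data.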

Python, SQL, Apache Airflow, ETL, Azure, Data engineering, CI/CD, Data modeling

Posted 2 days ago

πŸ“ United States

πŸ’Έ 144,000 - 180,000 USD per year

πŸ” Software Development

🏒 Company: Hungryroot (πŸ‘₯ 101-250; πŸ’° $40,000,000 Series C almost 4 years ago; Artificial Intelligence (AI), Food and Beverage, E-Commerce, Retail, Consumer Goods, Software)

  • 5+ years of experience in ETL development and data modeling
  • 5+ years of experience in both Scala and Python
  • 5+ years of experience in Spark
  • Excellent problem-solving skills and the ability to translate business problems into practical solutions
  • 2+ years of experience working with the Databricks Platform
  • Develop pipelines in Spark (Python + Scala) in the Databricks Platform
  • Build cross-functional working relationships with business partners in Food Analytics, Operations, Marketing, and Web/App Development teams to power pipeline development for the business
  • Ensure system reliability and performance
  • Deploy and maintain data pipelines in production
  • Set an example of code quality, data quality, and best practices
  • Work with Analysts and Data Engineers to enable high quality self-service analytics for all of Hungryroot
  • Investigate datasets to answer business questions, ensuring data quality and business assumptions are understood before deploying a pipeline

AWS, Python, SQL, Apache Airflow, Data Mining, ETL, Snowflake, Algorithms, Amazon Web Services, Data engineering, Data Structures, Spark, CI/CD, RESTful APIs, Microservices, JSON, Scala, Data visualization, Data modeling, Data analytics, Data management

Posted 2 days ago

πŸ“ United States

πŸ’Έ 135,000 - 155,000 USD per year

πŸ” Software Development

🏒 Company: Jobgether (πŸ‘₯ 11-50; πŸ’° $1,493,585 Seed about 2 years ago; Internet)

  • 8+ years of experience as a data engineer, with a strong background in data lake systems and cloud technologies.
  • 4+ years of hands-on experience with AWS technologies, including S3, Redshift, EMR, Kafka, and Spark.
  • Proficient in Python or Node.js for developing data pipelines and creating ETLs.
  • Strong experience with data integration and frameworks like Informatica and Python/Scala.
  • Expertise in creating and managing AWS services (EC2, S3, Lambda, etc.) in a production environment.
  • Solid understanding of Agile methodologies and software development practices.
  • Strong analytical and communication skills, with the ability to influence both IT and business teams.
  • Design and develop scalable data pipelines that integrate enterprise systems and third-party data sources.
  • Build and maintain data infrastructure to ensure speed, accuracy, and uptime.
  • Collaborate with data science teams to build feature engineering pipelines and support machine learning initiatives.
  • Work with AWS cloud technologies like S3, Redshift, and Spark to create a world-class data mesh environment.
  • Ensure proper data governance and implement data quality checks and lineage at every stage of the pipeline.
  • Develop and maintain ETL processes using AWS Glue, Lambda, and other AWS services.
  • Integrate third-party data sources and APIs into the data ecosystem.

AWS, Node.js, Python, SQL, ETL, Kafka, Data engineering, Spark, Agile methodologies, Scala, Data modeling, Data management

Posted 2 days ago

Related Articles

Posted 14 days ago

Why is remote work such a nice opportunity?

Why is remote work so appealing? Let's take a look!

Posted 7 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 7 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 7 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 7 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.