
Data Analytics Engineer

Posted 4 days ago

Related Jobs


🧭 Full-Time

  • Bachelor’s Degree in Mechanical, Thermal, Chemical, Energy Systems, or Computer Science
  • Analytical mindset, ability to handle multiple tasks simultaneously, and comfort working in a fast-paced environment
  • Strong technical background in data engineering, systems integration, or industrial IoT
  • Proficiency in Python, SQL, and working with cloud-native data architectures (in AWS, GCP, Azure, etc.)
  • Ability to visualize complex datasets clearly and effectively for business and technical users
  • Hands-on mindset with the ability to troubleshoot across hardware, software, and network layers
  • Experience with edge computing platforms, gateways, or PLC integration (a plus)
  • Prior experience in data analytics for energy or refrigeration systems is highly desirable
  • Familiarity with thermodynamic principles and refrigeration, HVAC, or energy systems (a plus)
  • Experience collaborating cross-functionally with engineering, operations, and vendors
  • Self-driven and adaptable to fast-changing needs in field operations
  • Ability to work as an individual contributor
  • Team player and effective communicator
  • Fluent in English
  • Assist in creating and scaling a unified data strategy for field installations
  • Develop and maintain pipelines to ingest, clean, tag, and store data from mixed field controllers and edge devices
  • Interface with protocols such as Modbus or M2M/Loytec gateways, and integrate with platforms like AWS, GCP, or vendor clouds (e.g., Danfoss, CAREL boss)
  • Standardize and normalize fragmented datasets into a centralized cloud repository
  • Build business-facing and engineering dashboards using tools like Google Data Studio, Grafana, or Power BI
  • Write and maintain Python or SQL scripts for data transformation, anomaly detection, and automated reporting (a sketch follows this list)
  • Work with refrigeration and controls engineers to support field deployments, thermodynamic analysis, and commissioning of edge/IoT devices
  • Oversee day-to-day health, analysis, and integrity of field data streams and dashboards
  • Collaborate with external vendors or contractors to deploy pilot implementations across selected sites
  • Develop algorithms for data collection, cleanup, and organization
  • Develop visualization dashboards
  • Conduct energy-saving calculations
  • All other projects and duties as assigned
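For a concrete flavor of the anomaly-detection scripting this posting mentions, here is a minimal Pandas sketch on synthetic sensor data; the 15-minute sampling, the `temp_c` column, the window size, and the z-score threshold are all invented for illustration, not the employer's actual method:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a field-controller export: 15-minute temperature samples.
idx = pd.date_range("2024-01-01", periods=500, freq="15min")
rng = np.random.default_rng(0)
temp = 4.0 + rng.normal(scale=0.2, size=len(idx))
temp[300] = 9.5  # inject one fault-like spike

df = pd.DataFrame({"timestamp": idx, "temp_c": temp})

# Rolling z-score: flag points that deviate sharply from the recent baseline.
window = 96  # 24 h of 15-minute samples
mean = df["temp_c"].rolling(window, min_periods=window // 2).mean()
std = df["temp_c"].rolling(window, min_periods=window // 2).std()
df["zscore"] = (df["temp_c"] - mean) / std

anomalies = df[df["zscore"].abs() > 4.0]
print(anomalies[["timestamp", "temp_c", "zscore"]])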
Posted 1 day ago

🧭 Full-Time

🔍 Software Development

🏢 Company: Veeam Software · 👥 5001-10000 · 💰 $2,000,000,000 Secondary Market 6 months ago · 🫂 Last layoff over 1 year ago · Virtualization, Data Management, Data Center, Enterprise Software, Software, Cloud Infrastructure

  • Strong proficiency in SQL and experience with database and data warehouse systems
  • Experience building and deploying data processes in a cloud environment (Azure, AWS, GCP)
  • Experience in at least one programming language (e.g., Python, Java, Scala)
  • Understanding of best practices in designing ETL/ELT processes, data modeling, and data warehousing
  • Design, implement, and maintain scalable and reliable data marts and transformation processes
  • Collaborate with data analysts, data scientists, and business stakeholders to understand their data needs and requirements
  • Develop and optimize data models and schemas to ensure data is stored efficiently and can be retrieved and analyzed quickly
  • Implement ETL processes and frameworks to transform raw data from multiple sources into structured formats suitable for analysis (see the sketch after this list)
  • Work closely with our internal Data, Architecture, and DevOps teams to ensure the reliability and scalability of data systems
  • Monitor and troubleshoot data pipelines, and look for opportunities for continuous improvement
  • Document data systems, data flow, and data processes
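As a toy illustration of the ETL/ELT transformation work described above, here is a self-contained sketch that aggregates raw events into a small data mart; SQLite stands in for a real warehouse, and all table and column names are invented:

```python
import sqlite3

# Toy ELT step: transform raw order events into a daily-revenue data mart.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (order_ts TEXT, region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("2024-05-01 09:30", "EMEA", 120.0), ("2024-05-01 14:10", "EMEA", 80.0),
     ("2024-05-02 11:00", "APAC", 200.0)],
)

# The "T" of ELT: aggregate raw events into an analysis-ready mart table.
con.execute("""
    CREATE TABLE mart_daily_revenue AS
    SELECT date(order_ts) AS order_date, region, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY 1, 2
""")
for row in con.execute("SELECT * FROM mart_daily_revenue ORDER BY order_date"):
    print(row)
```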
Posted 9 days ago

📍 Australia

🏢 Company: Employment Hero · 👥 501-1000 · 💰 $166,333,052 Series F over 1 year ago · Management Information Systems, Human Resources, SaaS, Finance, Employee Benefits

  • Proficiency in SQL and relational databases.
  • Experience with data pipeline development and ETL processes.
  • Familiarity with visualization tools (e.g., Tableau, Looker Studio).
  • Solid understanding of statistical modeling and machine learning techniques.
  • Expertise in Martech platforms (e.g., CRM, automation, advertising).
  • Strong grasp of Webtech (GA4, GTM, conversion event tracking).
  • Understanding of GTM data layer configuration and conversion tracking.
  • Bachelor’s or Master’s degree in data science, statistics, or a related field.
  • Strong analytical, communication, and problem-solving skills.
  • Knowledge of marketing attribution methods and campaign measurement.
  • Build and manage data pipelines to integrate marketing and CRM data into a single source of truth.
  • Resolve integration challenges across multiple marketing platforms, tools, and systems.
  • Simplify complex, cross-channel reporting to support faster, smarter decision-making.
  • Break down MarTech data silos to uncover end-to-end customer and campaign insights.
  • Design and optimise predictive lead scoring models using behavioural, demographic, and firmographic data (a sketch follows this list).
  • Analyse and improve MQL quality by creating data views that highlight fit, intent, and downstream performance.
  • Develop intent models based on user behaviour to power audience segmentation and personalised campaigns.
  • Implement advanced attribution and media mix modelling to fine-tune ad spend and boost ROI.
  • Build smart dashboards and reports that deliver clear, actionable insights to marketing and sales teams.
  • Partner with Marketing to define and track candidate quality metrics across all channels, supporting long-term engagement and re-engagement strategies.
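To illustrate the predictive lead-scoring responsibility above, here is a minimal scikit-learn sketch on synthetic data; the three features, the labels, and the 0-100 scoring convention are assumptions for the example, not the employer's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for behavioural/demographic/firmographic features:
# [page_views, email_clicks, company_size_log]. Label 1 = converted lead.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# A "lead score" is just the predicted conversion probability, scaled to 0-100.
scores = model.predict_proba(X_test)[:, 1] * 100
print("accuracy:", model.score(X_test, y_test))
print("first five lead scores:", scores[:5].round(1))
```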

SQL, Data Analysis, ETL, Machine Learning, Google Analytics, Data Science, Data Visualization, Marketing, CRM, Data Modeling, Data Analytics

Posted 10 days ago

💸 130,000 - 150,000 USD per year

🔍 Healthcare

🏢 Company: Garner Health · 👥 51-100 · 💰 $45,000,000 Series B over 3 years ago · Business Intelligence, Big Data, Medical, Employee Benefits, Health Care

  • 2+ years of software/data engineering experience (distributed data processing, data warehousing, data governance, Big Data, data variance, data privacy, and data quality).
  • Expertise in SQL, familiarity with Python
  • Expertise in building scalable data pipelines, query optimization, PostgreSQL tuning (nice to have), data modeling, and defining reusable datasets
  • Experience working with orchestration tools (especially Argo), databases (especially PostgreSQL), data warehouses (especially Snowflake)
  • Familiarity with distributed event-driven architectures. NATS experience is a plus.
  • Familiarity with healthcare or insurance
  • Build, optimize, and maintain data pipelines that power our business
  • Define and build out abstracted reusable data sets to be used for Business Intelligence, Marketing, and Data Science Research
  • Design, build, and evangelize a federated data validation framework to monitor potential data inconsistencies (a sketch follows this list)
  • Protect our users’ privacy and security through best practices
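As a sketch of what individual rules in the data validation framework above might look like, assuming Pandas and invented column names (`claim_id`, `amount`, `service_date`):

```python
import pandas as pd

# Hypothetical validation rules for a claims-like dataset; names are invented.
def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable rule violations."""
    failures = []
    if df["claim_id"].duplicated().any():
        failures.append("claim_id is not unique")
    if df["amount"].lt(0).any():
        failures.append("negative amounts present")
    if df["service_date"].isna().any():
        failures.append("missing service_date values")
    return failures

df = pd.DataFrame({
    "claim_id": [1, 2, 2],
    "amount": [100.0, -5.0, 30.0],
    "service_date": pd.to_datetime(["2024-01-05", None, "2024-01-07"]),
})
for problem in validate(df):
    print("FAIL:", problem)
```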
Posted 16 days ago

📍 Georgia, Serbia, Spain, Poland

🔍 B2B SaaS or enterprise

🏢 Company: Cloudlinux

  • 4+ years of experience in analytics/data engineering roles, ideally in a B2B SaaS or enterprise context
  • Deep knowledge of SQL, data normalization, and transformation best practices
  • Familiarity with data governance and Looker modeling (LookML)
  • Hands-on experience with Airflow, dbt, and Snowflake
  • Partner with Analysts, Marketing, Product, and Finance teams to define and implement data models that make insights easily accessible.
  • Transform raw data into clean, governed, analytics-ready models using dbt (see the orchestration sketch after this list).
  • Ensure data consistency and integrity across systems.
  • Collaborate on designing self-serve Looker experiences while optimizing performance and reducing manual reporting overhead.
  • Debug data issues across multiple source systems and orchestrate efficient resolutions.
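Since the role pairs Airflow with dbt, here is a minimal sketch of a daily Airflow DAG (Airflow 2.4+ syntax assumed) that rebuilds and tests dbt models; the DAG id and the `--select marts` selector are illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily job: run dbt to rebuild governed models, then test them.
with DAG(
    dag_id="dbt_daily_models",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_models = BashOperator(task_id="dbt_run", bash_command="dbt run --select marts")
    test_models = BashOperator(task_id="dbt_test", bash_command="dbt test --select marts")
    run_models >> test_models  # tests run only after a successful build
```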

PostgreSQL, Python, SQL, ETL, Snowflake, Airflow, ClickHouse, Data Engineering, Data Visualization, Data Modeling, Data Analytics

Posted 18 days ago
🔥 Data Analytics Engineer
Posted about 1 month ago

🧭 Full-Time

💸 120,000 - 135,000 USD per year

🔍 Data Analytics

🏢 Company: dv01 · 👥 51-100 · 💰 $6,000,000 Series B over 4 years ago · Lending, Risk Management

  • 3+ years of professional experience writing production-ready code in a language such as Python, Scala, or Java (or in dbt), together with high proficiency in SQL.
  • 2+ years of professional experience working directly with data pipelines; exposure to loan-related datasets is a plus.
  • Knowledgeable about relational data concepts.
  • Excited about big data tools.
  • Interested and experienced in both engineering and finance.
  • A first-rate collaborator and communicator.
  • Be at the heart of dv01.
  • Be an owner of dv01's most valuable asset.
  • Work directly with internal and external stakeholders.
  • Work with state-of-the-art technology.
Posted about 1 month ago
🔥 Senior Data Analytics Engineer
Posted about 1 month ago

📍 India

🧭 Full-Time

🔍 Software Development

🏢 Company: YipitData (Alternative)

  • 5+ years of proven experience in data engineering, particularly in systems with high uptime requirements.
  • Eager to learn basic application development using Python frameworks and Databricks to automate analytical and data entry workflows
  • Proficient in Python, Spark, Docker, AWS, and database technologies.
  • Own and maintain core data pipelines that power strategic internal and external analytics products.
  • Build lightweight data applications and tools on top of these pipelines using Python to streamline data refinement, transformation, and processing workflows.
  • Drive reliability, efficiency, and performance improvements across the data platform.
  • Diagnose and resolve technical issues in data applications and platform services, including web application performance, optimizing SQL, Pandas, and PySpark queries, and interacting with REST APIs (see the sketch after this list).
  • Partner with analysts, product teams, and engineering stakeholders to understand data requirements and translate them into scalable solutions.
  • Identify and implement process improvements to streamline support workflows, reduce repetitive tasks, and improve application and data platform efficiency.
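For the PySpark query-optimization work mentioned above, one common pattern is to filter rows and prune columns before a join so that less data is shuffled; the schemas and values in this sketch are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Tiny stand-ins for much larger tables; schemas are invented.
events = spark.createDataFrame(
    [("u1", "click", "2024-01-05"), ("u2", "view", "2023-12-30"),
     ("u1", "view", "2024-02-01")],
    ["user_id", "event_type", "event_date"],
)
users = spark.createDataFrame([("u1", "smb"), ("u2", "enterprise")],
                              ["user_id", "segment"])

# Optimization pattern: filter and prune columns before the join,
# so less data is shuffled across the cluster.
recent = (
    events.where(F.col("event_date") >= "2024-01-01")
          .select("user_id", "event_type")
)
counts = recent.join(users, "user_id").groupBy("segment", "event_type").count()
counts.show()
```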

AWS, Docker, Python, SQL, ETL, Git, Data Engineering, REST API, Pandas, Spark

Posted about 1 month ago

📍 Mexico

🧭 Full-Time

🔍 Software Development

🏢 Company: Enroute · 👥 1-10 · 💰 Non-equity Assistance over 4 years ago · E-Commerce, Enterprise Software, Software

  • Proficient in Python.
  • Strong experience with Pandas.
  • Experience with Jenkins for continuous integration and continuous delivery pipelines.
  • Solid understanding and hands-on experience with Snowflake and PostgreSQL.
  • Proficient with Git for code management and collaboration.
  • Experience with Hightouch or similar Reverse ETL tools.
  • Familiarity with Amazon Web Services (AWS) and its data-related services.
  • Experience with Sigma Computing or other BI and analytics platforms.
  • Strong SQL skills for data extraction, manipulation, and analysis.
  • Design, build, and maintain scalable and efficient data pipelines to ingest, transform, and load data from various sources into our data warehouse (Snowflake) and other data stores (PostgreSQL); see the sketch after this list.
  • Maintain and optimize our data storage systems (Snowflake and PostgreSQL) for performance, reliability, and cost-effectiveness.
  • Implement and monitor data quality checks and processes to guarantee the accuracy, completeness, and availability of data for reporting and analysis.
  • Implement and adhere to security measures and data privacy policies to protect sensitive information.
  • Utilize Jenkins to build and maintain automated CI/CD pipelines for data engineering workflows.
  • Design and implement Reverse ETL processes using Hightouch to push data from the data warehouse back to operational systems.
  • Work closely with data engineers, analysts, and other stakeholders to understand their data needs and provide effective data solutions.
  • Create and maintain clear and comprehensive documentation for data pipelines, storage systems, and processes.
  • Identify and resolve data-related issues and provide support to data consumers.
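As a hedged sketch of the pipeline work described above, here is a tiny Pandas-to-PostgreSQL load step, assuming a reachable PostgreSQL instance; the connection string, the raw data, and the table name are all illustrative:

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string and table name are illustrative, not real credentials.
engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/analytics")

# Stand-in for a raw export pulled from an upstream system.
raw = pd.DataFrame({
    "order_id": [101, 101, 102, None],
    "amount": [19.999, 19.999, 45.5, 12.0],
    "created_at": pd.to_datetime(["2024-06-01", "2024-06-01",
                                  "2024-06-02", "2024-06-02"]),
})

# Basic cleanup: drop rows without a key, dedupe, round currency values.
clean = (
    raw.dropna(subset=["order_id"])
       .drop_duplicates(subset=["order_id"])
       .assign(amount=lambda d: d["amount"].round(2))
)

# Replace-load a staging table; production code would merge/upsert instead.
clean.to_sql("stg_orders", engine, if_exists="replace", index=False)
print(f"loaded {len(clean)} rows into stg_orders")
```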
Posted 2 months ago

📍 United States

🏢 Company: Sophinea · 👥 1-10 · Information Services, Analytics, Information Technology

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Minimum of 10 years of experience in ETL operations, Systems Operations, and Data Analytics.
  • Expert knowledge of SQL, git, various data formats (JSON, YAML, CSV), and MS Excel.
  • Expert Python and Bash skills, including OO techniques.
  • Proficiency in Ruby, Go, and other languages is a plus.
  • Familiarity with Argo CD/Workflow, Kubernetes (K8s), containers, GitHub actions, Linux, and AWS is highly desirable.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently and as part of a team.
  • Strong proficiency in SQL and experience with MySQL or similar relational databases.
  • Must be able to interact with databases using raw SQL.
  • Solid understanding of data modeling concepts and techniques.
  • Experience with Jaspersoft or similar reporting tools is preferred.
  • Design, develop, and maintain ETL processes to extract, transform, and load data from various sources (a sketch follows this list).
  • Monitor and optimize ETL workflows to ensure data quality and performance.
  • Collaborate with cross-functional teams to gather and understand data requirements.
  • Create and maintain documentation for ETL processes and data analytics solutions.
  • Create and maintain data models to support reporting and analysis needs.
  • Support and troubleshoot ETL processes and resolve any issues in a timely manner.
  • Perform data analysis, develop dashboards, and present actionable insights to stakeholders.
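To illustrate the format-conversion side of the ETL work above (the posting calls out JSON, YAML, and CSV), here is a minimal Python sketch; the records, field names, and output file are invented:

```python
import csv
import json

# Tiny format-conversion step of the kind this role describes: JSON -> CSV.
records = json.loads('[{"id": 1, "status": "ok"}, {"id": 2, "status": "error"}]')

with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "status"])
    writer.writeheader()
    writer.writerows(records)

print("wrote", len(records), "rows to records.csv")
```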

AWS, Python, SQL, Bash, Data Analysis, ElasticSearch, ETL, Git, Apache Kafka, Kibana, Kubernetes, MySQL, Go, Collaboration, CI/CD, Linux

Posted 7 months ago

Related Articles

Posted about 1 month ago

How to Overcome Burnout While Working Remotely: Practical Strategies for Recovery

Burnout is a silent epidemic among remote workers. The blurred lines between work and home life, coupled with the pressure to always be “on,” can leave even the most dedicated professionals feeling drained. But burnout doesn’t have to define your remote work experience. With the right strategies, you can recover, recharge, and prevent future episodes. Here’s how.



Posted 6 days ago

Top 10 Skills to Become a Successful Remote Worker by 2025

Remote work is here to stay, and by 2025, the competition for remote jobs will be tougher than ever. To stand out, you need more than just basic skills. Employers want people who can adapt, communicate well, and stay productive without constant supervision. Here’s a simple guide to the top 10 skills that will make you a top candidate for remote jobs in the near future.

Posted 9 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 10 months ago

Read about the recent updates in remote work policies by major companies, the latest tools enhancing remote work productivity, and predictive statistics for remote work in 2024.

Posted 10 months ago

In-depth analysis of the tech layoffs in 2024, covering the reasons behind the layoffs, comparisons to previous years, immediate impacts, statistics, and the influence on the remote job market. Discover how startups and large tech companies are adapting, and learn strategies for navigating the new dynamics of the remote job market.