Apply

Senior Data Engineer

Posted over 1 year ago

📍 Location: Anywhere in the United States

💸 Salary: $170k to $195k

🗣️ Languages: English

Requirements:
  • Experience designing and implementing scalable solutions.
  • Ability to refactor and simplify existing processes.
  • Experience with relational and non-relational databases.
  • Proficiency in Python.
  • Experience handling and working with large datasets.
  • Experience with ETL pipelines and data warehousing best practices.
  • Values collaboration and feedback.
  • Excellent documentation and verbal communication skills.

Related Jobs

Apply

📍 LatAm

🧭 Full-Time

🔍 Technology solutions

🏢 Company: Truelogic 👥 101-250 · Consulting, Web Development, Web Design, Software

  • At least 5 years of Data Engineering experience.
  • AWS Cloud experience preferred, but other cloud experience is acceptable.
  • Experience with AWS QuickSight is preferred, but similar tools may be considered.
  • Experience with AWS Athena preferred, but open to other tools.
  • Strong SQL skills.
  • Understanding of event-based data workflows.
  • Develop and maintain internal QuickSight reports for insights on LMS applications.
  • Create customer-facing QuickSight reports with benchmarking insights for stakeholders.
  • Build, maintain, and improve operational reports for business visibility.
  • Work with event-based data for accurate reporting.
  • Write efficient SQL queries to create views and structured datasets (a minimal Athena sketch follows this list).
  • Develop analyses in QuickSight as per product team needs.
  • Collaborate in an agile scrum team and contribute to various agile ceremonies.
  • Identify opportunities to enhance data processing workflows.
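
To make the Athena/SQL bullets above concrete, here is a minimal sketch of creating an Athena view with boto3, the kind of structured view a QuickSight report might sit on. The database, table, and S3 output location are hypothetical placeholders, not details from the posting.

```python
# Hypothetical sketch: creating an Athena view to back a QuickSight report.
# Database, table, and S3 output location are invented placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

CREATE_VIEW_SQL = """
CREATE OR REPLACE VIEW lms_course_completions AS
SELECT course_id,
       date_trunc('day', event_time) AS day,
       count(*) AS completions
FROM lms_events                      -- assumed event table
WHERE event_type = 'course_completed'
GROUP BY course_id, date_trunc('day', event_time)
"""

response = athena.start_query_execution(
    QueryString=CREATE_VIEW_SQL,
    QueryExecutionContext={"Database": "analytics"},                    # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
print(response["QueryExecutionId"])
```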

AWS · SQL · Data Engineering

Posted about 5 hours ago
Apply

📍 LatAm

🧭 Full-Time

🔍 B2B data and intelligence

🏢 Company: Truelogic 👥 101-250 · Consulting, Web Development, Web Design, Software

  • 8+ years of experience as a Data/BI engineer.
  • Experience developing data pipelines with Airflow or equivalent code-based orchestration software (a minimal DAG sketch follows this list).
  • Strong SQL skills and hands-on experience with SQL and NoSQL databases, including analysis and performance optimization.
  • Hands-on experience with Python or an equivalent programming language.
  • Experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake).
  • Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
  • Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
  • Experience implementing data quality checks.
  • Experience with dbt.
  • Knowledge of eFront.
  • Strong, clear communication skills.
  • Build and continuously improve our data gathering, modeling, and reporting capabilities and self-service data platforms.
  • Work closely with Data Engineers, Data Analysts, Data Scientists, Product Owners, and Domain Experts to identify data needs.

AWS · Python · SQL · Cloud Computing · ETL · Snowflake · Airflow · Data Engineering · Communication Skills · Data Modeling

Posted about 5 hours ago
Apply

📍 LatAm

🧭 Full-Time

🔍 Enterprise software

🏢 Company: Truelogic 👥 101-250 · Consulting, Web Development, Web Design, Software

  • 5+ years of professional experience in data engineering.
  • Strong proficiency in Python.
  • Experience with data warehousing technologies such as Snowflake, Redshift, BigQuery.
  • Experience with data processing frameworks like Apache Spark (PySpark).
  • Hands-on experience with core GCP services: BigQuery, Dataflow, Cloud Storage (GCS), Compute Engine (GCE).
  • Experience with data modeling, ETL/ELT processes, and data quality assurance.
  • Strong analytical and problem-solving skills.
  • Excellent English communication and collaboration skills.
  • Develop and maintain data pipelines using tools such as Apache Airflow, dbt, and Prefect.
  • Design and implement data models and data warehousing solutions within the Google Cloud Platform (GCP) ecosystem.
  • Work with technologies like BigQuery, Dataflow, Cloud Storage (GCS), and Databricks for data storage and processing (a loading sketch follows this list).
  • Implement and manage data pipelines and CI/CD pipelines using GCP services.
  • Ensure data quality, security, and compliance throughout the data lifecycle.
  • Implement data monitoring and alerting systems.
  • Collaborate with cross-functional teams to understand data requirements.
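
For the GCP storage-and-processing bullet, here is a minimal sketch of loading newline-delimited JSON from GCS into BigQuery with the google-cloud-bigquery client; the bucket, dataset, and table names are hypothetical.

```python
# Hypothetical sketch: loading newline-delimited JSON from GCS into BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-raw-events/2024/*.json",   # placeholder GCS path
    "my_project.analytics.raw_events",  # placeholder table
    job_config=job_config,
)
load_job.result()  # block until the load job finishes
print(f"Loaded {load_job.output_rows} rows")
```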

Python · Apache Airflow · ETL · GCP · Grafana · Prometheus · CI/CD

Posted about 5 hours ago
Apply

📍 UK, India, Germany

🧭 Full-Time

🔍 Fintech

🏢 Company: Careers at Tide

  • 4+ years of development experience using Snowflake or similar data warehouse technology.
  • Experience with dbt, Snowflake, Apache Airflow, Fivetran, AWS, and Looker.
  • Proficiency in writing advanced SQL and performance tuning (a Snowflake sketch follows this list).
  • Knowledge of data ingestion techniques with tools like Fivetran.
  • Experience in data modeling and architecting analytical databases.
  • Strong documentation skills and ability to communicate with business users.
  • Developing end-to-end ETL/ELT pipelines in collaboration with Data Analysts.
  • Designing and implementing automated data processes in a Data Mesh architecture.
  • Mentoring junior engineers and acting as a data technology expert.
  • Troubleshooting and resolving technical challenges.
  • Optimizing data feeds and performing exploratory data analysis.
  • Translating business needs into technical specifications and ensuring data quality.
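
As an illustration of the advanced-SQL bullet, here is a minimal sketch of materializing a reporting view in Snowflake from Python; the connection parameters and object names are invented, and a team using dbt would more likely express this as a dbt model.

```python
# Hypothetical sketch: creating a reporting view via the Snowflake connector.
# Credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="REPORTING",
)

with conn.cursor() as cur:
    cur.execute("""
        CREATE OR REPLACE VIEW daily_signups AS
        SELECT date_trunc('day', created_at) AS day,
               count(*) AS signups
        FROM raw.users          -- assumed source table
        GROUP BY 1
    """)
conn.close()
```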

AWS · SQL · Apache Airflow · Data Mining · ETL · Snowflake · Data Modeling

Posted 2 days ago
Apply
🔥 Senior Data Engineer

📍 Romania

🔍 Technology

🏢 Company: Faptic Technology

  • At least 5 years of experience in ETL development with a strong focus on data design, transformation, and cleansing.
  • Azure Data Engineer Associate or Power BI Data Analyst Associate certification.
  • Demonstrated ability to mentor and lead teams, fostering professional growth and collaboration.
  • Experience integrating data from files, APIs, and database sources (SQL Server).
  • Strong understanding of ETL principles, including data validation, cleaning, and transformation best practices.
  • Knowledge of SnapLogic is welcome but not mandatory.
  • Strong ability to work independently and engage stakeholders to deliver solutions that align with business needs.
  • Self-motivated, detail-oriented, and a strong team player.
  • Proficiency in English is required.
  • University degree in Computer Science, Engineering, or a similar discipline.
  • Design and maintain ETL pipelines, ensuring efficient data extraction, transformation, and loading from files, APIs, and SQL Server databases.
  • Mentor and support team members, fostering technical growth and collaboration across the data team.
  • Collaborate with stakeholders to gather requirements, communicate progress, and align deliverables with business objectives.
  • Apply ETL principles and data cleansing techniques to ensure data quality and consistency (a cleansing sketch follows this list).
  • Utilize SQL for advanced querying and transformations, ensuring data is accurate and accessible for downstream processes.
  • Perform data migration, cleaning, and normalization tasks to maintain a high-quality data environment.
  • Propose and implement improvements to pipeline performance and design to meet evolving business needs.
  • Leverage knowledge of SnapLogic (if available) to build and optimize pipelines.
  • Be adaptable and willing to learn new tools and technologies to address emerging challenges and opportunities.
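
To ground the data-cleansing bullet, here is a minimal pandas sketch of a validation and normalization step; the column names and rules are invented for illustration.

```python
# Hypothetical sketch of an ETL cleansing step; columns and rules are invented.
import pandas as pd


def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Coerce types; invalid values become NaT/NaN and are dropped below.
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    # Basic validation rules.
    df = df.dropna(subset=["order_id", "order_date", "amount"])
    df = df[df["amount"] > 0]
    # Normalize free-text fields.
    df["country"] = df["country"].str.strip().str.upper()
    return df.drop_duplicates(subset=["order_id"])
```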
Posted 2 days ago
Apply

🔍 Insurance technology

🏢 Company: Anomaly 👥 501-1000 · Advertising, Creative Agency, Marketing (🫂 last layoff about 2 years ago)

  • 5+ years of data engineering experience in a fast-moving team environment.
  • Strong computer science fundamentals.
  • Expert-level knowledge of SQL and Python.
  • Experience with large-scale data platforms like Spark.
  • Experience building robust data pipelines with dbt and Airflow.
  • Work on the full data pipeline from ingesting raw healthcare data from clients to producing refined data inputs for Anomaly products.
  • Create new Airflow DAGs, data ingestion scripts, and PySpark jobs (a PySpark sketch follows this list).
  • Collaborate with ML engineers to create training pipelines.
  • Design and build data models for efficient and intuitive analysis.
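
A minimal sketch of the kind of PySpark refinement job the responsibilities describe; the paths and column names are hypothetical, not taken from the posting.

```python
# Hypothetical sketch of a PySpark job refining raw claims data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("refine_claims").getOrCreate()

raw = spark.read.json("s3://raw-healthcare-data/claims/")  # placeholder path

refined = (
    raw.filter(F.col("claim_amount") > 0)                  # drop invalid rows
       .withColumn("claim_date", F.to_date("claim_date"))
       .dropDuplicates(["claim_id"])
)

refined.write.mode("overwrite").parquet("s3://refined-data/claims/")  # placeholder
```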
Posted 2 days ago
Apply

📍 OR, WA, CA, CO, TX, IL

🧭 Contract

💸 65–75 USD per hour

🔍 Music industry

🏢 Company: Discogs 👥 51-100 · Database, Communities, Music (💰 raised $2,500,000 about 7 years ago)

  • Proficiency in data integration and ETL processes.
  • Knowledge of programming languages such as Python, Java, or JavaScript.
  • Familiarity with cloud platforms and services (e.g., AWS, GCP, Azure).
  • Understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills to work effectively with cross-functional teams.
  • Experience with marketing automation platforms.
  • Experience with data warehouses in a marketing context.
  • Knowledge of API integration and data exchange formats such as JSON, XML, and CSV.
  • Design, develop, and maintain data pipelines to ingest, process, and store data.
  • Implement data validation and quality checks to maintain the integrity of incoming data (an ingestion sketch follows this list).
  • Optimize and automate data workflows to improve efficiency and reduce manual intervention.
  • Work closely with the product, engineering, marketing and analytics teams to support data-driven decision-making.
  • Develop and maintain documentation related to data processes, workflows, and system architecture.
  • Troubleshoot and resolve data-related issues promptly to minimize disruptions.
  • Monitor and enhance the performance of data infrastructure, ensuring scalability and reliability.
  • Stay updated with industry trends and best practices in data engineering to apply improvements.
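
To make the API ingestion and validation bullets concrete, here is a minimal sketch of pulling JSON from a hypothetical marketing API and applying a basic quality check before loading; the endpoint and field names are invented.

```python
# Hypothetical sketch: ingest JSON from a marketing API with a basic quality check.
import requests

REQUIRED_FIELDS = {"campaign_id", "sent_at", "opens", "clicks"}

resp = requests.get(
    "https://api.example-marketing.com/v1/campaign-stats",  # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},             # placeholder auth
    timeout=30,
)
resp.raise_for_status()

rows = resp.json()["results"]
valid = [r for r in rows if REQUIRED_FIELDS <= r.keys()]  # set-inclusion check
print(f"{len(valid)} of {len(rows)} records passed validation")
```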

AWS · Python · Apache Airflow · ETL · GCP · MySQL · Snowflake · Apache Kafka · Azure · JSON

Posted 3 days ago
Apply

💸 125,000–135,000 USD per year

🔍 Digital transformation and technology services

🏢 Company: Dynamo Technologies

  • Proficiency in Python or R for data operations and analytics.
  • Strong expertise in SQL development with enterprise databases such as PostgreSQL or SQL Server.
  • Hands-on experience with ETL development and data pipeline operations.
  • Familiarity with cloud-based solutions such as AWS.
  • Knowledge of DevOps practices and automation in data environments.
  • Design and implement data storage and processing infrastructure to manage large-scale data analytics.
  • Develop and maintain robust and scalable solutions for managing structured and unstructured data using traditional and NoSQL databases.
  • Support ETL processes, ensuring data quality and integrity (an upsert sketch follows this list).
  • Collaborate with data scientists, database architects, and business users on data use cases.
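
A minimal sketch of the ETL-with-integrity bullet using PostgreSQL: an idempotent upsert via psycopg2, so re-running a batch cannot create duplicates. The DSN, table, and columns are invented placeholders.

```python
# Hypothetical sketch: idempotent upsert into PostgreSQL with psycopg2.
import psycopg2
from psycopg2.extras import execute_values

rows = [("u-1", "alice@example.com"), ("u-2", "bob@example.com")]  # sample batch

with psycopg2.connect("dbname=analytics user=etl") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        execute_values(
            cur,
            """
            INSERT INTO users (user_id, email) VALUES %s
            ON CONFLICT (user_id) DO UPDATE SET email = EXCLUDED.email
            """,
            rows,
        )
```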
Posted 3 days ago
Apply
🔥 Senior Data Engineer

🧭 Full-Time

🔍 Construction

  • 7+ years of relevant data platform development experience building production-grade solutions.
  • Proficient with SQL and a high-level object-oriented language (e.g., Python).
  • Experience with designing and building distributed data architecture.
  • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow.
  • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, Flink and/or others.
  • Familiarity with event data streaming at scale (a consumer sketch follows this list).
  • Experience building observability and monitoring into data products.
  • Motivated to identify opportunities for automation to reduce manual toil.
  • Collaborate with Product Managers, Designers, Engineers, Data Scientists and Data Analysts to take ideas from concept to production at scale.
  • Design, build and maintain the data platform for automation and self-service for data scientists and analysts.
  • Develop and maintain the data product framework that supports analytics features.
  • Build and maintain CI/CD pipelines and automated deployment processes.
  • Develop data monitoring and alerting capabilities.
  • Document architecture and processes for collaboration.
  • Mentor peers for skill development.
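
For the event-streaming bullets, here is a minimal sketch of a Kafka consumer loop using confluent-kafka; the broker address, group id, and topic are invented placeholders.

```python
# Hypothetical sketch of a Kafka consumer loop; broker and topic are placeholders.
from confluent_kafka import Consumer


def process(payload: bytes) -> None:
    ...  # hand the payload to downstream processing (stub)


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "data-platform",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["equipment-telemetry"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        process(msg.value())
finally:
    consumer.close()
```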
Posted 5 days ago
Apply
🔥 Senior Data Engineer

📍 Egypt

🏢 Company: SSC Egypt

  • Bachelor’s or master’s degree in Computer Science, Data Engineering, or a related field.
  • Proven experience as a Data Engineer, focusing on large-scale data extraction and normalization projects.
  • Strong proficiency in SQL and scripting languages such as Python or Java, plus Spark/PySpark/Scala for data manipulation.
  • Experience with ETL tools including Talend and Informatica.
  • In-depth knowledge of Data-Warehouse Concepts and database management systems, both relational and NoSQL.
  • Working understanding of unstructured data handling and data extraction techniques.
  • Familiarity with data modeling, schema design, and optimization techniques.
  • Understanding of data governance, security, and compliance standards.
  • Excellent problem-solving and analytical skills with attention to detail.
  • Effective communication skills for collaboration with diverse stakeholders.
  • Previous experience in transportation, asset management, or digital transformation projects is required.
  • Familiarity with Agile methodologies.
  • Lead the extraction of engineering data from diverse sources, including structured and unstructured data such as PDF documents (a PDF extraction sketch follows this list).
  • Develop comprehensive mapping strategies between different data sets and implement robust normalization and standardization strategies.
  • Create Source to Target mapping documents with business rules by collaborating with stakeholders.
  • Collaborate with cross-functional teams to define and enforce data standards and quality.
  • Design, implement, and manage scalable databases for large volumes of data.
  • Develop and maintain efficient ETL processes to ensure seamless data flow, transformation, quality, and integrity through effective error handling and validation.
  • Collaborate with domain experts, data stewards, and project teams for effective data integration.
  • Utilize technologies including scripting languages and data integration tools.
  • Create and maintain comprehensive documentation for data engineering processes.
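
Since the role calls out extracting data from PDF documents, here is a minimal sketch using pypdf to pull raw text ahead of normalization; the file path is a hypothetical placeholder.

```python
# Hypothetical sketch: extracting raw text from an engineering PDF with pypdf.
from pypdf import PdfReader

reader = PdfReader("specs/pump_datasheet.pdf")  # placeholder document
pages = [page.extract_text() or "" for page in reader.pages]
text = "\n".join(pages)

# Downstream steps would parse fields out of `text` and map them onto the
# target schema defined in the Source-to-Target mapping document.
print(text[:500])
```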
Posted 5 days ago
Apply

Related Articles

Posted 6 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 6 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 6 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 6 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 6 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.