
Data Engineer

Posted 13 days ago


💎 Seniority level: Junior, 2-4 years

💸 Salary: 162,000 - 180,000 USD per year

🔍 Industry: Mental health

🏢 Company: Two Chairs · 👥 501-1000 · 💰 $72,000,000 Series C, 10 months ago · Mental Health, Therapeutics, Wellness, Health Care

⏳ Experience: 2-4 years

Requirements:
  • 2-4 years of hands-on experience with data pipelines and ETL processes.
  • Experience with cloud platforms (preferably GCP) and data warehouses (preferably BigQuery).
  • Working knowledge of Python, SQL, and DBT or similar frameworks.
  • Familiarity with data quality monitoring and alerting practices.
  • Strong problem-solving skills and attention to detail.
  • Ability to collaborate with both technical and non-technical teammates.
  • Enthusiasm for learning in a fast-paced environment.
Responsibilities:
  • Build and maintain reliable data pipelines in collaboration with Data Scientists and engineers.
  • Implement monitoring, alerting, and data quality checks for data integrity.
  • Partner with various departments to understand data needs.
  • Optimize query performance and manage costs in BigQuery.
  • Create API integrations for data ingestion.
  • Automate tasks using notebooks and APIs.
  • Build data models and documentation for data consumers.
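The monitoring and data-quality responsibilities above can be sketched in plain Python. This is a minimal, warehouse-agnostic illustration; the column names, thresholds, and `loaded_at` field are hypothetical, and a real pipeline would run checks like these against BigQuery and route alerts to a pager or Slack:

```python
from datetime import datetime, timedelta, timezone

def null_rate(rows, column):
    """Fraction of rows where `column` is missing (None)."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, column, max_null_rate=0.05, max_staleness_hours=24):
    """Return a list of human-readable alerts; an empty list means healthy."""
    alerts = []
    rate = null_rate(rows, column)
    if rate > max_null_rate:
        alerts.append(f"null rate for {column} is {rate:.1%} (limit {max_null_rate:.0%})")
    # Freshness check: newest load timestamp must be within the staleness window.
    newest = max((r["loaded_at"] for r in rows), default=None)
    if newest is None or datetime.now(timezone.utc) - newest > timedelta(hours=max_staleness_hours):
        alerts.append("data is stale or empty")
    return alerts
```

Thresholds like the 5% null-rate limit would normally be tuned per table rather than hard-coded.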

Related Jobs

🔥 Senior Data Engineer
Posted about 20 hours ago

📍 United States, Canada

🧭 Regular

💸 125,000 - 160,000 USD per year

🔍 Digital driver assistance services

🏢 Company: Agero · 👥 1001-5000 · 💰 $4,750,000, over 2 years ago · Automotive, InsurTech, Information Technology, Insurance

Requirements:
  • Bachelor's degree in a technical field with 5+ years of industry experience, or a Master's degree with 3+ years.
  • Extensive experience with Snowflake or other cloud-based data warehousing solutions.
  • Expertise in ETL/ELT pipelines using tools like Airflow, DBT, Fivetran.
  • Proficiency in Python for data processing and advanced SQL for managing databases.
  • Solid understanding of data modeling techniques and cost management strategies.
  • Experience with data quality frameworks and deploying data solutions in the cloud.
  • Familiarity with version control systems and implementing CI/CD pipelines.
Responsibilities:
  • Develop and maintain ETL/ELT pipelines to ingest data from diverse sources.
  • Monitor and optimize cloud costs while performing query optimization in Snowflake.
  • Establish modern data architectures including data lakes and warehouses.
  • Apply dimensional modeling techniques and develop transformations using DBT or Spark.
  • Write reusable and efficient code, and develop data-intensive UIs and dashboards.
  • Implement data quality frameworks and observability solutions.
  • Collaborate cross-functionally and document data flows, processes, and architecture.
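The ELT pattern named above (load raw data first, then transform inside the warehouse) can be shown with a toy example. Python's built-in sqlite3 stands in for Snowflake here, and the table and column names are invented for illustration:

```python
import sqlite3

def load_and_transform(raw_events):
    """Toy ELT: load raw rows as-is, then build a curated table with SQL."""
    con = sqlite3.connect(":memory:")
    # Load step: raw events land untransformed.
    con.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO raw_events VALUES (?, ?)", raw_events)
    # Transform step: aggregate inside the "warehouse", as dbt would.
    con.execute("""
        CREATE TABLE fct_user_spend AS
        SELECT user_id, SUM(amount) AS total_spend, COUNT(*) AS n_events
        FROM raw_events
        GROUP BY user_id
    """)
    return con.execute(
        "SELECT user_id, total_spend, n_events FROM fct_user_spend ORDER BY user_id"
    ).fetchall()
```

In a real Snowflake deployment the transform would be a versioned DBT model rather than an inline `CREATE TABLE AS`.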

AWS · Python · SQL · Apache Airflow · DynamoDB · ETL · Flask · MongoDB · Snowflake · FastAPI · Pandas · CI/CD · Data modeling


💸 126,400 - 223,100 USD per year

🔍 Financial services

🏢 Company: Block · 👥 1001-5000 · Electronics, Manufacturing

Requirements:
  • 5+ years of data engineering experience.
  • Experience with database management and cloud computing.
  • Knowledge of data warehousing architecture and dimensional modeling.
  • End-to-end ETL pipeline development using SQL and Python.
  • Experience with BI visualization tools like Looker or Tableau.
  • Familiarity with cloud-based services like Snowflake, Redshift, or Azure.
Responsibilities:
  • Be the expert in building the data foundation that powers BI and visualization tools.
  • Partner with various teams to translate requirements into scalable data pipelines.
  • Develop and manage BI data pipelines and centralized data warehouse with curated datasets.
  • Perform ad hoc analysis and design dashboards for stakeholders.
Posted 1 day ago
🔥 Staff Data Engineer
Posted 1 day ago

🧭 Full-Time

💸 130,000 - 170,000 USD per year

🔍 Entertainment and Media

Requirements:
  • 8+ years of experience in a data engineering role, leading data engineering teams.
  • Understanding of ETL/ELT principles, cloud technologies (AWS, Azure, GCP), and data modeling.
  • Experience building data pipelines using Python/SQL.
  • Understanding of REST-based APIs and AI workload components.
  • Bachelor's degree in Computer Science, Data Science, or a related field.
Responsibilities:
  • Design, build, and scale data pipelines across various source systems and streams.
  • Collaborate with cross-functional teams for data requirements and integration strategies.
  • Support APIs and machine learning services, working closely with data scientists.
  • Implement design patterns optimizing performance, cost, security, and user experience.
  • Create automated tests for code compatibility and document to assist developers and business users.
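The "automated tests" responsibility above amounts to unit-testing pipeline transformations. A small hypothetical example (the function, its field names, and the test are all invented for illustration; in practice this would run under pytest in CI):

```python
def normalize_record(record):
    """Lower-case keys and strip whitespace from string values."""
    return {k.lower(): v.strip() if isinstance(v, str) else v
            for k, v in record.items()}

def test_normalize_record():
    # Automated test: pins down the transformation's contract so that
    # refactors and dependency upgrades cannot silently change behavior.
    out = normalize_record({"Name": "  Ada ", "Age": 36})
    assert out == {"name": "Ada", "age": 36}

test_normalize_record()  # pytest would discover and run this automatically
```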
Posted 1 day ago

📍 Hungary

🔍 Crypto

Requirements:
  • Experience with SQL, Python, R, Java, and C++.
  • Familiarity with machine learning techniques.
  • Knowledge of programming and deployment strategies.
Responsibilities:
  • Facilitate operations for the Data Science and Engineering teams.
  • Employ various tools and techniques to construct frameworks.
  • Prepare information using SQL, Python, R, Java, and C++.
  • Utilize machine learning techniques for data analysis.
  • Collaborate with coworkers to meet project requirements.

Python · SQL · Java · Machine Learning · C++

Posted 2 days ago

🔍 Crypto

Requirements:
  • Strong knowledge of SQL, Python, R, Java, and C++.
  • Experience with machine learning techniques.
  • Familiarity with programming and deployment strategies.
  • Ability to work collaboratively with team members.
Responsibilities:
  • Facilitate the operations of the Data Science and Engineering teams.
  • Employ various tools and techniques for data preparation.
  • Construct frameworks using programming languages such as SQL, Python, R, Java, and C++.
  • Utilize machine learning techniques to create structures for data analysis.
  • Collaborate with coworkers to meet project needs.
Posted 2 days ago

📍 Uruguay

🔍 Subscription economy

🏢 Company: Neocol · 👥 11-50 · 💰 almost 2 years ago · Information Services, Software

Requirements:
  • 1-3+ years of data migration experience.
  • Background in completing ETL processes.
  • Experience with SQL and programming languages.
  • Ability to develop SQL scripts with a strong understanding of database concepts.
  • Self-starter with good team collaboration skills.
Responsibilities:
  • Work with a data architect to migrate customer data from source systems to target systems, mainly Salesforce.
  • Participate in customer workshops to discuss data migration processes.
  • Aid in creating migration plans, including timelines and milestones.
  • Create field mappings and develop migration scripts.
  • Ensure data integrity and troubleshoot issues during migrations.
  • Provide documentation of processes and participate in check-in meetings with customers.
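The "field mappings and migration scripts" work above can be sketched as a source-to-target mapping with basic integrity checks. The mapping, the Salesforce-style target field names, and the required-field list are hypothetical:

```python
# Hypothetical mapping from a source CRM export to Salesforce-style fields.
FIELD_MAP = {
    "cust_name": "Account.Name",
    "cust_email": "Contact.Email",
    "signup_dt": "Account.CreatedDate",
}
REQUIRED = {"Account.Name"}

def map_record(source_row):
    """Apply the field mapping and flag missing required target fields."""
    target = {dst: source_row.get(src) for src, dst in FIELD_MAP.items()}
    # Integrity check: required target fields must be populated before load.
    errors = [f"missing {field}" for field in REQUIRED if not target.get(field)]
    return target, errors
```

Real migrations layer type coercion, deduplication, and lookup-ID resolution on top of a mapping like this.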

SQLETLMicrosoft SQL Server

Posted 2 days ago

🧭 Full-Time

💸 163,282 - 192,262 USD per year

🔍 Software / Data Visualization

Requirements:
  • 15+ years of professional software development or data engineering experience (12+ with a STEM B.S. or 10+ with a relevant Master's degree)
  • Strong proficiency in Python and familiarity with Java and Bash scripting
  • Hands-on experience implementing database technologies, messaging systems, and stream computing software (e.g., PostgreSQL, PostGIS, MongoDB, DuckDB, KsqlDB, RabbitMQ)
  • Experience with data fabric development using publish-subscribe models (e.g., Apache NiFi, Apache Pulsar, Apache Kafka and Kafka-based data service architecture)
  • Proficiency with containerization technologies (e.g., Docker, Docker-Compose, RKE2, Kubernetes, and Microk8s)
  • Experience with version control systems (e.g., Git), CI/CD tools (e.g., Jenkins), and collaborative development workflows
  • Strong knowledge of data modeling and database optimization techniques
  • Familiarity with data serialization languages (e.g., JSON, GeoJSON, YAML, XML)
  • Excellent problem-solving and analytical skills that have been applied to high visibility, important data engineering projects
  • Strong communication skills and ability to lead the work of other engineers in a collaborative environment
  • Demonstrated experience in coordinating team activities, setting priorities, and managing tasks to ensure balanced workloads and effective team performance
  • Experience managing and mentoring development teams in an Agile environment
  • Ability to make effective architecture decisions and document them clearly
  • Must be a US Citizen and eligible to obtain and maintain a US Security Clearance
Responsibilities:
  • Develop and continuously improve a data service that underpins cloud-based applications
  • Support data and database modeling efforts
  • Contribute to the development and maintenance of reusable component libraries and shared codebase
  • Participate in the entire software development lifecycle, including requirement gathering, design, development, testing, and deployment, using an agile, iterative process
  • Collaborate with developers, designers, testers, project managers, product owners, and project sponsors to integrate the data service to end user applications
  • Communicate tasking estimation and progress regularly to a development lead and product owner through appropriate tools
  • Ensure seamless integration between database and messaging systems and the frontend / UI they support
  • Ensure data quality, reliability, and performance through code reviews and effective testing strategies
  • Write high-quality code, applying best practices, coding standards, and design patterns
  • Team with other developers, fostering a culture of continuous learning and professional growth
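The publish-subscribe data-fabric pattern this role centers on can be illustrated with a toy in-memory broker. Real deployments would use Kafka, Pulsar, or NiFi rather than this sketch, and the topic name and message fields are invented:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish-subscribe broker (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Consumers register interest in a topic, decoupled from producers.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Producers emit to a topic; every registered handler receives a copy.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("positions", received.append)
broker.publish("positions", {"id": 7, "lat": 38.9, "lon": -77.0})
```

The decoupling shown here (producers never reference consumers) is what lets a data fabric add new downstream services without touching upstream code.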
Posted 3 days ago

📍 India

🧭 Full-Time

🔍 Healthcare and life sciences

🏢 Company: Reveal Health Tech

Requirements:
  • 0-2 years of experience in a similar data profile.
  • Strong experience with ETL/ELT pipelines and tools like Apache Airflow or dbt.
  • Proficiency in SQL and optimization for PostgreSQL and Redis.
  • Hands-on experience with AWS services like S3 and CloudFront.
  • Familiarity with ODBC adapters and APIs.
  • Experience with observability tools such as Datadog and Sentry.
  • Excellent problem-solving and communication skills.
  • Prior experience in a customer-facing role is a plus.
Responsibilities:
  • Design, build, and maintain robust ETL/ELT pipelines from multiple data sources.
  • Optimize existing pipelines for improved performance and reliability.
  • Maintain and optimize PostgreSQL and Redis databases.
  • Ensure timely and accurate data delivery for analytics.
  • Leverage AWS services for data storage and distribution.
  • Utilize monitoring tools for pipeline health.
  • Document data models and operational workflows.

AWS · Docker · PostgreSQL · SQL · Apache Airflow · ETL · Redis

Posted 3 days ago

📍 Japan

🧭 Full-Time

🔍 FinTech

Requirements:
  • 5+ years of experience as a Data Engineer or in a similar role.
  • Hands-on experience with Apache Hudi, Delta Lake, Spark, and Scala.
  • Experience designing, building, and operating a DataLake or Data Warehouse.
  • Strong expertise in AWS services, including Glue, Step Functions, Lambda, and EMR.
  • Proficiency in Terraform for infrastructure as code (IaC).
  • Experience with data warehousing tools like AWS Athena, BigQuery, and Databricks.
Responsibilities:
  • Design, develop, and maintain scalable data ingestion pipelines using AWS Glue, Step Functions, Lambda, and Terraform.
  • Optimize and manage large scale data pipelines to ensure high performance, reliability, and efficiency.
  • Implement data processing workflows using Hudi, Delta Lake, Spark, and Scala.
  • Maintain and enhance Lakeformation and Glue Data Catalog for effective data management and discovery.
  • Collaborate with cross-functional teams to ensure seamless data flow and integration across the organization.

AWS · Python · SQL · Apache Airflow · Machine Learning · Spark · Terraform · Scala

Posted 4 days ago

📍 Brazil, Argentina, Peru, Colombia, Uruguay

🔍 AdTech

🏢 Company: Workana Premium

Requirements:
  • 6+ years of experience in data engineering or related roles, preferably within the AdTech industry.
  • Expertise in SQL and experience with relational databases such as BigQuery and SpannerDB or similar.
  • Experience with GCP services, including Dataflow, Pub/Sub, and Cloud Storage.
  • Experience building and optimizing ETL/ELT pipelines in support of audience segmentation and analytics use cases.
  • Experience with Docker and Kubernetes for containerization and orchestration.
  • Familiarity with message queues or event-streaming tools, such as Kafka or Pub/Sub.
  • Knowledge of data modeling, schema design, and query optimization for performance at scale.
  • Programming experience in languages like Python, Go, or Java for data engineering tasks.
Responsibilities:
  • Build and optimize data pipelines and ETL/ELT processes to support AdTech products: Insights, Activation, and Measurement.
  • Leverage GCP tools like BigQuery, SpannerDB, and Dataflow to process and analyze real-time consumer-permissioned data.
  • Design scalable and robust data solutions to power audience segmentation, targeted advertising, and outcome measurement.
  • Develop and maintain APIs to facilitate data sharing and integration across the platform’s products.
  • Optimize database and query performance to ensure efficient delivery of advertising insights and analytics.
  • Work with event-driven architectures using tools like Pub/Sub or Kafka to ensure seamless data processing.
  • Proactively monitor and troubleshoot issues to maintain data accuracy, security, and performance.
  • Drive innovation by identifying opportunities to enhance the platform’s capabilities in audience targeting and measurement.

Docker · Python · SQL · ETL · GCP · Java · Kafka · Kubernetes · Go · Data modeling

Posted 5 days ago

Related Articles

Posted 5 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 5 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 6 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 6 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.

Posted 6 months ago

The article explores the current statistics for remote work in 2024, covering the percentage of the global workforce working remotely, growth trends, popular industries and job roles, geographic distribution of remote workers, demographic trends, work models comparison, job satisfaction, and productivity insights.