Data Engineering Jobs

Find remote positions requiring data engineering skills. Browse opportunities where you can apply your expertise and grow your career.

Data engineering

544 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.


📍 USA

💸 90,000 - 125,000 USD per year

🔍 Digital media, personal finance

🏢 Company: GOBankingRates

  • 3+ years of experience in digital marketing analytics within a digital advertising/marketing/media company.
  • Strong background in analytics and understanding of Statistical Significance and data modeling.
  • BA/BS in Mathematics, Statistics, Finance, Marketing, Economics, or related field; MBA is a plus.
  • Intermediate to advanced knowledge in BI tools, such as Tableau.
  • Strong verbal and written communication skills for clear reporting.
  • Complex problem-solving skills with a solutions-oriented mindset.
  • Intermediate to advanced knowledge of Excel for data manipulation.
  • Intermediate to advanced knowledge of SQL and querying relational databases.

  • Partner with the team on strategy to understand the business and achieve its goals.
  • Ingest and aggregate performance data to build models and insights.
  • Analyze digital marketing data for growth and performance improvement.
  • Apply statistical methods and data visualization to build business cases (see the sketch after this list).
  • Share insights and recommendations for measurable business impact.
  • Partner with teams for decision-making on ad placements and KPI improvements.
  • Collaborate with Data Engineering for better data access and accuracy.
  • Adapt quickly to changing project priorities and business needs.
  • Build working relationships for alignment in strategy and vision.
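
To make the statistical-significance requirement concrete, here is a minimal sketch of a two-proportion test with statsmodels' `proportions_ztest`; the click and impression counts and the 0.05 threshold are invented for illustration, not taken from the posting:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up click/impression counts for two ad placements (illustrative only).
clicks = [1320, 1450]
impressions = [52000, 51500]

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# 0.05 is a conventional threshold; the right bar depends on the business case.
if p_value < 0.05:
    print("Placement difference is statistically significant.")
else:
    print("No significant difference yet; keep collecting data.")
```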

SQL, Tableau, Strategy, Data engineering, Data science, Communication Skills, Written communication

Posted 2024-11-21
Apply

📍 US

🧭 Full-Time

💸 206,700 - 289,400 USD per year

🔍 Social media / Online community

  • MS or PhD in a quantitative discipline: engineering, statistics, operations research, computer science, informatics, applied mathematics, economics, etc.
  • 7+ years of experience with large-scale ETL systems, building clean, maintainable, object-oriented code (Python preferred).
  • Strong programming proficiency in Python, SQL, Spark, Scala.
  • Experience with data modeling, ETL concepts, and manipulating large structured and unstructured data.
  • Experience with data workflows (e.g., Airflow) and data visualization tools (e.g., Looker, Tableau).
  • Deep understanding of technical and functional designs for relational and MPP databases.
  • Proven track record of collaboration and excellent communication skills.
  • Experience in mentoring junior data scientists and analytics engineers.

  • Act as the analytics engineering lead within the Ads DS team, contributing to data quality and automation initiatives across data science.
  • Ensure high-quality data through ETLs, reporting dashboards, and data aggregations for business tracking and ML model development.
  • Develop and maintain robust data pipelines and workflows for data ingestion, processing, and transformation (see the DAG sketch after this list).
  • Create user-friendly tools for internal use across Data Science and cross-functional teams.
  • Lead efforts to build a data-driven culture by enabling data self-service.
  • Provide mentorship and coaching to data analysts and act as a thought partner for data teams.
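
As a rough illustration of the pipeline work described above, here is a minimal Airflow DAG sketch; the DAG id, schedule, task split, and stub callables are all invented for this example:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw ad performance data from the source system (stub)."""


def transform():
    """Aggregate raw events into daily reporting tables (stub)."""


with DAG(
    dag_id="ads_daily_aggregation",  # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run extract before transform
```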

Leadership, Python, SQL, Data Analysis, ETL, Tableau, Strategy, Airflow, Data engineering, Data science, Spark, Communication Skills, Collaboration, Mentoring, Coaching

Posted 2024-11-21
Apply

📍 Canada

🔍 Artificial Intelligence

  • Strong background in AWS DevOps and data engineering.
  • Expertise with AWS and SageMaker is essential.
  • Experience with Snowflake for analytics and data warehousing is highly desirable.

  • Manage and optimize the data infrastructure.
  • Focus on both data engineering and DevOps responsibilities.
  • Deploy machine learning models to AWS using SageMaker (see the sketch below).
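
A minimal sketch of such a deployment with the sagemaker Python SDK; the container image URI, model artifact path, and IAM role ARN are placeholders, not values from the posting:

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

model = Model(
    image_uri="<inference-container-image-uri>",               # placeholder
    model_data="s3://example-bucket/models/model.tar.gz",      # placeholder
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole", # placeholder
    sagemaker_session=session,
)

# Creates a real-time HTTPS endpoint backed by one ml.m5.large instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```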

AWS, Machine Learning, Snowflake, Data engineering, DevOps

Posted 2024-11-21
Apply

📍 Saint Mandé

🔍 Gaming

  • Experience in Software/Data engineering or a related field.
  • Solid experience with Python or Rust.
  • Experience building and interacting with REST APIs.
  • Familiarity with microservice architecture and API design.
  • Previous experience using cloud technology, Kubernetes, and AWS/Azure.
  • Knowledge of Machine Learning and Deep Learning.
  • Experience in deploying models to production.
  • Good communication skills and the ability to work collaboratively.

  • Take ownership of the projects you build and drive them forward.
  • Design, prototype, build, and maintain microservices & APIs for data and model delivery (see the sketch after this list).
  • Build pipelines and batch processes to move and transform data.
  • Manage scalable infrastructure in the cloud.
  • Work on quality improvements and proof-of-concept projects.
  • Write, optimize, and produce high-quality code for scalability using modern best practices.
  • Collaborate with data and machine learning engineers to deploy models or prediction pipelines to production.
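
As one possible shape for such a service, here is a minimal FastAPI sketch in Python (the posting equally allows Rust); the /predict route, payload schema, and scoring stub are invented for illustration:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PredictionRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    # Stub: a real service would call a deployed model or prediction pipeline.
    score = sum(request.features) / max(len(request.features), 1)
    return {"score": score}
```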

AWS, Python, Kubernetes, Machine Learning, Azure, Data engineering, Rust, Communication Skills, Collaboration, Microservices

Posted 2024-11-21
Apply

📍 Mexico, Gibraltar, Colombia, USA, Brazil, Argentina

🧭 Full-Time

🔍 Cryptocurrency

🏢 Company: Bitso

  • 4+ years of professional experience working with analytics, ETLs, and data systems as an individual contributor.
  • 3+ years of experience in engineering management at tech companies.
  • Expertise in defining and implementing data architectures, including ETL/ELT pipelines, data lakes, data warehouses, and real-time data processing systems.
  • Expertise with cloud platforms (AWS preferred), data engineering tools (Databricks, Spark, Kafka), and SQL/NoSQL databases.
  • Expertise translating business requirements into technical solutions and data architecture.
  • Expertise with orchestration tools (e.g. AWS Step Functions, Databricks Workflows, or Dagster).
  • Proven experience in building data migration services or implementing change data capture (CDC) processes (see the sketch after this list).
  • Experience with CI/CD tools (GitHub Actions).
  • Experience with CDP platforms and handling behavioral data (e.g. Segment, Amplitude, Avo).
  • Experience in infrastructure-as-code technologies (e.g. Terraform) and serverless for data engineering tasks.
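
To make the CDC requirement concrete, here is one hedged sketch of applying a change feed as a Delta Lake MERGE on Spark, in line with the Databricks/Spark tooling the posting names; the feed path, table, view, and column names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder CDC feed; in practice this might come from Debezium or AWS DMS.
changes = spark.read.format("json").load("s3://example-bucket/cdc/customers/")
changes.createOrReplaceTempView("customer_changes")

# Delta Lake MERGE applying inserts, updates, and deletes from the feed.
spark.sql("""
    MERGE INTO analytics.customers AS target
    USING customer_changes AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED AND source.op = 'DELETE' THEN DELETE
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```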

  • Lead the Data Engineering team and the Data Governance lead in their daily work, providing technical expertise and mentoring.
  • Prioritize workload, set clear goals, and drive accountability so the team delivers exceptional data products on time.
  • Mentor and coach the entire Data Engineering division, fostering professional development and a culture of innovation.
  • Partner with Data Science divisions to drive data products that solve business problems.
  • Engage with stakeholders to define roadmaps according to Bitso’s priorities.
  • Recruit and retain top talent.
  • Define and drive Bitso’s data strategy in partnership with the SVP of Data Science.

AWS, Leadership, SQL, Business Intelligence, ETL, Kafka, Strategy, Data engineering, Data science, Serverless, NoSQL, Spark, Collaboration, CI/CD, Mentoring, DevOps, Terraform

Posted 2024-11-21
Apply

📍 India

🧭 Full-Time

🔍 Data & AI

🏢 Company: ProArch

  • Bachelor’s degree in Computer Science, Engineering, or related field (Master’s preferred).
  • 8+ years of experience in AI, Data Engineering, and full-stack development.
  • Ability to work with ambiguity and deliver consultative solutions.
  • Familiarity with Agile methodologies (Scrum, Kanban).
  • Excellent communication and interpersonal skills.

  • Build and maintain a list of innovative ideas and evaluate them for feasibility.
  • Partner with presales teams to create tailored solutions for clients.
  • Provide technical expertise to resolve delivery challenges.
  • Conduct workshops to educate sales and marketing teams.
  • Share insights from sales calls with solution teams.

AWS, Docker, GraphQL, Leadership, Node.js, PostgreSQL, Python, SQL, Agile, Blockchain, Django, Flask, GCP, IoT, Java, Jenkins, Kafka, Kubernetes, Machine Learning, MongoDB, PyTorch, Scrum, Snowflake, Spring, Spring Boot, Vue.js, Azure, Data engineering, .NET, Angular, Serverless, React, Spark, TensorFlow, CI/CD, Agile methodologies, DevOps, Microservices

Posted 2024-11-21
Apply

🔥 Senior Data Engineer

📍 Belgium, Spain

🔍 Hospitality industry

🏢 Company: Lighthouse

  • 5+ years of professional experience using Python, Java, or Scala for data processing (Python preferred)
  • Experience with writing data processing pipelines and with cloud platforms like AWS, GCP, or Azure
  • Experience with data pipeline orchestration tools like Apache Airflow (preferred), Dagster, or Prefect
  • Deep understanding of data warehousing strategies
  • Experience with transformation tools like dbt to manage data transformation in your data pipelines
  • Some experience in managing infrastructure with IaC tools like Terraform
  • Stay updated with industry trends, emerging technologies, and best practices in data engineering
  • Improve, manage, and teach standards for code maintainability and performance in code submitted and reviewed
  • Ship large features independently, generate architecture recommendations with the ability to implement them
  • Strong communicator who can explain complex topics simply to both technical and non-technical stakeholders.

  • Design and develop scalable, reliable data pipelines using the Google Cloud stack (see the sketch after this list).
  • Ingest, process, and store structured and unstructured data from various sources into our data-lakes and data warehouses.
  • Optimize data pipelines for cost, performance, and scalability.
  • Implement and maintain data governance frameworks, ensuring data accuracy, consistency, and compliance.
  • Monitor and troubleshoot data pipeline issues, implementing proactive measures for reliability and performance.
  • Mentor and provide technical guidance to other engineers working with data.
  • Partner with Product, Engineering & Data Science teams to operationalise new solutions.
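
A minimal sketch of one such Google Cloud pipeline step, loading files into BigQuery with the google-cloud-bigquery client; the bucket, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/events/*.json",  # placeholder source files
    "example-project.analytics.raw_events",   # placeholder destination table
    job_config=job_config,
)
load_job.result()  # block until the load completes; raises on error
```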

Python, Apache Airflow, GCP, Java, Kafka, Kubernetes, Data engineering, Grafana, Prometheus, Spark, CI/CD, Terraform, Documentation, Compliance

Posted 2024-11-21
Apply

📍 US

🔍 Consumer insights

  • Strong PL/SQL and SQL development skills.
  • Proficient in multiple data engineering languages such as Python and Java.
  • Minimum 3-5 years of experience in Data engineering with Oracle and MS SQL.
  • Experience with data warehousing concepts and cloud-based technologies like Snowflake.
  • Experience with cloud platforms such as Azure.
  • Knowledge of data orchestration tools like Azure Data Factory and Databricks workflows.
  • Understanding of data privacy regulations and best practices.
  • Experience working with remote teams.
  • Familiarity with tools like Git, Jira.
  • Bachelor's degree in Computer Science or Computer Engineering.

  • Design, implement and maintain scalable pipelines and architecture to collect, process, and store data from various sources.
  • Unit test and document solutions that meet product quality standards prior to release to QA.
  • Identify and resolve performance bottlenecks in pipelines to ensure efficient data delivery.
  • Implement data quality checks and validation processes (see the sketch after this list).
  • Work with Data Architect to implement best practices for data governance, quality, and security.
  • Collaborate with cross-functional teams to address data needs.
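
One lightweight way to implement the quality checks mentioned above, sketched with pandas; the file path, column names, and rules are invented for illustration:

```python
import pandas as pd

df = pd.read_parquet("staging/orders.parquet")  # placeholder extract

# Each check returns True when the rule holds for the whole dataset.
checks = {
    "no_null_ids": df["order_id"].notna().all(),
    "unique_ids": df["order_id"].is_unique,
    "positive_amounts": (df["amount"] > 0).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```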

Python, SQL, ETL, Git, Java, Oracle, QA, Snowflake, Jira, Azure, Data engineering, CI/CD

Posted 2024-11-21
Apply

📍 US, UK

🧭 Full-Time

💸 185,000 - 200,000 USD per year

🔍 Music technology

🏢 Company: Splice

  • Experience with Elasticsearch, optimizing data representations, queries, and clusters.
  • Relevant work experience building and evolving production software using Go and Python.
  • Experience deploying and managing ML models in production environments.
  • Several years working with RDBMS such as MySQL or PostgreSQL and crafting performant SQL queries.
  • Experience leveraging SaaS and cloud provider primitives effectively.
  • Strong customer experience focus and willingness to engage in build-versus-buy discussions.
  • Proficiency in writing, deploying, evolving, and deleting code.
  • Excellent communication skills with both technical and non-technical audiences.

  • Define the architecture and drive implementation changes across multiple backend services that power Splice’s products.
  • Translate large-scale architectural changes into manageable outcomes that benefit customers.
  • Optimize search infrastructure and create mechanisms for safe and rapid promotion of ML models to production (see the sketch after this list).
  • Advocate for system designs and APIs that prioritize customer needs.
  • Deliver complex projects spanning multiple domains and teams.
  • Identify areas for team improvement and propose solutions.
  • Provide mentorship and constructive feedback for engineering practices.
  • Manage cross-team commitments and track progress related to the delivery roadmap.
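
For flavor, a minimal Python sketch of the kind of search-query work this role involves, using the official elasticsearch client; the endpoint, index, and field names are placeholders:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

response = es.search(
    index="samples",  # placeholder index
    query={
        "bool": {
            "must": [{"match": {"title": "drum loop"}}],   # full-text relevance
            "filter": [{"term": {"genre": "house"}}],      # exact, cacheable filter
        }
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```

Putting exact-match conditions in `filter` rather than `must` is a common optimization, since filters skip scoring and can be cached.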

AWS, Docker, GraphQL, PostgreSQL, Python, SQL, Elasticsearch, Jenkins, Keras, MySQL, PyTorch, TypeScript, Data engineering, Go, gRPC, RDBMS, Redis, TensorFlow, Collaboration, Terraform

Posted 2024-11-20
Apply

📍 Canada

🧭 Full-Time

🔍 Technology

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in Data Engineering, with at least 3+ years in AWS environments.
  • Strong knowledge of AWS services including SageMaker, Lambda, Glue, and Redshift.
  • Hands-on experience deploying machine learning models in AWS SageMaker.
  • Proficiency in DevOps practices, CI/CD pipelines, Docker, and infrastructure-as-code tools.
  • Advanced SQL skills and experience with complex ETL workflows.
  • Proficiency in Python and skills in Java or Scala.
  • Experience with Apache Airflow for data orchestration.
  • Effective communication skills and a result-oriented approach.

  • Design, develop, and maintain ETL pipelines ensuring reliable data flow and high-quality data for analytics.
  • Build and optimize data models to efficiently handle large data volumes in Snowflake.
  • Create complex SQL queries for data processing and analytics (see the Snowflake sketch after this list).
  • Manage orchestration and scheduling using Apache Airflow.
  • Document data pipelines and architecture.
  • Architect, build, and maintain data science infrastructure on AWS.
  • Collaborate with Data Scientists on deploying ML models using SageMaker.
  • Automate ML model deployment and monitoring with CI/CD and IaC tools.
  • Set up monitoring solutions to ensure effective operation of data pipelines.
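
A minimal sketch of running an aggregation in Snowflake from Python with the snowflake-connector-python package; the account, credentials, and table names are placeholders:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="etl_user",             # placeholder
    password="***",              # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT event_date, COUNT(*) AS events
        FROM raw_events          -- placeholder table
        GROUP BY event_date
        ORDER BY event_date
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```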

AWS, Docker, Python, SQL, Apache Airflow, Artificial Intelligence, ETL, Git, Java, Machine Learning, Snowflake, Data engineering, Data science, CI/CD, DevOps, Terraform, Documentation

Posted 2024-11-20
Apply
Showing 10 of 544 jobs.