Full-Stack Developer Jobs

Data engineering
803 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Data Engineering Manager
Posted about 2 hours ago

📍 North America, Europe, the Middle East, APAC

🧭 Full-Time

🔍 Cybersecurity

🏢 Company: Dragos

  • 7+ years of engineering management experience, consistently building and leading highly effective teams
  • Strong technical background in distributed data systems
  • Prior experience as an engineer with strong working knowledge of technologies such as Airflow, the Elastic Stack, Docker, Kubernetes, and cloud platforms
  • Comfortable with hands-on development or troubleshooting
  • Excellent communication, leadership, and presentation skills
  • Management experience with Agile and working in cross-functional Product Teams
  • Detail-oriented, takes initiative, and dedicated to quality
  • Knowledge of cybersecurity threat detection, threat intelligence, or ICS/OT security
  • Lead and mentor a team of data engineers, driving their technical growth and professional development.
  • Champion the vision and execution of scalable data processing systems.
  • Collaborate with product and engineering teams to define consistent data architecture and solution strategies.
  • Foster a strong technical community by developing close relationships amongst Engineering Managers.
  • Establish and drive engineering best practices, coding standards, and performance optimization strategies.
  • Provide guidance during software design, development, testing, and release.
  • Ensure timely delivery of projects, balancing technical debt and innovation.
  • Ensure support for fielded production deployments.
  • Stay current with emerging data technologies, frameworks, and best practices to drive continuous improvement.

Docker, Leadership, Python, SQL, Agile, Apache Airflow, Cloud Computing, ElasticSearch, ETL, Kubernetes, People Management, Cross-functional Team Leadership, Data engineering, CI/CD, RESTful APIs, Mentoring, Data visualization, Team management, Data modeling, Data management

Apply
🔥 Sr. Consultant, Architecture
Posted about 5 hours ago

📍 North America, Latin America, Europe

🔍 Data Consultancy

  • 6+ years of proven work experience in data warehousing, BI, and analytics
  • 3+ years as an Architect
  • 3+ years of experience working with cloud platforms
  • Understanding of migration, DevOps, and ETL/ELT ingestion pipelines, with tools like DataStage, Informatica, and Matillion
  • Experience with data modeling
  • Experience with cloud data platforms (AWS, Azure, Google Cloud)
  • Project management skills and experience working with Scrum and Agile Development methods
  • Strong communication skills and the ability to guide clients and team members
  • People Leadership Skills
  • Design cutting-edge data solutions on Snowflake
  • Guide organizations through their data transformations
  • Lead architectural discussions and design exercises

AWS, Project Management, SQL, Agile, Cloud Computing, ETL, Snowflake, Azure, Data engineering, RDBMS, DevOps, Data modeling

Apply
🔥 HR Data Engineer
Posted about 8 hours ago

📍 United States

💸 94,800 - 151,400 USD per year

🏢 Company: careers_gm

  • 5+ years of experience in an HR Data Engineer role, leading HR data engineering transformation and implementing data pipelines and data solutions in the People Analytics/HR domain
  • Very good understanding of HR data and HR employee lifecycle processes (talent acquisition, talent development, workforce planning, engagement, employee listening, external benchmarking, etc.)
  • Very good understanding of HCM data architecture, models and data pipelines and experience designing and implementing data integrations and ETLs with Workday (RaaS, APIs)
  • Experience designing and automating data and analytics solutions that can provide insights and recommendations at scale
  • Proficiency in SQL, R/Python and ETL tools
  • Deep expertise in modern data platforms (particularly Databricks) and end-to-end data architecture (DLT streaming pipelines, Workflows, Notebooks, Delta Lake, Unity Catalog)
  • Experience with different authentication methods (Basic Auth, OAuth, etc.) and encryption methods and tools (GPG, Voltage, etc.)
  • Very strong data analytics skills and ability to leverage multiple internal and external data sources to enable data-driven insights and inform strategic talent decisions
  • Knowledge of compliance and regulatory requirements associated with data management
  • Experience working in environments requiring strict confidentiality and handling of sensitive data
  • Great communication skills and ability to explain complex technical concepts to non-technical stakeholders.
  • Degree with quantitative focus (e.g., Mathematics, Statistics) and/or degree in Human Resources is a plus
  • Design, develop, and maintain ETL/ELT processes for HR data from multiple systems, including Workday, to empower data-driven decision-making (a hedged extraction sketch follows this list)
  • Drive implementation of robust HR data models and pipelines optimized for reporting and analytics, ensuring data quality, reliability, and security for on-prem and Azure cloud solutions.
  • Develop pipelines and testing automation to ensure HR data quality and integrity across multiple data sources
  • Collaborate with People Analytics and HR business partners to understand data requirements and deliver reliable solutions. Collaborate with technical teams to build the best-in-class data environment and technology stack for People Analytics teams.
  • Ensure data integrity, quality, consistency, security, and compliance (e.g., GDPR, CCPA, HIPAA where applicable).
  • Design and implement secure processes for handling sensitive information in our data tech stack while maintaining appropriate access controls and confidentiality
  • Automate manual HR reporting and improve data accessibility through scalable data pipelines across the entire HR employee lifecycle
  • Troubleshoot and resolve data-related issues quickly and efficiently.
  • Contribute to HR tech stack evaluations and migrations, especially around data capabilities and API integrations.
  • Incorporate external data sources into internal datasets for comprehensive analysis
  • Manage and optimize platform architecture including Databricks environment configuration and performance optimization
  • Stay up to date with emerging trends and advancements in data engineering – both technically and in the HR and People Analytics/sciences domain
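
The Workday integration work above usually starts with pulling a RaaS (Report-as-a-Service) report. Below is a minimal sketch of such an extraction in Python; the report URL, tenant, and credentials are hypothetical, and a production integration would typically use OAuth rather than Basic Auth.

```python
# Minimal sketch: pulling a Workday RaaS report into a DataFrame for
# downstream ETL. Report path, tenant, and credentials are placeholders.
import pandas as pd
import requests

REPORT_URL = (
    "https://example.workday.com/ccx/service/customreport2/"
    "example_tenant/report_owner/HR_Headcount_Report"  # hypothetical report path
)

resp = requests.get(
    REPORT_URL,
    params={"format": "json"},
    auth=("integration_user", "integration_password"),  # placeholder credentials
    timeout=60,
)
resp.raise_for_status()

# Workday RaaS JSON wraps rows in a "Report_Entry" array (assumed shape).
df = pd.json_normalize(resp.json().get("Report_Entry", []))
print(df.head())
```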

AWS, Python, SQL, Apache Airflow, ETL, API testing, Azure, Data engineering, NoSQL, RESTful APIs, Data visualization, Data modeling, Data analytics, Data management

Apply

📍 Mexico, Colombia, Argentina, Peru

🔍 Software Development

🏢 Company: DaCodes

  • Proven experience with AWS-native architectures for data ingestion and orchestration.
  • Advanced command of tools and services for large-scale data processing (Spark, Lambda, Kinesis).
  • Solid knowledge of open-table data modeling and of Data Lake and Data Warehouse architectures.
  • Proficiency in Python or Scala for ETL/ELT and transformations.
  • Experience with data quality assurance and continuous monitoring (Great Expectations, Datadog).
  • Build batch or micro-batch pipelines (SLA ≤ 24 hours) to ingest events and profiles from S3/Kinesis into data stores (Data Warehouse).
  • Automate campaign-specific DAGs with AWS Step Functions or Managed Airflow, provisioned at campaign launch and torn down once the campaign ends.
  • Model data in partitioned open-table formats on S3 using technologies such as Iceberg, Hudi, or Delta, with per-campaign versioning.
  • Run ELT loads into Redshift Serverless or queries in Athena/Trino using snapshot and incremental patterns.
  • Develop data transformations with Glue Spark jobs or EMR on EKS for heavy processing, and use Lambda or Kinesis Data Analytics for lightweight enrichment.
  • Program in Python (PySpark, Pandas, boto3) or Scala for data processing.
  • Implement declarative data-quality tests with tools such as Great Expectations or Deequ that run daily during active campaigns (a hedged sketch follows this list).
  • Manage infrastructure and code pipelines with GitHub Actions or CodePipeline, with alerts configured in CloudWatch or Datadog.
  • Ensure data security and governance with Lake Formation, column-level encryption, and compliance with regulations such as GDPR/CCPA.
  • Manage IAM roles under the principle of least privilege for temporary campaign pipelines.
  • Expose semantic models in Redshift/Athena to BI tools such as Looker (LookML, PDTs) or via Trino connections.
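
The declarative data-quality bullet above can be made concrete with a small example. The sketch below approximates, in plain PySpark rather than Great Expectations or Deequ themselves, the kind of checks such a suite would encode; the S3 path and column names are illustrative assumptions.

```python
# Minimal sketch: declarative data-quality checks over a campaign table,
# approximating what a Great Expectations/Deequ suite would encode.
# The table path and column names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("campaign-quality-checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/campaigns/2024-06/profiles/")  # hypothetical path

checks = {
    "no_null_profile_ids": df.filter(F.col("profile_id").isNull()).count() == 0,
    "no_future_event_ts": df.filter(F.col("event_ts") > F.current_timestamp()).count() == 0,
    "no_duplicate_events": df.count() == df.dropDuplicates(["profile_id", "event_ts"]).count(),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # In production this would alert via CloudWatch/Datadog rather than raise.
    raise ValueError(f"Data quality checks failed: {failed}")
```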

AWS, Python, SQL, DynamoDB, ETL, Data engineering, Redis, Pandas, Spark, CI/CD, Terraform, Scala, A/B testing

Posted about 8 hours ago
Apply
🔥 Director | Data Science
Posted about 11 hours ago

📍 United States

🧭 Full-Time

🔍 Healthcare

🏢 Company: Machinify 👥 51-100 💰 $10,000,000 Series A over 6 years ago · Artificial Intelligence (AI), Business Intelligence, Predictive Analytics, SaaS, Machine Learning, Analytics

  • 10+ years of data science experience, with at least 5 years in a leadership role, including a leadership role at a start-up. Proven track record of managing data teams and delivering complex, high-impact products from concept to deployment
  • Strong knowledge of data privacy regulations and best practices in data security
  • Exceptional team management abilities, with experience in building and leading high-performing teams
  • Ability to think strategically and execute methodically
  • Ability to drive change and inspire a distributed team
  • Strong problem-solving skills and a data-driven mindset
  • Ability to communicate effectively, collaborating with diverse groups to solve complex problems
  • Provide direction and guidance to a team of Senior and Staff Data Scientists, enabling them to do their best work
  • Collaborate with the leadership team to define key technical and business metrics and objectives
  • Translate objectives into internal team priorities and assignments
  • Drive sprints and work with cross-functional stakeholders to appropriately prioritize various initiatives to improve customer metrics
  • Hire, mentor and develop team members
  • Foster a culture of innovation, collaboration, and continuous improvement
  • Communicate technical concepts and strategies to technical and non-technical stakeholders effectively
  • Own the success of the various models in the field by continuously monitoring KPIs and initiating projects to improve quality.

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Keras, Machine Learning, MLFlow, Numpy, People Management, Cross-functional Team Leadership, Algorithms, Data engineering, Data science, Data Structures, Pandas, Spark, Tensorflow, Communication Skills, Problem Solving, Agile methodologies, RESTful APIs, Mentoring, Data visualization, Team management, Strategic thinking, Data modeling

Apply

📍 Worldwide

🧭 Full-Time

🔍 Software Development

🏢 Company: Kit 👥 11-50 💰 over 1 year ago · Education, Financial Services, Apps

  • Strong command of SQL, including DDL and DML.
  • Proficient in Python
  • Strong understanding of DBMS internals, including an appreciation for platform-specific nuances.
  • A willingness to work with Redshift and deeply understand its nuances.
  • Familiarity with our key tools (Redshift, Segment, dbt, GitHub)
  • 8+ years in data, with at least 3 years specializing in Data Engineering
  • Proven track record managing and optimizing OLAP clusters
  • Experience refactoring problematic data pipelines without disrupting business operations
  • History of implementing data quality frameworks and validation processes
  • Dive into our Redshift warehouse, dbt models, and workflows.
  • Evaluate the CRM data lifecycle, including source extraction, warehouse ingestion, transformation, and reverse ETL (a hedged ingestion sketch follows this list).
  • Refine and start implementing your design for source extraction and warehouse ingestion.
  • Complete the implementation of the CRM source extraction/ingestion project and use the learnings to refine your approach for other, similar initiatives including, but by no means limited to, web traffic events and product usage logs.
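
For the warehouse-ingestion work described above, one common Redshift pattern is staging plus delete-then-insert, so reruns stay idempotent. The sketch below illustrates that pattern under stated assumptions; the cluster endpoint, IAM role, table, and S3 path are placeholders, not Kit's actual pipeline.

```python
# Minimal sketch: idempotent CRM ingestion into Redshift via a staging table
# and delete-then-insert, so reruns don't create duplicate rows.
# Endpoint, IAM role, table, and S3 path are illustrative placeholders.
import psycopg2

STATEMENTS = [
    "CREATE TEMP TABLE stage (LIKE analytics.crm_contacts);",
    """
    COPY stage
    FROM 's3://example-bucket/crm/contacts/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET;
    """,
    """
    DELETE FROM analytics.crm_contacts
    USING stage
    WHERE analytics.crm_contacts.contact_id = stage.contact_id;
    """,
    "INSERT INTO analytics.crm_contacts SELECT * FROM stage;",
]

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",  # placeholder credential
)
try:
    with conn:  # one transaction: commit on success, roll back on error
        with conn.cursor() as cur:
            for stmt in STATEMENTS:
                cur.execute(stmt)
finally:
    conn.close()
```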

Python, SQL, ETL, Git, Data engineering, RDBMS, Data modeling, Data management

Posted 1 day ago
Apply

📍 Poland

💸 22,900 - 29,900 PLN per month

🔍 Threat Intelligence

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 2+ years of experience as a Data Engineer, working with large-scale distributed systems.
  • Proven expertise in Lakehouse architecture and Apache Hudi in production environments.
  • Experience with Airflow, Kafka, or streaming data pipelines.
  • Strong programming skills in Python and PySpark.
  • Comfortable working in a cloud-based environment (preferably AWS).
  • Design, build, and manage scalable data pipelines using Python, SQL, and PySpark.
  • Develop and maintain lakehouse architectures, with hands-on use of Apache Hudi for data versioning, upserts, and compaction (a minimal upsert sketch follows this list).
  • Implement efficient ETL/ELT processes for both batch and real-time data ingestion.
  • Optimize data storage and query performance across large datasets (partitioning, indexing, compaction).
  • Ensure data quality, governance, and lineage, integrating validation and monitoring into pipelines.
  • Work with cloud-native services (preferably AWS – S3, Athena, EMR) to support modern data workflows.
  • Collaborate closely with data scientists, analysts, and platform engineers to deliver reliable data infrastructure
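
As a rough illustration of the Hudi upsert work above, here is a minimal PySpark sketch. It assumes the Hudi Spark bundle is on the classpath; the source path, table name, and key/partition columns are illustrative assumptions.

```python
# Minimal sketch: upserting a batch into an Apache Hudi table from PySpark.
# Requires the Hudi Spark bundle; paths and column names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hudi-upsert")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

updates = spark.read.json("s3://example-bucket/incoming/events/")  # hypothetical source

hudi_options = {
    "hoodie.table.name": "events",
    "hoodie.datasource.write.operation": "upsert",
    "hoodie.datasource.write.recordkey.field": "event_id",     # dedup key (assumed)
    "hoodie.datasource.write.precombine.field": "updated_at",  # latest record wins
    "hoodie.datasource.write.partitionpath.field": "event_date",
}

(updates.write.format("hudi")
    .options(**hudi_options)
    .mode("append")  # "append" with operation=upsert merges into existing records
    .save("s3://example-bucket/lakehouse/events/"))
```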

AWS, Python, SQL, Cloud Computing, ETL, Kafka, Airflow, Data engineering, CI/CD, Terraform

Posted 1 day ago
Apply

📍 Brazil

🔍 Data Governance

🏢 Company: TELUS Digital Brazil

  • At least 3 years of experience in Data Governance, Metadata Management, Data Cataloging, or Data Engineering.
  • Have actively participated in the design, implementation, and management of data governance frameworks and data catalogs.
  • Experience working with Collibra and a strong understanding of the Collibra Operating Model, workflows, domains, and policies.
  • Experience working with APIs (a hedged Collibra REST sketch follows this list).
  • Experience with a low-code/no-code ETL tool, such as Informatica
  • Experience working with databases and data modeling projects, as well as practical experience utilizing SQL
  • Effective English communication - able to explain technical and non-technical concepts to different audiences
  • Experience with a general-purpose programming language such as Python or Scala
  • Ability to work well in teams and interact effectively with others
  • Ability to work independently and manage multiple tasks simultaneously while meeting deadlines
  • Conduct detailed assessments of the customer’s data governance framework and current-state Collibra implementation
  • Translate business needs into functional specifications for Collibra use cases
  • Serve as a trusted advisor to the customer’s data governance leadership
  • Lead requirement-gathering sessions and workshops with business users and technical teams
  • Collaborate with cross-functional teams to ensure data quality and support data-driven decision-making, striving for greater functionality in our data systems
  • Collaborate with project managers and product owners to assist in prioritizing, estimating, and planning development tasks
  • Provide constructive feedback and share expertise with fellow team members, fostering mutual growth and learning
  • Demonstrate a commitment to accessibility and ensure that your work considers and positively impacts others
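
For a sense of the API work above, here is a hedged sketch of querying assets through what Collibra documents as its REST API 2.0. The instance URL, credentials, and search parameters are placeholders, not a verified integration.

```python
# Minimal sketch: listing assets from a Collibra instance over its REST API.
# Endpoint shape follows Collibra's REST API 2.0 as an assumption; the
# instance URL and credentials below are placeholders.
import requests

BASE_URL = "https://your-instance.collibra.com/rest/2.0"  # hypothetical instance

resp = requests.get(
    f"{BASE_URL}/assets",
    params={"name": "customer", "nameMatchMode": "ANYWHERE", "limit": 50},
    auth=("api_user", "api_password"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

# Responses are assumed to wrap matches in a "results" array.
for asset in resp.json().get("results", []):
    print(asset.get("id"), asset.get("name"))
```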

Python, SQL, ETL, Data engineering, RESTful APIs, Data visualization, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply

📍 United States

🧭 Full-Time

💸 135,000 - 170,000 USD per year

🔍 Healthcare

🏢 Company: SmarterDx 👥 101-250 💰 $50,000,000 Series B about 1 year ago · Artificial Intelligence (AI), Hospital, Information Technology, Health Care

  • 3+ years of analytics engineering experience in the healthcare industry, involving clinical and/or billing/claims data.
  • You are very well-versed in SQL and ETL processes; significant experience with dbt is a must
  • You have experience in a general-purpose programming language (Python, Java, Scala, Ruby, etc.)
  • You have strong experience in data modeling and its implementation in production data pipelines.
  • You are comfortable with the essentials of data orchestration (see the orchestration sketch after this list)
  • Designing, developing, and maintaining dbt data models that support our healthcare analytics products.
  • Integrating and transforming customer data to conform to our data specifications and standards.
  • Collaborating with cross-functional teams to translate data and business requirements into effective data models.
  • Configuring and improving data pipelines that integrate and connect the data models.
  • Conducting QA and testing on data models to ensure data accuracy, reliability, and performance.
  • Applying industry standards and best practices around data modeling, testing, and data pipelining.
  • Participating in a rota with other engineers to help investigate and resolve data related issues.
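
The orchestration essentials mentioned above often boil down to scheduling dbt runs and their tests. Below is a minimal Apache Airflow sketch, assuming Airflow 2.4+ for the `schedule` argument; the DAG id, schedule, and project path are illustrative, not SmarterDx's actual setup.

```python
# Minimal sketch: orchestrating a daily dbt build with Apache Airflow.
# DAG id, schedule, and project path are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_healthcare_models",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test --target prod",
    )
    dbt_run >> dbt_test  # build the models first, then run schema/data tests
```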

AWS, PostgreSQL, Python, SQL, Apache Airflow, ETL, Snowflake, Data engineering, Data modeling, Data analytics

Posted 1 day ago
Apply

📍 United States

💸 175,000 - 200,000 USD per year

🔍 Manufacturing

  • 10+ years of experience in data, analytics, and solution architecture roles.
  • Proven success in a pre-sales or client-facing architecture role.
  • Deep understanding of Microsoft Azure data services, including: Azure Synapse, Data Factory, Data Lake, Databricks/Spark, Fabric, etc.
  • Experience aligning modern data architectures to business strategies in the manufacturing or industrial sector.
  • Experience scoping, pricing, and estimating delivery efforts in early-stage client engagements.
  • Clear and confident communicator who can build trust with technical and business stakeholders alike.
  • Lead strategic discovery sessions with executives and business stakeholders to understand business goals, challenges, and technical landscapes.
  • Define solution architectures and delivery roadmaps that align with client objectives and Microsoft best practices.
  • Scope, estimate, and present proposals for complex Data, AI, and Azure-based projects.
  • Architect end-to-end data and analytics solutions using Microsoft Azure (Data Lake, Synapse, ADF, Power BI, Microsoft Fabric, Azure AI Services, etc.).
  • Oversee solution quality from ideation through handoff to delivery teams.
  • Support and guide sales consultants to align solutions, progress deals, and close business effectively.
  • Collaborate with Delivery to ensure project success and customer satisfaction.
  • Contribute to thought leadership, IP development, and reusable solution frameworks.

SQL, Cloud Computing, ETL, Machine Learning, Microsoft Azure, Microsoft Power BI, Azure, Data engineering, RESTful APIs, Data modeling, Data analytics, Data management

Posted 1 day ago
Apply
Showing 10 of 803 jobs

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals, for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why Do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.