Full-Stack Developer Jobs

ETL
712 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

🔥 Data Engineering Manager
Posted about 2 hours ago

📍 North America, Europe, the Middle East, APAC

🧭 Full-Time

🔍 Cybersecurity

🏢 Company: Dragos

  • 7+ years of engineering management experience, consistently building and leading highly effective teams
  • Strong technical background in distributed data systems
  • Prior experience as an engineer with strong working knowledge of technologies such as Airflow, the Elastic Stack, Docker, Kubernetes, and cloud platforms (see the Airflow sketch after this list)
  • Comfortable with hands-on development or troubleshooting
  • Excellent communication, leadership, and presentation skills
  • Management experience with Agile and working in cross-functional Product Teams
  • Detail-oriented, proactive, and dedicated to quality
  • Knowledge of cybersecurity threat detections, threat intelligence, or ICS/OT security
  • Lead and mentor a team of data engineers, driving their technical growth and professional development.
  • Champion the vision and execution of scalable data processing systems.
  • Collaborate with product and engineering teams to define consistent data architecture and solution strategies.
  • Foster a strong technical community by developing close relationships among Engineering Managers.
  • Establish and drive engineering best practices, coding standards, and performance optimization strategies.
  • Provide guidance during software design, development, testing, and release.
  • Ensure timely delivery of projects, balancing technical debt and innovation.
  • Ensure support for fielded production deployments.
  • Stay current with emerging data technologies, frameworks, and best practices to drive continuous improvement.
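
For a concrete flavor of the pipeline work this role oversees, below is a minimal Airflow DAG sketch; the DAG id, task names, and extract/transform/load stubs are hypothetical placeholders, not taken from the listing.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# All names (DAG id, tasks, stubs) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw events from an upstream source (stubbed here).
    return [{"id": 1, "value": 42}]


def transform(ti, **context):
    # Read the upstream task's output and apply a trivial transformation.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "value": row["value"] * 2} for row in rows]


def load(ti, **context):
    # Hand the transformed rows to a sink (stubbed as a print).
    print(ti.xcom_pull(task_ids="transform"))


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3
```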

Docker, Leadership, Python, SQL, Agile, Apache Airflow, Cloud Computing, ElasticSearch, ETL, Kubernetes, People Management, Cross-functional Team Leadership, Data engineering, CI/CD, RESTful APIs, Mentoring, Data visualization, Team management, Data modeling, Data management

Apply
🔥 Sr. Consultant, Architecture
Posted about 5 hours ago

📍 North America, Latin America, Europe

🔍 Data Consultancy

  • 6+ years of proven work experience in data warehousing/BI/analytics
  • 3+ years as an Architect
  • 3+ years of experience working on cloud platforms
  • Understanding of migration, DevOps, and ETL/ELT ingestion pipelines, with tools like DataStage, Informatica, and Matillion
  • Experience with data modeling
  • Experience with cloud data platforms (AWS, Azure, Google Cloud)
  • Project management skills and experience working with Scrum and Agile Development methods
  • Strong communication skills and the ability to guide clients and team members
  • People Leadership Skills
  • Design cutting-edge data solutions on Snowflake
  • Guide organizations through their data transformations
  • Lead architectural discussions and design exercises

AWS, Project Management, SQL, Agile, Cloud Computing, ETL, Snowflake, Azure, Data engineering, RDBMS, DevOps, Data modeling

Apply
🔥 Senior Product Data Analyst
Posted about 5 hours ago

📍 Brazil

💸 60,000 - 70,000 USD per year

🏢 Company: LawnStarter · 👥 101-250 · 💰 $10,500,000 over 5 years ago · Internet, Marketplace, Outdoors, Landscaping

  • 4+ years of experience in data & analytics
  • Experience with SQL and intermediate experience in Python or R.
  • Experience with Tableau and Metabase.
  • Empower the organization to be more data-driven.
  • Modeling & Analysis
  • Reporting
  • Analytics Engineering

Python, SQL, Data Analysis, ETL, Tableau, Reporting, Data visualization, Data modeling, A/B testing

Apply

📍 Philippines

💸 500 USD per month

  • At least 1 year of data analysis experience
  • At least 1 year of Agile Process Knowledge
  • Strong analytical and problem-solving skills
  • Proficient in Google Suite
  • Map all source-to-destination files to ensure precise data transformation and loading (see the sketch after this list).
  • Create, update, and manage the configuration files essential for the ETL platform.
  • Regularly adjust platform configurations to accommodate new data sources.
  • Perform preliminary analysis and troubleshooting for potential data issues, escalating only those that require engineering intervention.
  • Maintain comprehensive records of all configurations and mappings for auditing and future reference.
  • Collaborate with the engineering team to clarify requirements and ensure seamless escalation of issues.
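
As an illustration of the mapping work above, here is a minimal sketch of applying a source-to-destination column mapping from a configuration file; the config format, file names, and field names are assumptions, not the employer's actual platform.

```python
# Hypothetical sketch: apply a source-to-destination column mapping
# defined in a JSON config file. Config format and names are assumed.
import csv
import json


def load_mapping(config_path: str) -> dict:
    """Read {'source_column': 'destination_column'} pairs from a config file."""
    with open(config_path) as f:
        config = json.load(f)
    return config["column_mapping"]


def transform_file(src_path: str, dst_path: str, mapping: dict) -> None:
    """Rename and keep only the mapped columns while copying a CSV."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        missing = set(mapping) - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"Source file lacks mapped columns: {missing}")
        writer = csv.DictWriter(dst, fieldnames=list(mapping.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({dest: row[src_col] for src_col, dest in mapping.items()})


if __name__ == "__main__":
    # Example config: {"column_mapping": {"cust_id": "customer_id", "amt": "amount"}}
    mapping = load_mapping("etl_config.json")
    transform_file("source.csv", "destination.csv", mapping)
```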

SQL, Agile, Data Analysis, ETL, Communication Skills, Analytical Skills, Problem Solving, Documentation, Troubleshooting, Data visualization, Data modeling, Data management

Posted about 7 hours ago
Apply
🔥 HR Data Engineer
Posted about 7 hours ago

📍 United States

💸 94,800 - 151,400 USD per year

🏢 Company: careers_gm

  • 5+ years of experience in an HR Data Engineer role, leading HR data engineering transformation and implementing data pipelines and data solutions in the People Analytics/HR domain
  • Very good understanding of HR data and HR employee lifecycle processes (talent acquisition, talent development, workforce planning, engagement, employee listening, external benchmarking, etc.)
  • Very good understanding of HCM data architecture, models and data pipelines and experience designing and implementing data integrations and ETLs with Workday (RaaS, APIs)
  • Experience designing and automating data and analytics solutions that can provide insights and recommendations at scale
  • Proficiency in SQL, R/Python and ETL tools
  • Deep expertise in modern data platforms (particularly Databricks) and end-to-end data architecture (DLT Streaming Pipelines, Workflows, Notebooks, DeltaLake, Unity Catalog)
  • Experience with different authentication (Basic Auth, Oauth, etc.) and encryption methods and tools (GPG, Voltage, etc.)
  • Very strong data analytics skills and ability to leverage multiple internal and external data sources to enable data-driven insights and inform strategic talent decisions
  • Knowledge of compliance and regulatory requirements associated with data management
  • Experience working in environments requiring strict confidentiality and handling of sensitive data
  • Great communication skills and ability to explain complex technical concepts to non-technical stakeholders.
  • Degree with quantitative focus (e.g., Mathematics, Statistics) and/or degree in Human Resources is a plus
  • Design, develop, and maintain ETL/ELT processes for HR data from multiple systems, including Workday, to empower data-driven decision-making (see the extraction sketch after this list)
  • Drive implementation of robust HR data models and pipelines optimized for reporting and analytics, ensuring data quality, reliability, and security for on-prem and Azure cloud solutions.
  • Develop pipelines and testing automation to ensure HR data quality and integrity across multiple data sources
  • Collaborate with People Analytics and HR business partners to understand data requirements and deliver reliable solutions. Collaborate with technical teams to build the best-in-class data environment and technology stack for People Analytics teams.
  • Ensure data integrity, quality, consistency, security, and compliance (e.g., GDPR, CCPA, HIPAA where applicable).
  • Design and implement secure processes for handling sensitive information in our data tech stack while maintaining appropriate access controls and confidentiality
  • Automate manual HR reporting and improve data accessibility through scalable data pipelines across the entire HR employee lifecycle
  • Troubleshoot and resolve data-related issues quickly and efficiently.
  • Contribute to HR tech stack evaluations and migrations, especially around data capabilities and API integrations.
  • Incorporate external data sources into internal datasets for comprehensive analysis
  • Manage and optimize platform architecture including Databricks environment configuration and performance optimization
  • Stay up to date with emerging trends and advancements in data engineering – both technically and in the HR and People Analytics/sciences domain
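
To make the Workday extraction bullet concrete, here is a hedged sketch of pulling a RaaS custom report over HTTPS with basic authentication; the endpoint shape, report name, and credential handling are illustrative assumptions, and a production pipeline would likely use OAuth and a secrets manager.

```python
# Hedged sketch: pull a Workday RaaS custom report as JSON and land it
# as rows for downstream modeling. Endpoint and names are hypothetical.
import os

import requests

# RaaS endpoints are tenant-specific; this URL shape is illustrative only.
RAAS_URL = "https://wd2-impl-services1.workday.com/ccx/service/customreport2/acme/hr_admin/Headcount_Report"


def fetch_report() -> list[dict]:
    resp = requests.get(
        RAAS_URL,
        params={"format": "json"},
        auth=(os.environ["WD_USER"], os.environ["WD_PASSWORD"]),  # prefer a secrets manager
        timeout=60,
    )
    resp.raise_for_status()
    # RaaS JSON payloads conventionally nest rows under "Report_Entry".
    return resp.json().get("Report_Entry", [])


if __name__ == "__main__":
    rows = fetch_report()
    print(f"Fetched {len(rows)} report rows")
```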

AWS, Python, SQL, Apache Airflow, ETL, API testing, Azure, Data engineering, NoSQL, RESTful APIs, Data visualization, Data modeling, Data analytics, Data management

Apply

📍 Mexico, Colombia, Argentina, Peru

🔍 Software Development

🏢 Company: DaCodes

  • Proven experience with AWS-native architectures for data ingestion and orchestration.
  • Advanced command of tools and services for large-scale data processing (Spark, Lambda, Kinesis).
  • Solid knowledge of open-table data modeling and of Data Lake and Data Warehouse architectures.
  • Strong Python or Scala programming skills for ETL/ELT and transformations.
  • Experience with data quality assurance and continuous monitoring (Great Expectations, Datadog).
  • Build batch or micro-batch pipelines (SLA ≤ 24 hours) to ingest events and profiles from S3/Kinesis into data stores (Data Warehouse).
  • Automate campaign-specific DAGs with AWS Step Functions or Managed Airflow, provisioned at campaign launch and torn down once the campaign ends.
  • Model data in partitioned open-table formats on S3 using technologies such as Iceberg, Hudi, or Delta, with per-campaign versioning.
  • Run ELT loads into Redshift Serverless, or queries in Athena/Trino, using snapshot and incremental patterns.
  • Develop data transformations with Glue Spark jobs or EMR on EKS for heavy processing, and use Lambda or Kinesis Data Analytics for lightweight enrichment.
  • Program in Python (PySpark, Pandas, boto3) or Scala for data processing.
  • Implement declarative data-quality tests with tools such as Great Expectations or Deequ, running daily during active campaigns (see the sketch after this list).
  • Manage infrastructure and code pipelines via GitHub Actions or CodePipeline, with alerts configured in CloudWatch or Datadog.
  • Ensure data security and governance with Lake Formation, column-level encryption, and compliance with regulations such as GDPR/CCPA.
  • Manage IAM roles under the principle of least privilege for temporary campaign pipelines.
  • Expose semantic models in Redshift/Athena to BI tools such as Looker (LookML, PDTs) or tools connected to Trino.
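
To illustrate the declarative data-quality bullet, here is a small hand-rolled stand-in for the kind of checks the listing attributes to Great Expectations or Deequ; the rule format, column names, and sample data are assumptions made for the example.

```python
# Hand-rolled stand-in for declarative data-quality checks in the spirit
# of Great Expectations / Deequ. Rules and column names are hypothetical.
import pandas as pd

# Declarative rule set: each rule names a column and a constraint.
RULES = [
    {"column": "event_id", "check": "not_null"},
    {"column": "event_id", "check": "unique"},
    {"column": "campaign_id", "check": "not_null"},
    {"column": "amount", "check": "min", "value": 0},
]


def run_checks(df: pd.DataFrame, rules: list) -> list[str]:
    """Evaluate each rule against the frame; return failure messages."""
    failures = []
    for rule in rules:
        col, check = rule["column"], rule["check"]
        if check == "not_null" and df[col].isna().any():
            failures.append(f"{col}: contains nulls")
        elif check == "unique" and df[col].duplicated().any():
            failures.append(f"{col}: contains duplicates")
        elif check == "min" and (df[col] < rule["value"]).any():
            failures.append(f"{col}: values below {rule['value']}")
    return failures


if __name__ == "__main__":
    frame = pd.DataFrame(
        {"event_id": [1, 2, 2], "campaign_id": ["a", "a", None], "amount": [10, -5, 3]}
    )
    for failure in run_checks(frame, RULES):
        print("FAILED:", failure)
```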

AWS, Python, SQL, DynamoDB, ETL, Data engineering, Redis, Pandas, Spark, CI/CD, Terraform, Scala, A/B testing

Posted about 8 hours ago
Apply
🔥 Director | Data Science
Posted about 11 hours ago

📍 United States

🧭 Full-Time

🔍 Healthcare

🏢 Company: Machinify · 👥 51-100 · 💰 $10,000,000 Series A over 6 years ago · Artificial Intelligence (AI), Business Intelligence, Predictive Analytics, SaaS, Machine Learning, Analytics

  • 10+ years of data science experience, with at least 5 years in a leadership role, including a leadership role at a start-up. Proven track record of managing data teams and delivering complex, high-impact products from concept to deployment
  • Strong knowledge of data privacy regulations and best practices in data security
  • Exceptional team management abilities, with experience in building and leading high-performing teams
  • Ability to think strategically and execute methodically
  • Ability to drive change and inspire a distributed team
  • Strong problem-solving skills and a data-driven mindset
  • Ability to communicate effectively, collaborating with diverse groups to solve complex problems
  • Provide direction and guidance to a team of Senior and Staff Data Scientists, enabling them to do their best work
  • Collaborate with the leadership team to define key technical and business metrics and objectives
  • Translate objectives into internal team priorities and assignments
  • Drive sprints and work with cross-functional stakeholders to appropriately prioritize various initiatives to improve customer metrics
  • Hire, mentor and develop team members
  • Foster a culture of innovation, collaboration, and continuous improvement
  • Communicate technical concepts and strategies to technical and non-technical stakeholders effectively
  • Own the success of various models in the field by continuously monitoring KPIs and initiating projects to improve quality.

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Keras, Machine Learning, MLFlow, Numpy, People Management, Cross-functional Team Leadership, Algorithms, Data engineering, Data science, Data Structures, Pandas, Spark, Tensorflow, Communication Skills, Problem Solving, Agile methodologies, RESTful APIs, Mentoring, Data visualization, Team management, Strategic thinking, Data modeling

Apply

📍 Philippines

🏢 Company: Anytime Mailbox · 👥 11-50 · Information Services, Email, Information Technology, Software

  • Bachelor's degree in Computer Science, Information Systems, Mathematics, Statistics, or a related field.
  • Proven working experience as a Data Analyst or in a similar role.
  • Technical expertise regarding data models, database design development, data mining, and segmentation techniques.
  • Proficient in using statistical packages (e.g., Excel, SPSS, SAS) for analyzing datasets.
  • Experience with reporting packages (e.g., Business Objects), databases (e.g., SQL), and programming (e.g., XML, JavaScript, ETL frameworks).
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Excellent verbal and written communication skills.
  • Familiarity with call center operations and metrics (e.g., average handle time, first call resolution) is a plus.
  • Proficiency in data visualization tools and software such as Excel, Google Sheets, PowerPoint, and Google Slides.
  • Proficiency in using automation and AI is preferred.
  • Strong knowledge of and experience with reporting tools and data visualization (e.g., Tableau, PowerBI).
  • High-level experience in methodologies and processes for managing large-scale databases.
  • Ability to work with stakeholders to assess potential risks and translate business requirements into non-technical terms.
  • Experience in managing master data, including creation, updates, and deletion.
  • High attention to detail and strong organizational skills.
  • Gather data from various sources including call logs, customer feedback, and operational metrics.
  • Conduct detailed data analysis to identify key performance indicators (KPIs), trends, and outliers (see the sketch after this list).
  • Gather, clean, and analyze structured and unstructured data from various sources.
  • Identify trends, correlations, and patterns to support strategic decision-making.
  • Utilize statistical techniques to interpret data and generate business insights.
  • Develop and maintain comprehensive dashboards and reports to visualize key performance indicators (KPIs) and other relevant metrics using tools such as Excel and Google Sheets.
  • Prepare monthly management reports and presentation materials on various topics such as business continuity, space strategy, and expense management using PowerPoint and Google Slides.
  • Develop and implement data collection systems and other strategies to optimize statistical efficiency and quality.
  • Create daily, weekly, and monthly reports as needed, including customer reports, to track performance metrics and support decision-making.
  • Ensure the quality and accuracy of imported data, collaborating with quality assurance analysts when necessary.
  • Filter and clean data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems.
  • Work with management to prioritize business and information needs.
  • Support the data warehouse in identifying and revising reporting requirements.
  • Provide technical expertise in data storage structures, data mining, and data cleansing.
  • Ensure data accuracy, completeness, and consistency across different systems.
  • Perform data validation and collaborate with teams to resolve inconsistencies.
  • Establish and maintain data collection systems to enhance efficiency and reliability.
  • Identify, analyze, and interpret trends or patterns in complex data sets.
  • Locate and define new process improvement opportunities.
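
As a small illustration of the call-center KPI work above, here is a pandas sketch that computes average handle time and first-call resolution from a call log; the schema and sample values are hypothetical.

```python
# Hypothetical sketch: compute call-center KPIs (average handle time,
# first-call resolution) from a call log with assumed column names.
import pandas as pd

# Assumed call-log schema: one row per call.
calls = pd.DataFrame(
    {
        "agent": ["ana", "ana", "ben", "ben"],
        "handle_time_sec": [310, 245, 420, 280],
        "resolved_first_call": [True, False, True, True],
    }
)

kpis = calls.groupby("agent").agg(
    avg_handle_time_sec=("handle_time_sec", "mean"),
    first_call_resolution_rate=("resolved_first_call", "mean"),
    calls=("agent", "size"),
)

print(kpis)
```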

SQL, Data Analysis, Data Mining, ETL, Tableau, Analytical Skills, Reporting, Data visualization, Data modeling, PowerPoint

Posted 1 day ago
Apply

📍 Worldwide

🧭 Full-Time

🔍 Software Development

🏢 Company: Kit · 👥 11-50 · 💰 over 1 year ago · Education, Financial Services, Apps

  • Strong command of SQL, including DDL and DML.
  • Proficient in Python
  • Strong understanding of DBMS internals, including an appreciation for platform-specific nuances.
  • A willingness to work with Redshift and deeply understand its nuances.
  • Familiarity with our key tools (Redshift, Segment, dbt, GitHub)
  • 8+ years in data, with at least 3 years specializing in Data Engineering
  • Proven track record managing and optimizing OLAP clusters
  • Experience refactoring problematic data pipelines without disrupting business operations
  • History of implementing data quality frameworks and validation processes
  • Dive into our Redshift warehouse, dbt models, and workflows.
  • Evaluate the CRM data lifecycle, including source extraction, warehouse ingestion, transformation, and reverse ETL.
  • Refine and start implementing your design for source extraction and warehouse ingestion.
  • Complete the implementation of the CRM source extraction/ingestion project and use the learnings to refine your approach in preparation for other, similar initiatives including, but by no means limited to, web traffic events and product usage logs (see the sketch after this list).
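
For a taste of the warehouse-ingestion work above, here is a hedged sketch of the classic Redshift staging-table upsert (delete-then-insert) via psycopg2; the table names, S3 path, IAM role, and cluster endpoint are placeholders, and Kit's actual pipeline may be structured quite differently.

```python
# Hedged sketch: classic Redshift staging-table upsert (delete + insert).
# Table names, S3 path, IAM role, and endpoint are hypothetical.
import psycopg2

UPSERT_SQL = """
CREATE TEMP TABLE stage (LIKE crm.contacts);

COPY stage
FROM 's3://example-bucket/crm/contacts/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS JSON 'auto';

-- Delete rows that are about to be replaced, then insert the new versions.
DELETE FROM crm.contacts
USING stage
WHERE crm.contacts.contact_id = stage.contact_id;

INSERT INTO crm.contacts SELECT * FROM stage;
"""

# The connection context manager wraps the statements in one transaction
# and commits on success (rolls back on error).
with psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="use-a-secrets-manager",
) as conn:
    with conn.cursor() as cur:
        cur.execute(UPSERT_SQL)
```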

Python, SQL, ETL, Git, Data engineering, RDBMS, Data modeling, Data management

Posted 1 day ago
Apply

📍 Poland

💸 22,900 - 29,900 PLN per month

🔍 Threat Intelligence

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 2+ years of experience as a Data Engineer, working with large-scale distributed systems.
  • Proven expertise in Lakehouse architecture and Apache Hudi in production environments.
  • Experience with Airflow, Kafka, or streaming data pipelines.
  • Strong programming skills in Python and PySpark.
  • Comfortable working in a cloud-based environment (preferably AWS).
  • Design, build, and manage scalable data pipelines using Python, SQL, and PySpark.
  • Develop and maintain lakehouse architectures, with hands-on use of Apache Hudi for data versioning, upserts, and compaction (see the sketch after this list).
  • Implement efficient ETL/ELT processes for both batch and real-time data ingestion.
  • Optimize data storage and query performance across large datasets (partitioning, indexing, compaction).
  • Ensure data quality, governance, and lineage, integrating validation and monitoring into pipelines.
  • Work with cloud-native services (preferably AWS – S3, Athena, EMR) to support modern data workflows.
  • Collaborate closely with data scientists, analysts, and platform engineers to deliver reliable data infrastructure
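
To ground the Hudi bullet, here is a minimal PySpark upsert sketch; the table name, record key, partition field, and paths are assumptions, and the Hudi Spark bundle must be available on the Spark classpath.

```python
# Minimal PySpark + Apache Hudi upsert sketch. Table/key/path names are
# hypothetical; requires the Hudi Spark bundle on the Spark classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hudi-upsert-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

df = spark.createDataFrame(
    [(1, "login", "2024-01-01T00:00:00Z"), (2, "click", "2024-01-01T00:01:00Z")],
    ["event_id", "event_type", "ts"],
)

hudi_options = {
    "hoodie.table.name": "events",
    "hoodie.datasource.write.recordkey.field": "event_id",
    "hoodie.datasource.write.precombine.field": "ts",  # newest version wins on upsert
    "hoodie.datasource.write.partitionpath.field": "event_type",
    "hoodie.datasource.write.operation": "upsert",
}

(
    df.write.format("hudi")
    .options(**hudi_options)
    .mode("append")  # "append" is how Hudi upserts are issued
    .save("s3a://example-bucket/lake/events")
)
```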

AWS, Python, SQL, Cloud Computing, ETL, Kafka, Airflow, Data engineering, CI/CD, Terraform

Posted 1 day ago
Apply
Showing 10 of 712

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • Lack of ties to the office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient tools for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.