Full-Stack Developer Jobs

Scala
125 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

📍 Mexico, Colombia, Argentina, Peru

🔍 Software Development

🏢 Company: DaCodes

  • Proven experience with AWS-native architectures for data ingestion and orchestration.
  • Advanced command of tools and services for large-scale data processing (Spark, Lambda, Kinesis).
  • Solid knowledge of open-table data modeling and Data Lake and Data Warehouse architectures.
  • Proficiency in Python or Scala programming for ETL/ELT and transformations.
  • Experience with data quality assurance and continuous monitoring (Great Expectations, Datadog).
  • Build batch or micro-batch pipelines (SLA ≤ 24 hours) to ingest events and profiles from S3/Kinesis into data stores (Data Warehouse).
  • Automate campaign-specific DAGs with AWS Step Functions or Managed Airflow, provisioned at campaign kickoff and torn down when the campaign ends.
  • Model data in partitioned open-table formats on S3 using technologies such as Iceberg, Hudi, or Delta, with per-campaign versioning.
  • Perform ELT loads into Redshift Serverless or queries in Athena/Trino using snapshot and incremental patterns.
  • Develop data transformations with Glue Spark jobs or EMR on EKS for heavy processing, and use Lambda or Kinesis Data Analytics for lightweight enrichment.
  • Program in Python (PySpark, Pandas, boto3) or Scala for data processing.
  • Implement declarative data quality checks with tools such as Great Expectations or Deequ, run daily while campaigns are active.
  • Manage infrastructure and code pipelines with GitHub Actions or CodePipeline, with alerts configured in CloudWatch or Datadog.
  • Ensure data security and governance with Lake Formation, column-level encryption, and compliance with regulations such as GDPR/CCPA.
  • Manage IAM roles under the principle of least privilege for temporary campaign pipelines.
  • Expose semantic models in Redshift/Athena for BI tools such as Looker (LookML, PDTs) or connected to Trino.
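For illustration only, here is a minimal Scala/Spark sketch of the kind of batch ingestion this listing describes: raw events land in S3 and are appended to a partitioned open-table store. The bucket paths, column names, and campaign identifier are hypothetical placeholders, and Iceberg or Hudi could stand in for Delta.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical batch ingestion job: read raw campaign events from S3 and
// append them to a partitioned open-table (Delta here) with a campaign tag.
object CampaignEventIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("campaign-event-ingest").getOrCreate()

    val campaignId = "campaign_2024_q3"                    // placeholder campaign version

    val events = spark.read
      .json("s3://example-raw-bucket/events/")             // raw event landing zone
      .withColumn("event_date", to_date(col("event_ts")))  // derive the partition column
      .withColumn("campaign_id", lit(campaignId))

    events.write
      .format("delta")                                     // Iceberg/Hudi would work similarly
      .mode("append")
      .partitionBy("campaign_id", "event_date")
      .save("s3://example-lake-bucket/events_delta/")

    spark.stop()
  }
}
```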

AWS, Python, SQL, DynamoDB, ETL, Data engineering, Redis, Pandas, Spark, CI/CD, Terraform, Scala, A/B testing

Posted about 8 hours ago
Apply

📍 United States of America

🏢 Company: IDEXX

  • Bachelor’s degree in Computer Science, Computer Engineering, Information Systems, Information Systems Engineering, or a related field plus 5 years of experience; or a Master’s degree in one of those fields plus 3 years of related professional experience.
  • Advanced SQL knowledge and experience working with relational databases, including Snowflake, Oracle, Redshift.
  • Experience with AWS or Azure cloud platforms
  • Experience with data pipeline and workflow scheduling tools: Apache Airflow, Informatica.
  • Experience with ETL/ELT tools and data processing techniques
  • Experience in database design, development, and modeling
  • 3 years of related professional experience with object-oriented languages: Python, Java, and Scala
  • Design and implement scalable, reliable distributed data processing frameworks and analytical infrastructure
  • Design metadata and schemas for assigned projects based on a logical model
  • Create scripts for physical data layout
  • Write scripts to load test data
  • Validate schema design
  • Develop and implement node cluster models for unstructured data storage and metadata
  • Design advanced level Structured Query Language (SQL), data definition language (DDL) and Python scripts
  • Define, design, and implement data management, storage, backup and recovery solutions
  • Design automated software deployment functionality
  • Monitor structural performance and utilization, identifying problems and implementing solutions
  • Lead the creation of standards, best practices and new processes for operational integration of new technology solutions
  • Ensure environments are compliant with defined standards and operational procedures
  • Implement measures to ensure data accuracy and accessibility, constantly monitoring and refining the performance of data management systems
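As a rough illustration of the schema-design and test-data validation work listed above, here is a small Scala/Spark sketch: an explicit schema acts as the table contract, test data is loaded against it, and basic checks are asserted. The table layout, column names, and file path are invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

// Hypothetical schema-validation script: declare the physical layout, load
// test data against it, and assert the basic contract holds.
object SchemaValidation {
  val orderSchema: StructType = StructType(Seq(
    StructField("order_id",    LongType,           nullable = false),
    StructField("customer_id", LongType,           nullable = false),
    StructField("amount",      DecimalType(12, 2), nullable = false),
    StructField("created_at",  TimestampType,      nullable = false)
  ))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("schema-validation").getOrCreate()

    // Load test data with the declared schema; malformed rows are dropped.
    val testData = spark.read
      .schema(orderSchema)
      .option("mode", "DROPMALFORMED")
      .csv("file:///tmp/test_orders.csv")

    // Basic validation: required keys are populated and at least one row loaded.
    val nullKeys = testData.filter("order_id IS NULL OR customer_id IS NULL").count()
    assert(nullKeys == 0, s"$nullKeys rows violate the NOT NULL contract")
    assert(testData.count() > 0, "test data set is empty")

    spark.stop()
  }
}
```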

AWS, Python, SQL, Apache Airflow, Cloud Computing, ETL, Java, Oracle, Snowflake, Azure, Data engineering, Scala, Data modeling, Data management

Posted 1 day ago
Apply

📍 United States

🧭 Full-Time

💸 185,500 - 293,750 USD per year

🔍 Software Development

  • Strong technical expertise in designing and building scalable ML infrastructure.
  • Experience with distributed systems and cloud-based ML platforms.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Deep understanding of ML workflows, including data pipelines, model training, and deployment.
  • Passion for innovation and eagerness to implement the latest advancements in ML infrastructure.
  • Strong problem-solving skills and ability to optimize complex systems for performance and reliability.
  • Collaborative mindset with excellent communication skills to work across teams.
  • Ability to thrive in a fast-paced, dynamic environment with evolving technical challenges.
  • Design, implement, and optimize distributed systems and infrastructure components to support large-scale machine learning workflows, including data ingestion, feature engineering, model training, and serving.
  • Develop and maintain frameworks, libraries, and tools that streamline the end-to-end machine learning lifecycle, from data preparation and experimentation to model deployment and monitoring.
  • Architect and implement highly available, fault-tolerant, and secure systems that meet the performance and scalability requirements of production machine learning workloads.
  • Collaborate with machine learning researchers and data scientists to understand their requirements and translate them into scalable and efficient software solutions.
  • Stay current with advancements in machine learning infrastructure, distributed computing, and cloud technologies, integrating them into our platform to drive innovation.
  • Mentor junior engineers, conduct code reviews, and uphold engineering best practices to ensure the delivery of high-quality software solutions.
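To make the end-to-end ML workflow above concrete, here is a short Scala sketch using Spark MLlib: feature engineering, model training, and persisting the fitted pipeline for a serving layer. The dataset, column names, and storage paths are assumptions made up for the example, not details from the listing.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

// Hypothetical training pipeline: encode features, fit a model, persist it.
object TrainingPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ml-training-sketch").getOrCreate()

    val raw = spark.read.parquet("s3://example-bucket/training_data/")

    val indexer   = new StringIndexer().setInputCol("country").setOutputCol("country_idx")
    val assembler = new VectorAssembler()
      .setInputCols(Array("country_idx", "age", "sessions_7d"))
      .setOutputCol("features")
    val lr = new LogisticRegression().setLabelCol("label").setFeaturesCol("features")

    val pipeline = new Pipeline().setStages(Array(indexer, assembler, lr))
    val model = pipeline.fit(raw)

    // Persist the fitted pipeline so a serving component can reload it.
    model.write.overwrite().save("s3://example-bucket/models/churn_lr/")

    spark.stop()
  }
}
```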

AWS, Docker, Python, Cloud Computing, Kubernetes, Machine Learning, Algorithms, Data engineering, Data science, CI/CD, RESTful APIs, Scala, Software Engineering

Posted 1 day ago
Apply

📍 LatAm

🧭 Contract

🏢 Company: Able · Rental, Property Management, Real Estate

  • 10+ years of data engineering experience with enterprise-scale systems
  • Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
  • Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
  • Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
  • Advanced SQL for data modeling and transformation
  • Strong programming skills in Python (or Scala)
  • Hands-on experience with data formats such as Parquet, Avro, and JSON
  • Familiarity with schema evolution, versioning, and backfilling strategies
  • Working knowledge of at least one major cloud platform: AWS (S3, Athena, Redshift, Glue Catalog, Step Functions), GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub), or Azure (Synapse, Data Factory, Azure Databricks)
  • Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
  • Consulting or client-facing experience with strong communication and leadership skills
  • Experience with data mesh architectures and domain-driven data design
  • Knowledge of metadata management, data cataloging, and lineage tracking tools
  • Shape large-scale data architecture vision and roadmap across client engagements
  • Establish governance, security frameworks, and regulatory compliance standards
  • Lead strategy around platform selection, integration, and scaling
  • Guide organizations in adopting data lakehouse and federated data models
  • Lead technical discovery sessions to understand client needs
  • Translate complex architectures into clear, actionable value for stakeholders
  • Build trusted advisor relationships and guide strategic decisions
  • Align architecture recommendations with business growth and goals
  • Design and implement modern data lakehouse architectures with Delta Lake and Databricks
  • Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred)
  • Leverage Delta Live Tables, Unity Catalog, and schema evolution features
  • Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
  • Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
  • Implement data quality monitoring, lineage tracking, and schema versioning
  • Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
  • Develop cost-optimized, scalable, and compliant data solutions
  • Design POCs and pilots to validate technical approaches
  • Translate business requirements into production-ready data systems
  • Define and track success metrics for platform and pipeline initiatives
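For illustration, a brief Scala sketch of the Delta Lake features named above (ACID merge, time travel, compaction with Z-ordering). It assumes a Spark session with the Delta extensions on the classpath; the table paths and join key are placeholders.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

// Hypothetical Delta maintenance job: upsert staged changes, read an older
// version, then compact and Z-order the table.
object DeltaMaintenanceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-maintenance").getOrCreate()

    val target  = DeltaTable.forPath(spark, "s3://example-lake/customers/")
    val updates = spark.read.parquet("s3://example-lake/staging/customer_updates/")

    // ACID upsert: update matching rows, insert the rest.
    target.as("t")
      .merge(updates.as("u"), "t.customer_id = u.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    // Time travel: read the table as it looked at an earlier version.
    val previous = spark.read.format("delta")
      .option("versionAsOf", 0)
      .load("s3://example-lake/customers/")
    println(s"rows at version 0: ${previous.count()}")

    // Compaction + Z-ordering keeps file sizes and data skipping healthy.
    spark.sql("OPTIMIZE delta.`s3://example-lake/customers/` ZORDER BY (customer_id)")

    spark.stop()
  }
}
```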

AWS, Python, SQL, Cloud Computing, ETL, GCP, Kafka, Airflow, Azure, Data engineering, Scala, Data modeling

Posted 1 day ago
Apply

📍 United States

🧭 Full-Time

💸 180,000 - 220,000 USD per year

🔍 Software Development

🏢 Company: Prepared 👥 51-100 💰 $27,000,000 Series B, 8 months ago · Enterprise Software, Public Safety

  • 5+ years of experience in data engineering, software engineering with a data focus, data science, or a related role
  • Knowledge of designing data pipelines from a variety of sources (e.g. streaming, flat files, APIs)
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL)
  • Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming, Flink, Pulsar, Redpanda)
  • Strong programming skills in common data-focused languages (e.g., Python, Scala)
  • Experience with data pipeline and workflow management tools (e.g., Apache Airflow, Prefect, Temporal)
  • Familiarity with AWS-based data solutions
  • Strong understanding of data warehousing concepts and technologies (Snowflake)
  • Experience documenting data dependency maps and data lineage
  • Strong communication and collaboration skills
  • Ability to work independently and take initiative
  • Proficiency in containerization and orchestration tools (e.g., Docker, Kubernetes)
  • Design, implement, and maintain scalable data pipelines and infrastructure
  • Collaborate with software engineers, product managers, customer success managers, and others across the business to understand data requirements
  • Optimize and manage our data storage solutions
  • Ensure data quality, reliability, and security across the data lifecycle
  • Develop and maintain ETL processes and frameworks
  • Work with stakeholders to define data availability SLAs
  • Create and manage data models to support business intelligence and analytics
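As a rough illustration of the real-time processing side of this role, here is a minimal Scala sketch using Spark Structured Streaming to consume a Kafka topic and land it in object storage. The broker address, topic name, and sink paths are invented placeholders; Flink or another framework would be an equally valid choice.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical streaming ingest: read a Kafka topic and sink it to Parquet
// with checkpointing for exactly-once file output.
object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-streaming-sketch").getOrCreate()

    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "call-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS payload")
      .withColumn("ingested_at", current_timestamp())

    val query = stream.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/call_events/")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/call_events/")
      .start()

    query.awaitTermination()
  }
}
```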

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kubernetes, Snowflake, Apache Kafka, Data engineering, Spark, Scala, Data modeling

Posted 2 days ago
Apply

📍 United Kingdom, Denmark

🧭 Full-Time

🔍 Blockchain

🏢 Company: Chainalysis

  • Extensive experience (8+ years) in software engineering, with a strong focus on blockchain and cryptocurrency technologies and protocols.
  • Proven track record of designing and implementing large-scale, distributed systems in a cloud environment (AWS preferred).
  • Deep understanding and practical experience with multiple blockchain protocols (e.g., Ethereum, Bitcoin, Solana, Optimistic rollups, ZK rollups).
  • Demonstrated ability to lead and mentor engineering teams, driving technical excellence and innovation.
  • Strong architectural skills, with the ability to design and implement scalable and reliable data systems.
  • Excellent communication and collaboration skills, with the ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
  • A proven history of successfully delivering large-scale projects.
  • Provide architectural guidance and technical leadership in the design and implementation of highly scalable, reliable, and performant data systems for ingesting and parsing cryptocurrency blockchain data.
  • Define and enforce best practices for software development, data engineering, and blockchain integration across the PA team.
  • Lead the evaluation and adoption of new blockchain technologies, tools, and methodologies to enhance the platform's capabilities.
  • Drive the long-term technical roadmap for the PA team, aligning with the company's strategic objectives and anticipating future industry trends.
  • Architect and implement solutions to significantly scale the collection of blockchain data, enabling faster and more efficient onboarding of new chains.
  • Identify and resolve performance bottlenecks, ensuring optimal efficiency and reliability of data ingestion and processing pipelines.
  • Design and implement robust monitoring and alerting systems to ensure the health and stability of production services.
  • Serve as a subject matter expert on cryptocurrency and blockchain technologies, providing guidance and mentorship to team members and other stakeholders.
  • Conduct in-depth research and analysis of emerging blockchain protocols and technologies, evaluating their potential impact on Chainalysis products and services.
  • Represent the PA team in cross-functional technical discussions and initiatives, fostering collaboration and knowledge sharing across the organization.
  • Collaborate closely with product management, data science, and other engineering teams to define and deliver innovative solutions that meet customer needs.
  • Mentor and guide junior and mid-level engineers, fostering their technical growth and development.
  • Lead technical design reviews and code reviews, ensuring high-quality and maintainable code.
  • Support production services including debugging and maintenance, and create strategies to reduce production issues.
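As a toy illustration of blockchain data ingestion, here is a Scala sketch that polls an Ethereum JSON-RPC endpoint for the latest block height using only the JDK HTTP client. The node URL is a placeholder, and a production collector would use a typed JSON codec, retries, and batching rather than a regex over the response body.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Hypothetical poller: ask an Ethereum node for the latest block number
// via the standard eth_blockNumber JSON-RPC method.
object BlockHeightPoller {
  def main(args: Array[String]): Unit = {
    val client = HttpClient.newHttpClient()
    val body = """{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}"""

    val request = HttpRequest.newBuilder()
      .uri(URI.create("https://example-ethereum-node.invalid/"))   // placeholder node URL
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())

    // The node replies with a hex-encoded height, e.g. "result":"0x...".
    val hexHeight = """0x([0-9a-fA-F]+)""".r
      .findFirstMatchIn(response.body())
      .map(_.group(1))

    hexHeight match {
      case Some(hex) => println(s"latest block height: ${java.lang.Long.parseLong(hex, 16)}")
      case None      => println(s"unexpected response: ${response.body()}")
    }
  }
}
```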

AWS, Leadership, PostgreSQL, Blockchain, Cloud Computing, Ethereum, Java, Kafka, Kubernetes, Software Architecture, TypeScript, Data engineering, Communication Skills, CI/CD, Problem Solving, Mentoring, Terraform, Scala, Software Engineering

Posted 2 days ago
Apply

📍 Europe, Brazil

🧭 Full-Time

🏢 Company: Vigil 👥 1-10 💰 $1,300,000 Pre-seed, 4 months ago · SaaS, Information Technology, Collaboration, Software

  • Scala development skills and knowledge of the Scala ecosystem
  • Can show an understanding of pure functional programming
  • Good knowledge of at least one other programming language
  • Unit testing ability and understanding of how to structure testable code
  • Experience with CI/CD pipelines (CircleCI, Travis, Jenkins, etc)
  • Ability to build highly available, scalable and concurrent systems
  • Strong English communication skills, both written and verbal
  • Discuss and promote the implementation of new features
  • Listen to the customer and suggest feasibility options
  • As a team commit to goals, deadlines, and objectives
  • As a team design and define system architectures and contribute to technical decisions
  • Communicate your needs clearly and responsibly.
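To illustrate the pure functional, testable style this listing asks for, here is a small dependency-free Scala sketch: effects are pushed to the edge and the core logic is a total function over immutable data, so it can be unit-tested without mocks. The pricing domain is invented purely for the example.

```scala
// Hypothetical domain: a pure discount calculator that a unit test
// (ScalaTest, MUnit, ...) could exercise directly.
object PricingSketch {
  final case class Order(items: List[BigDecimal], couponCode: Option[String])

  sealed trait PricingError
  case object EmptyOrder    extends PricingError
  case object UnknownCoupon extends PricingError

  private val coupons: Map[String, BigDecimal] = Map("WELCOME10" -> BigDecimal("0.10"))

  // Pure: same input always yields the same output, no side effects.
  def total(order: Order): Either[PricingError, BigDecimal] =
    if (order.items.isEmpty) Left(EmptyOrder)
    else {
      val subtotal = order.items.sum
      order.couponCode match {
        case None       => Right(subtotal)
        case Some(code) =>
          coupons.get(code)
            .map(rate => subtotal * (BigDecimal(1) - rate))
            .toRight(UnknownCoupon)
      }
    }

  def main(args: Array[String]): Unit = {
    // The kind of assertions a unit test would make against the pure core.
    assert(total(Order(Nil, None)) == Left(EmptyOrder))
    assert(total(Order(List(BigDecimal(40), BigDecimal(60)), None)) == Right(BigDecimal(100)))
    assert(total(Order(List(BigDecimal(100)), Some("BOGUS"))) == Left(UnknownCoupon))
    println("pricing sketch checks passed")
  }
}
```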

AWS, Backend Development, Software Development, CI/CD, Agile methodologies, RESTful APIs, Scala, Debugging, English communication

Posted 2 days ago
Apply
🔥 Staff Data Engineer
Posted 4 days ago

📍 United States, Canada

🧭 Full-Time

💸 158,000 - 239,000 USD per year

🔍 Software Development

🏢 Company: 1Password

  • Minimum of 8+ years of professional software engineering experience.
  • Minimum of 7 years of technical engineering experience building data processing applications (batch and streaming), with hands-on coding in languages such as Java, Scala, or Python.
  • In-depth, hands-on experience with extensible data modeling, query optimization, and development in Java, Scala, Python, and related technologies.
  • Experience in data modeling across external facing product insights and business processes, such as revenue/sales operations, finance, and marketing.
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark.
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using Realtime technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge spanning infrastructure hardware and resources, from bare-metal hosts to containers to networking.
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience.
  • Build a data engineering strategy that supports a rapidly growing tech company and aligns with priorities across our product strategy and internal business organizations’ desire to leverage data for competitive advantage.
  • Build scalable data pipelines using best-in-class software engineering practices.
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements.
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security.
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services.
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions.
  • Influence and evangelize high standard of code quality, system reliability, and performance.
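For illustration of the batch modeling work described above, here is a short Scala/Spark sketch that rolls raw subscription events up into a monthly revenue fact table. The event schema, metrics, and warehouse paths are assumptions made up for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical rollup job: model raw events as a typed Dataset and
// aggregate them into a business-facing revenue mart.
object RevenueRollup {
  final case class SubscriptionEvent(accountId: Long, plan: String,
                                     amountUsd: Double, eventDate: java.sql.Date)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("revenue-rollup").getOrCreate()
    import spark.implicits._

    val events = spark.read.parquet("s3://example-warehouse/raw/subscription_events/")
      .as[SubscriptionEvent]

    val monthlyRevenue = events.toDF()
      .withColumn("month", date_trunc("month", col("eventDate")))
      .groupBy("month", "plan")
      .agg(sum("amountUsd").as("revenue_usd"),
           countDistinct("accountId").as("active_accounts"))

    monthlyRevenue.write
      .mode("overwrite")
      .partitionBy("month")
      .parquet("s3://example-warehouse/marts/monthly_revenue/")

    spark.stop()
  }
}
```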

AWS, Python, SQL, ETL, GCP, Java, Kubernetes, MySQL, Snowflake, Algorithms, Apache Kafka, Azure, Data engineering, Data Structures, Postgres, RDBMS, Spark, CI/CD, RESTful APIs, Mentoring, Scala, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Posted 4 days ago
Apply

📍 United States

🧭 Full-Time

💸 140,000 - 185,000 USD per year

🔍 Software Development

🏢 Company: Smartsheet 👥 1001-5000 💰 $3,200,000,000 Post-IPO Debt, 8 months ago 🫂 Last layoff over 2 years ago · SaaS, Enterprise, Software

  • 5+ years software development experience building highly scalable, highly available applications
  • 5+ years of programming experience with backend technologies such as Python, Java, or Scala
  • 2+ years of experience with building and supporting data pipelines in databases such as Snowflake
  • 2+ years of experience with cloud technologies (AWS, Azure, etc.)
  • Experience developing, documenting, and supporting REST APIs
  • A degree in Computer Science, Engineering, or a related field or equivalent practical experience
  • Build scalable backend services for the next generation of applications at Smartsheet (Python, Java, Scala)
  • Build, support, and maintain graph databases including data pipelines, infrastructure deployment, and writing performant graph queries for APIs
  • Solve challenging distributed systems problems and work with modern cloud infrastructure (AWS)
  • Take part in code reviews and architectural discussions as you work with other software engineers and product managers
  • Take a leading role in designing key areas of scalable, performant systems
  • Be outspoken in suggesting operational improvements
  • Mentor junior engineers on code quality and other industry best practices
  • Forge a strong partnership with product management and other key areas of the business
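As a tiny illustration of the backend-service side of this role, here is a Scala sketch of a REST endpoint built only on the JDK's bundled HTTP server. A production service would use a proper framework, JSON serialization, and observability; the route and payload here are invented.

```scala
import java.net.InetSocketAddress
import com.sun.net.httpserver.{HttpExchange, HttpServer}

// Hypothetical health-check endpoint served by the JDK's built-in HttpServer.
object HealthService {
  def main(args: Array[String]): Unit = {
    val server = HttpServer.create(new InetSocketAddress(8080), 0)

    server.createContext("/api/health", (exchange: HttpExchange) => {
      val body = """{"status":"ok"}""".getBytes("UTF-8")
      exchange.getResponseHeaders.add("Content-Type", "application/json")
      exchange.sendResponseHeaders(200, body.length.toLong)
      val out = exchange.getResponseBody
      try out.write(body) finally out.close()
    })

    server.start()
    println("listening on http://localhost:8080/api/health")
  }
}
```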

AWS, Backend Development, Python, Software Development, SQL, Cloud Computing, Java, Snowflake, REST API, CI/CD, Linux, Microservices, Scala, Data modeling

Posted 4 days ago
Apply

📍 United States

🧭 Full-Time

💸 191,000 - 225,000 USD per year

🔍 Software Development

  • 5+ years of experience building and operating large-scale core backend distributed systems such as storage, data ingestion, backup and restore, and streaming.
  • Ability to own and dive deeply in a complex code base.
  • Experience maintaining, analyzing, and debugging production systems
  • Knack for writing clean, readable, testable, maintainable code.
  • Strong collaboration and communication skills in a remote-working environment.
  • Demonstrate strong ownership and consistently deliver in a timely manner.
  • Experience working in either Java, Scala or Python.
  • Build and operate the data ingestion system that enables various ways of accessing data at Airbnb, including ingesting DB data into the warehouse in various formats and frequencies, and streaming change data capture (CDC) in near real time.
  • Be hands-on (code, design, test) and collaborate with cross team partners (internal customers, dependencies and leadership) to deliver on multi-month projects in a timely fashion.
  • Raise operational standards by effectively and proactively identifying, debugging and fixing operational issues. Be part of the oncall rotation for the DBExports platform.
  • Mentor junior engineers on the team.
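For illustration, a short Scala/Spark sketch of a scheduled DB-export job of the kind described above: snapshot a relational table over JDBC and land it in the warehouse as date-partitioned Parquet. Connection details, table name, bounds, and paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical daily export: parallel JDBC snapshot of a table, written to
// the warehouse partitioned by snapshot date.
object DailyDbExport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-db-export").getOrCreate()

    val snapshot = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://example-db.invalid:3306/app")
      .option("dbtable", "users")
      .option("user", sys.env.getOrElse("DB_USER", "reader"))
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .option("numPartitions", "8")            // parallel fetch
      .option("partitionColumn", "id")
      .option("lowerBound", "0")
      .option("upperBound", "100000000")
      .load()
      .withColumn("ds", current_date())         // snapshot date partition

    snapshot.write
      .mode("overwrite")
      .partitionBy("ds")
      .parquet("s3://example-warehouse/exports/users/")

    spark.stop()
  }
}
```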

AWS, Backend Development, Python, SQL, Cloud Computing, GCP, Java, Kafka, Kubernetes, Algorithms, Data engineering, Data Structures, Spark, Communication Skills, Analytical Skills, Collaboration, CI/CD, Problem Solving, RESTful APIs, Mentoring, Documentation, Microservices, Scala, Software Engineering, Debugging

Posted 4 days ago
Apply
Showing 10 of 125 jobs

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.