Full-Stack Developer Jobs

Spark
300 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

📍 Mexico, Colombia, Argentina, Peru

🔍 Software Development

🏢 Company: DaCodes

  • Proven experience with AWS-native architectures for data ingestion and orchestration.
  • Advanced command of tools and services for large-scale data processing (Spark, Lambda, Kinesis).
  • Solid knowledge of open-table data modeling and of Data Lake and Data Warehouse architectures.
  • Strong Python or Scala programming skills for ETL/ELT and transformations.
  • Experience with data quality assurance and continuous monitoring (Great Expectations, Datadog).
  • Build batch or micro-batch pipelines (SLA ≤ 24 hours) that ingest events and profiles from S3/Kinesis into data stores (Data Warehouse).
  • Automate campaign-specific DAGs with AWS Step Functions or Managed Airflow, provisioned at campaign launch and torn down when the campaign ends.
  • Model data in partitioned open-table formats on S3 using technologies such as Iceberg, Hudi, or Delta, with per-campaign versioning (a sketch follows this list).
  • Run ELT loads into Redshift Serverless or queries in Athena/Trino using snapshot and incremental patterns.
  • Develop data transformations with Glue Spark jobs or EMR on EKS for heavy processing, and use Lambda or Kinesis Data Analytics for lightweight enrichment.
  • Program in Python (PySpark, Pandas, boto3) or Scala for data processing.
  • Implement declarative data quality checks with tools such as Great Expectations or Deequ, executed daily during active campaigns.
  • Manage infrastructure and code pipelines through GitHub Actions or CodePipeline, with alerts configured in CloudWatch or Datadog.
  • Ensure data security and governance with Lake Formation, column-level encryption, and compliance with regulations such as GDPR/CCPA.
  • Manage IAM roles under the principle of least privilege for temporary campaign pipelines.
  • Expose semantic models in Redshift/Athena to BI tools such as Looker (LookML, PDTs) or tools connected via Trino.
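
Taken together, these duties describe a fairly standard lakehouse ingestion flow. As a minimal sketch of the S3-to-Iceberg step only, assuming hypothetical bucket, catalog, table, and column names (none of them come from the listing), a Glue or EMR PySpark job might look like this:

```python
# Minimal sketch, not the employer's actual pipeline: bucket, catalog, table,
# and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("campaign-events-ingest")
    # Register an Iceberg catalog backed by AWS Glue (assumes the
    # iceberg-spark-runtime and AWS bundle jars are on the classpath).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Batch ingest of raw campaign events previously landed in S3.
events = (
    spark.read.json("s3://example-raw-bucket/campaign-events/")
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # JSON reads as string
)

# Snapshot-style write into a partitioned open-table format. Partitioning by
# campaign keeps per-campaign versioning and teardown cheap; incremental runs
# would .append() to an existing table instead of replacing it.
(
    events.writeTo("lake.analytics.campaign_events")
    .partitionedBy(F.col("campaign_id"), F.days(F.col("event_ts")))
    .createOrReplace()
)
```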

AWS, Python, SQL, DynamoDB, ETL, Data engineering, Redis, Pandas, Spark, CI/CD, Terraform, Scala, A/B testing

Posted about 8 hours ago
Apply
🔥 Director | Data Science
Posted about 11 hours ago

📍 United States

🧭 Full-Time

🔍 Healthcare

🏢 Company: Machinify 👥 51-100 💰 $10,000,000 Series A over 6 years ago · Artificial Intelligence (AI), Business Intelligence, Predictive Analytics, SaaS, Machine Learning, Analytics

  • 10+ years of data science experience, with at least 5 years in a leadership role, including a leadership role at a start-up. Proven track record of managing data teams and delivering complex, high-impact products from concept to deployment
  • Strong knowledge of data privacy regulations and best practices in data security
  • Exceptional team management abilities, with experience in building and leading high-performing teams
  • Ability to think strategically and execute methodically
  • Ability to drive change and inspire a distributed team
  • Strong problem-solving skills and a data-driven mindset
  • Ability to communicate effectively, collaborating with diverse groups to solve complex problems
  • Provide direction and guidance to a team of Senior and Staff Data Scientists, enabling them to do their best work
  • Collaborate with the leadership team to define key technical and business metrics and objectives
  • Translate objectives into internal team priorities and assignments
  • Drive sprints and work with cross-functional stakeholders to appropriately prioritize various initiatives to improve customer metrics
  • Hire, mentor and develop team members
  • Foster a culture of innovation, collaboration, and continuous improvement
  • Communicate technical concepts and strategies to technical and non-technical stakeholders effectively
  • Own the success of various models in the field by continuously monitoring KPIs and initiating projects to improve quality.

AWS, Leadership, Python, SQL, Apache Airflow, Cloud Computing, Data Analysis, ETL, Keras, Machine Learning, MLFlow, Numpy, People Management, Cross-functional Team Leadership, Algorithms, Data engineering, Data science, Data Structures, Pandas, Spark, Tensorflow, Communication Skills, Problem Solving, Agile methodologies, RESTful APIs, Mentoring, Data visualization, Team management, Strategic thinking, Data modeling

Apply
🔥 Software Engineer 2
Posted 1 day ago

📍 UK

🔍 Cybersecurity

🏢 Company: Abnormal👥 501-1000💰 $250,000,000 Series D 10 months agoArtificial Intelligence (AI)EmailInformation TechnologyCyber SecurityNetwork Security

  • Streaming data systems: using Kafka, Spark, Map/Reduce, or similar to process large data sets
  • Experience with building and operating distributed systems and services at a high scale (~billions of transactions each day)
  • Working with external party APIs
  • 3-5 years of overall software engineering experience
  • Strong sense of best practices in developing software
  • Build out streaming infrastructure for our data integration platform
  • Capture data from Slack, Teams, and other streaming data platforms for processing within our Data Ingestion Platform (DIP); see the consumer sketch after this list
  • Work to integrate customers into the new streaming infrastructure, migrating from the older polling model where necessary
  • Work with Product Managers, Designers, and the Account Takeover (ATO) Detection team on product requirements and frontend implementation
  • Partner with our ATO Detection team
  • Understand the workflows and processes of the ATO Detection team. Be an effective liaison between ATO Infrastructure and ATO Detection: understand and represent ATO Detection team needs, and convert those needs into ATO Infrastructure team deliverables
  • Help build our group through excellent interview practices
  • Be a talent magnet: someone who, through the interview process, demonstrates their own strengths in a way that attracts candidates to Abnormal and to the ATO team and ensures that we close the candidates we want to close
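
As a rough illustration of the streaming-capture work mentioned above, a minimal Kafka consumer in the spirit of this role might look like the sketch below; the topic name, consumer group, and handler are hypothetical, and the listing names Kafka only as one representative technology.

```python
# Hypothetical sketch: topic, group id, and processing logic are invented for
# illustration; they are not Abnormal's actual infrastructure.
import json

from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "dip-slack-ingest",      # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,         # commit only after successful handling
})
consumer.subscribe(["slack-events"])     # hypothetical topic

def process(event: dict) -> None:
    """Placeholder for normalization/enrichment before handoff to the DIP."""
    print(event.get("type"))

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() == KafkaError._PARTITION_EOF:
                continue                 # end of a partition; keep polling
            raise RuntimeError(msg.error())
        process(json.loads(msg.value()))
        consumer.commit(message=msg, asynchronous=False)  # at-least-once
finally:
    consumer.close()
```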

Backend Development, Python, Software Development, Cybersecurity, Apache Kafka, API testing, Spark, Communication Skills, CI/CD, RESTful APIs, DevOps, Microservices, Software Engineering

Apply

📍 Romania

🧭 Full-Time

🔍 Software Development

🏢 Company: Plain Concepts 👥 251-500 · Consulting, Apps, Mobile Apps, Information Technology, Mobile

  • At least 3 years of experience as a Delivery Manager, Engineering Manager, or similar role in software, data-intensive or analytics projects.
  • Proven experience managing client relationships and navigating stakeholder expectations.
  • Strong technical background in Data Engineering (e.g., Python, Spark, SQL) and Cloud Data Platforms (e.g., Azure Data Services, AWS, or similar).
  • Solid understanding of scalable software and data architectures, CI/CD practices for data pipelines, and cloud-native data solutions.
  • Experience with data pipelines, sensor integration, edge computing, or real-time analytics is a big plus.
  • Ability to read, write, and discuss technical documentation with confidence.
  • Strong analytical and consultative skills to identify impactful opportunities.
  • Agile mindset, always focused on delivering real value fast.
  • Conflict resolution skills and a proactive approach to identifying and mitigating risks.
  • Understanding the business and technical objectives of data-driven projects.
  • Leading multidisciplinary teams to deliver scalable and robust software and data solutions on time and within budget.
  • Maintaining proactive and transparent communication with clients, helping them understand the impact of data products.
  • Supporting the team during key client interactions and solution presentations.
  • Designing scalable architectures for data ingestion, processing, and analytics.
  • Collaborating with data engineers, analysts, and data scientists to align solutions with client needs.
  • Ensuring the quality and scalability of data solutions and deliverables across cloud environments.
  • Analyzing system performance and recommending improvements using data-driven insights.
  • Providing hands-on technical guidance and mentorship to your team and clients when needed.

AWS, Python, SQL, Agile, Cloud Computing, Azure, Data engineering, Spark, Communication Skills, CI/CD, Client relationship management, Team management, Stakeholder management, Data analytics

Posted 2 days ago
Apply

📍 United States

🧭 Full-Time

💸 180,000 - 220,000 USD per year

🔍 Software Development

🏢 Company: Prepared 👥 51-100 💰 $27,000,000 Series B 8 months ago · Enterprise Software, Public Safety

  • 5+ years of experience in data engineering, software engineering with a data focus, data science, or a related role
  • Knowledge of designing data pipelines from a variety of sources (e.g., streaming, flat files, APIs)
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL)
  • Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming, Flink, Pulsar, Redpanda)
  • Strong programming skills in common data-focused languages (e.g., Python, Scala)
  • Experience with data pipeline and workflow management tools (e.g., Apache Airflow, Prefect, Temporal); a skeletal DAG sketch follows this list
  • Familiarity with AWS-based data solutions
  • Strong understanding of data warehousing concepts and technologies (Snowflake)
  • Experience documenting data dependency maps and data lineage
  • Strong communication and collaboration skills
  • Ability to work independently and take initiative
  • Proficiency in containerization and orchestration tools (e.g., Docker, Kubernetes)
  • Design, implement, and maintain scalable data pipelines and infrastructure
  • Collaborate with software engineers, product managers, customer success managers, and others across the business to understand data requirements
  • Optimize and manage our data storage solutions
  • Ensure data quality, reliability, and security across the data lifecycle
  • Develop and maintain ETL processes and frameworks
  • Work with stakeholders to define data availability SLAs
  • Create and manage data models to support business intelligence and analytics
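
To make the workflow-management requirement concrete, here is a skeletal Airflow DAG; the dag_id, schedule, and task callables are hypothetical placeholders rather than anything from the listing:

```python
# Skeletal sketch only: dag_id, schedule, and the extract/load callables are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Placeholder: pull a day's records from a source (stream, flat file, API)."""

def load():
    """Placeholder: write transformed records to the warehouse."""

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```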

AWS, Docker, PostgreSQL, Python, SQL, Apache Airflow, ETL, Kubernetes, Snowflake, Apache Kafka, Data engineering, Spark, Scala, Data modeling

Posted 2 days ago
Apply

📍 United States

🧭 Full-Time

🔍 Software Development

  • 7+ years of experience in Java development (Kotlin preferred), plus data engineering and modeling of complex data
  • Strong experience in SQL, data modeling, and manipulating and extracting large data sets.
  • Hands-on experience working with data warehouse technologies.
  • Experience building high-quality APIs and working with microservices (Spring Boot, REST).
  • Experience with cloud infrastructure and containerization (Docker, Kubernetes).
  • Proficiency with Git, CI/CD pipelines, and build tools (Gradle preferred).
  • Work with your engineering squad to design and build a robust platform that will handle terabytes of real-time and batch data flowing through internal and external systems.
  • Build high volume and low latency services that are reliable at scale.
  • Create and manage ETL/ELT workflows that transform our billions of raw data points daily into quickly accessible information across our databases and data warehouses
  • Develop big data solutions using commercial and open-source frameworks.
  • Collaborate with and explain complex technical issues to your technical peers and non-technical stakeholders.

Backend Development, Docker, SQL, Cloud Computing, Design Patterns, ETL, Git, Java, Kafka, Kotlin, Kubernetes, Spring Boot, Algorithms, API testing, Data engineering, Data Structures, REST API, Spark, CI/CD, RESTful APIs, Microservices, Data modeling

Posted 2 days ago
Apply

📍 Texas, Denver, CO

💸 148,000 - 189,000 USD per year

🔍 SaaS

🏢 Company: Branch Metrics

  • 4+ years of relevant experience in data science, analytics, or related fields.
  • Degree in Statistics, Mathematics, Computer Science, or related field.
  • Proficiency with Python, SQL, Spark, Bazel, CLI (Bash/Zsh).
  • Expertise in Spark, Presto, Airflow, Docker, Kafka, Jupyter.
  • Strong knowledge of ML frameworks (scikit-learn, pandas, xgboost, lightgbm).
  • Experience deploying models to production on AWS infrastructure and familiarity with basic AWS services.
  • Advanced statistical knowledge (regression, A/B testing, Multi-Armed Bandits, time-series anomaly detection); a small A/B-test sketch follows this list.
  • Collaborate with stakeholders to identify data-driven business opportunities.
  • Perform data mining, analytics, and predictive modeling to optimize business outcomes.
  • Conduct extensive research and evaluate innovative approaches for new product initiatives.
  • Develop, deploy, and monitor custom models and algorithms.
  • Deliver end-to-end production-ready solutions through close collaboration with engineering and product teams.
  • Identify opportunities to measure and monitor key performance metrics, assessing the effectiveness of existing ML-based products.
  • Serve as a cross-functional advisor, proposing innovative solutions and guiding product and engineering teams toward the best approaches.
  • Anticipate and clearly articulate potential risks in ML-driven products.
  • Effectively integrate solutions into existing engineering infrastructure.
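
For the A/B-testing requirement flagged above, a minimal two-proportion z-test sketch looks like the following; the conversion counts are invented example numbers, not data from the listing:

```python
# Invented example numbers; not data from the listing.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 380]       # successes in variants A and B
exposures = [10_000, 9_800]    # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
# With alpha = 0.05, reject equal conversion rates when p < 0.05.
```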

AWS, Docker, Python, SQL, Bash, Kafka, Machine Learning, Airflow, Regression testing, Pandas, Spark, RESTful APIs, Time Management, A/B testing

Posted 2 days ago
Apply
🔥 Sr Data Scientist
Posted 3 days ago

📍 United States

🧭 Full-Time

💸 211,536 - 287,100 USD per year

🔍 Software Development

🏢 Company: jobs

  • SQL and Python programming to query and validate the accuracy of datasets
  • Design and develop workflow orchestration tools
  • Python scripting to develop statistical and machine learning models for classification
  • Use agile software development principles to design, plan, and structure deployment of software products
  • Develop machine learning models to segment customer behavior and identify market concentration and volatility using Python and Spark ML (see the segmentation sketch after this list)
  • Build KPIs (Key Performance Indicators) and metrics, validating them using statistical hypothesis testing
  • Expertise in Cloud Computing resources and maintaining data on cloud storage
  • Big Data processing for data cleaning
  • Deploy self-serving data visualization tools, automating, generating reports and consolidating visually on tableau dashboards
  • Develop data engineering pipelines and transformations
  • Lead, build and implement analytics functions for Honey features
  • Conduct impactful data analysis to improve customer experiences and inform product development
  • Collaborate with cross-functional support teams to build world-class products and design hypothesis-driven experiments
  • Gather and collate business performance and metrics to recommend improvements, automation, and data science directives for overall business performance
  • Present findings and recommendations to senior level/non-technical stakeholders
  • Maintain large datasets by performing batch scheduling and pipelining ETL operations
  • Perform ad-hoc exploratory analysis on datasets to generate insights and automate production ready solutions
  • Develop machine learning-based models to improve forecasting and predictive analytics
  • Implement innovative quantitative analyses, test new data wrangling techniques, and experiment with new visualization tools to deliver scalable analytics
  • Develop programming solutions utilizing tools and concepts such as Git, data structures, OOP, and network algorithms
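
As a minimal sketch of the Spark ML segmentation work referenced above, assuming a hypothetical feature table and columns (stand-ins, not the employer's data), clustering might be wired up like this:

```python
# Hypothetical table and feature names; a sketch, not the employer's model.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("customer-segments").getOrCreate()

customers = spark.table("analytics.customer_features")  # hypothetical table

# Assemble numeric behavioral features into a single vector column.
assembler = VectorAssembler(
    inputCols=["order_count", "avg_basket_usd", "days_since_last_order"],
    outputCol="features",
)
features = assembler.transform(customers)

# Cluster customers into k behavioral segments.
model = KMeans(k=5, seed=42, featuresCol="features").fit(features)
segments = model.transform(features)   # adds a 'prediction' cluster column
segments.groupBy("prediction").count().show()
```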

Python, SQL, Cloud Computing, Data Analysis, ETL, Git, Machine Learning, Numpy, Tableau, Algorithms, Data engineering, Data Structures, Pandas, Spark, Tensorflow, Agile methodologies, Data visualization, Data modeling

Apply
🔥 Staff Data Engineer
Posted 3 days ago

📍 United States

🧭 Full-Time

💸 160,000 - 230,000 USD per year

🔍 Daily Fantasy Sports

  • 7+ years of experience in a data engineering or data-oriented software engineering role, creating and shipping end-to-end data engineering pipelines.
  • 3+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following: SQL/NoSQL databases/warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
  • Replication/ELT services: Data Stream, Hevo, etc.
  • Data transformation services: Spark, Dataproc, etc.
  • Scripting languages: SQL, Python, Go.
  • Cloud platform services in GCP and analogous systems: Cloud Storage, Compute Engine, Cloud Functions, Kubernetes Engine, etc.
  • Data processing and messaging systems: Kafka, Pulsar, Flink.
  • Code version control: Git.
  • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer.
  • Monitoring and observability platforms: Prometheus, Grafana, ELK stack, Datadog.
  • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager.
  • Other platform tools such as Redis, FastAPI, and Streamlit.
  • Excellent organizational, communication, presentation, and collaboration skills, with experience working with technical and non-technical teams across the organization
  • Graduate degree in Computer Science, Mathematics, Informatics, Information Systems or other quantitative field
  • Enhance the capabilities of our existing Core Data Platform and develop new integrations with both internal and external APIs within the Data organization (a minimal API sketch follows this list).
  • Develop and maintain advanced data pipelines and transformation logic using Python and Go, ensuring efficient and reliable data processing.
  • Collaborate with Data Scientists and Data Science Engineers to support the needs of advanced ML development.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows using dbt.
  • Work closely with DevOps and Infrastructure teams to ensure the maturity and success of the Core Data platform.
  • Guide teams in implementing and maintaining comprehensive monitoring, alerting, and documentation practices, and coordinate with Engineering teams to ensure continuous feature availability.
  • Design and implement Infrastructure as Code (IaC) solutions to automate and streamline data infrastructure deployment, ensuring scalable, consistent configurations aligned with data engineering best practices.
  • Build and maintain CI/CD pipelines to automate the deployment of data solutions, ensuring robust testing, seamless integration, and adherence to best practices in version control, automation, and quality assurance.
  • Design and automate data governance workflows and tool integrations across complex environments, ensuring data integrity and protection throughout the data lifecycle.
  • Serve as a Staff Engineer within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Mentor and support junior engineers by providing guidance, coaching and educational opportunities
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.
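
To illustrate the API-exposure side of this role, here is a thin FastAPI sketch in the spirit of the Redis/FastAPI tooling the listing mentions; the route, payload shape, and scoring function are all hypothetical:

```python
# Hypothetical sketch: route, payload, and scoring logic are invented for
# illustration, not PrizePicks' actual service.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="projection-scores")

class Features(BaseModel):
    player_id: str
    points_line: float

def score(features: Features) -> float:
    """Placeholder for a real model lookup (e.g., Redis cache or model registry)."""
    return 0.5

@app.post("/score")
def score_endpoint(features: Features) -> dict:
    return {"player_id": features.player_id, "score": score(features)}

# Local run (assuming this file is app.py):
#   uvicorn app:app --reload
```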

Leadership, Python, SQL, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Airflow, Data engineering, Go, Postgres, REST API, Spark, CI/CD, Mentoring, DevOps, Terraform, Data visualization, Data modeling, Scripting

Apply
🔥 Staff Data Engineer
Posted 4 days ago

📍 United States, Canada

🧭 Full-Time

💸 158,000 - 239,000 USD per year

🔍 Software Development

🏢 Company: 1Password

  • 8+ years of professional software engineering experience.
  • 7+ years of technical engineering experience building data processing applications (batch and streaming), with hands-on coding.
  • In-depth, hands-on experience with extensible data modeling, query optimization, and development in Java, Scala, Python, and related technologies.
  • Experience in data modeling across external facing product insights and business processes, such as revenue/sales operations, finance, and marketing.
  • Experience with Big Data query engines such as Hive, Presto, Trino, Spark.
  • Experience with data stores such as Redshift, MySQL, Postgres, Snowflake, etc.
  • Experience using Realtime technologies like Apache Kafka, Kinesis, Flink, etc.
  • Experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP with extensive use of datastores like RDBMS, key-value stores, etc.
  • Experience leveraging distributed systems at scale, with systems knowledge spanning infrastructure from bare-metal hosts to containers to networking.
  • Design, develop, and automate large-scale, high-performance batch and streaming data processing systems to drive business growth and enhance product experience.
  • Build a data engineering strategy that supports a rapidly growing tech company and aligns product strategy priorities with internal business organizations' desire to leverage data for competitive advantage.
  • Build scalable data pipelines using best-in-class software engineering practices.
  • Develop optimal data models for storage and retrieval, meeting critical product and business requirements.
  • Establish and execute short and long-term architectural roadmaps in collaboration with Analytics, Data Platform, Business Systems, Engineering, Privacy and Security.
  • Lead efforts on continuous improvement to the efficiency and flexibility of the data, platform, and services.
  • Mentor Analytics & Data Engineers on best practices, standards and forward-looking approaches on building robust, extensible and reusable data solutions.
  • Influence and evangelize high standard of code quality, system reliability, and performance.

AWS, Python, SQL, ETL, GCP, Java, Kubernetes, MySQL, Snowflake, Algorithms, Apache Kafka, Azure, Data engineering, Data Structures, Postgres, RDBMS, Spark, CI/CD, RESTful APIs, Mentoring, Scala, Data visualization, Data modeling, Software Engineering, Data analytics, Data management

Apply
Showing 10 of 300

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to move beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional approaches to work, making work more convenient, efficient, and accessible for professionals worldwide.

Why Do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don’t have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.