Full-Stack Developer Jobs

453 jobs found.

Set alerts to receive daily emails with new job openings that match your preferences.

Apply

πŸ“ LATAM

πŸ” Telecommunications

🏒 Company: NearsureπŸ‘₯ 501-1000Staffing AgencyOutsourcingSoftware

  • 5+ Years of experience in Data Architecture, including designing robust data models (e.g., Splunk CIM, star/snowflake schemas) and data governance frameworks tailored to large-scale, high-volume telemetry data (preferably within telecommunications).
  • 3+ Years of hands-on experience implementing observability solutions in at least two of the following platforms: Splunk (including Splunk Search Processing Language (SPL) and the Common Information Model (CIM)), ELK Stack (Elasticsearch, complex Logstash pipelines, Kibana dashboards), Grafana, and Prometheus.
  • 2+ Years of strong scripting skills (e.g., Python, Bash, PowerShell) for automation of telemetry system deployment, configuration management, and operational tasks.
  • Experience designing and implementing observability solutions with audit trails for user/system activities.
  • Familiarity with distributed tracing concepts and tools (e.g., OpenTelemetry, Grafana Tempo, Jaeger).
  • Understanding of telco network architectures (4G/LTE, 5G) and common data sources (CDRs, IPDRs, signaling data).
  • Proven experience designing scalable observability systems in telco or high-throughput environments.
  • Knowledge of security best practices for telemetry platforms (data encryption, RBAC).
  • Experience with cloud environments (AWS, Azure or GCP) and native cloud monitoring tools (CloudWatch, Azure Monitor or Google Operations Suite).
  • Experience designing and implementing data warehousing or data lake solutions for high-volume telemetry data.
  • Design, implement, and manage telemetry systems to enhance telecommunications operations.
  • Develop and optimize telemetry and observability platforms using tools like Splunk, ELK Stack, Grafana, and distributed tracing solutions.
  • Ensure network performance, service assurance, and security through advanced telemetry strategies.

AWS, Python, Bash, Cloud Computing, GCP, Azure, Grafana, CI/CD, Data modeling

Posted about 3 hours ago
Apply
🔥 DevOps Associate
Posted about 4 hours ago

📍 United States

🏢 Company: Interapt 👥 101-250, Web Development, Software

  • 2+ years of experience in a DevOps or Cloud Engineering role
  • Proficiency with AWS services required (EC2, S3, IAM, CloudFormation, etc.)
  • Experience with CI/CD tools
  • Familiarity with Infrastructure as Code (IaC) and containerization (Docker, Kubernetes)
  • Experience using scripting tools (Python, Bash, or similar)
  • Excellent communication and organizational skills for client-facing work
  • Passion for learning and applying IT risk, controls, and compliance concepts
  • Exposure to or interest in SOX compliance, NIST, or COSO frameworks
  • Experience working in regulated or high-growth environments (e.g., fintech, healthcare, pre-IPO)
  • AWS certification (Cloud Practitioner, Solutions Architect Associate, etc.)
  • Collaborate with risk consultants and IT auditors to assess and strengthen technology control environments
  • Automate deployment, monitoring, and configuration management across client environments (primarily AWS)
  • Support the design and implementation of DevOps pipelines with an eye on compliance
  • Participate in system reviews to identify gaps in infrastructure security, change management, and access controls
  • Assist clients in prep for IPO or audit readiness by aligning their DevOps practices with governance frameworks
  • Build scripts/tools to help automate evidence collection or control monitoring processes
  • Act as a technical SME during client meetings and documentation sessions

AWS, Docker, Python, Bash, Kubernetes, CI/CD, DevOps, Compliance, Risk Management, Scripting

Apply
🔥 DevSecOps Engineer (1023)
Posted about 7 hours ago

📍 United States

🏢 Company: Raft Company Website

  • 5+ years of relevant hands-on experience in DevOps or related field
  • Minimum 2 years of hands-on experience with Docker, including provisioning production containerized environments and maintaining their security and compliance
  • Proficiency in Infrastructure as Code tools such as: Terraform, Ansible, and Packer for automated infrastructure provisioning and management
  • Experience with CI/CD pipeline tools including Jenkins or GitLab CI with runners for building, deployment, and release automation
  • Strong scripting skills in Bash and PowerShell across Linux and Windows environments for automation tasks
  • Experience with monitoring and logging solutions using ELK stack and Grafana
  • Demonstrated experience with cloud platforms (AWS, Azure, GCP) and on-premises containerized solutions
  • Proven ability to identify and address security vulnerabilities within infrastructure and application delivery pipelines
  • Experience configuring and managing CI/CD pipelines using GitLab Runners
  • Solid understanding of orchestration and automated deployment processes
  • Architect, develop, and implement end-to-end application lifecycles with security integrated at every stage
  • Collaborate closely with clients to design and implement containerized solutions
  • Build robust CI/CD pipelines
  • Automate infrastructure provisioning
  • Ensure compliance and best practices are followed throughout the development and deployment process

AWS, Docker, Bash, GCP, Jenkins, Kubernetes, Azure, Grafana, CI/CD, Linux, DevOps, Terraform, Ansible, Scripting

Apply
🔥 Staff SecOps Engineer
Posted about 8 hours ago

📍 Brazil, USA, Canada

🧭 Full-Time

🔍 Payments

  • Experience conducting complex projects within AWS, preferably using Terraform or CloudFormation
  • Previous experience as an IT Security Engineer
  • Previous experience with Kubernetes
  • Deep Knowledge of AWS Cloud Security Stack
  • Knowledge of AWS Identity and Access Management (IAM)
  • Experience with Security in the development pipeline (CI/CD)
  • Experience with SAST and DAST tools, and GitHub
  • Knowledge of infrastructure-as-code tooling: Terraform, CloudFormation, and others
  • Experience in reviewing and implementing internal processes and controls and managing security projects
  • Knowledge in information technology, with a focus on security, cloud security, infrastructure, and monitoring
  • Knowledge of the rules and standards for information security and the risk management and information security policy
  • Knowledge about malware and cyber attacks, incident response
  • Experience with Security Incident Response (Solving security and network issues on Cloud Environments)
  • Advanced English
  • Operation of all AWS Cloud Security Stack
  • Acting in Access Controls (IAM)
  • Participate in projects evaluating information security
  • Security Automation
  • Work with DevSecOps
  • Provide best security practices in new projects, demands, and changes
  • Create alerts and scripts for monitoring
  • Vulnerability Management
  • Perform threat hunting on Cloud Environment (AWS)
  • Create and maintain security compliance baselines

AWS, Python, Bash, Cloud Computing, Cybersecurity, Kubernetes, Amazon Web Services, CI/CD, RESTful APIs, Linux, DevOps, Terraform

Apply
🔥 Senior Data Engineer
Posted about 9 hours ago

📍 United States

🧭 Full-Time

💸 145,000 - 200,000 USD per year

🔍 Daily Fantasy Sports

🏢 Company: PrizePicks 👥 101-250 💰 Corporate about 2 years ago, Gaming, Fantasy Sports, Sports

  • 5+ years of experience in a data engineering or data-oriented software engineering role, building and shipping end-to-end data engineering pipelines.
  • 2+ years of experience acting as technical lead and providing mentorship and feedback to junior engineers.
  • Extensive experience building and optimizing cloud-based data streaming pipelines and infrastructure.
  • Extensive experience exposing real-time predictive model outputs to production-grade systems leveraging large-scale distributed data processing and model training.
  • Experience in most of the following:
    • SQL/NoSQL databases and warehouses: Postgres, BigQuery, BigTable, Materialize, AlloyDB, etc.
    • Replication/ELT services: Datastream, Hevo, etc.
    • Data transformation services: Spark, Dataproc, etc.
    • Scripting languages: SQL, Python, Go
    • Cloud platform services in GCP and analogous systems: Cloud Storage, Compute Engine, Cloud Functions, Kubernetes Engine, etc.
    • Data processing and messaging systems: Kafka, Pulsar, Flink
    • Code version control: Git
    • Data pipeline and workflow tools: Argo, Airflow, Cloud Composer
    • Monitoring and observability platforms: Prometheus, Grafana, ELK stack, Datadog
    • Infrastructure as Code platforms: Terraform, Google Cloud Deployment Manager
    • Other platform tools such as Redis, FastAPI, and Streamlit
  • Enhance the capabilities of our existing Core Data platforms and develop new integrations with both internal and external APIs within the Data organization.
  • Work closely with DevOps, architects, and engineers to ensure the success of the Core Data platform.
  • Collaborate with Analytics Engineers to enhance data transformation processes, streamline CI/CD pipelines, and optimize team collaboration workflows.
  • Architect and implement Infrastructure as Code (IaC) solutions to automate and streamline the deployment and management of data infrastructure.
  • Develop and manage CI/CD pipelines to automate and streamline the deployment of data solutions.
  • Ensure code is thoroughly tested, effectively integrated, and efficiently deployed, in alignment with industry best practices for version control, automation, and quality assurance.
  • Serve as a Data Engineering thought leader within the broader PrizePicks technology organization by staying current with emerging technologies, implementing innovative solutions, and sharing knowledge and best practices with junior team members and collaborators.
  • Provide on-call support as part of a shared rotation between the Data and Analytics Engineering teams to maintain system reliability and respond to critical issues.

Leadership, PostgreSQL, Python, SQL, Apache Airflow, Bash, Cloud Computing, ETL, GCP, Git, Kafka, Kubernetes, Data engineering, Data science, REST API, CI/CD, RESTful APIs, Mentoring, Terraform, Data modeling

Apply

πŸ“ Canada

🧭 Contract

πŸ” IT Solutions and Managed Services

🏒 Company: Charter Telecom

  • 6+ years of experience (midrange support) administering Red Hat, SUSE, and AIX UNIX environments.
  • Proven experience with Microsoft Server and Azure Administration.
  • Experience providing administration and support for IBM Storage (including SVC) and backup solutions.
  • Experience creating technical documentation and user guides for new systems.
  • Experience setting up and managing permissions to ensure the security of systems.
  • Experience setting up and supporting Unix, server and Azure systems.
  • Maintain and support Red Hat, SUSE, and AIX environments (6 years of hands-on experience required).
  • Perform Azure administration and Microsoft Server support from a systems administration perspective.
  • Provide administration and support for IBM Storage (including SVC) and backup solutions.
  • Ensure secure environments by configuring and managing system permissions.
  • Create and update technical documentation and user guides for new systems or updates.
  • Deliver technical support and troubleshooting to ensure smooth operation of server environments.
  • Participate in on-call rotation as part of a team; occasional work outside normal business hours may be required (but typically not exceeding 40 hours per week).

Bash, Cloud Computing, Microsoft Azure, Azure, Linux, Troubleshooting, Scripting

Posted about 9 hours ago
Apply

πŸ“ DC, DE, GA, MA, MD, NC, NH, NJ, OH, PA, SC, UT, VA

🧭 Full-Time

πŸ” Software Development

🏒 Company: AWeber

  • Experience writing APIs and event-driven, distributed microservices in Python with frameworks like Tornado and Rejected.
  • Experience with Docker and Kubernetes.
  • Experience capturing metrics with tools like statsd, Graphite, and Grafana to monitor the health of services.
  • Design, develop, maintain, and operate applications that power key capabilities of AWeber products.
  • Work alongside a team of skilled engineers, writing APIs and event-driven, distributed microservices in Python with frameworks like Tornado and Rejected.
  • Leverage internal tools to deploy applications onto a modern, hybrid platform using Docker and Kubernetes.

AWS, Backend Development, Docker, Python, SQL, Bash, Cloud Computing, Git, Kafka, Kubernetes, Nginx, ActiveMQ, Algorithms, API testing, Data Structures, Grafana, Postgres, Prometheus, REST API, TestRail, CI/CD, RESTful APIs, DevOps, Microservices, Scripting, Software Engineering, Debugging

Posted about 9 hours ago
Apply
🔥 Associate Platform Engineer
Posted about 11 hours ago

📍 Canada

🧭 Full-Time

🏢 Company: Top Hat 👥 251-500 💰 $130,000,000 Series E over 4 years ago 🫂 Last layoff over 1 year ago, Education, EdTech, Mobile, Software

  • Have cloud infrastructure and networking knowledge (AWS, GCP, Terraform) to be able to design and operate services on the cloud
  • Have CI/CD tooling knowledge (Github Actions, Jenkins) and experience in maintaining large multi-stage pipelines
  • Hands-on experience with configuration automation (eg Terraform), Docker, observability tooling (Honeycomb).
  • Scale continuous deployment practices across the engineering department.
  • Extend our reusable service template and its associated CI/CD tooling.
  • Lead in efforts to further mature our cloud infrastructure and platform offering as our business grows
  • Help mature our production observability practices. Help teams define SLIs, then manage and achieve SLOs
  • Extend and maintain our internal developer-friendly CLI
  • Operate and maintain our platform-level shared services and capabilities, such as continuous integration, continuous deployment, infrastructure automation and monitoring.
  • Coach product teams on operational ownership. Teach blame-free root cause analysis for incidents that impact the customer or our delivery performance.
  • Participate in our team's support rotations

AWS, Backend Development, Docker, Python, Software Development, Bash, Cloud Computing, Frontend Development, GCP, Jenkins, Kubernetes, CI/CD, Mentoring, DevOps, Terraform, Networking, Debugging

Apply
🔥 Senior DevOps Engineer
Posted about 15 hours ago

📍 United States

🧭 Full-Time

🔍 Advertising Software

🏢 Company: MNTN 👥 251-500 💰 $2,000,000 Seed over 2 years ago, Advertising, Real Time, Marketing, Software

  • 5–8+ years in DevOps, SRE, or Platform Engineering, with production experience in AWS and GCP
  • Strong automation skills, programming and scripting in Python, including working with APIs, SDKs, and CLIs
  • Proven experience operating Kubernetes (EKS, GKE) in production, with Helm, ArgoCD, and Kustomize
  • Deep expertise in Terraform / OpenTofu and infrastructure-as-code best practices; experience with Terragrunt is a plus
  • Hands-on experience with cloud migrations, hybrid deployments, and designing for provider-agnostic portability
  • Solid understanding of cost modeling, tagging strategies, and cloud cost optimization techniques (FinOps awareness a plus)
  • Familiarity with microservices architectures, service discovery, and containerization workflows
  • A platform mindset: you've built tools, abstractions, or services that improve the daily life of other developers
  • Strong communication skills and a habit of clear, useful documentation
  • Design, provision, and manage infrastructure across AWS and GCP (EKS, GKE, EC2, IAM, S3, VPC, etc.) using Terraform and GitOps workflows
  • Build internal platform tools and self-service capabilities that reduce friction for engineers and improve reliability
  • Write Python automation scripts and infrastructure tools, integrating with REST APIs, cloud SDKs, and Kubernetes components
  • Create and manage CI/CD pipelines (e.g., ArgoCD, GitHub Actions), ensuring fast, safe, and observable deployments
  • Partner with engineering teams to define infrastructure patterns and provide cost-efficient solutions based on usage and scaling requirements
  • Monitor infrastructure spend and participate in cost analysis, optimization, and FinOps practices to drive better cloud economics
  • Improve observability, alerting, and runbooks for incident response and system health
  • Implement cloud security and compliance best practices, including secrets management and IAM policy design

AWS, Docker, Python, AWS EKS, Bash, Cloud Computing, GCP, Git, Kubernetes, REST API, CI/CD, Linux, DevOps, Terraform, Microservices, Ansible, Scripting

Apply
🔥 Senior Data Scientist, US
Posted about 16 hours ago

📍 Texas, Denver, CO

💸 148,000 - 189,000 USD per year

🔍 SaaS

🏢 Company: Branch Metrics

  • 4+ years of relevant experience in data science, analytics, or related fields.
  • Degree in Statistics, Mathematics, Computer Science, or related field.
  • Proficiency with Python, SQL, Spark, Bazel, CLI (Bash/Zsh).
  • Expertise in Spark, Presto, Airflow, Docker, Kafka, Jupyter.
  • Strong knowledge of ML frameworks (scikit-learn, pandas, xgboost, lightgbm).
  • Experience deploying models to production on AWS infrastructure, and familiarity with core AWS services.
  • Advanced statistical knowledge (regression, A/B testing, Multi-Armed Bandits, time-series anomaly detection).
  • Collaborate with stakeholders to identify data-driven business opportunities.
  • Perform data mining, analytics, and predictive modeling to optimize business outcomes.
  • Conduct extensive research and evaluate innovative approaches for new product initiatives.
  • Develop, deploy, and monitor custom models and algorithms.
  • Deliver end-to-end production-ready solutions through close collaboration with engineering and product teams.
  • Identify opportunities to measure and monitor key performance metrics, assessing the effectiveness of existing ML-based products.
  • Serve as a cross-functional advisor, proposing innovative solutions and guiding product and engineering teams toward the best approaches.
  • Anticipate and clearly articulate potential risks in ML-driven products.
  • Effectively integrate solutions into existing engineering infrastructure.

AWS, Docker, Python, SQL, Bash, Kafka, Machine Learning, Airflow, Regression testing, Pandas, Spark, RESTful APIs, Time Management, A/B testing

Apply
Shown 10 out of 453

Ready to Start Your Remote Journey?

Apply to 5 jobs per day for free, or get unlimited applications with a subscription starting at €5/week.

Why Full-Stack Developer Jobs Are Becoming More Popular

Remote work from home is increasingly in demand among computer and IT professionals for several reasons:

  • Flexibility in time and location.
  • Collaboration with international companies.
  • Higher salary levels.
  • No ties to a physical office.

Remote work opens up new opportunities for specialists, allowing them to go beyond geographical limits and build a successful remote IT career. This employment model is transforming traditional work approaches, making it more convenient, efficient, and accessible for professionals worldwide.

Why do Job Seekers Choose Remoote.app?

Our platform offers convenient conditions for finding remote IT jobs from home:

  • localized search — filter job listings based on your country of residence;
  • AI-powered job processing — artificial intelligence analyzes thousands of listings, highlighting key details so you don't have to read long descriptions;
  • advanced filters — sort vacancies by skills, experience, qualification level, and work model;
  • regular database updates — we monitor job relevance and remove outdated listings;
  • personalized notifications — get tailored job offers directly via email or Telegram;
  • resume builder — create a professional CV with ease using our customizable templates and AI-powered suggestions;
  • data security — modern encryption technologies ensure the protection of your personal information.

Join our platform and find your dream job today! We offer flexible pricing — up to 5 applications per day for free, with weekly, monthly, and yearly subscription plans for extended access.