Infrastructure Engineer

Posted about 1 month ago

💎 Seniority level: Junior, 2+ years

📍 Location: United States, Canada

💸 Salary: 134,000 - 181,000 USD per year

🔍 Industry: Website Experience Platform

🏢 Company: Webflow | 👥 501-1000 | 💰 $120,000,000 Series C almost 3 years ago | 🫂 Last layoff 6 months ago | CMS, Web Hosting, Web Design

⏳ Experience: 2+ years

🪄 Skills: AWS, Docker, GCP, Kubernetes, TypeScript, Go, Terraform

Requirements:
  • 2+ years of experience building, maintaining, and debugging cloud services in a customer-facing environment that allows for little to no downtime.
  • Hands-on experience building and scaling cloud services on AWS or GCP.
  • Experience with container-centric architectures built with tools like Docker, Kubernetes, ECS, or Mesos.
  • Experience with infrastructure-as-code tooling like Terraform, Pulumi, or CloudFormation (see the sketch below).
  • Strong communication and collaboration skills.
  • Familiarity with contributing to fullstack applications, services, and tools authored in TypeScript, Node, or Go.
Responsibilities:
  • Create and test reliable cloud infrastructure services that support Webflow’s range of products.
  • Balance reliability, scalability, and cost efficiency concerns while refactoring and modernizing existing services.
  • Collaborate with product engineering teams to deliver new solutions for services and ways of working that might not exist yet.
  • Participate in design discussions and code reviews with your team.
  • Participate in and continuously improve on-call and incident response processes.
  • Enhance engineering speed and safety through standardized processes, automation, and cloud best practices.
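
For illustration only, a minimal sketch of the kind of infrastructure-as-code work named in the requirements above, assuming Pulumi's Python SDK (pulumi, pulumi-aws), an existing Pulumi project, and configured AWS credentials; the bucket name and tags are hypothetical, not part of the posting:

    # Minimal Pulumi program: provision a private S3 bucket for service artifacts.
    import pulumi
    import pulumi_aws as aws

    artifacts = aws.s3.Bucket(
        "service-artifacts",                      # hypothetical logical name
        acl="private",
        tags={"team": "infrastructure", "managed-by": "pulumi"},
    )

    # Export the generated bucket name so other stacks or tools can reference it.
    pulumi.export("artifacts_bucket", artifacts.id)

Running `pulumi up` in a configured stack would preview and apply a change like this.
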
Related Jobs

📍 United States

💸 70,000 - 80,000 USD per year

🔍 Legal and accounting services

🏢 Company: Caret | 👥 1-10 | 💰 $1,291,130 Seed almost 4 years ago | PropTech, Commercial Real Estate, SaaS, Apps, Property Management

  • Proficient in understanding system architecture and the interaction between various services.
  • Experience in automation to reduce manual work through scripting.
  • Knowledgeable in backup processes and related applications.
  • Proficiency in state management systems like Ansible or Terraform.
  • Experienced with Active Directory and Azure AD.
  • Advanced understanding of code management repositories.
  • Self-directed, with the ability to own projects end to end.
  • Strong troubleshooting, project planning, and collaboration skills.
  • Willing to train junior engineers on technology and best practices.

  • Building, maintaining, and supporting the virtualization environment through infrastructure-as-code, scripting, and automation.
  • Monitoring and logging of the various infrastructure and network components to provide a reliable environment for clients.
  • Providing backup and retention of critical client data and system components (see the sketch below).
  • Working with client onboarding and client success managers to define, build, and deliver environments suited to client needs.
  • Designing, implementing, and supporting networking components including switches, routers, firewalls, VPN, and iSCSI.
  • Planning, executing, and monitoring security measures including intrusion detection and prevention, OS patching, and user access monitoring.
  • Participating in ongoing and new projects using proven project management processes.
  • Participating in the on-call rotation.
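
As a rough illustration of the backup-and-retention responsibility above, a standard-library-only sketch; the paths and the seven-day retention window are hypothetical:

    # Archive a data directory with a timestamped name and prune archives
    # older than the retention window. Paths and retention are hypothetical.
    import shutil, time
    from pathlib import Path

    DATA_DIR = Path("/srv/client-data")        # hypothetical source directory
    BACKUP_DIR = Path("/backups")              # hypothetical destination
    RETENTION_DAYS = 7

    def run_backup() -> Path:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        # shutil.make_archive appends the .tar.gz suffix itself.
        archive = shutil.make_archive(
            str(BACKUP_DIR / f"client-data-{stamp}"), "gztar", root_dir=DATA_DIR
        )
        return Path(archive)

    def prune_old_backups() -> None:
        cutoff = time.time() - RETENTION_DAYS * 86400
        for archive in BACKUP_DIR.glob("client-data-*.tar.gz"):
            if archive.stat().st_mtime < cutoff:
                archive.unlink()

    if __name__ == "__main__":
        print("wrote", run_backup())
        prune_old_backups()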

Cloud Computing, Microsoft Active Directory, Microsoft Azure, Terraform, Networking, Troubleshooting, Ansible, Scripting

Posted 2 days ago

📍 United States

🧭 Contract

💸 50 - 60 USD per hour

🔍 Cloud Infrastructure

🏢 Company: Third Eye Software | 👥 11-50 | Consulting, Information Technology, Recruiting, Software

  • 3-5 years of hands-on professional experience in a Cloud, Infrastructure, or Systems Engineering role.
  • Proficiency with Google Cloud Platform (GCP) services, including deployment and management of resources.
  • Expertise with Kubernetes for deploying, managing, and maintaining production clusters.
  • Strong proficiency with Terraform for infrastructure-as-code practices.
  • Experience with monitoring and logging tools such as Prometheus and Grafana.
  • Familiarity with CI/CD tools like GitHub Actions.
  • Knowledge of networking concepts and protocols, including network setup, IP addressing, and namespaces.
  • Strong problem-solving skills and attention to detail.
  • Outstanding communication skills for effective teamwork.
  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).

  • Design, set up, and maintain cloud-based infrastructure including clusters, namespaces, networks, and IP management.
  • Support the development and optimization of internal tools, improving developer onboarding and automating workflows.
  • Contribute to backend automation, CI/CD pipelines, and tools to enhance productivity and reliability.
  • Work closely with cross-functional teams to address technical challenges and support project deliverables.
  • Provide expertise in GCP deployments and ensure smooth migration processes.
  • Troubleshoot and resolve issues with GCP services, Kubernetes deployments, Terraform configurations, and other cloud technologies (see the sketch below).
  • Create and maintain documentation for best practices, troubleshooting procedures, and internal training.
  • Collaborate with team leads to align infrastructure strategies with project goals.
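
A minimal, read-only sketch of the kind of cluster troubleshooting described above, assuming the official `kubernetes` Python client and a reachable kubeconfig:

    # Report pods anywhere in the cluster that have a container not ready.
    from kubernetes import client, config

    def unready_pods():
        """Return (namespace, name) for every pod with an unready container."""
        config.load_kube_config()   # or config.load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        bad = []
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            statuses = pod.status.container_statuses or []
            if any(not s.ready for s in statuses):
                bad.append((pod.metadata.namespace, pod.metadata.name))
        return bad

    if __name__ == "__main__":
        for ns, name in unready_pods():
            print(f"not ready: {ns}/{name}")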

GCP, Kubernetes, Grafana, Prometheus, CI/CD, Terraform, Networking

Posted 4 days ago

📍 US

🧭 Full-Time

💸 160,000 - 180,000 USD per year

🔍 Senior living technology

🏢 Company: Inspiren | 👥 11-50 | 💰 $2,720,602 over 2 years ago | Machine Learning, Analytics, Information Technology, Health Care

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • Experience in network performance tuning and troubleshooting.
  • Proficiency with network diagnostic, monitoring, and analysis tools.
  • Expertise in managing cloud-based infrastructure with providers like AWS, Azure, or GCP.
  • Ability to program in languages like C++, Python, and JavaScript.
  • Knowledge of containerization technologies like Docker and Kubernetes.
  • Demonstrated track record of implementing large-scale infrastructure projects.
  • History of successful cross-team collaboration on technological integrations.
  • Record of maintaining high system uptime and reliability in past positions.

  • Act swiftly to troubleshoot and resolve complex issues related to networking, provisioning, and deployment of IoT devices.
  • Work with team members across functions to help bring new devices into our infrastructure.
  • Identify where missing-variable bias exists in our data and build out telemetry solutions to eliminate it (see the sketch below).
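
A small sketch of the kind of device-reachability telemetry probe hinted at in the last bullet, using only the Python standard library; the target endpoints are hypothetical:

    # Measure TCP connect latency to a few endpoints as a crude telemetry probe.
    import socket, time

    TARGETS = [("device-gateway.internal", 443), ("api.example.com", 443)]  # hypothetical

    def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float | None:
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None   # unreachable or timed out

    if __name__ == "__main__":
        for host, port in TARGETS:
            latency = connect_latency_ms(host, port)
            status = f"{latency:.1f} ms" if latency is not None else "unreachable"
            print(f"{host}:{port} -> {status}")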

AWS, Docker, Python, GCP, JavaScript, Kubernetes, C++, Azure, Networking

Posted 4 days ago

📍 United States

🧭 Contract

🔍 Life Sciences

🏢 Company: Apprentice | 👥 10-50

  • Strong experience managing CI/CD platforms in cloud environments with Linux/Unix background.
  • Expertise in scripting languages, including Bash, Python, Go, TypeScript, JavaScript, and Node.js.
  • Proficiency in parsing structured data formats such as JSON and YAML (see the sketch below).
  • Hands-on experience with Docker and virtualization technologies.
  • Knowledge of test automation frameworks and tools, including Cypress and Jest.
  • Proven experience in monitoring pipeline health and performance.

  • Design, implement, and optimize CI/CD pipelines for cloud-based environments, ensuring scalability and reliability.
  • Manage CI/CD platforms and tools, including GitHub Actions, Azure DevOps, and Jenkins.
  • Automate build and release processes for applications, implement containerized solutions and manage testing environments.
  • Collaborate with the test automation team to optimize test execution and analyze pipeline performance metrics.
  • Implement monitoring and alerting solutions for rapid troubleshooting and ensure stability in CI/CD pipelines.
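
A minimal sketch of the JSON/YAML parsing called out in the requirements, assuming PyYAML is installed; the file name and keys are hypothetical:

    # Load a pipeline config from YAML or JSON based on the file extension.
    import json
    import yaml

    def load_pipeline_config(path: str) -> dict:
        with open(path) as fh:
            if path.endswith((".yaml", ".yml")):
                return yaml.safe_load(fh)
            return json.load(fh)

    if __name__ == "__main__":
        cfg = load_pipeline_config("pipeline.yaml")   # hypothetical file
        # Print the config back out as pretty JSON for inspection.
        print(json.dumps(cfg, indent=2))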

Docker, Node.js, Python, Bash, Cypress, Jenkins, Jest, TypeScript, Go, CI/CD, Terraform, JSON

Posted 8 days ago

📍 United States of America

🧭 Full-Time

💸 130,295 - 260,590 USD per year

🔍 Healthcare

  • 7+ years of experience managing large-scale data platforms such as Splunk and ClickHouse.
  • 6+ years of experience building and operating high-volume data pipelines with tools such as Vector, Cribl, and Confluent.
  • Strong understanding of contemporary data modeling and architecture.
  • Proven collaboration skills across different teams.
  • Exceptional problem-solving abilities in a healthcare IT environment.
  • Excellent communication skills to convey technical data solutions to diverse audiences.
  • Experience with project management, CI/CD pipelines, and GitHub.
  • Proficiency in query languages like SPL2 and programming with Python or Java.

  • Architect and grow a scalable observability data platform using tools like Splunk and ClickHouse.
  • Innovate and refine enterprise data models to boost performance and reliability.
  • Support data management policies and the data lifecycle.
  • Enhance data integrity through robust governance processes.
  • Ensure compliance with regulations regarding data security.
  • Develop sophisticated data pipelines for data collection and processing (see the sketch below).
  • Optimize data flows and long-term storage strategies.
  • Collaborate with various IT teams for a unified operational data view.
  • Drive enhancements in data platform architecture and security measures.
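
A minimal sketch of the data-pipeline responsibility flagged above, assuming the kafka-python client and a reachable broker; the topic and broker address are hypothetical:

    # Consume observability events from a Kafka topic and count them by source.
    import json
    from collections import Counter
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "observability-events",                      # hypothetical topic
        bootstrap_servers=["kafka.internal:9092"],   # hypothetical broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    counts = Counter()
    for message in consumer:
        counts[message.value.get("source", "unknown")] += 1
        if sum(counts.values()) % 1000 == 0:
            print(dict(counts))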

AWS, Python, ETL, Java, Kafka, ClickHouse, Data engineering, CI/CD, Data modeling, Data management

Posted 10 days ago

📍 USA

🧭 Full-Time

💸 136,000 - 170,000 USD per year

🔍 Crypto and Web3

🏢 Company: Gemini | 👥 501-1000 | 💰 $1,000,000 Secondary Market over 2 years ago | 🫂 Last layoff almost 2 years ago | Cryptocurrency, Web3, Financial Services, Finance, FinTech

  • Bachelor’s degree in a technical field or 4-8 years of experience in a DevOps-focused IT/infrastructure role.
  • Strong experience managing macOS endpoints and familiarity with Linux principles.
  • Demonstrated experience with Infrastructure as Code tools (e.g., Terraform).
  • Strong understanding of AWS services and cloud-native operations.
  • Solid CI/CD pipeline experience and version control with Git.
  • Proficiency in scripting and programming languages (e.g., Python, Go, Swift).
  • Understanding of identity and access management technologies and authentication protocols.
  • Detail-oriented with excellent communication and documentation skills.
  • Proactive self-starter able to identify and implement solutions.

  • Build, maintain, and improve internal infrastructure using DevOps methodologies.
  • Integrate and automate workflows across various SaaS platforms.
  • Design and implement CI/CD pipelines for automated deployments.
  • Develop internal tools and scripts to manage global device fleet.
  • Collaborate with support teams and serve as an escalation point.
  • Engineer and maintain integrations with AWS services (see the sketch below).
  • Support identity management and automate user management.
  • Develop and maintain technical documentation and FAQs.
  • Partner with cross-functional teams for continuous improvement.
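
A small, read-only sketch of the kind of AWS integration work referenced above, assuming boto3 with configured credentials; the required tag key and region are hypothetical:

    # Find EC2 instances that are missing a required tag.
    import boto3

    REQUIRED_TAG = "owner"   # hypothetical policy: every instance carries an owner tag

    def instances_missing_tag(region: str = "us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        missing = []
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if REQUIRED_TAG not in tags:
                        missing.append(instance["InstanceId"])
        return missing

    if __name__ == "__main__":
        print("\n".join(instances_missing_tag()))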

AWS, Python, Swift, Go, REST API, CI/CD, Linux, Terraform, Ansible

Posted 10 days ago

📍 California, Texas, New York, Washington

💸 140,000 - 160,000 USD per year

🔍 AI-driven career management

🏢 Company: WerQ AI | 👥 1-10 | 💰 Pre-seed 4 months ago | Productivity Tools, EdTech, Artificial Intelligence (AI), Gamification, E-Learning, Human Resources, SaaS, Machine Learning, Professional Networking, Software

  • Experience in product management, preferably in SaaS or AI platforms.
  • Hands-on GCP management experience, or AWS/Azure with adaptability to GCP.
  • Proficiency in Infrastructure as Code (IaC) using Terraform or CloudFormation.
  • Practical knowledge of Docker and Kubernetes for container deployment.
  • Experience setting up CI/CD pipelines with tools like Jenkins or GitLab CI.
  • Familiarity with monitoring tools like Prometheus or GCP-native solutions.
  • Understanding of encryption, identity management, and network security.

  • Develop, maintain, and optimize GCP-based infrastructure for AI applications.
  • Build and manage CI/CD workflows for expedited deployments.
  • Implement monitoring, logging, and incident response tooling (see the sketch below).
  • Enforce security best practices to protect sensitive data.
  • Plan proactively for system capacity and performance.
  • Collaborate with engineers, data scientists, and developers.
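
A minimal sketch of the monitoring angle noted above, assuming a reachable Prometheus server and the requests library; the URL and query are hypothetical:

    # Run an instant PromQL query against the Prometheus HTTP API.
    import requests

    PROM_URL = "http://prometheus.internal:9090"   # hypothetical endpoint

    def instant_query(expr: str) -> list:
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") != "success":
            raise RuntimeError(f"query failed: {body}")
        return body["data"]["result"]

    if __name__ == "__main__":
        # Example: scrape targets that are currently down.
        for series in instant_query("up == 0"):
            print("down:", series["metric"])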

Docker, GCP, Jenkins, Kubernetes, Grafana, Prometheus, CI/CD, Terraform, Compliance

Posted 13 days ago

📍 United States

🧭 Full-Time

💸 85,000 - 100,000 USD per year

🔍 Cybersecurity

🏢 Company: Proficio | 👥 11-50 | Marketing, Project Management, Professional Services

  • 2+ years of hands-on experience engineering and supporting a large-scale Elastic Stack environment.
  • Experience building event-logging solutions for large corporations is preferred.
  • Experience with multiple security platform administration or engineering within large-scale or global enterprises.
  • Understanding of Network Firewalls, Load-balancers, and complex network designs.
  • Good understanding of Unix/Linux and Windows operating systems.
  • Proficient in Python, Perl, SQL, Regex, and Shell scripting.
  • Strong knowledge in Terraform, Kubernetes, AWS, and Elasticsearch.
  • Clear understanding of Elastic's data onboarding process and CIM mapping.

  • Report to the Lead SIEM Infrastructure Engineer.
  • Implement Elastic SIEM architecture for customer instances, primarily in the US (see the sketch below).
  • Support global customers as needed.
  • Organize and drive multiple customer implementations and maintenance.
  • Provide telephonic, email, or video support, with occasional site visits.
  • Work as a part of a team to define work scope and ensure effective solutions.
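
A minimal sketch of an Elastic Stack health check related to the SIEM work above, assuming the official elasticsearch Python client and a reachable cluster; the endpoint URL is hypothetical:

    # Print basic cluster health so shard or node problems surface quickly.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")   # hypothetical cluster endpoint

    health = es.cluster.health()
    print(f"cluster={health['cluster_name']} status={health['status']} "
          f"unassigned_shards={health['unassigned_shards']}")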

AWS, Python, SQL, Elasticsearch, Kubernetes, Terraform

Posted 17 days ago

📍 Kenya, Mexico, Philippines, India, United States

🔍 Financial services / Fintech

  • 5+ years of experience deploying and automating infrastructure in public cloud environments using Infrastructure as Code tools such as Terraform or Ansible.
  • In-depth hands-on experience with at least one public Cloud platform (AWS or GCP).
  • Experience with Docker and Kubernetes in production.
  • Experience with Continuous Deployment tools such as Jenkins or ArgoCD.
  • Experience with logging and monitoring tools for SaaS, such as Sumo, Splunk, or Datadog.
  • Proficiency in English.

  • Provide technical leadership to the team in driving automation of infrastructure & platform services in Public Clouds (AWS, GCP, and Azure) using Terraform and Ansible.
  • Architect new solutions with development for infra & platform.
  • Design and manage Continuous Deployment using Kubernetes, ArgoCD, and Jenkins.
  • Monitor applications and services within the environments and take part in the on-call rotation to resolve issues and implement strategies that prevent recurrences (see the sketch below).
  • Set up intelligent application performance alerts in Datadog and ElasticSearch.
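
A small, read-only sketch of the production container monitoring implied above, assuming the docker Python SDK and access to a local Docker daemon:

    # Flag containers that are not running (e.g., restarting or exited).
    import docker

    client = docker.from_env()

    for container in client.containers.list(all=True):
        if container.status != "running":
            print(f"attention: {container.name} is {container.status}")
        else:
            print(f"ok: {container.name}")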

AWS, Docker, Elasticsearch, GCP, Jenkins, Kubernetes, Terraform, Ansible

Posted about 1 month ago

📍 US

🧭 Full-Time

💸 165,000 - 195,000 USD per year

🔍 Artificial Intelligence

🏢 Company: SambaNova Systems | 👥 251-500 | 💰 Secondary Market over 1 year ago | Artificial Intelligence (AI), Semiconductor, Machine Learning, Analytics, Software

  • 7+ years in software engineering, technology architecture, and DevOps with a demonstrated ability to design, implement, and maintain secure, scalable, and resilient systems.
  • Strong computer science fundamentals.
  • Extensive experience building enterprise-grade software with Python, Go, or other modern programming languages.
  • Deep knowledge of distributed systems and cloud platform concepts.
  • Solid understanding of fundamental CI/CD, GitOps, and DevOps concepts and experience with building pipelines using technologies such as Jenkins or CircleCI.
  • Strong understanding of Linux operating system concepts and proficiency with Python and Bash scripting.
  • Strong technical acumen with Kubernetes and hands-on experience with deploying and managing services on clusters using tools like Helm and ArgoCD.
  • Deep understanding of provisioning infrastructure in a hybrid-cloud environment, infrastructure-as-code (IaC) principles, and familiarity with relevant technologies such as Ansible and Terraform.
  • Experience building and deploying robust and secure web APIs and microservices (see the sketch below).
  • Familiarity with container image building and container runtimes.
  • Experience building Linux packages and distribution platforms like Artifactory or similar technologies.
  • Experience building systems for data analytics using the ELK stack or similar technologies.
  • Proficient in cloud computing technologies, with experience working with AWS, Azure, or GCP.
  • Proven high-growth startup experience in scaling and supporting engineering teams.

  • Build cutting-edge customized tooling solutions to run complex workloads at scale across hybrid cloud environments to enhance software development, testing, and deployment efficiency.
  • Profile components of these workloads to identify bottlenecks and inefficiencies and develop solutions to improve overall time and resource utilization.
  • Drive the adoption of cutting-edge public cloud technologies across hybrid cloud environments to accelerate overall developer velocity.
  • Develop robust and scalable systems for building and managing software artifacts like Linux packages, container images and Python wheels.
  • Oversee the maintenance of leading continuous integration systems and ensure seamless integration with other systems and tools for optimal performance.
  • Design and implement streamlined industry-standard practices and policies for DevOps, GitOps and MLOps across various teams and functions.
  • Work closely with stakeholders to assess and implement enterprise-level systems that align with the organization's goals and objectives.
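
A minimal sketch of the web-API/microservice experience called out in the requirements, assuming Flask; the endpoint path and version string are hypothetical:

    # Tiny service exposing a health endpoint, the kind of building block
    # behind the web APIs and microservices mentioned above.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.get("/healthz")
    def healthz():
        # A real service would also check downstream dependencies here.
        return jsonify(status="ok", version="0.1.0")   # hypothetical version string

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)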

AWS, Docker, Python, Bash, Elasticsearch, GCP, Jenkins, Kibana, Kubernetes, Azure, Go, REST API, CI/CD, Linux, DevOps, Terraform, Microservices, Ansible

Posted about 2 months ago