Rackspace

👥 1001-5000 · 💰 Private over 7 years ago · 🫂 Last layoff almost 2 years ago · IaaS · Big Data · Cloud Computing · Cloud Infrastructure

Rackspace is a global leader in hybrid cloud services for businesses, offering expertise in public and private cloud solutions through its Fanatical Support service. It serves more than 300,000 customers worldwide, including top Fortune 100 companies.

Related companies:

🏢 DigitalOcean
👥 1001-5000 · 💰 $34,913,641 Post-IPO Equity over 3 years ago · 🫂 Last layoff almost 2 years ago · Virtualization · DevOps · Web Hosting · Cloud Computing · SaaS

Jobs at this company:

Apply

🧭 Full-Time

🔍 Multi-cloud solutions

  • In-depth knowledge of Datadog features such as dashboards, monitors, log management, and alerting.
  • Expertise in configuring Datadog agents across various environments (e.g., AWS, Azure, GCP).
  • Experience with creating and managing custom metrics, traces, and logs.
  • Ability to integrate Datadog with various services and tools.
  • Experience in automating monitoring and alerting setups using IaC tools like Terraform or CloudFormation.
  • Proficiency in scripting languages such as Python, Bash, or PowerShell.
  • Experience with developing custom Datadog integrations.
  • Strong understanding of observability principles, including metrics, logging, and tracing.
  • Ability to design monitoring strategies providing visibility into system performance.
  • Experience with monitoring application and infrastructure performance using Datadog.
  • Skills in identifying performance bottlenecks and optimizing system performance.
  • Experience in setting up and managing alerts in Datadog (see the sketch after this list).
  • Ability to troubleshoot complex issues by analyzing metrics in Datadog.
  • Strong communication skills to convey insights to technical and non-technical stakeholders.
  • Knowledge of security best practices related to monitoring.
  • Experience with integrating Datadog into security operations.
  • Commitment to continuously improving monitoring setups and staying updated with Datadog features.
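
As a rough illustration of the alerting work described above (a minimal sketch, not Rackspace's actual tooling): the legacy "datadog" Python client can create a metric monitor programmatically. The API/app keys, metric query, thresholds, and notification handle below are placeholders, not values from this posting.

```python
# Minimal sketch: creating a Datadog metric monitor with the legacy "datadog"
# Python client. Keys, the metric query, thresholds, and the notification
# handle are placeholders.
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:system.cpu.user{env:prod} by {host} > 90",
    name="High CPU on prod hosts",
    message="CPU above 90% for 5 minutes. @slack-ops-alerts",
    tags=["team:cloud-ops", "managed-by:script"],
    options={"thresholds": {"critical": 90, "warning": 80}, "notify_no_data": False},
)
```

The same monitor could be expressed declaratively as a Terraform `datadog_monitor` resource, which is closer to the IaC workflow the posting describes.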

  • Be a key member of the Managed Public Cloud software development team, collaborating globally.
  • Work on a variety of projects including cloud integrated services, customer interaction platforms, and backend business systems.
  • Collaborate with Product teams to assess functional requirements for new offerings and analyze technical feasibility.
  • Architect production-ready software with minimal direction, prioritizing system observability.
  • Establish coding best practices, conduct code reviews, and motivate the team.
  • Lead research, proof of concept, and prototype efforts.
  • Contribute to engineering standards and engage in project discussions.
  • Participate in a DevOps culture, including on-call rotations and maintenance schedules.

Scripting

Posted 3 months ago
Apply

📍 Canada

🔍 Multicloud solutions and technology services

  • Proven track record in designing and implementing scalable ML inference systems.
  • Hands-on experience with deep learning frameworks such as TensorFlow, Keras, or Spark MLlib.
  • Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
  • Strong understanding of computer science concepts including algorithms and distributed systems.
  • Proficiency and recent experience in Java are required.
  • Experience in Apache Hadoop ecosystem (Oozie, Pig, Hive, Map Reduce).
  • Expertise in public cloud services, particularly GCP and Vertex AI (see the sketch after this list).
  • Understanding of LLM architectures and model optimization techniques.
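
For context on the GCP/Vertex AI item flagged above, here is a minimal, hypothetical sketch of online inference against an already-deployed Vertex AI endpoint using the google-cloud-aiplatform SDK; the project, region, endpoint ID, and payload are placeholders, not details from the posting.

```python
# Minimal sketch: querying a deployed Vertex AI endpoint for online predictions.
# Project, region, endpoint ID, and the instance payload are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

endpoint = aiplatform.Endpoint(endpoint_name="1234567890123456789")
response = endpoint.predict(instances=[{"text": "example input"}])

print(response.predictions)
```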

  • Architect and optimize existing data infrastructure for machine learning and deep learning models.
  • Collaborate with cross-functional teams to translate business objectives into engineering solutions.
  • Own development and operation of high-performance inference systems for various models.
  • Provide technical leadership and mentorship to the engineering team.

Leadership · Python · Apache Hadoop · GCP · Hadoop · Java · Keras · Machine Learning · C++ · Algorithms · Data Structures · Spark · TensorFlow · C (Programming language)

Posted 5 months ago
Apply
🔥 Senior MLOps Engineer

🧭 Full-Time

🔍 Multicloud solutions

  • Proven track record in designing and implementing cost-effective and scalable ML inference systems.
  • Hands-on experience with leading deep learning frameworks such as TensorFlow, Keras, or Spark MLlib.
  • Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
  • Strong grasp of fundamental computer science concepts like algorithms, distributed systems, data structures, and database management.
  • Experience in Apache Hadoop ecosystem (Oozie, Pig, Hive, Map Reduce).
  • Expertise in public cloud services, particularly in GCP and Vertex AI.
  • Proficient in applying model optimization techniques (distillation, quantization, hardware acceleration); see the quantization sketch after this list.
  • Recent experience in Java.
  • In-depth understanding of LLM architectures, parameter scaling, and deployment trade-offs.
  • Technical degree: Bachelor's degree in Computer Science or Master's degree with relevant industry experience.
  • Specialization in Machine Learning is preferred.
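
One of the optimization techniques named above, quantization, can be sketched with TensorFlow Lite's post-training dynamic-range quantization; the toy Keras model below is a stand-in for whatever model would actually be served.

```python
# Minimal sketch: post-training dynamic-range quantization of a Keras model
# via the TensorFlow Lite converter. The toy model is a placeholder.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables dynamic-range quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```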

  • Architect and optimize existing data infrastructure for machine learning and deep learning models.
  • Collaborate with cross-functional teams to translate business objectives into engineering solutions.
  • Own end-to-end development and operation of high-performance, cost-effective inference systems.
  • Provide technical leadership and mentorship to the engineering team.

Leadership · Python · Apache Hadoop · GCP · Hadoop · Java · Keras · Machine Learning · C++ · Algorithms · Data Structures · Spark · TensorFlow · Communication Skills · C (Programming language)

Posted 5 months ago
Apply

📍 US

💸 116,100 - 198,440 USD per year

🔍 Cloud Solutions

  • Proficiency in the Hadoop ecosystem including Map Reduce, Oozie, Hive, Pig, HBase, and Storm.
  • Strong programming skills with Java, Python, and Spark.
  • Knowledge in public cloud services, particularly in GCP.
  • Experience in Infrastructure and Applied DevOps principles, including CI/CD and IaC like Terraform.
  • Ability to tackle complex challenges with innovative solutions.
  • Effective communication skills in a remote work setting.

  • Develop scalable and robust code for large-scale batch processing systems using Hadoop, Oozie, Pig, Hive, Map Reduce, Spark (Java), Python, HBase (see the sketch after this list).
  • Develop, manage, and maintain batch pipelines supporting Machine Learning workloads.
  • Leverage GCP for scalable big data processing and storage solutions.
  • Implement automation/DevOps best practices for CI/CD and IaC.
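
The batch-pipeline responsibility referenced above might look roughly like the following PySpark sketch (the posting also lists Spark in Java); the bucket paths, column names, and aggregation are hypothetical.

```python
# Minimal sketch: a PySpark batch job that reads raw JSON events, aggregates
# per-user daily features, and writes partitioned Parquet for ML workloads.
# Paths and column names are placeholders; GCS access assumes the GCS connector.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-feature-batch").getOrCreate()

events = spark.read.json("gs://example-bucket/raw/events/dt=2024-01-01/")

daily_features = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("session_id").alias("session_count"),
    )
)

(daily_features
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("gs://example-bucket/features/daily/"))

spark.stop()
```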

Python · Apache Hadoop · GCP · Hadoop · Java · Machine Learning · Spark · Terraform

Posted 5 months ago
Apply

📍 USA

🧭 Full-Time

🔍 Multicloud solutions

  • Proven experience in designing and implementing scalable ML inference systems.
  • Hands-on experience with deep learning frameworks like TensorFlow, Keras, or Spark MLlib.
  • Solid understanding of ML algorithms, natural language processing, and statistical modeling.
  • Strong knowledge of computer science concepts such as algorithms, distributed systems, and database management.
  • Effective problem-solving skills and critical thinking.
  • Experience in the Apache Hadoop ecosystem (Oozie, Pig, Hive, Map Reduce).
  • Expertise in public cloud services, specifically GCP and Vertex AI.
  • Recent proficiency in Java and knowledge of model optimization techniques.

  • Architect and optimize the existing data infrastructure to support machine learning and deep learning models.
  • Collaborate with cross-functional teams to align engineering solutions with business objectives.
  • Develop and operate cost-effective inference systems for a variety of models, including LLMs (see the batching sketch after this list).
  • Provide technical leadership and mentorship for the engineering team.
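
On the cost-effective inference point flagged above: a common lever is micro-batching requests before they reach the model. The sketch below shows the idea in plain Python with a stand-in model function; it is not tied to any particular serving stack used here.

```python
# Minimal sketch: micro-batching incoming requests so the model runs once per
# batch instead of once per request. run_model() is a placeholder for the real
# inference call (e.g. an LLM endpoint).
import queue
import threading

requests_q = queue.Queue()  # items are (input_text, reply_queue) pairs

def run_model(batch):
    # Placeholder model: echo each input.
    return [f"output for: {text}" for text in batch]

def batch_worker(max_batch=8, max_wait_s=0.05):
    while True:
        items = [requests_q.get()]            # block until one request arrives
        try:
            while len(items) < max_batch:     # then gather more, briefly
                items.append(requests_q.get(timeout=max_wait_s))
        except queue.Empty:
            pass
        outputs = run_model([text for text, _ in items])
        for (_, reply_q), out in zip(items, outputs):
            reply_q.put(out)

threading.Thread(target=batch_worker, daemon=True).start()

def infer(text):
    reply_q = queue.Queue(maxsize=1)
    requests_q.put((text, reply_q))
    return reply_q.get()

print(infer("hello"))  # -> "output for: hello"
```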

Leadership · Python · Apache Hadoop · GCP · Hadoop · Java · Keras · Machine Learning · C++ · Algorithms · Data Structures · Spark · TensorFlow · Communication Skills · C (Programming language)

Posted 5 months ago
Apply