Apply

Senior Machine Learning Engineer

Posted 8 days ago

💎 Seniority level: Senior, 5+ years

📍 Location: Latin America

🔍 Industry: Software Development

🏢 Company: EX Squared LATAM

🗣️ Languages: English

⏳ Experience: 5+ years

🪄 Skills: AWS, Docker, Python, SQL, Apache Airflow, ElasticSearch, ETL, GCP, Kubernetes, Machine Learning, MLFlow, PyTorch, Algorithms, API testing, Azure, Data engineering, FastAPI, REST API, Tensorflow, CI/CD, JSON, Data visualization, Data analytics

Requirements:
  • 5+ years of experience in Machine Learning or Data Science roles
  • Proficiency with Python and ML frameworks such as TensorFlow, PyTorch, or Hugging Face
  • Deep understanding of Elasticsearch/OpenSearch, search ranking, and indexing strategies
  • Experience working with vector databases and semantic search architectures
  • Strong experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure)
  • Familiarity with MLOps tools such as MLflow, Weights & Biases, Airflow, or Kubeflow
  • Understanding of data pipelines, versioning, and deployment practices
  • Experience with natural language processing (NLP) and embedding models (e.g., BERT, SBERT, OpenAI) is a plus
Responsibilities:
  • Design and implement search relevance models using deep learning and semantic embedding techniques
  • Build and manage scalable data pipelines for text/vector indexing using Elasticsearch and OpenSearch
  • Integrate and optimize vector databases (e.g., Faiss, Pinecone, Weaviate, Milvus) to support semantic search and recommendation engines
  • Deploy and maintain ML models in production using Python, TensorFlow/PyTorch, FastAPI, Docker, and Kubernetes
  • Collaborate with data engineers and platform teams to manage ETL workflows, feature stores, and model registries
  • Monitor performance of deployed models and drive continuous improvement through experimentation and retraining
  • Implement automated evaluation pipelines for search quality metrics (precision, recall, MRR, NDCG)
  • Contribute to the evolution of a scalable ML infrastructure across distributed environments
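The evaluation metrics named in the responsibilities (precision, recall, MRR, NDCG) can be computed directly. A minimal, illustrative Python sketch — the function names and the graded-relevance input format are assumptions for illustration, not part of the posting:

```python
import math

def mrr(ranked_lists):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant
    result per query; a query with no relevant result contributes 0."""
    total = 0.0
    for relevances in ranked_lists:
        for rank, rel in enumerate(relevances, start=1):
            if rel > 0:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def ndcg(relevances, k=10):
    """Normalized Discounted Cumulative Gain at k for one ranked list
    of graded relevance scores (higher = more relevant)."""
    def dcg(rels):
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels, start=1))
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal if ideal > 0 else 0.0
```

Precision and recall at k follow the same pattern: count the relevant items in the top k against, respectively, k and the total number of relevant items for the query.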
Apply

Related Jobs

Apply

📍 LATAM, Uruguay

🔍 Data Science Consultancy

  • Bachelor’s degree or higher in Computer Science or a related technical field.
  • 4+ years of experience in software development or machine learning engineering roles.
  • Proficiency in Python, especially for ML applications.
  • Strong understanding of machine learning and AI techniques.
  • Experience with cloud platforms and infrastructure (e.g., AWS, Azure, GCP) for deploying ML models.
  • Familiarity with Spark, SparkML, and big data processing frameworks.
  • Competency in Java; knowledge of Scala is a plus.
  • Knowledge of Kubernetes and containerization best practices.
  • Experience building ML and data pipelines using frameworks like Airflow.
  • Solid understanding of software engineering principles, including clean code practices, debugging, and performance optimization.
  • Exposure to big data ecosystems and distributed data processing.
  • Knowledge of GCP and AWS a plus.
  • Design, develop, and optimize machine learning models using Python, SparkML, and related libraries.
  • Build and maintain scalable microservices for model serving, and low-latency/high-throughput backend services that leverage ML
  • Develop in-house tools and automation frameworks to streamline ML workflows.
  • Deploy, manage, and scale ML models using cloud infrastructure.
  • Write clean, maintainable, and efficient code with strong emphasis on performance and reliability.
  • Collaborate with data engineers, product managers, and DevOps teams to integrate ML models into production environments.
  • Monitor and debug model performance, ensuring reliable operations in production.

AWS, Backend Development, Python, Software Development, Cloud Computing, Java, Kubernetes, Machine Learning, Airflow, Data engineering, Spark, Microservices

Posted 16 days ago
Apply

📍 Colombia, United States, Croatia

🏢 Company: Jobgether 👥 11-50 💰 $1,493,585 Seed over 2 years ago · Internet

  • Proven experience in applied machine learning and data science, with a strong portfolio of deployed projects
  • Advanced proficiency in Python and experience with libraries like pandas, scikit-learn, PyTorch
  • Strong command of SQL and familiarity with large-scale data processing
  • Experience with cloud platforms such as AWS, GCP, or Azure
  • Solid understanding of DevOps tools and practices, including Docker, Kubernetes, CI/CD, and infrastructure as code
  • Hands-on experience building ML pipelines, from data ingestion to deployment and monitoring
  • Develop and maintain robust machine learning pipelines for training, validation, and deployment
  • Collaborate cross-functionally to ensure production readiness of models and successful integration with existing systems
  • Leverage cloud-native tools to optimize ML workflows and infrastructure performance
  • Apply DevOps principles to streamline CI/CD processes and manage containerized environments
  • Ensure compliance with data governance and security standards
  • Drive innovation by researching and applying the latest trends in ML, MLOps, and cloud technologies

AWS, Docker, Python, SQL, Bash, Cloud Computing, GCP, Kubernetes, Machine Learning, PyTorch, Algorithms, Azure, Data engineering, Data science, REST API, Pandas, CI/CD, DevOps, Terraform, Ansible, Data modeling

Posted about 2 months ago
Apply

📍 Brazil, U.S., Canada

🧭 Full-Time

🔍 Payments

  • Bachelor’s or Master’s degree in CS/Engineering/Data-Science or other technical disciplines.
  • Solid experience in DS/ML engineering.
  • Proficiency in programming languages such as Python, Scala, or Java.
  • Hands-on experience in implementing batch and real-time streaming pipelines, using SQL and NoSQL database solutions
  • Familiarity with monitoring tools for data pipelines, streaming systems, and model performance.
  • Experience in AWS cloud services (Sagemaker, EC2, EMR, ECS/EKS, RDS, etc.).
  • Experience with CI/CD pipelines, infrastructure-as-code tools (e.g., Terraform, CloudFormation), and MLOps platforms like MLflow.
  • Experience with machine learning modeling, notably tree-based and boosting models for supervised learning on imbalanced targets.
  • Experience with Online Inference, APIs, and services that respond under tight time constraints.
  • Proficiency in English.
  • Design the data-architecture flow for the efficient implementation of real-time model endpoints and/or batch solutions.
  • Engineer domain-specific features that can enhance model performance and robustness.
  • Build pipelines to deploy machine learning models in production with a focus on scalability and efficiency, and participate in and enforce the release management process for models and rules.
  • Implement systems to monitor model performance, endpoints/feature health, and other business metrics; Create model-retraining pipelines to boost performance, based on monitoring metrics; Model recalibration.
  • Design and implement scalable architectures to support real-time/batch solutions; Optimize algorithms and workflows for latency, throughput, and resource efficiency; Ensure systems adhere to company standards for reliability and security.
  • Conduct research and prototypes to explore novel approaches in ML engineering for addressing emerging risk/fraud patterns.
  • Partner with fraud analysts, risk managers, and product teams to translate business requirements into ML solutions.
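Engineering domain-specific features for fraud/risk models often means sliding-window "velocity" features over account activity. A hypothetical pure-Python sketch — the function name, event format, and window size are illustrative, not the company's actual pipeline:

```python
from collections import deque

def velocity_features(events, window_seconds=3600):
    """For each transaction (timestamp, account, amount), in time order,
    emit the count and total amount of that account's earlier transactions
    inside a sliding window — a common fraud-risk feature pair."""
    history = {}  # account -> deque of (timestamp, amount) still in the window
    out = []
    for ts, account, amount in events:
        q = history.setdefault(account, deque())
        # Drop transactions that have aged out of the window.
        while q and ts - q[0][0] > window_seconds:
            q.popleft()
        out.append((len(q), sum(a for _, a in q)))
        q.append((ts, amount))
    return out
```

In a real-time endpoint the same logic would typically run against a low-latency store (e.g., a feature store or in-memory cache) rather than an in-process dict.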

AWS, Backend Development, Docker, Python, SQL, Amazon RDS, AWS EKS, Frontend Development, Java, Kafka, Kubernetes, Machine Learning, MLFlow, Airflow, Algorithms, Data engineering, Data science, REST API, NoSQL, Pandas, Spark, CI/CD, Terraform, Scala, Data modeling, English communication

Posted 2 months ago
Apply

📍 Africa, Europe, or the Americas

🧭 Full-Time

🔍 Software Development

🏢 Company: Zepz 👥 1001-5000 💰 $267,000,000 Series F 8 months ago 🫂 Last layoff over 1 year ago · Mobile Payments, Financial Services, Payments, FinTech

  • 4+ years of professional experience training and deploying models that deliver measurable value (regression, clustering, decision trees, cost-sensitive machine learning, etc., with an emphasis on gradient-boosting-based methods).
  • You have strong SQL skills, confidently able to pull and manipulate data to get into the desired format for modelling (CTEs, joins, case statements, subqueries)
  • Possess strong Python skills: able to automate processes, deploy applications, and set up at least basic monitoring for what you deploy.
  • Familiar with building and deploying web applications using Python web frameworks.
  • Modernize our FinCrime Machine Learning Pipeline
  • Evaluate and integrate new data sources for our algorithms, aligning with Data Engineering and Analytical Engineers' best practices for dbt
  • In collaboration with Data Scientists, automate the training and deployment of updated models, ensuring the output is tested, scalable, and documented, and that checks are in place to identify drift.
  • Help build experiments framework to evaluate new models, third-party data sources and tooling.
  • Translate commercial requirements into technical solutions, converting real-world problems into solvable data science projects, resulting in insights that further the strategy and enable visibility into key results
  • Improve existing models through greater scrutiny of the methodology and improved input data
  • Develop strategies and tools to help less technical individuals understand and use the models and results.
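The drift checks mentioned in the responsibilities are often implemented with the Population Stability Index (PSI). A minimal sketch, assuming equal-width binning over the baseline sample's range — an illustration, not Zepz's actual monitoring:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample of one numeric feature; PSI > 0.2 is a common drift alarm."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A retraining pipeline would compute this per feature on a schedule and trigger retraining (or an alert) when the index crosses the chosen threshold.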

AWS, Docker, Python, SQL, Kubernetes, Machine Learning, Numpy, Algorithms, Data science, Data Structures, Regression testing, Pandas, Communication Skills, Analytical Skills, CI/CD, Problem Solving, RESTful APIs, DevOps, Data visualization, Data modeling

Posted 3 months ago
Apply

📍 United States, Latin America, India

🧭 Full-Time

🔍 Software Development

🏢 Company: phData 👥 501-1000 💰 $2,499,997 Seed about 7 years ago · Information Services, Analytics, Information Technology

  • At least 4 years experience as a Machine Learning Engineer, Software Engineer, or Data Engineer
  • 4-year Bachelor's degree in Computer Engineering or a related field
  • Experience deploying data science models in a production setting
  • Expertise in Python, Scala, Java, or another modern programming language
  • The ability to build and operate robust data pipelines using a variety of data sources, programming languages, and toolsets
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Experience working with Data Science/Machine Learning software and libraries such as h2o, TensorFlow, Keras, scikit-learn, etc.
  • Experience with Docker, Kubernetes, or some other containerization technology
  • Familiarity with multiple data source systems (e.g. JMS, Kafka, RDBMS, DWH, MySQL, Oracle, SAP)
  • Systems-level knowledge in network/cloud architecture, operating systems (e.g., Linux), storage systems (e.g., AWS, Databricks, Cloudera)
  • Production experience in core data technologies (e.g. Spark, Pandas)
  • Development of APIs and web server applications (e.g. Flask, Django, Spring)
  • Complete software development lifecycle experience, including design and documentation
  • Strong analytical abilities; ability to translate business requirements and use cases into a solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Excellent communication and presentation skills; previous experience working with internal or external customers
  • Design and create environments for data scientists to build models and manipulate data
  • Work within customer systems to extract data and place it within an analytical environment
  • Learn and understand customer technology environments and systems
  • Define the deployment approach and infrastructure for models and be responsible for ensuring that businesses can use the models we develop
  • Reveal the true value of data by working with data scientists to manipulate and transform data into appropriate formats in order to deploy actionable machine learning models
  • Partner with data scientists to ensure solution deployability—at scale, in harmony with existing business systems and pipelines, and such that the solution can be maintained throughout its life cycle
  • Create operational testing strategies; validate and test the model in QA; support implementation, testing, and deployment
  • Ensure the quality of the delivered product

AWS, Docker, Python, SQL, Cloud Computing, Django, ETL, Flask, GCP, Java, Keras, Kubernetes, Machine Learning, Snowflake, API testing, Azure, Data engineering, Data science, REST API, Spark, Tensorflow, Communication Skills, Analytical Skills, CI/CD, Problem Solving, Scala, Data visualization, Data modeling, Data analytics, Data management

Posted 3 months ago
Apply