
Data Scientist

Posted 3 days ago



Related Jobs


πŸ“ United States

🧭 Full-Time

πŸ” Consulting

🏒 Company: Aimpoint DigitalπŸ‘₯ 1-50ConsultingAnalyticsAdvice

  • Experience building high-quality data science models to solve clients' business problems
  • Experience managing stakeholders and collaborating with customers
  • Strong written and verbal communication skills required
  • 3+ years of experience developing and deploying ML models on any platform (Azure, AWS, GCP, Databricks, etc.)
  • Ability to apply data science methodologies and principles to real-life projects
  • Expertise in software engineering concepts and best practices
  • Become a trusted advisor working with clients to design end-to-end analytical solutions
  • Work independently to solve complex data science use-cases across various industries
  • Design and develop feature engineering pipelines, build ML & AI infrastructure, deploy models, and orchestrate advanced analytical insights
  • Write code in SQL, Python, and Spark following software engineering best practices
  • Collaborate with stakeholders and customers to ensure successful project delivery
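The feature-engineering pipeline work described above can be illustrated with a minimal sketch in plain Python. The transaction data and function names here are invented for illustration and do not come from the posting; a production version would typically run in Spark or a similar engine.

```python
from datetime import date

# Hypothetical raw transaction records: (customer_id, date, amount).
transactions = [
    ("c1", date(2024, 1, 5), 120.0),
    ("c1", date(2024, 1, 20), 80.0),
    ("c2", date(2024, 1, 7), 45.0),
]

def build_features(rows):
    """Aggregate raw transactions into per-customer model features."""
    features = {}
    for customer_id, _, amount in rows:
        f = features.setdefault(customer_id, {"n_txn": 0, "total": 0.0})
        f["n_txn"] += 1
        f["total"] += amount
    for f in features.values():
        f["avg_amount"] = f["total"] / f["n_txn"]
    return features

features = build_features(transactions)
print(features["c1"])  # {'n_txn': 2, 'total': 200.0, 'avg_amount': 100.0}
```

The same group-and-aggregate shape maps directly onto a Spark `groupBy().agg()` or a SQL `GROUP BY` in a real pipeline.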

Python · SQL · Cloud Computing · Data Analysis · Git · Machine Learning · Algorithms · Data engineering · Data science · Spark · Communication Skills · CI/CD · Problem Solving · Agile methodologies · RESTful APIs · Data visualization · Data modeling · Software Engineering

Posted 44 minutes ago
🔥 Senior Data Scientist
Posted about 4 hours ago

πŸ“ United States

🧭 Full-Time

πŸ” Healthcare

🏒 Company: Atropos Health

  • Degree in Clinical Informatics, Bioinformatics, CS, Engineering, Epidemiology, Statistics, or a quantitative discipline
  • Experience manipulating large data sets and developing and deploying models onto production infrastructure
  • Fluency with Python, R, SQL, git, Linux and cloud infrastructure (AWS and Docker). You have published a Python or R package before and are familiar with virtual environments
  • Sufficiently knowledgeable about healthcare to understand product needs
  • Excellent problem-solving, project management and team collaboration skills
  • Flexible thinking: you know how to re-frame problems to find practical solutions
  • Work with Product, Clinical and Engineering stakeholder teams to understand product and clinical requirements and deliver solutions that balance technical rigor with practical application
  • Test, productionalize and maintain data science products with software engineering best practices including code management and documentation
  • Articulate and deconstruct complex projects into workable solutions and identify appropriate data and methods
  • Practice good judgment and solicit information to make good and timely design decisions
  • Manage and drive projects, working with stakeholders to address dependencies and gaps
  • Solicit user feedback and propose opportunities for product innovation (e.g. to add new functionalities, improve model performance, automate processes)
  • Stay abreast of research and conduct literature and empirical research to propose appropriate solutions while sidestepping less promising ones
  • Excellent writing skills – you may be asked to contribute to our technical blogs

AWS · Docker · Python · SQL · Git · Machine Learning · Data engineering · Data science · Linux · DevOps · Data visualization · Data modeling


πŸ“ South America

🧭 Contract

πŸ” Software Development

🏒 Company: NerdyπŸ‘₯ 501-1000πŸ’° $150,000,000 Post-IPO Equity over 3 years agoπŸ«‚ Last layoff over 2 years agoEducationEdTechArtificial Intelligence (AI)

  • M.S. / PhD degree in a quantitative discipline (e.g. Mathematics, Statistics, Machine Learning, Econometrics).
  • 5+ years of professional data science experience
  • Applied skills in the areas of statistical inference, probability, machine learning (unsupervised and supervised).
  • Proven track record of productionalizing customer-facing models that create impact and have operational rigor.
  • Proven ability to gather / munge / clean data from various sources using SQL.
  • Ability to develop ML or statistical analyses software in Python.
  • Experience with building recommender systems and implementation with a breadth of matching algorithms is a plus.
  • Experience with LLMs, generative AI, MCP, agents, and Natural Language Processing, alongside a classical understanding of ML models (classification, ranking systems).
  • Develop customer facing algorithmic systems owning the end-to-end lifecycle of data science, from problem definition, qualitative analysis, data gathering and preparation to model training and testing.
  • Own the evaluation strategy and outcomes over time, ensuring the models keep performing
  • Partner with Product and Engineering teams to define, prototype, and help productionalize ML solutions that personalize live learning at scale.
  • Develop offline and online evaluation strategies to ensure ongoing model performance and impact.
  • Bring generative AI and LLM-based methods into production use cases (e.g., summarization, classification, content generation).
  • Drive initiatives from conception to production with measurable impact to the business.
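The recommender-system experience this listing asks for can be sketched with a minimal user-based nearest-neighbour approach in plain Python. All users, items, and ratings below are invented for illustration; a production system would use proper matrix factorization or learned embeddings.

```python
import math

# Hypothetical user -> {item: rating} data.
ratings = {
    "alice": {"calculus": 5.0, "algebra": 4.0},
    "bob":   {"calculus": 4.0, "algebra": 5.0, "geometry": 5.0},
    "carol": {"geometry": 4.0},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(user, k=1):
    """Recommend up to k unseen items from the most similar user's ratings."""
    others = [(cosine(ratings[user], r), name)
              for name, r in ratings.items() if name != user]
    others.sort(reverse=True)
    best = ratings[others[0][1]]
    unseen = [i for i in best if i not in ratings[user]]
    return sorted(unseen, key=lambda i: -best[i])[:k]

print(recommend("alice"))  # ['geometry']
```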

Python · SQL · Apache Airflow · Data Analysis · Data Mining · ETL · Machine Learning · Numpy · Algorithms · Data science · RDBMS · REST API · Pandas · Tensorflow · Data visualization

Posted about 8 hours ago

πŸ“ United States

🧭 Full-Time

πŸ” Healthcare

🏒 Company: VetsEZπŸ‘₯ 101-250DatabaseInformation ServicesInformation TechnologySoftware

  • 5+ years of experience working with healthcare interoperability standards, including HL7v2, HL7 FHIR, X12, and CDA.
  • Hands-on experience with healthcare data quality assessment, machine learning, and clinical NLP applications.
  • Proficiency in Python, R, SAS, SQL, and experience working with big data platforms (Snowflake, AWS, PySpark).
  • Strong knowledge of FHIR, CCDA, and HL7 parsing techniques.
  • Experience analyzing and improving clinical data exchange quality and usability.
  • Experience in data visualization and dashboard development using Power BI, Tableau, or similar tools.
  • Strong problem-solving and analytical skills, with the ability to work effectively in cross-functional teams.
  • Ability to work closely and effectively with developers in an Agile and DevSecOps environment.
  • Design and implement healthcare data interoperability solutions, collaborating with HDM leadership, DevSecOps teams, and VHA business owners.
  • Support data validation, quality analysis, and transformation across HL7 standards (HL7v2, HL7 FHIR, X12, CDA) to ensure seamless integration between systems.
  • Utilize Natural Language Processing (NLP) and machine learning techniques to analyze clinical data, focusing on prescription SIG classification, procedure text, and immunization records.
  • Develop clinical usability algorithms and automation tools to improve data validation, quality assessment, and exchange between healthcare partners.
  • Collaborate with software developers, data analysts, and functional testers to enhance data quality, system integration, and interoperability efforts.
  • Build interactive dashboards and data visualization tools (Power BI, Tableau, or similar) to support informed clinical decision-making.
  • Analyze and interpret large-scale healthcare datasets from sources such as HL7 FHIR, V2, and CCDA XML files to improve data reliability.
  • Support unit and regression testing for data integrity validation in VA enterprise health data environments.
  • Take on additional tasks and responsibilities as needed to support team objectives and ensure the success of the project.
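The HL7v2 parsing work mentioned above can be sketched with a minimal segment parser in plain Python: HL7v2 segments are pipe-delimited fields whose components are caret-delimited. The message content below is a made-up example, not real clinical data, and a real system would use a full HL7 library rather than hand-rolled splitting.

```python
# A made-up HL7v2 PID segment: fields pipe-delimited, components caret-delimited.
segment = "PID|1||12345^^^HOSP^MR||DOE^JANE||19800101|F"

def parse_pid(seg):
    """Extract a few common PID fields into a dict."""
    fields = seg.split("|")
    return {
        "segment": fields[0],
        "patient_id": fields[3].split("^")[0],   # PID-3, first component
        "last_name": fields[5].split("^")[0],    # PID-5, family name
        "first_name": fields[5].split("^")[1],   # PID-5, given name
        "dob": fields[7],                        # PID-7, YYYYMMDD
        "sex": fields[8],                        # PID-8
    }

pid = parse_pid(segment)
print(pid["last_name"], pid["dob"])  # DOE 19800101
```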

AWS · Python · SQL · Agile · Data Analysis · Machine Learning · Snowflake · Tableau · Data engineering · DevOps · Data visualization · Data modeling

Posted about 17 hours ago
🔥 Actuarial Data Scientist
Posted about 19 hours ago

πŸ“ United Kingdom

πŸ” Insurance

🏒 Company: careers

  • Strong undergraduate degree in a STEM subject
  • Work (or postgraduate research) experience in an analytical role
  • Experience of demographic/biometric research or assumption setting
  • Experience of using R or python in a work or research environment
  • Postgraduate degree
  • Qualified or part-qualified actuary
  • Exposure to working in and/or with Actuarial teams in the insurance industry
  • Builds and maintains software related to modelling and forecasting mortality rates and future improvements in the markets and geographies in which RGA writes business
  • Works with stakeholders across the business to help them understand the tools and models that are on offer, and troubleshoot any issues that arise
  • Writes high quality code in R or Python and understands how to structure what they write such that it is readable, maintainable, and reusable.
  • Works with a range of stakeholders across the organization to understand their needs and prioritize potential projects
  • Responds to development requests and bug-fix-requests in a timely manner, and communicates timelines and progress to all relevant stakeholders
  • Serves tools and insights using a range of modern solutions including launchers and dashboards
  • Stays abreast of new developments in mortality modelling across different geographies, and explains the benefits of these to stakeholders ahead of implementing any agreed methods
  • Finds, collates, cleans, and standardizes a range of key datasets from multiple sources and geographies to help with understanding mortality improvements (cause of death, socioeconomic data, etc.).
  • Conducts ad hoc research into topics related to future mortality improvements, and shares findings with stakeholders across global functions and business units
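The mortality-improvement research described above can be sketched with a simple log-linear trend fit in plain Python: regress log mortality rate on calendar year, so the slope translates into an annual improvement rate. The rates below are invented for illustration; real work would use stochastic models such as Lee-Carter rather than a single trend line.

```python
import math

# Hypothetical crude mortality rates per 1,000 lives by calendar year.
rates = {2018: 9.2, 2019: 9.0, 2020: 8.9, 2021: 8.7, 2022: 8.5}

def annual_improvement(data):
    """Fit log(rate) = a + b*year by least squares; improvement = 1 - exp(b)."""
    xs = list(data)
    ys = [math.log(v) for v in data.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return 1.0 - math.exp(b)

imp = annual_improvement(rates)
print(f"estimated annual mortality improvement: {imp:.1%}")
```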

Python · SQL · Data Analysis · Data science · Communication Skills · Analytical Skills · Data visualization · Data modeling


🧭 Full-Time

🔍 Software Development

🏢 Company: Bhblasted

  • Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Data Science, Engineering, or a related field.
  • Understanding of data engineering practices and data architecture.
  • Familiarity with version control systems (e.g., Git).
  • Programming Languages: Python, R, Java, or Scala for data analysis and modeling; SQL for data querying.
  • Experience with data manipulation and analysis libraries (e.g., pandas, NumPy, dplyr, or similar libraries).
  • Familiarity with specific tools (Tableau, Power BI, or Looker) and Big Data Tools (Hadoop, Spark, Hive, or Kafka for large-scale data processing).
  • Competence in cloud architectures (AWS, Google Cloud Platform (GCP), Azure) for deploying models and managing infrastructure.
  • Proficiency in English (both written and spoken).
  • Collect, clean, and preprocess large datasets from multiple sources (databases, APIs, web scraping, sensors, etc.).
  • Identify and correct data inconsistencies, handle missing values, and ensure data quality.
  • Prepare data for analysis by normalizing, scaling, encoding, or aggregating it as needed.
  • Conduct exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
  • Develop predictive and descriptive models using supervised, unsupervised, and reinforcement learning techniques.
  • Tune model parameters to maximize accuracy and efficiency.
  • Identify and create meaningful features to improve model performance.
  • Translate data findings into actionable business recommendations.
  • Create detailed reports and presentations to communicate results to stakeholders.
  • Build interactive dashboards or visualizations for real-time data interpretation.
  • Implement ML models into production systems.
  • Create scalable data pipelines and automate processes for continuous data analysis.
  • Work with developers to integrate insights and models into business applications.
  • Ensure compliance with data privacy laws and ethical guidelines (e.g., GDPR, CCPA).
  • Maintain detailed documentation of methodologies, datasets, and workflows.
  • Protect sensitive data and ensure secure handling.
  • Work closely with business analysts, engineers, domain experts, and leadership teams.
  • Understand business needs and translate them into analytical tasks.
  • Simplify technical results for non-technical stakeholders.
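The cleaning and preprocessing duties listed above (handling missing values, normalizing, scaling) can be sketched in plain Python. The column of ages is invented for illustration; in practice the same steps map onto pandas or scikit-learn transformers.

```python
# Hypothetical raw feature column with a missing value.
ages = [22, 35, None, 41, 28]

def minmax_scale(values):
    """Impute missing values with the mean, then scale to [0, 1]."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in values]
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

scaled = minmax_scale(ages)
print([round(v, 2) for v in scaled])  # [0.0, 0.68, 0.5, 1.0, 0.32]
```

Fitting the imputation mean and the min/max on training data only (then reusing them at prediction time) is the usual guard against leakage.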

AWS · Docker · Python · SQL · Cloud Computing · Data Analysis · Data Mining · ETL · GCP · Git · Hadoop · Java · Kafka · Machine Learning · Numpy · Jira · Tableau · Algorithms · Data engineering · Data science · Pandas · Spark · Communication Skills · Analytical Skills · RESTful APIs · Scala · Data visualization · Data modeling

Posted 1 day ago
🔥 Data Scientist
Posted 1 day ago

πŸ“ England, United Kingdom

🧭 Permanent

🏒 Company: Keller Executive SearchπŸ‘₯ 51-100

  • Bachelor's degree in Data Science, Computer Science, Statistics, or a related field; a Master's degree is preferred.
  • Proven experience as a Data Scientist or in a similar analytical role.
  • Strong programming skills in Python or R, with proficiency in data manipulation libraries (e.g., Pandas, NumPy).
  • Experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow) and data visualization tools (e.g., Tableau, Matplotlib).
  • Solid understanding of statistics, probability, and data-driven decision-making.
  • Experience working with databases (SQL) and data warehousing solutions.
  • Strong problem-solving skills and ability to work collaboratively within a team environment.
  • Excellent written and verbal communication skills, with the ability to effectively present complex information.
  • Proficiency in English; knowledge of additional languages is a plus.
  • Analyze and interpret complex data from diverse sources to inform strategic business decisions.
  • Develop and implement machine learning models and statistical analyses to solve business challenges.
  • Collaborate with cross-functional teams to identify opportunities for leveraging data to drive business growth.
  • Visualize data through intuitive dashboards and reports to effectively communicate findings and insights to technical and non-technical stakeholders.
  • Stay updated on industry trends and best practices to continuously enhance the team’s analytical capabilities.
  • Write clear documentation on methodology, model interpretations, and implementation strategies.

Python · SQL · Data Analysis · Machine Learning · Numpy · Tableau · Data science · REST API · Pandas · Tensorflow · Data visualization · Data modeling


πŸ“ Canada

πŸ” Fintech

  • Excellent Python and SQL skills, with first-hand experience using popular Python libraries such as Pandas, scikit-learn, NumPy, and Jupyter
  • Strong understanding of statistics: both frequentist and Bayesian approaches
  • Strong understanding of fundamental machine-learning algorithms: regression and decision trees
  • Analyze user behaviour to understand how clients interact with the product
  • Develop and implement experiments to evaluate new features and product changes
  • Apply causal inference methods in non-experimental settings
  • Use machine learning algorithms to segment users and personalize product experiences
  • Develop and maintain core data models and pipelines to ensure data quality and accessibility
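The experimentation work described above can be sketched with a standard two-proportion z-test in plain Python, used to judge whether a feature change moved a conversion rate. The counts below are invented for illustration; causal inference in non-experimental settings needs more machinery (matching, instrumental variables, difference-in-differences) than this.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates (pooled)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical A/B test: control converted 200/2000, variant 260/2000.
z = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2))  # comfortably above the 1.96 threshold for p < 0.05
```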

Python · SQL · Machine Learning · Numpy · Algorithms · Data science · Regression testing · Pandas · Data modeling

Posted 1 day ago

πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Trace MachinaπŸ‘₯ 11-50πŸ’° $4,700,000 Seed 7 months agoIT InfrastructureRoboticsSoftware

  • 3+ years of experience as a Data Scientist, with a strong focus on AI and machine learning
  • Expertise in machine learning algorithms, data analysis, and statistical modeling techniques
  • Proficiency in Python, R, or other data science programming languages, with experience using libraries such as TensorFlow, PyTorch, Scikit-learn, and Pandas
  • Strong knowledge of deep learning, reinforcement learning, or other advanced AI techniques
  • Experience with large-scale data processing, including working with big data technologies (e.g., Spark, Hadoop)
  • Familiarity with cloud infrastructure (AWS, GCP, Azure) and deploying machine learning models in production
  • Strong understanding of data wrangling, feature engineering, and building predictive models
  • Experience with version control (Git) and working in collaborative environments
  • Excellent problem-solving skills and ability to generate actionable insights from data
  • Ability to communicate complex AI/ML concepts effectively to both technical and non-technical teams
  • Design, implement, and deploy machine learning models to optimize software build systems, including caching, task distribution, and execution workflows
  • Work with large datasets to identify patterns, anomalies, and insights that inform decisions for improving build processes and remote execution
  • Develop predictive models to optimize build times, cache hit rates, and system resource utilization
  • Conduct experiments to improve the efficiency of build systems through data-driven decisions, leveraging AI/ML techniques such as reinforcement learning and optimization
  • Collaborate with cross-functional teams (engineering, product, and operations) to translate business problems into AI/ML-driven solutions
  • Analyze customer usage data to identify opportunities for feature improvements and innovations within the NativeLink platform
  • Develop custom algorithms for performance monitoring, anomaly detection, and optimization of CI/CD pipelines
  • Build, test, and validate machine learning models using a variety of techniques, ensuring they are scalable, robust, and interpretable
  • Build and maintain data pipelines to support model training, testing, and deployment in production environments
  • Communicate findings and insights to both technical and non-technical stakeholders in a clear and actionable way
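The performance-monitoring and anomaly-detection responsibilities above can be sketched with a simple z-score detector over build durations in plain Python. The durations are invented for illustration; production monitoring would typically use rolling windows and robust statistics (median/MAD) rather than a global mean.

```python
import statistics

# Hypothetical build durations in seconds; the last one is a regression.
durations = [41.0, 39.5, 40.2, 42.1, 40.8, 39.9, 88.0]

def zscore_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

print(zscore_anomalies(durations))  # [88.0]
```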

AWS · Python · Cloud Computing · Data Analysis · GCP · Git · Hadoop · Machine Learning · PyTorch · Azure · Data science · Pandas · Spark · Tensorflow · CI/CD · Data visualization

Posted 1 day ago

πŸ“ United States of America

πŸ’Έ 135886.0 - 189000.0 USD per year

🏒 Company: external

  • 3 years of experience with consumer pricing concepts such as price elasticity, price optimization, demand forecasting, or consumer segmentation
  • Building models in Python using causal inference, predictive modeling, time series forecasting, optimization, econometrics, or statistics to optimize decision-making
  • Analyzing enterprise-level multi-dimensional data sets stored in Google BigQuery, Snowflake, Amazon Redshift, or a similar enterprise database with tools including Python and SQL
  • Deploying machine learning algorithms using AWS, GCP, Airflow, Terraform, or Docker
  • Presenting analytical methodology and results to non-technical audiences
  • Create algorithms that drive company pricing and promotion strategies and engines.
  • Build and iterate algorithms and platforms for pricing and promotions strategies.
  • Prototype, test, deploy, and measure new pricing and promotions models and partner with data engineering teams to automate and productionize systems.
  • Apply econometric, statistical models and machine learning to explain and predict business impacts of pricing and promotions decisions.
  • Design, execute, and assess pricing and promotion experiments to drive both top and bottom line returns.
  • Utilize and continually learn latest modeling techniques to identify optimal price and promotions, including Bayesian optimization, surrogate modeling, causal inference, and others.
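The price-elasticity modelling central to this role can be sketched with a log-log regression in plain Python: the slope of log quantity on log price is the elasticity estimate. The weekly observations below are invented for illustration; real pricing work would control for promotions, seasonality, and confounders via the causal-inference methods the listing names.

```python
import math

# Hypothetical weekly observations of (price, units sold).
observations = [(10.0, 500), (11.0, 430), (12.0, 380), (13.0, 330)]

def price_elasticity(obs):
    """Least-squares slope of log(quantity) on log(price).
    A value of -1.5 means a 1% price increase cuts demand by about 1.5%."""
    xs = [math.log(p) for p, _ in obs]
    ys = [math.log(q) for _, q in obs]
    n = len(obs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

e = price_elasticity(observations)
print(round(e, 2))
```

An elasticity below -1 (as here) means demand is elastic: raising price lowers total revenue, which is exactly the trade-off a pricing engine optimizes.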

AWS · Docker · Python · SQL · Data Analysis · GCP · Machine Learning · Snowflake · Airflow · Algorithms · Data science · Data visualization

Posted 1 day ago

Related Articles

Posted about 1 month ago

Why is remote work such a nice opportunity?

Why is remote work so appealing? Let's find out!

Posted 8 months ago

Insights into the evolving landscape of remote work in 2024 reveal the importance of certifications and continuous learning. This article breaks down emerging trends, sought-after certifications, and provides practical solutions for enhancing your employability and expertise. What skills will be essential for remote job seekers, and how can you navigate this dynamic market to secure your dream role?

Posted 8 months ago

Explore the challenges and strategies of maintaining work-life balance while working remotely. Learn about unique aspects of remote work, associated challenges, historical context, and effective strategies to separate work and personal life.

Posted 8 months ago

Google is gearing up to expand its remote job listings, promising more opportunities across various departments and regions. Find out how this move can benefit job seekers and impact the market.

Posted 8 months ago

Learn about the importance of pre-onboarding preparation for remote employees, including checklist creation, documentation, tools and equipment setup, communication plans, and feedback strategies. Discover how proactive pre-onboarding can enhance job performance, increase retention rates, and foster a sense of belonging from day one.