
Data Scientist

Posted 2 days ago


💎 Seniority level: Junior, 2+ years

📍 Location: United States, United Kingdom

💸 Salary: 120,000 - 170,000 USD per year

🔍 Industry: Data Science

⏳ Experience: 2+ years

🪄 Skills: AWS, Python, SQL, Cloud Computing, Machine Learning, Data Science

Requirements:
  • Graduate degree or equivalent practical experience in operations research, statistics, computer science, machine learning, or a related field.
  • 2+ years of professional experience in a data-driven role, such as data scientist, operations research analyst, ML engineer, or research scientist.
  • Demonstrated proficiency in operations research techniques, including optimization, simulation, and decision analysis.
  • Strong understanding of modern product development techniques and methodologies.
  • Excellent cross-functional communication skills with the ability to collaborate effectively across teams.
Responsibilities:
  • Lead and contribute to ongoing projects leveraging advanced operations research methods, machine learning (ML), and statistical techniques in a full-stack data science environment.
  • Design, implement, and optimize decision-making models to improve fleet operations and logistics.
  • Collaborate with cross-functional teams to identify high-value opportunities for data science applications within a massive greenfield space.
  • Develop actionable insights and solutions, presenting results to stakeholders and measuring business impact through experimentation and rigorous analysis.
  • Stay up to date with the latest trends in operations research and data science, applying innovative methods to solve complex business problems.

Related Jobs

🔥 Senior Data Scientist
Posted about 3 hours ago

πŸ“ United States

🧭 Full-Time

πŸ” Healthcare

🏒 Company: Atropos Health

Requirements:
  • Degree in Clinical Informatics, Bioinformatics, CS, Engineering, Epidemiology, Statistics, or a quantitative discipline
  • Experience manipulating large data sets and developing and deploying models onto production infrastructure
  • Fluency with Python, R, SQL, git, Linux and cloud infrastructure (AWS and Docker). You have published a Python or R package before and are familiar with virtual environments
  • Sufficiently knowledgeable about healthcare to understand product needs
  • Excellent problem-solving, project management and team collaboration skills
  • Flexible thinking: you know how to re-frame problems to find practical solutions
Responsibilities:
  • Work with Product, Clinical and Engineering stakeholder teams to understand product and clinical requirements and deliver solutions that balance technical rigor with practical application
  • Test, productionalize and maintain data science products with software engineering best practices including code management and documentation
  • Articulate and deconstruct complex projects into workable solutions and identify appropriate data and methods
  • Practice good judgment and solicit information to make good and timely design decisions
  • Manage and drive projects, working with stakeholders to address dependencies and gaps
  • Solicit user feedback and propose opportunities for product innovation (e.g. to add new functionalities, improve model performance, automate processes)
  • Stay abreast of research and conduct literature and empirical research to propose appropriate solutions while sidestepping less promising ones
  • Excellent writing skills – you may be asked to contribute to our technical blogs

AWS, Docker, Python, SQL, Git, Machine Learning, Data Engineering, Data Science, Linux, DevOps, Data Visualization, Data Modeling


πŸ“ United States

🧭 Full-Time

πŸ” Healthcare

🏒 Company: VetsEZπŸ‘₯ 101-250DatabaseInformation ServicesInformation TechnologySoftware

Requirements:
  • 5+ years of experience working with healthcare interoperability standards, including HL7v2, HL7 FHIR, X12, and CDA.
  • Hands-on experience with healthcare data quality assessment, machine learning, and clinical NLP applications.
  • Proficiency in Python, R, SAS, SQL, and experience working with big data platforms (Snowflake, AWS, PySpark).
  • Strong knowledge of FHIR, CCDA, and HL7 parsing techniques.
  • Experience analyzing and improving clinical data exchange quality and usability.
  • Experience in data visualization and dashboard development using Power BI, Tableau, or similar tools.
  • Strong problem-solving and analytical skills, with the ability to work effectively in cross-functional teams.
  • Ability to work closely and effectively with developers in an Agile and DevSecOps environment.
Responsibilities:
  • Design and implement healthcare data interoperability solutions, collaborating with HDM leadership, DevSecOps teams, and VHA business owners.
  • Support data validation, quality analysis, and transformation across HL7 standards (HL7v2, HL7 FHIR, X12, CDA) to ensure seamless integration between systems.
  • Utilize Natural Language Processing (NLP) and machine learning techniques to analyze clinical data, focusing on prescription SIG classification, procedure text, and immunization records.
  • Develop clinical usability algorithms and automation tools to improve data validation, quality assessment, and exchange between healthcare partners.
  • Collaborate with software developers, data analysts, and functional testers to enhance data quality, system integration, and interoperability efforts.
  • Build interactive dashboards and data visualization tools (Power BI, Tableau, or similar) to support informed clinical decision-making.
  • Analyze and interpret large-scale healthcare datasets from sources such as HL7 FHIR, V2, and CCDA XML files to improve data reliability.
  • Support unit and regression testing for data integrity validation in VA enterprise health data environments.
  • Take on additional tasks and responsibilities as needed to support team objectives and ensure the success of the project.

AWS, Python, SQL, Agile, Data Analysis, Machine Learning, Snowflake, Tableau, Data Engineering, DevOps, Data Visualization, Data Modeling

Posted about 15 hours ago
🔥 Actuarial Data Scientist
Posted about 17 hours ago

πŸ“ United Kingdom

πŸ” Insurance

🏒 Company: careers

Requirements:
  • Strong undergraduate degree in a STEM subject
  • Work (or postgraduate research) experience in an analytical role
  • Experience of demographic/biometric research or assumption setting
  • Experience of using R or python in a work or research environment
  • Postgraduate degree
  • Qualified or part-qualified actuary
  • Exposure to working in and/or with Actuarial teams in the insurance industry
Responsibilities:
  • Builds and maintains software related to modelling and forecasting mortality rates and future improvements in the markets and geographies in which RGA writes business
  • Works with stakeholders across the business to help them understand the tools and models that are on offer, and troubleshoot any issues that arise
  • Writes high quality code in R or Python and understands how to structure what they write such that it is readable, maintainable, and reusable.
  • Works with a range of stakeholders across the organization to understand their needs and prioritize potential projects
  • Responds to development requests and bug-fix-requests in a timely manner, and communicates timelines and progress to all relevant stakeholders
  • Serves tools and insights using a range of modern solutions including launchers and dashboards
  • Stays abreast of new developments in mortality modelling across different geographies, and explains the benefits of these to stakeholders ahead of implementing any agreed methods
  • Finds, collates, cleans, and standardizes a range of key datasets from multiple sources and geographies to help with understanding mortality improvements (cause of death, socioeconomic data, etc.).
  • Conducts ad hoc research into topics related to future mortality improvements, and shares findings with stakeholders across global functions and business units

Python, SQL, Data Analysis, Data Science, Communication Skills, Analytical Skills, Data Visualization, Data Modeling

🔥 Data Scientist
Posted 1 day ago

πŸ“ England, United Kingdom

🧭 Permanent

🏒 Company: Keller Executive SearchπŸ‘₯ 51-100

Requirements:
  • Bachelor's degree in Data Science, Computer Science, Statistics, or a related field; a Master's degree is preferred.
  • Proven experience as a Data Scientist or in a similar analytical role.
  • Strong programming skills in Python or R, with proficiency in data manipulation libraries (e.g., Pandas, NumPy).
  • Experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow) and data visualization tools (e.g., Tableau, Matplotlib).
  • Solid understanding of statistics, probability, and data-driven decision-making.
  • Experience working with databases (SQL) and data warehousing solutions.
  • Strong problem-solving skills and ability to work collaboratively within a team environment.
  • Excellent written and verbal communication skills, with the ability to effectively present complex information.
  • Proficiency in English; knowledge of additional languages is a plus.
Responsibilities:
  • Analyze and interpret complex data from diverse sources to inform strategic business decisions.
  • Develop and implement machine learning models and statistical analyses to solve business challenges.
  • Collaborate with cross-functional teams to identify opportunities for leveraging data to drive business growth.
  • Visualize data through intuitive dashboards and reports to effectively communicate findings and insights to technical and non-technical stakeholders.
  • Stay updated on industry trends and best practices to continuously enhance the team’s analytical capabilities.
  • Write clear documentation on methodology, model interpretations, and implementation strategies.

Python, SQL, Data Analysis, Machine Learning, NumPy, Tableau, Data Science, REST API, Pandas, TensorFlow, Data Visualization, Data Modeling


πŸ“ United States

🧭 Full-Time

πŸ” Software Development

🏒 Company: Trace MachinaπŸ‘₯ 11-50πŸ’° $4,700,000 Seed 7 months agoIT InfrastructureRoboticsSoftware

Requirements:
  • 3+ years of experience as a Data Scientist, with a strong focus on AI and machine learning
  • Expertise in machine learning algorithms, data analysis, and statistical modeling techniques
  • Proficiency in Python, R, or other data science programming languages, with experience using libraries such as TensorFlow, PyTorch, Scikit-learn, and Pandas
  • Strong knowledge of deep learning, reinforcement learning, or other advanced AI techniques
  • Experience with large-scale data processing, including working with big data technologies (e.g., Spark, Hadoop)
  • Familiarity with cloud infrastructure (AWS, GCP, Azure) and deploying machine learning models in production
  • Strong understanding of data wrangling, feature engineering, and building predictive models
  • Experience with version control (Git) and working in collaborative environments
  • Excellent problem-solving skills and ability to generate actionable insights from data
  • Ability to communicate complex AI/ML concepts effectively to both technical and non-technical teams
Responsibilities:
  • Design, implement, and deploy machine learning models to optimize software build systems, including caching, task distribution, and execution workflows
  • Work with large datasets to identify patterns, anomalies, and insights that inform decisions for improving build processes and remote execution
  • Develop predictive models to optimize build times, cache hit rates, and system resource utilization
  • Conduct experiments to improve the efficiency of build systems through data-driven decisions, leveraging AI/ML techniques such as reinforcement learning and optimization
  • Collaborate with cross-functional teams (engineering, product, and operations) to translate business problems into AI/ML-driven solutions
  • Analyze customer usage data to identify opportunities for feature improvements and innovations within the NativeLink platform
  • Develop custom algorithms for performance monitoring, anomaly detection, and optimization of CI/CD pipelines
  • Build, test, and validate machine learning models using a variety of techniques, ensuring they are scalable, robust, and interpretable
  • Build and maintain data pipelines to support model training, testing, and deployment in production environments
  • Communicate findings and insights to both technical and non-technical stakeholders in a clear and actionable way

AWS, Python, Cloud Computing, Data Analysis, GCP, Git, Hadoop, Machine Learning, PyTorch, Azure, Data Science, Pandas, Spark, TensorFlow, CI/CD, Data Visualization

Posted 1 day ago

πŸ“ United States of America

πŸ’Έ 135886.0 - 189000.0 USD per year

🏒 Company: external

Requirements:
  • 3 years of experience with consumer pricing concepts such as price elasticity, price optimization, demand forecasting, or consumer segmentation
  • Experience building models in Python using causal inference, predictive modeling, time series forecasting, optimization, econometrics, or statistics to optimize decision-making
  • Experience analyzing enterprise-level multi-dimensional data sets stored in Google BigQuery, Snowflake, Amazon Redshift, or a similar enterprise database with tools including Python and SQL
  • Experience deploying machine learning algorithms using AWS, GCP, Airflow, Terraform, or Docker
  • Experience presenting analytical methodology and results to non-technical audiences
Responsibilities:
  • Create algorithms that drive company pricing and promotion strategies and engines.
  • Build and iterate algorithms and platforms for pricing and promotions strategies.
  • Prototype, test, deploy, and measure new pricing and promotions models and partner with data engineering teams to automate and productionize systems.
  • Apply econometric, statistical models and machine learning to explain and predict business impacts of pricing and promotions decisions.
  • Design, execute, and assess pricing and promotion experiments to drive both top and bottom line returns.
  • Utilize and continually learn latest modeling techniques to identify optimal price and promotions, including Bayesian optimization, surrogate modeling, causal inference, and others.

AWS, Docker, Python, SQL, Data Analysis, GCP, Machine Learning, Snowflake, Airflow, Algorithms, Data Science, Data Visualization

Posted 1 day ago

πŸ“ United States, Canada

πŸ’Έ 160000.0 - 200000.0 USD per year

🏒 Company: JobgetherπŸ‘₯ 11-50πŸ’° $1,493,585 Seed about 2 years agoInternet

Requirements:
  • Master’s degree in Data Science, Computer Science, Mathematics, Statistics, Economics, or a related field; equivalent experience is also considered.
  • 4+ years of experience in data science, particularly within consumer apps or mobile game applications.
  • Proven experience working on user acquisition campaigns and developing data pipelines, reporting, and visualizations.
  • Proficiency in Python (or R), SQL, and statistical modeling tools.
  • Experience with data visualization tools like Looker, Tableau, or QlikView.
  • Familiarity with SKAdNetwork, mobile attribution platforms (e.g., Adjust, Singular, AppsFlyer), and methodologies like media mix modeling and incremental measurement.
  • Exceptional communication skills, with the ability to translate complex data findings into clear insights for non-technical stakeholders.
  • Strong analytical and problem-solving skills with the ability to approach challenges proactively.
Responsibilities:
  • Analyze and interpret complex datasets to identify trends, patterns, and insights that drive user acquisition and organic growth.
  • Design and maintain dashboards, visualizations, and reports that provide actionable insights for both strategic and operational decisions.
  • Collaborate closely with the game development team to define and collect in-game data, ensuring that metrics align with business and gameplay goals.
  • Implement advanced statistical techniques and machine learning models to optimize user acquisition and marketing efforts.
  • Design, execute, and analyze experiments (A/B and multivariate tests) to assess marketing campaigns and provide actionable recommendations.
  • Develop and manage ETL pipelines, attribution methodologies, and conversion value frameworks to ensure data accuracy in campaign analysis.
  • Work cross-functionally with Engineers, Business Intelligence, and Analytics teams to improve data collection processes and streamline data implementation.

AWS, Python, SQL, Data Analysis, ETL, Machine Learning, Algorithms, Data Science, Mobile Testing, Communication Skills, Analytical Skills, Reporting, Data Visualization, Data Modeling, A/B Testing

Posted 2 days ago
🔥 Data Scientist
Posted 2 days ago

πŸ“ US

🏒 Company: G2πŸ‘₯ 501-1000πŸ’° $157,000,000 Series D almost 4 years agoπŸ«‚ Last layoff over 4 years agoConsumer ReviewsMarketplaceBusiness IntelligenceB2BEnterprise SoftwareMarketing AutomationSoftware

Requirements:
  • 4+ years of experience as a data scientist involved in data extraction, analysis, and modeling.
  • 4+ years of experience in Python and SQL
  • Strong understanding of statistics
  • Proficiency in machine learning algorithms and all stages of machine learning.
  • Familiarity with neural networks and deep learning.
  • Familiarity with AWS services and Snowflake (or similar SQL DB)
  • Familiar with containerization (e.g., Docker) and API frameworks (e.g., Flask).
  • Demonstrated ability to troubleshoot issues in production environments, including debugging data pipelines or model related errors.
Responsibilities:
  • Lead the development and refinement of machine learning models, including feature engineering, algorithm selection, and model optimization.
  • Conduct experiments with advanced machine learning techniques to improve model performance and deliver impactful solutions.
  • Build, maintain, and optimize data pipelines to support end-to-end machine learning workflows.
  • Analyze large datasets to extract insights and provide actionable recommendations for business teams in conjunction with model development
  • Collaborate with ML engineers to operationalize models, ensuring scalability and reliability.
  • Work closely with cross-functional teams, including product managers and engineers, to translate business requirements into machine learning solutions.
  • Document and present methodologies, findings, and results to both technical and non-technical audiences.
  • Act as an on-call resource to troubleshoot and resolve issues with deployed machine learning models.
  • Collaborate with ML engineers to monitor model performance and ensure operational stability.
  • Mentor junior team members, providing technical support, guidance on model development, and best practices implementation.

AWS, Docker, Python, SQL, Data Analysis, Data Mining, Flask, Kubeflow, Machine Learning, MLflow, NumPy, Snowflake, API Testing, Spark, TensorFlow, Troubleshooting, Debugging


πŸ“ United States

🧭 Full-Time

πŸ” Real Estate

🏒 Company: Property LeadsπŸ‘₯ 11-50Real Estate

Requirements:
  • 4+ years of experience working with real estate data, particularly niche data sets (divorce, bankruptcy, probate, etc.).
  • Proficiency in data analysis and engineering tools (e.g., SQL, Python, Pandas, Excel).
  • Experience with data platforms and lead generation tools (e.g., PropStream, BatchLeads).
  • Experience developing scoring models, predictive features, or data-driven segmentation.
  • Strong understanding of ETL pipelines, public record APIs, and/or scraping strategies
  • Proven ability to source and curate high-quality leads.
  • Meticulous attention to detail and a passion for clean, actionable data
  • Excellent communication skills and a collaborative, fast-moving mindset.
Responsibilities:
  • Source and analyze specialized real estate data sets, including divorce records, bankruptcy filings, probate cases, tax liens, pre-foreclosures, and other distressed property indicators.
  • Develop and maintain efficient processes for extracting, cleansing, and managing data from multiple sources.
  • Identify patterns and insights in data to improve lead targeting and conversion rates.
  • Collaborate with the marketing and sales teams to create actionable lead lists and improve outreach strategies.
  • Stay current with trends and tools in real estate data sourcing and analysis.
  • Ensure data accuracy, completeness, and compliance with relevant regulations.
  • Provide input into data roadmap, tooling decisions, and long-term analytics strategy
  • Mentor junior analysts or data contributors (as team grows)

Python, SQL, Data Analysis, Data Mining, ETL, Machine Learning, Data Science, Pandas, RESTful APIs, Data Visualization, Lead Generation, Data Modeling

Posted 3 days ago

πŸ“ Glassdoor is a registered company (US)

🧭 Full-Time

💸 96,400 - 128,000 USD per year

🔍 Internet

🏢 Company: Glassdoor (👥 501-1000; 💰 Secondary Market, about 8 years ago; 🫂 last layoff about 2 years ago; Employment, Digital Media, Career Planning, Recruiting, Social Media)

Requirements:
  • 2+ years of data science/decision science/analytics experience, ideally in an Internet company with experience in web analytics platforms and tools
  • Proficiency in SQL, Python or R required
  • Experience working with large data sets, modeling, statistics (descriptive and inferential), and A/B & multivariate testing
  • Familiarity with large language models (LLMs) and their applications in data analysis and business insights
Responsibilities:
  • Collaborate with product teams to understand and address product challenges using machine learning, statistical, and analytical expertise.
  • Leverage data to analyze and enhance user experience, converting insights into strategic recommendations.
  • Provide strategic and analytical support to the product and engineering partners, offering data-driven guidance to optimize various aspects of Glassdoor Web and App experiences.
  • Design and implement data models, tools, and dashboards to facilitate self-service data access and insights.
  • Conduct A/B testing to consistently improve user engagement and evaluate new product features.

AWS, Python, SQL, Data Analysis, Machine Learning, RDBMS, REST API, Communication Skills, Analytical Skills, Excellent Communication Skills, Data Visualization, Data Modeling, A/B Testing

Posted 5 days ago