Senior Data Scientist

Posted 8 days ago

💎 Seniority level: Senior, 5 years

📍 Location: US

💸 Salary: 150,000 - 180,000 USD per year

🔍 Industry: Healthcare

🏢 Company: Inspiren 👥 11-50 💰 $2,720,602 over 2 years ago · Machine Learning, Analytics, Information Technology, Health Care

⏳ Experience: 5 years

🪄 Skills: Python, Data Analysis, Machine Learning, Numpy, Tableau, Algorithms, Data science, Pandas, Data visualization, Data modeling

Requirements:
  • At least 5 years of experience with data science
  • Proven experience with analytics
  • Outstanding analytical skills and the ability to take a practical approach to solving problems in real-world settings
  • Ability to share your skills with other team members and contribute to learning as a group
  • Able to manage timelines, quality, and delivery
  • Effective collaborator who values cross-disciplinary delivery
Responsibilities:
  • Design, update, and maintain algorithms that power information and feedback from the Inspiren system.
  • Identify and integrate new datasets that can be used to provide additional insights from the Inspiren system.
  • Set up, own, and innovate on Inspiren’s data lake solution.
  • Create data visualizations as necessary.
  • Analyze data for trends and patterns.
  • Work with the operations, sales, and product teams to understand business problems and develop data-driven answers that meet their needs.
Apply

Related Jobs

Apply
🔥 Senior Data Scientist
Posted about 12 hours ago

📍 United States

🧭 Full-Time

💸 140,000 - 195,000 USD per year

🔍 Fintech

🏢 Company: Plum Inc

  • Master’s degree in Computer Science, Engineering, Physics, or a related technical field.
  • 5+ years of experience developing and deploying machine learning pipelines in production.
  • 2+ years of experience building Generative AI or LLM-based applications.
  • Strong programming skills in Python, with hands-on experience in ML/AI frameworks (e.g., LangChain, Transformers, LLM APIs).
  • Deep understanding of LLM evaluation, prompt engineering, and text generation quality metrics.
  • Experience designing and implementing RAG architectures.
  • Hands-on experience with Databricks, MLflow, or similar platforms.
  • Experience with cloud infrastructure (AWS preferred) and MLOps practices for deploying and maintaining models in production.
  • Strong problem-solving skills and ability to lead through ambiguity.
  • Excellent communication and documentation habits.
  • Design and architect end-to-end Generative AI pipelines using LLMs to process and generate context-aware results.
  • Integrate open-source and proprietary LLMs (e.g., GPT, LLaMA) via APIs and custom orchestration.
  • Build and optimize workflows using frameworks such as LangChain.
  • Design and implement RAG (Retrieval-Augmented Generation) architecture to inject relevant, contextual data into generation prompts.
  • Develop robust methods to evaluate and compare LLM outputs based on relevance, personalization, and factual accuracy.
  • Build automated and scalable LLM evaluation pipelines using embedding-based similarity, scoring metrics, and human-in-the-loop feedback.
  • Implement monitoring, observability, and logging for GenAI workflows to ensure reliability in production.
  • Collaborate with cross-functional teams to integrate generative outputs into client-facing applications.

AWS, Python, Cloud Computing, Machine Learning, MLFlow, Data science

Apply

📍 United States, Canada

💸 150,000 - 180,000 USD per year

🔍 Restaurant

  • Excellent SQL and Python skills
  • Strong data science and analytics background
  • Experience with SaaS and/or startups preferred
NOT STATED

Python, SQL, Data Analysis, ETL, Machine Learning, Business Operations, Data science, Analytical Skills, RESTful APIs, Data visualization, SaaS

Posted 1 day ago
Apply

📍 United States of America

🧭 Full-Time

💸 105,740 - 130,620 USD per year

🔍 Supply Chain

🏢 Company: global

  • Bachelor’s degree or higher in Supply Chain, Computer Science, Engineering, Data Science or related technical field.
  • 7+ years of business experience with cross-functional Supply Chain experience.
  • Celonis software experience including areas such as process analysis, transformations, action flows and automations.
  • Advanced data visualization and design skills, including data representation and presentation
  • Drive the execution of industry process intelligence solutions to ensure SC optimization and address critical business needs
  • Leverage strong technical acumen to advance data architecture strategy, data transformations, accuracy and availability
  • Identify and optimize complex operational processes through Celonis process digital twin development of data transformations, analyses, automations, action flows and predictive analytical apps

Python, SQL, Business Intelligence, Data Mining, Machine Learning, Snowflake, Algorithms, Data science, Data visualization, Data modeling

Posted 3 days ago
Apply

📍 United States

💸 195,000 - 200,000 USD per year

🔍 Fintech

🏢 Company: Clair 👥 51-100 💰 $25,000,000 Debt Financing almost 2 years ago · Human Resources, Financial Services, Banking, FinTech

  • 5+ years as a Data Scientist, with a strong focus on credit underwriting in the B2C fintech product space.
  • Proven experience with rules-based modeling, transaction labeling, and cash flow predictors.
  • Deep understanding of credit risk, including familiarity with relevant regulatory limits and compliance requirements.
  • Worked on a product that successfully went to market.
  • Proficient in Python, SQL, and data analysis tools.
  • Experience with financial data APIs like Plaid and handling unconventional datasets.
  • Strong background in statistical modeling, machine learning, and data visualization.
  • Ability to work with obscure data and deliver actionable insights.
  • Develop and refine rules-based models for transaction labeling (e.g., categorizing income buckets, expenses).
  • Build and optimize cash flow prediction models to assess user behavior and financial stability.
  • Conduct wage and hour assessments to ensure accurate and compliant financial modeling.
  • Analyze and interpret complex credit risk data, aligning models with regulatory requirements and limits.
  • Leverage Plaid and other financial APIs to process and analyze unconventional or obscure datasets.
  • Partner with engineering and product teams to bring data-driven insights into our product, ensuring scalability and reliability.
  • Collaborate across departments to align credit underwriting strategies with business goals.
  • Monitor and validate the performance of credit models post-launch, iterating as needed to optimize outcomes.
  • Stay current on regulatory changes and industry best practices in credit risk and financial modeling.

AWS, Python, SQL, Data Analysis, Data Mining, Machine Learning, Algorithms, Data science, Data Structures, Postgres, Regression testing, Pandas, Communication Skills, Analytical Skills, RESTful APIs, Data visualization, Financial analysis, Data modeling

Posted 4 days ago
Apply

📍 United States of America

💸 109,000 - 247,000 USD per year

🏢 Company: Shipt_External

  • 4+ years of experience in applying experimentation and statistics in an industry setting, with a track record of delivering impactful results
  • PhD or Master’s in Computer Science, Statistics, or a related field
  • Experience with experiment design, rollout, analysis and reporting
  • 4+ years Python development
  • Experience presenting to stakeholders
  • Experience leading key technical projects and substantially influencing the scope and output of others
  • Designing and analyzing advanced experiments for Shipt pricing models
  • Implementing cutting edge models in collaboration with product, business, and engineering teams
  • Presenting recommendations to key stakeholders

Python, Data Analysis, Machine Learning, A/B testing

Posted 7 days ago
Apply

📍 United States

🧭 Full-Time

🔍 Software Development

🏢 Company: Givebutter 👥 101-250 💰 $50,000,000 Series A about 1 year ago · Non Profit, Event Management, Software

  • 5+ years of experience as a Data Scientist or similar role, ideally within a growth, revenue, or marketing analytics function.
  • Strong statistical foundations and hands-on experience with:
    • Logistic/linear regression, clustering, and segmentation
    • Machine learning techniques (e.g., classification, regression, ensemble methods)
    • Time series forecasting and anomaly detection
    • Causal inference and experimentation design
    • Building and validating predictive models at scale
  • Expert proficiency in SQL and at least one statistical programming language (e.g., Python, R), and familiarity with machine learning libraries such as scikit-learn, XGBoost, TensorFlow, or similar.
  • Experience with BI tools (e.g., Looker, Mode, Tableau) and modern data stack environments (e.g., dbt, Snowflake, BigQuery).
  • A strong business acumen with a deep curiosity about how data can drive decisions.
  • Excellent communication and storytelling skills—you make data come alive.
  • Manage the inflow of data requests from revenue stakeholders
  • Be the subject matter expert on the customer funnel, leading the charge in deepening the company’s understanding of what drives acquisition, conversion, engagement, and retention through behavioral insights.
  • Develop and maintain models for: Customer lifetime value, Multi-touch attribution, Customer segmentation and clustering, Churn risk prediction, Engagement scoring, Revenue forecasting
  • Apply statistical and machine learning methodologies including:
    • Logistic and linear regression
    • Clustering and dimensionality reduction
    • Predictive modeling (e.g., decision trees, random forests, gradient boosting, neural networks)
    • Causal inference and experimentation (e.g., A/B testing, uplift modeling)
  • Design and deploy machine learning models that enhance targeting, personalization, and lifecycle marketing efforts.
  • Collaborate cross-functionally with GTM teams (Marketing, Sales, Customer Success) to embed insights and models into decision-making processes.
  • Communicate complex findings clearly and effectively to both technical and non-technical stakeholders.
  • Gain a deep understanding of the revenue team’s business needs and data sources, partnering with stakeholders to understand what’s needed in the short- and medium-term.
  • Working with our analytics engineer, develop the definition, logic, and documentation of the data model that supports these needs, with a focus on quality, usability, adaptability, and clear communication/documentation in a fast-changing environment.

AWS, Python, SQL, ETL, Machine Learning, Snowflake, Tableau, Data science, Regression testing, Data modeling, Data analytics, Customer Success, A/B testing

Posted 8 days ago
Apply

📍 United States

💸 250,000 - 450,000 USD per year

🔍 Insurance

🏢 Company: Quanata 👥 101-250 · Software Engineering, Information Technology, Software

  • Strong Python and SQL skills, especially in cloud-based environments (AWS, Azure, GCP).
  • Familiarity with software engineering best practices (version control, code reviews, CI/CD, containerization) is critical.
  • Deep knowledge of machine learning algorithms and data science methodologies, with an ability to deploy models in a production setting.
  • Lead the design, development, and maintenance of advanced personal auto insurance risk models and foundational data pipelines.
  • Create modular, reusable components and libraries that enhance the efficiency and scalability of our modeling process.
  • Mentor fellow data scientists and data engineers by encouraging best practices for code structure, version control, CI/CD, testing, and reproducibility.
  • Review pull requests and champion code quality across the data team.
  • Partner with actuarial, product, and engineering teams to translate complex models and analytical tools into scalable, real-world applications that add measurable business value.
  • Manage the full project lifecycle—from requirements gathering and architecture to deployment and monitoring—ensuring solutions are robust and optimized for performance in cloud-based environments.
  • Present analytical findings and operational roadmaps to senior leadership. Demonstrate how sound engineering practices and data-driven insights can inform strategic business decisions.
  • Lead efforts to optimize and automate cloud-based data science environments, establishing guidelines for resource utilization, environment provisioning, and production deployments.
  • Stay current with emerging technologies in data engineering, MLOps, and machine learning. Continuously evaluate and integrate new techniques to keep Quanata at the cutting edge of risk prediction.

AWS, Python, SQL, Cloud Computing, Git, Machine Learning, Data engineering, Data science, Communication Skills, CI/CD, RESTful APIs, Software Engineering

Posted 8 days ago
Apply

📍 United States

🧭 Full-Time

💸 150,000 - 200,000 USD per year

🔍 Software Development

🏢 Company: Dataiku 👥 1001-5000 💰 $200,000,000 Series F over 2 years ago · Artificial Intelligence (AI), Big Data, Data Integration, Analytics, Enterprise Software

  • Over 5 years of experience with Python and SQL
  • Over 5 years of experience with building ML models and using ML tools (e.g., sklearn)
  • Experience with LLMs
  • Experience with data visualization and building web apps with Python frameworks (Dash, Streamlit)
  • Understanding of underlying data systems such as Cloud architectures and SQL
  • Bachelor’s or Master’s program focused on: Statistics, Computer Science, or a related field
  • Scope and co-develop production-level data science projects with our customers across different industries and use cases
  • Help users discover and master the Dataiku platform via user training, office hours, and ongoing consultative support
  • Provide strategic input to the customer and account teams that help make our customers successful.
  • Provide data science expertise both to customers and internally to Dataiku’s sales and marketing teams
  • Lead pre-sales scoping of technical data science projects and design appealing proposals
  • Flag technical and non-technical account risks (onboarding issues, performance pitfalls, timeline slippage)
  • Develop custom Python-based “plugins” in collaboration with Solutions, R&D, and Product teams, to enhance Dataiku’s functionality
  • Lead Data Scientist engagements: coordinate agile sprints, prioritize tasks, estimate effort, and groom the backlog
  • Run demo booth/tech talk duties at company public events (e.g. Everyday AI)
  • Lead technical interviews for Junior Data Scientist candidates
  • Contribute to 2 internal assets (internal best practice or external blog post/project on the public gallery) per year

AWS, Python, SQL, Cloud Computing, Machine Learning, MLFlow, Data science, Pandas, RESTful APIs, Data visualization

Posted 8 days ago
Apply

📍 United States

💸 120,000 - 170,000 USD per year

🔍 Insurance

🏢 Company: Verikai_External

  • Bachelor’s degree or above in Statistics, Mathematics, Actuarial Science or Computer Science with at least 5 years of relevant working experience
  • Possesses extensive expertise in machine learning algorithms and techniques, including supervised learning, unsupervised learning, and deep learning
  • Has in-depth knowledge of statistical analysis methods and their applications in data science
  • Skilled in optimizing data processing workflows to efficiently handle and analyze large amounts of data
  • Demonstrates advanced proficiency in Python, with extensive experience in writing efficient, maintainable, and well-documented code
  • Has a working knowledge of PySpark, capable of performing basic data manipulation and processing tasks in a distributed computing environment
  • Experience of working on a cloud-based ML platform is a plus; Experience of working with insurance related data is a plus
  • Takes full accountability and ownership for their work, ensuring tasks are completed to the highest standards and within deadlines
  • Demonstrates exceptional attention to detail in all aspects of work, ensuring accuracy and completeness
  • Able to think critically and make informed decisions by assessing all available data and considering various perspectives
  • Capable of working autonomously with minimal supervision, managing time effectively and prioritizing tasks to meet deadlines
  • Maintains a positive and professional attitude, demonstrating reliability, flexibility, and a strong work ethic
  • Committed to supporting and assisting colleagues, contributing to a collaborative and team-oriented work environment
  • Communicates clearly and effectively, both verbally and in writing, ensuring complex statistical concepts are conveyed accurately to colleagues and stakeholders
  • Innovate and implement cutting-edge machine learning algorithms, aiming to extract greater lift from our data and deliver enhanced value to our customers
  • Proactively seek out and identify useful attributes from various data sources to enhance our core data assets. Rigorously validate the utility and relevance of new data to ensure it contributes to the improvement of our models and insights
  • Conduct thorough descriptive and statistical analyses on customer data to uncover valuable patterns and insights. Apply creative analytical approaches to ensure that results are directly aligned with customer needs and support their business decision-making processes
  • Present results, findings, and models to customers and stakeholders with a high level of technical expertise and industry knowledge. Act as a technical expert, industry insider, and company promoter, ensuring that presentations are clear, informative, and persuasive
  • Work closely with other data scientists, engineers, product managers, and other stakeholders to integrate machine learning models and statistical analyses into our products and services. Ensure seamless collaboration and knowledge sharing across the teams
  • Keep up with the latest developments and trends in machine learning, data science, and the insurance technology industry. Apply this knowledge to continuously improve our models, methodologies, and approaches
  • Adhere to data privacy regulations and ensure that all data handling practices comply with relevant legal and ethical standards. Maintain the highest levels of data security and confidentiality
  • Mentor and support the professional growth of the junior team members, fostering a culture of continuous learning and development

AWS, Python, SQL, Cloud Computing, Data Analysis, Data Mining, Keras, Machine Learning, Numpy, Algorithms, Data science, Pandas, Tensorflow, Communication Skills, RESTful APIs, Data visualization, Data modeling

Posted 8 days ago
Apply

📍 United States

🧭 Full-Time

💸 160,000 - 175,000 USD per year

🔍 Blockchain Intelligence

🏢 Company: TRM Labs 👥 101-250 💰 $70,000,000 Series B over 2 years ago · Cryptocurrency, Compliance, Blockchain, Big Data

  • You have 8+ years of experience working in an analytical role.
  • You have customer-facing experience and enjoy sitting with customers to understand their challenges.
  • You have strong knowledge of relational databases (e.g., SQL).
  • You have proven experience with at least one programming language (Python preferred) and are comfortable developing code in a team environment (e.g. git, notebooks, testing).
  • You have strong experience building scalable data and analytics products from design to deployment phases.
  • You think about data in terms of statistical distributions and have a big enough analytics toolbox to know how to find patterns in data and identify targets for performance.
  • You are delivery-oriented and have experience in a fast-paced environment.
  • You have excellent verbal and written communication skills and experience in influencing decisions with information.
  • You have a high tolerance for ambiguity. You find a way through. You anticipate. You connect and synthesize.
  • Lead the implementation of new advanced analytics products that allow our customers to explore and uncover hidden insights in complex blockchain data, such as assessing the risk of DeFi services, stablecoins, global regulations, and other crypto trends.
  • Lead the roadmap and prioritization of the analytics customer product suite based on market demands.
  • Use your high-level business acumen to collaborate with our go-to-market team and work directly with our customers to understand their challenges and pain points in order to creatively translate them into impactful models and products.
  • Develop complex statistical analysis, algorithms, and interactive visualizations that will be consumed by former FBI, Secret Service, and Europol agents and analysts to detect new threat vectors unique to cryptocurrencies and blockchains.
  • Collaborate with engineers and research scientists to design algorithms to analyze complex data structures inherent to cryptocurrencies and blockchains. This will involve working with distributed ledger technologies, cryptographic protocols, and transactional data to uncover suspicious activities.
  • As a leader in the team, you will collaborate closely with engineers, data scientists, and research scientists to align on project goals, methodologies, and deliverables. You will communicate your findings and insights effectively to both technical and non-technical stakeholders.
  • Develop your skills through exceptional training as well as frequent coaching and mentoring from colleagues
  • Establish best practices and statistical rigor around data-driven decision-making

Python, SQL, Blockchain, Data Analysis, Machine Learning, Algorithms, Data science, Communication Skills, Analytical Skills, Collaboration, Customer service, RESTful APIs, DevOps, Written communication, Teamwork, Data visualization, Data modeling, Data analytics

Posted 14 days ago
Apply