

Transparent AI Platform

Explainable AI in biomedicine generates insights that biopharma, physicians, and patients can trust. Our extensively validated deep learning platform is optimized for the discovery of clinical diagnostic tests across multiple technologies, including genomics, transcriptomics, proteomics, and radiomics.

[Figure: BDSX diagnostic solutions]

Our Machine Learning Platform

We combine biological information related to the tumor, immune response, and host status with clinical and radiomic data as inputs for our proprietary AI platform, which enables us to interpret the holistic disease state of each patient or clinical dataset we encounter. We continuously incorporate new market insights and patient data to enhance our platform through a data-driven learning loop.

Standard Machine Learning Challenges


Researchers commonly encounter machine learning-based biological discoveries that cannot be repeated or validated when assessed in additional specimen cohorts. This challenge, commonly referred to as overfitting, occurs when a model learns a pattern that fits the initial training dataset perfectly but fails to generalize to new data.
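The effect is easy to reproduce. In this illustrative sketch (not drawn from any Biodesix dataset), a 9th-degree polynomial passes through 10 noisy training points almost exactly, while a simple line captures the real trend and generalizes better:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship between a marker level (x)
# and an outcome score (y). The true rule is y = 2x plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(scale=0.2, size=x_test.size)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A 9th-degree polynomial can thread all 10 training points...
overfit = np.polyfit(x_train, y_train, deg=9)
# ...while a straight line only captures the broad trend.
simple = np.polyfit(x_train, y_train, deg=1)

print(mse(overfit, x_train, y_train))  # near zero: a "perfect pattern"
print(mse(overfit, x_test, y_test))    # much larger on unseen data
print(mse(simple, x_test, y_test))     # the simpler model holds up
```

The overfit model's training error is essentially zero, yet its error on new points balloons — exactly the failure mode seen when a discovery does not validate in a second cohort.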


One Solution: The Right Fit

We believe our platform overcomes standard machine learning challenges faced in life sciences research. For over 15 years, we have focused on developing proprietary computational techniques to ensure that each diagnostic test discovered with the Diagnostic Cortex can be further developed to perform consistently in the clinical testing environment.

Regularization to Avoid Overfitting
Layers of Data Abstraction
Ensemble Averaging ("bagging")
Hierarchical Approach to Classification
Avoidance of Feature Selection

Allow the machine to decide what is important.

Combining Multiple Learners ("boosting")
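To make one of these techniques concrete, ensemble averaging ("bagging") trains many learners on bootstrap resamples of a cohort and averages their predictions, damping the influence of any single noisy sample. The sketch below is a generic illustration of the idea, not Biodesix's proprietary implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy cohort: marker level x predicts outcome y = 3x plus noise.
x = rng.uniform(0, 1, size=40)
y = 3 * x + rng.normal(scale=0.5, size=x.size)

def fit_line(xs, ys):
    """Least-squares slope and intercept for one bootstrap sample."""
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return slope, intercept

# Bagging: resample the cohort with replacement many times,
# fit a simple learner to each resample, and keep them all.
models = []
for _ in range(200):
    idx = rng.integers(0, x.size, size=x.size)  # bootstrap indices
    models.append(fit_line(x[idx], y[idx]))

def bagged_predict(x_new):
    """Average the predictions of all bootstrap learners."""
    preds = [slope * x_new + intercept for slope, intercept in models]
    return float(np.mean(preds))

print(bagged_predict(0.5))  # close to the true value 3 * 0.5 = 1.5
```

Because each learner sees a slightly different cohort, their individual errors partially cancel when averaged, which is what makes the resulting classifier more stable across specimen cohorts.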

The Black Box

While machine learning methods have grown increasingly common in biological research, AI-powered diagnostic tests have been slower to enter the clinic because of the "black box" dilemma inherent in many AI algorithms, where it can be unclear how the algorithm is making a decision.

Our Solution: Explainable AI

We recently added a tool called the Shapley Explainability Module to our Diagnostic Cortex. This module reveals the features, or molecules, being used throughout the platform's calculations and the contributions these features make to the phenotypes of interest (such as aggressive lung cancers or treatment response). By making visible how an algorithm arrives at its results, the module helps researchers and physicians understand the AI's decision-making process and builds confidence in AI-driven diagnostics.
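Shapley values come from cooperative game theory: a feature's value is its average marginal contribution to the model output across all subsets of the other features. The sketch below computes exact Shapley values for a hypothetical two-marker toy model ("proteinA", "proteinB" are made-up names); it illustrates the general method, not the proprietary module itself:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, model):
    """Exact Shapley values: each feature's weighted average marginal
    contribution to the model output, over all subsets of the others."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        contrib = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Classic Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib += weight * (model(s | {f}) - model(s))
        phi[f] = contrib
    return phi

# Hypothetical toy "model": the output rises by 2 when protein A is
# present and by 1 when protein B is present.
def toy_model(present):
    return 2.0 * ("proteinA" in present) + 1.0 * ("proteinB" in present)

print(shapley_values(["proteinA", "proteinB"], toy_model))
# proteinA is attributed 2.0, proteinB 1.0
```

A useful sanity check is the efficiency property: the attributions always sum to the difference between the model's output with all features and its output with none, so nothing the model "decides" goes unexplained.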


Case Study: Diagnostic Test Discovery & Validation

A test developed for anti-PD-1 therapy in second-line non-small cell lung cancer identifies a group of patients who are unlikely to demonstrate long-term benefit from nivolumab; the test is also likely predictive for nivolumab vs. docetaxel.

[Figure panels: Development (nivolumab, N=98); Validation (nivolumab, N=32); Evaluation (chemotherapy, N=68)]

Source: Goldberg 2017 (SITC poster)

Data-driven diagnostic solutions
