Highly Intelligible Models
XTRACTIS-induced models are intelligible: humans can fully understand the internal decision-making logic of the AI systems automating the decisions.
This chart shows the “#1 ranking” counts for INTELLIGIBILITY observed from 23 Public Use Cases with benchmarks.
Highly Predictive Models
XTRACTIS-induced models are robust: their performance is at least as high as that of mainstream AI and data-driven modeling technologies, as the benchmarks show.
This chart shows the “#1 ranking” counts for PERFORMANCE observed from 23 Public Use Cases with benchmarks.
Since Logistic Regression is inapplicable to regression problems (UC#02, UC#15, and UC#23) and unavailable for UC#01, Graph #1 displays the scores computed over all 23 UCs but for only 4 modeling techniques. Graph #2 displays the scores of all 5 modeling techniques, computed only over the 19 UCs for which Logistic Regression is available.
All quantitative metrics are defined in the use case documents.
The benchmarks of XTRACTIS, Logistic Regression, Random Forest, Boosted Tree, and Neural Network are carried out on Test and External Test datasets, i.e., all the unknown cases of the reference dataset that were used neither for training nor for validation.
In these graphs, the Intelligibility Score IS and the Performance Score PS are calculated from the 21 public Use Cases (UC) that include benchmarks of models.
The center of each bubble corresponds to the point of coordinates IS (shown on top) and PS (shown on bottom). A bubble in the top-right corner is the Holy Grail for critical AI-based decision systems: an AI technique that produces predictive models with both the highest Performance and the highest Intelligibility.
Our Use Cases present the results of XTRACTIS modeling and include complete Benchmarks against Neural Network, Boosted Tree, Random Forest, and Logistic Regression.
These studies illustrate the ability of XTRACTIS to automatically induce knowledge in the form of predictive and intelligible mathematical relationships from real-world data (public data or authorized private data).
For each application, we show how the induced decision system uses its fuzzy rules to compute explained predictions for new situations, i.e., situations absent from the learning dataset.
Choose the sector to access the corresponding Use Cases.
XTRACTIS discovers how to identify fetal heart conditions based on signal characteristics of the fetal heart rate and the mother’s uterine contractions. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to detect breast cancer based on topological characteristics of mammary cells. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to detect ovarian cancer based on the 15,154 mass-charge ratios derived from the protein spectrum of serum samples, and for a small number of patients. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to detect prostate cancer based on the expression levels of 12,600 genes and for a small number of patients. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to identify 2 types of lung cancer based on the expression levels of 12,533 genes and for a small number of patients. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to detect kidney disease based on the patient record and blood measures. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers how to detect Parkinson’s disease based on simple voice recordings of patients. As a result, you get an explainable and accurate automated medical diagnosis.
XTRACTIS discovers the strategies used to evaluate employees, highlighting the existing biases in those strategies.