Table 1 R² performance of each model on the four tasks

From: A multitask GNN-based interpretable model for discovery of selective JAK inhibitors

  Task    Methods         Training set  Validation set  Test set
  Global  MTATFP          0.96          0.79            0.78
          STATFP          0.93          0.75            0.73
          LightGBM_MD     0.93          0.69            0.70
          LightGBM_ECFP4  0.91          0.71            0.70
  JAK1    MTATFP          0.95          0.80            0.82
          STATFP          0.95          0.80            0.80
          LightGBM_MD     0.91          0.75            0.72
          LightGBM_ECFP4  0.91          0.75            0.76
  JAK2    MTATFP          0.97          0.83            0.81
          STATFP          0.96          0.82            0.80
          LightGBM_MD     0.94          0.70            0.75
          LightGBM_ECFP4  0.92          0.75            0.71
          Xgboost^a       0.97^a        0.80^a          0.80^a
  JAK3    MTATFP          0.97          0.77            0.76
          STATFP          0.91          0.68            0.70
          LightGBM_MD     0.92          0.69            0.70
          LightGBM_ECFP4  0.90          0.69            0.73
  TYK2    MTATFP          0.94          0.76            0.75
          STATFP          0.91          0.69            0.63
          LightGBM_MD     0.93          0.61            0.62
          LightGBM_ECFP4  0.91          0.65            0.61
  1. The table shows the performance of each model (deep learning methods based on the MTATFP or STATFP strategies, and LightGBM-based machine learning approaches) on each task. The closer the R² value is to 1, the better the model performs
  2. ^a denotes the best result reported in Yang’s work. Although the datasets cannot be guaranteed to be identical, our multitask model still shows a clear advantage
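For reference, the R² (coefficient of determination) metric summarized in the footnote can be sketched as below. This is a generic illustration of the formula R² = 1 − SS_res / SS_tot, not the paper's evaluation code, and the example values are hypothetical, not data from the table.

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
    Values closer to 1 mean predictions track the targets more closely."""
    mean_true = sum(y_true) / len(y_true)
    # Residual sum of squares: error of the model's predictions
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    # Total sum of squares: variance of the targets around their mean
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical activity values for illustration only
y_true = [6.2, 7.1, 5.8, 8.0, 6.9]
y_pred = [6.0, 7.3, 5.9, 7.8, 7.0]
print(round(r2_score(y_true, y_pred), 3))  # → 0.952
```

A perfect predictor gives R² = 1.0, while a model no better than predicting the mean gives R² = 0.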