Table 1 Performance scores for the CHEMDNER chemical entity mention (CEM) subtask

From: Putting hands to rest: efficient deep CNN-RNN architecture for chemical named entity recognition with no hand-crafted rules

| Model | Precision % | Recall % | F1-score % |
| --- | --- | --- | --- |
| Our model | 88.6 | 88.8 | 88.7 |
| ChemDataExtractor [22] | 89.1 | 86.6 | 87.8 |
| tmChem (173) [2] | 89.2 | 85.8 | 87.4 |
| (231) [9] | 89.1 | 85.2 | 87.1 |
| LeadMine (179) [8] | 88.7 | 85.1 | 86.9 |
| (184) | 92.7 | 81.2 | 86.6 |
| Chemspot (198) [32] | 91.2 | 82.3 | 86.5 |
| Becas (197) [31] | 86.5 | 85.7 | 86.1 |
| (192) | 89.4 | 81.1 | 85.1 |
| BANNER-CHEMDNER (233) [33] | 88.7 | 81.2 | 84.8 |
| (185) | 84.5 | 80.1 | 82.2 |
  1. CHEMDNER challenge team IDs are given in parentheses in the Model column where available; performance scores for these models are taken from Table 4 in [30]. ChemDataExtractor scores are those reported by its authors.
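Each F1-score above is the harmonic mean of the corresponding precision and recall, F1 = 2PR / (P + R). A minimal Python sketch to reproduce a row from the table (the helper name f1 is ours, for illustration only):

```python
def f1(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; inputs and output in percent.
    return 2 * precision * recall / (precision + recall)

# Spot-check the top row: 88.6 % precision and 88.8 % recall.
print(round(f1(88.6, 88.8), 1))  # -> 88.7
```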