Table 3 Final evaluation results on the CHEMDNER test set.

From: A document processing pipeline for annotating chemical entities in scientific documents

 

All values are percentages. EM = entity mention task; DI = document indexing task.

| System | EM precision | EM recall | EM F-measure | DI precision | DI recall | DI F-measure |
|---|---|---|---|---|---|---|
| Top scoring | 89.09 | 85.75 | 87.39 | 87.02 | 89.41 | 88.20 |
| Official best run | 86.50 | 85.66 | 86.08 | 86.35 | 82.37 | 84.31 |
| Corrected run | 87.35 | 86.49 | 86.92 | 87.07 | 87.97 | 87.52 |
| Δ vs. official run | +0.85 | +0.83 | +0.84 | +0.72 | +5.60 | +3.21 |
| 1st-order CRF | 88.04 | 84.89 | 86.44 | 88.00 | 86.42 | 87.20 |
| Δ vs. corrected run | +0.69 | −1.60 | −0.48 | +0.93 | −1.55 | −0.31 |
| 2nd-order CRF | 88.35 | 83.79 | 86.01 | 88.14 | 86.65 | 87.39 |
| Δ vs. corrected run | +1.01 | −2.71 | −0.91 | +1.08 | −1.32 | −0.13 |
| Post-processing | 88.67 | 86.32 | 87.48 | 87.68 | 87.81 | 87.75 |
| Δ vs. corrected run | +1.32 | −0.17 | +0.56 | +0.61 | −0.16 | +0.23 |

  1. The 'Corrected run' row shows results obtained with the same models as in the official run, after correcting the generation of the annotation files; its difference row is relative to the official best run. Results obtained with first- and second-order CRF models alone (without model combination) and after post-processing are shown with differences relative to the corrected run. All differences are in percentage points.
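As a consistency check on the table, the F-measure columns can be recomputed from the precision and recall columns. The sketch below assumes F-measure here means the balanced F1 score (the harmonic mean of precision and recall), which is the usual choice for CHEMDNER evaluations; the function name is illustrative, not from the paper's pipeline.

```python
def f_measure(precision: float, recall: float) -> float:
    """Balanced F1: harmonic mean of precision and recall (in percent)."""
    return 2 * precision * recall / (precision + recall)

# Entity-mention columns of the top-scoring system: P=89.09, R=85.75
print(round(f_measure(89.09, 85.75), 2))  # → 87.39, matching the table
```

The same computation reproduces the other F-measure cells, e.g. the corrected run's entity-mention value of 86.92 from P=87.35 and R=86.49.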