
Table 1 Comparison of the performance of the different methods

From: An exploration strategy improves the diversity of de novo ligands using deep reinforcement learning: a case for the adenosine A2A receptor

 

| Method | ε | β | Valid SMILES | Desired SMILES | Unique SMILES | Diversity |
|---|---|---|---|---|---|---|
| DrugEx (Pre-trained) | 0.01 | 0.0 | 98.3% | 97.5% | 96.5% | 0.74 |
| DrugEx (Pre-trained) | 0.01 | 0.1 | 98.9% | 98.0% | 96.3% | 0.75 |
| DrugEx (Pre-trained) | 0.1 | 0.0 | 95.9% | 74.6% | 73.0% | 0.80 |
| DrugEx (Pre-trained) | 0.1 | 0.1 | 98.8% | 80.9% | 80.0% | 0.80 |
| DrugEx (Fine-tuned) | 0.01 | 0.0 | 99.1% | 98.3% | 96.5% | 0.75 |
| DrugEx (Fine-tuned) | 0.01 | 0.1 | 99.0% | 98.5% | 96.6% | 0.74 |
| DrugEx (Fine-tuned) | 0.1 | 0.0 | 98.2% | 94.4% | 84.8% | 0.80 |
| DrugEx (Fine-tuned) | 0.1 | 0.1 | 97.5% | 94.5% | 86.0% | 0.80 |
| REINVENT | | | 98.8% | 98.2% | 95.8% | 0.75 |
| ORGANIC | | | 99.8% | 99.8% | 94.8% | 0.67 |
| Pre-trained | | | 93.9% | 0.7% | 0.7% | 0.83 |
| Fine-tuned | | | 96.2% | 47.9% | 22.7% | 0.82 |
  1. These methods included DrugEx with different ε, β and Gφ (shown in the parentheses), REINVENT, ORGANIC, the pre-trained network, and the fine-tuned network (both without using DrugEx)
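To make the table's metrics concrete, the sketch below shows one plausible way to score a batch of generated SMILES with RDKit. It is a minimal illustration under stated assumptions, not the paper's implementation: validity is checked by RDKit parsing, uniqueness is counted over canonical SMILES of the valid molecules, and diversity is approximated as one minus the mean pairwise Tanimoto similarity of Morgan fingerprints. The paper's exact diversity formula and its "desired SMILES" criterion (a QSAR prediction of A2AR activity) are not reproduced here, and `evaluate_batch` is a hypothetical helper, not part of DrugEx.

```python
# Sketch of batch metrics for generated SMILES (assumptions noted above).
from itertools import combinations

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def evaluate_batch(smiles_list):
    """Return % valid, % unique (canonical SMILES), and an approximate diversity."""
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    valid = [m for m in mols if m is not None]
    pct_valid = 100.0 * len(valid) / len(smiles_list)

    # Uniqueness counted over canonical SMILES of the valid molecules.
    canonical = {Chem.MolToSmiles(m) for m in valid}
    pct_unique = 100.0 * len(canonical) / len(smiles_list)

    # Approximate diversity: 1 - mean pairwise Tanimoto similarity of
    # Morgan (ECFP-like) fingerprints; the paper may define this differently.
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 3, nBits=2048) for m in valid]
    if len(fps) < 2:
        diversity = 0.0
    else:
        sims = [DataStructs.TanimotoSimilarity(a, b) for a, b in combinations(fps, 2)]
        diversity = 1.0 - sum(sims) / len(sims)

    return pct_valid, pct_unique, diversity


if __name__ == "__main__":
    batch = ["c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "not_a_smiles"]
    v, u, d = evaluate_batch(batch)
    print(f"Valid: {v:.1f}%  Unique: {u:.1f}%  Diversity: {d:.2f}")
```

Note that "Desired SMILES" in the table additionally requires an activity predictor for the adenosine A2A receptor, so a score of that kind would be layered on top of the validity check shown here.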