
Table 2 Comparison of architectures A, B, C and D

From: GEN: highly efficient SMILES explorer using autodidactic generative examination networks

| Architecture | Merge mode | Layer count | Layer size | Best model epoch # | Validity % | Uniqueness % | Training % | Length match %ᵃ | HAC match %ᵇ |
|---|---|---|---|---|---|---|---|---|---|
| A: LSTM–LSTM | | 1/1 | 64/64 | 54, 72, 63 | 95.4 ± 0.4 | 99.9 ± 0.1 | 12.0 ± 0.9 | 98.2 ± 0.3 | 94.0 ± 0.9 |
| B: biLSTM–biLSTM | | 1/1 | 64/64 | 20, 22, 28 | 96.5 ± 0.5 | 99.9 ± 0.1 | 12.5 ± 0.9 | 97.9 ± 0.5 | 94.9 ± 0.8 |
| A: LSTM–LSTM | | 1/1 | 256/256 | 17, 17, 20 | 96.7 ± 0.4 | 99.9 ± 0.1 | 15.0 ± 0.7 | 98.2 ± 0.9 | 94.0 ± 1.8 |
| B: biLSTM–biLSTM | | 1/1 | 256/256 | 6, 7, 10 | 97.1 ± 0.4 | 99.9 ± 0.1 | 13.1 ± 0.5 | 98.2 ± 0.6 | 93.9 ± 0.8 |
| C: biLSTM–biLSTM | Concatenated | 1/4 | 64/64 | 10, 14, 16 | 97.0 ± 0.3 | 99.9 ± 0.0 | 11.9 ± 0.6 | 98.5 ± 0.3 | 97.4 ± 0.5 |
| C: biLSTM–biLSTM | Average | 1/4 | 64/64 | 11, 15, 15 | 97.2 ± 0.3 | 99.9 ± 0.1 | 12.5 ± 0.3 | 98.6 ± 0.2 | 96.1 ± 0.7 |
| C: biLSTM–biLSTM | Learnable average | 1/4 | 64/64 | 15, 17, 23 | 97.6 ± 0.2 | 99.9 ± 0.0 | 14.6 ± 0.2 | 97.4 ± 0.4 | 94.8 ± 1.2 |
| D: biLSTM–biLSTM | Concatenated | 4/4 | 64/64 | 11, 11, 9 | 96.9 ± 0.3 | 99.9 ± 0.0 | 14.4 ± 0.5 | 97.4 ± 0.2 | 95.6 ± 1.2 |
| D: biLSTM–biLSTM | Average | 4/4 | 64/64 | 15, 17, 14 | 96.7 ± 0.1 | 99.9 ± 0.0 | 11.9 ± 0.2 | 98.1 ± 0.5 | 95.3 ± 1.1 |
| D: biLSTM–biLSTM | Learnable average | 4/4 | 64/64 | 12, 25, 18 | 95.6 ± 0.1 | 99.9 ± 0.0 | 10.4 ± 0.5 | 98.0 ± 0.2 | 96.2 ± 0.6 |
| **Influence of bidirectionality** | | | | | | | | | |
| LSTM–LSTM | Concatenated | 1/4 | 64/64 | 20, 17, 31 | 96.8 ± 0.4 | 99.9 ± 0.1 | 13.4 ± 0.5 | 97.6 ± 0.8 | 94.8 ± 1.3 |
| biLSTM–LSTM | Concatenated | 1/4 | 64/64 | 9, 14, 9 | 97.1 ± 0.3 | 99.9 ± 0.1 | 13.2 ± 0.5 | 97.7 ± 0.9 | 95.5 ± 1.4 |
Best architecture is highlighted in italics
ᵃ Length match for SMILES length distributions of the training set and generated set (see "Methods")
ᵇ HAC match for the atom count distributions of the generated set and training set (see "Methods")
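The Merge mode column refers to how the forward and backward outputs of each bidirectional layer are combined. As a minimal sketch (not the authors' code), the snippet below shows a Keras-style character-level generator with two bidirectional LSTM layers of 64 units, in the spirit of the 64/64 rows above; the built-in merge modes "concat" and "ave" correspond to Concatenated and Average, whereas the learnable average would need a custom merge layer and is omitted. The names `build_generator`, `vocab_size` and `seq_len` are hypothetical placeholders.

```python
# Hedged sketch of a biLSTM-biLSTM SMILES generator with a configurable
# merge mode for the bidirectional layers (Keras/TensorFlow).
import tensorflow as tf

def build_generator(vocab_size, seq_len, units=64, merge_mode="concat"):
    # merge_mode: "concat" ~ Concatenated, "ave" ~ Average in Table 2
    inputs = tf.keras.Input(shape=(seq_len,))
    x = tf.keras.layers.Embedding(vocab_size, units)(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units, return_sequences=True),
        merge_mode=merge_mode)(x)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(units),
        merge_mode=merge_mode)(x)
    # Predict the next SMILES character from the encoded window
    outputs = tf.keras.layers.Dense(vocab_size, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

# Example usage with placeholder vocabulary and window sizes
model = build_generator(vocab_size=40, seq_len=100, merge_mode="ave")
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```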