Table 3 Optimal number of parallel encoding layers in architecture C

From: GEN: highly efficient SMILES explorer using autodidactic generative examination networks

| Architecture | Merge mode | # Layers | Layer sizes | Best model epoch # | Validity % | Uniqueness % | Training % | Length match %ᵃ | HAC match %ᵇ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B: biLSTM–biLSTM |  | 1/1 | 64/64 | 20, 22, 28 | 97.1 ± 0.4 | 99.9 ± 0.1 | 13.1 ± 0.5 | 98.2 ± 0.6 | 93.9 ± 0.8 |
| C: biLSTM–biLSTM | Concatenated | 1/2 | 64/64 | 19, 19, 19 | 97.8 ± 0.4 | 99.9 ± 0.1 | 12.5 ± 0.4 | 97.3 ± 0.4 | 96.1 ± 0.1 |
| C: biLSTM–biLSTM | Concatenated | 1/3 | 64/64 | 12, 12, 12 | 97.2 ± 0.2 | 99.9 ± 0.0 | 12.2 ± 0.4 | 98.6 ± 0.3 | 96.9 ± 0.8 |
| C: biLSTM–biLSTM | Concatenated | 1/4 | 64/64 | 10, 14, 16 | 97.0 ± 0.3 | 99.9 ± 0.0 | 11.9 ± 0.6 | 98.5 ± 0.3 | 97.4 ± 0.5 |
| C: biLSTM–biLSTM | Concatenated | 1/5 | 64/64 | 8 | 95.9 ± 0.3 | 99.9 ± 0.0 | 13.5 ± 1.0 | 97.6 ± 0.2 | 97.2 ± 0.3 |
| C: biLSTM–biLSTM | Concatenated | 1/6 | 64/64 | 8 | 95.9 ± 0.2 | 99.9 ± 0.1 | 10.1 ± 0.4 | 96.3 ± 0.3 | 93.9 ± 0.7 |
| C: biLSTM–biLSTM | Concatenated | 1/7 | 64/64 | 7 | 96.8 ± 0.4 | 99.9 ± 0.0 | 14.0 ± 1.0 | 97.6 ± 0.6 | 95.9 ± 0.5 |
| C: biLSTM–biLSTM | Concatenated | 1/8 | 64/64 | 6, 6, 6 | 96.2 ± 0.7 | 99.9 ± 0.0 | 13.6 ± 0.1 | 98.0 ± 0.7 | 94.8 ± 0.8 |
| C: biLSTM–biLSTM | Concatenated | 1/16 | 64/64 | 5, 5, 5 | 95.9 ± 0.3 | 99.9 ± 0.0 | 13.5 ± 1.0 | 96.6 ± 0.7 | 93.1 ± 0.7 |
ᵃ Length match for the SMILES length distributions of the training set and generated set (see "Methods").

ᵇ HAC match for the atom count distributions of the generated set and training set (see "Methods").