Fig. 2 | Journal of Cheminformatics

From: Randomized SMILES strings improve the quality of molecular generative models

Architecture of the RNN model used in this study. At every step \(i\), the one-hot encoded input token \(X_{i}\) passes through an embedding layer of size \(m \le w\), followed by \(l > 0\) GRU/LSTM layers of size \(w\) with dropout in between, and then a linear layer with input dimensionality \(w\) and output size equal to the vocabulary. Lastly, a softmax is applied to obtain the token probability distribution \(Y_{ij}\). \(H_{i}\) denotes the input hidden state matrix at step \(i\).
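The layer stack described in the caption can be sketched in PyTorch as follows. This is an illustrative reconstruction, not the paper's actual code: the class name `SmilesRNN` and all hyperparameter values (embedding size, hidden size, layer count, dropout rate, vocabulary size) are assumptions chosen only to satisfy the constraints \(m \le w\) and \(l > 0\) from the caption.

```python
import torch
import torch.nn as nn

class SmilesRNN(nn.Module):
    """Hypothetical sketch of the captioned architecture:
    embedding (size m <= w) -> l recurrent layers (size w, dropout
    in between) -> linear (w -> vocab) -> softmax."""

    def __init__(self, vocab_size, emb_size=128, hidden_size=256,
                 num_layers=3, dropout=0.2):
        super().__init__()
        assert emb_size <= hidden_size  # m <= w, per the caption
        assert num_layers > 0           # l > 0, per the caption
        self.embedding = nn.Embedding(vocab_size, emb_size)
        # GRU would work equally here; the caption allows either cell type.
        self.rnn = nn.LSTM(emb_size, hidden_size, num_layers,
                           dropout=dropout, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, hidden=None):
        e = self.embedding(x)               # (batch, seq, m)
        out, hidden = self.rnn(e, hidden)   # (batch, seq, w); hidden = H_i
        logits = self.linear(out)           # (batch, seq, vocab)
        probs = torch.softmax(logits, dim=-1)  # Y_ij, rows sum to 1
        return probs, hidden

model = SmilesRNN(vocab_size=30)
tokens = torch.randint(0, 30, (4, 20))  # batch of 4 sequences, 20 tokens
probs, _ = model(tokens)
print(probs.shape)  # torch.Size([4, 20, 30])
```

During sampling, the model would be run one token at a time, feeding the hidden state \(H_i\) (the `hidden` return value) back in at each step and drawing the next token from the softmax distribution.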
