Fig. 2 | Journal of Cheminformatics


From: DrugEx v3: scaffold-constrained drug design with graph transformer-based reinforcement learning


Architectures of the four end-to-end deep learning models: (A) the Graph Transformer; (B) the LSTM-based encoder-decoder model (LSTM-BASE); (C) the LSTM-based encoder-decoder model with attention mechanisms (LSTM + ATTN); (D) the sequential Transformer model. The Graph Transformer takes a graph representation as input, whereas the other three models take SMILES sequences as input.
