Fig. 2
From: DrugEx v3: scaffold-constrained drug design with graph transformer-based reinforcement learning

Architectures of four different end-to-end deep learning models: A The Graph Transformer; B The LSTM-based encoder-decoder model (LSTM-BASE); C The LSTM-based encoder-decoder model with attention mechanisms (LSTM + ATTN); D The sequential Transformer model. The Graph Transformer accepts a graph representation as input, whereas the other three models take SMILES sequences as input.