Fig. 5 From: Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules

Strong scaling performance of HydraGNN training on OLCF's Summit and NERSC's Perlmutter (top), and detailed timing (bottom). We perform data-parallel training for the PCQM4Mv2 and AISD HOMO-LUMO data sets with HydraGNN using up to 1,500 GPUs and observe linear scaling up to 1,024 GPUs.