  • Research article
  • Open access

Improving structural similarity based virtual screening using background knowledge



Virtual screening in the form of similarity rankings is often applied in the early drug discovery process to rank and prioritize compounds from a database. This similarity ranking can be achieved with structural similarity measures. However, their general nature can lead to insufficient performance in some application cases. In this paper, we provide a link between ranking-based virtual screening and fragment-based data mining methods. The inclusion of binding-relevant background knowledge into a structural similarity measure improves the quality of the similarity rankings. This background knowledge in the form of binding relevant substructures can either be derived by hand selection or by automated fragment-based data mining methods.


In virtual screening experiments we show that our approach clearly improves enrichment factors with both applied variants of our approach: the extension of the structural similarity measure with background knowledge in the form of a hand-selected relevant substructure or the extension of the similarity measure with background knowledge derived with data mining methods.


Our study shows that adding binding relevant background knowledge can lead to significantly improved similarity rankings in virtual screening and that even basic data mining approaches can lead to competitive results, making hand-selection of the background knowledge less crucial. This is especially important in drug discovery and development projects where no receptor structure is available or, more frequently, no verified binding mode is known, and mostly ligand based approaches can be applied to generate hit compounds.


Medical needs are the starting point for every drug discovery and development project. Apart from the classical in vitro and in vivo studies used in this process, pharmaceutical research relies more and more on in silico methods like (high throughput) virtual screening or molecular docking simulations [1, 2]. Computational methods promise to shorten the typically time-consuming efforts that come with the development of new market-approved drug compounds. In the early drug discovery process, virtual screening is used to rank or select compounds from huge databases of potential drug candidates that are later assessed in wet-lab and animal studies. In case one or more ligand structures of the target protein are known and available, virtual screening based on ligand similarities can be used to calculate a ranking of candidate compounds in a database. This approach is applied if neither a binding mode of the reported ligands nor an X-ray or NMR structure of the protein target is available, so that receptor based approaches are not easily accessible. Yet even in these cases the virtual screening approach is certainly a valid orthogonal approach to derive interesting and promising structures and scaffolds for the drug discovery pipeline.

In this paper, we present a concept of how structural similarity based methods used in virtual screening can be improved by integrating chemical background knowledge in the form of binding relevant or informative structural elements. Improvement in this case means higher enrichment of chemical compounds related to the query compound in the similarity ranking of a compound database. Consequently, more potentially biologically active and less potentially inactive compounds are selected in virtual screening for further processing in the drug discovery pipeline (e.g. in vitro, in vivo). To achieve an improved enrichment we extract binding relevant substructures from known ligands and transform them into a fingerprint. This fingerprint is then used to extend a structural similarity measure. We present two approaches to extract the binding relevant information: first we use visual inspection of a known ligand as well as literature review to identify binding relevant substructures, second we test a relatively basic data mining approach. We apply the Free Tree Miner (FTM) software [3] that takes a set of two-dimensional chemical structures as input. FTM mines for and returns all substructures that occur frequently (more often than a user defined minimum support threshold) in the given set. These relevant substructures are then fragmented and the fragments’ occurrences in a chemical structure are used as bits in a binary occurrence fingerprint. A limitation of the data mining based approach is the need for more than one known ligand (active compound). An advantage of the approach is that it can still be applied if no literature information on the binding relevant substructures or structural patterns is available and that it saves human effort.

In our experiments we extend two structural similarity measures with background knowledge and apply them to rank compounds in a database according to their similarity to a known active structure. The first similarity measure is based on the size of the maximum common substructure (MCS – e.g., Raymond et al.[4]) of two molecules, the second is based on Extended Connectivity Fingerprints (ECFP) [5]. No other factors like drug-likeness, Cytochrome P450 interaction or physico-chemical properties are used. This enables an isolated view on the effects of the similarity methods used for the rankings. The extended similarity measures are compared to their non-extended versions to assess their performance by calculating enrichment factors for 1%, 5% and 10% of the database.

We show that adding background knowledge on important binding components of ligands to both the MCS similarity and the ECFP similarity changes the virtual screening ranking in such a way that the top structures have improved docking scores, related structures are ranked at better positions and clearly improved enrichment factor values are obtained. We also show that replacing the visual inspection and literature search by a data mining approach improves the similarity rankings for most assessed data sets. The data mining approach performs slightly weaker than the by-hand approach, but gives competitive results.

The remainder of the paper is organized as follows: In the next section we give detailed information on the data and methods we use for the similarity calculations and our experimental setup. This is followed by a presentation and discussion of our results before we conclude. Additional result tables can be found in the Additional file 1.

Materials and methods

In this section we give detailed information on our experimental setting, on how we extend a similarity measure and on the data sources and evaluation measures used in our virtual screening experiments.

Experimental setup

When virtual screening by means of similarity ranking is performed in a drug discovery project, the similarities of all compounds in the screening database are calculated with respect to one or more known ligands of the protein target (used as reference compounds). The compounds in the database are subsequently sorted according to their similarity scores in descending order so that the compounds most similar to the reference appear first in the ranking. A good similarity measure will find structures that are related to the reference – or that potentially interact with the target protein – in the first few percent of the list. To assess the performance of different similarity measures we mix a set of known ligands into a set of decoys to form a screening database. As reference compound for the similarity rankings we use a randomly selected representative of the known ligands. After applying the standard similarity ranking procedure individually with each similarity measure, we can evaluate the performance of the similarity measures by examining the results for the known ligands in the screening database. The better a similarity measure is, the more known ligands will be in the top section of the ranking.
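The ranking-and-evaluation procedure described above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: compounds are plain integers and the `similarity` function is a hypothetical stand-in for a real structural similarity measure.

```python
import random

def rank_database(reference, database, similarity):
    """Sort the database by similarity to the reference, most similar first."""
    return sorted(database, key=lambda c: similarity(reference, c), reverse=True)

def ligands_in_top(ranking, ligand_ids, fraction):
    """Count how many known ligands appear in the top fraction of the ranking."""
    top = ranking[:max(1, int(len(ranking) * fraction))]
    return sum(1 for c in top if c in ligand_ids)

# Toy setting: compounds are integers and "similarity" is closeness to the
# reference -- a stand-in for a real structural similarity measure.
random.seed(0)
decoys = random.sample(range(100, 10000), 500)
ligands = [1, 2, 3, 4, 5]
screening_db = decoys + ligands

ranking = rank_database(0, screening_db, lambda a, b: -abs(a - b))
print(ligands_in_top(ranking, set(ligands), 0.01))  # → 5 (all ligands recovered)
```

A better similarity measure recovers more of the mixed-in ligands in the top fraction; this count is what the enrichment factor below formalizes.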

The experiments on extending a structural similarity measure can be divided into two lines of experiments: line “A” considers the by-hand selection of the binding relevant information that is used to extend the similarity measure and line “B” considers the data mining based selection of this information.

Table 1 shows a comparison of the steps necessary to apply the two presented approaches to extend similarity measures and rank a screening database.

Table 1 Overview of the steps necessary to apply the two presented approaches to extend similarities

Extended similarity

The extended similarity measures proposed in this work are constructed from two building blocks: a structural similarity measure used as base similarity (sim_base) and a fingerprint-based similarity based on the binding relevant substructures (sim_bind_fp). After defining the extended similarity measure we first explain the base similarities and then the two variants used to derive sim_bind_fp. The extended similarity of two molecules a and b is defined as:

sim_ext(a, b) = (1 − α) · sim_base(a, b) + α · sim_bind_fp(a, b),

where sim_bind_fp(a, b) gives the Tanimoto similarity coefficient (for a mathematical definition see the Additional file 1) of the two binary sub-structural occurrence fingerprints of molecules a and b.

For most experiments we choose α = 1/3 as the weight coefficient for the fingerprint-based part, somewhat arbitrarily and motivated by the wish to weight the base similarity higher than its extension. No optimization of this parameter has been attempted; however, we show a short evaluation of α in the Results and discussion section. In our experiments the substructures constituting the fingerprint for sim_bind_fp are selected either by visual inspection and literature review or by a data mining approach.
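The weighted combination above is straightforward to implement. The following is a minimal sketch, assuming fingerprints are represented as sets of on-bits; `base` and `fps` are hypothetical stand-ins for the paper's MCS/ECFP machinery and real fragment fingerprints.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two binary fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def sim_ext(a, b, sim_base, bind_fp, alpha=1 / 3):
    """sim_ext(a, b) = (1 - alpha) * sim_base(a, b) + alpha * sim_bind_fp(a, b)."""
    return (1 - alpha) * sim_base(a, b) + alpha * tanimoto(bind_fp(a), bind_fp(b))

# Hypothetical stand-ins: a constant base similarity and hand-made fingerprints.
base = lambda a, b: 0.6
fps = {"a": {1, 2, 3}, "b": {2, 3, 4}}  # bits set by binding-relevant fragments

print(round(sim_ext("a", "b", base, fps.get), 3))  # → 0.567
```

With α = 1/3 the base similarity contributes twice the weight of the fingerprint extension, matching the motivation stated above.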

The first structural similarity measure (sim_base) that we extend is based on the notion of maximum common substructures (MCS). The size of the MCS of two molecular structures was computed with the JChem Java classes (JChem, ChemAxon). The similarity between two structures was then calculated with the similarity measure proposed by Wallis et al. [6]:

sim_MCS(a, b) = |mcs(a, b)| / (|a| + |b| − |mcs(a, b)|),

where |·| gives the number of vertices in a graph, and mcs(a, b) calculates the MCS of molecules a and b. Consequently, |mcs(a, b)| is the number of atoms of the MCS of molecules a and b. The second structural similarity measure is based on Extended-Connectivity Fingerprints (ECFP) [5], a standard method in pharmaceutical research and industry. ECFP fingerprints are circular, structural feature fingerprints that use as input information not only the atom and bond type, but the six atom-numbering-independent Daylight atomic invariants [7] to encode atoms: the number of immediate heavy atom neighbors, the valence minus the number of hydrogens, the atomic number, the atomic mass, the atomic charge, the number of attached hydrogens, plus a seventh invariant added by Rogers et al. [5]: whether the atom is contained in at least one ring. These fingerprints are available via the RDKit functionality of the open source cheminformatics software AZOrange [8]. The radius parameter for the ECFP fingerprint calculation was used at the default value of r = 2. The fingerprint similarity of two ECFP fingerprints is calculated with the Dice coefficient (for a mathematical definition see the Additional file 1).
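Both coefficients reduce to one-liners once the atom counts and fingerprints are known. A sketch under that assumption follows; the actual MCS computation (done with JChem in this work) is omitted, and the example atom counts are hypothetical.

```python
def sim_mcs(n_a, n_b, n_mcs):
    """Wallis similarity from atom counts: |mcs| / (|a| + |b| - |mcs|)."""
    return n_mcs / (n_a + n_b - n_mcs)

def dice(fp_a, fp_b):
    """Dice coefficient of two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    return 2 * len(fp_a & fp_b) / (len(fp_a) + len(fp_b))

# Hypothetical molecules of 20 and 25 atoms sharing a 15-atom MCS:
print(sim_mcs(20, 25, 15))                   # → 0.5
print(round(dice({1, 2, 3}, {2, 3, 4}), 3))  # → 0.667
```

Note that for identical fingerprints the Dice coefficient equals 1, and it weights common bits more heavily than the Tanimoto coefficient used for sim_bind_fp.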

Our first approach (approach A) to extend sim_base relies on literature review or visual inspection of a set of known ligands to retrieve a binding relevant substructure (or fragment). Once such a substructure is known, we apply the Free Tree Miner [3] software without a minimum frequency constraint to produce all possible fragments of the substructure. From these fragments we build a binary occurrence fingerprint that is used to encode the reference molecules and all database molecules. The fingerprints are then used to calculate sim_bind_fp. In our experimental evaluation of approach A on the HMGR data set, we derive the binding relevant substructure not only by literature review (which would be the standard approach and sufficient in most cases), but support the process by additional calculations. First, we use the MCS similarity measure to rank the screening database. Subsequently, the top 25 compounds of the similarity ranking are docked to the HMGR receptor. The examination of the results in combination with the literature review is used to derive the binding relevant structural parts that are used as background knowledge. For the second data set used to evaluate approach A (PPAR γ) we derive the binding relevant substructure from reviewing known ligands from the DrugBank [9] database. We expect the PPAR γ hand-selection experiments to show less improvement than those on HMGR, as the binding relevant information is selected with less effort.

In our second approach to extend sim_base, the data mining based approach (approach B), we substitute the by-hand selection of the additional knowledge integrated into the similarity measure by data mining techniques. To retrieve the substructure fingerprint used for the similarity measure extension, we calculate the set of frequently occurring substructures from a set of known ligands with the FTM algorithm. From those frequent substructures we build the binary occurrence fingerprint used to encode our molecules and to calculate sim_bind_fp. Two variants of input ligand sets are tested: (B1) We use all available ligands for the generation of the fingerprint fragments. The minimum support parameter (minsup) for the FTM software was chosen for each data set such that it resulted in approximately the same number of substructural features as the fingerprint of approach A (57 features). The parameters are given in Table 2. This ensures that we can exclude the length of the fingerprint (feature number) as the driving force of improvement or degradation. (B2) We use only 10% (20% in case of the DuD HMGR, ADA and TK data sets) of the ligand compounds, randomly chosen from the respective DuD ligand sets, to work with a more realistic setting where only few compounds interacting with the protein are known in advance. The minimum support parameter of FTM was set to 0.9 for all data sets. This second, reduced variant provides less information on the ligands to be found in the ranking and consequently poses a more realistic but harder problem. The resulting enrichment factor values of the extended similarity measures should show less improvement over the non-extended versions compared to the first variant that uses all ligands.
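The frequent-substructure selection and fingerprint construction can be sketched as follows. This is a heavily simplified stand-in for FTM: `contains` is a toy substring test on SMILES-like strings, whereas FTM performs real subgraph matching on molecular graphs; all fragments and molecules here are illustrative.

```python
def frequent_fragments(ligands, fragments, minsup):
    """Keep fragments that occur in at least a minsup fraction of the ligands."""
    n = len(ligands)
    return [f for f in fragments if sum(contains(m, f) for m in ligands) / n >= minsup]

def occurrence_fingerprint(molecule, fragments):
    """Binary occurrence fingerprint: one bit per fragment."""
    return [1 if contains(molecule, f) else 0 for f in fragments]

def contains(molecule, fragment):
    """Toy stand-in for a subgraph-isomorphism test on molecular graphs."""
    return fragment in molecule

ligands = ["CCO", "CCN", "CCOC"]       # SMILES-like strings, illustration only
candidates = ["CC", "CO", "CN", "OC"]  # candidate fragments
kept = frequent_fragments(ligands, candidates, minsup=0.9)
print(kept)                                   # → ['CC']
print(occurrence_fingerprint("CCOCC", kept))  # → [1]
```

With minsup = 0.9, as in variant B2, a fragment must occur in (nearly) all input ligands to contribute a fingerprint bit.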

Table 2 Overview of the used DuD data sets

For a graphical overview of the two extension approaches and how they interact with the baseline similarity ranking, see Figure 1.

Figure 1

Overview of the experimental setup of the (A) by-hand extension experiments and (B) mining-based extension experiments. The upper half of the workflow shows a similarity ranking without the incorporation of background knowledge. FP = fingerprint.


In the first line of experiments (by-hand selection) we use only two data sets for our analysis; in the second line (data mining based extension) we use ten data sets from the Directory of useful Decoys (DuD) [10] as well as 25 ChEMBL activity class data sets [11]. We use different database setups in our evaluation: for experiments with the DuD data sets we use as database either all 95,000 decoy structures of the DuD (DuD_all) or only those DuD decoys that were designed especially for the target ligand system considered (DuD_set). For the experiments with the ChEMBL activity classes we use a subset of the ZINC [12] database.

HMGR and statins

In our approach A experiments we first consider the problem of inhibiting the enzyme HMG-CoA reductase (HMGR). Well-known inhibitors of HMGR are chemicals from the drug class of statins (HMG-CoA reductase inhibitors); most of them are marketed drugs or drugs under development. Inhibition of HMGR lowers cholesterol levels and prevents cardiovascular disease [13], a major problem in developed countries, as coronary artery disease affects 13 to 14 million adults in the United States alone [14]. The statins are structurally quite similar, as can be seen in Figures 2, 3, 4, 5, 6 and 7. All of them are competitive inhibitors of HMGR with respect to binding of the substrate HMG-CoA, but not with respect to binding of NADPH [15]. The protein receptor used in the docking procedure is the structure of HMGR co-crystallized with fluvastatin (Figure 8, CID 446155), which is available in the PDB [16] with identifier [PDB:1HWI] [17]. We use two sets of known ligands that are mixed with the decoys and provide the reference compound in this first set of experiments: first the set of statins and second the HMGR ligands provided by the DuD HMGR data set. When the statins are used as the ligand set, we repeat the experiment with each statin as query compound; otherwise we randomly select ten DuD HMGR ligands and use each one of those as query compound.

Figure 2

Atorvastatin. 2D structure depiction of Atorvastatin (PubChem CID 60823).

Figure 3

Fluvastatin. 2D structure depiction of Fluvastatin (PubChem CID 446155), the binding relevant substructure is marked.

Figure 4

Lovastatin. 2D structure depiction of Lovastatin (PubChem CID 53232).

Figure 5

Mevastatin. 2D structure depiction of Mevastatin (PubChem CID 64715).

Figure 6

Pitavastatin. 2D structure depiction of Pitavastatin (PubChem CID 24848419).

Figure 7

Simvastatin. 2D structure depiction of Simvastatin (PubChem CID 54454).

Figure 8

Selected HMGR ligand binding poses. Only the active site of the receptor is shown. A: Fluvastatin receptor binding. Original position of fluvastatin in the HMGR ([PDB:1HWI]) receptor. The hand-selected important fragment is marked in yellow. B: Best docking of the best MCS similarity search hit in the HMGR ([PDB:1HWI]) receptor. C: Best docking of the best MCS_ext similarity search hit in the HMGR ([PDB:1HWI]) receptor.


In addition to HMGR we test the by-hand selection approach on the PPAR γ data set. The PPAR γ receptor binds peroxisome proliferators such as hypolipidemic drugs and fatty acids. Once activated by a ligand, the receptor binds to a promoter element in the gene for acyl-CoA oxidase and activates its transcription. It therefore controls the peroxisomal beta-oxidation pathway of fatty acids and is a key regulator of adipocyte differentiation and glucose homeostasis [18]. The DrugBank [9] database lists - amongst others - these eight market approved drugs that are known PPAR γ interactors: Bezafibrate, Glipizide, Ibuprofen, Mesalazine, Sulfasalazine, Balsalazide, Rosiglitazone and Pioglitazone. An overview of the drugs, their DrugBank IDs and structures is given in Table 3 and Figure 9.

Table 3 PPAR γ market approved drugs
Figure 9

PPAR γ approved active drugs. Eight DrugBank listed PPAR γ active drugs that have “approved” status. The DrugBank ID is shown with the molecule.

We use the same query and database set-up as with the HMGR experiments.

Directory of useful decoys

As database for the approach B experiments, we use the Directory of useful Decoys that is designed to avoid bias in docking and screening studies. The DuD database consists of more than 95,000 decoy structures and 2,950 ligand structures (more than 30 decoy structures per ligand) for 40 protein targets including HMGR. We chose eight target structures from the DuD database in addition to HMGR and PPAR γ. The original forty DuD target sets are grouped into six classes: nuclear hormone receptors, kinases, serine proteases, metalloenzymes, folate enzymes and other enzymes. We selected the additional protein targets to cover all six classes: estrogen receptor (ER; antagonists) from the class of nuclear hormone receptors, p38 mitogen-activated protein kinase (P38 MAP) and thymidine kinase (TK) for the class of kinases, factor Xa (FXa) for the class of serine proteases, adenosine deaminase (ADA) for the class of metalloenzymes, dihydrofolate reductase (DHFR) for the class of folate enzymes and the acetylcholine esterase (AChE) as well as cyclooxygenase 2 (COX-2) for the remaining “other enzyme” class. An overview of the DuD data sets used in this study is given in Table 2. For DHFR three and for FXa two ECFP similarities could not be calculated due to software problems (the applied RDKit software was not able to process those molecules). The respective compounds were removed from the experimental setting. For this second set of experiments we randomly select ten of the ligands as reference compounds and mix the remaining ligands with the decoys. This procedure is repeated ten times.

ChEMBL activity classes

To strengthen the findings of the mining-based experiments with the DuD data sets, we add another collection of compound data sets compiled by Li and Bajorath [11]. They selected compounds by activity class from the ChEMBL database (ChEMBL level 9) with restrictions on the reported potency values (at least 10 μM) and the number of distinct Bemis and Murcko scaffolds [19] contained (at least 3). After evaluation they report 50 activity classes as test cases for benchmark calculations. We use a random selection of 25 of those 50 activity classes (actually 49 – activity class 168 provides only one ligand and is therefore removed) as ligand sets. We randomly select ten ligands per activity class (or all available if the number of compounds in the activity class is smaller than ten). As background database we randomly select a set of 100,000 compounds from the ZINC [12] “All Purchasable” data set. For this set of experiments we randomly select one of the ligands as reference compound and mix the remaining ligands with the decoys. This procedure is repeated ten times.

Evaluation measures

To evaluate the performance of the similarity measures, we consider the enrichment factor (EF) [20] achieved by a virtual screening. The enrichment factor reflects the proportion of known related structures in the first x% of the ranked database. In practice, often only the highest ranked compounds are of interest and considered further in the drug discovery pipeline. The enrichment factor is defined for certain fractions of the database:

EF(%) = (N_active(%) / N(%)) / (N_active / N_all),

where EF(%) is given for the specified percentage of the ranked database, N_active(%) is the number of active compounds in the selected subset of the ranked database, N(%) is the number of compounds in that subset, N_active is the number of active molecules in the data set and N_all is the number of compounds in the database. For easier interpretation and comparison we do not use EF(%) directly, but the difference between the maximum possible enrichment EF_max at the specified fraction of the database and the achieved enrichment:

ΔEF = EF_max − EF(%).

Keep in mind that for ΔEF smaller values are better and the optimal ΔEF is zero. In our study, we use the top 1%, 5% and 10% fractions of the ranked database to calculate the EF values. In the results section of this work we restrict ourselves to showing the ΔEF values.
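The EF and ΔEF computations above can be sketched directly from their definitions. The numbers in the example are illustrative only, not taken from the paper's experiments.

```python
def enrichment_factor(ranking, actives, fraction):
    """EF(%) = (N_active(%) / N(%)) / (N_active / N_all)."""
    n_sel = max(1, int(len(ranking) * fraction))
    hits = sum(1 for c in ranking[:n_sel] if c in actives)
    return (hits / n_sel) / (len(actives) / len(ranking))

def max_enrichment(n_all, n_active, fraction):
    """Best possible EF: the selected subset holds as many actives as it can."""
    n_sel = max(1, int(n_all * fraction))
    return (min(n_sel, n_active) / n_sel) / (n_active / n_all)

# Toy numbers: a database of 1,000 compounds with 10 actives, 5 of which are
# ranked in the top 1% (names and counts are hypothetical).
ranking = ["act%d" % i for i in range(5)] + ["dec%d" % i for i in range(995)]
actives = {"act%d" % i for i in range(10)}

ef = enrichment_factor(ranking, actives, 0.01)
print(round(ef, 6))                                   # → 50.0
print(round(max_enrichment(1000, 10, 0.01) - ef, 6))  # ΔEF → 50.0
```

Here EF_max is 100 (the top 10 slots could all be actives), so recovering 5 of 10 actives in the top 1% gives ΔEF = 50.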

Docking procedure

Molecular docking was applied in order to assess whether the extensions to the structural similarity measures are suitable for virtual screening. For the HMGR experiments we performed the docking ourselves; for the second experiment we used the docking scores as provided in the DuD database. We now describe the docking procedure applied in the by-hand HMGR experiment.

HMGR is a tetramer with four identical binding sites, where two chains contribute residues to each binding site. In the PDB six co-crystallizations of HMGR are available, each with one statin: atorvastatin ([PDB:1HWK]), fluvastatin ([PDB:1HWI]), simvastatin ([PDB:1HW9]), compactin ([PDB:1HW8]), rosuvastatin ([PDB:1HWL]) and cerivastatin ([PDB:1HWJ]). A comparison of the CoA-bound binding sites with the statin-bound binding sites showed rearrangements. In the statin-bound binding sites some residues are disordered that fold into an α-helix when CoA is bound. In the presence of the α-helix, a narrow pantothenic acid-binding pocket is formed, making it impossible for statins to bind. Instead a hydrophobic groove is formed that accommodates the hydrophobic moieties of the statins, which accounts for a tighter binding of the statins [17]. Since we are interested in drug candidates with a binding ability similar to the statins, we focus on the statin-bound HMGR structures. According to Istvan et al. [17] the orientation of the side chains in the binding sites does not differ among the statins. This was confirmed by a superposition of the six PDB structures with PyMOL. Because of this we chose to perform a rigid receptor cross-docking of the structurally similar drug candidates to [PDB:1HWI] with Glide 5.7 from the Maestro Suite of Schrödinger. If not indicated otherwise, the default settings were used.

The first step in the docking process was the automatic preparation of the complete PDB structure of [PDB:1HWI] with the Protein Preparation Wizard of the Maestro Suite. Since there are four identical binding sites, the docking was performed with only one of them. At some binding sites ADP is bound nearby. Since ADP does not participate in statin binding [17], the binding site mainly formed by chain D with some contribution of chain C, which lacks ADP, was chosen. To speed up the docking procedure, the multimer was simplified by removing the redundant chains A and B. The receptor preparation was completed by the manual removal of all waters, the ligand molecule and the ADPs of the other binding site formed by chains C and D. The selected drug candidates were prepared using LigPrep 2.5. In a preprocessing step of the docking procedure, the receptor grid for the chosen binding site was pre-calculated using the Glide 5.7 Receptor Grid Generation. The co-crystallized fluvastatin in the chosen binding site was used as the reference ligand. Subsequently the rigid receptor docking was performed with the extra precision mode of the Glide 5.7 Ligand Docking application.

Results and discussion

By-hand experiments

In the first set of experiments we extract the binding-relevant knowledge used to extend the structural similarity measures by literature review and support the process by MCS similarity ranking and docking calculations. We therefore rank the screening database (including decoys and statin ligands) with respect to fluvastatin using sim_MCS. Subsequently, we docked the top 25 compounds of the similarity ranking to the HMGR receptor. Looking at the docking results in Table 4 (and the long version in the Additional file 1: Table S1), it can be seen that only one compound (CID 60823) has a good docking score. This is atorvastatin, one of the two statins found in the top 25 of the MCS similarity ranking. All other compounds have rather weak docking scores. Four structures from this ranking are shown in Figures 10, 11, 12, 13 and the docking of the best non-statin is shown in Figure 8B. It can clearly be seen that the highlighted part of the structure of fluvastatin (Figure 3 and Figure 8A), or something structurally similar, is not present in any of the non-statin structures. According to Istvan et al. [17], this part mimics the original binding ligand and consequently is essential for binding. The hydrophobic part of the statins is responsible for their nano-molar affinity but not sufficient for inhibitory binding on its own. Taking those facts into consideration, we decided to use the highlighted hydrophilic part of fluvastatin as background knowledge in our study. As described in the Materials and methods section, the substructure was fragmented and used to derive a binary occurrence fingerprint of length 57 for the extended similarity measure (1).

Table 4 Results of the docking run (MCS top 25)
Figure 10

ZINC26851. 2D structure depiction of ZINC26851 from the MCS similarity ranking. Rank difference: ΔRank = −16.

Figure 11

ZINC588723. 2D structure depiction of ZINC588723 from the MCS similarity ranking. Rank difference: ΔRank = 6.

Figure 12

ZINC714466. 2D structure depiction of ZINC714466 from the MCS similarity ranking. Rank difference: ΔRank = 0.

Figure 13

ZINC4628438. 2D structure depiction of ZINC4628438 from the MCS similarity ranking. Rank difference: ΔRank = 11.

We then calculated a similarity ranking with the extended MCS similarity measure and again docked the top 25 compounds. The results of docking the top 25 compounds of the extended MCS similarity ranking are shown in Table 5 (see Additional file 1: Table S2 of the supplement). Four structures from the ranking are shown in Figures 14, 15, 16 and 17. The docking scores are clearly improved in comparison to those of the structures found by the MCS similarity ranking given in Table 4 (see Additional file 1: Table S1). This means that the compounds found will very likely have a higher binding affinity to the receptor. Figure 8 shows the original position of fluvastatin (Figure 8A) and the dockings of the two non-statins with the best docking scores from the two similarity rankings. It can be seen that the ligand of the extended MCS similarity (Figure 8C) enters the active site much better than that of the MCS similarity (Figure 8B).

Table 5 Results of the docking run (MCS_ext top 25)
Figure 14

ZINC599752. 2D structure depiction of ZINC599752 from the extended MCS similarity ranking (MCS_ext). Rank difference: ΔRank = 11.

Figure 15

ZINC1112466. 2D structure depiction of ZINC1112466 from the extended MCS similarity ranking (MCS_ext). Rank difference: ΔRank = 2.

Figure 16

ZINC4597014. 2D structure depiction of ZINC4597014 from the extended MCS similarity ranking (MCS_ext). Rank difference: ΔRank = −12.

Figure 17

ZINC19313623. 2D structure depiction of ZINC19313623 from the extended MCS similarity ranking (MCS_ext). Rank difference: ΔRank = −2.

As a last experiment for the by-hand approach, we calculated similarity rankings with the ECFP similarity and with an extended version of the ECFP similarity, using the same binding-relevant substructure as for the MCS similarity. Comparing the differences in enrichment factors of the ligand structures in the ranked databases (MCS and ECFP similarity rankings) with the respective extended variants (see Table 6), it is clear that the extension is beneficial in all cases. Especially the MCS similarity, which shows a slightly weaker performance than the ECFP similarity, benefits from the similarity extension: an improvement of ΔEF can be seen in all but one case (where further improvement is possible). For ECFP a decrease in ΔEF can be seen in all but four cases.

Table 6 ΔEF values for HMGR

For the second data set used to test the by-hand approach, PPAR γ, we shorten the selection procedure. By visual inspection of the eight approved drugs shown in Table 3 and Figure 9, as well as binding information on Rosiglitazone given by Liberato et al. [21], we select two binding relevant substructures as shown in Figure 18. As described in the Materials and methods section, the substructures were fragmented and used to derive a binary occurrence fingerprint for the extended similarity measure (1). The results for the similarity rankings, calculated in analogy to the HMGR by-hand experiments, are given in Table 7. They clearly show that the reduced effort to extract the binding-relevant information has a direct impact on the ranking performance. Only in half of the settings (MCS lig vs. DuD_set, ECFP lig vs. DuD_all and ECFP lig vs. DuD_set) do we see improvements of the extended similarity measures over the base similarity measures. From this we conclude that it is highly important to select the binding-relevant structural information carefully when using the presented approach A (by-hand selection).

Figure 18

PPAR γ binding relevant substructures. Binding relevant substructures used for calculating the bind_fp fingerprint for the PPAR γ by-hand experiments (approach A).

Table 7 Δ EF values for PPAR γ

Mining-based experiments

In the following, we first assess for both data-mining-based variants (B1: all known ligands used to calculate the fragment occurrence fingerprint; B2: only a part of them used) whether the extension of the MCS and ECFP similarity measures with the data-mining-derived fingerprint improves the quality of the similarity ranking. Second, we compare the data mining approach with the by-hand approach for the HMGR data set. The results for variant B1 are given in Tables 8, 9 and 10. To see how the data-mining-based approach performs when only few ligand structures are available as background knowledge, we re-ran the experiments with variant B2, using only ten per cent of the ligands, randomly chosen from the respective DUD ligand sets (20% in case of the HMGR, ADA and TK data sets, due to their smaller ligand set sizes), to extract background knowledge. The results for variant B2 are given in Tables 11, 12 and 13.
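For reference, the enrichment factor underlying ΔEF can be sketched in a few lines. This is a minimal illustration assuming the standard EF definition (active rate in the top fraction of the ranked database over the active rate in the whole database); the ranking and labels are toy data, not taken from the DUD sets:

```python
def enrichment_factor(ranked_labels, fraction):
    """EF at the given database fraction: the active rate among the
    top-ranked compounds divided by the active rate in the whole database."""
    n = len(ranked_labels)
    top = ranked_labels[:max(1, int(n * fraction))]
    return (sum(top) / len(top)) / (sum(ranked_labels) / n)

# Toy ranking: 1 = active ligand, 0 = decoy.
ranking = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
print(enrichment_factor(ranking, 0.10))  # 2.5: the top 10% is all active
```

ΔEF is then simply the difference between the EF values of two rankings of the same database at the same retrieved fraction.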

Table 8 Mean Δ EF and standard deviation for the MCS and MCS ext similarity methods (approach B1)
Table 9 Mean Δ EF and standard deviation for the ECFP and ECFP ext similarity methods (approach B1)
Table 10 Mean Δ EF and standard deviation for the bind_fp similarity method (approach B1)
Table 11 Mean Δ EF values and standard deviations for the MCS and MCS ext similarity methods (approach B2)
Table 12 Mean Δ EF values and standard deviations for the ECFP and ECFP ext similarity methods (approach B2)
Table 13 Mean Δ EF and standard deviation for the bind_fp similarity method (approach B2)

Testing for improvement of the extended similarity over the baseline similarity, on average, for a given data set, we find the following win/loss counts for a fixed coefficient α = 0.3 weighting the contribution of the similarity extension in Table 11 (MCS vs. MCS ext, approach B2): 8:2 (at 1%), 7:3 (at 5%), 8:2 (at 10%). Similar or even stronger results can be found for other settings, in particular for retrieving 10% of the compounds: 8:2 in Table 12 (ECFP vs. ECFP ext, approach B2), 10:0 in Table 8 (MCS vs. MCS ext, approach B1) and 8:2 in Table 9 (ECFP vs. ECFP ext, approach B1).
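The role of the weighting coefficient α can be illustrated with a small sketch. The exact formula is equation (1) in the Materials and methods section; here we assume a simple convex combination of the base structural similarity and a Tanimoto similarity on the bind_fp occurrence fingerprints, with the fingerprints and similarity values made up for illustration:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient on binary fingerprints given as bit lists."""
    both = sum(a & b for a, b in zip(fp_a, fp_b))
    either = sum(a | b for a, b in zip(fp_a, fp_b))
    return both / either if either else 0.0

def extended_similarity(sim_base, fp_query, fp_db, alpha=0.3):
    """Blend a base structural similarity (e.g., MCS or ECFP) with the
    bind_fp similarity; alpha weights the extension's contribution."""
    return (1 - alpha) * sim_base + alpha * tanimoto(fp_query, fp_db)

# Base similarity 0.5; both compounds contain the same two of three fragments.
print(extended_similarity(0.5, [1, 0, 1], [1, 0, 1]))  # 0.7*0.5 + 0.3*1.0 = 0.65
```

With α = 0, the ranking reduces to the base similarity; with α = 1, only the fragment occurrence fingerprint matters.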

To check whether these results are statistically significant, we chose one of the weakest significance tests, the sign test [22], which relies on only one weak assumption, namely the independence of the measurements. The sign test yields a p-value ≤ 0.109 for a result of 8 wins vs. 2 losses, a p-value ≤ 0.0215 for 9 wins vs. 1 loss, and an even smaller p-value for 10 wins vs. 0 losses. We apply the sign test to determine whether ΔEF is on average greater for one method compared to another for a given data set.
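The quoted p-values follow directly from the binomial distribution; a short sketch of the two-sided sign test (ties ignored) reproduces them:

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided sign test p-value: the probability of a result at least
    this extreme under a fair coin, doubled (capped at 1)."""
    n = wins + losses
    tail = sum(comb(n, k) for k in range(max(wins, losses), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(round(sign_test_p(8, 2), 3))   # 0.109
print(round(sign_test_p(9, 1), 4))   # 0.0215
print(round(sign_test_p(10, 0), 5))  # 0.00195
```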

While the results already show improvements for a fixed α of 0.3, one might be interested in the results for an optimal α, which is not known beforehand. It is also interesting to know into which range optimal values of α fall and whether 0.3 is a suitable default. Results are shown in Tables 14, 15 and 16 as well as in Figures 19 and 20. As it turns out, the win/loss statistics can be improved further, e.g., from 8:2, 7:3, 8:2 to 10:0, 9:0, 9:1, respectively. On the other hand, the optimal values of α vary somewhat, with 0.3 not being too large for most data sets and most percentages of retrieved compounds (see Table 14).

Table 14 Best α coefficients for the MCS ext and ECFP ext similarity methods (approach B2)
Table 15 Mean Δ EF and standard deviation using the best α coefficients (approach B1)
Table 16 Mean Δ EF and standard deviation using the best α coefficients (approach B2)
Figure 19

Plot of α vs. mean ΔEF for MCS ext. The combining factor α (x-axis) is plotted against the mean ΔEF for MCS ext (y-axis); approach B2.

Figure 20

Plot of α vs. mean ΔEF for ECFP ext. The combining factor α (x-axis) is plotted against the mean ΔEF for ECFP ext (y-axis); approach B2.

To account for the variation of ΔEF across the different sets within a cross-validation (see the standard deviations in Tables 8, 9, 10, 11, 12 and 13), we checked whether the scores of two compared methods go up and down in a concerted fashion. For this purpose, we present the win/loss statistics for a fixed α of 0.3 in Tables 17 and 18. As can be seen in these tables, the proportion of 8:2 or 9:1 still holds when zooming in on the individual data sets from Tables 8, 9, 11 and 12. However, these results are no longer independent, so the sign test cannot be applied here.

Table 17 Win/loss counts for ten random folds for extended similarities on DuD set (α = 0.3; approach B1)
Table 18 Win/loss counts for ten random folds for extended similarities on DuD set (α = 0.3; approach B2)

To investigate whether the extension similarity sim_bind_fp on its own is better than the base similarity measures MCS and ECFP, we provide Tables 10 and 13. The results show that the bind_fp similarity is in general not better on its own than the base similarities. Only at 10% of the database in approach B1 does the bind_fp similarity rank better than MCS or ECFP.

Our final results on the DUD data sets concern the question of whether the method is sensitive to the choice of a suitable α. For this purpose, we present the win/loss statistics for a wide range of α values (from 0.0 to 1.0 with a step size of 0.1), across all the data sets from cross-validation, in Tables 19 and 20. Quite surprisingly, the choice of α does not appear to have a strong influence on the win/loss statistics: the proportion of roughly 8:2 or 9:1 still holds in this experiment. Therefore, we conclude that the method is reasonably robust with regard to the choice of α.
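The robustness check above amounts to counting wins and losses per data set for each α on the grid; the bookkeeping can be sketched as follows (the paired mean ΔEF values are hypothetical, standing in for the cross-validation results):

```python
def win_loss(ext_scores, base_scores):
    """Count data sets where the extended similarity's mean ΔEF beats
    (or trails) the baseline's; ties count for neither side."""
    wins = sum(e > b for e, b in zip(ext_scores, base_scores))
    losses = sum(e < b for e, b in zip(ext_scores, base_scores))
    return wins, losses

# The alpha grid used for the sensitivity analysis: 0.0, 0.1, ..., 1.0.
alphas = [round(0.1 * i, 1) for i in range(11)]

# Hypothetical mean ΔEF values for three data sets at one value of alpha:
print(win_loss([1.2, 0.8, 2.0], [1.0, 0.9, 1.5]))  # (2, 1)
```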

Table 19 Win/loss counts for all random folds for extended similarities on DuD set (α = 0.0, 0.1, …, 1.0; approach B1)
Table 20 Win/loss counts for all random folds for extended similarities on DuD set (α = 0.0, 0.1, …, 1.0; approach B2)

Comparing the data-mining-based extension results for the HMGR data set (first rows, denoted HMGR, in Tables 8 and 9) with the by-hand results on HMGR in Table 6 (rows denoted “lig vs. DuD set”), we see that the ΔEF values are slightly better for the by-hand extension, but both variants of the data-mining-based approach are quite competitive. The ECFP ext results of variant B1 are even better than the by-hand results.

As final experiments to test our data-mining-based approaches B1 and B2, we added 25 ChEMBL activity class data sets. The results for approaches B1 and B2 are given in Tables 21 and 22, respectively. For these data sets, the win counts over all data sets are 19, 21, 21 (MCS ext) and 18, 22, 22 (ECFP ext) out of 25 possible, at 1%, 5% and 10% of the database. According to the sign test, the difference between extended and non-extended similarities is significant at a level of 0.05 [22].

Table 21 Mean Δ EF and standard deviation for the experiments on the ChEMBL activity classes (approach B1)
Table 22 Mean Δ EF and standard deviation for the experiments on the ChEMBL activity classes (approach B2)


Conclusions

Structural similarity measures, especially ECFP fingerprints, have been reported to be superior to non-substructural fingerprints [23]. This work shows that, and how, structural similarity methods used in virtual screening can be improved further by integrating background knowledge on binding-relevant structural features. We presented an approach based on by-hand selection of the background knowledge as well as an approach based on fragment-based data mining. From our experimental evaluation we conclude that the addition of even a single binding-relevant substructural feature of a known ligand can substantially improve the enrichment factors in virtual screening. We additionally show that data-mining-based knowledge extraction gives results competitive with time-consuming by-hand selection of relevant features.


References

  1. Terstappen G, Reggiani A: In silico research in drug discovery. Trends Pharmacol Sci. 2001, 22: 23-26.

  2. van de Waterbeemd H, Gifford E: ADMET in silico modelling: towards prediction paradise?. Nat Rev Drug Discov. 2003, 2: 192-204. 10.1038/nrd1032.

  3. Rückert U, Kramer S: Frequent free tree discovery in graph data. Proceedings of the ACM Symposium on Applied Computing (SAC’04). 2004, New York, NY, USA: ACM Press, 564-570.

  4. Raymond J, Gardiner E, Willett P: RASCAL: calculation of graph similarity using maximum common edge subgraphs. Comput J. 2002, 45 (6): 631-644. 10.1093/comjnl/45.6.631.

  5. Rogers D, Hahn M: Extended-connectivity fingerprints. J Chem Inf Model. 2010, 50 (5): 742-754. 10.1021/ci100050t.

  6. Wallis W, Shoubridge P, Kraetz M, Ray D: Graph distances using graph union. Pattern Recognit Lett. 2001, 22: 701-704. 10.1016/S0167-8655(01)00022-8.

  7. Weininger D, Weininger A, Weininger J: SMILES. 2. Algorithm for generation of unique SMILES notation. J Chem Inf Comput Sci. 1989, 29 (2): 97-101. 10.1021/ci00062a008.

  8. Stalring J, Carlsson L, Almeida P, Boyer S: AZOrange - high performance open source machine learning for QSAR modeling in a graphical programming environment. J Cheminformatics. 2011, 3: 28. 10.1186/1758-2946-3-28.

  9. Knox C, Law V, Jewison T, Liu P, Ly S, Frolkis A, Pon A, Banco K, Mak C, Neveu V, Djoumbou Y, Eisner R, Guo AC, Wishart DS: DrugBank 3.0: a comprehensive resource for ‘Omics’ research on drugs. Nucl Acids Res. 2011, 39 (suppl 1): D1035-D1041.

  10. Huang N, Shoichet B, Irwin J: Benchmarking sets for molecular docking. J Med Chem. 2006, 49 (23): 6789-6801. 10.1021/jm0608356.

  11. Heikamp K, Bajorath J: Large-scale similarity search profiling of ChEMBL compound data sets. J Chem Inf Model. 2011, 51 (8): 1831-1839. 10.1021/ci200199u.

  12. Irwin JJ, Sterling T, Mysinger MM, Bolstad ES, Coleman RG: ZINC: a free tool to discover chemistry for biology. J Chem Inf Model. 2012, 52 (7): 1757-1768. 10.1021/ci3001277.

  13. Lewington S, Whitlock G, Clarke R, Sherliker P, Emberson J, Halsey J, Qizilbash N, Peto R, Collins R: Blood cholesterol and vascular mortality by age, sex, and blood pressure: a meta-analysis of individual data from 61 prospective studies with 55000 vascular deaths. The Lancet. 2007, 370 (9602): 1829-1839.

  14. Eisenberg D: Cholesterol lowering in the management of coronary artery disease: the clinical implications of recent trials. Am J Med. 1998, 104 (2, Supplement 1): 2S-5S. 10.1016/S0002-9343(98)00038-2.

  15. Endo A, Kuroda M, Tanzawa K: Competitive inhibition of 3-hydroxy-3-methylglutaryl coenzyme A reductase by ML-236A and ML-236B fungal metabolites, having hypocholesterolemic activity. FEBS Lett. 1976, 72 (2): 323-326. 10.1016/0014-5793(76)80996-9.

  16. Berman H, Westbrook J, Feng Z, Gilliland G, Bhat T, Weissig H, Shindyalov I, Bourne P: The Protein Data Bank. Nucl Acids Res. 2000, 28: 235-242. 10.1093/nar/28.1.235.

  17. Istvan E, Deisenhofer J: Structural mechanism for statin inhibition of HMG-CoA reductase. Science. 2001, 292 (5519): 1160-1164. 10.1126/science.1059344.

  18. Scarsi M, Podvinec M, Roth A, Hug H, Kersten S, Albrecht H, Schwede T, Meyer UA, Ruecker C: Sulfonylureas and glinides exhibit peroxisome proliferator-activated receptor gamma activity: a combined virtual screening and biological assay approach. Mol Pharmacol. 2007, 71 (2): 398-406.

  19. Bemis GW, Murcko MA: The properties of known drugs. 1. Molecular frameworks. J Med Chem. 1996, 39 (15): 2887-2893. 10.1021/jm9602928.

  20. Evers A, Klabunde T: Structure-based drug discovery using GPCR homology modeling: successful virtual screening for antagonists of the alpha1A adrenergic receptor. J Med Chem. 2005, 48 (4): 1088-1097. 10.1021/jm0491804.

  21. Liberato MV, Nascimento AS, Ayers SD, Lin JZ, Cvoro A, Silveira RL, Martínez L, Souza PCT, Saidemberg D, Deng T, Amato AA, Togashi M, Hsueh WA, Phillips K, Palma MS, Neves FAR, Skaf MS, Webb P, Polikarpov I: Medium chain fatty acids are selective peroxisome proliferator activated receptor (PPAR) gamma activators and pan-PPAR partial agonists. PLoS ONE. 2012, 7 (5): e36297. 10.1371/journal.pone.0036297.

  22. Demšar J: Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006, 7: 1-30.

  23. Hert J, Willett P, Wilton DJ, Acklin P, Azzaoui K, Jacoby E, Schuffenhauer A: Comparison of topological descriptors for similarity-based virtual screening using multiple bioactive reference structures. Org Biomol Chem. 2004, 2: 3256-3266. 10.1039/b409865j.

Acknowledgements


We thank A. Schafferhans for her guidance on the usage of the Schrödinger Maestro Suite.

Author information

Authors and Affiliations


Corresponding author

Correspondence to Stefan Kramer.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

TG implemented the similarity methods, designed and carried out the experiments and evaluation, and wrote the manuscript. LP did the docking experiments and contributed to the manuscript. SK helped to draft the manuscript and contributed to the experimental design and setup. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Supplementary material for improving structural similarity based virtual screening using background knowledge. The supplementary information contains more extensive result tables and additional mathematical equations. (PDF 131 KB)


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Girschick, T., Puchbauer, L. & Kramer, S. Improving structural similarity based virtual screening using background knowledge. J Cheminform 5, 50 (2013).
