Machine learning in the life sciences with KNIME

General-audience presentation from the May KNIME Meetup in Boston


  1. Machine Learning in the Life Sciences... with KNIME! Gregory Landrum, NIBR Informatics, Novartis Institutes for BioMedical Research, Basel
  2. Cartoon machine learning. Training a model: training data → training → model. Using a model: new items + model → predictions.
  3. The data: introducing vocabulary. Descriptors (the model inputs) and the end point (the property we want to predict).
  4. A typical life-sciences problem. Training a model: the training data are literature molecules active for an interesting protein target. Using a model: the new items are new molecules we are thinking about making; the predictions are a prioritized list.
  5. A problem... Here’s what our input looks like. All data taken from ChEMBL (https://www.ebi.ac.uk/chembl/). Good luck training a model with that!
  6. One solution: molecular fingerprints. • Idea: apply a kernel to a molecule to generate a bit vector or (less frequently) a count vector. • Typical kernels extract features of the molecule, hash them, and use the hash to determine which bits should be set. • Typical fingerprint sizes: 1K-4K bits. ...
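The extract-features, hash, set-bits scheme described above can be sketched in a few lines of plain Python. This is an illustration, not the RDKit implementation: the feature strings and the SHA-1 hash are stand-ins for whatever a real fingerprint kernel (atom pairs, Morgan environments) would extract and hash.

```python
import hashlib

def hashed_fingerprint(features, n_bits=2048):
    """Fold extracted molecular features into a fixed-size bit vector.

    `features` stands in for the substructure descriptions a real
    fingerprint kernel would produce; here they are plain strings.
    Each feature is hashed, and the hash picks which bit to set.
    """
    bits = [0] * n_bits
    for feat in features:
        h = int(hashlib.sha1(feat.encode("utf-8")).hexdigest(), 16)
        bits[h % n_bits] = 1  # collisions fold distinct features onto one bit
    return bits

# toy "features" for illustration only
fp = hashed_fingerprint(["C-C", "C-O", "C:N"])
print(len(fp), sum(fp))  # 2048 bits total; at most 3 bits set
```

The same folding is why two different molecules can share bits: with 1K-4K bits and many features, hash collisions are expected and tolerated.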
  7. The toolbox: KNIME + the RDKit. • Open-source RDKit-based nodes for KNIME providing cheminformatics functionality. • Trusted nodes distributed from the KNIME community site. • Work in progress: more nodes being added (a new wizard makes it easy).
  8. What’s there?
  9. Let’s build a model! Step 1: getting the data ready. 100 actives and ~83K assumed inactives. Detail: we’re using Histamine H3 actives and atom-pair fingerprints.
  10. Let’s build a model! Step 2: training. For this example I use 70% of the data (randomly selected) to train the model. Detail: the model is a depth-limited random forest with 500 trees.
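The 70/30 random split used in this step is easy to make explicit. A minimal stdlib-only sketch; the fixed seed is an assumption added here for reproducibility, and the integer "rows" stand in for fingerprint/label records (the actual workflow uses a KNIME partitioning node):

```python
import random

def train_test_split(items, train_frac=0.7, seed=42):
    """Randomly partition items into train and test sets.

    train_frac=0.7 mirrors the 70/30 split on the slide; the seed
    only makes the split reproducible for this example.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(1000))  # placeholder records
train, test = train_test_split(rows)
print(len(train), len(test))  # 700 300
```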
  11. Let’s build a model! Step 3: testing. Test with the 30% of the data that was not used to build the model. The model is 99.9% accurate; unfortunately, it says “inactive” almost all the time. This makes sense given how unbalanced the data is.
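Why "99.9% accurate" is misleading here is worth making concrete. Using roughly the class balance quoted earlier (100 actives vs ~83K assumed inactives), a degenerate model that always predicts "inactive" already scores about 99.88% accuracy while retrieving zero actives:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# class balance taken from the slides: 100 actives (1), 83,000 inactives (0)
y_true = [1] * 100 + [0] * 83_000
always_inactive = [0] * len(y_true)

print(f"{accuracy(y_true, always_inactive):.4f}")  # 0.9988
```

This is why accuracy alone cannot be the yardstick for highly unbalanced data, and why the next slides look at the ROC curve instead.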
  12. Adjusting the model for highly unbalanced data: is there a signal there? Test with the 30% of the data that was not used to build the model. Obviously a strong signal there; we just need to figure out how to use it.
  13. Adjusting the model for highly unbalanced data: is there a signal there? Test with the 30% of the data that was not used to build the model. Obviously a strong signal there; we just need to figure out how to use it. How about changing the decision boundary? Find the model score that corresponds to this point on the ROC curve for the training data.
  14. Adjusting the model for highly unbalanced data: shifting the decision boundary. Set the decision boundary here (training-data ROC). Now we’ve got a >99% accurate model that does a good job of retrieving actives without mixing in too many inactives.
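The step "find the model score that corresponds to this point in the ROC curve" can be sketched as a threshold scan over the training-set scores. The slides pick the operating point from the plotted ROC by eye; as an explicit stand-in criterion, this sketch maximizes TPR - FPR (Youden's J). The scores and labels below are invented toy values, not the model's output:

```python
def best_threshold(scores, labels):
    """Pick the score cutoff maximizing TPR - FPR (Youden's J).

    scores: predicted probability of "active" per example.
    labels: 1 for active, 0 for inactive.
    Items with score >= threshold are then called active.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):  # each distinct score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# toy training-set scores and labels for illustration
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(best_threshold(scores, labels))  # 0.4
```

Shifting the decision boundary means classifying anything scoring at or above this cutoff as active, instead of using the default 0.5 majority vote.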
  15. Wrapping up. • We were able to build very accurate random forests for predicting biological activity by adjusting the decision boundary for models built on highly unbalanced data. • The same approach works with the KNIME “Fingerprint Bayesian” nodes. • Acknowledgements: Manuel Schwarze (NIBR), Sereina Riniker (NIBR), Nikolas Fechner (NIBR), Bernd Wiswedel (KNIME), Dean Abbott (Abbott Analytics).
  16. Advertising: 3rd RDKit User Group Meeting, 22-24 October 2014, Merck KGaA, Darmstadt, Germany. Talks, “talktorials”, lightning talks, social activities, and a hackathon on the 24th. Announcement and (free) registration links at www.rdkit.org. We’re looking for speakers; please contact greg.landrum@gmail.com.
