Keeping predictive models up to date is challenging
• Versioning of models not trivial
Advantages:
• Fast: run on local computers
• Interpretable results: can be used for hypothesis generation
• General: can integrate any modeling technique and be applied to any data set
• Extensible: very easy to add new components
European project for creating an interoperable framework for toxicity predictions
• Academia and industry
Parts:
• Ontology and API
• Query and invocation of predictive services
• Methods and algorithms
• Authentication and authorization
Continuous modeling - automating model building on high-performance e-Infrastructures
Department of Pharmaceutical Biosciences and Science for Life Laboratory, Uppsala, Sweden
Today: We have access to high-throughput technologies to study biological phenomena
New challenges: Data management and analysis
• Analysis methods, pipelines
• Data integration, security
My research focus
• Enabling high-throughput biology, from e-infrastructures and up
– Massively parallel sequencing, metabolomics
– Predictive modeling in toxicology and pharmacology
• Particular focus on large-scale predictive modeling
– Tackle large problems
– Evaluate predictive performance
– Easy and secure sharing/consumption of models
– Automate re-building of models
• Predictive toxicology and pharmacology are becoming data-intensive
– High-throughput technologies
• Drug/chemical screening
• Molecular biology (omics)
– More and bigger publicly available data
• Data is continuously updated
• Signatures [1] descriptor in CDK [2]
– Canonical representation of atom environments
• Support Vector Machine (SVM)
– Robust modeling
1. Faulon, J.-L.; Visco, D. P.; Pophale, R. S. J. Chem. Inf. Comput. Sci. 2003, 43, 707-720.
2. Steinbeck, C.; Han, Y.; Kuhn, S.; Horlacher, O.; Luttmann, E.; Willighagen, E. J. Chem. Inf. Comput. Sci. 2003, 43, 493-500.
Interpretation of nonlinear QSAR models
– Compute the gradient of the decision function for a prediction
– Extract the descriptor(s) with the largest component in the gradient
• Demonstrated on RF and SVM, among other methods
Carlsson, L., Helgee, E. A., and Boyer, S. Interpretation of nonlinear QSAR models applied to Ames mutagenicity data. J. Chem. Inf. Model. 49, 11 (Nov 2009), 2551-2558.
E. Ahlberg, O. Spjuth, C. Hasselgren, and L. Carlsson. Interpretation of Conformal Prediction
Classification Models. In Statistical Learning and Data Sciences, vol. 9047 of Lecture Notes in
Computer Science. Springer International Publishing, 2015, pp. 323–334.
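The gradient-based interpretation above can be sketched in plain Python: a toy RBF-SVM decision function, a finite-difference gradient, and selection of the descriptor with the largest gradient component. The support vectors, coefficients, and query values below are made-up illustration numbers, not data from the cited papers:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def decision_function(x, support_vectors, dual_coefs, bias=0.0):
    """SVM decision value f(x) = sum_i alpha_i*y_i * K(sv_i, x) + b."""
    return sum(c * rbf_kernel(sv, x)
               for sv, c in zip(support_vectors, dual_coefs)) + bias

def gradient(x, support_vectors, dual_coefs, eps=1e-5):
    """Central finite-difference gradient of the decision function at x."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        fp = decision_function(xp, support_vectors, dual_coefs)
        fm = decision_function(xm, support_vectors, dual_coefs)
        grad.append((fp - fm) / (2 * eps))
    return grad

def most_important_descriptor(x, support_vectors, dual_coefs):
    """Index of the descriptor with the largest absolute gradient component."""
    g = gradient(x, support_vectors, dual_coefs)
    return max(range(len(g)), key=lambda i: abs(g[i]))

# Toy model: two support vectors in a 3-descriptor space.
svs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
coefs = [1.0, -1.0]                # alpha_i * y_i
query = [0.9, 0.2, 0.5]
print(most_important_descriptor(query, svs, coefs))   # prints 1
```

In practice the gradient can often be computed analytically for a given kernel, but finite differences make the recipe model-agnostic, which is what allows it to be demonstrated across RF and SVM models.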
Modeling large number of observations on HPC
Aim: Measure predictive performance when QSAR datasets get larger
• When do we need HPC?
• How can we work efficiently with HPC in this setting?
• Are nonlinear methods required?
• Computationally expensive problems call for high-performance e-infrastructures
• High-Performance Computing (HPC)
– Fast interconnect between compute nodes
• High-Throughput Computing (HTC)
– Fast interconnect not needed
• Cloud Computing (CC)
– Infrastructure as a Service (IaaS)
UPPMAX high-performance computing center
• Get access to multiple nodes
– 16 compute cores per node
• Get access to large memory machines
– we have nodes with 128, 256, 512, or 2000 GB RAM
• OpenStack private cloud
• However, on HPC:
– Terminal usage only, no web servers allowed (scripting in bash, perl and python is common)
– Queuing system (e.g. SLURM, SGE)
– Limited job length (e.g. 10 days)
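The constraints above (queuing system, per-node core counts, capped job length) shape how jobs are submitted. A minimal SLURM batch-script sketch; the account name, partition, module name, and script path are placeholders, not actual UPPMAX settings:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a model-building job on a cluster
# such as UPPMAX. Account, partition, and file names are placeholders.
#SBATCH -A snic-project-xyz        # compute project/account (placeholder)
#SBATCH -p core                    # partition (placeholder)
#SBATCH -n 16                      # request one full 16-core node
#SBATCH -t 10-00:00:00             # walltime, within the 10-day job limit
#SBATCH -J build_qsar_model        # job name

module load python                 # software comes from an environment module system
python build_model.py --in data.sdf --out model.bin   # placeholder pipeline step
```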
Levels of automation in sequence analysis
• Production: Can be fully automated
• Secondary analysis: Partly automated
• Researchers: Basic science is not really useful to automate; flexibility is needed
Training large number of datasets on HPC
Aim: Build models for hundreds or thousands of targets
– Challenge to automate data management
– Challenge to automate model building
Hypothesis: Workflow systems can enable agile large-scale modeling
Automating analysis on clusters
• Workflow systems can aid development and automation of analyses
• We extended the Luigi system into SciLuigi
• Integrate with batch queuing system on HPC
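The core idea that Luigi/SciLuigi builds on can be shown in a stdlib-only sketch: tasks declare their dependencies, and a runner executes them in dependency order, skipping work that is already complete. This is a toy illustration of the pattern, not the Luigi or SciLuigi API:

```python
# Minimal, stdlib-only sketch of the idea behind workflow systems such
# as Luigi/SciLuigi: tasks declare dependencies, the runner executes
# them depth-first in dependency order and skips completed tasks.
class Task:
    requires = []          # upstream task classes
    done_tasks = set()     # class-level record of completed tasks

    def complete(self):
        return type(self).__name__ in Task.done_tasks

    def run(self):
        raise NotImplementedError

def build(task_cls):
    """Run all dependencies of a task before the task itself."""
    task = task_cls()
    if task.complete():
        return                     # output already exists: skip
    for dep in task_cls.requires:
        build(dep)
    task.run()
    Task.done_tasks.add(task_cls.__name__)

order = []                         # records execution order for illustration

class PrepareData(Task):
    def run(self): order.append("prepare")

class TrainModel(Task):
    requires = [PrepareData]
    def run(self): order.append("train")

class EvaluateModel(Task):
    requires = [TrainModel]
    def run(self): order.append("evaluate")

build(EvaluateModel)
print(order)                       # prints ['prepare', 'train', 'evaluate']
```

Real workflow systems add what this sketch omits: persistent file targets instead of an in-memory "done" set, parameterized tasks, and integration with a batch queuing system so that each `run()` becomes a cluster job.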
Modeling large datasets on HPC
• Publish models for easy access
• We use P2 (OSGi) provisioning
Bioclipse and OpenTox
E. Willighagen, N. Jeliazkova, B. Hardy, R. Grafström, and O. Spjuth
Computational toxicology using the OpenTox application programming interface and Bioclipse.
BMC Research Notes 2011, 4:487
Could cloud computing improve/simplify modeling?
Modeling on Amazon Elastic Compute Cloud (EC2)
[Figure: results for 1, 2, 4, 8, and 16 cores]
B. T. Moghadam, J. Alvarsson, M. Holm, M.
Eklund, L. Carlsson, and O. Spjuth
Scaling predictive modeling in drug
development with cloud computing.
J. Chem. Inf. Model., 2015, 55 (1), pp 19-25
• H2020 infrastructure project (2015-2018)
• Platform for metabolomics data analysis – study metabolites in primarily clinical settings
• Integrating data and tools
• Data management, privacy
• Cloud/Microservices architecture
Could Big Data frameworks improve/simplify modeling?
• Map/Reduce, Hadoop, Spark, HDFS/distributed file
systems and others…
• Recently received a lot of attention
• Allow for massively parallel analysis
• How useful are they in pharmaceutical bioinformatics?
Hadoop (MapReduce) for massively parallel analysis
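The MapReduce model named above can be illustrated on a single machine in plain Python: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce aggregates each group. Hadoop distributes exactly these phases across a cluster; the 4-mer counting task is just a toy example:

```python
from collections import defaultdict

# Single-machine illustration of the MapReduce phases that Hadoop
# distributes across a cluster: map -> shuffle (group by key) -> reduce.
def map_phase(records):
    """Emit (key, 1) pairs; here: count 4-mers in DNA reads (toy mapper)."""
    for read in records:
        for i in range(len(read) - 3):
            yield read[i:i + 4], 1

def shuffle_phase(pairs):
    """Group emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values; here: sum the counts."""
    return {key: sum(values) for key, values in groups.items()}

reads = ["ACGTACGT", "ACGTT"]
counts = reduce_phase(shuffle_phase(map_phase(reads)))
print(counts["ACGT"])   # "ACGT" appears twice in read 1, once in read 2: prints 3
```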
Evaluating Hadoop for sequence analysis
• Compare Hadoop and HPC
– Create pipelines that are as identical as possible
– Investigate scaling and performance
– Show the bottlenecks of current HPC
A. Siretskiy, L. Pireddu, T. Sundqvist, and O. Spjuth.
A quantitative assessment of the Hadoop framework for
analyzing massively parallel DNA sequencing data.
Gigascience. 2015; 4:26.
Distributed modeling with Spark
• Appealing programming methodology
• Built-in data locality and in-memory processing
– RDD (Resilient Distributed Dataset): distributed large-scale dataset
– MLlib: Spark-based distributed implementation of many ML algorithms
[Figure: logistic regression in Hadoop]
Parallel Virtual Screening with Spark
Hypothesis: The Spark framework can be used for trivially parallelizable problems in pharmaceutical bioinformatics
• Demonstrate on Virtual Screening
• Used OpenEye suite
• Spark API allows for simple programmatic parallelization
• Good scalability in terms of speedup
• Lack of documentation
L. Ahmed, A. Edlund, E. Laure, O. Spjuth.
Using Iterative MapReduce for Parallel Virtual Screening. Cloud Computing Technology and Science (CloudCom), 2013 IEEE 5th International Conference on, vol. 2, pp. 27-32, 2013.
Conformal Prediction in Spark
• Evaluate confidence in predictions
• We implemented Inductive Conformal
Prediction (ICP) in Spark, extending MLlib
• Tested on 2 large data sets
– HIGGS: 11M examples. Task: distinguish between Higgs boson signal process and background
– SUSY: 5M examples. Task: distinguish between supersymmetric particle signal process and background
• Valid predictions
• Good scalability
M. Capuccini, L. Carlsson, U. Norinder and O. Spjuth.
Conformal Prediction in Spark: Large-Scale Machine Learning with Confidence.
Accepted in IEEE Transactions on Cloud Computing, 2015.
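The Inductive Conformal Prediction recipe itself is small, which is what makes it practical to distribute. A stdlib-only sketch using a toy nearest-centroid nonconformity score (the cited work builds on MLlib models; the data, labels, and score here are illustrative only):

```python
import math

# Stdlib-only sketch of Inductive Conformal Prediction (ICP) for
# classification: fit on a proper training set, score a held-out
# calibration set, then turn nonconformity scores into p-values.
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def icp(train, calibration, x, labels, significance):
    """Return the prediction set for x at the given significance level."""
    centroids = {y: centroid([p for p, c in train if c == y]) for y in labels}
    # Toy nonconformity: distance to the centroid of the (candidate) class.
    cal_scores = [dist(p, centroids[c]) for p, c in calibration]
    pred_set = []
    for y in labels:
        a = dist(x, centroids[y])
        # p-value: fraction of calibration scores at least as nonconforming.
        p_value = (sum(s >= a for s in cal_scores) + 1) / (len(cal_scores) + 1)
        if p_value > significance:
            pred_set.append(y)
    return pred_set

train = [([0.0, 0.0], "bg"), ([0.2, 0.1], "bg"),
         ([5.0, 5.0], "sig"), ([4.8, 5.2], "sig")]
cal = [([0.1, 0.2], "bg"), ([5.1, 4.9], "sig"),
       ([0.3, 0.0], "bg"), ([4.9, 5.1], "sig")]
print(icp(train, cal, [5.0, 5.1], ["bg", "sig"], significance=0.2))
```

This is why ICP fits frameworks like Spark: the expensive parts (model fitting, scoring) parallelize over the data, while the conformal layer only needs the sorted calibration scores, and the resulting prediction sets are valid at the chosen confidence level.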
• Automation/continuous modeling is not trivial
– Data management, modeling, model management/governance
• Conformal prediction
– Predictions with confidence
• Large-scale problems require computational power
– Cloud computing vs High-Performance Computing
• Workflows and Big Data frameworks
– Immature technologies, not well documented
– Can be useful for large-scale analysis in pharmaceutical bioinformatics, especially for automation
Some ongoing projects
• Augment parallel virtual screening with machine learning
• Further develop conformal prediction in distributed environments
• Large-scale target predictions
• Continue evaluating Spark vs. workflows and cloud vs. HPC
– Still not reached a good agile system, but we are getting closer
• The group is open for collaborations.