Learning to Sample: Exploiting Similarities across Environments to Learn Performance Models for Configurable Systems
1. Pooyan Jamshidi, Miguel Velez, Christian Kaestner, Norbert Siegmund
6. Systems are becoming increasingly more configurable and, therefore, their performance behavior is more difficult to understand
[Figure: number of configuration parameters vs. release time for Hadoop (MapReduce, HDFS), growing across versions 0.1.0 to 2.0.0 (1/2006–1/2014); y-axis 0–200]
[Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
7. Systems are becoming increasingly more configurable and, therefore, their performance behavior is more difficult to understand
[Figure: same parameter-growth plot as on slide 6]
[Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
2^180 vs. 2^18 = a 2^162 increase in the size of the configuration space
8. Empirical observations confirm that systems are becoming increasingly more configurable
[Screenshot of the FSE’15 paper (UC San Diego, Huazhong Univ. of Science & Technology, NetApp, Inc.) with its plots of the number of configuration parameters over release time: Storage-A (7/2006–7/2014, y-axis 0–700), MySQL (versions 3.23.0–5.6.2, 1/1999–1/2014, y-axis 0–500), Apache (versions 1.3.14–2.3.4, 1/1998–1/2014, y-axis 0–600), and Hadoop MapReduce/HDFS (versions 0.1.0–2.0.0, 1/2006–1/2014, y-axis 0–200)]
[Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
9.
10. Configurations determine the performance behavior of systems
void Parrot_setenv(... name, ... value) {
#ifdef PARROT_HAS_SETENV
    my_setenv(name, value, 1);
#else
    int name_len = strlen(name);
    int val_len = strlen(value);
    char *envs = glob_env;
    if (envs == NULL) {
        return;
    }
    strcpy(envs, name);
    strcpy(envs + name_len, "=");
    strcpy(envs + name_len + 1, value);
    putenv(envs);
#endif
}
#ifdef LINUX
extern int Parrot_signbit(double x) {
(highlighted on the slide: the #ifdef / #else / #endif blocks guarded by PARROT_HAS_SETENV and LINUX)
11.–13. Configurations determine the performance behavior of systems (repeats of the Parrot_setenv code from slide 10, highlighting different preprocessor options)
14. Configuration options interact, creating a non-linear and complex performance behavior
• Non-linear
• Non-convex
• Multi-modal
[Figure: latency (ms), roughly 100–300, as a function of the number of counters and the number of splitters; cubic interpolation over a finer grid]
15. The default configuration is typically bad and the optimal configuration is noticeably better than the median
[Figure: average write latency (s), 0–5000, vs. throughput (ops/sec), 0–1500; arrows mark the “better” direction on each axis]
16.–18. (builds of slide 15, successively marking the default configuration and the optimal configuration)
19. The default configuration is typically bad and the optimal configuration is noticeably better than the median
[Figure: average write latency (s) vs. throughput (ops/sec), with the default and optimal configurations marked]
• Default is bad
• 2X–10X faster than worst
• Noticeably faster than median
40. We looked at different highly-configurable systems to gain insights about similarities across environments
[P. Jamshidi, et al., “Transfer learning for performance modeling of configurable systems…”, ASE’17]
• SPEAR (SAT solver): analysis time; 14 options; 16,384 configurations; SAT problems; 3 hardware; 2 versions
• X264 (video encoder): encoding time; 16 options; 4,000 configurations; video quality/size; 2 hardware; 3 versions
• SQLite (DB engine): query time; 14 options; 1,000 configurations; DB queries; 2 hardware; 2 versions
• SaC (compiler): execution time; 50 options; 71,267 configurations; 10 demo programs
41. Observation 1: Not all options and interactions are influential, and the degree of interaction between options is not high
̂fs( ⋅ ) = 1.2 + 3o1 + 5o3 + 0.9o7 + 0.8o3o7 + 4o1o3o7
ℂ = O1 × O2 × O3 × O4 × O5 × O6 × O7 × O8 × O9 × O10
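The structure claimed by Observation 1 can be checked mechanically. Below is a minimal sketch (Python, written for this summary; `f_hat` encodes the model from the slide with the ten options indexed 0–9) that enumerates all 2^10 configurations and measures each option's average effect on the prediction:

```python
from itertools import product

# Performance-influence model from the slide: out of 10 options, only o1, o3
# and o7 appear, and the highest interaction degree is three.
def f_hat(cfg):
    o1, o3, o7 = cfg[0], cfg[2], cfg[6]
    return 1.2 + 3*o1 + 5*o3 + 0.9*o7 + 0.8*o3*o7 + 4*o1*o3*o7

# Influence of option i: average change in the prediction when enabling it.
def effect(i, d=10):
    diffs = [f_hat(cfg[:i] + (1,) + cfg[i + 1:]) - f_hat(cfg)
             for cfg in product((0, 1), repeat=d) if cfg[i] == 0]
    return sum(diffs) / len(diffs)

influential = [i for i in range(10) if abs(effect(i)) > 1e-9]
# → [0, 2, 6], i.e. only o1, o3 and o7 are influential
```

Seven of the ten options have zero effect, which is exactly why a sampler that knows this structure can ignore most of the 2^10 space.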
72. For capturing options and interactions that appear only in the target, L2S relies on exploration (random sampling)
[Diagram: (i) extract knowledge from the source — performance distribution, influential options, influential interactions, performance-influence model; (ii) sampling in the target — transferred knowledge drives exploitation, random sampling drives exploration; measurements <Conf, Perf> on the configurable system feed the learned performance model]
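The exploitation/exploration split can be sketched in a few lines. This is a much-simplified illustration, not the paper's actual algorithm: the option indices, function name, and split sizes are all invented here.

```python
import random

random.seed(0)
d = 10                    # number of binary options
influential = [0, 2, 6]   # transferred from the source model (o1, o3, o7)

def l2s_style_sample(n_exploit, n_explore):
    """Exploitation: enumerate combinations of the influential options,
    keeping all other options off. Exploration: uniform random
    configurations, to catch options that matter only in the target."""
    samples = []
    for k in range(min(n_exploit, 2 ** len(influential))):
        cfg = [0] * d
        for j, i in enumerate(influential):
            cfg[i] = (k >> j) & 1
        samples.append(tuple(cfg))
    for _ in range(n_explore):
        samples.append(tuple(random.randint(0, 1) for _ in range(d)))
    return samples

configs = l2s_style_sample(n_exploit=8, n_explore=4)
```

The exploitation half measures only 2^3 = 8 configurations instead of 2^10, while the random half keeps some probability mass on target-only effects.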
73. L2S transfers knowledge about the structure of performance models from the source to guide the sampling in the target environment
[Diagram: Source (Given) — Model, Data → Transferable Knowledge → Target (Learn); Extract on the source side, Reuse and Learn on the target side]
[Screenshot of Section II of the ASE’17 paper, reconstructed from the slide:]

II. INTUITION
Understanding the performance behavior of configurable software systems can enable (i) performance debugging, (ii) performance tuning, (iii) design-time evolution, or (iv) runtime adaptation [11]. We lack empirical understanding of how the performance behavior of a system will vary when the environment of the system changes. Such empirical understanding will provide important insights to develop faster and more accurate learning techniques that allow us to make predictions and optimizations of performance for highly configurable systems in changing environments [10]. For instance, we can learn the performance behavior of a system on cheap hardware in a controlled lab environment and use that to understand the performance behavior of the system on a production server before shipping to the end user. More specifically, we would like to know what the relationship is between the performance of a system in a specific environment (characterized by software configuration, hardware, workload, and system version) and its performance when we vary its environmental conditions. In this research, we aim for an empirical understanding of performance behavior to improve learning via an informed sampling process.

A. Preliminary concepts
In this section, we provide formal definitions of the concepts that we use throughout this study. The formal notations enable us to concisely convey concepts throughout the paper.
1) Configuration and environment space: Let Fi indicate the i-th feature of a configurable system A, which is either enabled or disabled, and one of them holds by default. The configuration space is mathematically a Cartesian product of all the features, C = Dom(F1) × · · · × Dom(Fd), where Dom(Fi) = {0, 1}. A configuration of a system is then a member of the configuration space (feature space) where all the parameters are assigned a specific value in their range (i.e., complete instantiations of the system’s parameters). We also describe an environment instance by 3 variables e = [w, h, v] drawn from a given environment space E = W × H × V, where they respectively represent sets of possible values for workload, hardware, and system version.
2) Performance model: Given a software system A with configuration space F and environmental instances E, a performance model is a black-box function f : F × E → R given some observations of the system performance for each combination of the system’s features x ∈ F in an environment e ∈ E. To construct a performance model for a system A with configuration space F, we run A in environment instance e ∈ E on various combinations of configurations xi ∈ F, and record the resulting performance values yi = f(xi) + εi, xi ∈ F, where εi ∼ N(0, σi). The training data for our regression models is then simply Dtr = {(xi, yi)}, i = 1, …, n. In other words, a response function is simply a mapping from the input space to a measurable performance metric that produces interval-scaled data (here we assume it produces real numbers).
3) Performance distribution: For the performance model, we measured and associated the performance response to each configuration; now let us introduce another concept where we vary the environment and measure the performance. An empirical performance distribution is a stochastic process, pd : E → Δ(R), that defines a probability distribution over performance measures for each environmental condition. […]
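The training-data construction in definition 2 — measure configurations, record noisy responses, fit a regression on Dtr = {(xi, yi)} — can be sketched as follows. This is an illustration only: the response function is the synthetic two-option example used elsewhere in this deck (plus a third, inert option), and the learner is a plain least-squares fit over main effects and one interaction term, not any specific learner from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth response over 3 binary options (o3 is inert).
def f(x):
    o1, o2, o3 = x
    return 5.0 + 3.0 * o1 + 15.0 * o2 - 7.0 * o1 * o2

# Measure n configurations: D_tr = {(x_i, y_i)} with y_i = f(x_i) + eps_i.
n = 50
X = rng.integers(0, 2, size=(n, 3)).astype(float)
y = np.array([f(x) for x in X]) + rng.normal(0.0, 0.1, size=n)

# Regression over main effects plus one candidate interaction term o1*o2.
design = np.column_stack([np.ones(n), X, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
# coef ≈ [5, 3, 15, 0, -7]: the inert option's coefficient comes out near zero
```

With interval-scaled responses and binary options, the fitted coefficients directly read off as option influences, which is the sense in which a performance model is a "response function".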
74. (same as slide 73, with the learned source model ̂fs( ⋅ ) = 1.2 + 3o1 + 5o3 + 0.9o7 + 0.8o3o7 + 4o1o3o7 overlaid)
76. “Model shift” builds a model in the source and uses the shifted model “directly” for predicting the target
[Pavel Valov, et al., “Transferring performance prediction models…”, ICPE’17]
[Figure: throughput in the source vs. the target, where the target machine is twice as fast; panels of log P(θ, X_obs) over Θ with posteriors P(θ|X_obs)]
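A minimal sketch of the model-shift idea: fit a linear transfer function from a handful of target measurements and then predict the target "directly" through the source model. The names `f_source`, the "twice as fast plus overhead" target, and the sample count are invented for illustration; only the source model's form comes from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source performance model (given), e.g. learned on cheap lab hardware.
def f_source(x):
    o1, o3, o7 = x
    return 1.2 + 3*o1 + 5*o3 + 0.9*o7 + 0.8*o3*o7 + 4*o1*o3*o7

# Hypothetical target environment: twice as fast, plus a constant overhead.
def f_target(x):
    return 0.5 * f_source(x) + 2.0

# Fit the shift g(s) = a*s + b from a few target measurements only.
X = rng.integers(0, 2, size=(10, 3))
s = np.array([f_source(x) for x in X])
t = np.array([f_target(x) for x in X])
A = np.column_stack([s, np.ones_like(s)])
(a, b), *_ = np.linalg.lstsq(A, t, rcond=None)

def predict_target(x):
    # Shifted source model, used directly for target predictions.
    return a * f_source(x) + b
```

Because only the two shift parameters are learned, a few target samples suffice whenever the source and target responses are (approximately) linearly related.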
77. “Data reuse” combines the data collected in the source with the data collected in the target in a “multi-task” setting to predict the target
[P. Jamshidi, et al., “Transfer learning for improving model predictions…”, SEAMS’17]
[Diagram: measure configurations on the TurtleBot and in the simulator (Gazebo); reuse the source data and learn the target model]
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
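One way to see the data-reuse idea is to pool both data sets and let an environment indicator carry the task structure. This is a deliberately simplified sketch (the SEAMS'17 work uses a multi-task Gaussian-process model; here an indicator-feature least-squares fit stands in, and the ground-truth functions and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(o1, o2, env):
    # Hypothetical ground truth: env=0 is the simulator (source),
    # env=1 the real robot (target), which scales and offsets the response.
    base = 5 + 3*o1 + 15*o2 - 7*o1*o2
    return base if env == 0 else 1.3 * base + 4

# Many cheap source measurements, only four expensive target measurements.
Xs = rng.integers(0, 2, size=(40, 2))
Xt = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
ys = np.array([f(o1, o2, 0) for o1, o2 in Xs])
yt = np.array([f(o1, o2, 1) for o1, o2 in Xt])

# Pool both data sets; the env indicator lets one model learn shared
# structure from the source plus an environment-specific correction.
def features(X, env):
    o1, o2 = X[:, 0], X[:, 1]
    base = np.column_stack([np.ones(len(X)), o1, o2, o1 * o2])
    return np.column_stack([base, env * base])

D = np.vstack([features(Xs, 0), features(Xt, 1)])
y = np.concatenate([ys, yt])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

def predict(o1, o2, env):
    return (features(np.array([[o1, o2]]), env) @ coef).item()
```

The source data pins down the shared terms, so the few target samples only have to identify the correction — the same leverage the multi-task setting provides.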
79. Configurations of deep neural networks affect accuracy and energy consumption
[Figure: validation (test) error (0–100) vs. energy consumption (0–2500 J) for a CNN on the CNAE-9 data set; annotated 72% and 22X]
80. Configurations of deep neural networks affect accuracy and energy consumption
[Figure: validation (test) error vs. energy consumption [J] for a CNN on the CNAE-9 data set; annotated 72% and 22X]
[Screenshot: Figure 2 of an architecture-search paper — image classification models constructed from cells optimized with architecture search: a small CIFAR-10 model used during the search, a large CIFAR-10 model used for learned-cell evaluation, and an ImageNet model built from the same cells]
81. (same as slide 80, with a fuller excerpt of the architecture-search figure and its caption)
82. Deep neural network as a highly configurable system
[Diagram: a feed-forward network with an input layer (Input #1–#4), a hidden layer, and an output layer]
[Diagram: the DNN system development stack — network design (hyper-parameters, topology), model compiler (neural search), hybrid deployment (hardware optimization), OS/hardware (deployment) — with the scope of this project marked]
[Residue from the underlying research plan: metrics M6–M14 comparing environments (numbers of influential options and interactions, correlations between option importances and coefficients), and three technical aims: (1) investigate criteria for effective exploration of the design space of DNN architectures, (2) build analytical models that predict the performance of a given architecture configuration from measurements of similar configurations or environments, and (3) develop a tuning approach that exploits the performance model to search for optimal configurations]
84. DNN measurements are costly
Each sample costs ~1h; 4,000 samples × 1h ≈ 6 months.
Yes, that’s the cost we paid for conducting our measurements!
85. L2S enables learning a more accurate model with fewer samples by exploiting the knowledge from the source
[Figure: mean absolute percentage error vs. sample size (3–70) for L2S+GP, L2S+DataReuseTL, DataReuseTL, ModelShift, and Random+CART; panels (a) DNN (hard) and (b) XGBoost (hard). Convolutional neural network case.]
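For reference, the y-axis in these result plots is the mean absolute percentage error; a minimal definition (Python, written for this summary):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error over paired observations/predictions."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

mape([100.0, 200.0], [110.0, 180.0])  # → 10.0
```

Because errors are relative, MAPE is comparable across the differently-scaled subject systems (DNN, XGBoost, Storm) in these plots.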
86.–87. (repeats of slide 85)
88. L2S may also help the data-reuse approach to learn faster
[Figure: mean absolute percentage error vs. sample size for L2S+GP, L2S+DataReuseTL, DataReuseTL, ModelShift, and Random+CART; panels (a) DNN (hard), (b) XGBoost (hard), and (c) Storm (hard). XGBoost case.]
89.–90. (repeats of slide 88)