Assoc Prof Dr Ergin Akalpler
VAR Model
 The vector autoregressive (VAR) model comprises multiple time series and is quite a useful tool for forecasting. It can be considered an extension of the autoregressive (AR) part of the ARIMA model.
VAR Model
 A VAR model involves multiple endogenous variables and therefore has more than one equation.
 Each equation uses as its explanatory variables lags of all the variables and possibly a deterministic trend.
 VAR models are usually applied to stationary series, i.e. to the first differences of the original series; because of that, there is always a possibility of losing information about the relationships among the integrated series.
VAR model
 Differencing the series to make them stationary is one solution, but at the cost of ignoring possibly important ("long-run") relationships between the levels. A better solution is to test whether the levels regressions are trustworthy ("cointegration").
VAR Model
 The usual approach is to use Johansen's method to test whether or not cointegration exists. If the answer is "yes", then a vector error correction model (VECM), which combines levels and differences, can be estimated instead of a VAR in levels. So we shall check whether the VECM is able to outperform the VAR for the series we have.
What is the difference between VECM and
VAR?
 Through a VECM we can interpret long-run and short-run equations.
 We need to determine the number of cointegrating relationships.
 The advantage of VECM over VAR is that the VAR implied by the VECM representation has more efficient coefficient estimates.
Introduction
The basics of the vector autoregressive model.
We lay the foundation for getting started with this crucial multivariate time
series model and cover the important details including:
•What a VAR model is.
•Who uses VAR models.
•Basic types of VAR models.
•How to specify a VAR model.
•Estimation and forecasting with VAR models.
To determine whether a VAR model in levels is possible or not, we transform the VAR model in levels into a VECM in differences (with error correction terms), to which the Johansen test for cointegration is applied.
In other words, we take the following 4 steps:
1. construct a VECM in differences (with error correction terms)
2. apply the Johansen test to the VECM to find the number of cointegrating relations, r ("None", "At most 1", …)
3. if r = 0, estimate a VAR in differences
4. if r > 0, estimate a VECM in differences or a VAR in levels (at least one cointegrating equation exists)
Its identification depends on the number of cointegrating relations in the following way.
None, r = 0 (no cointegration)
In the case of no cointegration, since all variables are non-stationary in levels, the above VECM reduces to a VAR model in growth (differenced) variables.
At most 1, r = 1 (one cointegrating vector)
At most 2, r = 2 (two cointegrating vectors)
At most 3, r = 3 (full cointegration)
In the case of full cointegration, since all variables are stationary, the above VECM reduces to a VAR model in level variables.
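The rank-to-model mapping above can be sketched as a small rule. This is a hypothetical helper written in Python for illustration; the names `choose_model`, `r`, and `n` are our own, and in practice r comes from the Johansen test output.

```python
def choose_model(r, n):
    """Map the Johansen cointegration rank r of an n-variable system
    to the model specification described above."""
    if r == 0:
        return "VAR in differences"   # no cointegration: all series I(1)
    if r == n:
        return "VAR in levels"        # full rank: all series stationary
    return "VECM"                     # 0 < r < n: keep the error correction terms

# A three-variable system, as in the "At most 3" case above:
print(choose_model(0, 3))  # VAR in differences
print(choose_model(1, 3))  # VECM
print(choose_model(3, 3))  # VAR in levels
```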
How to determine Restricted VAR –VECM- or
Unrestricted VAR
 If all the variables become stationary after first differencing, they are integrated of the same order.
 Null hypo: the variable has a unit root (is not stationary)
 Alt hypo: the variable is stationary
 If the variables are cointegrated and have a long-run association, we run a restricted VAR (that is, a VECM).
 But if the variables are not cointegrated, we cannot run a VECM; rather, we run an unrestricted VAR.
RESTRICTED VAR
 After performing the cointegration test, the results show the following:
 Trace stat > trace critical value (TCV)
 Null: there is no cointegration
 Alt: there is cointegration
 When the trace statistic is greater than the TCV, we can reject the null hypo: there is cointegration.
 Probability values are less than 0.05.
UNRESTRICTED VAR
 After performing the cointegration test, the results show the following:
 Trace stat < trace critical value (TCV)
 Null: there is no cointegration
 Alt: there is cointegration
 When the trace statistic is less than the TCV, we cannot reject the null hypo: there is no cointegration.
 Probability values are more than 0.05.
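The decision rule on the last two slides can be written down directly. This is a hypothetical sketch; the trace statistic and critical value would come from the Johansen test output.

```python
def trace_decision(trace_stat, critical_value):
    """Johansen trace test decision: H0 is 'no cointegration'."""
    if trace_stat > critical_value:
        # Reject H0: cointegration exists -> restricted VAR (VECM)
        return "VECM"
    # Cannot reject H0: no cointegration -> unrestricted VAR
    return "unrestricted VAR"

print(trace_decision(35.2, 29.8))  # VECM
print(trace_decision(12.4, 15.5))  # unrestricted VAR
```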
 >
Guideline to VAR
1. Variable selection & purpose
The initial step is to understand the task at hand:
• what kind of variables should be included,
• what maximum lag order would make sense based on the frequency and description of the data,
• what underlying economic or financial theory is assumed beforehand.
Var guideline
2. Data analysis and transformations
After selecting the data, the second step is to examine the data plots and summary statistics and answer these questions:
•Are there any outliers in the data?
•Is there any missing data?
•Do you need to transform the data, e.g. take logarithms?
•Do you need to create any new variables - e.g. GDP per capita, number of children per
household etc.?
Var guideline
Unit root tests
After determining the final set of variables, Yt, we need to test whether they have a unit root (I(1), I(2), …) or are stationary (I(0)).
To do this, use the ADF test and others.
If the probability value is more than 0.05, we cannot reject the null of a unit root.
Var guideline
 VAR(p) model order selection
 Model selection is usually based on an information criterion, most often AIC or BIC, sometimes others. In R this can be done using VARselect() from the vars package. BIC penalizes model complexity more heavily.
Var guideline
 Cointegrating relationship test for unit root series
If the time series have a unit root, we should check whether there are any cointegrating relationships between the series. Two tests are commonly performed:

Maximum Eigenvalue Test
The maximum eigenvalue test examines the null of r cointegrating relations against the alternative of r + 1. If the max-eigen statistics are smaller than their critical values, use a VAR.
 Trace Test
 The trace test examines the null of at most r cointegrating relations. If the trace statistics are smaller than their critical values, use a VAR.
Var guideline :Estimating the model
 If the series are not cointegrated, we can estimate the model via the VAR() function from the vars package on the differences of the series, ΔYt (if a unit root is present).
 If the series are cointegrated, we need to consider the long-run relationship by estimating a VECM, e.g. via VECM(), specifying the number of cointegrating relations found in the previous step.
 Depending on the function, we may also need to specify the lag order of the VECM representation.
Var guideline : Model diagnostics tests
 Now that we have the model estimated, we need to verify that it is well specified. This is usually done by examining the residuals of the model.
 The most important tests for time series data are tests for autocorrelation (or serial correlation) of the residuals, also known as portmanteau tests.
 The two best-known versions of this test are the Ljung-Box and Box-Pierce tests, which are implemented in the Box.test() function from the stats package.
 For multivariate time series, alongside autocorrelation, another problem is cross-correlation of the residuals,
 i.e., when cor(ϵ1,t, ϵ2,t+s) ≠ 0, s > 0.
 For this reason, we may use the serial.test() function from the vars package, which computes the multivariate test for serially correlated errors.
 A multivariate Ljung-Box test is implemented in the mq() function from the MTS package.
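As a sketch of what these portmanteau tests compute, the Ljung-Box statistic Q = T(T+2) Σ ρ̂k²/(T−k) can be written in plain Python. This is illustrative only; a real analysis would use the R functions named above or an equivalent library.

```python
def ljung_box_q(resid, max_lag):
    """Ljung-Box portmanteau statistic for residual autocorrelation."""
    T = len(resid)
    mean = sum(resid) / T
    c0 = sum((x - mean) ** 2 for x in resid) / T  # lag-0 autocovariance
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((resid[t] - mean) * (resid[t - k] - mean)
                 for t in range(k, T)) / T
        q += (ck / c0) ** 2 / (T - k)
    return T * (T + 2) * q

# Strongly autocorrelated residuals give a large Q; compare against a
# chi-squared critical value with max_lag degrees of freedom.
print(round(ljung_box_q([1.0, -1.0] * 10, 1), 1))  # 20.9
```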
VAR guideline : Results & conclusions
 After we verify that the model is adequate, we can either predict() future values or examine the impulse-response functions via irf() from the vars package in order to check how the variables respond to a particular shock.
VAR modeling is a multi-step process and a complete VAR
analysis involves:
1.Specifying and estimating a VAR model.
2.Using inferences to check and revise the model (as needed).
3.Forecasting.
4.Structural analysis.
What are VAR models used for?
 VAR models (vector autoregressive models) are used
for multivariate time series. The structure is that each
variable is a linear function of past lags of itself and past
lags of the other variables.
Who uses VAR models?
VAR models are traditionally widely used in finance and
econometrics because they offer a framework for accomplishing
important modeling goals, including (Stock and Watson 2001):
•Data description.
•Forecasting.
•Structural inference.
•Policy analysis.
The reduced form, recursive, and structural VAR
There are three broad types of VAR models, the reduced form, the
recursive form, and the structural VAR model.
Reduced form VAR models consider each variable to be a function of:
•Its own past values.
•The past values of other variables in the model.
 Recursive VAR models contain all the components of the reduced form model, but also allow some variables to be functions of other concurrent variables. By imposing these short-run relationships, the recursive model allows us to model structural shocks.
 Structural VAR models include restrictions that allow us to identify causal relationships beyond those that can be identified with reduced form or recursive models. These causal relationships can be used to model and forecast the impacts of individual shocks, such as policy decisions.
While reduced form models are the simplest of the VAR models, they do come with disadvantages:
• Contemporaneous relationships between the variables are not modeled.
• The error terms will be correlated across equations. This means we cannot consider what impact an individual shock will have on the system.
 What makes up a VAR model?
 A VAR model is made up of a system of equations that represents the
relationships between multiple variables. When referring to VAR models,
we often use special language to specify:
• How many endogenous variables are included.
• How many autoregressive terms are included.
 For example, if we have two endogenous variables and two autoregressive terms, we say the model is a bivariate VAR(2) model. If we have three endogenous variables and four autoregressive terms, we say the model is a trivariate VAR(4) model.
 In general, a VAR model is composed of n-equations
(representing n endogenous variables) and includes p-lags of the variables.
Specification
 What is the appropriate lag length in the VAR?
 Three criteria:
i. Akaike information criterion (AIC)
ii. Schwarz criterion (SIC)
iii. Hannan-Quinn criterion (HQC)
(all functions of m, T, and the variance-covariance matrix)
 In practice: fix an upper bound q on the lag length (e.g. 12) and choose the lag which minimizes one of the information criteria.
 AIC is inconsistent.
 For T > 20, SIC and HQC will always choose smaller models than AIC.
Estimation
 Multivariate Generalized Least Squares (GLS) estimates are the
same as equation by equation OLS estimates.
 For unrestricted VAR models: Maximum likelihood (ML)
estimates and equation by equation OLS estimates coincide.
 When a VAR is estimated under some restrictions, ML estimates
are different from OLS estimates;
ML estimates are consistent and efficient if the restrictions are
true.
Presentation of Results
 It is rare to report estimated VAR coefficients.
Instead:
 Impulse responses
 Forecast error variance decomposition: assess the relative
contribution of different shocks to fluctuations in
variables
 Historical Decomposition: given the path of one specific
shock, how will the variables evolve?
How do we decide what endogenous variables to include in our VAR
model?
 From an estimation standpoint, it is important to be deliberate about how
many variables we include in our VAR model. Adding additional variables:
• Increases the number of coefficients to be estimated for each equation
and each number of lags.
• Introduces additional estimation error.
 Deciding what variables to include in a VAR model should be founded in
theory, as much as possible.
 We can use additional tools, like Granger causality or Sims causality, to
test the forecasting relevance of variables.
UNRESTRICTED VAR
 Assess the selection of the optimal lag length in a VAR
 Evaluate the use of impulse response functions with a
VAR
 Assess the importance of variations on the standard VAR
 Critically appraise the use of VARs with financial models.
 Assess the uses of VECMs
What is a vector autoregressive model?
The vector autoregressive (VAR) model is a workhorse multivariate time
series model that relates current observations of a variable with past
observations of itself and past observations of other variables in the
system.
VAR models differ from univariate autoregressive models because they
allow feedback to occur between the variables in the model. For example,
we could use a VAR model to show how real GDP is a function of policy rate
and how policy rate is, in turn, a function of real GDP.
Advantages of VAR models
• A systematic but flexible approach for capturing complex real-world behavior.
• Better forecasting performance.
• Ability to capture the intertwined dynamics of time series data.
 How do we choose the number of lags in a VAR model?
 Lag selection is one of the important aspects of VAR model specification. In practical applications, we generally choose a maximum number of lags, pmax, and evaluate the performance of the models with p = 0, 1, …, pmax.
 The optimal model is then the VAR(p) which minimizes the chosen lag selection criterion.
 These methods are usually built into software and lag selection is almost
completely automated now.
Estimating and inference in VAR models
 Despite their seeming complexities, VAR models are quite easy to
estimate. The equation can be estimated using ordinary least
squares given a few assumptions:
• The error term has a conditional mean of zero.
• The variables in the model are stationary.
• Large outliers are unlikely.
• No perfect multicollinearity.
 Under these assumptions, the ordinary least squares
estimates:
• Will be consistent.
• Can be evaluated using traditional t-statistics and p-
values.
• Can be used to jointly test restrictions across multiple
equations.
Forecasting
 One of the most important functions of VAR models is to generate
forecasts. Forecasts are generated for VAR models using an
iterative forecasting algorithm:
1. Estimate the VAR model using OLS for each equation.
2. Compute the one-period-ahead forecast for all variables.
3. Compute the two-period-ahead forecasts, using the one-period-
ahead forecast.
4. Iterate until the h-step ahead forecasts are computed.
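The iteration above can be sketched for a bivariate VAR(1), y_t = c + A y_{t−1} + u_t. The intercepts c and coefficient matrix A below are made-up illustrative values, not estimates from data.

```python
def forecast_var1(c, A, y_last, horizon):
    """Iterate h-step-ahead point forecasts: y_{t+h} = c + A y_{t+h-1}."""
    n = len(y_last)
    path, y = [], list(y_last)
    for _ in range(horizon):
        y = [c[i] + sum(A[i][j] * y[j] for j in range(n)) for i in range(n)]
        path.append(y)
    return path

c = [0.1, 0.2]                    # intercepts (hypothetical)
A = [[0.5, 0.1], [0.0, 0.8]]      # lag-1 coefficient matrix (hypothetical)
path = forecast_var1(c, A, [1.0, 1.0], 2)
# path[0] is the one-period-ahead forecast (~[0.7, 1.0]);
# path[1] is the two-period-ahead forecast (~[0.55, 1.0]), built from path[0].
```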
Reporting and evaluating VAR models
 Often we are more interested in the dynamics that are predicted by
our VAR models than the actual coefficients that are estimated. For
this reason, it is most common that VAR studies report:
• Granger-causality statistics.
• Impulse response functions.
• Forecast error decompositions
Lag Length in VAR
 When estimating VARs or conducting 'Granger causality' tests, the results can be sensitive to the lag length of the VAR.
 Sometimes the lag length corresponds to the data frequency, such that quarterly data has 4 lags, monthly data has 12 lags, etc.
 A more rigorous way to determine the optimal lag length is to use the Akaike or Schwarz-Bayesian information criteria.
 However, the estimations tend to be sensitive to the presence of autocorrelation. In this case, following the use of information criteria, if there is any evidence of autocorrelation, further lags are added, above the number indicated by the information criteria, until the autocorrelation is removed.
Information Criteria
 The main information criteria are the Schwarz-Bayesian criteria
and the Akaike criteria.
 They operate on the basis that there are two competing factors when adding more lags to a model: more lags will reduce the RSS but also mean a loss of degrees of freedom (a penalty from adding more lags).
 The aim is to minimise the information criterion: adding an extra lag will only benefit the model if the reduction in the RSS outweighs the loss of degrees of freedom.
 In general the Schwarz-Bayesian (SBIC) has a harsher penalty term
than the Akaike (AIC), which leads it to indicate a parsimonious
model is best.
The AIC and SIC
 The two can be expressed as:

AIC = ln(σ̂²) + 2k/T
SBIC = ln(σ̂²) + (k/T) ln(T)

where σ̂² is the residual variance, T is the sample size, and k is the number of parameters.
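As a quick numerical check of these single-equation formulas (plain Python; the inputs are arbitrary illustrative numbers):

```python
import math

def aic(sigma2, k, T):
    """AIC = ln(residual variance) + 2k/T."""
    return math.log(sigma2) + 2 * k / T

def sbic(sigma2, k, T):
    """SBIC = ln(residual variance) + (k/T) ln(T)."""
    return math.log(sigma2) + k * math.log(T) / T

# For any T with ln(T) > 2 (i.e. T > e^2, about 7.4), the SBIC penalty per
# parameter exceeds the AIC penalty, so SBIC favours smaller models.
print(aic(1.0, 3, 100))   # 0.06
print(sbic(1.0, 3, 100))  # ~0.138
```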
Multivariate Information Criteria
 The multivariate version of the Akaike information criterion is similar to the univariate:

MAIC = log|Σ̂| + 2k′/T

where Σ̂ is the variance-covariance matrix of the residuals (this gives the variances on the main diagonal and the covariances between the residuals off the main diagonal of the matrix), T is the number of observations, and k′ is the total number of regressors in all equations.
Multivariate SBIC
 The multivariate version of the SBIC is:

MSBIC = log|Σ̂| + (k′/T) log(T)

where Σ̂ is the variance-covariance matrix of the residuals, T is the number of observations, and k′ is the total number of regressors in all equations.
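For a two-equation system the multivariate criteria can be computed directly from the residual covariance matrix. A sketch in plain Python; the matrix S below is invented for illustration.

```python
import math

def det2(S):
    """Determinant of a 2x2 matrix."""
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

def maic(S, k, T):
    """MAIC = log|Sigma_hat| + 2k'/T."""
    return math.log(det2(S)) + 2 * k / T

def msbic(S, k, T):
    """MSBIC = log|Sigma_hat| + (k'/T) log(T)."""
    return math.log(det2(S)) + k * math.log(T) / T

S = [[1.0, 0.2],   # hypothetical residual variance-covariance matrix
     [0.2, 1.0]]
print(maic(S, 6, 120) < msbic(S, 6, 120))  # True: SBIC penalises harder
```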
The best criterion
 In general there is no agreement on which criterion is best (some recommend the SBIC).
 The Schwarz-Bayesian is strongly consistent but not efficient.
 The Akaike is not consistent, generally producing too large a model, but is more efficient than the Schwarz-Bayesian criterion.
Criticisms of Causality Tests
The Granger causality test, much used in VAR modelling, does not explain some aspects of the VAR:
 It does not give the sign of the effect; we do not know if it is positive or negative.
 It does not show how long the effect lasts for.
 It does not provide evidence of whether this effect is direct or indirect.
VARs and seemingly unrelated regression SUR
 In general the VAR has all the lag lengths of the individual
equations the same size.
 It is possible however to have different lag lengths for different
equations, however this involves another estimation method.
When lag lengths differ, the seemingly unrelated regression (SUR) approach can be used to estimate the equations; this is often termed a 'near-VAR'.
Alternative VARs
 It is possible to include contemporaneous terms in a VAR, however in
this case the VAR is not identified.
 It is also possible to include exogenous variables in the VAR, although
they do not have separate equations where they act as a dependent
variable.
 (They simply act as extra explanatory variables for all the equations in
the VAR.)
 It is worth noting that the impulse response functions can also produce
confidence intervals to determine whether they are significant, this is
routinely done by most computer programmes.
VECMs
 Vector Error Correction Models (VECM) are the basic VAR,
with an error correction term incorporated into the model.
 The reason for the error correction term is the same as
with the standard error correction model, it measures any
movement away from the long-run equilibrium.
 These are often used as part of a multivariate test for
cointegration, such as the Johansen Maximum likelihood
(ML) test.
VECMs
 However, there are a number of differing approaches to modelling VECMs, for instance how many lags there should be on the error correction term (usually just one, regardless of the order of the VAR).
 The error correction term also becomes more difficult to interpret, as it is not obvious which variable it affects following a shock.
What is the Wald test?
 The Wald statistic captures the short-run causality between variables, while the statistics on the lagged error correction terms capture the intensity of the long-run causality effect.
 Short run Granger causalities are determined by Wald
statistic for the significance of the coefficients of the
series.
Criticisms of the VAR
 Many argue that the VAR approach is lacking in theory.
 There is much debate on how the lag lengths should be determined
 It is possible to end up with a model including numerous explanatory
variables, with different signs, which has implications for degrees of
freedom.
 Many of the parameters will be insignificant, which affects the efficiency of the regression.
 There is always a potential for multicollinearity with many lags of the
same variable
Stationarity and VARs
 Should a VAR include only stationary variables, to be valid?
 Sims argues that even if the variables are not stationary, they should
not be first-differenced.
 However others argue that a better approach is a multivariate test for
cointegration and then use first-differenced variables and the error
correction term
Sample VAR Result
 OLS estimation of a single equation in the Unrestricted VAR
 ******************************************************************************
 Dependent variable is TBILL
 127 observations used for estimation from 1960Q2 to 1991Q4
Regressor Coefficient Standard Error T-Ratio [Prob]
 TBILL(-1) .96200 .067845 14.1795 [.000]
 R10(-1) -.015333 .068439 -.22404 [.823]
 K .36563 .23386 1.5635 [.120]
 R-Squared .90159 R-Bar-Squared .90000
 Akaike Info. Criterion -165.9593 Schwarz Bayesian Criterion -170.22
 Serial Correlation*CHSQ( 4)= 22.3179[.000]
 Dependent variable is R10
 ******************************************************************************
 Regressor Coefficient Standard Error T-Ratio[Prob]
 TBILL(-1) .11106 .039920 2.7821[.006]
 R10(-1) .87432 .040269 21.7117[.000]
 K .26981 .13760 1.9608[.052]
 R-Squared .96507 R-Bar-Squared .96451
 Akaike Info. Criterion -98.6049 Schwarz Bayesian Criterion -102.8712
 Serial Correlation*CHSQ( 4)= 8.6481[.071]
Granger-causality statistics
As we previously discussed, Granger-causality statistics test whether
one variable is statistically significant when predicting another variable.
The Granger-causality statistics are F-statistics that test if the
coefficients of all lags of a variable are jointly equal to zero in the
equation for another variable. As the p-value of the F-statistic
decreases, the evidence that a variable is relevant for predicting another variable increases.
Granger causality
 Granger causality tests whether a variable is “helpful” for
forecasting the behavior of another variable.
 It’s important to note that Granger causality only allows us to make
inferences about forecasting capabilities -- not about true causality.
Granger Causality Test
 ******************************************************************************
 Dependent variable is R10
 List of the variables deleted from the regression: TBILL (-1)
 127 observations used for estimation from 1960Q2 to 1991Q4
 ******************************************************************************
 Regressor Coefficient Standard Error T-Ratio [Prob]
 R10(-1) .97627 .017142 56.9508 [.000]
 K .20365 .13914 1.4637 [.146]
 ******************************************************************************
 Joint test of zero restrictions on the coefficients of deleted variables:
 F Statistic F( 1, 124)= 7.7400[.006]

 Dependent variable is TBILL
 List of the variables deleted from the regression: R10(-1)
 Regressor Coefficient Standard Error T-Ratio [Prob]
 TBILL(-1) .94817 .028025 33.8328 [.000]
 K .33727 .19589 1.7217 [.088]
 ******************************************************************************
 Joint test of zero restrictions on the coefficients of deleted variables:
 F Statistic F( 1, 124)= .050192[.823]
 *****************************************************************************
 For example, in the Granger-causality test of X on Y, if the p-
value is 0.02 we would say that X does help predict Y at the
5% level. However, if the p-value is 0.3 we would say that
there is no evidence that X helps predict Y.
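The joint zero-restriction tests reported in the output above are F tests comparing the restricted and unrestricted residual sums of squares. A sketch; the RSS values below are invented for illustration, not taken from the output shown.

```python
def granger_f(rss_r, rss_u, m, T, k):
    """F statistic for m zero restrictions in an equation with k regressors
    estimated on T observations: F = ((RSS_r - RSS_u)/m) / (RSS_u/(T - k))."""
    return ((rss_r - rss_u) / m) / (rss_u / (T - k))

# One deleted lag (m = 1), 127 observations, 3 regressors unrestricted:
print(round(granger_f(120.0, 100.0, 1, 127, 3), 6))  # 24.8
```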
Granger Causality Tests Continued
 According to Granger, causality can be further sub-divided
into long-run and short-run causality.
 This requires the use of error correction models or VECMs,
depending on the approach for determining causality.
 Long-run causality is determined by the error correction
term, whereby if it is significant, then it indicates
evidence of long run causality from the explanatory
variable to the dependent variable.
 Short-run causality is determined as before, with a test on
the joint significance of the lagged explanatory variables,
using an F-test or Wald test.
Impulse Response and Variance
decomposition
 The impulse responses are the relevant tools for interpreting the relationships between the variables.
 Variance decompositions examine how important each of
the shocks is as a component of the overall
(unpredictable) variance of each of the variables over
time.
 The impulse response function traces the dynamic path of variables in the
system to shocks to other variables in the system. This is done by:
• Estimating the VAR model.
• Implementing a one-unit increase in the error of one of the variables in
the model, while holding the other errors equal to zero.
• Predicting the impacts h-period ahead of the error shock.
• Plotting the forecasted impacts, along with the one-standard-deviation
confidence intervals.
Impulse Response Functions
 Given the VAR(1):

y_t = A1 y_{t-1} + u_t

where A1 is the matrix of autoregressive coefficients and u_t = (u1t, u2t)′ is the vector of shocks. Given a unit shock to y1 at time 0, u_0 = (1, 0)′, the responses are:

y_0 = u_0, y_1 = A1 u_0, y_2 = A1² u_0, …, y_h = A1^h u_0.
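The responses y_h = A1^h u_0 can be traced directly. A sketch with a hypothetical, stable coefficient matrix (eigenvalues inside the unit circle), so the responses die out.

```python
def matvec(A, v):
    """Multiply matrix A by vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def impulse_responses(A, shock, horizon):
    """Responses y_0 = u_0, then y_h = A1^h u_0 for h = 1..horizon."""
    out, y = [list(shock)], list(shock)
    for _ in range(horizon):
        y = matvec(A, y)
        out.append(list(y))
    return out

A = [[0.5, 0.1],   # hypothetical stable coefficient matrix
     [0.2, 0.4]]
irf = impulse_responses(A, [1.0, 0.0], 3)
# y_1 = A1 u_0 = (0.5, 0.2)'; successive responses shrink toward zero,
# which is what stability of the system means.
```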
Impulse Response Functions
 These trace out the effect on the dependent variables in the VAR to
shocks to all the variables in the VAR
 Therefore in a system of 2 variables, there are 4 impulse response
functions and with 3 there are 9.
 The shock occurs through the error term and affects the dependent
variable over time.
 In effect the VAR is expressed as a vector moving average model
(VMA), as in the univariate case previously, the shocks to the error
terms can then be traced with regard to their impact on the
dependent variable.
 If the time path of the impulse response function becomes 0 over
time, the system of equations is stable, however they can explode if
unstable.
 The results show the impulse responses (IR) of the dependent variable; only the IR function for DCPI is illustrated in the table.
 As seen in the table, CPI responds positively to NIR shocks throughout, while its responses to the other variables alternate in sign.
 Positive impulse response values indicate positive effects, and negative values indicate negative effects, on the dependent variable (here CPI).
Response of DCPI:
Period RGDP DCPI DNIR DREER
1 -3.870022 10.52160 0.000000 0.000000
2 4.350339 0.388418 0.635650 -3.964539
3 2.581088 -0.057747 1.343376 -0.210536
4 -1.406336 0.760648 0.709599 -0.485223
5 -1.189040 0.131412 0.477037 -0.098667
6 0.043845 -0.346002 0.243500 0.050212
7 0.401353 -0.000346 0.078936 0.053059
8 -0.003204 0.089603 0.006877 -0.037810
9 -0.052022 0.019648 -0.044851 -0.027014
10 -0.032278 -0.017211 0.007166 0.004444
Impulse response sample estimation and interpretation
Variance decomposition estimation and interpretation
 The table shows the variance decomposition results for CPI.
 RGDP and REER affect CPI more than NIR does.
 Higher values have more effect than smaller values.
Variance decomposition of DCPI:
Period S.E. RGDP DCPI DNIR DREER
1 11.21076 11.91672 88.08328 0.000000 0.000000
2 12.68381 21.07330 68.90575 0.251152 9.769804
3 13.01512 23.94694 65.44426 1.303893 9.304905
4 13.14111 24.63526 64.53047 1.570594 9.263682
5 13.20444 25.21040 63.92289 1.686082 9.180623
6 13.21138 25.18501 63.92429 1.718280 9.172418
7 13.21782 25.25268 63.86205 1.720173 9.165098
8 13.21818 25.25131 63.86316 1.720107 9.165417
9 13.21840 25.25202 63.86125 1.721200 9.165528
10 13.21845 25.25241 63.86091 1.721216 9.165466
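A minimal sketch of how such a forecast-error variance decomposition arises for a VAR(1). Illustrative assumptions throughout: orthogonal unit-variance shocks with an identity impact matrix (real applications obtain the impact matrix from, e.g., a Cholesky factorisation), and a made-up coefficient matrix A.

```python
def matmul(A, B):
    """Multiply two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fevd(A, var, horizon):
    """Share of `var`'s h-step forecast-error variance due to each shock."""
    n = len(A)
    theta = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    contrib = [0.0] * n
    for _ in range(horizon):
        for j in range(n):
            contrib[j] += theta[var][j] ** 2  # squared MA coefficients
        theta = matmul(A, theta)              # next-period responses A^h
    total = sum(contrib)
    return [c / total for c in contrib]

A = [[0.5, 0.1],   # hypothetical coefficient matrix
     [0.2, 0.4]]
shares = fevd(A, 0, 10)   # decomposition for variable 0
# The shares sum to 1; each entry is the proportion of variable 0's
# forecast-error variance attributed to that shock.
```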
Impulse response functions
Long-run Causality
 Before the ECM can be formed, there first has to be evidence of
cointegration, given that cointegration implies a significant error
correction term, cointegration can be viewed as an indirect test of long-
run causality.
 It is possible to have evidence of long-run causality, but not short-run
causality and vice versa.
 In multivariate causality tests, testing long-run causality between two variables is more problematic, as it is impossible to tell which explanatory variable is driving the causality through the error correction term.
A simple example
As an example, let's consider a VAR with three endogenous variables,
the unemployment rate, the inflation rate, and interest rates.
To estimate the structural VAR model of the system, we have to put
restrictions on our model. For example, we may assume that the
Fed follows the inflation targeting rule for setting interest rates. This
assumption would be built into our system as the equation for
interest rates.
Estimating and inference in VAR models
 Despite their seeming complexities, VAR models are quite easy to
estimate. The equation can be estimated using ordinary least squares
given a few assumptions:
• The error term has a conditional mean of zero.
• The variables in the model are stationary.
• Large outliers are unlikely.
• An outlier is an unusually large or small observation. Outliers can
have a disproportionate effect on statistical results, such as the
mean, which can result in misleading interpretations.
• No perfect multicollinearity.
 Under these assumptions, the ordinary least squares estimates:
• Will be consistent.
• Can be evaluated using traditional t-statistics and p-values.
• Can be used to jointly test restrictions across multiple equations.
Forecasting
 One of the most important functions of VAR models is to generate
forecasts. Forecasts are generated for VAR models using an iterative
forecasting algorithm:
1. Estimate the VAR model using OLS for each equation.
2. Compute the one-period-ahead forecast for all variables.
3. Compute the two-period-ahead forecasts, using the one-period-ahead
forecasts.
4. Iterate until the h-step-ahead forecasts are computed.
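The iteration above can be sketched for a known VAR(1), with coefficients chosen purely for illustration; each forecast is fed back in as the conditioning value for the next step.

```python
import numpy as np

# known VAR(1) with intercept: y_t = c + A y_{t-1} + e_t
c = np.array([1.0, 0.5])
A = np.array([[0.5, 0.1], [0.2, 0.4]])
y_T = np.array([2.0, 1.0])           # last observed value

H = 40
fcast = np.empty((H, 2))
prev = y_T
for h in range(H):                   # iterate: feed each forecast back in
    prev = c + A @ prev
    fcast[h] = prev

mu = np.linalg.solve(np.eye(2) - A, c)   # unconditional mean of a stable VAR
```

For a stable VAR the iterated forecasts converge to the unconditional mean (I - A)^(-1) c as the horizon grows, which is the expected long-horizon behavior.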
Reporting and evaluating VAR models
 Often we are more interested in the dynamics that are predicted by
our VAR models than the actual coefficients that are estimated. For
this reason, it is most common that VAR studies report:
• Granger-causality statistics.
• Impulse response functions.
• Forecast error decompositions.
 Forecast error decomposition separates the forecast error variance into
proportions attributed to each variable in the model.
 Intuitively, this measure helps us judge how much of an impact one
variable has on another variable in the VAR model and how intertwined
our variables' dynamics are.
 For example, if X is responsible for 85% of the forecast error variance of Y,
it explains a large share of the forecast variation in Y. However,
if X is responsible for only 20% of the forecast error variance of Y, much of
the forecast error variance of Y is left unexplained by X.
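A sketch of computing such a decomposition from the orthogonalized moving-average coefficients of an illustrative VAR(1): by construction, each variable's shares across shocks sum to one.

```python
import numpy as np

# orthogonalized MA coefficients of a known stable VAR(1)
A = np.array([[0.5, 0.1], [0.2, 0.4]])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
P = np.linalg.cholesky(Sigma)

H = 10
theta = [P]
for h in range(1, H):
    theta.append(A @ theta[-1])
theta = np.array(theta)              # theta[h, i, j]: response of i to shock j

# forecast error variance of variable i at horizon H is the sum over
# horizons and shocks of squared MA coefficients
num = (theta ** 2).sum(axis=0)       # num[i, j]: contribution of shock j to i
fevd = num / num.sum(axis=1, keepdims=True)
```

Each row of `fevd` gives the shares of that variable's forecast error variance attributed to the two orthogonal shocks, the quantity interpreted in the 85% / 20% example above.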
What is the difference between VAR and VECM?
 Through a VECM we can interpret long-run and short-run
equations. We need to determine the number of cointegrating
relationships. The advantage of VECM over VAR is that the
VAR implied by the VECM representation has more efficient
coefficient estimates.
When to use VAR/VECM?
You should use VECM if 1) your variables are nonstationary
and 2) you find a common trend between the variables
(cointegration).
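The VECM is an algebraic re-parameterisation of a VAR in levels: for a VAR(2), the error-correction form is dy_t = Pi y_{t-1} + Gamma1 dy_{t-1} + e_t with Pi = A1 + A2 - I and Gamma1 = -A2. A quick numerical check of this identity, with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
A1 = np.array([[0.6, 0.1], [0.1, 0.5]])
A2 = np.array([[0.2, 0.0], [0.0, 0.2]])
e = rng.standard_normal((50, 2))

# VAR(2) in levels: y_t = A1 y_{t-1} + A2 y_{t-2} + e_t
y = np.zeros((52, 2))
for t in range(2, 52):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + e[t - 2]

# VECM form: dy_t = Pi y_{t-1} + Gamma1 dy_{t-1} + e_t
Pi = A1 + A2 - np.eye(2)
Gamma1 = -A2
z = np.zeros((52, 2))
for t in range(2, 52):
    z[t] = z[t - 1] + Pi @ z[t - 1] + Gamma1 @ (z[t - 1] - z[t - 2]) + e[t - 2]
```

The two parameterisations generate identical paths; cointegration restricts the rank of Pi, which is what the Johansen test determines.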
Why is VAR better than AR?
 VAR (vector autoregression) is a generalization of
AR (autoregressive model) to multiple time series,
capturing the linear relationships among them. The
AR model can be seen as the special case of a VAR with
only one series.
Conclusion
 VAR models are an essential component of multivariate time series
modeling. You should have a better understanding of the
fundamentals of the VAR model including:
• What a VAR model is.
• Who uses VAR models.
• Basic types of VAR models.
• How to specify a VAR model.
• Estimation and forecasting with VAR models.
Conclusion
 VARs have a number of important uses, particularly
causality tests and forecasting
 To assess the effects of any shock to the system, we
need to use impulse response functions and variance
decompositions.
 VECMs are an alternative, as they combine first-differenced
variables with an error-correction term.
 The VAR has a number of weaknesses, most
importantly its lack of theoretical foundations
Thank you
erginakalpler@csu.edu.tr
  • 41. Lag Length in VAR  When estimating VARs or conducting ‘Granger causality’ tests, the results can be sensitive to the lag length of the VAR.  Sometimes the lag length is matched to the frequency of the data, such that quarterly data has 4 lags, monthly data has 12 lags, etc.  A more rigorous way to determine the optimal lag length is to use the Akaike or Schwarz-Bayesian information criteria.  However, these estimations tend to be sensitive to the presence of autocorrelation. In this case, if there is any evidence of autocorrelation after applying the information criteria, further lags are added, above the number indicated by the criteria, until the autocorrelation is removed.
  • 42. Information Criteria  The main information criteria are the Schwarz-Bayesian criterion and the Akaike criterion.  They operate on the basis that there are two competing factors when adding more lags to a model: more lags will reduce the RSS, but also mean a loss of degrees of freedom (the penalty from adding more lags).  The aim is to minimise the information criterion; adding an extra lag will only benefit the model if the reduction in the RSS outweighs the loss of degrees of freedom.  In general the Schwarz-Bayesian criterion (SBIC) has a harsher penalty term than the Akaike (AIC), which leads it to indicate that a more parsimonious model is best.
  • 43. The AIC and SIC  The two can be expressed as: $\mathrm{AIC} = \ln(\hat{\sigma}^2) + \dfrac{2k}{T}$ and $\mathrm{SBIC} = \ln(\hat{\sigma}^2) + \dfrac{k}{T}\ln T$, where $\hat{\sigma}^2$ is the residual variance, $T$ the sample size, and $k$ the number of parameters.
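The two criteria are simple enough to compute directly. The sketch below uses the textbook forms AIC = ln(σ̂²) + 2k/T and SBIC = ln(σ̂²) + (k/T)·ln T, and illustrates the point from the previous slide that the SBIC penalty per parameter is harsher whenever ln T > 2 (roughly T > 8):

```python
import numpy as np

def aic(sigma2_hat: float, k: int, T: int) -> float:
    """AIC = ln(residual variance) + 2k/T."""
    return np.log(sigma2_hat) + 2 * k / T

def sbic(sigma2_hat: float, k: int, T: int) -> float:
    """SBIC = ln(residual variance) + (k/T) ln T."""
    return np.log(sigma2_hat) + k * np.log(T) / T

# With unit residual variance the log term drops out, leaving only the penalties;
# for T = 100, ln T is about 4.6 > 2, so SBIC penalizes each lag more heavily.
print(aic(1.0, 3, 100))   # 0.06
print(sbic(1.0, 3, 100))  # about 0.138
```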
  • 44. Multivariate Information Criteria  The multivariate version of the Akaike information criterion is similar to the univariate: $\mathrm{MAIC} = \log\lvert\hat{\Sigma}\rvert + \dfrac{2k'}{T}$, where $\hat{\Sigma}$ is the variance-covariance matrix of the residuals (this gives the variances on the main diagonal and the covariances between the residuals off the main diagonal of the matrix), $T$ the number of observations, and $k'$ the total number of regressors in all equations.
  • 45. Multivariate SBIC  The multivariate version of the SBIC is: $\mathrm{MSBIC} = \log\lvert\hat{\Sigma}\rvert + \dfrac{k'}{T}\log(T)$, where $\hat{\Sigma}$ is the variance-covariance matrix of the residuals, $T$ the number of observations, and $k'$ the total number of regressors in all equations.
  • 46. The best criterion  In general there is no agreement on which criterion is best (some recommend the SBIC).  The Schwarz-Bayesian is strongly consistent but not efficient.  The Akaike is not consistent, generally producing too large a model, but is more efficient than the Schwarz-Bayesian criterion.
  • 47. Criticisms of Causality Tests  Granger causality tests, much used in VAR modelling, nevertheless do not explain some aspects of the VAR:  They do not give the sign of the effect; we do not know if it is positive or negative.  They do not show how long the effect lasts.  They do not provide evidence of whether the effect is direct or indirect.
  • 48. VARs and seemingly unrelated regression (SUR)  In general the VAR has the same lag length in all of the individual equations.  It is possible, however, to have different lag lengths for different equations, although this requires another estimation method.  When lag lengths differ, the seemingly unrelated regression (SUR) approach can be used to estimate the equations; this is often termed a ‘near-VAR’.
  • 49. Alternative VARs  It is possible to include contemporaneous terms in a VAR, however in this case the VAR is not identified.  It is also possible to include exogenous variables in the VAR, although they do not have separate equations where they act as a dependent variable.  (They simply act as extra explanatory variables for all the equations in the VAR.)  It is worth noting that the impulse response functions can also produce confidence intervals to determine whether they are significant, this is routinely done by most computer programmes.
  • 50. VECMs  Vector Error Correction Models (VECMs) are the basic VAR with an error correction term incorporated into the model.  The reason for the error correction term is the same as with the standard error correction model: it measures any movement away from the long-run equilibrium.  These are often used as part of a multivariate test for cointegration, such as the Johansen maximum likelihood (ML) test.
  • 51. VECMs  However, there are a number of differing approaches to modelling VECMs, for instance how many lags there should be on the error correction term (usually just one, regardless of the order of the VAR).  The error correction term also becomes more difficult to interpret, as it is not obvious which variable it affects following a shock.
  • 52. What is the Wald test?  The Wald statistic explains the short-run causality between variables, while the statistics provided by the lagged error correction terms explain the intensity of the long-run causality effect.  Short-run Granger causalities are determined by the Wald statistic for the significance of the coefficients of the series.
  • 53. Criticisms of the VAR  Many argue that the VAR approach is lacking in theory.  There is much debate on how the lag lengths should be determined.  It is possible to end up with a model including numerous explanatory variables, with different signs, which has implications for degrees of freedom.  Many of the parameters will be insignificant, which affects the efficiency of a regression.  There is always a potential for multicollinearity with many lags of the same variable.
  • 54. Stationarity and VARs  Should a VAR include only stationary variables to be valid?  Sims argues that even if the variables are not stationary, they should not be first-differenced.  However, others argue that a better approach is a multivariate test for cointegration, then use first-differenced variables and the error correction term.
  • 55. Sample VAR Result  OLS estimation of a single equation in the Unrestricted VAR
  Dependent variable is TBILL
  127 observations used for estimation from 1960Q2 to 1991Q4
  Regressor    Coefficient   Standard Error   T-Ratio [Prob]
  TBILL(-1)    .96200        .067845          14.1795 [.000]
  R10(-1)      -.015333      .068439          -.22404 [.823]
  K            .36563        .23386           1.5635  [.120]
  R-Squared .90159    R-Bar-Squared .90000
  Akaike Info. Criterion -165.9593    Schwarz Bayesian Criterion -170.22
  Serial Correlation CHSQ(4) = 22.3179 [.000]

  Dependent variable is R10
  Regressor    Coefficient   Standard Error   T-Ratio [Prob]
  TBILL(-1)    .11106        .039920          2.7821  [.006]
  R10(-1)      .87432        .040269          21.7117 [.000]
  K            .26981        .13760           1.9608  [.052]
  R-Squared .96507    R-Bar-Squared .96451
  Akaike Info. Criterion -98.6049    Schwarz Bayesian Criterion -102.8712
  Serial Correlation CHSQ(4) = 8.6481 [.071]
  • 56. Granger-causality statistics As we previously discussed, Granger-causality statistics test whether one variable is statistically significant when predicting another variable. The Granger-causality statistics are F-statistics that test if the coefficients of all lags of a variable are jointly equal to zero in the equation for another variable. As the p-value of the F-statistic decreases, the evidence that a variable is relevant for predicting another variable increases.
  • 57. Granger causality  Granger causality tests whether a variable is “helpful” for forecasting the behavior of another variable.  It’s important to note that Granger causality only allows us to make inferences about forecasting capabilities -- not about true causality.
  • 59. Granger Causality Test
  Dependent variable is R10
  List of the variables deleted from the regression: TBILL(-1)
  127 observations used for estimation from 1960Q2 to 1991Q4
  Regressor    Coefficient   Standard Error   T-Ratio [Prob]
  R10(-1)      .97627        .017142          56.9508 [.000]
  K            .20365        .13914           1.4637  [.146]
  Joint test of zero restrictions on the coefficients of deleted variables:
  F Statistic F(1, 124) = 7.7400 [.006]

  Dependent variable is TBILL
  List of the variables deleted from the regression: R10(-1)
  Regressor    Coefficient   Standard Error   T-Ratio [Prob]
  TBILL(-1)    .94817        .028025          33.8328 [.000]
  K            .33727        .19589           1.7217  [.088]
  Joint test of zero restrictions on the coefficients of deleted variables:
  F Statistic F(1, 124) = .050192 [.823]
  • 60.  For example, in the Granger-causality test of X on Y, if the p- value is 0.02 we would say that X does help predict Y at the 5% level. However, if the p-value is 0.3 we would say that there is no evidence that X helps predict Y.
  • 61. Granger Causality Tests Continued  According to Granger, causality can be further sub-divided into long-run and short-run causality.  This requires the use of error correction models or VECMs, depending on the approach for determining causality.  Long-run causality is determined by the error correction term, whereby if it is significant, then it indicates evidence of long run causality from the explanatory variable to the dependent variable.  Short-run causality is determined as before, with a test on the joint significance of the lagged explanatory variables, using an F-test or Wald test.
  • 62. Impulse Response and Variance Decomposition  The impulse responses are the relevant tools for interpreting the relationships between the variables.  Variance decompositions examine how important each of the shocks is as a component of the overall (unpredictable) variance of each of the variables over time.
  • 63.  The impulse response function traces the dynamic path of variables in the system to shocks to other variables in the system. This is done by: • Estimating the VAR model. • Implementing a one-unit increase in the error of one of the variables in the model, while holding the other errors equal to zero. • Predicting the impacts h-period ahead of the error shock. • Plotting the forecasted impacts, along with the one-standard-deviation confidence intervals.
  • 64. Impulse Response Functions  Given the VAR: $y_t = A_0 + A_1 y_{t-1} + u_t$, where $y_t = (y_{1t}, y_{2t})'$, $u_t = (u_{1t}, u_{2t})'$ and $A_0 = (a_{10}, a_{20})'$.  Given a unit shock to $y_1$ at time $t = 0$: $u_0 = (1, 0)'$.
  • 65. Impulse Response Functions  These trace out the effect on the dependent variables in the VAR of shocks to all the variables in the VAR.  Therefore in a system of 2 variables there are 4 impulse response functions, and with 3 there are 9.  The shock occurs through the error term and affects the dependent variable over time.  In effect the VAR is expressed as a vector moving average (VMA) model, as in the univariate case previously; the shocks to the error terms can then be traced with regard to their impact on the dependent variable.  If the time path of the impulse response function goes to 0 over time, the system of equations is stable; it can explode if unstable.
  • 67. Impulse response sample estimation and interpretation
  The table reports the impulse responses of DCPI to a one-unit shock in each variable. Positive values indicate a positive effect and negative values a negative effect on the dependent variable (here CPI). Only the impulse response function involving NIR is singled out for discussion: as seen in the table, NIR has a positive response to CPI, while all the other variables have negative responses to NIR.
  Impulse Response of DCPI:
  Period   RGDP        DCPI        DNIR        DREER
  1       -3.870022   10.52160    0.000000    0.000000
  2        4.350339    0.388418   0.635650   -3.964539
  3        2.581088   -0.057747   1.343376   -0.210536
  4       -1.406336    0.760648   0.709599   -0.485223
  5       -1.189040    0.131412   0.477037   -0.098667
  6        0.043845   -0.346002   0.243500    0.050212
  7        0.401353   -0.000346   0.078936    0.053059
  8       -0.003204    0.089603   0.006877   -0.037810
  9       -0.052022    0.019648  -0.044851   -0.027014
  10      -0.032278   -0.017211   0.007166    0.004444
  • 68. Variance decomposition estimation and interpretation
  The table reports the variance decomposition of CPI. RGDP and REER account for more of CPI's forecast error variance than NIR; larger shares indicate a larger contribution.
  VD of DCPI:
  Period   S.E.       RGDP       DCPI       DNIR       DREER
  1       11.21076   11.91672   88.08328   0.000000   0.000000
  2       12.68381   21.07330   68.90575   0.251152   9.769804
  3       13.01512   23.94694   65.44426   1.303893   9.304905
  4       13.14111   24.63526   64.53047   1.570594   9.263682
  5       13.20444   25.21040   63.92289   1.686082   9.180623
  6       13.21138   25.18501   63.92429   1.718280   9.172418
  7       13.21782   25.25268   63.86205   1.720173   9.165098
  8       13.21818   25.25131   63.86316   1.720107   9.165417
  9       13.21840   25.25202   63.86125   1.721200   9.165528
  10      13.21845   25.25241   63.86091   1.721216   9.165466
  • 70. Long-run Causality  Before the ECM can be formed, there first has to be evidence of cointegration; given that cointegration implies a significant error correction term, cointegration can be viewed as an indirect test of long-run causality.  It is possible to have evidence of long-run causality but not short-run causality, and vice versa.  In multivariate causality tests, the testing of long-run causality between two variables is more problematic, as it is impossible to tell which explanatory variable is driving the causality through the error correction term.
  • 71. A simple example As an example, let's consider a VAR with three endogenous variables, the unemployment rate, the inflation rate, and interest rates. To estimate the structural VAR model of the system, we have to put restrictions on our model. For example, we may assume that the Fed follows the inflation targeting rule for setting interest rates. This assumption would be built into our system as the equation for interest rates.
  • 72. Specification  What is the appropriate lag length in the VAR?  Three criteria: i. Akaike information criterion (AIC) ii. Schwarz criterion (SIC) iii. Hannan-Quinn criterion (HQC) (all functions of m, T, and the variance-covariance matrix)  In practice: fix an upper bound on the lag length q (e.g. 12), then choose the q which minimizes one of the information criteria.  AIC is inconsistent.  For T > 20, SIC and HQC will always choose smaller models than AIC.
  • 73. Estimation  Multivariate generalized least squares (GLS) estimates are the same as equation-by-equation OLS estimates.  For unrestricted VAR models, maximum likelihood (ML) estimates and equation-by-equation OLS estimates coincide.  When a VAR is estimated under some restrictions, ML estimates differ from OLS estimates; ML estimates are consistent and efficient if the restrictions are true.
  • 74. Presentation of Results  It is rare to report estimated VAR coefficients.  Impulse responses  Forecast error variance decomposition: assess the relative contribution of different shocks to fluctuations in variables  Historical Decomposition: given the path of one specific shock, how will the variables evolve?
  • 75. How do we decide what endogenous variables to include in our VAR model?  From an estimation standpoint, it is important to be deliberate about how many variables we include in our VAR model. Adding additional variables: • Increases the number of coefficients to be estimated for each equation and each number of lags. • Introduce additional estimation error.  Deciding what variables to include in a VAR model should be founded in theory, as much as possible.  We can use additional tools, like Granger causality or Sims causality, to test the forecasting relevance of variables.
  • 76. Estimating and inference in VAR models  Despite their seeming complexities, VAR models are quite easy to estimate. The equation can be estimated using ordinary least squares given a few assumptions: • The error term has a conditional mean of zero. • The variables in the model are stationary. • Large outliers are unlikely. • An outlier is an unusually large or small observation. Outliers can have a disproportionate effect on statistical results, such as the mean, which can result in misleading interpretations. • No perfect multicollinearity.
  • 78. Forecasting  One of the most important functions of VAR models is to generate forecasts. Forecasts are generated for VAR models using an iterative forecasting algorithm: 1. Estimate the VAR model using OLS for each equation. 2. Compute the one-period-ahead forecast for all variables. 3. Compute the two-period-ahead forecasts, treating the one-period-ahead forecasts as if they were observed data. 4. Iterate until the h-step-ahead forecasts are computed.
  • 80.  Forecast error decomposition separates the forecast error variance into proportions attributed to each variable in the model.  Intuitively, this measure helps us judge how much of an impact one variable has on another variable in the VAR model and how intertwined our variables' dynamics are.  For example, if X is responsible for 85% of the forecast error variance of Y, it explains a large amount of the forecast variation in Y. However, if X is only responsible for 20% of the forecast error variance of Y, much of the forecast error variance of Y is left unexplained by X.
  • 81. What is the difference between VAR and VEC model?  Through VECM we can interpret long term and short term equations. We need to determine the number of co-integrating relationships. The advantage of VECM over VAR is that the resulting VAR from VECM representation has more efficient coefficient estimates.  When to use VAR/VECM? You should use VECM if 1) your variables are nonstationary and 2) you find a common trend between the variables (cointegration).
  • 82. Why VAR is better than AR?  VAR (vector autoregression) is a generalization of AR (the autoregressive model) to multiple time series, identifying the linear relationships between them. The AR model can be seen as a particular case of the VAR for only one series.
  • 83. Conclusion  VAR models are an essential component of multivariate time series modeling. You should have a better understanding of the fundamentals of the VAR model including: • What a VAR model is. • Who uses VAR models. • Basic types of VAR models. • How to specify a VAR model. • Estimation and forecasting with VAR models.
  • 84. Conclusion  VARs have a number of important uses, particularly causality tests and forecasting.  To assess the effects of any shock to the system, we need to use impulse response functions and variance decomposition.  VECMs are an alternative, as they allow first-differenced variables and an error correction term.  The VAR has a number of weaknesses, most importantly its lack of theoretical foundations.