Methodology Research Group




      Evaluation of moderation
        and mediation in the
development of personalised therapies
        (stratified medicine)
     MHRN conference, London, 20 March 2013

    Sabine Landau, Institute of Psychiatry, King’s
                   College London
                          &
       Graham Dunn, Institute of Population
          Health, University of Manchester
Outline
                                            Methodology Research Group


1. Introduction to key concepts
   • What is personalised therapy/
     stratified medicine?
                                                         Sabine
   • Causal effects, confounding and RCTs
   • Treatment effect moderation
   • Treatment effect mediation
2. Recap and development of ideas
   • Correct and incorrect approaches to
     treatment effect moderation (stratification)
   • Using moderator (predictive marker) by             Graham
     treatment interactions as instruments for
     mediation investigations
Research Programme:
Efficacy and Mechanisms
Evaluation                                                             Methodology Research Group

Funded by MRC Methodology Research Programme

•   Design and methods of explanatory (causal) analysis for randomised trials of complex
    interventions in mental health (2006-2009)
      – Graham Dunn (PI), Linda Davies, Jonathan Green, Andrew Pickles, Chris Roberts,
         Ian White & Frank Windmeijer.

•   Estimation of causal effects of complex interventions in longitudinal studies with
    intermediate variables (2009-2012)
      – Richard Emsley (MRC Fellow), Graham Dunn.

•   Designs and analysis for the evaluation and validation of social and psychological
    markers in randomised trials of complex interventions in mental health (2010-12)
     – Graham Dunn (PI), Richard Emsley, Linda Davies, Jonathan Green, Andrew
        Pickles, Chris Roberts, Ian White & Frank Windmeijer with Hanhua Liu.

•   Developing methods for understanding mechanism in complex interventions (2013-16)
     – Sabine Landau (PI), Richard Emsley, Graham Dunn, Ian White, Paul Clarke,
       Andrew Pickles & Til Wykes.
Aims of Session 1
                                              Methodology Research Group


• To provide an introduction to causal inference using
  potential outcomes (counterfactuals).

• To show that the concepts of stratified medicine and
  treatment effect moderation are intrinsically linked to
  treatment effect heterogeneity.

• To describe some standard approaches to evaluating
  treatment-effect mechanisms including the key
  assumptions, and highlight some of the potential
  problems with this.

• To briefly describe some newer approaches to
  mechanism evaluation so that you are familiar with
  these concepts and their potential.
Example 1: efficacy and
mechanisms evaluation and
personalised medicine                         Methodology Research Group


• Parenting training may be effective at improving conduct of
  children with behavioural problems, but its effect might be
  greater in some children than in others.
• Similarly, the training is likely to improve aspects of
  parenting and, again, its effect on such parent outcomes is
  likely to vary from one parent to another.
• We might expect that if one parent's parenting has been
  improved considerably more than that of another parent,
  then the conduct of the first parent's child will have been
  improved more than that of the second parent's child.
    – Who are parenting training programmes effective
      for?
    – What proportion of the training programme effect
      on child conduct is explained by its effect on
      parenting practice?
Example 2: efficacy and
mechanisms evaluation and
personalised medicine                        Methodology Research Group


• A recent large-scale randomised controlled trial (RCT)
  provided evidence for the effectiveness of augmentation of
  antidepressant medication with cognitive behavioural
  therapy (CBT) as a next-step for patients whose depression
  has not responded to pharmacotherapy (Wiles et al, 2012).
• Thus the treatment (CBT) was shown to work for a
  subpopulation who were identified as “non-responders to
  antidepressants”.
• CBT is supposed to work by changing the way people
  think about themselves, the world and other people.
   – Who does CBT work for?
   – What proportion of the CBT effect on depressive
     symptoms is explained by its effect on cognition?
General principle of causal
 inference                                      Methodology Research Group




• Effect size estimates (correlations, regression coefficients,
  odds ratios etc.) can only tell us about association between
  two variables (say X and Y).

• The aim of causal inference is to infer whether this
  association can be given a causal interpretation (e.g. X
  causes Y) by:
   – defining the causal parameters,
   – being explicit about the assumptions made when using
     the respective estimators,
   – thinking about other possible explanations for observed
     effects, especially confounding.
Ideas of causality
  (Cox and Wermuth, 2001)                            Methodology Research Group


• Causality as a stable association
   – An observed association that cannot be accounted for by
     any postulated confounder(s)
      » (but, on its own, this says nothing about the direction of the
        causal effect)
• Bradford Hill's criteria
   – A series of conditions which make the hypothesis of
     causality more convincing
      » (but none are either necessary or sufficient to prove causality)
• Causality as an effect of an intervention
   – Potential Outcomes/Counterfactuals (Neyman, Rubin, etc.)
   – The idea of fixing (setting) the values of the explanatory
     variables (Pearl)
• Causality as an explanation of a process
   – This is where science comes in…
How can we formally define
a causal treatment effect?                   Methodology Research Group



 • The potential outcomes/counterfactual approach.

 • It is a comparison between what is and what might
   have been.

 • We wish to estimate the difference between a patient's
   observed outcome and the outcome that would have
   been observed if, contrary to fact, the patient's
   treatment or care had been different (Neyman, 1923;
   Rubin, 1974).

 • Without the possibility of comparison the treatment
   effect is not well defined e.g. gender as a cause.
Individual treatment
effects (ITEs)                          Methodology Research Group


 • For a given individual, the effect of treatment
   is the difference:

       ITE = Outcome(treatment) − Outcome(control)




          We can never observe this!
Causal inference using
counterfactuals                  Methodology Research Group


[Diagram: the same individual receives treatment and receives control; the outcome is measured under each condition.]
      Comparison of outcomes gives an
        individual treatment effect
Causal inference using
counterfactuals                   Methodology Research Group


[Diagram: one individual receives treatment and a different individual receives control; an outcome is measured for each.]
   Comparison of outcomes will not give an
        individual treatment effect
Average treatment effect
(ATE)                                              Methodology Research Group


• The average treatment effect ATE is:

   Average[ITE] = Average[Outcome(treatment) − Outcome(control)]

• If the selection of treatment options is purely random (as in
  a perfect RCT) then:

  Ave[Outcome(treatment) − Outcome(control)]

  = Ave[Outcome(treatment) | treatment] − Ave[Outcome(control) | control]

  = Ave[Outcome | treatment] − Ave[Outcome | control]


• The ATE defines the efficacy of the treatment with respect to
  the control (illustrated in the simulation sketch below).
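As an illustration, here is a minimal Stata sketch (hypothetical data, not from the slides) in which both potential outcomes are simulated, so the true ATE is known, and the randomised group comparison recovers it:

// Hypothetical simulation: both potential outcomes are generated, so the true
// ATE is known; under random allocation the observed group difference recovers it.
clear
set seed 20130320
set obs 10000
gen y_control   = rnormal(50, 10)                 // potential outcome under control
gen y_treatment = y_control - 5 + rnormal(0, 5)   // potential outcome under treatment (ITEs vary)
gen ite = y_treatment - y_control                 // individual treatment effects (never both observed in practice)
summarize ite                                     // true ATE is about -5
gen treat = runiform() < 0.5                      // purely random allocation
gen y = cond(treat, y_treatment, y_control)       // only one potential outcome is actually seen
ttest y, by(treat)                                // Ave[Outcome|treatment] - Ave[Outcome|control] estimates the ATE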
Causal inference using
counterfactuals                   Methodology Research Group


[Diagram: one group of individuals receives treatment and another group receives control; outcomes are measured in each group.]
   Comparison of average outcomes gives an
        average treatment effect
Problem of confounding
                                                      Methodology Research Group




[Causal diagram: Exposure → Outcome (black path); unmeasured confounder U → Exposure and U → Outcome (red backdoor path).]



• Observed variables in squares, unobserved (latent) variables in
  circles.
• An arrow (directed link) between variables represents a causal effect.
• We are interested in the causal effect of Exposure on Outcome (black
  path)
• U is an unmeasured confounder (=cause of Exposure and Outcome).
• The confounder provides a backdoor path connecting Exposure and
  Outcome (red path)
Why randomisation?
                                                   Methodology Research Group


• The strength of randomisation is that it ensures that there are
  no variables (either observed or unobserved) that drive
  treatment allocation.

• In terms of a causal graph, there are no arrows into
  randomisation from any other variable, observed or unobserved:
   – Random treatment group is not a descendant of any other
     variable.
   – It is exogenous in the model with response=Outcome and
     covariate=Random treatment group.


• This means that any comparison between randomisation
  groups (e.g. mean difference) estimates a (total) causal
  effect…
   – …provided the trial has been well designed and executed.
Mendelian randomisation
  (from Davey-Smith 2011)                        Methodology Research Group


• “The principle of Mendelian randomization relies on the
  basic (but approximate) laws of Mendelian genetics. If the
  probability that a postmeiotic germ cell, that has received
  any particular allele at segregation, contributes to a viable
  conception is independent of environment (following from
  Mendel's first law), and if genetic variants sort
  independently (following from Mendel's second law), then
  at a population level these variants will not be associated
  with the confounding factors that generally distort
  conventional observational studies.”

• Basically, genotypes are entirely derived from parents but
  can be considered randomly allocated,
   – e.g. if both parents are type AB, then genotype could
     be AA (probability 0.25), AB (0.50) or BB (0.25).
Mendelian randomisation
  (from Davey-Smith 2011)                      Methodology Research Group




• Genotypes are equivalent to randomisation…

• As before, in causal graph terms, there are no arrows into
  genes from any other variable, observed or unobserved:
   – Gene is not a descendant of any other variable.
   – It is exogenous in the model with response=Outcome
     and covariate=Gene.

• This means that any comparison between genes (e.g.
  mean difference) estimates a (total) causal effect.
Treatment effect
   heterogeneity                                 Methodology Research Group


• Importantly, the definition of the causal parameter, the average
  treatment effect (ATE), does not require that the ITEs are equal for
  everyone.
[Diagram: potential outcomes under "receive treatment" and "receive control" for several individuals; for some the treatment effect is positive, for others it is detrimental.]
Personalised medicine and
treatment effect heterogeneity                   Methodology Research Group


 • The existence of variation in individual treatment effects
   (ITEs) is the foundation of personalised medicine.
    – Stratified medicine
    – Predictive medicine
    – Genomic medicine

 • If we are to pursue the idea of stratified medicine then we
   must believe in treatment effect heterogeneity.

 • We should therefore use statistical methodology that
   explicitly accounts for such causal effect heterogeneity.
Baseline predictors
                                              Methodology Research Group


 • How does stratified medicine exploit treatment effect
   heterogeneity?

 • We are interested in knowing in advance of treatment
   allocation/decisions to treat who a treatment is most
   effective for.

 • For personalised medicine we need access to pre-
   treatment (baseline) characteristics that predict
   treatment-effect heterogeneity

    – We don‟t just want to predict outcome
Moderators of treatment
                                                            Methodology Research Group



 Baseline (pre-treatment) characteristics that
 influence the effect of treatment on outcome


[Path diagram: Random allocation → Outcome, with the Marker moderating the effect of allocation on the outcome.]
Note this path diagram is no longer a causal graph.

We call such a baseline variable a “marker” – for more see Section 2.
Moderation assessment
 in trials                                     Methodology Research Group



• The ability of a baseline variable to act as a treatment
  moderator (also referred to as treatment effect modifier)
  can be investigated by assessing the interaction between
  treatment and the moderator variable in terms of the
  outcome.

• When the treatment has been randomised then the causal
  effect of the treatment (its efficacy) within subpopulations
  defined by the level of the moderator can be estimated.

• (In particular, randomisation within strata defined by the
  levels of the moderator maximises the power of this
  assessment.)
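In Stata, for example, such an interaction assessment might look like the following sketch, where outcome, treat (randomised, 0/1) and marker (0/1) are assumed variable names:

// Hypothetical sketch: treatment by moderator interaction in a randomised trial.
regress outcome i.treat##i.marker
// the coefficient on 1.treat#1.marker estimates how the treatment effect
// differs between the two marker-defined subpopulations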
Moderation assessment in
treated cohorts                                Methodology Research Group



• Often investigators look for outcome heterogeneity in a
  cohort of people who received the treatment and interpret
  such heterogeneity as evidence for moderation
   – E.g. for schizophrenics receiving a psychological
     therapy compare functioning between SCZ subtypes

• This approach does not address the moderation
  question!

• The approach assesses whether a baseline variable is
  predictive of outcome but NOT whether it is predictive of
  treatment effects.
Prognostic baseline
 variables                                     Methodology Research Group


• Cohort studies of treated patients can only provide
  assessments of the ability of baseline variables to be
  predictive of the outcome;
   – That is whether they are prognostic variables.

• They cannot say anything about the ability of baseline
  variables to predict treatment effects;
   – That is whether they are predictive (moderator)
     variables.

• In personalised medicine we are interested in identifying
  moderators.
• However, we may make use of prognostic variables to do
  this in a more powerful way (see Session 2).
Treatment effect mediation
                                              Methodology Research Group


• The aim of efficacy and mechanism investigations is to go
  beyond evaluating whether an intervention is effective and
  to explain why it might be efficacious:
   – What are the putative mechanisms through
     which the treatment acts?

• Usual analysis methods are dominated by decomposing total
  effects into direct and indirect effects:
   – Mental health and psychology have been concerned with
     this idea for decades.
   – The widely cited Baron and Kenny paper is the standard
     approach to mediation analysis in the social sciences.
   – It makes implicit assumptions which are unlikely to hold.
Simple mediation diagram
                                          Methodology Research Group




[Path diagram: Exposure → Mediator → Outcome (indirect effect) and Exposure → Outcome (direct effect).]
    Total effect = direct effect + indirect effect
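A minimal sketch of this traditional (Baron and Kenny style) decomposition in Stata, assuming variables named outcome, mediator and treat; as noted above, it is only valid if there is no unmeasured mediator-outcome confounding:

// Traditional mediation decomposition - hypothetical variable names.
regress outcome treat            // total effect of the exposure/treatment
regress mediator treat           // effect of treatment on the mediator (path a)
regress outcome mediator treat   // path b (mediator) and the direct effect (treat)
// indirect effect = a*b; total effect = direct effect + indirect effect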
Confounded mediation
 assessment in epidemiology                           Methodology Research Group




[Path diagram: Exposure → Mediator → Outcome and Exposure → Outcome, with unmeasured confounders U of the exposure-mediator, mediator-outcome and exposure-outcome relationships.]

If treatment is not randomised then there is likely to be even more
unmeasured confounding.
How does randomisation
help?                                Methodology Research Group




[Path diagram: Random allocation → Mediator → Outcome and Random allocation → Outcome; the confounding paths into the allocation are "blocked" by randomisation, but unmeasured mediator-outcome confounding (U) remains.]
Mediation in trials
                                                    Methodology Research Group

[Path diagram: Random allocation → Mediator → Outcome and Random allocation → Outcome; U (the unmeasured confounders) affects both Mediator and Outcome, measured Covariates affect both, and Mediator and Outcome each have an error term.]
Mediation in genetic
epidemiology                                   Methodology Research Group

[Path diagram: as above, with Gene in place of Random allocation: Gene → Mediator → Outcome and Gene → Outcome, with unmeasured confounders U, measured Covariates and error terms.]
Possible solutions
                                                    Methodology Research Group



• There are basically two ways by which we can ensure that we
  can estimate causal parameters of interest in mechanisms
  investigations (direct and indirect treatment effects):

   – Measure and adjust for potential confounders (sounds
     obvious, not always done) …
      » so that there remains no hidden confounding and traditional
        Baron and Kenny mediation analysis approaches can be
        applied
   – Use estimators that can consistently estimate mediation
     parameters in the presence of hidden confounding …
      » a class of estimators called instrumental variables estimators
        allows for this
      » however, these also require assumptions (see below)
Measuring confounders
                                                    Methodology Research Group


• This can be difficult when knowledge about underlying
  processes is only patchy.
• However, when the putative confounder(s) are known it
  might be possible to obtain measures and thus enable causal
  mediation assessments even for only partly observed
  mediators.
• Example
   – Immunology (Follmann, 2006):
      » Trial to compare vaccination with HIV vaccine against
        controls
      » Putative mediator= immune response (only observed in the
        vaccinated group)
      » Interested in whether the vaccination effect on infection rate
        is mediated by the immune response
Vaccine trials
                                               Methodology Research Group



• It is easy to demonstrate that immune response is a
  correlate of protection in the vaccinated arm: the higher
  the response, the lower the infection rate.

• Unfortunately, this correlation does not necessarily imply a
  causal effect.
   – Protection against infection specifically induced by the HIV
     vaccine is confounded with underlying levels of
     protection in the absence of vaccination.
   – Someone capable of producing a large immune
     response would be more resistant to infection, even in
     the absence of vaccination.
“Strange result”
                                                    Methodology Research Group


• Confounding explained the strange result:

   – Immune response observed after HIV vaccination.
       » …though really what is being observed here is the
         combination of protection due to general and specific (HIV
         vaccine) factors
   – Antibody response to the HIV vaccination was strongly
     associated with infection risk in the vaccine group.
       » … though that could just be protection due to general
         factors correlating with infection risk
   – But NO effect of HIV vaccination on infection rate (large
     trial of approx. 5000 participants).

• A correlate of protection is not necessarily a treatment-effect
  mediator, let alone a valid surrogate outcome.
A hypothetical HIV
vaccine trial (Follmann, 2006)                 Methodology Research Group


• Vaccinate everyone before randomisation with an irrelevant
  vaccine (against Rabies, for example).

• Measure the immune response to the Rabies vaccine (a
  proxy of protection due to general factors).

• Randomly allocate participants to receive HIV vaccine or
  Placebo.

• Measure immune response in the HIV vaccinated group.

• Use response to the Rabies vaccine to (multiply) impute the
  missing HIV vaccine response in the Placebo participants.

• Carry out a Baron and Kenny analysis on the imputed data
  which controls for the now observed confounder.
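A much simplified sketch of this idea in Stata, using a single regression-based imputation rather than multiple imputation; the variable names (hiv_response, rabies_response, infected, vaccine) are assumptions:

// Simplified (single-imputation) sketch of the Follmann (2006) design; in
// practice multiple imputation would be used so that the extra uncertainty
// is propagated into the final analysis.
regress hiv_response rabies_response if vaccine == 1     // model HIV response from the general-protection proxy
predict hiv_response_imp                                 // predicted HIV response for all participants
replace hiv_response_imp = hiv_response if vaccine == 1  // keep the observed values in the vaccinated arm
logit infected i.vaccine c.hiv_response_imp              // outcome model adjusting for the now "observed" confounder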
Why do we need
 instrumental variables?                          Methodology Research Group



• All available statistical methods we usually use (for any
  standard analysis), including:

   –   Stratification
   –   Regression
   –   Matching
   –   etc.

  require the one unverifiable condition we identified previously:
             NO UNMEASURED CONFOUNDING

• Instrumental variables allow us to relax this assumption.
Instrumental variables
                                               Methodology Research Group


• For mediation assessment in a trial we are looking for a
  variable that is:
   1. (Strongly) predictive of the intermediate variable;
   2. Has no direct effect on the outcome, except through the
      intermediate variable;
   3. Does not share common causes with the outcome.

• If these conditions hold, in addition to one further
  assumption (no interactions or monotonicity), then such a
  variable can be used as an instrumental variable (IV).

• Randomisation, where available, satisfies criteria 1 and 3.
• If we consider this when designing the trial, we can measure
  variables that MIGHT meet these requirements.
Mediation diagram
with instrumental variables
                                           Methodology Research Group




[Path diagram: as before, with Instruments → Mediator added; Random allocation → Mediator → Outcome and Random allocation → Outcome; unmeasured confounders U of Mediator and Outcome; measured Covariates; error terms on Mediator and Outcome.]
Possible instruments
                                                    Methodology Research Group


• The following variables might serve as instrumental
  variables to enable mediation investigations in trials:
   – Baseline variable x randomisation interactions (see
     Section 2)
       » E.g. Mother mental health x training programme interaction
         in parenting example
   – Trial x randomisation interaction in meta-analysis of trials
   – Randomly allocated non-standardised aspects of
     interventions
       » E.g. low and high intensity versions of therapy
   – Genes
       » An application of Mendelian randomisation where it is
         assumed that a gene determining the intermediate
         phenotype only affects the distal phenotype by changing the
         intermediate
Mendelian randomisation:
using genotype as an IV
                                         Methodology Research Group



[Path diagram: GENES → Mediator; Random allocation → Mediator → Outcome and Random allocation → Outcome; unmeasured confounders U, measured Covariates and error terms.]
Assumptions for instrumental
variables                                    Methodology Research Group


 • IV methods require FOUR assumptions

 • The first 3 assumptions are from the definition:
    – The association between instrument and mediator.
    – No direct effect of the instrument on outcome.
    – No unmeasured confounding for the instrument and
      outcome.

 • There is a wide variety of possible fourth assumptions, and
   different assumptions result in the estimation of
   different causal effects:
    – E.g. no interactions, monotonicity (no defiers).
Instrumental variables:
pros and cons                                                                        Methodology Research Group


Advantages:
 1. Can allow for unmeasured confounding;
 2. Can allow for measurement error;
 3. Randomisation often meets the definition so is an ideal instrument.

Disadvantages:
 1. It is impossible to verify that a variable is an instrument, and using a
    non-instrument introduces additional bias.
 2. A weak instrument increases the bias over that of ordinary regression
    (for finite samples).
 3. Instruments by themselves are actually insufficient to estimate causal
    effects and we require additional assumptions.

See Hernán and Robins (2006), Epidemiology, for further details
Assumption trade-off
                                                Methodology Research Group


• IV methods replace one unverifiable assumption of no
  unmeasured confounding between the intermediate variable
  and the outcome by other unverifiable assumptions
   – no unmeasured confounding for the instruments, or
   – no direct effect of the instruments.

• We need to decide which assumptions are more likely to
  hold in our analysis.

• An IV analysis will also decrease the precision of our
  estimates because of allowing for the unmeasured
  confounding.
In the next session…
                                              Methodology Research Group


• Combining all these ideas:
   – Using baseline moderator variables (predictive
     markers) for evaluation of treatment effect
     mechanisms.
   – Using prognostic baseline variables (markers) as
     confounders or instrumental variables.
   – Improved trial designs to evaluate treatment-effect
     heterogeneity and corresponding mediational
     mechanisms.

• First we will have a short break…
Methodology Research Group




      Evaluation of moderation
        and mediation in the
development of personalised therapies
        (stratified medicine)


              SESSION 2
Aims of Session 2
                                         Methodology Research Group


• Recap main ideas from Session 1.

• Develop these ideas to
   – identify correct and incorrect approaches to
     assessing treatment effect moderation
     (stratification).

• Develop these ideas to
   – suggest trial designs and analyses that use
     moderator (predictive marker) by treatment
     interactions as instruments for mediation
     investigations.
Recap: treatment effects and
treatment-effect moderation             Methodology Research Group




• Potential outcomes & treatment effects

• Average treatment effects

• Treatment-effect heterogeneity (moderation)

• Naïve searches for stratifying factors (moderators)
Treatment effects
                                         Methodology Research Group




• Treatment effects do not make sense (are not defined)
  without comparison.

• We are comparing the outcome we see after therapy
  with the outcome we might have seen had the
  individual not received therapy, or therapy of a
  different kind to that actually experienced.

• We are comparing potential outcomes or
  counterfactuals.
Potential outcomes
                                           Methodology Research Group



• Consider just two alternatives for the treatment of
  depression: therapy (T) or a control condition (C).

• We have an outcome (the Beck Depression Inventory
  score) that could be measured six months after the
  decision to start therapy (or not).

• Let these two potential outcomes be BDI(T) and BDI(C)
  for the therapy and control conditions, respectively.
Comparison of potential outcomes
                                            Methodology Research Group



• The treatment effect for any given individual is the
  difference
                  BDI(T)-BDI(C)

      which we would expect to be a negative
      number if the treatment is beneficial.

• Unfortunately, we never get to see both potential
  outcomes so we can never observe this individual's
  treatment effect.
So-called treatment-response is
not a measure of an effect of therapy
                                           Methodology Research Group



• Let's now introduce a measure of depression, BDI(0),
  that is obtained at the time of the start of therapy.

• The change over time under therapy, i.e.

   – BDI(T) – BDI(0), is not the same as BDI(T) – BDI(C).



• BDI(0) is NOT BDI(C)!
Randomisation and Average
Treatment Effects                         Methodology Research Group


• We get round our problem by working with
  averages:

   Average Treatment Effect = ATE
            = Ave[BDI(T) – BDI(C)]
            = Ave[BDI(T)] – Ave[BDI(C)]

• If we have random allocation to treatment, R=T or
  C, then

• ATE = Ave[BDI|R=T] – Ave[BDI|R=C]
Treatment-effect
heterogeneity
                                           Methodology Research Group




• The treatment effect BDI(T)-BDI(C) is highly likely to
  vary from one individual to another.

• We would like to know what background information
  moderates (or predicts) the individual‟s treatment
  effect. This is the essence of stratification.

• Let's say we have a genotypic marker (G=0,1). We'd
  like to look at the association between G and BDI(T)-
  BDI(C).
Again, we look at averages
                                           Methodology Research Group


• We are concerned with the evaluation of the
  comparison of
                 ATE|G=0
   with
                 ATE|G=1

• This can be done by estimating and/or testing a
  treatment by genotype interaction in a suitably
  powered RCT (see the sketch below).
   – (e.g. see the GENPOD trial: Lewis et al. BJPsych,
     Vol 198, pp 464-471, 2011).
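A sketch of this analysis in Stata, with assumed variable names bdi (6-month outcome), treat (randomised) and g (genotype):

// Hypothetical sketch: ATE|G=0 versus ATE|G=1 via a treatment by genotype interaction.
regress bdi i.treat##i.g     // the interaction term tests whether the ATE differs by genotype
margins g, dydx(treat)       // estimated average treatment effect within each genotype group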
This is not rocket science ….
but what do geneticists usually do?
                                            Methodology Research Group




• Investigators have a cohort of treated individuals.

• They have a measure of treatment outcome, say,
  BDI(T), or treatment response, BDI(T)-BDI(0), on all
  individuals within the cohort. Often, they label people
  as 'responders' or 'non-responders'.

• They investigate associations between treatment
  outcome and genotypic markers (G).
A treatment outcome is not
a treatment effect                     Methodology Research Group




• BDI(T) is not BDI(T)-BDI(C)!

• Let the treatment effect be Δ.

• Then treatment outcome, BDI(T), is equal to

  BDI(C) + Δ     (to state the obvious!).
Confounding of treatment-
effects with prognosis                   Methodology Research Group



• The genotype (G) may be associated with both the
  treatment effect (Δ) and with treatment-free outcome,
  BDI(C), i.e. prognosis.

• Associating G with treatment outcome, BDI(T), cannot
  distinguish between the two.

• Most importantly, it may be possible for treatment
  outcome to be associated with G even when there is no
  effect of treatment for anyone in the treated cohort!
… and evaluating the so-called
treatment-response doesn't help!
                                            Methodology Research Group



• Δ = BDI(T)-BDI(C)

• Treatment response      = BDI(T) – BDI(0)
                          = Δ + BDI(C) – BDI(0)

• Still confounded!
   – At best, these investigations are identifying
     candidates for further (more rigorous)
     investigation.
   – At worst, they are uncovering artefacts.
Our approach to stratified medicine
(personalised therapy)             Methodology Research Group




• Predicting outcome after treatment (responders vs.
  non-responders) is barely scratching the surface of
  stratified medicine.

• Understanding the mechanism underlying the
  stratification is the key scientific question, and the
  methodological challenge.
Our “manifesto”
                                               Methodology Research Group


• Personalised (stratified) medicine and treatment-effect
  mechanisms evaluation are inextricably linked and
  stratification without a corresponding mechanisms
  evaluation lacks credibility;

• In the almost certain presence of mediator-outcome
  confounding, mechanisms evaluation is dependent on
  stratification for its validity;

• Both stratification and treatment-effect mediation can be
  evaluated using a marker stratified trial design together
  with detailed baseline measurement of all known prognostic
  markers and other prognostic covariates;
Our methodological
 solution                                 Methodology Research Group




• Direct and indirect (mediated) effects should be
  estimated through the use of instrumental variable
  methods (the instrumental variable being the
  predictive marker by treatment interaction)
  together with adjustments for all known prognostic
  markers (confounders)

   – the latter adjustments contributing to increased
     precision (as in a conventional analysis of
     treatment effects) rather than bias reduction.
A purely prognostic marker
                              Methodology Research Group




[Path diagram: Randomised Treatment → Outcome; Prognostic Marker → Outcome.]
Prognostic Marker
                                   Methodology Research Group




[Plot: Outcome against Marker Level, with separate lines for Treated and Untreated; the vertical gap between the lines is the treatment effect, which does not depend on the marker level.]
A prognostic marker as
a confounder             Methodology Research Group




[Path diagram: Randomised Treatment → Putative Mediator → Clinical Outcome, with the Prognostic Marker acting as a measured confounder of the mediator-outcome relationship and U as unmeasured confounding.]
Instrumental variables
                                          Methodology Research Group



• If the causal influence of the prognostic marker
  on the final outcome can be fully explained by its
  influence on the intermediate, then the marker
  can be used as an instrumental variable (or
  instrument, for short).

• This is the theoretical rationale for the use of so-
  called 'Mendelian Randomisation'.
An instrumental variable (IV)
                                Methodology Research Group




[Path diagram: Random Allocation (IV) → Treatment Received → Outcome, with unmeasured confounders U of Treatment Received and Outcome.]
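This is the classic case of random allocation serving as an instrument for the treatment actually received; a one-line Stata sketch (variable names assumed) would be:

// Hypothetical sketch: random allocation (r) instruments treatment actually received.
ivregress 2sls outcome (treat_received = r)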
A prognostic marker as an
instrumental variable                         Methodology Research Group




[Path diagram: Randomised Treatment → Putative Mediator → Clinical Outcome; the Prognostic Marker affects the Mediator but has no direct link to the Outcome; U = unmeasured mediator-outcome confounders.]
Predictive markers
                                          Methodology Research Group


• Although they may have direct predictive effects on
  both intermediate and final outcomes, their essential
  characteristic is that they moderate (influence)
  treatment effects.

• If the treatment-effect moderation on final outcome is
  wholly explained by the moderation of the effect of
  treatment on the intermediate outcome, then the latter
  (i.e. a treatment by marker interaction) can be used as
  an instrument.

• A more subtle (and more realistic?) version of
  Mendelian Randomisation.
Predictive marker
(may also be prognostic)        Methodology Research Group




[Path diagram: Randomised Treatment → Outcome, with the Predictive Marker (moderator) exerting a moderating effect on this path.]
Predictive Marker
                                   Methodology Research Group




[Plot: Outcome against Marker Level, with separate lines for Treated and Untreated; the gap between the lines (the treatment effect) depends on the marker level.]
Putting it all together: potential
joint roles of predictive and
prognostic markers                         Methodology Research Group

[Path diagram: Randomised Treatment → Intermediate Outcome (Mediator) (path A); Mediator → Final (Clinical) Outcome (path B); Randomised Treatment → Final Outcome (path C). The Predictive Marker (moderator) moderates the treatment effects, the Prognostic Marker (risk factor) predicts the outcomes, and U denotes unmeasured mediator-outcome confounders.]
Potential roles of prognostic markers:
measured confounder
or instrumental variable                              Methodology Research Group

[Path diagram: paths A, B and C as above; the Prognostic Marker (risk factor) is connected to the model by dotted pathways, and U denotes unmeasured confounders.]

Dotted lines – pathways we might assume are absent.
Alternatively, we might assume that there are no longer any Us.
Option 1 – use prognostic marker(s) as a
measured confounder(s) and then assume
there is no hidden confounding (U)
                                                 Methodology Research Group

[Path diagram: paths A, B and C as above, with Prognostic Marker 1 and Prognostic Marker 2 included as measured confounders and no unmeasured confounder U.]
Option 2 – Use as prognostic marker as
an instrumental variable
(Mendelian Randomisation)                         Methodology Research Group

[Path diagram: paths A, B and C as above; the Prognostic Marker affects the Mediator only (no direct path to the Final Outcome) and so serves as an instrument; U = unmeasured confounders.]

Using the prognostic marker as an instrumental variable.
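A sketch of Option 2 in Stata (variable names are assumptions), mirroring the ivregress commands used in the simulated example below:

// Hypothetical sketch: the prognostic marker instruments the mediator;
// randomised treatment remains in the model as an exogenous covariate.
ivregress 2sls outcome treat (mediator = prog_marker), first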
Potential problems with
  Mendelian Randomisation
                                            Methodology Research Group


• Assumption that there is no direct effect of the
  genetic marker on final outcome frequently difficult
  to justify, and practically impossible to verify.
   – Dependent on prior knowledge.

• The marker is likely to be a rather weak instrument
  (i.e. its influence on the intermediate outcome is not
  strong enough).
   – This can lead to problems (see Session 1).

• Probably wiser to use available prognostic markers
  as observed confounders.
Potential role of
predictive markers                                    Methodology Research Group

[Path diagram: paths A, B and C as above, with the Predictive Marker (moderator) moderating the treatment effects and U denoting unmeasured confounders.]

Red dotted lines – pathways we might be justified in assuming are absent.
Stratification & mediational
mechanisms evaluation
                                                     Methodology Research Group

[Path diagram: paths A, B and C as above, with the Predictive Marker (moderator) moderating the effect of treatment on the Mediator and U denoting unmeasured confounders.]

Using the treatment by marker interaction as an instrumental variable.
Is the treatment by predictive
 marker interaction a valid
 instrument?                                 Methodology Research Group




• Are we correct in assuming that there is no moderating
  effect on pathway B?

• Are we correct in assuming that there is no moderating
  effect on pathway C?

• Dependent on prior knowledge of the biology/biochemistry
  of the system.
Theory-driven stratification
                                           Methodology Research Group



• Prior scientific theory and preliminary evidence
  strongly suggest that a given predictive marker has
  its influence through a specific mechanism (the
  putative mediator).

• No reason to expect that the moderating effect of the
  predictive marker works via a pathway not associated
  with the above mechanism (i.e. we assume that the
  treatment by marker interaction – moderation – is a
  valid instrument).
Using strong theory and all
 available prognostic marker
 information                                            Methodology Research Group

[Path diagram: paths A, B and C as above, with the Predictive Marker (moderator) moderating the treatment effects, Prognostic Marker(s) included as measured confounder(s), and U denoting unmeasured confounders.]

Using the treatment by marker interaction as an instrumental variable.
Complicated but Viable!!
                                           Methodology Research Group


• Statistical methods are widely available to estimate the
  pathways of this model (we won't worry about the
  technical details).

• Health Warning!!
• This model is pretty complex and is dependent on a
  lot of assumptions. Are these assumptions (i.e. the
  theory) defensible? Invalid assumptions lead to
  invalid solutions.
Real examples –
  We don't have any!



• We know of no existing examples of the use of this
  design – we are presently writing it up for publication.
• Examples from our mental health trials involve
  retrospective analyses of archived data.
• Four funded EME trials are under way:
   – Ketamine ECT in depression (Ian Anderson et al.);
   – Minocycline and negative symptoms (Bill Deakin et
     al.);
   – Worry Intervention Trial (Freeman et al.);
   – DBT for depression (Lynch et al.);
   – but none fully utilise biomarker information as
     described here.
A computer-simulated
 example                                 Methodology Research Group


• Trial with 1000 participants
   – (500 treated, 500 controls).
   – Quantitative outcome, y.
• Binary predictive marker (x10):
   – Treatment effect on mediator (m) in its absence is
     10 units; in its presence 60 units.
   – Moderating effect of x10 on outcome solely through
     the mediator (x10 known to be an IV).
   – Variants of x10 equally probable (50:50).
• Nine prognostic uncorrelated binary markers x1-x9.
   – All nine are confounders.
   – Details of their creation are of no consequence here.
The true model (mediator)
                                   Methodology Research Group


 Mediator (m):

 m = 5*x1 + 5*x2 + 5*x3 + 5*x4 + 5*x5 + 5*x6 + 5*x7 + 5*x8 + 5*x9 + 5*x10
     + 10*treat + 50*x11 + e12

 where x11 = treat*x10
  (i.e. the treatment by marker interaction),

 e12 is a random 'error' term, and

 " * " is a multiplication sign.
The true models (outcome)
                                  Methodology Research Group


Outcome (y):

y = 5*x1 + 5*x2 + 5*x3 + 5*x4 + 5*x5 + 5*x6 + 5*x7 + 5*x8 + 5*x9 + 5*x10
    + 2*m + 10*treat + e13

e13 is a random 'error' term (uncorrelated with e12).

There is no x11 (interaction) term in this model.

THERE ARE NO UNMEASURED COMMON CAUSES
(i.e. x1-x9, and x10, are all measured)
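A minimal sketch of how data following these two models could be generated in Stata; the seed, the error standard deviations and the way x1-x9 are created are assumptions (the slides give the coefficients but not these details), so the summaries on the next slide will not be reproduced exactly:

// Hypothetical data generation for the simulated example (details assumed).
clear
set seed 2013
set obs 1000
gen treat = _n > 500                      // 500 treated, 500 controls
forvalues i = 1/10 {
    gen x`i' = runiform() < 0.5           // binary markers; x10 variants 50:50
}
gen x11 = treat*x10                       // treatment by predictive-marker interaction
gen e12 = rnormal(0, 5)                   // mediator error (sd assumed)
gen e13 = rnormal(0, 5)                   // outcome error (sd assumed)
gen m = 5*(x1+x2+x3+x4+x5+x6+x7+x8+x9+x10) + 10*treat + 50*x11 + e12
gen y = 5*(x1+x2+x3+x4+x5+x6+x7+x8+x9+x10) + 2*m + 10*treat + e13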
Simple summaries
                                                         Methodology Research Group

---------------------------------------------------------------------------
-> treat = 0
    Variable |       Obs        Mean    Std. Dev.       Min        Max
 ------------+-------------------------------------------------------------
           m |       500      74.83       7.58          55.22     97.78
           y |       500     174.47      22.27         116.21    247.78
--------------------------------------------------------------------------
-> treat = 1
    Variable |       Obs        Mean    Std. Dev.       Min        Max
 ------------+-------------------------------------------------------------
           m |       500     108.92     28.42         55.66     159.70
           y |       500     252.91     61.10        124.59     372.49

Note lack of homogeneity of standard deviations across the groups.

TREATMENT GROUP MUCH MORE VARIABLE (AS WE MIGHT EXPECT).
Naïve analysis methods
                                            Methodology Research Group



• I won't bother to describe these in detail (but see
  below).

• In the psychological and social science literature they
  will be dominated by approaches similar to those
  advocated by Baron & Kenny (about 17000 citations!)

• At the more hi-tech end of medicine they've rarely got
  round to using the naïve methods!
Let's pretend we've not
measured x1-x9:
                                            Methodology Research Group


i.e. there are indeed 'unmeasured'
common causes
An instrumental variable regression in Stata:

ivregress 2sls y treat x10 (m = x11), first

This is a two-stage least-squares procedure which
simultaneously estimates the effect of treatment on
m (the first-stage regression), the effect of m on y,
and the direct effect of treatment on y (the second
stage).
The first-stage regressions
                                Methodology Research Group




------------------------------------
       m |       Coef.     Std. Err.
---------+--------------------------
  treat |        10.07     0.63
     x11 |       50.47     0.90
-------------------------------------
The second-stage
regressions                     Methodology Research Group




-------------------------------------
       y |      Coef.     Std. Err.
---------+---------------------------
      m |        2.00     0.02
  treat |       10.39     0.87
-------------------------------------
Naïve methods:
the 2nd-stage regression
                                        Methodology Research Group


Use ordinary least-squares to regress y on x10,
m and treat

regress y m x10 treat

------------------------------------------
           y |      Coef.   Std. Err
-------------+----------------------------
           m |      2.19     0.02
       treat |      3.67     0.75
------------------------------------------
DIRECT EFFECT OF TREATMENT SEVERELY BIASED.
Now use all available data
                                             Methodology Research Group


ivregress 2sls y treat x1 x2 x3 x4 x5 x6 x7 x8 x9 x10
     (m = x11), first

1st stage:      Coef.     Std. Err.
   treat |       9.77       0.26
     x11 |      50.73       0.37
2nd stage:
       m |       2.01       0.01
   treat |      10.01       0.55

CONSIDERABLE GAIN IN PRECISION
Measurement of prognostic markers is not essential, but it
makes the design more efficient (i.e. we can get away with a
smaller trial) – perhaps the difference between a viable
trial and one that's just not feasible.
'Naïve' 2nd-stage regression
 using all data                                Methodology Research Group




regress y x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 m treat

-------------------------------------
           y |      Coef.   Std. Err.
-------------+-----------------------
           m |     2.00     0.01
       treat |    10.05     0.54


If (but only if) we've measured all confounders then this is
valid and it is the most precise method. But ... we never know!

Returning to IV: there's a balance between bias and precision.
We don't get something for nothing.
The Key Ingredients
                                          Methodology Research Group


• Convincing psychological theory concerning the
  potential mechanism for mediation.
• Convincing theory to underline the belief that the
  treatment by moderator (predictive marker)
  interaction is a valid instrument.
• An appropriately powered trial for
   – Valid evaluation of treatment-effect moderation –
     on the mediator as well as on the outcome.
   – Valid use of instrumental variables estimation to
     evaluate the treatment-effect mechanisms
     (mediation).
Design considerations
                                          Methodology Research Group


• How big does the trial have to be? Considerably
  larger than a conventional pragmatic trial.
• How strong does the moderating effect on the
  mediator have to be?
   – Our simulated example used a very strong
     moderating effect.
   – However, presumably it has to be reasonably
     strong to be of any serious interest.
• What does the prevalence of the alleles for the
  predictive biomarker have to be?
   – We used 50:50 (maximum power).
   – More likely to be of the order 90:10.
Conclusions
                                          Methodology Research Group


• The scientific evaluation of stratified/personalised
  medicines/therapies is inseparable from mechanisms
  evaluation.
• So far, progress in trial design for mechanisms
  evaluation appears to have been very limited.
   – Interestingly, much more progress for the 'softer'
     treatments (psychotherapies) than for hi-tech
     medicines.
• Good design involves using prior scientific
  knowledge/evidence and makes full use of data from
  both prognostic and predictive markers.
• The required statistical methods are available and
  reasonably straightforward to use.
Methodology Research Group




Thank You!

approachesofcommunityhealthnursing-210713073656.pptxAkshayaKiran2
 

Similar to Stratified Medicine Conference Presentation (20)

Experimental design
Experimental designExperimental design
Experimental design
 
1PAGE 21. What is the question the authors are asking .docx
1PAGE  21. What is the question the authors are asking .docx1PAGE  21. What is the question the authors are asking .docx
1PAGE 21. What is the question the authors are asking .docx
 
Problem solving method and scientific method
Problem solving method and scientific methodProblem solving method and scientific method
Problem solving method and scientific method
 
Psychology 100 Research Design
Psychology 100 Research DesignPsychology 100 Research Design
Psychology 100 Research Design
 
Psychotherapy Efficacy Research: Critical Review
Psychotherapy Efficacy Research: Critical ReviewPsychotherapy Efficacy Research: Critical Review
Psychotherapy Efficacy Research: Critical Review
 
Guide for conducting meta analysis in health research
Guide for conducting meta analysis in health researchGuide for conducting meta analysis in health research
Guide for conducting meta analysis in health research
 
Experimental research
Experimental research Experimental research
Experimental research
 
Quantitative research design
Quantitative research designQuantitative research design
Quantitative research design
 
Evidenced Based Practice (EVP): GRADE Approach to Evidenced Based Guideline D...
Evidenced Based Practice (EVP): GRADE Approach to Evidenced Based Guideline D...Evidenced Based Practice (EVP): GRADE Approach to Evidenced Based Guideline D...
Evidenced Based Practice (EVP): GRADE Approach to Evidenced Based Guideline D...
 
Benchmarking the Effectiveness of Psychotherapy Treatment for .docx
Benchmarking the Effectiveness of Psychotherapy Treatment for .docxBenchmarking the Effectiveness of Psychotherapy Treatment for .docx
Benchmarking the Effectiveness of Psychotherapy Treatment for .docx
 
The Meaningful Assessment of Therapy OutcomesIncorporating .docx
The Meaningful Assessment of Therapy OutcomesIncorporating .docxThe Meaningful Assessment of Therapy OutcomesIncorporating .docx
The Meaningful Assessment of Therapy OutcomesIncorporating .docx
 
Quantitative, qualitive and mixed research designs
Quantitative, qualitive and mixed research designsQuantitative, qualitive and mixed research designs
Quantitative, qualitive and mixed research designs
 
Research Paper Critique:Nursing
Research Paper Critique:NursingResearch Paper Critique:Nursing
Research Paper Critique:Nursing
 
Research Design for health care students
Research Design for health care studentsResearch Design for health care students
Research Design for health care students
 
6. Randomised controlled trial
6. Randomised controlled trial6. Randomised controlled trial
6. Randomised controlled trial
 
Making Sense of Mixed Methods Design in Health Research
Making Sense of Mixed Methods Design in Health ResearchMaking Sense of Mixed Methods Design in Health Research
Making Sense of Mixed Methods Design in Health Research
 
Treatment evaluation
Treatment evaluationTreatment evaluation
Treatment evaluation
 
clinical trials types and design
clinical trials types and designclinical trials types and design
clinical trials types and design
 
Experimental Research Presentation
Experimental Research  PresentationExperimental Research  Presentation
Experimental Research Presentation
 
approachesofcommunityhealthnursing-210713073656.pptx
approachesofcommunityhealthnursing-210713073656.pptxapproachesofcommunityhealthnursing-210713073656.pptx
approachesofcommunityhealthnursing-210713073656.pptx
 

More from MHRN South London & South East Hub

More from MHRN South London & South East Hub (20)

Sunlows feb2014
Sunlows feb2014Sunlows feb2014
Sunlows feb2014
 
Sunlows jan2014 updated
Sunlows jan2014 updatedSunlows jan2014 updated
Sunlows jan2014 updated
 
SUNLOWS seminar October 13
SUNLOWS seminar October 13SUNLOWS seminar October 13
SUNLOWS seminar October 13
 
Sunlows July 26th 2013
Sunlows July 26th 2013Sunlows July 26th 2013
Sunlows July 26th 2013
 
Final mhrn young person's event flyer july 13 th
Final mhrn young person's event flyer july 13 th Final mhrn young person's event flyer july 13 th
Final mhrn young person's event flyer july 13 th
 
Sunlows 2013 (june, july and aug)
Sunlows 2013 (june, july and aug)Sunlows 2013 (june, july and aug)
Sunlows 2013 (june, july and aug)
 
2013 Up Coming SUNLOWS
2013 Up Coming SUNLOWS2013 Up Coming SUNLOWS
2013 Up Coming SUNLOWS
 
Presentation edward omeni
Presentation edward omeniPresentation edward omeni
Presentation edward omeni
 
Nicola fear fast r
Nicola fear fast rNicola fear fast r
Nicola fear fast r
 
Nicol ferrier mhrn mar13
Nicol ferrier mhrn mar13Nicol ferrier mhrn mar13
Nicol ferrier mhrn mar13
 
Mhrn presentation oram
Mhrn presentation oramMhrn presentation oram
Mhrn presentation oram
 
Mhrn liam ennis
Mhrn  liam ennisMhrn  liam ennis
Mhrn liam ennis
 
John strang-0313min
John strang-0313minJohn strang-0313min
John strang-0313min
 
Diana rose mhrn 2013
Diana rose mhrn 2013Diana rose mhrn 2013
Diana rose mhrn 2013
 
Scientific mtgpres2013
Scientific mtgpres2013Scientific mtgpres2013
Scientific mtgpres2013
 
Smoking cessation and mental ill health
Smoking cessation and mental ill healthSmoking cessation and mental ill health
Smoking cessation and mental ill health
 
Sz and experimental medicine
Sz and experimental medicineSz and experimental medicine
Sz and experimental medicine
 
Mhrn dv and mental health lmh
Mhrn dv and mental health lmhMhrn dv and mental health lmh
Mhrn dv and mental health lmh
 
Mhrn addictions-0313min
Mhrn addictions-0313minMhrn addictions-0313min
Mhrn addictions-0313min
 
J. secker mhrn presentation
J. secker mhrn presentationJ. secker mhrn presentation
J. secker mhrn presentation
 

Stratified Medicine Conference Presentation

  • 1. Methodology Research Group Evaluation of moderation and mediation in the development of personalised therapies (stratified medicine) MHRN conference, London, 20 March 2013 Sabine Landau, Institute of Psychiatry, King’s College London & Graham Dunn, Institute of Population Health, University of Manchester
  • 2. Outline Methodology Research Group 1. Introduction to key concepts • What is personalised therapy/ stratified medicine? Sabine • Causal effects, confounding and RCTs • Treatment effect moderation • Treatment effect mediation 2. Recap and development of ideas • Correct and incorrect approaches to treatment effect moderation (stratification) • Using moderator (predictive marker) by Graham treatment interactions as instruments for mediation investigations
  • 3. Research Programme: Efficacy and Mechanisms Evaluation Methodology Research Group Funded by MRC Methodology Research Programme • Design and methods of explanatory (causal) analysis for randomised trials of complex interventions in mental health (2006-2009) – Graham Dunn (PI), Linda Davies, Jonathan Green, Andrew Pickles, Chris Roberts, Ian White & Frank Windmeijer. • Estimation of causal effects of complex interventions in longitudinal studies with intermediate variables (2009-2012) – Richard Emsley (MRC Fellow), Graham Dunn. • Designs and analysis for the evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health (2010-12) – Graham Dunn (PI), Richard Emsley, Linda Davies, Jonathan Green, Andrew Pickles, Chris Roberts, Ian White & Frank Windmeijer with Hanhua Liu. • Developing methods for understanding mechanism in complex interventions (2013-16) – Sabine Landau (PI), Richard Emsley, Graham Dunn, Ian White, Paul Clarke, Andrew Pickles & Til Wykes.
  • 4. Aims of Session 1 Methodology Research Group • To provide an introduction to causal inference using potential outcomes (counterfactuals). • To show that the concepts of stratified medicine and treatment effect moderation are intrinsically linked to treatment effect heterogeneity. • To describe some standard approaches to evaluating treatment-effect mechanisms including the key assumptions, and highlight some of the potential problems with this. • To briefly describe some newer approaches to mechanism evaluation so that you are familiar with these concepts and their potential.
  • 5. Example 1: efficacy and mechanisms evaluation and personalised medicine Methodology Research Group • Parenting training may be effective at improving conduct of children with behavioural problems, but its effect might be greater in some children than in others. • Similarly, the training is likely to improve aspects of parenting and, again, its effect on such parent outcomes is likely to vary from one patient to another. • We might expect that if one parent‟s parenting has been improved considerably more than that of another parent then the conduct of the first parent‟s child has been improved more than that of the second parent‟s child. – Who are parenting training programmes effective for? – What proportion of the training programme effect on child conduct is explained by its effect on parenting practice?
  • 6. Example 2: efficacy and mechanisms evaluation and personalised medicine Methodology Research Group • A recent large-scale randomised controlled trial (RCT) provided evidence for the effectiveness of augmentation of antidepressant medication with cognitive behavioural therapy (CBT) as a next-step for patients whose depression has not responded to pharmacotherapy (Wiles et al, 2012). • Thus the treatment (CBT) was shown to work for a subpopulation who were identified as “non-responders to antidepressants”. • CBT is supposed to work by changing the way how people think about themself, the world and other people. – Who does CBT work for? – What proportion of the CBT effect on depressive symptoms is explained by its effect on cognition?
  • 7. General principle of causal inference Methodology Research Group • Effect size estimates (correlations, regression coefficients, odds ratios etc.) can only tell us about association between two variables (say X and Y). • The aim of causal inference is to infer whether this association can be given a causal interpretation (e.g. X causes Y) by: – defining the causal parameters, – being explicit about the assumptions made when using a respective estimators, – thinking about other possible explanations for observed effects, especially confounding.
  • 8. Ideas of causality (Cox and Wermuth, 2001) Methodology Research Group • Causality as a stable association – An observed association that cannot be accounted for by any postulated confounder(s) » (but, on its own, this says nothing about the direction of the causal effect) • Bradford Hill‟s criteria – A series of conditions which make the hypothesis of causality more convincing » (but none are either necessary or sufficient to prove causality) • Causality as an effect of an intervention – Potential Outcomes/Counterfactuals (Neyman, Rubin, etc.) – The idea of fixing (setting) the values of the explanatory variables (Pearl) • Causality as an explanation of a process – This is where science comes in…
  • 9. How can we formally define a causal treatment effect? Methodology Research Group • The potential outcomes/counterfactual approach. • It is a comparison between what is and what might have been. • We wish to estimate the difference between a patient‟s observed outcome and the outcome that would have been observed if, contrary to fact, the patient‟s treatment or care had been different (Neyman, 1923; Rubin, 1974). • Without the possibility of comparison the treatment effect is not well defined e.g. gender as a cause.
  • 10. Individual treatment effects (ITEs) Methodology Research Group • For a given individual, the effect of treatment is the difference: ITE=Outcometreatment - Outcomecontrol We can never observe this!
  • 11. Causal inference using counterfactuals Methodology Research Group Receive treatment Receive control Measure outcome Measure outcome Comparison of outcomes gives an individual treatment effect
  • 12. Causal inference using counterfactuals Methodology Research Group Receive treatment Receive control Measure outcome Measure outcome Comparison of outcomes will not give an individual treatment effect
  • 13. Average treatment effect (ATE) Methodology Research Group • The average treatment effect ATE is: Average[ITE] = Average[Outcometreatment - Outcomecontrol] • If the selection of treatment options is purely random (as in a perfect RCT) then: Ave[Outcometreatment - Outcomecontrol] = Ave[Outcometreatment|treatment] - Ave[Outcomecontrol|Control] = Ave[Outcome|treatment] - Ave[Outcome|Control] • ATE defines the efficacy of the treatment w. r. t. to control.
  • 14. Causal inference using counterfactuals Methodology Research Group Receive treatment Receive control Measure outcome Measure outcome Comparison of average outcomes gives an average treatment effect
  • 15. Problem of confounding Methodology Research Group U Exposure Outcomes • Observed variables in squares, unobserved (latent) variables in circles. • An arrow (directed link) between variables represents a causal effect. • We are interested in the causal effect of Exposure on Outcome (black path) • U is an unmeasured confounder (=cause of Exposure and Outcome). • The confounder provides a backdoor path connecting Exposure and Outcome (red path)
  • 16. Why randomisation? Methodology Research Group • The strength of randomisation is that it ensures that there are no variables (both observed or unobserved) that drive treatment allocation. • In terms of a causal graph, there are no arrows into randomi- sation from any other variable, observed or unobserved: – Random treatment group is not a descendent of any other variable. – It is exogenous in the model with response=Outcome and covariate=Random treatment group. • This means that any comparison between randomisation groups (e.g. mean difference) estimates a (total) causal effect… – …provided the trial has been well designed and executed.
  • 17. Mendelian randomisation (from Davey-Smith 2011) Methodology Research Group • “The principle of Mendelian randomization relies on the basic (but approximate) laws of Mendelian genetics. If the probability that a postmeiotic germ cell, that has received any particular allele at segregation, contributes to a viable conception is independent of environment (following from Mendel‟s first law), and if genetic variants sort independently (following from Mendel‟s second law), then at a population level these variants will not be associated with the confounding factors that generally distort conventional observational studies.” • Basically, genotypes are entirely derived from parents but can be considered randomly allocated, – e.g. if both parents are type AB, then genotype could be AA (probability .25), AB (0.50) or BB (0.25).
  • 18. Mendelian randomisation (from Davey-Smith 2011) Methodology Research Group • Genotypes are equivalent to randomisation… • As before, in causal graph terms, there are no arrows into genes from any other variable, observed or unobserved: – Gene is not a descendent of any other variable. – It is exogenous in the model with response=Outcome and covariate=Gene. • This means that any comparison between genes (e.g. mean difference) estimates a (total) causal effect.
  • 19. Treatment effect heterogeneity Methodology Research Group • Importantly the definition of a causal parameter, the average causal effect (ATE) does not require that the ITEs are equal for everyone. Positive effect Detrimental effect Receive treatment Receive control
  • 20. Personalised medicine and treatment effect heterogeneity Methodology Research Group • The existence of variation in individual treatment effects (ITEs) is the foundation of personalised medicine. – Stratified medicine – Predictive medicine – Genomic medicine • If we are to pursue the idea of stratified medicine then we must believe in treatment effect heterogeneity. • We should therefore use statistical methodology that explicitly accounts for such causal effect heterogeneity.
  • 21. Baseline predictors Methodology Research Group • How does stratified medicine exploit treatment effect heterogeneity? • We are interested in knowing in advance of treatment allocation/decisions to treat who a treatment is most effective for. • For personalised medicine we need access to pre- treatment (baseline) characteristics that predict treatment-effect heterogeneity – We don‟t just want to predict outcome
  • 22. Moderators of treatment Methodology Research Group Baseline (pre-treatment) characteristics that influence the effect of treatment on outcome Random allocation Outcomes Marker Note this path diagram is no longer a causal graph. We call such baseline variables a “marker” – for more see Section 2.
  • 23. Moderation assessment in trials Methodology Research Group • The ability of a baseline variable to act as a treatment moderator (also referred to as treatment effect modifier) can be investigated by assessing the interaction between treatment and the moderator variable in terms of the outcome. • When the treatment has been randomised then the causal effect of the treatment (its efficacy) within subpopulations defined by the level of the moderator can be estimated. • (In particular, randomisation within strata defined by the levels of the moderator maximises the power of this assessment.)
  • 24. Moderation assessment in treated cohorts Methodology Research Group • Often investigators look for outcome heterogeneity in a cohort of people who received the treatment and interpret such heterogeneity as evidence for moderation – E.g. for schizophrenics receiving a psychological therapy compare functioning between SCZ subtypes • This approach does not address the moderation question! • The approach assesses whether a baseline variable is predictive of outcome but NOT whether it is predictive of treatment effects.
  • 25. Prognostic baseline variables Methodology Research Group • Cohort studies of treated patients can only provide assessments of the ability of baseline variables to be predictive of the outcome; – That is whether they are prognostic variables. • They cannot say anything about the ability of baseline variables to predict treatment effects; – That is whether they are predictive (moderator) variables. • In personalised medicine we are after investigating moderators. • However, we may make use of prognostic variables to do this in a more powerful way (see Session 2).
  • 26. Treatment effect mediation Methodology Research Group • The aim of efficacy and mechanism investigations is to go beyond evaluating whether an intervention is effective and to explain why it might be efficacious: – What are the putative mechanisms through which the treatment acts? • Usual analysis methods dominated by decomposing total effects into direct and indirect effects: – Mental health and psychology has been concerned with this idea for decades. – Widely cited Baron and Kenny paper for mediation analysis in social sciences. – Makes implicit assumptions which are unlikely to hold.
  • 27. Simple mediation diagram Methodology Research Group Mediator Exposure Outcomes Total effect = direct effect + indirect effect
  • 28. Confounded mediation assessment in epidemiology Methodology Research Group U U Mediator Exposure Outcomes U If treatment is not randomised then there is likely to be even more unmeasured confounding.
  • 29. How does randomisation help? Methodology Research Group U U Mediator Random Outcomes allocation U “Blocked” by randomisation
  • 30. Mediation in trials Methodology Research Group U – the unmeasured confounders error U Mediator Random error Outcomes allocation Covariates
  • 31. Mediation in genetic epidemiology Methodology Research Group U – the unmeasured confounders error U Mediator Gene Outcomes error Covariates
  • 32. Possible solutions Methodology Research Group • There are basically two ways by which we can ensure that we can estimate causal parameters of interest in mechanisms investigations (direct and indirect treatment effects): – Measure and adjust for potential confounders (sounds obvious, not always done) … » so that there remains no hidden confounding and traditional Baron and Kenny mediation analysis approaches can be applied – Use estimators that can consistently estimate mediation parameters in the presence of hidden confounding … » a class of estimators called instrumental variables estimators allows for this » however, these also require assumptions (see below)
  • 33. Measuring confounders Methodology Research Group • This can be difficult when knowledge about underlying processes is only patchy. • However, when the putative confounder(s) are known it might be possible to obtain measures and thus enable causal mediation assessments even for only partly observed mediators. • Example – Immunology (Follman, 2006): » Trial to compare vaccination with HIV vaccine against controls » Putative mediator= immune response (only observed in the vaccinated group) » Interested in whether the vaccination effect on infection rate is mediated by the immune response
  • 34. Vaccine trials Methodology Research Group • It is easy to demonstrate that immune response is a correlate of protection in the vaccinated arm: the higher the response, the lower the infection rate. • Unfortunately, this correlation does not necessarily imply a causal effect. – Protection to infection specifically induced by the HIV vaccine is confounded with underlying levels of protection in the absence of vaccination. – Someone capable of producing a large immune response would be more resistant to infection, even in the absence of vaccination.
  • 35. “Strange result” Methodology Research Group • Confounding explained the strange result: – Immune response observed after HIV vaccination. » …though really what is being observed here is the combination of protection due to general and specific (HIV vaccine) factors – Antibody response to the HIV vaccination was strongly associated with infection risk in the vaccine group. » … though that could just be protection due to general factors correlating with infection risk – But NO effect of HIV vaccination on infection rate (large trial of approx. 5000 participants). • A correlate of protection is not necessarily a treatment-effect mediator, let alone a valid surrogate outcome.
  • 36. A hypothetical HIV vaccine trial (Follmann, 2006) Methodology Research Group • Vaccinate everyone before randomisation with an irrelevant vaccine (against Rabies, for example). • Measure the immune response to the Rabies vaccine (a proxy of protection due to general factors). • Randomly allocate participants to receive HIV vaccine or Placebo. • Measure immune response in the HIV vaccinated group. • Use response to the Rabies vaccine to (multiply) impute the missing HIV vaccine response in the Placebo participants. • Carry out a Baron and Kenny analysis on the imputed data which controls for the now observed confounder.
  • 37. Why do we need instrumental variables? Methodology Research Group • All available statistical methods we usually use (for any standard analysis), including: – Stratification – Regression – Matching – etc. require the one unverifiable condition we identified previously: NO UNMEASURED CONFOUNDING • Instrumental variables allow us to relax this assumption.
  • 38. Instrumental variables Methodology Research Group • For mediation assessment in a trial we are looking for a variable that is: 1. (Strongly) predictive of the intermediate variable; 2. Has no direct effect on the outcome, except through the intermediate variable; 3. Does not share common causes with the outcome. • If these conditions hold, in addition to one further assumption (no interactions or monotonicity), then such a variable can be used as an instrumental variable (IV). • Randomisation, where available, satisfies criteria 1 and 3. • If we consider this when designing the trial, we can measure variables that MIGHT meet these requirements.
  • 39. Mediation diagram with instrumental variables Methodology Research Group error U Instruments Mediator Random error allocation Outcomes Covariates
  • 40. Possible instruments Methodology Research Group • The following variables might serve as instrumental variables to enable mediation investigations in trials: – Baseline variable x randomisation interactions (see Section 2) » E.g. Mother mental health x training programme interaction in parenting example – Trial x randomisation interaction in meta-analysis of trials – Randomly allocated non-standardised aspects of interventions » E.g. how and high intensity versions of therapy – Genes » An application of Mendelian randomisation where it is assumed that a gene determining the intermediate phenotype only affects the distal phenotype by changing the intermediate
  • 41. Mendelian randomisation: using genotype as an IV Methodology Research Group error U GENES Mediator Random allocation Outcomes error Covariates
  • 42. Assumptions for instrumental variables Methodology Research Group • IV methods require FOUR assumptions • The first 3 assumptions are from the definition: – The association between instrument and mediator. – No direct effect of the instrument on outcome. – No unmeasured confounding for the instrument and outcome. • There are a wide variety of fourth assumptions and different assumptions result in the estimation of different causal effects: – E.g. no interactions, monotonicity (no defiers).
  • 43. Instrumental variables: pros and cons Methodology Research Group Advantages Disadvantages 1. Can allow for unmeasured 1. It is impossible to verify that a confounding; variable is an instrument and using a non-instrument 2. Can allow for measurement introduces additional bias. error; 2. A weak instrument increases the bias over that of ordinary 3. Randomisation often meets regression (for finite samples). the definition so is an ideal instrument. 3. Instruments by themselves are actually insufficient to estimate causal effects and we require additional assumptions. See Hernán and Robins (2006), Epidemiology for further details
  • 44. Assumption trade-off Methodology Research Group • IV methods replace one unverifiable assumption of no unmeasured confounding between the intermediate variable and the outcome by other unverifiable assumptions – no unmeasured confounding for the instruments, or – no direct effect of the instruments. • We need to decide which assumptions are more likely to hold in our analysis. • An IV analysis will also decrease the precision of our estimates because of allowing for the unmeasured confounding.
  • 45. In the next session… Methodology Research Group • Combining all these ideas: – Using baseline moderator variables (predictive markers) for evaluation of treatment effect mechanisms. – Using prognostic baseline variables (markers) as confounders or instrumental variables. – Improved trial designs to evaluate treatment-effect heterogeneity and corresponding mediational mechanisms. • First we will have a short break…
  • 46. Methodology Research Group Evaluation of moderation and mediation in the development of personalised therapies (stratified medicine) SESSION 2
  • 47. Aims of Session 2 Methodology Research Group • Recap main ideas from Session 1. • Develop these ideas to – verify correct and incorrect approaches to assessing treatment effect moderation (stratification). • Develop these ideas to – suggest trial designs and analyses that use moderator (predictive marker) by treatment interactions as instruments for mediation investigations.
  • 48. Recap: treatment effects and treatment-effect moderation Methodology Research Group • Potential outcomes & treatment effects • Average treatment effects • Treatment-effect heterogeneity (moderation) • Naïve searches for stratifying factors (moderators)
  • 49. Treatment effects Methodology Research Group • Treatment effects do not make sense (are not defined) without comparison. • We are comparing the outcome we see after therapy with the outcome we might have seen had the individual not received therapy, or therapy of a different kind to that actually experienced. • We are comparing potential outcomes or counterfactuals.
  • 50. Potential outcomes Methodology Research Group • Consider just two alternatives for the treatment of depression: therapy (T) or a control condition (C). • We have an outcome (the Beck Depression Inventory score) that could be measured six months after the decision to start therapy (or not). • Let these two potential outcomes be BDI(T) and BDI(C) for the therapy and control conditions, respectively.
  • 51. Comparison of potential outcomes Methodology Research Group • The treatment effect for any given individual is the difference BDI(T)-BDI(C) which we would expect to be a negative number if the treatment is beneficial. • Unfortunately, we never get to see both potential outcomes so we can never observe this individual‟s treatment effect.
  • 52. So-called treatment-response is not a measure of an effect of therapy Methodology Research Group • Let‟s now introduced a measure of depression BDI(0) that is obtained at the time of the start of therapy. • The change over time under therapy – i.e. – BDI(T) – BDI(0) is not the same as BDI(T) –BDI(C). • BDI(0) is NOT BDI(C)!
  • 53. Randomisation and Average Treatment Effects Methodology Research Group • We get round our problem by working with averages: Average Treatment Effect = ATE = Ave[(BDI(T) – BDI(C)] = Ave[BDI(T)] – Ave[BDI(C)] • If we have random allocation to treatment, R=T or C, then • ATE = Ave[BDI|R=T] – Ave[BDI|R=C]
  • 54. Treatment-effect heterogeneity Methodology Research Group • The treatment effect BDI(T)-BDI(C) is highly likely to vary from one individual to another. • We would like to know what background information moderates (or predicts) the individual‟s treatment effect. This is the essence of stratification. • Let‟s say we have a genotypic marker (G=0,1). We‟d like to look at association between G and BDI(T)- BDI(C).
  • 55. Again, we look at averages Methodology Research Group • We are concerned with the evaluation of the comparison of ATE|G=0 with ATE|G=1 • This can be done by estimating and/or testing a treatment by genotype interaction in a suitably- powered RCT. – (e.g.see the GENPOD trial: Lewis et al. BJPsych, Vol 198, pp 464-471, 2011).
  • 56. This is not rocket science …. but what do geneticists usually do? Methodology Research Group • Investigators have a cohort of treated individuals. • They have a measure of treatment outcome, say, BDI(T), or treatment response, BDI(T)-BDI(0), on all individuals within the cohort. Often, they label people as „responders‟ or „non-responders‟. • They investigate associations between treatment outcome and genotypic markers (G).
  • 57. A treatment outcome is not a treatment effect Methodology Research Group • BDI(T) is not BDI(T)-BDI(C)! • Let the treatment effect be Δ. • Then treatment outcome, BDI(T), is equal to BDI(C) + Δ (to note the obvious!).
  • 58. Confounding of treatment- effects with prognosis Methodology Research Group • The genotype (G) may be associated with both the treatment effect (Δ) and with treatment-free outcome, BDI(C), i.e. prognosis. • Associating G with treatment outcome, BDI(T), cannot distinguish between the two. • Most importantly, it may be possible for treatment outcome to be associated with G even when there is no effect of treatment for anyone in the treated cohort!
  • 59. … and evaluating the so-called treatment-response doesn‟t help! Methodology Research Group • Δ = BDI(T)-BDI(C) • Treatment response = BDT(T) – BDI(0) = Δ + BDI(C) – BDI(0) • Still confounded! – At best, these investigations are identifying candidates for further (more rigorous) investigation. – At worst, they are uncovering artefacts.
  • 60. Our approach to stratified medicine (personalised therapy) Methodology Research Group • Predicting outcome after treatment (responders vs. non-responders) is barely scratching the surface of stratified medicine. • Understanding the mechanism underlying the stratification is the key scientific question, and the methodological challenge.
  • 61. Our “manifesto” Methodology Research Group • Personalised (stratified) medicine and treatment-effect mechanisms evaluation are inextricably linked and stratification without a corresponding mechanisms evaluation lacks credibility; • In the almost certain presence of mediator-outcome confounding, mechanisms evaluation is dependent on stratification for its validity; • Both stratification and treatment-effect mediation can be evaluated using a marker stratified trial design together with detailed baseline measurement of all known prognostic markers and other prognostic covariates;
  • 62. Our methodological solution Methodology Research Group • Direct and indirect (mediated) effects should be estimated through the use of instrumental variable methods (the instrumental variable being the predictive marker by treatment interaction) together with adjustments for all known prognostic markers (confounders) – the latter adjustments contributing to increased precision (as in a conventional analysis of treatment effects) rather than bias reduction.
  • 63. A purely prognostic marker Methodology Research Group Randomised Outcome Treatment Prognostic Marker
  • 64. Prognostic Marker Methodology Research Group Treated Outcome Untreated Treatment effect Marker Level
  • 65. A prognostic marker as a confounder Methodology Research Group Randomised Putative Treatment Mediator U Prognostic Clinical Marker Outcome
  • 66. Instrumental variables Methodology Research Group • If the causal influence of the prognostic marker on the final outcome can be fully explained by its influence on the intermediate, then the marker can be used as an instrumental variable (or instrument, for short). • This is the theoretical rationale in the use of so- called „Mendelian Randomisation‟.
  • 67. An instrumental variable (IV) Methodology Research Group Random Treatment Allocation (IV) Received Outcome U
  • 68. A prognostic marker as an instrumental variable Methodology Research Group Randomised Putative Treatment Mediator U Prognostic Clinical Marker No direct link to outcome Outcome
  • 69. Predictive markers Methodology Research Group • Although they may have direct predictive effects on both intermediate and final outcomes, their essential characteristic is that they moderate (influence) treatment effects. • If the treatment-effect moderation on final outcome is wholly explained by the moderation of the effect of treatment on the intermediate outcome, then the latter (i.e. a treatment by marker interaction) can be used as an instrument. • A more subtle (and more realistic?) version of Mendelian Randomisation.
  • 70. Predictive marker (may also be prognostic) Methodology Research Group Randomised Outcome Treatment Moderating effect Predictive Marker (moderator)
  • 71. Predictive Marker Methodology Research Group Treated Outcome Untreated Treatment effect depends on marker Marker Level
  • 72. Putting it all together: potential joint roles of predictive and prognostic markers Methodology Research Group Intermediate Outcome U (Mediator) Predictive Marker B (moderator) Final A (Clinical) Randomised Outcome C Treatment Prognostic Marker U – unmeasured confounders (risk factor)
  • 73. Potential roles of prognostic markers: measured confounder or instrumental variable Methodology Research Group Intermediate Outcome U (Mediator) B Final A (Clinical) Randomised Outcome C Treatment Prognostic Marker U – unmeasured confounders (risk factor) Dotted line – pathway we might assume are absent Alternatively, we might assume that there are no longer any Us
  • 74. Option 1 – use prognostic marker(s) as a measured confounder(s) and then assume there is no hidden confounding (U) Methodology Research Group Intermediate Outcome (Mediator) B Final A (Clinical) Randomised Outcome C Treatment Prognostic Marker 1 Prognostic (confounder) Marker 2 (confounder)
  • 75. Option 2 – Use as prognostic marker as an instrumental variable (Mendelian Randomisation) Methodology Research Group Intermediate Outcome U (Mediator) B A Final (Clinical) Randomised Outcome C Treatment Prognostic Marker U – unmeasured confounders (instrument) Using the prognostic marker as an instrumental variable
  • 76. Potential problems with Mendelian Randomisation Methodology Research Group • Assumption that there is no direct effect of the genetic marker on final outcome frequently difficult to justify, and practically impossible to verify. – Dependent on prior knowledge. • The marker is likely to be a rather weak instrument (i.e. it‟s influence on the intermediate outcome is not strong enough). – This can lead to problems (see Session 1.) • Probably wiser to use available prognostic markers as observed confounders.
  • 77. Potential role of predictive markers Methodology Research Group Intermediate Outcome U (Mediator) Predictive Marker B (moderator) Final A (Clinical) Randomised Outcome C Treatment U – unmeasured confounders Red dotted lines – pathways we might be justified in assuming are absent
  • 78. Stratification & mediational mechanisms evaluation Methodology Research Group Intermediate Outcome U (Mediator) Predictive Marker B (moderator) Final A (Clinical) Randomised Outcome C Treatment U – unmeasured confounders Using the treatment by marker interaction as an instrumental variable
  • 79. Is the treatment by predictive marker interaction a valid instrument? Methodology Research Group • Are we correct in assuming that there is no moderating effect on pathway B? • Are we correct in assuming that there is no moderating effect on pathway C? • Dependent on prior knowledge of the biology/biochemistry of the system.
  • 80. Theory-driven stratification Methodology Research Group • Prior scientific theory and preliminary evidence strongly suggests that a given predictive marker has its influence through a specific mechanism (the putative mediator). • No reason to expect that the moderating effect of the predictive marker works via a pathway not associated with the above mechanism (i.e. we assume that the treatment by marker interaction – moderation – is a valid instrument).
  • 81. Using strong theory and all available prognostic marker information Methodology Research Group Intermediate Outcome U (Mediator) Predictive Marker B (moderator) Final A (Clinical) Randomised Outcome C Treatment Prognostic Marker(s) as Confounder(s) U – unmeasured confounders Using the treatment by marker interaction as an instrumental variable
  • 82. Complicated but Viable!! Methodology Research Group • Statistical methods widely available to estimate the pathways of this model (we won‟t worry about the technical details). • Health Warning!! • This model is pretty complex and is dependent on a lot of assumptions. Are these assumptions – i.e. the theory - defensible? Invalid assumptions lead to invalid solutions.
  • 83. Real examples – We don‟t have any! Methodology Research Group • We know of no existing examples of the use of this design – we are presently writing it up for publication. • Examples from our mental health trials involve retrospective analyses of archived data. • Four funded EME trials are under way: – Ketamine ECT in depression (Ian Anderson et al.); – Minocycline and negative symptoms (Bill Deakin et al.); – Worry Intervention Trial (Freeman et al.); – DBT for depression (Lynch et al.); – but none fully utilise biomarker information as described here.
  • 84. A computer-simulated example Methodology Research Group • Trial with 1000 participants – (500 treated, 500 controls). – Quantitative outcome, y. • Binary predictive marker (x10): – Treatment effect on mediator (m) in its absence is 10 units; in its presence 60 units. – Moderating effect of x10 on outcome solely through the mediator (x10 known to be an IV). – Variants of x10 equally probable (50:50). • Nine prognostic uncorrelated binary markers x1-x9. – All nine are confounders. – Details of their creation of no consequence, here.
  • 85. The true model (mediator) Methodology Research Group Mediator (m): m=5*x1+5*x2+5*x3+5*x4+5*x5+5*x6+5 *x7+5*x8+5*x9+5*x10+10*treat+50*x 11+e12 Where x11 = treat*x10 (i.e. The treatment by marker interaction) e12 is a random „error‟ term “ * ” is a multiplication sign.
  • 86. The true models (outcome) Methodology Research Group Outcome (y): y=5*x1+5*x2+5*x3+5*x4+5*x5+5*x6+5 *x7+5*x8+5*x9+5*x10+2*m+10*treat +e13 e13 is a random „error‟ term (uncorrelated with e12). There is no x11 (interaction) in this model. THERE ARE NO UNMEASURED COMMON CAUSES (i.e. x1-x9, and x10, are all measured)
  • 87. Simple summaries Methodology Research Group --------------------------------------------------------------------------- -> treat = 0 Variable | Obs Mean Std. Dev. Min Max ------------+------------------------------------------------------------- m | 500 74.83 7.58 55.22 97.78 y | 500 174.47 22.27 116.21 247.78 -------------------------------------------------------------------------- -> treat = 1 Variable | Obs Mean Std. Dev. Min Max ------------+------------------------------------------------------------- m | 500 108.92 28.42 55.66 159.70 y | 500 252.91 61.10 124.59 372.49 Note lack of homogeneity of standard deviations across the groups. TREATMENT GROUP MUCH MORE VARIABLE (AS WE MIGHT EXPECT).
  • 88. Naïve analysis methods Methodology Research Group • I won‟t bother to describe these in detail (but see below). • In the psychological and social science literature they will be dominated by approaches similar to those advocated by Baron & Kenny (about 17000 citations!) • At the more hi-tech end of medicine they‟ve rarely got round to using the naive methods!
  • 89. Let‟s pretend we‟ve not measured x1-x9: Methodology Research Group i.e. there are indeed „unmeasured‟ common causes An instrumental variable regression in Stata: ivregress 2sls y treat x10 (m = x11), first This is a two-stage least-squares procedure which simultaneously estimates the effect of treatment on m (the first-stage regression), the effect of m on y, and direct effect of treatment on y (the second stage).
  • 90. The first-stage regressions Methodology Research Group ------------------------------------ m | Coef. Std. Err. ---------+-------------------------- treat | 10.07 0.63 x11 | 50.47 0.90 -------------------------------------
  • 91. The second-stage regressions Methodology Research Group ------------------------------------- y | Coef. Std. Err. ---------+--------------------------- m | 2.00 0.02 treat | 10.39 0.87 -------------------------------------
  • 92. Naïve methods: the 2nd-stage regression Methodology Research Group Use ordinary least-squares to regress y on x10, m and treat regress y m x10 treat ------------------------------------------ y | Coef. Std. Err -------------+---------------------------- m | 2.19 0.02 treat | 3.67 0.75 ------------------------------------------ DIRECT EFFECT OF TREATMENT SEVERERLY BIASED.
  • 93. Now use all available data Methodology Research Group ivregress 2sls y treat x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 (m = x11), first 1st stage: Coef. Std. Err. treat | 9.77 0.26 x11 | 50.73 0.37 2nd stage: m | 2.01 0.01 treat | 10.01 0.55 CONSIDERABLE GAIN IN PRECISION Measurement of prognostic markers not essential, but it makes the design more efficient (i.e. get away with a smaller trial) – perhaps the difference between a viable trial and one that‟s just not feasible.
  • 94. „Naïve‟ 2nd-stage regression using all data Methodology Research Group regress y x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 m treat ------------------------------------- y | Coef. Std. Err. -------------+----------------------- m | 2.00 0.01 treat | 10.05 0.54 If (but only if) we‟ve measured all confounders then this is valid and it is the most precise method. But ... we never know! Returning to IV: there‟s a balance between bias and precision. We don‟t get something for nothing.
  • 95. The Key Ingredients Methodology Research Group • Convincing psychological theory concerning the potential mechanism for mediation. • Convincing theory to underline the belief that the treatment by moderator (predictive marker) interaction is a valid instrument. • An appropriately powered trial for – Valid evaluation of treatment-effect moderation – on the mediator as well as on the outcome. – Valid use of instrumental variables estimation to evaluate the treatment-effect mechanisms (mediation).
  • 96. Design considerations Methodology Research Group • How big does the trial have to be? Considerably larger than a conventional pragmatic trial. • How strong does the moderating effect on the mediator have to be? – Our simulated example used a very strong moderating effect. – However, presumably it has to be reasonably strong to be of any serious interest. • What does the prevalence of the alleles for the predictive biomarker have to be? – We used 50:50 (maximum power). – More likely to be of the order 90:10.
  • 97. Conclusions Methodology Research Group • The scientific evaluation of stratified/personalised medicines/therapies is inseparable from mechanisms evaluation. • So far, progress in trial design for mechanisms evaluation appears to have been very limited. – Interestingly, much more progress for the „softer‟ treatments (psychotherapies) than for hi-tech medicines. • Good design involves using prior scientific knowledge/evidence and makes full use of data from both prognostic and predictive markers. • The required statistical methods are available and reasonably straight forward to use.