Object-Centric Reflection
Unifying Reflection and Bringing it Back to Objects

PhD Defense
Jorge Ressia
Advisor: Oscar Nierstrasz
Reflection
A reflective computational system is
capable of inspecting, manipulating and
altering its representation of itself.

                          Smith, 1982
Meta-level


Base-level
Structural
Reflection

Behavioral
Reflection
Reflection
 Today
Reflection
Requirements
Message


         Object
Message send
Message


          Object
Partial Reflection
Selective
Reifications
Unanticipated
  Changes
Runtime Integration
Meta-level
Composition
Scoped Reflection
Object-specific
  Reflection
Profiling
Profiling:
the activity of analyzing a program execution.
Domain-Specific Profiling

[Slide: excerpt from the Domain-Specific Profiling paper, with the Profile and Domain aspects highlighted: a MessageTally CPU-time profile of Mondrian and a coverage scenario based on PetitParser. The full excerpt is reproduced below.]
Mondrian
System Complexity, Lanza and Ducasse 2003

[Slide: zoom on the MessageTally profile output reproduced below.]
CPU time profiling

Which is the relationship?

Mondrian [9] is an open and agile visualization engine. Mondrian describes a visualization using a graph of (possibly nested) nodes and edges. In June 2010 a serious performance issue was raised¹. Tracking down the cause of the poor performance was not trivial. We first used a standard sample-based profiler.
Execution sampling approximates the time spent in an application’s methods by periodically stopping a program and recording the current set of methods under execution. Such a profiling technique is relatively accurate since it has little impact on the overall execution. This sampling technique is used by almost all mainstream profilers, such as JProfiler, YourKit, xprof [10], and hprof.
MessageTally, the standard sampling-based profiler in Pharo Smalltalk², textually describes the execution in terms of CPU consumption and invocation for each method of Mondrian:

54.8% {11501ms} MOCanvas>>drawOn:
 54.8% {11501ms} MORoot(MONode)>>displayOn:
  30.9% {6485ms} MONode>>displayOn:
    | 18.1% {3799ms} MOEdge>>displayOn:
       ...
    | 8.4% {1763ms} MOEdge>>displayOn:
    | | 8.0% {1679ms} MOStraightLineShape>>display:on:
    | | 2.6% {546ms} FormCanvas>>line:to:width:color:
      ...
  23.4% {4911ms} MOEdge>>displayOn:
      ...

We can observe that the virtual machine spent about 54% of its time in the method displayOn: defined in the class MORoot. A root is the unique non-nested node that contains all the nodes and edges of the visualization. This general profiling information says that rendering nodes and edges consumes a great share of the CPU time, but it does not help in pinpointing which nodes and edges are responsible for the time spent. Not all graphical elements equally consume resources.
Traditional execution sampling profilers center their result on the frames of the execution stack and completely ignore the identity of the object that received the method call and its arguments. As a consequence, it is hard to track down which objects cause the slowdown. For the example above, the traditional profiler says that we spent 30.9% in MONode>>displayOn: without saying which nodes were actually refreshed too often.

¹ http://forum.world.st/Mondrian-is-slow-next-step-tca2261116
² http://www.pharo-project.org/
Debugging
Debugging:
the process of interacting with a running software system to test and understand its current behavior.
Mondrian
System Complexity, Lanza and Ducasse 2003
Rendering
Shape and Nodes
How do we debug
     this?
Breakpoints
Conditional
Breakpoints
Developer Questions
When during the execution is this method called? (Q.13)
  Where are instances of this class created? (Q.14)
  Where is this variable or data structure being accessed?
  (Q.15)
  What are the values of the argument at runtime? (Q.19)
  What data is being modified in this code? (Q.20)
  How are these types or objects related? (Q.22)
  How can data be passed to (or accessed at) this point
in the code? (Q.28)
  What parts of this data structure are accessed in this
code? (Q.33)
Questions programmers ask during software evolution tasks. Sillito et al. 2008
Which is the relationship?



When during the execution is this method called? (Q.13)




Where are instances of this class created? (Q.14)
Where is this variable or data structure being accessed? (Q.15)
What are the values of the argument at runtime? (Q.19)
What data is being modified in this code? (Q.20)
How are these types or objects related? (Q.22)
How can data be passed to (or accessed at) this point in the code? (Q.28)
What parts of this data structure are accessed in this code? (Q.33)
What is the
problem?
Traditional
Reflection
Profiling

[Slide: the profiling excerpt again; the MessageTally profile cannot say which objects cause the slowdown.]

Coverage
Debugging


When during the execution is this method called? (Q.13)




Where are instances of this class created? (Q.14)
Where is this variable or data structure being accessed? (Q.15)
What are the values of the argument at runtime? (Q.19)
What data is being modified in this code? (Q.20)
How are these types or objects related? (Q.22)
How can data be passed to (or accessed at) this point in the code? (Q.28)
What parts of this data structure are accessed in this code? (Q.33)
Object Paradox
Reflection
Requirements

Unified
Uniform
Approach
Thesis:

To overcome the object paradox while providing a
unified and uniform solution to the key reflection
requirements, we need an object-centric reflective
system which targets specific objects as the central
reflection mechanism through explicit meta-objects.
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Object-Centric
 Reflection
J. Ressia, L. Renggli, T. Gîrba and O. Nierstrasz. Models@run.time 2010
Organize the
 Meta-level
Explicit
Meta-objects
Object-Centric
Uniform
Solution
Class

          Meta-object




 Object
Class

            Meta-object




Adapted Object
Runtime
AST adaptation
Source Code -> Syntactic Analysis -> Semantic Analysis -> AST Manipulation -> Code Generation -> Executable Code
Class

            Meta-object




Adapted Object
Structural
Reflection
Behavioral
Reflection
Composition
Meta-object
              Meta-object
Meta-object
              Meta-object




Composed
Meta-object
Any adaptation goes
through a meta-object
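A minimal sketch of what this looks like in code, under assumed names (BehavioralMetaObject, MessageReceived, when:do: and bindTo: are illustrative, not the exact API of the thesis framework): a meta-object is created, configured, and bound to one specific object, leaving the class and every other instance untouched.

"Illustrative sketch only: adapt a single object through an explicit meta-object."
metaObject := BehavioralMetaObject new.
metaObject
    when: MessageReceived
    do: [ :object :message | Transcript show: message selector; cr ].
metaObject bindTo: aNode.
"Only aNode is adapted; other instances of its class behave as before."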
No implicit
Interactions
Reflection
Requirements
Partial Reflection
Selective
Reifications
Unanticipated
  Changes
Runtime Integration
Meta-level
Composition
Scoped Reflection
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Development
Reuse
Mixins
Linear Composition
Single Composition
Traits
Class = Superclass + State + Traits +
           Glue methods
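For comparison, a flattened trait composition in standard Pharo looks roughly like this (the class and trait names are invented for illustration); the composition is fixed at class-definition time and applies to all instances.

"Illustrative only: a class composed from two hypothetical traits."
Object subclass: #MyReadWriteStream
    uses: TMyReadable + TMyWritable
    instanceVariableNames: 'collection position'
    classVariableNames: ''
    category: 'Example-Streams'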
Multiple
Composition
Flatten Composition
Problem
Streams
read
write
both
binary
character-based
memory
 socket
database
   file
Stream




             PositionableStream       NullStream



        ReadStream      WriteStream



  ReadWriteStream       TextStream         LimitedWriteStream




RWBinaryOrTextStream
Explosion of classes
Dynamic
Composition
Talents
J. Ressia, T. Gîrba, O. Nierstrasz, F. Perin and L. Renggli. IWST 2011, SPE 2012
Dynamically
composable units of
     reuse
Meta-object
Special Composition
Operations
Define a method
readStreamTalent
      defineMethodNamed: #isReadStream
      performing: '^ true'.
Exclude a method
readStreamTalent
      excludeMethodNamed: #isReadStream
Replace a method
readStreamTalent
      replaceMethodNamed: #isReadStream
      performing: '^ true'.
Read and Buffered
     Stream
Composition
 Operators
Composition
aStream := Stream new.
aStream acquire:
         ( readStreamTalent , bufferedStreamTalent ).
Alias
aStream := Stream new.
aStream acquire: readStreamTalent ,
( bufferedStreamTalent
     @ {#isReadStream -> #isReadBufferedStream})
Exclusion
aStream := Stream new.
aStream acquire: readStreamTalent ,
       (bufferedStreamTalent - #isReadStream).
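Putting the pieces together, a usage sketch (the results in the comments follow the Talents model described above; the selector names come from the slides):

"Usage sketch: the adaptation is object-specific."
aStream := Stream new.
aStream acquire: readStreamTalent , bufferedStreamTalent.
aStream isReadStream.                    "true, for this object only"
Stream new respondsTo: #isReadStream.    "false: the class is untouched"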
Talents
J. Ressia, T. Gîrba, O. Nierstrasz, F. Perin and L. Renggli. IWST 2011, SPE 2012
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Behavioral
Reflection
Operational
Decomposition
Running System = Sequence of Events
Object


Object


Object


Object
Object
         Debugger

Object
         Feature
         Analysis
Object

         Profiler
Object
Chameleon
Object
         Debugger

Object
         Feature
         Analysis
Object

         Profiler
Object
Object


Object


Object


Object
Meta-object

              Object
Meta-object
              Object
                       Meta-object
              Object
Definition of new
    events
Adaptation on top of
   these events
EventInstrumentor
        reify: PurchaseEvent
        onClass: Cart
        selector: #purchase
ChameleonAnnouncer
        subscribe: aDiscountChecker
        to: PurchaseEvent
EventInstrumentor
        reify: PointsPurchaseEvent
        onObject: aCart
        selector: #purchase
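What a subscriber does with a reified event is plain application code; a sketch (DiscountChecker and the event accessors are assumptions for illustration, not Chameleon's documented protocol):

"Sketch of a subscriber reacting to the reified purchase events."
DiscountChecker >> handle: aPurchaseEvent
    "The event carries the cart on which #purchase was sent."
    aPurchaseEvent receiver total > 100
        ifTrue: [ aPurchaseEvent receiver applyDiscount: 10 ]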
No source code
 modification
Meta-object

              Object
Meta-object
              Object
                       Meta-object
              Object
Dynamically defined
 events interface
Reification of the
   adaptation
Chameleon
Rethink tools
Applications
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Which is the relationship?



When during the execution is this method called? (Q.13)




Where are instances of this class created? (Q.14)
Where is this variable or data structure being accessed? (Q.15)
What are the values of the argument at runtime? (Q.19)
What data is being modified in this code? (Q.20)
How are these types or objects related? (Q.22)
How can data be passed to (or accessed at) this point in the code? (Q.28)
What parts of this data structure are accessed in this code? (Q.33)
Object-Centric
 Debugging
J. Ressia, A. Bergel and O. Nierstrasz. ICSE 2012
What does it
  mean?
intercepting
access to object-
 specific runtime
       state
monitoring
object-specific
 interactions
supporting live
  interaction
Mondrian
Shape and Nodes
halt on object in
       call
InstructionStream
pc
How do we debug
     this?
Debugging through operations

18 step-in operations for the first modification
30 operations for the next modification

Setting breakpoints

31 methods accessing the variable
9 assignments
the pc setter is used by 5 intensively used classes
stack-centric debugging
step into, step over, resume:
    InstructionStream class>>on:
    InstructionStream class>>new
    InstructionStream>>initialize
    CompiledMethod>>initialPC
    InstructionStream>>method:pc:
    InstructionStream>>nextInstruction
    MessageCatcher class>>new
    InstructionStream>>interpretNextInstructionFor:
    ...

object-centric debugging
centered on the InstructionStream class (next message, next change):
    on:
    new
centered on the InstructionStream object (next message, next change):
    initialize
    method:pc:
    nextInstruction
    interpretNextInstructionFor:
    ...
Halt on next message
Halt on next message/s named
Halt on state change
Halt on state change named
Halt on next inherited message
Halt on next overloaded message
Halt on object/s in call
Halt on next message from
package
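As a sketch, installing such operations on a single Mondrian node (the selectors, and the view and suspectModel variables, follow the slide wording and are assumptions about the debugger's API):

"Sketch: object-specific breakpoints on one node, not on its class."
problematicNode := view root allNodes
    detect: [ :each | each model = suspectModel ].
problematicNode haltOnNextMessageNamed: #displayOn:.
problematicNode haltOnStateChange.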
Object-Centric
 Debugging
J. Ressia, A. Bergel and O. Nierstrasz. ICSE 2012
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Domain-Specific Profiling

CPU time profiling: which is the relationship?

[Slide: the Mondrian paper excerpt shown earlier. The MessageTally profile attributes 54.8% to MORoot(MONode)>>displayOn: and 30.9% to MONode>>displayOn:, but it cannot say which nodes were actually refreshed too often, because traditional profilers ignore the identity of the receiver and its arguments.]

Coverage
PetitParser is a parsing framework combining ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers; it models grammars and parsers as objects that can be reconfigured dynamically.
Objects
Source Code <-> Traditional Profilers

Objects     <-> Profilers
Source Code <-> Traditional Profilers
MetaSpy

Ressia et al. JOT 2012
Instrumenter   Profiler
Domain    Profilers
 Domain
 Object


 Domain
 Object


 Domain
 Object
Domain          Profilers
    Domain
    Object


    Domain
    Object


    Domain
    Object




Instrumentation
Specify the Domain interests
Capture the runtime information
Present the results
Mondrian Profiler
MondrianProfiler >> setUp
    self model root allNodes do: [ :node |
        self
            observeObject: node
            selector: #displayOn:
            do: [ ... counting ... ] ]
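A usage sketch for such a domain-specific profiler (the model: and profile: entry points and the aMondrianView variable are assumptions; the actual MetaSpy protocol may differ):

"Usage sketch: run the rendering under the domain-specific profiler,
 then inspect per-node counts instead of a stack-based report."
profiler := MondrianProfiler new.
profiler model: aMondrianView.
profiler profile: [ aMondrianView open ].
profiler inspect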
MetaSpy

Ressia et al. JOT 2012
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Live
Feature Analysis
Denker et al. Models 2010
Software Feature:
A distinguishing characteristic of a software item.

IEEE 829
Traces

[Slide: execution traces visualized as nested brace structures]
Paint AST nodes
What if we do not
know what to evolve?
?
Execution
Reification
Reification of
  Context
Reuse
Propagation
Dynamically
 install and
  uninstall
Thread Locality
Live
Feature Analysis
loginExecution := Execution new.
loginExecution
    when: ASTNodeExecutionEvent
    do: [ :node | node addFeatureAnnotation: #login ].
loginExecution
    executeOn: [ WebServer new
        loginAs: 'admin' password: 'pass' ]

printingExecution := Execution new.
printingExecution
    when: ASTNodeExecutionEvent
    do: [ :node | node addFeatureAnnotation: #printing ].
printingExecution
    executeOn: [ WebServer new printStatusReport ]
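Once both executions have run, the feature annotations live on the AST nodes themselves and can be queried directly; a sketch (renderPage:, featureAnnotations, and the ast / allChildren enumeration are assumed selectors):

"Sketch: list the AST nodes exercised by both the login and the printing feature."
(WebServer >> #renderPage:) ast allChildren
    select: [ :node |
        (node featureAnnotations includes: #login)
            and: [ node featureAnnotations includes: #printing ] ]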
Scoped
Reflection
Back in time
 Debugger
Object Flow Debugger. Lienhard et al. ECOOP 2008
person := Person new   t1
...
name := 'Doe'          t2
...
name := 'Smith'        t3

[Figure: alias history of the name field of the :Person instance: field-write@t3 (value: 'Smith'), whose predecessor is field-write@t2 (value: 'Doe'), whose predecessor is init@t1 (value: null)]
Adapt reached
   objects
Controlled
 Impact
Thesis:

To overcome the object paradox while providing a
unified and uniform solution to the key reflection
requirements, we need an object-centric reflective
system which targets specific objects as the central
reflection mechanism through explicit meta-objects.
Object Paradox

Reflection
Requirements

Unified
Uniform
Approach
Object-Centric
Applications: Debugging, MetaSpy, Talents, Chameleon
Host Language
Performance
Live
Feature Analysis
Denker et al. Models 2010
179 classes
1292 methods
adapted all AST nodes

PRCommand >> context: aContext
    context := aContext

20% slower
35 times slower
Implementation
AST adaptation
Source Code -> Syntactic Analysis -> Semantic Analysis -> AST Manipulation -> Code Generation -> Executable Code
Class --methodDict--> MethodDictionary (1..*) --> CompiledMethod / ReflectiveMethod

[Figure: message lookup for 'aPoint isPoint'. The lookup in Point's method dictionary (isPoint: aMetaObjectReflectiveMethod, isZero: aCompiledMethod) forwards run: #isPoint with: #() in: aPoint to aMetaObject, which resolves the selector in aScopedMethodDictionary (isPoint: aReflectiveMethod) and runs the resulting method. Key: instance-of, message send, lookup.]
Object Discovery
Feasibility
Dynamic Aspects
PBI
Philippe Moret, Walter Binder and Eric Tanter. AOSD 2011
AOP
Dynamic Aspects
Per-object Aspects
Rajan and Sullivan. SIGSOFT Softw. Eng. Notes 2003
CaesarJ
Aracic et al. Transactions on AOSD 06
AspectScheme
Dutchyn, Tucker, and Krishnamurthi. SCP 2006
AspectS
Hirschfeld and Costanza. ODAL 2006
ALIA4J
Bockisch, Sewe, Mezini, and Akşit. TOOLS 2011
ALIA4J AOP Debugger
Yin, Bockisch, and Akşit. AOSD 2012
Related Work
Deployment Strategies
Eric Tanter. AOSD 2008
Scoping Strategies
Eric Tanter. DLS 2009
dynamic extent
propagation function
activation condition
adaptations
Future Work
Rethink the
debugger UI
Suicide Objects
Barbara Conradt, Programmed cell death
Nobel 2002. Bob Horvitz, Sydney Brenner and John E. Sulston
Automatic      Manual


Temporary     Permanent


Referenced   Unreferenced


  Strong        Weak
Cache
Strings
Dynamic Scoping
Undeployment
 Conditions
Let objects decide
More Reflective
 Applications
Talents
Scoped Talents
Dangerous?
I believe the argument that reflection is dangerous because it gives
programmers too much power is a specious one. Existing programming
languages already give clumsy programmers more than ample
opportunities to shoot themselves in the foot. I'll concede that reflective
systems allow the clumsy programmer to fire several rounds per second
into his foot, without reloading. Still, I'm confident that, as is the case
with features such as pointers, competent programmers will make
appropriate use of the power of reflection with care and skill. Potential
abuse by inept programmers should not justify depriving the
programming community of a potentially vital set of tools.

                                            Brian Foote

Mais conteúdo relacionado

Semelhante a Object-Centric Reflection - Thesis Defense

Interprocedural Constant Propagation
Interprocedural Constant PropagationInterprocedural Constant Propagation
Interprocedural Constant Propagation
james marioki
 
Directive-based approach to Heterogeneous Computing
Directive-based approach to Heterogeneous ComputingDirective-based approach to Heterogeneous Computing
Directive-based approach to Heterogeneous Computing
Ruymán Reyes
 
Containerizing HPC and AI applications using E4S and Performance Monitor tool
Containerizing HPC and AI applications using E4S and Performance Monitor toolContainerizing HPC and AI applications using E4S and Performance Monitor tool
Containerizing HPC and AI applications using E4S and Performance Monitor tool
Ganesan Narayanasamy
 
Deep Convolutional Neural Network acceleration on the Intel Xeon Phi
Deep Convolutional Neural Network acceleration on the Intel Xeon PhiDeep Convolutional Neural Network acceleration on the Intel Xeon Phi
Deep Convolutional Neural Network acceleration on the Intel Xeon Phi
Gaurav Raina
 
Deep Convolutional Network evaluation on the Intel Xeon Phi
Deep Convolutional Network evaluation on the Intel Xeon PhiDeep Convolutional Network evaluation on the Intel Xeon Phi
Deep Convolutional Network evaluation on the Intel Xeon Phi
Gaurav Raina
 
Thesis Report - Gaurav Raina MSc ES - v2
Thesis Report - Gaurav Raina MSc ES - v2Thesis Report - Gaurav Raina MSc ES - v2
Thesis Report - Gaurav Raina MSc ES - v2
Gaurav Raina
 

Semelhante a Object-Centric Reflection - Thesis Defense (20)

Optimizing apps for better performance extended
Optimizing apps for better performance extended Optimizing apps for better performance extended
Optimizing apps for better performance extended
 
Interprocedural Constant Propagation
Interprocedural Constant PropagationInterprocedural Constant Propagation
Interprocedural Constant Propagation
 
Crash course on data streaming (with examples using Apache Flink)
Crash course on data streaming (with examples using Apache Flink)Crash course on data streaming (with examples using Apache Flink)
Crash course on data streaming (with examples using Apache Flink)
 
Hardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmpHardback solution to accelerate multimedia computation through mgp in cmp
Hardback solution to accelerate multimedia computation through mgp in cmp
 
markomanolis_phd_defense
markomanolis_phd_defensemarkomanolis_phd_defense
markomanolis_phd_defense
 
Mid1 Revision
Mid1  RevisionMid1  Revision
Mid1 Revision
 
Directive-based approach to Heterogeneous Computing
Directive-based approach to Heterogeneous ComputingDirective-based approach to Heterogeneous Computing
Directive-based approach to Heterogeneous Computing
 
2011 ecoop
2011 ecoop2011 ecoop
2011 ecoop
 
Containerizing HPC and AI applications using E4S and Performance Monitor tool
Containerizing HPC and AI applications using E4S and Performance Monitor toolContainerizing HPC and AI applications using E4S and Performance Monitor tool
Containerizing HPC and AI applications using E4S and Performance Monitor tool
 
Deep Convolutional Neural Network acceleration on the Intel Xeon Phi
Deep Convolutional Neural Network acceleration on the Intel Xeon PhiDeep Convolutional Neural Network acceleration on the Intel Xeon Phi
Deep Convolutional Neural Network acceleration on the Intel Xeon Phi
 
Deep Convolutional Network evaluation on the Intel Xeon Phi
Deep Convolutional Network evaluation on the Intel Xeon PhiDeep Convolutional Network evaluation on the Intel Xeon Phi
Deep Convolutional Network evaluation on the Intel Xeon Phi
 
Thesis Report - Gaurav Raina MSc ES - v2
Thesis Report - Gaurav Raina MSc ES - v2Thesis Report - Gaurav Raina MSc ES - v2
Thesis Report - Gaurav Raina MSc ES - v2
 
CSMR10a.ppt
CSMR10a.pptCSMR10a.ppt
CSMR10a.ppt
 
Presentation slides for "A formal foundation for trace-based JIT compilation"
Presentation slides for "A formal foundation for trace-based JIT compilation"Presentation slides for "A formal foundation for trace-based JIT compilation"
Presentation slides for "A formal foundation for trace-based JIT compilation"
 
Parallelization of Coupled Cluster Code with OpenMP
Parallelization of Coupled Cluster Code with OpenMPParallelization of Coupled Cluster Code with OpenMP
Parallelization of Coupled Cluster Code with OpenMP
 
Cloud Module 3 .pptx
Cloud Module 3 .pptxCloud Module 3 .pptx
Cloud Module 3 .pptx
 
Performance_Programming
Performance_ProgrammingPerformance_Programming
Performance_Programming
 
Thesis_Report
Thesis_ReportThesis_Report
Thesis_Report
 
Benchmarking and PHPBench
Benchmarking and PHPBenchBenchmarking and PHPBench
Benchmarking and PHPBench
 
A study of Machine Learning approach for Predictive Maintenance in Industry 4.0
A study of Machine Learning approach for Predictive Maintenance in Industry 4.0A study of Machine Learning approach for Predictive Maintenance in Industry 4.0
A study of Machine Learning approach for Predictive Maintenance in Industry 4.0
 

Mais de Jorge Ressia (8)

Object-Centric Debugging
Object-Centric DebuggingObject-Centric Debugging
Object-Centric Debugging
 
Talents Presentation at ESUG 2011
Talents Presentation at ESUG 2011Talents Presentation at ESUG 2011
Talents Presentation at ESUG 2011
 
Subjectopia tools2011
Subjectopia tools2011Subjectopia tools2011
Subjectopia tools2011
 
Advanced OO Design
Advanced OO DesignAdvanced OO Design
Advanced OO Design
 
Opal compiler
Opal compilerOpal compiler
Opal compiler
 
Live featureanalysis
Live featureanalysisLive featureanalysis
Live featureanalysis
 
Runtime evolution
Runtime evolutionRuntime evolution
Runtime evolution
 
05 Problem Detection
05 Problem Detection05 Problem Detection
05 Problem Detection
 

Último

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Último (20)

TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 

Object-Centric Reflection - Thesis Defense

  • 1. Object-Centric Reflection Unifying Reflection and Bringing it Back to Objects PhD Defense Jorge Ressia Advisor Oscar Nierstrasz
  • 3. A reflective computational system is capable of inspecting, manipulating and altering its representation of itself. Smith, 1982
  • 8. Mesage Object
  • 11.
  • 13.
  • 15.
  • 18.
  • 20.
  • 22.
  • 24. Profiling: Is the activity of analyzing a program execution.
  • 25. [Excerpt from the Domain-Specific Profiling paper, labeled "Profile": CPU time profiling of Mondrian, where MessageTally attributes 54.8% of the time to MORoot(MONode)>>displayOn:, 30.9% to MONode>>displayOn: and 23.4% to MOEdge>>displayOn:; the excerpt continues with test coverage of PetitParser.]
  • 26. [The same paper excerpt, labeled "Domain": a stack-centric sampling profile cannot pinpoint which nodes and edges are responsible for the time spent, since it ignores the identity of the receiver; the excerpt continues with coverage of PetitParser.]
  • 28.
  • 29. System Complexity (Lanza and Ducasse, 2003)
  • 30. [Zoomed MessageTally output from the paper excerpt:]
    54.8% {11501ms} MOCanvas>>drawOn:
      54.8% {11501ms} MORoot(MONode)>>displayOn:
        30.9% {6485ms} MONode>>displayOn:
        | 18.1% {3799ms} MOEdge>>displayOn:
          ...
        | 8.4% {1763ms} MOEdge>>displayOn:
        | | 8.0% {1679ms} MOStraightLineShape>>display:on:
        | | 2.6% {546ms} FormCanvas>>line:to:width:color:
          ...
        23.4% {4911ms} MOEdge>>displayOn:
          ...
    The virtual machine spent about 54% of its time in displayOn: of MORoot, the unique non-nested node containing all nodes and edges of the visualization. Rendering consumes a great share of the CPU time, but the profile does not pinpoint which nodes and edges are responsible.
  • 31. [The CPU time profiling excerpt overlaid with the question "Which is the relationship?" and a question mark: MessageTally reports 54.8% of the time in MORoot(MONode)>>displayOn:, but not which graphical elements are responsible.]
  • 33. Debugging: Is the process of interacting with a running software system to test and understand its current behavior.
  • 35.
  • 36. System Complexity (Lanza and Ducasse, 2003)
  • 39.
  • 40.
  • 41. How do we debug this?
  • 44. { { { { } } } } { }
  • 45. { { { { } } } } { }
  • 47. When during the execution is this method called? (Q.13) Where are instances of this class created? (Q.14) Where is this variable or data structure being accessed? (Q.15) What are the values of the argument at runtime? (Q.19) What data is being modified in this code? (Q.20) How are these types or objects related? (Q.22) How can data be passed to (or accessed at) this point in the code? (Q.28) What parts of this data structure are accessed in this code? (Q.33)
  • 48. When during the execution is this method called? (Q.13) Where are instances of this class created? (Q.14) Where is this variable or data structure being accessed? (Q.15) What are the values of the argument at runtime? (Q.19) What data is being modified in this code? (Q.20) How are these types or objects related? (Q.22) How can data be passed to (or accessed at) this point in the code? (Q.28) What parts of this data structure are accessed in this code? (Q.33) (Sillito et al., Questions programmers ask during software evolution tasks, 2008)
  • 49. Which is the relationship? When during the execution is this method called? (Q.13) ? Where are instances of this class created? (Q.14) Where is this variable or data structure being accessed? (Q.15) What are the values of the argument at runtime? (Q.19) What data is being modified in this code? (Q.20) How are these types or objects related? (Q.22) How can data be passed to (or accessed at) this point in the code? (Q.28) What parts of this data structure are accessed in this code? (Q.33)
  • 52. [The paper excerpt again, labeled "Profiling" and overlaid with a question mark: the traditional profiler reports 30.9% in MONode>>displayOn: without saying which nodes were actually refreshed too often.]
  • 53. Debugging When during the execution is this method called? (Q.13) ? Where are instances of this class created? (Q.14) Where is this variable or data structure being accessed? (Q.15) What are the values of the argument at runtime? (Q.19) What data is being modified in this code? (Q.20) How are these types or objects related? (Q.22) How can data be passed to (or accessed at) this point in the code? (Q.28) What parts of this data structure are accessed in this code? (Q.33)
  • 57. Object Paradox Reflection Requirements
  • 58. Object Paradox Reflection Requirements
  • 59. Object Paradox Unified Reflection Uniform Requirements Approach
  • 60. Object Paradox Unified Reflection Uniform Requirements Approach
  • 61. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 62. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 63. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 64. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 65. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 66. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 56
  • 67. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 57
  • 68. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 58
  • 69. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 59
  • 70. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 60
  • 71. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 61
  • 72. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 62
  • 73. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 63
  • 74. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 64
  • 76. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 66
  • 77. J. Ressia, L. Renggli, T. Gîrba and O. Nierstrasz. Models@run.time 2010
  • 82. Class Meta-object Object
  • 83. Class Meta-object Object
  • 84. Class Meta-object Adapted Object
  • 87. [Compilation pipeline: Source Code → Syntactic Analysis → Semantic Analysis → AST → Code Manipulation → Code Generation → Executable Code]
  • 88. Class Meta-object Adapted Object
  • 92. Meta-object Meta-object
  • 93. Meta-object Meta-object Composed Meta-object
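As a rough illustration of the composition idea on the two slides above, the sketch below attaches a profiling meta-object and a debugging meta-object to the same object and composes them. All class names and selectors here (ProfilingMetaObject, DebuggingMetaObject, #composedWith:, #bindTo:) are hypothetical illustrations, not the actual Bifröst API.

    "Hypothetical sketch only: composing two meta-objects for one specific object."
    | profiling debugging composed aCart |
    aCart := Cart new.
    profiling := ProfilingMetaObject new.
    debugging := DebuggingMetaObject new.
    composed := profiling composedWith: debugging.   "invented composition selector"
    composed bindTo: aCart.                          "invented binding selector"
    "Only aCart is adapted; other Cart instances keep their default behavior."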
  • 103. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language
  • 104. Development 94
  • 105. Reuse
  • 106. Mixins
  • 107.
  • 109.
  • 111.
  • 112. Traits
  • 113. Class = Superclass + State + Traits + Glue methods
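To make the equation above concrete, here is a minimal Pharo sketch of a class built from a trait; the trait and class names (TReadable, MyStream) are invented for illustration, and the exact class-definition keywords may vary slightly between Pharo versions.

    "A trait providing behavior."
    Trait named: #TReadable
        uses: {}
        category: 'Example-Traits'.

    TReadable compile: 'isReadable
        ^ true'.

    "A class composed from a superclass, its own state, the trait, and glue methods."
    Object subclass: #MyStream
        uses: TReadable
        instanceVariableNames: 'collection position'
        classVariableNames: ''
        category: 'Example-Streams'.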
  • 121. Stream PositionableStream NullStream ReadStream WriteStream ReadWriteStream TextStream LimitedWriteStream RWBinaryOrTextStream
  • 124. Talents. J. Ressia, T. Gîrba, O. Nierstrasz, F. Perin and L. Renggli. IWST 2011, SPE 2012
  • 130. readStreamTalent defineMethodNamed: #isReadStream performing: '^ true'.
  • 132. readStreamTalent excludeMethodNamed: #isReadStream
  • 134. readStreamTalent replaceMethodNamed: #isReadStream performing: '^ true'.
  • 135. Read and Buffered Stream
  • 138. aStream := Stream new. aStream acquire: ( readStreamTalent , bufferedStreamTalent ).
  • 139. Alias
  • 140. aStream := Stream new. aStream acquire: readStreamTalent , ( bufferedStreamTalent @ {#isReadStream -> #isReadBufferedStream})
  • 142. aStream := Stream new. aStream acquire: readStreamTalent , (bufferedStreamTalent - #isReadStream).
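Pulling the previous slides together, here is a hedged end-to-end sketch of how a talent might be built and acquired by a single stream object. Creating a talent with Talent new and the #isBuffered method are assumptions; everything else reuses only the messages shown on the slides.

    "Sketch only: 'Talent new' and #isBuffered are illustrative assumptions."
    | readStreamTalent bufferedStreamTalent aStream anotherStream |
    readStreamTalent := Talent new.
    readStreamTalent defineMethodNamed: #isReadStream performing: '^ true'.

    bufferedStreamTalent := Talent new.
    bufferedStreamTalent defineMethodNamed: #isBuffered performing: '^ true'.

    aStream := Stream new.
    aStream acquire: (readStreamTalent , bufferedStreamTalent).

    anotherStream := Stream new.
    "Only aStream gained the behavior; anotherStream is untouched."
    aStream isReadStream.                      "true"
    anotherStream respondsTo: #isReadStream.   "false"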
  • 144. Talents. J. Ressia, T. Gîrba, O. Nierstrasz, F. Perin and L. Renggli. IWST 2011, SPE 2012
  • 145. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 134
  • 148. Running System = Sequence of Events
  • 150. Object Debugger Object Feature Analysis Object Profiler Object
  • 151. Object Debugger Object Feature Analysis Object Profiler Object
  • 152. Object Debugger Object Feature Analysis Object Profiler Object
  • 154. Object Debugger Object Feature Analysis Object Profiler Object
  • 155. Object Debugger Object Feature Analysis Object Profiler Object
  • 158. Meta-object Object Meta-object Object Meta-object Object Meta-object Object
  • 159. Meta-object Object Meta-object Object Meta-object Object Meta-object Object
  • 161. Adaptation on top of these events
  • 162. EventInstrumentor reify: PurchaseEvent onClass: Cart selector: #purchase
  • 163. ChameleonAnnouncer subscribe: aDiscountChecker to: PurchaseEvent
  • 164. EventInstrumentor reify: PointsPurchaseEvent onObject: aCart selector: #purchase
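A small combined sketch of the object-specific case, using only the Chameleon messages shown on these slides; aCart's class, the PointsCalculator handler and the #purchase call are assumed for illustration.

    "Sketch: reify an event only for one cart object and subscribe a handler to it.
     PointsCalculator is a hypothetical domain object."
    | aCart aPointsCalculator |
    aCart := Cart new.
    aPointsCalculator := PointsCalculator new.

    EventInstrumentor
        reify: PointsPurchaseEvent
        onObject: aCart
        selector: #purchase.

    ChameleonAnnouncer
        subscribe: aPointsCalculator
        to: PointsPurchaseEvent.

    "Purchases on aCart now trigger PointsPurchaseEvent; other carts are unaffected."
    aCart purchase.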
  • 165. No source code modification
  • 166. Meta-object Object Meta-object Object Meta-object Object Meta-object Object
  • 168. Reification of the adaptation
  • 172. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 157
  • 173. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 158
  • 174. Which is the relationship? When during the execution is this method called? (Q.13) ? Where are instances of this class created? (Q.14) Where is this variable or data structure being accessed? (Q.15) What are the values of the argument at runtime? (Q.19) What data is being modified in this code? (Q.20) How are these types or objects related? (Q.22) How can data be passed to (or accessed at) this point in the code? (Q.28) What parts of this data structure are accessed in this code? (Q.33)
  • 175. { { { { } } } } { }
  • 176. { { { { } } } } { }
  • 177. Object-Centric Debugging. J. Ressia, A. Bergel and O. Nierstrasz. ICSE 2012
  • 178. { { { { } } } } { }
  • 179. { { { { } } } } { }
  • 180. { { { { } } } } { }
  • 181. What does it mean?
  • 182. intercepting access to object- specific runtime state
  • 184. supporting live interaction
  • 185.
  • 186.
  • 187.
  • 190.
  • 191.
  • 192. halt on object in call
  • 194. pc
  • 195. How do we debug this?
  • 196. Debugging through operations: 18 step-in operations to reach the first modification, 30 operations for the next modification
  • 197. Setting breakpoints: 31 methods access the variable, there are 9 assignments, and the pc setter is used by 5 intensively used classes
  • 198. Stack-centric debugging (centered on the InstructionStream class):
    InstructionStream class>>on:
    InstructionStream class>>new
    InstructionStream>>initialize
    step into, CompiledMethod>>initialPC
    step over, InstructionStream>>method:pc:
    resume
    InstructionStream>>nextInstruction
    MessageCatcher class>>new
    InstructionStream>>interpretNextInstructionFor:
    ...
    Object-centric debugging (centered on the InstructionStream object):
    on:
    new
    initialize
    method:pc:
    next message, nextInstruction
    next change
    next change
    next message, interpretNextInstructionFor:
    ...
  • 199. Halt on next message Halt on next message/s named Halt on state change Halt on state change named Halt on next inherited message Halt on next overloaded message Halt on object/s in call Halt on next message from package
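The operations listed above are debugger commands rather than code, but to convey the flavor, here is a purely hypothetical sketch of how such an object-centric breakpoint could be requested programmatically. The selector #haltOnNextMessage is invented for illustration and is not the tool's actual API; only InstructionStream class>>on: comes from the slides.

    "Hypothetical sketch only: #haltOnNextMessage is an invented selector."
    | anInstructionStream |
    anInstructionStream := InstructionStream on: (Object >> #printString).

    "Ask the object-centric debugger to halt the next time *this* object
     receives any message, regardless of which code sends it."
    anInstructionStream haltOnNextMessage.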
  • 200. Object-Centric Debugging. J. Ressia, A. Bergel and O. Nierstrasz. ICSE 2012
  • 201. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 183
  • 202. [The CPU time profiling excerpt again, labeled "Profile": MessageTally attributes 54.8% to MORoot(MONode)>>displayOn:, but a stack-centric profile cannot say which nodes were refreshed too often.]
  • 203. [First page of the Domain-Specific Profiling paper (CPU time profiling of Mondrian), overlaid with the question "Which is the relationship?" and a question mark over the MessageTally output.]
  • 204. [Diagram: Traditional Profilers target Source Code, not Objects.]
  • 205. [Diagram: Object Profilers target Objects; Traditional Profilers target Source Code.]
  • 206. MetaSpy. Ressia et al., JOT 2012
  • 207. Instrumenter Profiler
  • 208. Domain Profilers Domain Object Domain Object Domain Object
  • 209. Domain Profilers Domain Object Domain Object Domain Object Instrumentation
  • 210. Domain Profilers Domain Object Domain Object Domain Object Instrumentation
  • 211. Specify the Domain interests Capture the runtime information Present the results
  • 213.
  • 214. MondrianProfiler>>setUp
        self model root allNodes do: [ :node |
            self
                observeObject: node
                selector: #displayOn:
                do: [ ... counting ... ] ]
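Since the slide elides the counting block, here is a hedged completion showing one way the per-node counting could look. The counter dictionary and its bookkeeping are assumptions; only observeObject:selector:do: is taken from the slide.

    "Sketch of a possible counting block (the 'counter' Dictionary is an assumption)."
    MondrianProfiler>>setUp
        counter := Dictionary new.
        self model root allNodes do: [ :node |
            self
                observeObject: node
                selector: #displayOn:
                do: [ counter
                        at: node
                        put: (counter at: node ifAbsent: [ 0 ]) + 1 ] ]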
  • 215.
  • 216. MetaSpy. Ressia et al., JOT 2012
  • 217. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 197
  • 219. Live Feature Analysis. Denker et al., Models 2010
  • 220. Software Feature: A distinguishing characteristic of a software item. IEEE 829
  • 221. Traces { { { { { { } { { } } } } } } } } } { {
  • 222. Traces { { { { { { } { { } } } } } } } } } { {
  • 223. Traces { { { { { { } { { } } } } } } } } } { {
  • 224. { { { { } } } } } { { }
  • 226.
  • 227. What if we do not know what to evolve?
  • 228.
  • 229. ?
  • 230.
  • 231.
  • 232.
  • 233.
  • 234.
  • 235.
  • 237. Reification of Context
  • 238. Reuse
  • 243. loginExecution := Execution new. loginExecution when: ASTNodeExecutionEvent do: [ :node | node addFeatureAnnotation: #login ]
  • 244. loginExecution executeOn: [ WebServer new loginAs: 'admin' password: 'pass' ]
  • 245. printingExecution := Execution new. printingExecution when: ASTNodeExecutionEvent do: [ :node | node addFeatureAnnotation: #printing ]
  • 246. printingExecution executeOn: [ WebServer new printStatusReport ]
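Taken together, the two snippets above tag AST nodes with the feature that exercised them. A minimal combined sketch, reusing only the messages shown on the slides, with WebServer as the example class from the slides:

    "Sketch: run two feature-tagging executions over the same application."
    | loginExecution printingExecution |
    loginExecution := Execution new.
    loginExecution
        when: ASTNodeExecutionEvent
        do: [ :node | node addFeatureAnnotation: #login ].
    loginExecution executeOn: [ WebServer new loginAs: 'admin' password: 'pass' ].

    printingExecution := Execution new.
    printingExecution
        when: ASTNodeExecutionEvent
        do: [ :node | node addFeatureAnnotation: #printing ].
    printingExecution executeOn: [ WebServer new printStatusReport ].

    "AST nodes executed by both use cases now carry both #login and #printing annotations."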
  • 247.
  • 248.
  • 249.
  • 251. Back in time Debugger
  • 252. Back in time Debugger (Object Flow Debugger, Lienhard et al., ECOOP 2008)
  • 253. [Object history diagram for a :Person instance:
    person := Person new    (t1)
    ...
    name := 'Doe'           (t2)
    ...
    name := 'Smith'         (t3)
    The name field keeps a chain of records linked by predecessor: init@t1 (value: null) <- field-write@t2 (value: 'Doe') <- field-write@t3 (value: 'Smith').]
  • 254.
  • 255. Adapt reached objects
  • 257.
  • 258. Thesis: To overcome the object paradox while providing a unified and uniform solution to the key reflection requirements we need an object-centric reflective system which targets specific objects as the central reflection mechanism through explicit meta-objects.
  • 259. Object Paradox Unified Reflection Uniform Requirements Approach
  • 260. Object-Centric Applications MetaSpy Debugging Talents Chameleon Host Language 237
  • 263. Live Feature Analysis. Denker et al., Models 2010
  • 267. PRCommand >> context: aContext
        context := aContext
  • 271.
  • 273. [Compilation pipeline: Source Code → Syntactic Analysis → Semantic Analysis → AST → Code Manipulation → Code Generation → Executable Code]
  • 274. [Class diagram: a Class holds a methodDict (MethodDictionary) containing 1..* CompiledMethods; ReflectiveMethod appears alongside CompiledMethod.]
  • 275. [Message dispatch diagram (key: instance-of, message send, lookup): aPoint, an instance of Point, receives isPoint. The lookup goes through aMetaObject's aScopedMethodDictionary (isPoint: aReflectiveMethod) and Point's aMethodDictionary (isZero: aCompiledMethod, isPoint: aReflectiveMethod); instead of executing a compiled method directly, the reflective method reifies the invocation as run: #isPoint with: #() in: aPoint, dispatched to aMetaObject.]
  • 277.
  • 278.
  • 279.
  • 280.
  • 281.
  • 284. PBI
  • 285. PBI. Philippe Moret, Walter Binder and Éric Tanter. AOSD 2011
  • 286. AOP
  • 289. Per-object Aspects. Rajan and Sullivan. SIGSOFT Softw. Eng. Notes 2003
  • 291. CaesarJ. Aracic et al. Transactions on AOSD 2006
  • 293. AspectScheme. Dutchyn, Tucker and Krishnamurthi. SCP 2006
  • 295. AspectS. Hirschfeld and Costanza. 2006
  • 296. ALIA4J
  • 297. ALIA4J. Bockisch, Sewe, Mezini, and Akşit. TOOLS 2011
  • 299. ALIA4J AOP Debugger. Yin, Bockisch, and Akşit. AOSD 2012
  • 302. Deployment Strategies. Éric Tanter. AOSD 2008
  • 304. Scoping Strategies. Éric Tanter. DLS 2009
  • 310. Programmed cell death. Barbara Conradt; Bob Horvitz, Sydney Brenner and John E. Sulston, Nobel 2002
  • 311.
  • 312. Automatic / Manual, Temporary / Permanent, Referenced / Unreferenced, Strong / Weak
  • 313. Cache
  • 316.
  • 323. I believe the argument that reflection is dangerous because it gives programmers too much power is a specious one. Existing programming languages already give clumsy programmers more than ample opportunities to shoot themselves in the foot. I'll concede that reflective systems allow the clumsy programmer to fire several rounds per second into his foot, without reloading. Still, I'm confident that, as is the case with features such as pointers, competent programmers will make appropriate use of the power of reflection with care and skill. Potential abuse by inept programmers should not justify depriving the programming community of a potentially vital set of tools. Brian Foote

Editor's Notes

  1. Reflected on reflection. Thinking it inside out. Reinvented reflection.
  4. A reflective system can be divided into two levels: the base level, which is concerned with the application domain, and the meta-level, which encompasses the self-representation.
  5. A reflective system can be divided into two levels: the base level, which is concerned with the application domain, and the meta-level, which encompasses the self-representation. These levels are causally connected.
  7. Behavioral reflection is concerned with the manipulation of the abstractions which govern the execution of a program.
  11. There has been a lot of work in this domain. This is a summary of the requirements.
  12. There has been a lot of work in this domain. This is a summary of the requirements.
  21. Refers to a meta-environment that runs at the same level as the application code, i.e., not in the interpreter of the host language.
  31. Code profilers commonly employ execution sampling as the way to obtain dynamic run-time information.
  33. Mondrian is a framework for drawing graphs.
  36. What is the relationship between this and the domain? Picture again.
  39. Mondrian is a framework for drawing graphs.
  40. Width = the number of attributes of the class; height = the number of methods of the class; color = the number of lines of code of the class.
  42. Double dispatch.
  43. One of the nodes was not correctly rendered.
  47. We have to go back to the code.
  48. Which questions do these debuggers try to answer?
  49. Sillito.
  51. Which questions do these debuggers try to answer?
  55. Although object-oriented developers are supposed to think in terms of objects, the tools and environments they use mostly prevent this. Reflective systems prioritizing static mechanisms over object reflection present a gap between the user's needs and what the reflective system provides. Thus, the user is less efficient, since he has to introduce ad hoc changes to steer the reflective system to solve his object-specific needs.
  56. Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
  57. Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
  58. Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
  59. Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
  60. Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
  63. Unified: the approach should solve all the requirements. Uniform: the approach should do this in a unique way, not having special cases that need to be handled differently from the rest.
  66. In the remainder of this presentation I am going to show how to address these issues. The graphic displays the architecture of the Bifröst system; it also serves as an agenda for the remainder of this presentation. Explain what we are going to do in each part: (1) we will explain the core of object-centric reflection; (2) we will analyze how development itself is changed with this new approach; (3) we will analyze the impact of object-centric reflection on end-user applications: does it have a meaningful impact, and why should I care about it?
  78. We see that there are many different ways of doing reflection, adaptation, and instrumentation; many are low level, and the ones that are highly flexible cannot break free from the limitations of the language.
  79. Adaptation: a semantic abstraction for composing meta-level structure and behavior.
  80. The meta-objects can be applied and deapplied at any time.
  81. There is no special case, there is no side path; everything works by attaching meta-objects to objects.
  94. No change happens without going through them. No implicit interactions! This makes reflection explicit.
  96. There has been a lot of work in this domain. This is a summary of the requirements.
  100. Refers to a meta-environment that runs at the same level as the application code, i.e., not in the interpreter of the host language.
  108. Since My Rectangle is a composition, I cannot access the behavior in MColor or MBorder exclusively. Mixins have to be applied one at a time.
  110. Since My Rectangle is a composition, I cannot access the behavior in MColor or MBorder exclusively. Mixins have to be applied one at a time.
  112. Since My Rectangle is a composition, I cannot access the behavior in MColor or MBorder exclusively. Mixins have to be applied one at a time.
  113. Originally in Self, then developed by Nathanael Schärli in Smalltalk, with operations.
  114. Self traits do not provide composition operators.
  115. Originally in Self, then developed by Nathanael Schärli in Smalltalk, with operations.
  116. Originally in Self, then developed by Nathanael Schärli in Smalltalk, with operations.
  117. Object evolution.
  118. Streams are used to iterate over sequences of elements such as sequenced collections, files, and network streams. Streams offer a better way than collections to incrementally read and write a sequence of elements.
  121. The potential combination of all these various types of streams leads to an explosion in the number of classes.
  122. Pharo stream hierarchy.
  124. Traditional stream implementations check in every method whether the underlying stream is still open. With talents we can avoid such cumbersome checks and dynamically acquire a ClosedStreamTalent when a stream is closed.
  144. Structural changes.
  153. We instrument the interesting objects.
  154. We instrument the interesting objects with meta-objects which state under which circumstances the events have to be triggered.
  155. We instrument the interesting objects with meta-objects which state under which circumstances the events have to be triggered.
  156. We instrument the interesting objects with meta-objects which state under which circumstances the events have to be triggered.
  158. The next snippet of code describes their motivating example of a shopping cart discount. Every time a low-demand product is purchased, a discount is applied to the purchase value.
  162. JPI.
  163. JPI.
  164. We instrument the interesting objects with meta-objects which state under which circumstances the events have to be triggered.
  165. Dynamic JPI.
  166. The key characteristics are that we have a dynamically defined interface. JPIs have to be defined and the source code modified prior to execution; we can adapt dynamically, with no anticipation. The explicit meta-objects allow us to explicitly and dynamically query objects to know which events they can trigger.
  168. We now have new possibilities, so we can rethink existing tools.
  169. Does object-centric reflection have a meaningful impact on the applications I use every day? Does it change anything?
  173. We have to go back to the code.
  175. We have to go back to the code.
  176. We have to go back to the code.
  178. Questions 15, 19, 20, 28 and 33 all have to do with tracking state at runtime. Consider in particular question 15: Where is this variable or data structure being accessed? Let us assume that we want to know where an instance variable of an object is being modified. This is known as keeping track of side effects [3]. One approach is to use step-wise operations until we reach the modification. However, this can be time-consuming and unreliable. Another approach is to place breakpoints in all assignments related to the instance variable in question. Finding all these assignments might be troublesome depending on the size of the use case, as witnessed by our own experience. Tracking down the source of this side effect is highly challenging: 31 of the 38 methods defined on InstructionStream access the variable, comprising 12 assignments; the instance variable is written 9 times in InstructionStream's subclasses. In addition, the variable pc has an accessor that is referenced by 5 intensively-used classes.
  179. Question 22 poses further difficulties for the debugging approach: How are these types or objects related? In statically typed languages this question can be partially answered by finding all the references to a particular type in another type. Due to polymorphism, however, this may still yield many false positives. (An instance variable of type Object could be potentially bound to instances of any type we are interested in.) Only by examining the runtime behavior can we learn precisely which types are instantiated and bound to which variables. The debugging approach would, however, require heavy use of conditional breakpoints (to filter out types that are not of interest), and might again entail the setting of breakpoints in a large number of call sites.
  180. Back-in-time debugging [4], [5] can potentially be used to answer many of these questions, since it works by maintaining a complete execution history of a program run. There are two critical drawbacks, however, which limit the practical application of back-in-time debugging. First, the approach is inherently post mortem: one cannot debug a running system, but only the history of a completed run. Interaction is therefore strictly limited, and extensive exploration may require many runs to be performed. Second, the approach entails considerable overhead, both in terms of runtime performance and in terms of memory requirements to build and explore the history.
  182. We add one or more meta-objects to one or more objects depending on the case; all this happens in the back end.
  185. Double dispatch.
  186. One of the nodes was not correctly rendered.
  187. We have more commands than the ones in the debugger, but we did not know how to put them there.
  194. Halt on next specific messages.
  195. Render the traditional UI obsolete. 30+ years of debugging in the same way.
  199. What is the relationship between this and the domain? Picture again.
  200. New dimension of problem-domain.
  201. New dimension of problem-domain.
  202. TOOLS 2011.
  211. Clicking and drag-and-dropping nodes refreshes the visualization, thus increasing the progress bar of the corresponding nodes. This profile helps identify unnecessary rendering. We identified a situation in which nodes were refreshing without receiving user actions.
  212. TOOLS 2011.
  213. In the remainder of this presentation I am going to show how to address these issues. The graphic displays the architecture of the Bifröst system; it also serves as an agenda for the remainder of this presentation.
  214. Logo?
  216. Traces are not directly related to the runtime entities.
  217. Traces are not directly related to the runtime entities.
  218. Traces are not directly related to the runtime entities.
  224. We do not know what to evolve or who should evolve. We need some spreading of the evolution, like a disease or a cure. The adaptation cannot be seen by other objects which are in a different execution. The adaptation is dynamically set.
  231. The big difference is that we reify the execution and the dynamic context, so we can reflect on them and decide when they should finish, undeploy, etc.
  235. The propagation is not thread-local.
  236. Logo?
  247. The impact on the system is bearable.
  248. We do not disturb anyone else; we have an impact, but it only hits us, since others cannot see it.
  261. Using Pier and adapting only Pier we get a 20% negative impact on average across the different use cases.
  262. If we adapt the whole Smalltalk image.
  277. A technique that supports dynamic dispatch amongst several, possibly independent instrumentations. These instrumentations are saved and indexed by a version identifier. A Prisma scope can be related to a particular version of instrumentations over objects' methods. When the dynamic extent is running, only the instrumented method versions indexed by the scope should be executed.
  278. What is the problem with AOP? It was not originally motivated by reifications. The debugger in ALIA4J is going in that direction. It is more like a language to express certain "events" but not a full model; aspects are not reified. Do not get me wrong, it is amazing what they have achieved, but the reification problem is still a big one, mainly because it was not born in a dynamic language.
  280. Following the idea of per-object meta-objects, Rajan and Sullivan propose per-object aspects. An aspect deployed on a single object only sees the join points produced by this object. This join point observation can be stopped at any time.
  281. CaesarJ provides deploy blocks which restrict behavioral adaptations to take place only within the dynamic extent of the block. The scope is explicitly embedded within the application code, but this approach has an implicit exit point, which is the end of the execution of the deploy block. No value can be parameterized in the deploy block. The adaptation is bound at run time, but it cannot be unbound nor rebound during the execution. Finally, the adaptation is applied locally to the thread executing the deploy block.
  282. AspectScheme is an aspect-oriented procedural language where pointcuts and advices are first-class values. AspectScheme introduces two deployment expressions: one can dynamically deploy an aspect over a body expression; the other can statically deploy an aspect which only sees join points occurring lexically in its body.
  283. AspectS is a dynamic aspect system defined in the context of Smalltalk. AspectS supports first-class activation conditions, which are objects modeling a dynamic condition. Since this is a Smalltalk implementation, the condition can be dynamically bound at runtime; this means that conditions can be installed and uninstalled at runtime. Generally, these changes are global, but as Hirschfeld and Costanza showed, thread locality can be achieved.
  284. ALIA4J is an approach for implementing language extensions by reifying dispatching. It uses a fine-grained intermediate representation close to the source-level abstractions. The key objective of ALIA4J is to allow researchers to focus on developing source languages for solving specific problems. For example, ALIA4J implemented CaesarJ, and thus it supports object-specific adaptations. One shortcoming of this approach is that even though the dispatching is reified, the adaptation abstraction (meta-object) is not. Thus, achieving dynamically defined interfaces is not possible, and so the source code has to be modified to signal an event interface change.
  285. The ALIA4J AOP-specific debugger recognizes the need to explicitly model some AOP abstractions. The authors observe that aspect adaptations are lost when woven, and it is this implicitness that hinders the correct detection and resolution of aspect-related bugs. Yin et al. define a debugging model which is aware of aspect-oriented concepts. This solution is targeted at AOP events; we require a solution targeted at object behavioral events.
  293. Cells have more power, and OOP stems from cells. What if we combine managed with unmanaged memory management? http://www.wormbook.org/chapters/www_programcelldeath/programcelldeath.html
  294. Garbage collection is cool, but a black box for most people. Not controllable. Objects are not involved.
  295. In today's programming languages you can choose between two extremes; it is very black and white. Either you have "automatic" collection, or you do everything "manually". In an automatic setup everything is "temporary"; in a manual one all your actions are "permanent". Either you have a "referenced" object, or an "unreferenced" object; you have no control over unreferenced objects, and it is hard to get them back. Either you have a "strong" reference, or a "weak" one; you have to know upfront and you cannot easily change your mind on the fly.
  296. Implementing good caches is difficult. Either they are too aggressive and keep objects around for too long, or they are too relaxed and let objects die too soon. If the objects can decide themselves how long they should stick around, it is much simpler to implement a good cache.
  297. They disappear too fast if they are not referenced; like this they could decide to stick around, which makes them faster if they are required again. The same holds in Java with string sharing: a string that has no references disappears immediately and is not shared anymore, should it appear again.
  300. Tanter, in Scoping Strategies, mentions the problem of stating a condition for undeployment.