25. Domain-Specific Profiling

CPU time profiling

Mondrian [9] is an open and agile visualization engine. Mondrian describes a visualization using a graph of (possibly nested) nodes and edges. In June 2010 a serious performance issue was raised1. Tracking down the cause of the poor performance was not trivial. We first used a standard sample-based profiler. Execution sampling approximates the time spent in an application's methods by periodically stopping a program and recording the current set of methods under execution. Such a profiling technique is relatively accurate since it has little impact on the overall execution. This sampling technique is used by almost all mainstream profilers, such as JProfiler, YourKit, xprof [10], and hprof.
MessageTally, the standard sampling-based profiler in Pharo Smalltalk2, textually describes the execution in terms of CPU consumption and invocation for each method of Mondrian:

54.8% {11501ms} MOCanvas>>drawOn:
54.8% {11501ms} MORoot(MONode)>>displayOn:
30.9% {6485ms} MONode>>displayOn:
| 18.1% {3799ms} MOEdge>>displayOn:
...
| 8.4% {1763ms} MOEdge>>displayOn:
| | 8.0% {1679ms} MOStraightLineShape>>display:on:
| | 2.6% {546ms} FormCanvas>>line:to:width:color:
...
23.4% {4911ms} MOEdge>>displayOn:
...

We can observe that the virtual machine spent about 54% of its time in the method displayOn: defined in the class MORoot. A root is the unique non-nested node that contains all the nodes and edges of the visualization. This general profiling information says that rendering nodes and edges consumes a great share of the CPU time, but it does not help in pinpointing which nodes and edges are responsible for the time spent. Not all graphical elements equally consume resources.
Traditional execution sampling profilers center their result on the frames of the execution stack and completely ignore the identity of the object that received the method call and its arguments. As a consequence, it is hard to track down which objects cause the slowdown. For the example above, the traditional profiler says that we spent 30.9% in MONode>>displayOn: without saying which nodes were actually refreshed too often.

Coverage

PetitParser is a parsing framework combining ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers to model grammars and parsers as objects that can be reconfigured dynamically.
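The execution sampling described above can be illustrated with a small sketch. This is a toy stand-in in plain Python, not Pharo's MessageTally; the `busy` workload and the 1 ms sampling interval are arbitrary choices for the illustration:

```python
import collections
import sys
import threading
import time

def sample_stacks(target_thread_id, counts, stop, interval=0.001):
    # Periodically record the functions currently on the target thread's
    # stack; hot functions accumulate the most samples.
    while not stop.is_set():
        frame = sys._current_frames().get(target_thread_id)
        while frame is not None:
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)

def busy():
    # Deliberately slow function, playing the role of a hot method
    # such as MORoot>>displayOn: in the profile above.
    total = 0
    for i in range(5_000_000):
        total += i
    return total

counts = collections.Counter()
stop = threading.Event()
sampler = threading.Thread(
    target=sample_stacks,
    args=(threading.main_thread().ident, counts, stop))
sampler.start()
busy()
stop.set()
sampler.join()

# 'busy' should dominate the samples, mirroring how MessageTally
# reports the share of CPU time per method.
print(counts.most_common(3))
```

As with MessageTally, the report is centered on code (function names), not on the objects involved, which is exactly the limitation the text points out.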
1 http://forum.world.st/Mondrian-is-slow-next-step-tca2261116
2 http://www.pharo-project.org/
47. When during the execution is this method called? (Q.13)
Where are instances of this class created? (Q.14)
Where is this variable or data structure being accessed? (Q.15)
What are the values of the argument at runtime? (Q.19)
What data is being modified in this code? (Q.20)
How are these types or objects related? (Q.22)
How can data be passed to (or accessed at) this point in the code? (Q.28)
What parts of this data structure are accessed in this code? (Q.33)
48. Sillito et al., Questions programmers ask during software evolution tasks, 2008
53. Debugging
61. Thesis:
To overcome the object paradox while providing a
unified and uniform solution to the key reflection
requirements we need an object-centric reflective
system which targets specific objects as the central
reflection mechanism through explicit meta-objects.
197. Setting breakpoints
31 methods accessing the variable
9 assignments
the pc setter is used by 5 intensively used classes
198. stack-centric debugging (centered on the InstructionStream class)
step into, step over, resume
InstructionStream class>>on:
InstructionStream class>>new
InstructionStream>>initialize
CompiledMethod>>initialPC
InstructionStream>>method:pc:
InstructionStream>>nextInstruction
MessageCatcher class>>new
InstructionStream>>interpretNextInstructionFor:
...

object-centric debugging (centered on the InstructionStream object)
next message, next change
initialize
on:
method:pc:
new
nextInstruction
interpretNextInstructionFor:
...
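The contrast on this slide can be sketched in Python. This is a toy stand-in for an object-centric debugger, not the Bifröst implementation; `Counter` and the tracing helper are invented for the illustration. Instead of stepping through stack frames, we attach the instrumentation to one object and observe every message it receives:

```python
class Counter:
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1
    def reset(self):
        self.count = 0

log = []

def object_centric(obj):
    # Instrument *one object* so every message it receives is recorded,
    # no matter where in the call stack the send happens. Other
    # instances of the same class are unaffected.
    base = type(obj)
    namespace = {}
    for name, method in vars(base).items():
        if callable(method) and not name.startswith("__"):
            def traced(self, *args, _n=name, _m=method, **kw):
                log.append(_n)
                return _m(self, *args, **kw)
            namespace[name] = traced
    obj.__class__ = type("Traced" + base.__name__, (base,), namespace)

a, b = Counter(), Counter()
object_centric(a)
a.increment(); a.increment(); a.reset()
b.increment()  # uninstrumented instance: nothing logged
print(log)
```

The per-instance class swap is one possible mechanism; the point is that the unit of observation is the object, not the stack frame.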
199. Halt on next message
Halt on next message/s named
Halt on state change
Halt on state change named
Halt on next inherited message
Halt on next overloaded message
Halt on object/s in call
Halt on next message from package
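A minimal sketch of one of these commands, "Halt on state change", in Python. The `Point` class and the callback are hypothetical, and a real debugger would suspend execution rather than record an event:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

events = []

def halt_on_state_change(obj, callback):
    # Object-centric instrumentation: give *this one instance* an
    # anonymous subclass whose __setattr__ reports every field write.
    # Other instances of the class keep their normal behavior.
    base = type(obj)

    class Instrumented(base):
        def __setattr__(self, name, value):
            callback(self, name, value)
            super().__setattr__(name, value)

    obj.__class__ = Instrumented

p, q = Point(1, 2), Point(3, 4)
halt_on_state_change(p, lambda o, n, v: events.append((n, v)))
p.x = 10   # triggers the callback
q.x = 99   # untouched instance: no event
print(events)
```

"Halt on state change named" would simply add a field-name filter inside the callback.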
252. Back in time Debugger
Object Flow. Lienhard et al., ECOOP 2008
253. name history of a :Person instance
init@t1          value: null
field-write@t2   value: 'Doe'    (predecessor: init@t1)
field-write@t3   value: 'Smith'  (predecessor: field-write@t2)

person := Person new     "t1"
...
name := 'Doe'            "t2"
...
name := 'Smith'          "t3"
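The value history sketched on this slide can be mimicked in Python. The class names are hypothetical, and the timestamps are replaced by the predecessor chain itself:

```python
class History:
    # Each field write is recorded together with its predecessor, so a
    # back-in-time debugger can walk the chain of earlier values.
    def __init__(self):
        self.last = {}  # field name -> most recent write record

    def record(self, name, value):
        entry = {"value": value, "predecessor": self.last.get(name)}
        self.last[name] = entry
        return entry

class Person:
    def __init__(self):
        self._history = History()
        self._history.record("name", None)   # init@t1

    def __setattr__(self, name, value):
        if name != "_history" and "_history" in self.__dict__:
            self._history.record(name, value)
        super().__setattr__(name, value)

person = Person()
person.name = "Doe"     # field-write@t2
person.name = "Smith"   # field-write@t3

h = person._history.last["name"]
print(h["value"], h["predecessor"]["value"])
```

Recording every write for every object is exactly the source of the memory and runtime overhead the notes attribute to back-in-time debugging.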
323. I believe the argument that reflection is dangerous because it gives
programmers too much power is a specious one. Existing programming
languages already give clumsy programmers more than ample
opportunities to shoot themselves in the foot. I'll concede that reflective
systems allow the clumsy programmer to fire several rounds per second
into his foot, without reloading. Still, I'm confident that, as is the case
with features such as pointers, competent programmers will make
appropriate use of the power of reflection with care and skill. Potential
abuse by inept programmers should not justify depriving the
programming community of a potentially vital set of tools.
Brian Foote
Editor's Notes
Reflected on reflection. Thinking it inside out. Reinvented reflection.
A reflective system can be divided into two levels: the base level, which is concerned with the application domain, and the meta-level, which encompasses the self-representation. These levels are causally connected.
Behavioral reflection is concerned with the manipulation of the abstractions which govern the execution of a program.
There has been a lot of work in this domain. This is a summary of the requirements.
Refers to a meta-environment that runs at the same level as the application code, i.e., not in the interpreter of the host language.
Code profilers commonly employ execution sampling as the way to obtain dynamic run-time information.
Mondrian is a framework for drawing graphs.
What is the relationship between this and the domain? Picture again.
width = the number of attributes of the class; height = the number of methods of the class; color = the number of lines of code of the class.
Double dispatch.
One of the nodes was not correctly rendered.
We have to go back to the code.
Which questions do these debuggers try to answer? Sillito.
Although object-oriented developers are supposed to think in terms of objects, the tools and environments they use mostly prevent this. Reflective systems prioritizing static mechanisms over object reflection present a gap between the user needs and what the reflective system provides. Thus, the user is less efficient, since he has to introduce ad hoc changes to steer the reflective system to solve his object-specific needs.
Requirements + object paradox + unified and uniform. Solution: object-centric with meta-objects. Explicit object-centric meta-objects.
Unified: the approach should solve all the requirements. Uniform: the approach should do this in a unique way, not having special cases that need to be handled differently from the rest.
In the remainder of this presentation I am going to show how to address these issues. The graphic displays the architecture of the Bifröst system. It also serves as an agenda for the remainder of this presentation. Explain what we are going to do in each part. 1. We will explain the core of OCR (object-centric reflection). 2. We will analyze how development itself is changed with this new approach. 3. We will analyze the impact of OCR on end-user applications: does it have a meaningful impact, and why should I care about it?
We see that there are many different ways of doing reflection, adaptation, and instrumentation; many are low level. And the ones that are highly flexible cannot break free from the limitations of the language.
Adaptation: a semantic abstraction for composing meta-level structure and behavior.
The meta-objects can be applied and deapplied at any time.
There is no special case, there is no side path; everything works by attaching meta-objects to objects.
No change happens without going through them. No implicit interactions! This makes reflection explicit.
Since My Rectangle is a composition, I cannot access the behavior in MColor or MBorder exclusively. Mixins have to be applied one at a time.
Traits: originally in Self, then developed by Nathanael Schärli in Smalltalk, with composition operations. Self traits do not provide composition operators.
Object evolution.
Streams are used to iterate over sequences of elements such as sequenced collections, files, and network streams. Streams offer a better way than collections to incrementally read and write a sequence of elements.
The potential combination of all these various types of streams leads to an explosion in the number of classes.
Pharo stream hierarchy.
Traditional stream implementations check in every method if the underlying stream is still open. With talents we can avoid such cumbersome checks and dynamically acquire a ClosedStreamTalent when a stream is closed.
Structural changes.
We instrument the interesting objects with meta-objects which state under which circumstances the events have to be triggered.
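The ClosedStreamTalent idea from the note above can be sketched in Python via a per-instance class swap. The class names are invented for the sketch, and Pharo talents work differently under the hood:

```python
class ReadStream:
    def __init__(self, data):
        self.data, self.pos = data, 0

    def next(self):
        # No 'if self.closed: ...' check needed here.
        self.pos += 1
        return self.data[self.pos - 1]

    def close(self):
        # Instead of checking a 'closed' flag in every method, swap this
        # one instance to a class whose methods refuse further reads.
        self.__class__ = ClosedStream

class ClosedStream(ReadStream):
    def next(self):
        raise IOError("stream is closed")

s = ReadStream("abc")
first = s.next()   # 'a'
s.close()
try:
    s.next()
except IOError as e:
    msg = str(e)
print(first, msg)
```

The state check disappears from the hot path because closing the stream changes the behavior of that one object.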
The next snippet of code describes their motivating example of a shopping cart discount. Every time a product in low demand is purchased, a discount is applied to the purchase value.
JPI.
Dynamic JPI. The key characteristics are that we have a dynamically defined interface. JPIs have to define and modify the source code prior to execution. We can adapt dynamically, with no anticipation. The explicit meta-objects allow us to explicitly and dynamically query objects to know which events they can trigger.
We now have new possibilities, so we can rethink existing tools.
Does OCR have a meaningful impact on the applications I use every day? Does it change anything?
We have to go back to the code.
Questions 15, 19, 20, 28 and 33 all have to do with tracking state at runtime. Consider in particular question 15: Where is this variable or data structure being accessed? Let us assume that we want to know where an instance variable of an object is being modified. This is known as keeping track of side effects [3]. One approach is to use step-wise operations until we reach the modification. However, this can be time-consuming and unreliable. Another approach is to place breakpoints in all assignments related to the instance variable in question. Finding all these assignments might be troublesome depending on the size of the use case, as witnessed by our own experience. Tracking down the source of this side effect is highly challenging: 31 of the 38 methods defined on InstructionStream access the variable, comprising 12 assignments; the instance variable is written 9 times in InstructionStream's subclasses. In addition, the variable pc has an accessor that is referenced by 5 intensively-used classes.
Question 22 poses further difficulties for the debugging approach: How are these types or objects related? In statically typed languages this question can be partially answered by finding all the references to a particular type in another type. Due to polymorphism, however, this may still yield many false positives. (An instance variable of type Object could potentially be bound to instances of any type we are interested in.) Only by examining the runtime behavior can we learn precisely which types are instantiated and bound to which variables. The debugging approach would, however, require heavy use of conditional breakpoints (to filter out types that are not of interest), and might again entail the setting of breakpoints in a large number of call sites.
Back-in-time debugging [4], [5] can potentially be used to answer many of these questions, since it works by maintaining a complete execution history of a program run. There are two critical drawbacks, however, which limit the practical application of back-in-time debugging. First, the approach is inherently post mortem. One cannot debug a running system, but only the history of a completed run. Interaction is therefore strictly limited, and extensive exploration may require many runs to be performed. Second, the approach entails considerable overhead both in terms of runtime performance and in terms of memory requirements to build and explore the history.
We add one or more meta-objects to one or more objects depending on the case; all this happens in the back end.
We have more commands than the ones in the debugger, but we did not know how to put them there.
Halt on next specific messages.
Render the traditional UI obsolete. 30+ years of debugging in the same way.
New dimension of problem-domain.
TOOLS 2011.
Clicking and drag-and-dropping nodes refreshes the visualization, thus increasing the progress bar of the corresponding nodes. This profile helps identify unnecessary rendering. We identified a situation in which nodes were refreshing without receiving user actions.
Logo?
Traces are not directly related to the runtime entities.
We do not know what to evolve or who should evolve. We need some spreading of the evolution, like a disease or cure. The adaptation cannot be seen by other objects which are in a different execution. The adaptation is dynamically set.
The big difference is that we reify the execution, the dynamic context, so we can reflect on them and decide when they should finish, undeploy, etc.
The propagation is not thread-local.
The impact on the system is bearable.
We do not annoy anyone else; we have an impact, but it only hits us, since others cannot see it.
Using Pier and adapting only Pier, we get a 20% negative impact on average in the different use cases.
If we adapt the whole Smalltalk image.
A technique that supports dynamic dispatch amongst several, possibly independent instrumentations. These instrumentations are saved and indexed by a version identifier. A Prisma scope can be related to a particular version of instrumentations over objects' methods. When the dynamic extent is running, only the methods' instrumented versions indexed by the scope should be executed.
What is the problem with AOP? It was not originally motivated by reifications. The debugger in ALIA4J is going in that direction. It is more like a language to express certain "events" but not a full model; aspects are not reified. Do not get me wrong, it is amazing what they have achieved, but the reification problem is still a big one, mainly because it was not born in a dynamic language.
Following the idea of per-object meta-objects, Rajan and Sullivan propose per-object aspects. An aspect deployed on a single object only sees the join points produced by this object. This join point observation can be stopped at any time.
CaesarJ provides deploy blocks which restrict behavioral adaptations to take place only within the dynamic extent of the block. The scope is explicitly embedded within the application code, but this approach has an implicit exit point, which is the end of the execution of the deploy block. No value can be parameterized in the deploy block. The adaptation is bound at run time, but it cannot be unbound nor rebound during the execution. Finally, the adaptation is applied locally to the thread executing the deploy block.
AspectScheme is an aspect-oriented procedural language where pointcuts and advice are first-class values. AspectScheme introduces two deployment expressions. One expression can dynamically deploy an aspect over a body expression. The other can statically deploy an aspect which only sees join points occurring lexically in its body.
AspectS is a dynamic aspect system defined in the context of Smalltalk. AspectS supports first-class activation conditions, which are objects modeling a dynamic condition. Since this is a Smalltalk implementation, the condition can be dynamically bound at runtime. This means that conditions can be installed and uninstalled at runtime. Generally, these changes are global, but as Hirschfeld and Costanza showed, thread locality can be achieved.
ALIA4J is an approach for implementing language extensions by reifying dispatching. It uses a fine-grained intermediate representation close to the source-level abstractions. The key objective of ALIA4J is to allow researchers to focus on developing source languages for solving specific problems. For example, ALIA4J implemented CaesarJ, thus it supports object-specific adaptations. One shortcoming of this approach is that even though the dispatching is reified, the adaptation abstraction (meta-object) is not. Thus, achieving dynamically defined interfaces is not possible, and so the source code has to be modified to signal an event interface change.
The ALIA4J AOP-specific debugger realizes the need to explicitly model some AOP abstractions. The authors reflect that aspect adaptations are lost when woven, and it is this implicitness that hinders the correct detection and solution of aspect-related bugs. Yin et al. define a debugging model which is aware of aspect-oriented concepts. This solution is targeted at AOP events. We require a solution targeted at object behavioral events.
Cells have more power. And OOP stems from cells. What if we combine managed with unmanaged memory management? http://www.wormbook.org/chapters/www_programcelldeath/programcelldeath.html
Garbage collection is cool, but a black box for most people. Not controllable. Objects are not involved.
In today's programming languages you can choose between two. It is very black and white. Either you have "automatic" collection, or you do everything "manually". In an automatic setup everything is "temporary"; in a manual one all your actions are "permanent". Either you have a "referenced" object, or an "unreferenced" object. You have no control over unreferenced objects, and it is hard to get them back. Either you have a "strong" reference, or a "weak" one. You have to know upfront and you cannot easily change your mind on the fly.
Implementing good caches is difficult. Either they are too aggressive and keep objects around for too long, or they are too relaxed and let objects die too soon. If the objects can decide themselves how long they should stick around, it is much simpler to implement a good cache.
They disappear too fast if they are not referenced; like this they could decide to stick around, which makes them faster if they are required again. Same in Java, with the string sharing thing: a string that has no references disappears immediately and is not shared anymore, should it appear again.
Tanter, in scoping strategies, mentions the problem of stating a condition for undeployment.
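CaesarJ's deploy blocks, discussed above, bind an adaptation only for the dynamic extent of a block. A rough Python analogue using a context manager; `Greeter` and `deployed` are invented for the sketch, and thread-locality is ignored:

```python
import contextlib

class Greeter:
    def greet(self):
        return "hello"

@contextlib.contextmanager
def deployed(obj, name, replacement):
    # Dynamic-extent deployment in the spirit of a deploy block: the
    # adaptation is bound on entry and unbound again on exit, even if
    # the body raises.
    base = type(obj)
    obj.__class__ = type("Adapted" + base.__name__, (base,),
                         {name: replacement})
    try:
        yield obj
    finally:
        obj.__class__ = base

g = Greeter()
with deployed(g, "greet", lambda self: "HELLO"):
    inside = g.greet()   # adapted behavior
outside = g.greet()      # original behavior restored
print(inside, outside)
```

Unlike a deploy block, this sketch has an explicit exit point (the end of the `with` block), but it shows the same bind-on-entry, unbind-on-exit discipline.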