2. A Real-Life Experience
• A customer sends an email to the company's C-level executives about a complaint.
• The request then travels from the top down ( CEO -> CTO -> Director -> Dept. Manager -> Manager -> Developer ).
• The developer analyzes the code but cannot find any information about the complaint, because it concerns a situation that happened four months ago.
• Analysis, code reading, testing, more code reading, …
• A few days later, the problem is solved.
• … but a few days have passed.
• …
3. Un-instrumented code
FUNCTION ExecuteWebService
(
pic_RequestXML IN NCLOB,
pis_ServiceName IN VARCHAR2,
pis_ChannelName IN VARCHAR2,
pin_TransactionId IN NUMBER
) RETURN INTEGER
...
...
BEGIN
...
...
vs_SOAPEnvelopeXMLNS := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"';
vs_SOAPFaultXPATH := '/soap:Envelope/soap:Body/soap:Fault';
...
...
RETURN cn_SUCCESS;
EXCEPTION
WHEN OTHERS THEN
...
...
RETURN cn_FAILURE;
END;
4. Instrumentation [1]: Add trace information into the
code (function calls, input parameters, etc.)
FUNCTION ExecuteWebService
(
pic_RequestXML IN NCLOB,
pis_ServiceName IN VARCHAR2,
pis_ChannelName IN VARCHAR2,
pin_TransactionId IN NUMBER
) RETURN INTEGER
...
...
BEGIN
LogInfo( 'Executing ExecuteWebService function.' );
LogInfo('[ServiceName,ChannelName,TransactionId]');
LogInfo( '[' || pis_ServiceName || ',' || pis_ChannelName || ',' || pin_TransactionId || ']' );
...
...
vs_SOAPEnvelopeXMLNS := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"';
vs_SOAPFaultXPATH := '/soap:Envelope/soap:Body/soap:Fault';
...
...
RETURN cn_SUCCESS;
EXCEPTION
WHEN OTHERS THEN
...
...
RETURN cn_FAILURE;
END;
5. Instrumentation [2]: Add debug information into the
code (error codes, exception messages, etc.)
FUNCTION ExecuteWebService
(
pic_RequestXML IN NCLOB,
pis_ServiceName IN VARCHAR2,
pis_ChannelName IN VARCHAR2,
pin_TransactionId IN NUMBER
) RETURN INTEGER
...
...
BEGIN
LogInfo( 'Executing ExecuteWebService function.' );
LogInfo('[ServiceName,ChannelName,TransactionId]');
LogInfo( '[' || pis_ServiceName || ',' || pis_ChannelName || ',' || pin_TransactionId || ']' );
...
...
vs_SOAPEnvelopeXMLNS := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"';
vs_SOAPFaultXPATH := '/soap:Envelope/soap:Body/soap:Fault';
...
...
RETURN cn_SUCCESS;
EXCEPTION
WHEN OTHERS THEN
...
...
vs_ErrorMessage := 'Error in ExecuteWebService function :' || SQLERRM;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || dbms_utility.format_error_backtrace;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || 'UTL_HTTP Detail SQL Code = ' || vs_DetailSQLCode;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || 'UTL_HTTP Detail SQL Message = ' || vs_DetailSQLMessage;
LogError( vs_ErrorMessage );
RETURN cn_FAILURE;
END;
6. Instrumentation [3]: Add performance information into
the code (performance counters, timing and profiling)
FUNCTION ExecuteWebService
(
pic_RequestXML IN NCLOB,
pis_ServiceName IN VARCHAR2,
pis_ChannelName IN VARCHAR2,
pin_TransactionId IN NUMBER
) RETURN INTEGER
...
...
BEGIN
SetStartTime();
LogInfo( 'Executing ExecuteWebService function.' );
LogInfo('[ServiceName,ChannelName,TransactionId]');
LogInfo( '[' || pis_ServiceName || ',' || pis_ChannelName || ',' || pin_TransactionId || ']' );
...
...
SetStatistics('INIT');
vs_SOAPEnvelopeXMLNS := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"';
vs_SOAPFaultXPATH := '/soap:Envelope/soap:Body/soap:Fault';
...
...
RETURN cn_SUCCESS;
EXCEPTION
WHEN OTHERS THEN
...
...
vs_ErrorMessage := 'Error in ExecuteWebService function :' || SQLERRM;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || dbms_utility.format_error_backtrace;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || 'UTL_HTTP Detail SQL Code = ' || vs_DetailSQLCode;
vs_ErrorMessage := vs_ErrorMessage || cs_NEWLINE || 'UTL_HTTP Detail SQL Message = ' || vs_DetailSQLMessage;
LogError( vs_ErrorMessage );
RETURN cn_FAILURE;
END;
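The LogInfo and SetStartTime calls in the slides above assume helper routines that are not shown. A minimal sketch of one possible implementation follows; the table name app_log, its columns, and the helper bodies are illustrative assumptions, not part of the original code:

```plsql
-- Hypothetical sketch of the LogInfo helper used in the slides.
-- A log table keeps entries queryable from SQL:
CREATE TABLE app_log
(
  log_time    TIMESTAMP DEFAULT SYSTIMESTAMP,
  log_level   VARCHAR2(10),
  log_message VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE LogInfo( pis_Message IN VARCHAR2 )
IS
  -- Autonomous transaction: the log entry is committed even if the
  -- surrounding business transaction later rolls back.
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log ( log_level, log_message )
  VALUES ( 'INFO', SUBSTR( pis_Message, 1, 4000 ) );
  COMMIT;
END;
/
```

The autonomous transaction is the key design choice here: without it, a failed business transaction would also roll back the very trace entries needed to diagnose the failure.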
7. Please, please, please instrument your code![*]
• Dev: Why is this call failing?
• Me: What are the parameter values you are calling it with in your code?
• Dev: Values X, Y and Z.
• Me: Have you called the routine directly with those values?
• Dev: Yes, and it worked fine.
• Me: That would suggest those are not the values you are really using then.
• Dev: But they are.
• Me: How do you know? Have you traced the variable values before you made the call?
• Dev: No, but they are the correct values.
• Me: Can you put in some trace messages to check?
• Dev: (walks away) … grumble … grumble … stupid trace … wasting my time …
• Me: (some time later) So what values were you passing in?
• Dev: One of the values was not what I was expecting, which is why it broke.
No matter who you are or how cool you think you are at programming,
you can never know exactly what is going on in your code unless you
instrument it. Just shut up and do it!
Experience tells me you will read this and ignore it, but that is a big
mistake! A real professional *always* comments and instruments their
code!
[*] : from Tim Hall
8. Definition
From Wikipedia, the free encyclopedia
• In the context of computer programming, instrumentation
refers to the ability to monitor or measure the level of a
product's performance, to diagnose errors, and to write
trace information.
• Programmers implement instrumentation in the form of
code instructions that monitor specific components in a
system (for example, instructions may output logging
information to appear on screen).
• When an application contains instrumentation code, it can
be managed using a management tool.
• Instrumentation is necessary to review the performance of
the application.
• Instrumentation approaches can be of two types, source
instrumentation and binary instrumentation.
9. Definition – cont.
In programming, instrumentation means the ability of an application to
incorporate:
• Code tracing - receiving informative messages about the execution of
an application at run time.
• Debugging and (structured) exception handling - tracking down and
fixing programming errors in an application under development.
• Profiling (computer programming) - a means by which dynamic
program behaviors can be measured during a training run with a
representative input. This is useful for properties of a program which
cannot be analyzed statically with sufficient precision, such as alias
analysis.
• Performance counters - components that allow the tracking of the
performance of the application.
• Computer data logging - components that allow the logging and
tracking of major events in the execution of the application.
11. Our case against debugging[*]
• We have reproduced this experience several times,
with various customers, and noticed that about 50%
of project time is spent debugging and not
producing code. And we are not even talking about
the testing process here. Amazingly, we have
recently seen a question on a LinkedIn group about
the time spent in debugging, and 50% was the
number mentioned as the first response to this
thread by a developer. By the way, we need to
congratulate the guy for pulling out this number as
he is fully aware of where his time goes, which
obviously is not the case for many developers.
[*] from http://blog.softfluent.com/2011/04/20/our-case-against-debugging/
12. Our case against debugging – cont.
• The main reason would be obvious to a financial guy. It
is the "capex" versus "opex" difference. Debugging is an
operational expense. Once you close the debugger, all
that you have learned vanishes and your time has been
spent without any remaining value. As a comparison,
imagine you have spent the same time putting
appropriate tracing in relevant functions, outputting the
key parameter values to a relevant file in a manner that
can be reproduced and configured. You will get the same
benefits as a developer to check that your program does
what it should and detect potential mistakes. But at the
same time, you will have prepared future analysis once
the program evolves. Tracing is a capital expense.
13. Our case against debugging – cont.
• Pushing the reasoning a bit further, one can
easily understand that this is also a matter
of team versus individual vision. The investment
put in tracing will benefit everybody, while the
debugging approach is an individual process that
cannot easily be shared. As an example, I had a
"magnetic resonance imaging" exam last month,
and I was surprised and happy that all this exam
was actually recorded and given to me on a CD. I
am now able to share it with my usual doctor as
well as any specialist that I would like to have a
second look at the exam.
14. Our case against debugging – cont.
• Similarly, it is also a matter of developer machine versus
production environment. As computing gets more and
more complex, with server-based and cloud-based
infrastructure, as well as load-balancing or security
constraints, the developer desktop – although simulating
those infrastructures – will more and more differ from the
real execution context. The risk of not being able to debug
some scenarios is higher and tracing is often the only real
option to understand what is happening. By the way, think
about what is happening with airplanes. We have very
powerful simulators that can simulate anything one can
imagine. Still, we put black boxes in aircraft to
understand what happened under unpredicted
circumstances. So tracing has to be done anyway.
15. Effective Code Instrumentation? [*]
From Achilles : All too often I read statements about some new
framework and their "benchmarks." My question is a general one
but to the specific points of:
1. What approach should a developer take to effectively
instrument code to measure performance?
2. When reading about benchmarks and performance testing, what
are some red-flags to watch out for that might not represent real
results?
And the best answer, from Patrick:
There are two methods of measuring performance: using code
instrumentation and using sampling… (please read the full
conversation.)
[*] : from http://stackoverflow.com/questions/2345081/effective-code-instrumentation
16. Instrumentation [*]
• To the developers that say “this is extra code that will just make my code
run slower” I respond “well fine, we will take away V$ views, there will be
no SQL_TRACE, no 10046 level 12 traces, in fact – that entire events
subsystem in Oracle, it is gone”. Would Oracle run faster without this
stuff? Undoubtedly – not. It would run many times slower, perhaps
hundreds of times slower. Why? Because you would have no clue where to
look to find performance related issues. You would have nothing to go
on. Without this “overhead” (air quotes intentionally used to denote
sarcasm there), Oracle would not have a chance of performing as well as it
does. Because you would not have a chance to make it perform
well. Because you would not know where even to begin.
• So, a plea to all developers, get on the instrumentation bandwagon. You’ll
find your code easier to debug (note how Oracle doesn’t fly a developer to
your site to debug the kernel, there is enough instrumentation to do it
remotely). You’ll find your code easier to tune. You’ll find your code easier
to maintain over time. Also, make this instrumentation part of the
production code, don’t leave it out! Why? Because, funny thing about
production – you are not allowed to drop in “debug” code at the drop of a
hat, but you are allowed to update a row in a configuration table, or in a
configuration file! Your trace code, like Oracle’s should always be there, just
waiting to be enabled.
[*] : from Tom Kyte
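Tom Kyte's point about trace code that is "always there, just waiting to be enabled" can be sketched with a trace level read from a configuration table, so tracing is switched on in production with a single UPDATE rather than a code deployment. The table and column names below are assumptions for illustration, and LogInfo stands for whatever ordinary logging routine the application already has:

```plsql
-- Hypothetical sketch: trace output controlled by a configuration row.
CREATE TABLE app_config
(
  config_key   VARCHAR2(100) PRIMARY KEY,
  config_value VARCHAR2(100)
);

INSERT INTO app_config VALUES ( 'TRACE_ENABLED', 'N' );

CREATE OR REPLACE PROCEDURE LogTrace( pis_Message IN VARCHAR2 )
IS
  vs_Enabled app_config.config_value%TYPE;
BEGIN
  SELECT config_value
    INTO vs_Enabled
    FROM app_config
   WHERE config_key = 'TRACE_ENABLED';

  IF vs_Enabled = 'Y' THEN
    LogInfo( pis_Message );  -- reuse the ordinary logging routine
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;  -- no configuration row: tracing stays off
END;
/

-- Enabling trace in production then needs no code change:
-- UPDATE app_config SET config_value = 'Y' WHERE config_key = 'TRACE_ENABLED';
```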
17. How to Instrument
• Use database tables for logging, tracing, and performance counters.
Archive this log information when it is no longer frequently analyzed. If it
is not needed, purge the table data; otherwise DBAs will complain.
• The file system (UTL_FILE package) can also be used for logging, but
note that database tables are much easier to query and analyze.
• Use Oracle's built-in tools for performance monitoring and profiling
(Statspack, AWR reports, PL/SQL Hierarchical Profiler, etc.).
• Log only necessary information. These log records are read by human
eyes; tons of log data are very difficult to read and analyze.
• Actually analyze the log and trace information. Be proactive. Take action
before something goes wrong.
• Instrumentation is a quality issue and shall not be skipped. It is a matter
of code discipline.
• In the code review process, the reviewer shall flag un-instrumented code.
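The PL/SQL Hierarchical Profiler mentioned above can be driven from the built-in DBMS_HPROF package. A hedged sketch follows; PROF_DIR is an assumed Oracle directory object that must already exist and be writable by the database, and the trace file name is illustrative:

```plsql
-- Start collecting hierarchical profiler data into a raw trace file.
BEGIN
  DBMS_HPROF.start_profiling( location => 'PROF_DIR',
                              filename => 'execute_ws.trc' );
END;
/

-- ... run the code to be profiled here, e.g. the ExecuteWebService
-- function from the earlier slides ...

-- Stop collecting.
BEGIN
  DBMS_HPROF.stop_profiling;
END;
/
```

The raw trace file can then be loaded into the profiler tables with DBMS_HPROF.analyze, or turned into HTML reports with the plshprof command-line utility, giving per-subprogram call counts and elapsed times.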