An introduction to
JVM performance
Performance-talk disclaimer
EVERYTHING IS A LIE!!
Please keep in mind:
• The JVM’s performance model is an implementation detail you cannot rely on.
• Performance is hard to get right and it is difficult to measure.
• We look at HotSpot in this talk, other JVMs might behave differently.
• Occasionally, implementations are performant without appearing to be.
How is Java code executed?
Java javac JVM processor
source code byte code machine code
Optimizations are applied almost exclusively after handing responsibility to the
JVM. This makes them difficult to trace, as the JVM is often seen as a black box.
Other compilers, such as scalac, might however apply optimizations
such as resolving tail recursion into ordinary loops.
HotSpot: interpretation and tiered compilation
level 0: interpreter – machine-code templating
level 1: C1 (client) – no profiling (trivial methods)
level 2: C1 (client) – simple profiling (used while C2 is busy)
level 3: C1 (client) – advanced profiling
level 4: C2 (server) – profile-based optimization
Mostly, steady-state performance is of interest. Only "hot spots" are compiled, with
a single method as the smallest compilation unit.
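Tiered compilation can be observed directly with HotSpot's `-XX:+PrintCompilation` flag. A minimal sketch (the class and method names here are made up for illustration; the log format is an implementation detail and varies between JVM versions):

```java
// Run with: java -XX:+PrintCompilation TieredDemo
// HotSpot logs each compilation event with its tier (1-4); a method
// this hot typically climbs from the interpreter up to C2 (level 4).
public class TieredDemo {

    static long work(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long total = 0;
        // enough invocations to pass the compilation thresholds
        for (int i = 0; i < 20_000; i++) {
            total += work(100);
        }
        System.out.println(total);
    }
}
```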
A central building block: call sites
class Foo {
void bar() {
System.out.println("Hello!");
}
}
A call site, that is a
specific method call
instruction in the code.
void doSomething(Foo val) {
val.bar();
}
Unlike in many other languages, most method calls in Java are virtual.
The question is: How does the JVM reason about what code to execute?
Method invocation is a very common task for a JVM, so it had better be fast!
indirection
Virtual method tables (vtables / itables)
class Foo vtable:
# Method Code
1 hashCode() 0x234522
2 equals(Object) 0x65B4A6
3 toString() 0x588252
… … …
8 bar() (address of Foo's implementation)
class Foo {
void bar() {
System.out.println("Hello!");
}
}
class Sub extends Foo {
@Override
void bar() {
System.out.println("Woops!");
}
}
class Sub vtable:
# Method Code
1 hashCode() 0x234522
2 equals(Object) 0x65B4A6
3 toString() 0x588252
… … …
8 bar() (address of Sub's overriding implementation)
Single inheritance allows for index-based lookup of a method implementation.
But resolving this triple indirection on every method call is still too slow!
Inline caches
class Foo {
void bar() {
System.out.println("Hello!");
}
}
void doSomething(Foo val) {
val.bar(); [cache: val => Foo: address]
}
cached link
Inline caches observe instance classes and remember the address of a class's
method implementation. This avoids the lookup in a virtual method table.
Smalltalk is a prominent user of such caches. But this double indirection is still too slow!
Monomorphic (“linked”) call site
class Foo {
void bar() {
System.out.println("Hello!");
}
}
void doSomething(Foo val) {
[assert: val => Foo]
[goto: method address]
}
direct link
The JVM is based on making optimistic assumptions and adding traps when these
assumptions are not met (“adaptive runtime”). Heuristics show that most call sites only
ever observe a single class (“monomorphic”). These same heuristics also show that
non-monomorphic call sites often observe many types (“megamorphic”).
The JVM has created a
profile for this call site.
It is now optimistic
about what instances it
will observe.
Call-site states: monomorphic (about 90% of call sites), bimorphic, polymorphic, megamorphic.
A monomorphic call site gets a direct link; a megamorphic one falls back to a vtable lookup.
A call site’s profile is generated at runtime and it is adapted after collecting sufficient
information. In general, the JVM tries to be optimistic and becomes more pessimistic
once it must. This is an adaptive approach, native programs cannot do this.
A call site moves between these states by optimization and deoptimization. Bimorphic call
sites get a conditional direct link; megamorphic call sites (common for data structures)
require a vtable lookup, but a dominant target can still be linked directly.
Inlining
void doSomething(Foo val) {
[assert: val => Foo]
System.out.println("Hello!");
}
inlined
Inlining is often considered an "uber optimization" as it gives the JVM more code to
optimize as a single block. The C1 compiler only performs limited inlining after "class
hierarchy analysis" (CHA). The C2 compiler inlines monomorphic and bimorphic call sites
(with a conditional jump) and the dominant target (> 90%) of a megamorphic call site.
Small methods (< 35 bytes) are always inlined. Huge methods are never inlined.
class Foo {
void bar() {
System.out.println("Hello!");
}
}
void doSomething(Foo val) {
[assert: val => Foo]
[goto: method address]
}
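The inlining decisions described above can be observed with HotSpot's diagnostic flags. A hedged sketch (the class and methods are made up for illustration; the log format is an implementation detail):

```java
// Run with:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
// to see HotSpot's per-call-site inlining decisions.
public class InlineDemo {

    // Tiny method, well under the ~35-byte bytecode threshold:
    // a prime candidate for inlining.
    static int half(int value) {
        return value / 2;
    }

    static long sumOfHalves(int upTo) {
        long sum = 0;
        for (int i = 0; i < upTo; i++) {
            sum += half(i);  // hot, monomorphic call site: likely inlined
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfHalves(1_000_000));
    }
}
```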
Call receiver profiling: every type matters!
List<String> list = ... // either ArrayList or LinkedList
list.size(); // a bimorphic call site
// new class turns call site into megamorphic state
new ArrayList<String>() {{
add("foo");
add("bar");
}};
When the JVM profiles call sites or conducts class hierarchy analysis, it takes the receiver
type at a call site into consideration; it does not analyze whether a method is actually
overridden. For this reason, every type matters (even when calling final methods).
You might wonder why this is not optimized:
Looking up an object's class is an order-one operation. Examining a class hierarchy is not.
The JVM needs to choose a trade-off when optimizing, and analyzing the hierarchy does
not pay off (educated guess). "Double brace initialization" does, however, often introduce
new (obsolete) types at call sites. Often enough, this results in vtable/itable lookups!
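Since every double-brace initialization compiles to a fresh anonymous subclass, the profile-friendly fix is simply to build the collection without one. A small sketch (names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NoDoubleBrace {

    // Double-brace initialization: creates an anonymous subclass of
    // ArrayList, introducing a new receiver type at every List call site.
    static List<String> withDoubleBrace() {
        return new ArrayList<String>() {{
            add("foo");
            add("bar");
        }};
    }

    // Same content, but the receiver type stays plain ArrayList.
    static List<String> withoutDoubleBrace() {
        return new ArrayList<>(Arrays.asList("foo", "bar"));
    }

    public static void main(String[] args) {
        System.out.println(withDoubleBrace().equals(withoutDoubleBrace()));
        // the anonymous subclass is a distinct runtime class:
        System.out.println(withDoubleBrace().getClass() == ArrayList.class);
    }
}
```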
Microoptimizing method dispatch
interface Foo { void m(); }
class Sub1 implements Foo { @Override void m() { ... } }
class Sub2 implements Foo { @Override void m() { ... } }
class Sub3 implements Foo { @Override void m() { ... } }
void doSomething(Foo foo) {
foo.m();
}
If all three types are
observed, this call site is
megamorphic. A target
is only inlined if it is
dominant (>90%).
Do not microoptimize, unless you must! The improvement is minimal.
In general: static/private > class virtual (null check) > interface virtual (null + type check).
This is true for all dispatchers (C2, C1, interpreter)
Source: http://shipilev.net/blog/2015/black-magic-method-dispatch/
class Foo {
int id; // 1, 2, 3
static void sub1() { ... }
static void sub2() { ... }
static void sub3() { ... }
}
Fields are never
resolved dynamically.
Static call sites always
have an explicit target.
Idea: avoid dynamic
dispatch but emulate it
at the call site.
(“call by id”)
void doSomething(Foo foo) {
switch (foo.id) {
case 1: Foo.sub1(); break;
case 2: Foo.sub2(); break;
case 3: Foo.sub3(); break;
default: throw new IllegalStateException();
}
}
static void log(Object... args) {
System.out.println("Log: ");
for (Object arg : args) {
System.out.println(arg.toString());
}
}
void doSomething() {
System.out.println("Log: ");
System.out.println("foo".toString());
System.out.println(new Integer(4).toString());
System.out.println(new Object().toString());
}
Call site specialization
void doSomething() {
log("foo", 4, new Object());
}
inlined
void doSomething() {
System.out.println("Log: ");
Object[] args = new Object[]{"foo",4,new Object()};
for (Object arg : args) {
System.out.println(arg.toString());
}
}
Thanks to inlining (and loop unrolling), additional call sites are introduced.
This way, formerly megamorphic call sites can become monomorphic after duplication.
Generally, optimizations allow for new optimizations. This is especially true for inlining.
Unroll the entire loop as
it is now of a fixed size.
ONE TYPE GOOD!
MANY TYPES BAD!
The Hulk performance rule #1
All programs are typed!
Types (which are not the same as classes) allow us to identify "things" in our programs
that are similar. If nothing in your program has similarities, there might be something
wrong. Thus, even virtual machines for dynamic languages look for types. (e.g. V8, Nashorn)
var foo = { };
foo.x = 'foo';
foo.y = 42;
var bar = { };
bar.y = 42;
bar.x = 'bar';
(Hidden-class transitions: * → x → x, y for foo, but * → y → y, x for bar.
Different property insertion orders yield different hidden classes.)
If your program has no structure, how should an
optimizer find any? Any "dynamic program" is typed,
but only implicitly. In the end, you simply did not
make this structure explicit.
V8, hidden class
int size = 20_000;
int maximum = 100;
int[] values = randomValues(size, maximum);
Arrays.sort(values);
Can the outcome of this
conditional instruction be
predicted (by the processor)?
Branch prediction
A conditional control flow
is referred to as branch.
int sum = 0;
for (int i = 0; i < 1_000; i++) {
for (int value : values) {
if (value > 50) {
sum += value;
} else {
sum -= value;
}
}
}
Warning: This example is too simple; the VM (loop interchange, conditional moves) has
become smarter than that. After adding more "noise", the example would however work.
An unfortunate example where the above problem applies are (currently!) Java 8 streams,
which build on (internal) iteration and conditionals (i.e. filters). If the VM fails to inline such
a stream expression (under a polluted profile), streams can become a performance bottleneck.
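The classic sorted-versus-unsorted demonstration of branch prediction can be sketched as follows. As the slide warns, modern JITs may optimize this exact shape away, so timings are not guaranteed; the code therefore only verifies that both traversal orders compute the same result (class and method names are made up for illustration):

```java
import java.util.Arrays;
import java.util.Random;

public class BranchDemo {

    static int[] randomValues(int size, int maximum) {
        Random random = new Random(42);  // fixed seed: reproducible data
        int[] values = new int[size];
        for (int i = 0; i < size; i++) {
            values[i] = random.nextInt(maximum);
        }
        return values;
    }

    // The branch on value > 50 is unpredictable on random data but
    // almost perfectly predictable once the array is sorted.
    static long branchySum(int[] values) {
        long sum = 0;
        for (int value : values) {
            if (value > 50) {
                sum += value;
            } else {
                sum -= value;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] unsorted = randomValues(20_000, 100);
        int[] sorted = unsorted.clone();
        Arrays.sort(sorted);
        // Same elements, same result; only the branch pattern differs.
        System.out.println(branchySum(unsorted) == branchySum(sorted));
    }
}
```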
Loop peeling (in combination with branch specialization)
int[][] matrix = ...
for (int[] row : matrix) {
boolean first = true;
for (int value : row) {
if(first) {
first = false;
System.out.println("Row: ");
}
System.out.print(value + " ");
}
System.out.println(" --- ");
}
int[][] matrix = ...
for (int[] row : matrix) {
// peeled first iteration: the 'first' branch is known to be taken
if (row.length > 0) {
System.out.println("Row: ");
System.out.print(row[0] + " ");
}
// remaining iterations: the 'first' branch is known not to be taken
for (int index = 1; index < row.length; index++) {
System.out.print(row[index] + " ");
}
System.out.println(" --- ");
}
Disclaimer: There is much more “loop stuff”.
PREDICTION GOOD!
RANDOM BAD!
The Hulk performance rule #2
Keep in mind:
Obviously, any application contains an inherent
unpredictability that cannot be removed.
Performant programs should however not add
more complexity than necessary, as this burdens
modern processors, which prefer processing
long, predictable pipes of instructions.
List<String> list = ...;
for (String s : list) {
System.out.println(s);
}
Escape analysis
List<String> list = ...;
Iterator<String> it = list.iterator();
while (it.hasNext()) {
System.out.println(it.next());
}
object
allocation
Escape analysis is difficult (expensive) to conduct. By avoiding long scopes, i.e. writing
short methods, an object’s scope is easier to determine. This will most likely improve in
future JVM implementations.
scope
Any heap-allocated object needs to be garbage-collected at some point. Even worse,
accessing an object on the heap implies an indirection, which should be avoided.
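Whether escape analysis actually eliminates an allocation is an implementation detail, but short methods make it easy for the JVM to prove that an object never escapes. A hedged sketch (names are made up for illustration):

```java
public class EscapeDemo {

    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never leaves this short method, so escape analysis can
    // prove it does not escape and replace it with two local ints
    // ("scalar replacement"): no heap allocation, nothing to collect.
    static int squaredLength(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(squaredLength(3, 4));  // prints 25
    }
}
```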
STACK GOOD!
HEAP BAD!
The Hulk performance rule #3
int size = 20_000;
int[] values = randomValues(size);
long start = System.currentTimeMillis();
int sum = 0;
for (int value : values) {
sum += value;
}
long end = System.currentTimeMillis();
System.out.println("Took " + (end - start) + " ms");
// 'sum' is never used afterwards: the entire loop is dead code
Dead-code elimination
Also, the outcome might depend on the JVM's collected code profile that was
gathered before the benchmark is run. Furthermore, the measured time represents
wall-clock time, which is not a good choice for measuring small amounts of time.
void run() {
int size = 500_000;
for (int i = 0; i < 10_000; i++) {
doBenchmark(randomValues(size));
}
int[] values = randomValues(size);
System.out.println("This time is for real!");
doBenchmark(values);
}
void doBenchmark(int[] values) {
long start = System.nanoTime();
int sum = 0;
for (int value : values) {
sum += value;
}
long end = System.nanoTime();
System.out.println("Ignore: " + sum);
System.out.println("Took " + (end - start) + " ns");
}
A better benchmark
A good benchmark: JMH
class Sum {
int size = 20_000;
int[] values;
@Setup
void setup() {
values = randomValues(size);
}
@Benchmark
int sum() {
int sum = 0;
for (int value : values) {
sum += value;
}
return sum;
}
}
In general, avoid measuring loops.
Assuring JIT-compilation
void foo() {
for (int i = 0; i < 10000; i++);
// do something runtime intensive.
}
Due to “back-edge overflow”, the method is compiled upon its first invocation.
As the loop is not useful, it is eliminated as dead code.
This can sometimes help for testing long-running benchmarks that are not invoked
sufficiently often in a benchmark‘s warm-up phase which is time-constrained.
This can also be used in production systems to force the JIT to warm up a method.
The method only needs to be invoked a single time before using it. This should
however be used with care as it is making an assumption about the inner workings
of the used JVM.
Measuring the right thing, the right way
Measuring the performance of two operational blocks in isolation does not normally
resemble the performance of both blocks executed subsequently.
The actual performance might be better or worse (due to "profile pollution")!
Best example for such "volume contractions": repeated operations. The more the JIT
has to chew on, the more the compiler can usually optimize.
HARNESS GOOD!
SELF-MADE BAD!
The Hulk performance rule #4
On-stack replacement
public static void main(String[] args) {
int size = 500_000;
long start = System.nanoTime();
int sum = 0;
for (int value : randomValues(size)) {
sum += value;
}
long end = System.nanoTime();
System.out.println("Took " + (end - start) + " ns");
}
On-stack replacement allows the compilation of methods that are already running.
If you need it, you did something wrong. (It mainly tackles awkward benchmarks.)
ON-STACK
REPLACEMENT?
OVERRATED!
The Hulk performance rule #5
However:
If the VM must deoptimize a running method,
this also implies an on-stack replacement of
the running, compiled method. Normally, such
deoptimization is however not referred to as
on-stack replacement.
Intrinsics
The HotSpot intrinsics are listed in vmSymbols.hpp
class Integer {
public static int bitCount(int i) {
i = i - ((i >>> 1) & 0x55555555);
i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
i = (i + (i >>> 4)) & 0x0f0f0f0f;
i = i + (i >>> 8); i = i + (i >>> 16);
return i & 0x3f;
}
}
On x86, this method can be reduced to the POPCNT instruction.
Ideally, the JVM would discover the legitimacy of this reduction from analyzing the
given code. Realistically, the JVM requires hints for such reductions. Therefore, some
methods of the JCL are known to the JVM to be reducible.
Such reductions are also performed for several native methods of the JCL. JNI is
normally to be avoided as native code cannot be optimized by the JIT compiler.
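The intrinsic and the hand-written bit-twiddling version must agree; the difference is only that HotSpot can replace `Integer.bitCount` with a single `POPCNT` instruction. A sketch (the class name is made up; the algorithm is copied from the slide):

```java
public class BitCountDemo {

    // The JCL's bit-twiddling implementation, as shown on the slide.
    // HotSpot knows Integer.bitCount as an intrinsic and can emit one
    // POPCNT instruction instead of executing this code.
    static int manualBitCount(int i) {
        i = i - ((i >>> 1) & 0x55555555);
        i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
        i = (i + (i >>> 4)) & 0x0f0f0f0f;
        i = i + (i >>> 8);
        i = i + (i >>> 16);
        return i & 0x3f;
    }

    public static void main(String[] args) {
        for (int i : new int[] {0, 1, 255, -1, 0x12345678}) {
            // the intrinsic and the manual version agree on every input
            System.out.println(manualBitCount(i) == Integer.bitCount(i));
        }
    }
}
```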
Algorithmic complexity
Remember that data structures are a sort of algorithm!
Date getTomorrowsDate() throws InterruptedException {
Thread.sleep(24 * 60 * 60 * 1000);
return new Date();
}
class ArrayList<E> implements List<E> {
E[] data;
}
class LinkedList<E> implements List<E> {
Node<E> first, last;
}
Besides access patterns, data locality is an important factor for performance.
Sometimes, you can also trade memory footprint for speed.
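The locality difference between the two list implementations can be sketched as follows; both traversals are O(n) and must produce the same result, but the `ArrayList` walks one contiguous backing array while the `LinkedList` chases scattered node pointers (class names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class LocalityDemo {

    static long sum(List<Integer> values) {
        long sum = 0;
        for (int value : values) {
            sum += value;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        // Same complexity, same result; the difference a benchmark would
        // show comes from cache-line locality, not from the algorithm.
        System.out.println(sum(arrayList) == sum(linkedList));
    }
}
```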
THINK GOOD!
GUESS BAD!
The Hulk performance rule #6
Reflection, method handles and regular invocation
Method method = Foo.class.getDeclaredMethod("bar", int.class);
int result = (int) method.invoke(new Foo(), 42);
class Method {
Object invoke(Object obj, Object... args);
}
boxing
2x boxing
Escape analysis to the rescue? Hopefully in the future. Today, it does not look so good.
class Foo {
int bar(int value) {
return value * 2;
}
}
Reflection, method handles and regular invocation
class Foo {
int bar(int value) {
return value * 2;
}
}
MethodType methodType = MethodType
.methodType(int.class, int.class);
MethodHandle methodHandle = MethodHandles
.lookup()
.findVirtual(Foo.class, "bar", methodType);
int result = (int) methodHandle.invokeExact(new Foo(), 42);
class MethodHandle {
@PolymorphicSignature
Object invokeExact(Object... args) throws Throwable;
}
This is nothing you could implement yourself; it is JVM magic. Method handles also work for fields.
Further intrinsification methods: share/vm/classfile/vmSymbols.hpp
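A field-access handle can be obtained via `MethodHandles.Lookup.findGetter`. A minimal sketch (class, field, and helper names are made up for illustration; with `invokeExact`, the call's receiver and return types must match the handle's type exactly):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class FieldHandleDemo {

    static class Foo {
        int value = 42;
    }

    // Reads Foo.value through a method handle of type (Foo)int.
    static int readValue(Foo foo) {
        try {
            MethodHandle getter = MethodHandles.lookup()
                    .findGetter(Foo.class, "value", int.class);
            // invokeExact: the cast to int is part of the exact signature
            return (int) getter.invokeExact(foo);
        } catch (Throwable t) {
            throw new IllegalStateException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(readValue(new Foo()));  // prints 42
    }
}
```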
REFLECTION GOOD!
BOXING BAD!
The Hulk performance rule #7
Exception performance
boolean doSomething(int i) {
try {
return evaluate(i);
} catch (Exception e) {
return false;
}
}
boolean evaluate(int i) throws Exception {
if(i > 0) {
return true;
} else {
throw new Exception();
}
}
Exceptions can be used to implement “distributed control flow”. But please don’t!
Source: http://shipilev.net/blog/2014/exceptional-performance/
Exception performance (2)
dynamic/static: exception is created on throw vs. exception is stored in a field
stackless: avoid stack-trace creation via a flag or by overriding the creation method
chained/rethrow: wrapping a caught exception vs. throwing it again
EXCEPTION
CONTROL-FLOW?
HULK SMASH!
The Hulk performance rule #8
Main memory
False sharing
class Shared {
int x;
int y;
}
(Diagram: x and y share one cache line; thread 1 writes x while thread 2 writes y,
and each write invalidates the copy of the line in the other core's L1 cache.)
contention
class Shared {
@Contended
int x;
@Contended
int y;
}
(Diagram: with @Contended, x and y are padded onto separate cache lines.)
The field annotation increases memory usage significantly! Adding "padding fields" can
simulate the same effect, but object memory layouts are an implementation detail and
have changed in the past. Note that arrays are always allocated in contiguous blocks!
Conversely, cache (line) locality can improve a single thread's performance.
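The manual-padding alternative can be sketched as follows. This is a gamble, not a guarantee: the seven long fields aim to push x and y onto different 64-byte cache lines, but field layout is an implementation detail (unlike `@Contended`, which asks the JVM directly). Names are made up for illustration:

```java
public class PaddedCounters {

    static class Shared {
        volatile long x;
        long p1, p2, p3, p4, p5, p6, p7;  // 56 bytes of padding, never read
        volatile long y;
    }

    // Two threads hammer the two fields; with effective padding their
    // cache lines no longer ping-pong between cores.
    static Shared race(int iterations) {
        Shared shared = new Shared();
        Thread writerX = new Thread(() -> {
            for (int i = 0; i < iterations; i++) shared.x++;  // sole writer of x
        });
        Thread writerY = new Thread(() -> {
            for (int i = 0; i < iterations; i++) shared.y++;  // sole writer of y
        });
        writerX.start();
        writerY.start();
        try {
            writerX.join();
            writerY.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return shared;
    }

    public static void main(String[] args) {
        Shared shared = race(1_000_000);
        System.out.println(shared.x + " " + shared.y);  // prints 1000000 1000000
    }
}
```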
Volatile access performance (x86 Ivy bridge, 64-bit)
Source:http://shipilev.net/blog/2014/all-accesses-are-atomic/
Volatile access performance (x86 Ivy bridge, 32-bit)
Source:http://shipilev.net/blog/2014/all-accesses-are-atomic/
private synchronized void foo() {
// ...
}
private synchronized void bar() {
// ...
}
void doSomething() {
synchronized(this) {
foo(); // without lock
bar(); // without lock
}
}
void doSomething() {
foo();
bar();
}
Lock coarsening
private void foo() {
// ...
}
private void bar() {
// ...
}
locks and unlocks twice
Locks are initially biased towards the first locking thread. (This is currently only possible
if the identity hash code is not yet computed.) Under contention, locks are promoted
to become "thick" locks.
VOLATILE SLOW!
BLOCKING SLOWER!
The Hulk performance rule #9
javac optimizations: constant folding of compile-time constants
class Foo {
final boolean foo = true;
}
class Bar {
void bar(Foo foo) {
boolean bar = foo.foo;
}
}
javac inlines all compile-time constants (JLS §15.28): compile-time constants are
primitives and strings with values that can be fully resolved at javac-compilation time.
"foo" // compile-time constant
"bar".toString() // no compile-time constant
Most common use case: defining static final fields that are shared with other classes.
This does not require linking or even loading of the class that contains such constants.
This also means that the referring classes need to be recompiled if constants change!
class Foo {
final boolean foo = true;
}
class Bar {
void bar(Foo foo) {
foo.getClass(); // null check
boolean bar = true;
}
}
Be aware of compile-time constants when using reflection! Also, be aware of stackless
NullPointerExceptions which are thrown by C2-compiled Object::getClass invocations.
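When a shared `static final` field should not be baked into referring classes, a common idiom is to initialize it with a non-constant expression, which defeats folding per JLS §15.28. A sketch (class and field names are made up for illustration):

```java
public class Config {

    // A compile-time constant (JLS §15.28): javac copies the value 42
    // into every referring class at compilation time.
    static final int FOLDED = 42;

    // Not a compile-time constant: the method call defeats folding, so
    // referring classes read the field at runtime and pick up changes
    // without being recompiled.
    static final int NOT_FOLDED = Integer.parseInt("42");

    public static void main(String[] args) {
        System.out.println(FOLDED == NOT_FOLDED);  // prints true
    }
}
```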
constant folding
with null check
in disguise (JLS!)
JLS?
TL;DR!
The Hulk performance rule #10
“A fool with a tool is still a fool“
The basic problem (Heisenberg):
Once you measure a system's performance, you change the system.
In a simple case, a no-op method that reports its runtime is no longer a no-op.
“A fool with a tool is still a fool“ (2)
Many profilers use the JVMTI for collecting data. Such “native-C agents” are only
activated when the JVM reaches a safe-point where the JVM can expose a sort
of “consistent state” to this “foreign code”.
blocked
running
If the application only reaches a safe point when a thread is blocked then a profiler would
suggest that the application is never running. This is of course nonsense.
“Honest profiler” (Open Source): Collects data by using UNIX signals.
“Flight recorder” (Oracle JDK): Collects data on a lower level than JVMTI.
“A fool with a tool is still a fool“ (3)
push %rbp
mov %rsp,%rbp
mov $0x0,%eax
movl $0x0,-0x4(%rbp)
movl $0x5,-0x8(%rbp)
mov -0x8(%rbp),%ecx
add $0x6,%ecx
mov %ecx,-0xc(%rbp)
pop %rbp
retq
int doSomething() {
int a = 5;
int b = a + 6;
return b;
}
For some use cases, it helps to look at the assembly. For this, you need a development
build or you need to compile the disassembler manually. Google is your friend. Sort of
painful on Windows. JMH has great support for mapping used processor cycles to
assembly using Unix's "perf". JITWatch is a great log viewer for JIT code.
The JVM can expose quite a lot (class loading, garbage collection, JIT compilation,
deoptimization, etc.) when using specific -XX flags. It is also possible to print JIT assembly.
Generally speaking, the JVM honors clean code, appropriate typing, small methods
and predictable control flow. It is a clear strength of the JVM that you do not need
to know much about the JVM's execution model in order to write performant
applications. When writing critical code segments, a closer analysis might however
be appropriate.
Professor Hulk’s general performance rule
http://rafael.codes
@rafaelcodes
http://documents4j.com
https://github.com/documents4j/documents4j
http://bytebuddy.net
https://github.com/raphw/byte-buddy

How to install and activate eGrabber JobGrabber
 
OpenChain @ LF Japan Executive Briefing - May 2024
OpenChain @ LF Japan Executive Briefing - May 2024OpenChain @ LF Japan Executive Briefing - May 2024
OpenChain @ LF Japan Executive Briefing - May 2024
 
SQL Injection Introduction and Prevention
SQL Injection Introduction and PreventionSQL Injection Introduction and Prevention
SQL Injection Introduction and Prevention
 

An introduction to JVM performance

  • 2. Performance-talk disclaimer EVERYTHING IS A LIE!! Please keep in mind: • The JVM’s performance model is an implementation detail you cannot rely on. • Performance is hard to get right and it is difficult to measure. • We look at HotSpot in this talk, other JVMs might behave differently. • Occasionally, implementations are performant without appearing to be.
  • 3. How is Java code executed? Java source code → javac → byte code → JVM → machine code (processor). Optimizations are applied almost exclusively after handing responsibility to the JVM. This makes them difficult to trace, as the JVM is often seen as a black box. Other compilers, such as scalac, might however apply optimizations such as resolving tail recursion into ordinary loops.
  • 4. HotSpot: interpretation and tiered compilation interpreter C1 (client) C2 (server) level 0 level 1 level 2 level 3 level 4 C2 is busy trivial method machine code templating no profiling simple profiling advanced profiling profile-based optimization Mostly, steady state performance is of interest. Compilation only of “hot spots” with a single method as the smallest compilation unit.
  • 5. A central building block: call sites
  class Foo { void bar() { System.out.println("Hello!"); } }
  void doSomething(Foo val) { val.bar(); } // a call site, that is, a specific method call instruction in the code
  Unlike in many other languages, in Java most method calls are virtual (an indirection). The question is: how does the JVM reason about what code to execute? Method invocation is a very common task for a JVM, it better be fast!
  • 6. Virtual method tables (vtables / itables)
  class Foo { void bar() { System.out.println("Hello!"); } }
  class Sub extends Foo { @Override void bar() { System.out.println("Woops!"); } }
  vtable of class Foo:
  #  Method          Code
  1  hashCode()      0x234522
  2  equals(Object)  0x65B4A6
  3  toString()      0x588252
  …  …               …
  8  bar()           (address of Foo's bar)
  vtable of class Sub:
  #  Method          Code
  1  hashCode()      0x234522
  2  equals(Object)  0x65B4A6
  3  toString()      0x588252
  …  …               …
  8  bar()           (address of Sub's overriding bar)
  Single inheritance allows for index-based lookup of a method implementation. But resolving this triple indirection on every method call is still too slow!
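The vtable lookup above can be observed from plain Java: the call target depends on the receiver's runtime class, not on the static type of the variable. A minimal sketch, mirroring the slide's Foo/Sub classes (the `call` helper and the `String` return type are illustrative additions, not from the deck):

```java
// Dynamic dispatch: the same call site executes different code depending
// on the runtime class of the receiver.
class Foo {
    String bar() { return "Hello!"; }
}

class Sub extends Foo {
    @Override
    String bar() { return "Woops!"; }
}

class DispatchDemo {
    // One call site, resolved through the receiver's (conceptual) vtable.
    static String call(Foo val) {
        return val.bar();
    }

    public static void main(String[] args) {
        System.out.println(call(new Foo())); // Hello!
        System.out.println(call(new Sub())); // Woops!
    }
}
```

As long as only `Foo` instances reach `call`, the call site is monomorphic; the moment a `Sub` shows up, the JVM must widen its profile.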
  • 7. Inline caches
  class Foo { void bar() { System.out.println("Hello!"); } }
  void doSomething(Foo val) { val.bar(); [cache: val => Foo: address] } // cached link
  Inline caches observe instance classes and remember the address of a class's method implementation. This avoids the lookup in a virtual method table. Smalltalk is a prominent user of such caches. But this double indirection is still too slow!
  • 8. Monomorphic ("linked") call site
  class Foo { void bar() { System.out.println("Hello!"); } }
  void doSomething(Foo val) { [assert: val => Foo] [goto: method address] } // direct link
  The JVM is based on making optimistic assumptions and adding traps for when these assumptions are not met ("adaptive runtime"). Heuristics show that most call sites only ever observe a single class ("monomorphic"). These same heuristics also show that non-monomorphic call sites often observe many types ("megamorphic"). The JVM has created a profile for this call site. It is now optimistic about what instances it will observe.
  • 9. monomorphic bimorphic polymorphic megamorphic direct link vtable lookup (about 90%) A call site’s profile is generated at runtime and it is adapted after collecting sufficient information. In general, the JVM tries to be optimistic and becomes more pessimistic once it must. This is an adaptive approach, native programs cannot do this. optimization deoptimization home of rumors conditional direct link (data structures) (but dominant targets)
  • 10. Inlining
  class Foo { void bar() { System.out.println("Hello!"); } }
  void doSomething(Foo val) { [assert: val => Foo] [goto: method address] }
  becomes (inlined):
  void doSomething(Foo val) { [assert: val => Foo] System.out.println("Hello!"); }
  Inlining is often considered an "uber optimization" as it gives the JVM more code to optimize as a single block. The C1 compiler does only little inlining after performing "class hierarchy analysis" (CHA). The C2 compiler inlines monomorphic and bimorphic call sites (with a conditional jump) and the dominant target (> 90%) of a megamorphic call site. Small methods (< 35 bytes) are always inlined. Huge methods are never inlined.
  • 11. Call receiver profiling: every type matters! List<String> list = ... // either ArrayList or LinkedList list.size(); // a bimorphic call site // new class turns call site into megamorphic state new ArrayList<String>() {{ add("foo"); add("bar"); }}; When the JVM profiles call sites or conducts class hierarchy analysis, it takes the receiver type at a call site into consideration; it does not analyze whether a method is actually overridden. For this reason, every type matters (even when calling final methods). You might wonder why this is not optimized: Looking up an object's class is an order-one operation. Examining a class hierarchy is not. The JVM needs to choose a trade-off when optimizing, and analyzing the hierarchy does not pay off (educated guess). "Double brace initialization", however, often introduces new (superfluous) types at call sites. Often enough, this results in vtable/itable lookups!
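The "double brace" problem is easy to demonstrate: the idiom compiles to an anonymous subclass of ArrayList, so every such initializer introduces a fresh receiver type. A minimal sketch (the class name `DoubleBraceDemo` is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class DoubleBraceDemo {
    public static void main(String[] args) {
        // "Double brace initialization" creates an anonymous subclass of
        // ArrayList whose instance initializer adds the elements.
        List<String> bad = new ArrayList<String>() {{
            add("foo");
            add("bar");
        }};
        // The receiver type the profiler sees is the anonymous class,
        // not ArrayList - one more type polluting the call site profile.
        System.out.println(bad.getClass() == ArrayList.class); // false

        // A plain ArrayList keeps the call site profile narrow.
        List<String> good = new ArrayList<>();
        good.add("foo");
        good.add("bar");
        System.out.println(good.getClass() == ArrayList.class); // true
    }
}
```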
  • 12. Microoptimizing method dispatch interface Foo { void m(); } class Sub1 implements Foo { @Override void m() { ... } } class Sub2 implements Foo { @Override void m() { ... } } class Sub3 implements Foo { @Override void m() { ... } } void doSomething(Foo foo) { foo.m(); } If all three types are observed, this call site is megamorphic. A target is only inlined if it is dominant (>90%). Do not microoptimize, unless you must! The improvement is minimal. In general: static/private > class virtual (null check) > interface virtual (null + type check). This is true for all dispatchers (C2, C1, interpreter) Source: http://shipilev.net/blog/2015/black-magic-method-dispatch/ class Foo { int id // 1, 2, 3 static void sub1() { ... } static void sub2() { ... } static void sub3() { ... } } Fields are never resolved dynamically. Static call sites always have an explicit target. Idea: avoid dynamic dispatch but emulate it at the call site. (“call by id”) void doSomething(Foo foo) { switch (foo.id) { case 1: Foo.sub1(); break; case 2: Foo.sub2(); break; case 3: Foo.sub3(); break; default: throw new IllegalStateException(); } }
  • 13. static void log(Object... args) { System.out.println("Log: "); for (Object arg : args) { System.out.println(arg.toString()); } } void doSomething() { System.out.println("Log: "); System.out.println("foo".toString()); System.out.println(new Integer(4).toString()); System.out.println(new Object().toString()); } Call site specialization void doSomething() { log("foo", 4, new Object()); } inlined void doSomething() { System.out.println("Log: "); Object[] args = new Object[]{"foo",4,new Object()}; for (Object arg : args) { System.out.println(arg.toString()); } } Thanks to inlining (and loop unrolling), additional call sites are introduced. This way, formerly megamorphic call sites can become monomorphic after duplication. Generally, optimizations allow for new optimizations. This is especially true for inlining. Unroll the entire loop as it is now of a fixed size.
  • 14. ONE TYPE GOOD! MANY TYPES BAD! The Hulk performance rule #1
  • 15. All programs are typed! Types (which are not the same as classes) allow us to identify "things" in our programs that are similar. If nothing in your program has similarities, there might be something wrong. Thus, even virtual machines for dynamic languages look for types (e.g. V8, Nashorn).
  var foo = { }; foo.x = 'foo'; foo.y = 42;
  var bar = { }; bar.y = 42; bar.x = 'bar';
  (V8 hidden-class transitions: * → x → x, y for foo, but * → y → y, x for bar — different shapes, despite the same properties.)
  If your program has no structure, how should an optimizer find any? Any "dynamic program" is typed, but it is so implicitly. In the end, you simply did not make this structure explicit.
  • 16. Branch prediction
  int size = 20_000; int maximum = 100;
  int[] values = randomValues(size, maximum);
  Arrays.sort(values);
  int sum = 0;
  for (int i = 0; i < 1_000; i++) { for (int value : values) { if (value > 50) { sum += value; } else { sum -= value; } } }
  A conditional control flow is referred to as a branch. Can the outcome of this conditional instruction be predicted (by the processor)? Warning: this example is too simple; the VM (loop interchange, conditional moves) has become smarter than that. After adding more "noise", the example would however work. An unfortunate example where the above problem applies is (currently!) Java 8 streams, which build on (internal) iteration and conditionals (i.e. filters). If the VM fails to inline such a stream expression (under a polluted profile), streams can be a performance bottleneck.
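The slide's example can be sketched as a runnable whole. As the slide itself warns, on current JVMs the branch may be compiled to a conditional move, hiding the effect; the sketch (class and method names are illustrative, and a fixed seed replaces the slide's `randomValues` helper) only demonstrates that sorting changes predictability, not the result:

```java
import java.util.Arrays;
import java.util.Random;

class BranchDemo {
    // The branch from the slide: add values above 50, subtract the rest.
    static long conditionalSum(int[] values) {
        long sum = 0;
        for (int value : values) {
            if (value > 50) {
                sum += value;
            } else {
                sum -= value;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] unsorted = new Random(42).ints(20_000, 0, 100).toArray();
        int[] sorted = unsorted.clone();
        Arrays.sort(sorted);
        // Same values, same result - only the branch pattern differs:
        // random order mispredicts, sorted order is perfectly predictable.
        System.out.println(conditionalSum(unsorted) == conditionalSum(sorted)); // true
    }
}
```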
  • 17. Loop peeling (in combination with branch specialization)
  int[][] matrix = ...
  for (int[] row : matrix) { boolean first = true; for (int value : row) { if (first) { first = false; System.out.println("Row: "); } System.out.print(value + " "); } System.out.println(" --- "); }
  After peeling the first iteration out of the inner loop:
  int[][] matrix = ...
  for (int[] row : matrix) { if (row.length > 0) { System.out.println("Row: "); System.out.print(row[0] + " "); for (int index = 1; index < row.length; index++) { System.out.print(row[index] + " "); } } System.out.println(" --- "); }
  The first condition is trivially true in the peeled copy and trivially false inside the loop, so branch specialization removes the flag entirely. Disclaimer: There is much more "loop stuff".
  • 18. PREDICTION GOOD! RANDOM BAD! The Hulk performance rule #2 Keep in mind: Obviously, any application contains an inherent unpredictability that cannot be removed. Performant programs should however not add more complexity than necessary, as this burdens modern processors, which prefer processing long, predictable pipes of instructions.
  • 19. Escape analysis
  List<String> list = ...; for (String s : list) { System.out.println(s); }
  desugars to:
  List<String> list = ...; Iterator<String> it = list.iterator(); while (it.hasNext()) { System.out.println(it.next()); } // object allocation, scope
  Any heap-allocated object needs to be garbage collected at some point. Even worse, accessing an object on the heap implies an indirection, which should be avoided. Escape analysis is difficult (expensive) to conduct. By avoiding long scopes, i.e. writing short methods, an object's scope is easier to determine. This will most likely improve in future JVM implementations.
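A short sketch of what escape analysis looks for (the `Point` class and method names are illustrative, not from the deck): in the first method the allocation is confined to the method body, so after inlining the JIT may prove it non-escaping and scalar-replace it; in the second the object escapes via the return value and must live on the heap.

```java
class EscapeDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never leaves this short method - a candidate for
    // scalar replacement: no heap allocation, no GC pressure.
    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    // Here the Point escapes through the return value,
    // so it must be allocated on the heap.
    static Point escaping(int x, int y) {
        return new Point(x, y);
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(3, 4)); // 25
    }
}
```

Whether scalar replacement actually happens is up to the JIT; short methods simply make the proof easier, which is the slide's point.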
  • 20. STACK GOOD! HEAP BAD! The Hulk performance rule #3
  • 21. Dead-code elimination
  int size = 20_000; int[] values = randomValues(size);
  long start = System.currentTimeMillis();
  int sum = 0; for (int value : values) { sum += value; }
  long end = System.currentTimeMillis();
  System.out.println("Took " + (end - start) + " ms");
  As sum is never used, the entire loop can be eliminated as dead code and the benchmark measures nothing. Also, the outcome might depend on the JVM's collected code profile that was gathered before the benchmark is run. Also, the measured time represents wall-clock time, which is not a good choice for measuring small amounts of time.
  • 22. A better benchmark
  void run() { int size = 500_000; for (int i = 0; i < 10_000; i++) { doBenchmark(randomValues(size)); } int[] values = randomValues(size); System.out.println("This time is for real!"); doBenchmark(values); }
  void doBenchmark(int[] values) { long start = System.nanoTime(); int sum = 0; for (int value : values) { sum += value; } long end = System.nanoTime(); System.out.println("Ignore: " + sum); System.out.println("Took " + (end - start) + " ns"); }
  • 23. A good benchmark: JMH
  @State(Scope.Thread) class Sum { int size = 20_000; int[] values; @Setup void setup() { values = randomValues(size); } @Benchmark int sum() { int sum = 0; for (int value : values) { sum += value; } return sum; } }
  In general, avoid measuring loops.
  • 24. Assuring JIT-compilation void foo() { for (int i = 0; i < 10000; i++); // do something runtime intensive. } Due to “back-edge overflow”, the method is compiled upon its first invocation. As the loop is not useful, it is eliminated as dead code. This can sometimes help for testing long-running benchmarks that are not invoked sufficiently often in a benchmark‘s warm-up phase which is time-constrained. This can also be used in production systems to force the JIT to warm up a method. The method only needs to be invoked a single time before using it. This should however be used with care as it is making an assumption about the inner workings of the used JVM.
  • 25. Measuring the right thing, the right way Measuring the performance of two operational blocks separately does not normally resemble the performance of both blocks executed subsequently. The actual performance might be better or worse (due to "profile pollution")! The best example of such "volume contractions": repeated operations. The more the JIT has to chew on, the more the compiler can usually optimize.
  • 26. HARNESS GOOD! SELF-MADE BAD! The Hulk performance rule #4
  • 27. On-stack replacement public static void main(String[] args) { int size = 500_000; long start = System.nanoTime(); int sum = 0; for (int value : randomValues(size)) { sum += value; } long end = System.nanoTime(); System.out.println("Took " + (end - start) + " ns"); } On-stack replacement allows the compilation of methods that are already running. If you need it, you did something wrong. (It mainly tackles awkward benchmarks.)
  • 28. ON-STACK REPLACEMENT? OVERRATED! The Hulk performance rule #5 However: If the VM must deoptimize a running method, this also implies an on-stack replacement of the running, compiled method. Normally, such deoptimization is however not referred to as on-stack replacement.
  • 29. Intrinsics The HotSpot intrinsics are listed in vmSymbols.hpp class Integer { public static int bitCount(int i) { i = i - ((i >>> 1) & 0x55555555); i = (i & 0x33333333) + ((i >>> 2) & 0x33333333); i = (i + (i >>> 4)) & 0x0f0f0f0f; i = i + (i >>> 8); i = i + (i >>> 16); return i & 0x3f; } } On x86, this method can be reduced to the POPCNT instruction. Ideally, the JVM would discover the legitimacy of this reduction from analyzing the given code. Realistically, the JVM requires hints for such reductions. Therefore, some methods of the JCL are known to the JVM to be reducible. Such reductions are also performed for several native methods of the JCL. JNI is normally to be avoided as native code cannot be optimized by the JIT compiler.
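The bit-count intrinsic is easy to verify from Java: the JCL method and the slide's "textbook" implementation agree on every input, which is exactly what licenses the JVM to substitute POPCNT. A sketch (the class and helper names are illustrative):

```java
class IntrinsicDemo {
    // The hand-rolled popcount from the slide (Integer.bitCount's algorithm).
    static int manualBitCount(int i) {
        i = i - ((i >>> 1) & 0x55555555);
        i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
        i = (i + (i >>> 4)) & 0x0f0f0f0f;
        i = i + (i >>> 8);
        i = i + (i >>> 16);
        return i & 0x3f;
    }

    public static void main(String[] args) {
        // The intrinsified JCL method: the JIT can replace the whole call
        // with a single POPCNT instruction on x86.
        for (int value : new int[] {0, 1, 0xFF, 0xF0F0F0F0, -1}) {
            System.out.println(manualBitCount(value) == Integer.bitCount(value)); // true
        }
    }
}
```

Calling `Integer.bitCount` is therefore both the clearest and the fastest option; re-implementing it by hand forfeits the intrinsic.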
  • 30. Algorithmic complexity Remember that data structures are a sort of algorithm! Date getTomorrowsDate() { Thread.sleep(24 * 60 * 60 * 1000); return new Date(); } class ArrayList<E> implements List<E> { E[] data; } class LinkedList<E> implements List<E> { Node<E> first, last; } Aside from access patterns, data locality is an important factor for performance. Sometimes, you can also trade memory footprint for speed.
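The ArrayList/LinkedList contrast is the classic illustration: indexed access into a backing array is O(1) with good cache locality, while `LinkedList.get(i)` walks node pointers from the nearest end, making the indexed loop below O(n²) overall on a LinkedList. A sketch (names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class ListDemo {
    // O(n) on ArrayList (contiguous array, cache-friendly);
    // O(n^2) on LinkedList (pointer chasing on every get(i)).
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 10_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        // Same result, radically different cost profiles.
        System.out.println(sumByIndex(arrayList) == sumByIndex(linkedList)); // true
    }
}
```

Iterating a LinkedList with its iterator (or for-each) restores O(n), but the array still wins on locality: the nodes of a linked list are scattered across the heap.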
  • 31. THINK GOOD! GUESS BAD! The Hulk performance rule #6
  • 32. Reflection, method handles and regular invocation
  class Foo { int bar(int value) { return value * 2; } }
  Method method = Foo.class.getDeclaredMethod("bar", int.class);
  int result = (Integer) method.invoke(new Foo(), 42); // boxing; 2x boxing
  class Method { Object invoke(Object obj, Object... args); }
  Escape analysis to the rescue? Hopefully in the future. Today, it does not look so good.
  • 33. Reflection, method handles and regular invocation class Foo { int bar(int value) { return value * 2; } } MethodType methodType = MethodType .methodType(int.class, int.class); MethodHandle methodHandle = MethodHandles .lookup() .findVirtual(Foo.class, "bar", methodType); int result = methodHandle.invokeExact(new Foo(), 42); class MethodHandle { @PolymorphicSignature Object invokeExact(Object... args) throws Throwable; } This is nothing you could do but JVM magic. Method handles also work for fields. Further intrinsification methods: share/vm/classfile/vmSymbols.hpp
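Both invocation styles from the two slides can be put side by side. The sketch below (class and method names around the slide's `Foo.bar` are illustrative; checked exceptions are wrapped so the helpers stay easy to call) contrasts `Method.invoke`, which boxes the `int` argument and return value on every call, with `MethodHandle.invokeExact`, whose signature-polymorphic call site passes the primitives through unboxed:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

class Foo {
    int bar(int value) { return value * 2; }
}

class InvocationDemo {
    // Reflection: 42 is boxed to Integer, the int result is boxed back.
    static int viaReflection() {
        try {
            Method method = Foo.class.getDeclaredMethod("bar", int.class);
            return (Integer) method.invoke(new Foo(), 42);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Method handle: invokeExact is signature-polymorphic, so the
    // (Foo, int)int signature is matched exactly - no boxing involved.
    static int viaMethodHandle() {
        try {
            MethodType methodType = MethodType.methodType(int.class, int.class);
            MethodHandle methodHandle = MethodHandles.lookup()
                    .findVirtual(Foo.class, "bar", methodType);
            return (int) methodHandle.invokeExact(new Foo(), 42);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(viaReflection());   // 84
        System.out.println(viaMethodHandle()); // 84
    }
}
```

Note that `invokeExact` really is exact: passing an `Integer`, or omitting the `(int)` cast on the result, throws `WrongMethodTypeException` at run time.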
  • 34. REFLECTION GOOD! BOXING BAD! The Hulk performance rule #7
  • 35. Exception performance boolean doSomething(int i) { try { return evaluate(i); } catch (Exception e) { return false; } } boolean evaluate(int i) throws Exception { if(i > 0) { return true; } else { throw new Exception(); } } Exceptions can be used to implement “distributed control flow”. But please don’t!
  • 36. Exception performance (2) dynamic/static: exception is created on throw vs. exception is stored in a field stackless: avoid stack-trace creation via a flag or by overriding the creation method chained/rethrow: wrapping a caught exception vs. throwing it again Source: http://shipilev.net/blog/2014/exceptional-performance/
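The "stackless" variant mentioned above can be sketched in a few lines: the expensive part of throwing is the stack walk in `fillInStackTrace()`, which the `Throwable` constructor calls, so overriding it to a no-op makes throw/catch cheap (the class names are illustrative):

```java
// "Stackless" exception: skip the expensive stack walk that the
// Throwable constructor normally performs via fillInStackTrace().
class StacklessException extends Exception {
    StacklessException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // no stack walk, no trace
    }
}

class ExceptionDemo {
    public static void main(String[] args) {
        try {
            throw new StacklessException("cheap to throw");
        } catch (StacklessException e) {
            // No trace was captured, so the stack trace is empty.
            System.out.println(e.getStackTrace().length); // 0
        }
    }
}
```

The price is obvious: such an exception is useless for debugging, which is why this trick belongs only in control-flow-style exceptions that are never logged; and as the previous slide says, please avoid "distributed control flow" in the first place.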
  • 38. False sharing
  class Shared { int x; int y; }
  (Diagram: main memory and two L1 caches, each core holding a copy of the cache line that contains both x and y; thread 1 writes x while thread 2 writes y, so the line ping-pongs between the cores — contention without any logical sharing.)
  class Shared { @Contended int x; @Contended int y; }
  The field annotation increases memory usage significantly! Adding "padding fields" can simulate the same effect, but object memory layouts are an implementation detail and have changed in the past. Note that arrays are always allocated in contiguous blocks! Conversely, cache (line) locality can improve a single thread's performance.
  • 39. Volatile access performance (x86 Ivy bridge, 64-bit) Source:http://shipilev.net/blog/2014/all-accesses-are-atomic/
  • 40. Volatile access performance (x86 Ivy bridge, 32-bit) Source:http://shipilev.net/blog/2014/all-accesses-are-atomic/
  • 41. Lock coarsening
  private synchronized void foo() { // ... }
  private synchronized void bar() { // ... }
  void doSomething() { foo(); bar(); } // locks and unlocks twice
  After coarsening:
  private void foo() { // ... }
  private void bar() { // ... }
  void doSomething() { synchronized(this) { foo(); /* without lock */ bar(); /* without lock */ } }
  Locks are initially biased towards the first locking thread. (This is currently only possible if the identity hash code has not yet been computed.) Under contention, locks are promoted to become "thick" locks.
  • 42. VOLATILE SLOW! BLOCKING SLOWER! The Hulk performance rule #9
  • 43. javac optimizations: constant folding of compile-time constants
  class Foo { final boolean foo = true; }
  class Bar { void bar(Foo foo) { boolean bar = foo.foo; } }
  javac inlines all compile-time constants (JLS §15.28): compile-time constants are primitives and strings with values that can be fully resolved at javac-compilation time.
  "foo" // compile-time constant
  "bar".toString() // no compile-time constant
  The most common use case: defining static final fields that are shared with other classes. This does not require linking or even loading of the class that contains such constants. This also means that the referring classes need to be recompiled if constants change! After folding, Bar effectively becomes:
  class Bar { void bar(Foo foo) { foo.getClass(); // null check boolean bar = true; } }
  Be aware of compile-time constants when using reflection — constant folding comes with a null check in disguise (JLS!). Also, be aware of stackless NullPointerExceptions, which are thrown by C2-compiled Object::getClass invocations.
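Constant folding is observable through string interning: constant String expressions are folded and interned by javac, so reference comparison succeeds for them but not for values computed at run time. A sketch of JLS §15.28 in action (the class and field names are illustrative):

```java
class ConstantDemo {
    static final boolean FOO = true;                 // compile-time constant
    static final String NAME = "foo";                // compile-time constant
    static final String COMPUTED = "bar".toString(); // NOT a compile-time constant

    static String getSuffix() { return "r"; }

    public static void main(String[] args) {
        // Folded and interned by javac - same String instance:
        System.out.println("foo" == NAME);          // true
        System.out.println(("fo" + "o") == "foo");  // true
        // Run-time concatenation produces a fresh, non-interned String:
        System.out.println(COMPUTED == ("ba" + getSuffix())); // false
    }
}
```

This is also why a class referring to `ConstantDemo.NAME` bakes the value `"foo"` into its own constant pool: the reference survives even if `ConstantDemo` is later changed, until the referring class is recompiled.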
  • 45. "A fool with a tool is still a fool" The basic problem (Heisenberg): once you measure a system's performance, you change the system. In a simple case, a no-op method that reports its runtime is no longer a no-op.
  • 46. “A fool with a tool is still a fool“ (2) Many profilers use the JVMTI for collecting data. Such “native-C agents” are only activated when the JVM reaches a safe-point where the JVM can expose a sort of “consistent state” to this “foreign code”. blocked running If the application only reaches a safe point when a thread is blocked then a profiler would suggest that the application is never running. This is of course nonsense. “Honest profiler” (Open Source): Collects data by using UNIX signals. “Flight recorder” (Oracle JDK): Collects data on a lower level than JVMTI.
  • 47. "A fool with a tool is still a fool" (3)
  int doSomething() { int a = 5; int b = a + 6; return b; }
  compiles to:
  push %rbp; mov %rsp,%rbp; mov $0x0,%eax; movl $0x0,-0x4(%rbp); movl $0x5,-0x8(%rbp); mov -0x8(%rbp),%ecx; add $0x6,%ecx; mov %ecx,-0xc(%rbp); pop %rbp; retq
  For some use cases, it helps to look at the assembly. For this you need a development build or you need to compile the disassembler manually. Google is your friend. Sort of painful on Windows. JMH has great support for mapping used processor cycles to assembly using Unix's "perf". JITWatch is a great log viewer for JIT code. The JVM can expose quite a lot (class loading, garbage collection, JIT compilation, deoptimization, etc.) when using specific -XX flags. It is also possible to print JIT assembly.
  • 48. Professor Hulk's general performance rule Generally speaking, the JVM honors clean code, appropriate typing, small methods and predictable control flow. It is a clear strength of the JVM that you do not need to know much about the JVM's execution model in order to write performant applications. When writing critical code segments, a closer analysis might however be appropriate.