JMH
Agenda
Background
Types of Benchmarking
Factors in benchmarking
Why are hand-written Benchmarks bad
Hands On
JMH Modes, Time Unit and Benchmark State
Background
When software developers are concerned with the performance of their system, they may resort to these options:
Performance Testing to determine the performance of an already built system.
MSDN provides a very thorough guide on the subject.
Profiling to analyze and investigate bottlenecks when a system is running.
Benchmarking to compare the relative performance of systems.
Analysis to determine the algorithmic complexity (Big-O notation).
Types of Benchmarking
This leads to two commonly known types of benchmarks:
Macrobenchmarks are used to test entire system configurations. They are run across
different platforms to compare their relative efficiency.
Microbenchmarks are used to compare different implementations in an isolated
context, for example a single component. They are run on the same platform
against a small snippet of code.
MicroBenchmarking
Microbenchmarks are generally done for two reasons:
To compare different implementations of the same logic and choose the best one.
To identify bottlenecks in a suspected area of code during performance
optimization.
So benchmarking is used for comparisons: a benchmark is the process of recording
the performance of a system.
Factors in benchmarking
Benchmark candidate: What piece of software do we benchmark?
Comparison against a baseline: determined by customer requirements, or you might just be looking for
the best relative performance in a specific scenario among a set of benchmark candidates.
Metrics: Which metrics do we use to determine performance, e.g. throughput or average time?
Benchmarking scenario: Do we consider single-threaded or multi-threaded performance? How does a
data structure behave when accessed concurrently by multiple writers?
Benchmarking duration
Why are hand-written Benchmarks bad
Because you need to take the following factors into account:
The JVM consists of three main components that work together: the runtime (including the interpreter), the
garbage collector and the JIT compiler. Because of these components, we neither know in advance
which machine code will be executed nor how exactly it behaves at runtime.
Oracle's HotSpot JVM applies a vast number of optimizations to Java code (more than 70 optimization
techniques).
Compiler optimizations such as dead code elimination, loop unrolling, lock coalescing and inlining
mean you might be benchmarking different code than you think.
Why are hand-written Benchmarks bad
Each method is executed in interpreted mode at first; only once it becomes hot does the interpreter
request that it be JIT-compiled. Consequently, we have to run the benchmarked code often enough before
the actual measurement starts to ensure that all of it has been JIT-compiled beforehand. You
should not see any JIT-compiler activity after the warmup phase.
Benchmark code falls victim to dead code elimination: in certain circumstances the JIT compiler may
detect that the benchmark does not do anything and eliminate large parts of, or even the whole,
benchmark code (a sketch using JMH's Blackhole follows this list).
False sharing: in multithreaded microbenchmarks, false sharing can severely affect measured
performance.
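To guard against dead code elimination in your own benchmarks, JMH provides the Blackhole class: consuming a computed value (or simply returning it from the benchmark method) prevents the JIT compiler from proving the work is unused. A minimal sketch; the benchmarked expression is only illustrative:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class DeadCodeBenchmark {

    // Result unused: the JIT compiler may prove the call has no effect and remove it entirely.
    @Benchmark
    public void measureWrong() {
        Math.log(42.0);
    }

    // Consuming (or returning) the result keeps the computation alive during measurement.
    @Benchmark
    public void measureRight(Blackhole bh) {
        bh.consume(Math.log(42.0));
    }
}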
Why are hand-written Benchmarks bad
Reliance on a specific environment: the JVM version, the OS and the hardware
can differ between the microbenchmark and the real application, for example whether
the machine is single-core, multi-core or hyper-threaded, and how that affects the
program being benchmarked.
Even when running in the same environment, we need to remember to switch off all
other programs; the machine should be quiet, because background processes
compete for resources and add delays.
Warm up phase in Benchmarking
Before recording the numbers, do multiple runs of the code snippet to warm up
the environment. The JIT compiler takes time to analyze and optimize the code
on the initial runs, so we should allow enough iterations for it to stabilize;
otherwise we end up including JIT overhead in the measured performance.
Similarly, without warmup we may not see the caching benefits that occur at
different levels. JMH makes this phase explicit, as sketched below.
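In JMH the warmup is declared rather than hand-rolled. A minimal sketch, assuming five one-second warmup iterations and ten one-second measurement iterations (the counts are illustrative, not a recommendation):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

// Warmup iterations run first and are discarded; only the measurement iterations are recorded.
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 10, time = 1, timeUnit = TimeUnit.SECONDS)
public class WarmupExample {

    @Benchmark
    public double compute() {
        return Math.sqrt(12345.6789);
    }
}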
Creating your first benchmark
mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=org.sample \
  -DartifactId=test \
  -Dversion=1.0
If you want to benchmark an alternative JVM language, use another archetype
artifact ID from the list of existing ones.
Creating your first benchmark
Building the benchmarks. After the project is generated, you can build it with the
following Maven command:
$ cd test/
$ mvn clean install
Running the benchmarks: $ java -jar target/benchmarks.jar
Archetypes for Kotlin, Groovy, Scala and Java are provided.
Understanding JMH code
We have already completed the first step by annotating a method with
@Benchmark.
JMH implements multiple annotation processors that generate the final
microbenchmark class. This generated class contains setup and
measurement code as well as code that's required to minimize unwanted
optimizations of the JIT compiler in the microbenchmark.
JMH contains a Runner class, somewhat similar to JUnit's, so it is possible to run
embedded microbenchmarks through the JMH Java API, as sketched below.
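A minimal sketch of an embedded run through the JMH Java API; the benchmark class and method names are illustrative:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class MyBenchmark {

    @Benchmark
    public double sqrtBaseline() {
        return Math.sqrt(4242.0);
    }

    // Embedded runner: selects benchmarks by a regex on the class name and runs them from main().
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(MyBenchmark.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}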
Understanding JMH
You can see that JMH creates multiple JVM forks. In each fork it runs n
warmup iterations, which are not measured and only serve to reach a steady
state, before the m measurement iterations are run.
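The number of forks can be configured per benchmark. A minimal sketch, assuming two measured forks (the count is illustrative):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;

public class ForkedBenchmark {

    // JMH starts two fresh JVM forks for this benchmark and aggregates their results;
    // each fork runs its own warmup iterations before its measurement iterations.
    @Benchmark
    @Fork(2)
    public double work() {
        return Math.sqrt(987.654);
    }
}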
Benchmark Modes
Throughput: the rate at which the processing is done.
@BenchmarkMode({Mode.Throughput}) measures operations per unit of time.
The measurement time can be configured.
Average Time: measures the average execution time.
@BenchmarkMode({Mode.AverageTime}) measures time per operation; it is the
reciprocal of throughput. The measurement time can be configured.
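A minimal sketch showing both modes on the same piece of work; the benchmarked expression is only illustrative:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;

public class ModeExample {

    // Reported as operations per unit of time (e.g. ops/s).
    @Benchmark
    @BenchmarkMode({Mode.Throughput})
    public double throughput() {
        return Math.sqrt(4242.0);
    }

    // Reported as average time per operation, the reciprocal view of throughput.
    @Benchmark
    @BenchmarkMode({Mode.AverageTime})
    public double averageTime() {
        return Math.sqrt(4242.0);
    }
}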
Benchmark Time Unit
JMH enables you to specify what time units you want the benchmark results
printed in. The time unit will be used for all benchmark modes your
benchmark is executed in.
You specify the benchmark time unit using the JMH annotation
@OutputTimeUnit. The @OutputTimeUnit annotation takes a
java.util.concurrent.TimeUnit as parameter to specify the actual time unit to
use.
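A minimal sketch reporting average time per operation in microseconds (the choice of unit and the benchmarked expression are illustrative):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class TimeUnitExample {

    // Results are printed in microseconds per operation instead of the default unit.
    @Benchmark
    @BenchmarkMode({Mode.AverageTime})
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public String buildString() {
        return "prefix-" + System.nanoTime();
    }
}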
Benchmark State
Sometimes you may want to initialize some variables that your benchmark code needs but that you
do not want to be part of the code your benchmark measures. Such variables are called "state"
variables. State variables are declared in special state classes, and an instance of that state class
can then be provided as a parameter to the benchmark method.
The @State annotation signals to JMH that this is a state class.
A state object can be reused across multiple calls to your benchmark method. JMH provides different
"scopes" that the state object can be reused in. The state scope is specified in the parameter of
the @State annotation.
Benchmark State
State Scopes
A state object can be reused across multiple calls to your benchmark method.
JMH provides different "scopes" that the state object can be reused in. The state
scope is specified in the parameter of the @State annotation. The Scope class
contains the following scope constants:
Thread - each thread running the benchmark will create its own instance of the
state object.
Benchmark - all threads running the benchmark share the same state object.
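A minimal sketch contrasting the two scopes; class and field names are illustrative:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class StateScopeExample {

    // Scope.Thread: each benchmark thread gets its own instance, so threads never share data.
    @State(Scope.Thread)
    public static class ThreadState {
        long counter;
    }

    // Scope.Benchmark: a single instance is shared by all benchmark threads.
    @State(Scope.Benchmark)
    public static class SharedState {
        long counter;
    }

    @Benchmark
    public long perThread(ThreadState s) {
        return ++s.counter;
    }

    @Benchmark
    public long shared(SharedState s) {
        return ++s.counter; // unsynchronized shared mutation, shown only to illustrate the scope
    }
}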
Benchmark State class Requirements
A JMH state class must obey the following rules:
The class must be declared public
If the class is a nested class, it must be declared static (e.g. public static class ...)
The class must have a public no-arg constructor (no parameters to the constructor).
When these rules are obeyed you can annotate the class with the @State annotation to make JMH
recognize it as a state class.
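A minimal sketch of a rule-compliant state class, using @Setup so that initialization stays outside the measured code (the data and sizes are illustrative):

import java.util.Arrays;
import java.util.Random;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

public class SortBenchmark {

    // public, static (because it is nested) and relying on the implicit public no-arg constructor
    @State(Scope.Thread)
    public static class Data {
        int[] values;

        @Setup // runs before measurement, so filling the array is not part of the timed code
        public void fill() {
            values = new Random(42).ints(10_000).toArray();
        }
    }

    @Benchmark
    public int[] sortCopy(Data data) {
        int[] copy = data.values.clone();
        Arrays.sort(copy);
        return copy;
    }
}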
http://openjdk.java.net/projects/code-tools/jmh/
http://daniel.mitterdorfer.name/articles/2014/benchmarking-hello-jmh/
http://tutorials.jenkov.com/java-performance/jmh.html
http://javapapers.com/java/java-micro-benchmark-with-jmh/
https://github.com/nilskp/jmh-charts
https://github.com/melix/jmh-gradle-plugin
Thanks
Github : https://github.com/ackhare/JMHDemoForSession
Presented by: Chetan Khare
