2. About Me
I’m pushing the database engine as hard as I can, captain; she’s going to blow!
An independent SQL consultant.
A user of SQL Server since 2000.
14+ years of SQL Server experience.
The ‘standard’ stuff, and what I’m passionate about!
3. The Exercise
Squeeze every last drop of performance out of the hardware!
ostress -E -dSingletonInsert -Q"exec usp_insert" -n40
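The deck does not show usp_insert itself; a minimal sketch of what the singleton insert under test might look like, with a hypothetical table definition (the IDENTITY key is what triggers the “last page problem” explored below):
-- Hypothetical sketch only: the real table and procedure are not shown in the deck.
CREATE TABLE dbo.SingletonInsert
(
    Id      bigint IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    Payload char(100) NOT NULL
);
GO
CREATE PROCEDURE dbo.usp_insert
AS
BEGIN
    SET NOCOUNT ON;
    -- One committed row per call; ostress -n40 drives 40 concurrent sessions.
    INSERT INTO dbo.SingletonInsert (Payload) VALUES ('X');
END;
GO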
4. Test Environment
SQL Server 2016 CTP 2.3
Windows server 2012 R2
2 x 10-core Xeon v3 CPUs at 2.2GHz with hyper-threading enabled
64GB DDR4 quad channel memory
4 x SanDisk Extreme Pro 480GB RAID 1 (64K allocation size)
ostress used for generating concurrent workload
Use the conventional database engine to begin with . . .
5. I Will Be Using Windows Performance Toolkit . . . A Lot !
It allows CPU time to be quantified across the whole database engine.
Not just what Microsoft deems we should see, but everything!
The database engine equivalent of seeing the Matrix in code form ;-)
7. The “Last Page Problem”
[Diagram: a B-tree rooted at HOBT_ROOT, with ‘Min’ pages on the left and the right-most ‘Max’ page on which every insert of a monotonically increasing key lands.]
8. Overcoming The “Last Page” Problem
[Bar chart: Elapsed Time (s) / Key Type; SPID Offset: 600, Partition + SPID Offset: 616, NEWID(): 982, IDENTITY: 7946, NEWSEQUENTIALID: 8170.]
What are we waiting on?
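The winning “SPID offset” key is not spelled out in the deck; a hedged sketch of the idea, with hypothetical names, is to carve the key space into one band per session so concurrent inserts stop converging on the last page:
-- Hypothetical sketch of a "SPID offset" key: each session id selects a
-- band of the key space, so each session inserts into its own pages.
CREATE TABLE dbo.SpidOffsetInsert
(
    Id      bigint NOT NULL PRIMARY KEY CLUSTERED,
    Payload char(100) NOT NULL
);
GO
CREATE SEQUENCE dbo.seq_insert AS bigint START WITH 1 CACHE 1000;
GO
CREATE PROCEDURE dbo.usp_insert_spid_offset
AS
BEGIN
    SET NOCOUNT ON;
    -- High-order bits come from the session id, low-order bits from a
    -- shared sequence: keys stay unique but the insert hot spot is spread.
    DECLARE @Id bigint = CAST(@@SPID AS bigint) * 10000000000
                       + NEXT VALUE FOR dbo.seq_insert;
    INSERT INTO dbo.SpidOffsetInsert (Id, Payload) VALUES (@Id, 'X');
END;
GO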
9. Can Delayed Durability Help ?
[Bar chart: Elapsed Time (s) / Logging Type; Delayed durability: 265, Conventional: 600.]
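Enabling it is a one-liner; a sketch against the test database (durability of the most recent transactions is traded for throughput, since log flushes are deferred):
-- Force delayed durability for every transaction in the test database.
ALTER DATABASE SingletonInsert SET DELAYED_DURABILITY = FORCED;
-- Alternatively, opt in per transaction:
-- COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);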
11. Fixing CPU Core Starvation With Trace Flag 8008
The scheduler with the least load is now favoured over the ‘preferred’ scheduler.
Documented in this CSS engineer’s note.
Elapsed time has gone backwards: it is now 453 seconds! Why?
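For reference, the trace flag can be enabled globally; a sketch:
-- Trace flag 8008: assign tasks to the scheduler with the least load
-- instead of the 'preferred' scheduler.
DBCC TRACEON (8008, -1);  -- -1 applies the flag globally
-- or add -T8008 to the instance's startup parameters.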
13. How Spinlocks Work
A task on a scheduler will spin until it can acquire the spinlock it is after.
For short-lived waits this uses fewer CPU cycles than yielding and then waiting for the task’s thread to reach the head of the runnable queue.
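Spin activity per spinlock is exposed through a DMV; a sketch of the query used for this kind of investigation:
-- Snapshot spin activity for the two spinlocks this deck focuses on.
SELECT name, collisions, spins, spins_per_collision, backoffs
FROM sys.dm_os_spinlock_stats
WHERE name IN (N'LOGCACHE_ACCESS', N'XDESMGR')
ORDER BY spins DESC;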
15. Introducing The LOGCACHE_ACCESS Spinlock
[Diagram: threads T0..Tn acquire the LOGCACHE_ACCESS spinlock to allocate a slot (slot 1 .. slot 127) in the log buffer at the current buffer offset (a cache line), then memcpy their slot content in. The log writer drains the writer queue through an async I/O completion port and signals the thread which issued the commit. Associated waits: LOGBUFFER, WRITELOG, LOGFLUSHQ. The buffer-offset cache line protected by LOGCACHE_ACCESS is the bit we are interested in.]
16. Anatomy of A Modern CPU
[Diagram: anatomy of a modern CPU. Per core: an L0 uop cache, a 32KB L1 instruction cache, a 32KB L1 data cache and a 256KB unified L2 cache. Shared across cores: the L3 cache, plus the ‘un-core’: memory controller, memory bus, TLB, power and clock circuitry, and QPI links to other sockets.]
17. Memory, Cache Lines and The CPU Cache
[Diagram: memory moves between RAM and the CPU cache in 64-byte cache lines; each cached line is identified by a tag. Three new OperationData() allocations map onto three cache lines.]
18. Spinlocks and Memory
[Diagram: each spin_acquire on the spinlock’s underlying integer forces the cache line holding it to be transferred between cores, through the shared L3 within a socket and between CPUs across sockets.]
21. Scalability With and Without A CPU Core Dedicated To The Log Writer
[Chart: Insert Rate / Insert Threads; inserts/s (0 to 600,000) against 2 to 38 insert threads, comparing the baseline (batch size = 1) with the log writer given a dedicated core (batch size = 1).]
22. . . . and What About LOGCACHE_ACCESS Spins ?
[Chart: LOGCACHE_ACCESS spins / thread count; spins (0 to 12,000,000,000) against 2 to 34 threads, baseline vs. log writer with a dedicated CPU core.]
23. What Difference Has This Made To Where CPU Time Is Going ?
With the default CPU affinity mask: 63,166,836 ms (40 threads), vs. the log writer with a dedicated CPU core: 220,168 ms (38 threads).
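A sketch of the “CPU affinity mask trick” on this 40-logical-processor box, assuming the log writer is homed on the first core of socket 0 (logical processors 0 and 1 with hyper-threading; see the affinity mask note at the end of this deck):
-- Remove the log writer's home core from the affinity mask so no user
-- scheduler shares it; logical processors 0 and 1 are one physical core.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 2 TO 39;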
24. Optimizations That Failed To Make The Grade
Large memory pages
Allows the translation lookaside buffer (TLB) to cover more memory for logical-to-physical address mapping.
Trace flag 2330
Stops spins on OPT_IDX_STATS.
Trace flag 1118
Prevents mixed-extent allocations; enabled by default in SQL Server 2016.
25. A Different Spinlock Is Now The Most Spin Intensive
A new spinlock is now the most spin intensive: XDESMGR, probably spinlock<109,9,1>.
What does it do?
26. Digging Into The Call Stack To Understand Undocumented Spinlocks
1. Start trace: xperf -on PROC_THREAD+LOADER+PROFILE -StackWalk Profile
2. Run workload
3. Stop trace: xperf -d stackwalk.etl
4. Load trace into WPA
5. Locate spinlock in call stack
6. ‘Invert’ the call stack
27. Examining The XDESMGR Spinlock By Digging Into The Call Stack
This serialises access to the part of the database engine that allocates and destroys transaction ids.
How do you relieve pressure on this spinlock?
Have multiple insert statements per transaction.
28. Options For Dealing With The XDESMGR Spinlock
Relieving pressure on the LOGCACHE_ACCESS spinlock makes the XDESMGR spinlock the bottleneck.
There are three places to go at this point:
Increase the ratio of DML statements to transactions (see the sketch below).
Shard the table across databases and instances.
Use in-memory OLTP native transactions.
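A sketch of the first option, reusing the hypothetical table from the earlier sketch: committing two inserts per transaction (“batch size = 2”) halves the rate at which transaction ids are allocated and destroyed:
CREATE PROCEDURE dbo.usp_insert_batch2
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- Two DML statements share one transaction id.
        INSERT INTO dbo.SingletonInsert (Payload) VALUES ('X');
        INSERT INTO dbo.SingletonInsert (Payload) VALUES ('X');
    COMMIT TRANSACTION;
END;
GO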
29. Increasing The Batch Size By Just One Makes A Big Difference !
[Chart: Insert Rate / Thread Count; inserts/s (0 to 900,000) against 2 to 36 threads: baseline (batch size = 1), log writer with dedicated core (batch size = 1), and log writer with dedicated core (batch size = 2).]
30. . . . and The Difference This Makes To XDESMGR Spins
[Chart: XDESMGR spins / thread count; spins (0 to 200,000,000,000) against 2 to 36 threads for the same three configurations.]
31. Does It Matter Which NUMA Node The Insert Runs On ?
[Diagram: two 10-core CPU sockets, NUMA node 0 and NUMA node 1. Faster here, or faster here?]
“What’s really going to bake your noodle . . .”: 8 insert threads on one NUMA node complete in 73 s; the same 8 threads on the other node take 125 s.
32. What Does Windows Performance Toolkit Have To Tell Us ?
18 insert threads co-located on the same CPU socket as the log writer: 84,697 ms, vs. 18 insert threads not co-located on the same socket as the log writer: 11,281,235 ms.
33. So I Should Look At Tuning The CPU Affinity Mask ?
Get the basics right first:
Minimize transaction log fragmentation, both internal and external (see the sketch after this list).
Use low latency storage.
Avoid log intensive operations: page splits, etc.
Use minimally logged operations where appropriate.
Only when all of the above has been done, the disk row store engine is being used, and the workload is OLTP-heavy using more than 12 CPU cores (6 per socket), look at giving the log writer a CPU core to itself.
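A quick way to check internal log fragmentation is the long-standing (though undocumented) DBCC LOGINFO, which returns one row per virtual log file; a sketch against the test database:
-- A high row count here means many VLFs, i.e. internal log fragmentation.
DBCC LOGINFO (N'SingletonInsert');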
34. Hard To Solve Logging Issues
I have to use the disk row store engine.
My single-threaded app cannot easily be multi-threaded.
How do I get the best possible write-log performance?
Use NUMA connection affinity to connect to the same socket as the log writer.
Disable hyper-threading; whole cores are always faster than hyper-threads.
‘Affinitize’ the rest of the database engine away from the log writer thread’s ‘home’ CPU core.
Go for a CPU with the best single-threaded performance available.
35. The CPU Cycle Cost Of Spinlock Cache Line Transfer
[Diagram: the CPU cycle cost of transferring the spinlock’s cache line; around 34 CPU cycles core to core on the same socket (via the shared L3), versus around 100 CPU cycles core to core on different sockets.]
37. This Man Seriously Knows A Lot About Memory
Ulrich Drepper, author of “What Every Programmer Should Know About Memory”.
From “Understanding CPU Caches”: “Use per CPU memory; lock thread to specific CPU”.
This is our CPU affinity mask trick.
38. Cache Line Ping Pong
[Diagram: eight CPUs (CPU 0 to CPU 7) connected through I/O hubs.]
“Cache line ping pong is deadly for performance.”
The more CPU sockets and cores you have, the greater the ramifications for SQL Server scalability on “big boxes”.
39. ‘Sharding’ The Database Across Instances
[Diagram: Instance A ‘affinitized’ to NUMA node 0, Instance B ‘affinitized’ to NUMA node 1; databases ‘sharded’ across the two instances.]
This gives 2 x LOGCACHE_ACCESS and XDESMGR spinlocks, and spinlock cache line transfers are bound by the latency of the L3 cache, not the Quick Path Interconnect.
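A sketch of the per-instance affinitization, run once on each instance:
-- On Instance A:
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0;
-- On Instance B:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 1;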
40. What Can We Get From An Instance ‘Affinitized’ To One CPU Socket ?
[Chart: Insert Rate / Thread Count for an instance ‘affinitized’ to one CPU socket; inserts/s (0 to 500,000) against 1 to 18 threads.]
41. With a Batch Size of 2, 32 Threads Achieve The Best Throughput
[Screenshot annotations: logging-related activity, and latching!]
Where to now?
42. In Memory OLTP To The Rescue, But What Will It Give Us ?
Only redo is written to the transaction log (durability = SCHEMA_AND_DATA).
Does this relieve pressure on the LOGCACHE_ACCESS spinlock?
Zero latching and locking.
Native procedure compilation.
No “last page” problem, due to IMOLTP’s use of hash buckets.
Spinlocks will still be in play, though.
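A sketch of an in-memory OLTP equivalent of the test table, with hypothetical names, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup (the deck tests bucket counts of 8,388,608 and 16,777,216):
CREATE TABLE dbo.SingletonInsertIM
(
    Id      bigint IDENTITY(1, 1) NOT NULL,
    Payload char(100) NOT NULL,
    -- Hash buckets instead of a B-tree: no "last page" to contend on.
    CONSTRAINT pk_SingletonInsertIM PRIMARY KEY NONCLUSTERED
        HASH (Id) WITH (BUCKET_COUNT = 8388608)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);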
43. Insert Scalability with A Non Natively Compiled Stored Procedure
[Chart: Insert Rate / Thread Count; inserts/s (0 to 600,000) against 1 to 18 threads: default engine, IMOLTP range index, IMOLTP hash index (bucket count 8,388,608), and IMOLTP hash index (bucket count 16,777,216).]
44. What Does The BLOCKER_ENUM Spinlock Protect ?
Transaction synchronization between the default and in-memory OLTP engines?
45. Where Are Our CPU Cycles Going, The Overhead Of Language Processing
Time to try native in-memory OLTP transactions and natively compiled stored procedures?
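A sketch of what a natively compiled insert looks like, against the hypothetical in-memory table from the earlier sketch:
CREATE PROCEDURE dbo.usp_insert_native
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    -- Compiled to machine code: no interpreted T-SQL on the hot path.
    INSERT INTO dbo.SingletonInsertIM (Payload) VALUES ('X');
END;
GO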
47. Hash Indexes Bucket Count and Balancing The Equation
Smaller bucket counts = better cache line reuse, reduced TLB thrashing, and less of the hash table being cached out.
Larger bucket counts = reduced cache line reuse and increased TLB thrashing, but less hash bucket scanning for lookups.
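SQL Server 2016 lets the bucket count be tuned in place, so the sweet spot can be searched for without recreating the table; a sketch against the hypothetical table from earlier:
-- Rebuild the hash index with a different bucket count and re-measure.
ALTER TABLE dbo.SingletonInsertIM
    ALTER INDEX pk_SingletonInsertIM
    REBUILD WITH (BUCKET_COUNT = 16777216);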
48. Is Our CPU Affinity Mask Trick Relevant To In Memory OLTP ?
Default CPU affinity mask and 18 insert threads, vs. a CPU core dedicated to the log writer and 18 insert threads.
49. Optimizations That Failed To Make The Grade
Large memory pages
As per the default database engine, this made no difference to performance.
Turning off adjacent cache line pre-fetching
Adjacent-line pre-fetching can degrade performance by saturating the memory bus when hyper-threading is in use, and cause cache pollution when the pre-fetched line is not used; turning it off did not make the grade here.
50. Takeaways
Monotonically increasing keys do not scale with the default database engine.
Dedicate a CPU core to the log writer to relieve pressure on the LOGCACHE_ACCESS spinlock.
Batch DML statements together per transaction to relieve XDESMGR spinlock pressure.
The further the LOGCACHE_ACCESS spinlock cache line has to travel, the more performance is degraded.
Native compilation results in a performance increase of at least an order of magnitude over non-natively compiled stored procedures.
There is a bucket count “sweet spot” for IMOLTP hash indexes, influenced by hash collisions, bucket scans and the hash lookup table being cached out.
51. Further Reading
Super scaling singleton inserts blog post
Tuning The LOGCACHE_ACCESS Spinlock On A “Big Box” blog post
Tuning The XDESMGR Spinlock On A “Big Box” blog post
SQL Server 2008 R2 introduced the concept of “exponential back-off” to spinlock spinning.
The log writer is always assigned to the first CPU core of one of the CPU sockets, usually socket 0 (NUMA node 0). Because hyper-threading is enabled, each physical CPU core appears in the affinity mask as two logical processors, which is why two logical processors are removed from the affinity mask. Were hyper-threading to be disabled, there would be a 1:1 relationship between logical processors and physical CPU cores, in which case only one logical processor would be removed from the affinity mask.
LOGBUFFER waits occur when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server. Essentially, 30 threads saturate the write bandwidth of our storage.
The LOGCACHE_ACCESS spins for both tests are very similar; the key difference is that with the “CPU affinity mask trick” we get the same number of spins as the baseline but with superior insert throughput.
Changing the CPU affinity mask has ensured that when the log writer needs to release the cache line associated with the LOGCACHE_ACCESS spinlock, no SQL OS scheduler-level swap-in of the log writer is required first. Not only does such a swap-in cost CPU time, but the sharing of a CPU core between the log writer and any other task means that data and instructions in the core’s L1/L2 caches may be wiped out while the other task is running.
As is invariably the case with performance tuning, you remove one bottleneck only for a new one to appear somewhere else.
I am assuming you are already using the lowest latency storage available: PCIe-based flash with an NVMe driver. “Affinitizing” the rest of the database engine away from the log writer thread is a grandiose way of referring to the CPU affinity mask trick.
134217728 corresponds to the
Using a natively compiled stored procedure for the insert into an in-memory table makes a tremendous difference: we can see that even with two threads and a compiled procedure, the in-memory OLTP engine is beating its disk-based row store counterpart. Other takeaways include the fact that a hash index beats a range index for insert throughput, and that there is a bucket count sweet spot for the best performance.