From Intel's developer conference (IDF), here is a rather nice presentation on so-called "scale-out" storage, with an overview of the various solution vendors (slide 6) covering file, block, and object storage, followed by benchmarks of some of them, including Swift, Ceph, and GlusterFS.
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and Tuning Practices
1. Scale-out Storage on Intel®
Architecture Based Platforms:
Characterizing and Tuning Practices
Yongjie Sun, Application Engineer, Intel
Xiwei Huang, Senior Application Engineer, Intel
Jin Chen, Application Engineer, Intel
SFTS007
2. Agenda
• Dilemma of Data Center Storage
• Intel® Architecture (IA) based Scale-out Storage Solution Overview
• Increasing Performance of IA based Scale-Out Storage Solutions With Intel® Products
• Characteristics and Tuning Practices
  – Swift*
  – Ceph*
  – GlusterFS*
• Summary
3. Storage Consumption Analysis
[Chart: Worldwide Enterprise Storage Consumption Capacity Shipped by Model, 2006–2015 (PB). Y-axis: capacity in petabytes, 0 to 180,000; x-axis: years 2006 through 2015. Traditional structured data shows linear growth; traditional unstructured data, public cloud / enterprise hosting services, and content depots and public clouds (huge unstructured) show exponential growth.]
Mobile & Cloud drive exponential growth in Storage Consumption
Source: IDC, 2011 Worldwide Enterprise Storage Systems 2011–2015 Forecast Update, Doc#231051
4. Can Traditional Storage Solutions Meet the Emerging Needs?

Typical new storage user scenarios:
• Micro-blogs: a large number of unstructured messages and photos
• Safe City: surveillance video, pictures, and log files
• Healthcare: patient records / high-quality medical images (CT)
• Enterprise Cloud: virtual machine images

New storage requirements:
• Capacity: from GB to TB/PB/EB
• Price: $ per MB
• Throughput: supports hundreds/thousands of hosts at the same time
• Response time: response time & throughput remain unchanged while scaling
• Flexibility: dynamic allocations and easy management for business flexibility
• Fault tolerance: no single point of failure

Traditional scale-up storage:
• Large-volume centralized storage arrays
• Hosts are attached to storage arrays with hardware controllers/cables
• High performance / high throughput
• Fault tolerance at the disk level
• Expensive solutions

Better solution: scale-out storage based on the Intel® Architecture Platform
5. What is Scale Out Storage?
Definition:
• Massive but low-cost hardware infrastructure; the Intel® Architecture platform (†IA Platform) is the most preferable choice
• Scalable system architecture: multiple data servers share the storage load, and metadata servers store locator information
• High performance / high throughput
• High reliability / high availability
• High extensibility

Category:
• Distributed file system
• Distributed object storage
• Distributed block device

Characteristics:
• Cold data, with no high requirement for access frequency or real-time access
• Both structured & unstructured data

[Diagram: clients on IA platforms send data flows to data servers and control flows to metadata servers, all running on IA platforms.]

Scalable storage design is usually closely integrated with business
†IA Platform = Intel® Architecture Platform
6. Scale-Out Storage Category Overview

Commercial file-based scale-out NAS:
• IBM* SONAS*
• EMC* Isilon*
• Dell* FluidFS*
• HP* StoreAll* Storage
• DDN* EXAScaler*
• Hitachi* NAS (HNAS)
• Quantum StorNext
• Huawei* OceanStor* N9000
• Red Hat* Storage Server 2.0
• Oracle* ZFS …

Commercial object-based scale-out storage:
• EMC* Atmos*
• DDN* WOS*
• Amplidata* AmpliStor* Object Storage system …

Open source file-based scale-out storage:
• GlusterFS*
• Ceph*
• Lustre* …

Open source object-based scale-out storage:
• Swift*
• Ceph*
• Sheepdog
• HDFS*
• MogileFS
• MooseFS
• FastDFS …

Scale-Out Storage Solution: Commodity Storage Solution = Intel® Xeon® Processor based Servers + Open Source Software Stack
7. Open Source Scale-Out Storage

Swift* (object-based; maturity: not many commercial deployments)
• Supports multiple proxy servers; no SPOF
• Multi-tenant; Python* based
• PB-level storage
• AWS S3 interface compatible

Ceph* (file-based/object-based; maturity: emerging solutions; Inktank* is the company that provides enterprise-class commercial support for Ceph)
• Includes multiple meta servers; no SPOF
• POSIX-compliant, C based
• Supports block storage, object storage and file system

GlusterFS* (file-based; maturity: in use in 100+ countries/regions)
• No meta server and no SPOF
• POSIX-compliant, C based
• Supports NFS, CIFS, HTTP, FTP, Gluster SDK/API access
• Designed for several hundred PBs of data

Lustre* (file-based; maturity: over 40% of Top 100 HPC projects adopt Lustre)
• Includes a meta server and has a SPOF
• POSIX-compliant, C based
• Supports 10K+ nodes, PB+ storage, 100 GB/s
9. Increasing Performance of Scale-Out Storage With Leading Intel® Solid State Drive
Intel® SSD DC S3500/S3700 series

Fast and consistent performance:
• SATA III 6 Gbps interface
• 75K/36K IOPS 4K random R/W
• 50/65 us average latency, <500 us max latency
• 500/460 MBps sustained sequential

Data protection and reliability:
• End-to-end data protection
• Power loss protection
• 256-bit AES encryption
• ECC protected memory
• 2.0 million hours MTBF

High-Endurance Technology:
• 10 DWPD over five years; meets JEDEC endurance standard

Capacity:
• 2.5-inch: 100/200/400/800 GB
• 1.8-inch: 200/400 GB
10. Increasing Performance of Scale-Out Storage With Leading Intel® 10G Ethernet
• GbE server connections: new technology ships first as add-in cards, then moves to LOM when demand is > 50%
• New data centers are being built with 10GbE server connections
  – Saves cost, lowers power, decreases complexity, and is future proof
  – Virtualization growth
  – Unified networking (LAN, iSCSI, FCoE)
• Intel® server platform code name Romley – 10G options
  – Add card – easy sell-up option
  – Mezz/riser cards – lower-cost configure to order
  – 1G/10G dual layout – new future upgrade capability
  – 10GBASE-T and 10G SFP+ LOM – new lowest cost
• 10GbE payoff: 15% reduction in infrastructure costs, 80% reduction in cables and switch ports, 45% reduction in power per rack, 2x improved bandwidth per server

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.
14. Swift*: Architecture Overview
• Swift*
– A distributed object storage system designed to scale from a single
machine to thousands of servers
– Is optimized for multi-tenancy and high concurrency
– Swift is ideal for backups, web and mobile content, and any other
unstructured data that can grow without bound
• Main components
– Proxy Service
– Account Service
– Container Service
– Object Service
– Authentication Service
• Main Features
– Durability (zone, replica)
– No Single Point of Failure (NWR)
– Scalability
– Multi-tenant
15. Swift*: Testing Environment
• Hardware list

  Purpose           Count  CPU                       Memory  Disk   NIC
  Workload Clients  4      X5670 2.93 GHz, 2*6       24G     SATA*  1000 Mbit/s
  Proxy             1      E5-2680 2.70 GHz, 2*8     64G     SATA   1000 Mbit/s * 2
  Storage           4      E5-2680 2.70 GHz, 2*8     64G     SATA   1000 Mbit/s

• Software stack

  Software   Version
  swauth     1.04
  Swift      1.7.4
  COSBench   2.1.0
  collectd   4.10.1
16. Swift*: Workloads
• COSBench*: a benchmark tool Intel developed to measure Cloud Object Storage Service performance
• Components: Controller, Driver, Console/Portal
• Performance-sensitive metrics: CPU usage, NIC usage

  Workload     Configuration                      Metrics          Target
  Small Read   Object size=64KB, runtime 5 min    IOPS, RESP TIME  Website hosting
  Large Read   Object size=1MB, runtime 5 min     IOPS, RESP TIME  Music
  Small Write  Object size=64KB, runtime 5 min    IOPS, RESP TIME  Online game
  Large Write  Object size=1MB, runtime 5 min     IOPS, RESP TIME  Enterprise

IOPS: IO per second; RESP TIME: response time
17. Swift*: Baseline
Swift configuration:
1. Proxy workers: 64
2. Object workers: 16
3. Account workers: 16
4. Container workers: 16
5. XFS inode size: 1024
6. Others use defaults

  Workload     IOPS     RESP (ms)  Success Rate
  Small Read   1615.25  313.63     99.8%
  Large Read   108.16   4772.13    99.8%
  Small Write  493.58   1039.64    100%
  Large Write  37.96    6852.46    99.94%

Proxy: CPU usage ~50%, NIC usage ~100%
Storage: NIC usage ~50%, CPU ~40%
NIC bandwidth is used up, so use an Intel® 10G NIC to replace the original 1000 Mbit/s NIC.
18. Tuning – Using Intel® 82599EB 10 Gigabit Ethernet Controller

  Workload     IOPS    RESP (ms)  Success Rate  Vs Baseline
  Small Read   4271.4  159.74     99.9%         >150%
  Large Read   406.42  2478.9     99.49%        >150% (did not reach our expectation)
  Small Write  560.64  916.97     100%          ~13.5%
  Large Write  94.76   3980.7     100%          ~150%

Proxy: CPU usage ~50%, NIC usage ~30%
Storage: NIC usage ~50%, CPU ~40%

Deep analysis:
[Chart: per-core CPU utilization (user%, sys%, soft%) on the proxy server across cpu0–cpu31. CPU0 is used up, mainly handling soft IRQs.]
19. Tuning – Using Intel® 82599EB 10 Gigabit Ethernet Controller (Cont'd)
• Know your NIC
  – The Intel® 10G NIC has multiple queues
  – Each queue owns one IRQ number
    dmesg | grep ixgbe
    cat /proc/softirqs | grep NET
  – Soft IRQs are not balanced; dig deeper with stap & addr2line
  – BKM: bind each IRQ to one core
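The "bind each IRQ to one core" BKM can be sketched as a small script. This is an illustrative sketch, not from the deck: it assumes the ixgbe queue IRQs appear in /proc/interrupts and must be run as root.

```shell
# Pin each ixgbe queue IRQ to its own core, round-robin.
# cpu_mask turns a CPU index into the hex bitmask that
# /proc/irq/<n>/smp_affinity expects (valid for CPUs 0-62).
cpu_mask() {
  printf '%x' $((1 << $1))
}

cpu=0
for irq in $(awk '/ixgbe/ { sub(":", "", $1); print $1 }' /proc/interrupts); do
  echo "$(cpu_mask $cpu)" > "/proc/irq/$irq/smp_affinity"
  cpu=$((cpu + 1))
done
```

Note that the irqbalance daemon may rewrite these masks; disable it if you pin IRQs by hand.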
20. Tuning – Using Intel® 82599EB 10 Gigabit Ethernet Controller (Cont'd)
• IRQ number << CPU cores
  – BKM: bind IRQs to the same physical CPU or the same NUMA node
• Know your CPU architecture
  – Bind IRQs in turn: cpu0–cpu7, cpu16–cpu23, then cpu8–cpu15, cpu24–cpu31
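A NUMA-aware variant needs the list of cores on the NIC's own node, which sysfs exposes as a cpulist string (e.g. /sys/devices/system/node/node0/cpulist). The helper below is an illustrative sketch assuming the standard "a-b,c-d" cpulist syntax; it expands the string so IRQs can be dealt out across only those cores:

```shell
# Expand a sysfs cpulist such as "0-7,16-23" into one CPU index per line,
# suitable for round-robin IRQ binding within a single NUMA node.
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
    seq "$lo" "${hi:-$lo}"   # a bare "8" expands to just 8
  done
}

expand_cpulist "0-7,16-23"   # prints 0..7 then 16..23, one per line
```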
21. Tuning – Using Intel® 82599EB 10 Gigabit Ethernet Controller (Cont'd)
• Important extra component: memcached
  – Used for: caching client tokens; caching the Ring* for lookups
  – Tuning: increase the initial memory; increase the client concurrency
• dmesg: "ip_conntrack: table is full, dropping packet"
  – BKM: increase the NAT hash track table size,
    e.g. net.ipv4.netfilter.ip_conntrack_max = 655350
• Others: Linux* ulimit
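The conntrack BKM above translates to a one-line sysctl fragment. A sketch of the persistent form; the key shown matches the old ip_conntrack module of that era, while on current kernels the equivalent key is net.netfilter.nf_conntrack_max:

```
# /etc/sysctl.conf fragment -- enlarge the connection-tracking table
net.ipv4.netfilter.ip_conntrack_max = 655350
```

Apply it without a reboot via `sysctl -p`.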
22. Tuning – Using Intel® 82599EB 10 Gigabit Ethernet Controller (Cont'd)

  Workload     IOPS    RESP (ms)  Success Rate  Vs Before Tuning
  Small Read   7571.4  189.74     99.9%         >90%
  Large Read   736.42  2678.9     99.49%        >90%
  Small Write  563.34  716.97     100%          ~0%
  Large Write  121.38  3280.7     100%          ~30%

(Except small write) Proxy: CPU usage ~50%, NIC usage ~40%
Storage: NIC usage ~50%, CPU ~40%
[Charts: proxy NIC TX/RX throughput (KB/s) and storage CPU utilization (user%, sys%, iowait%).]
23. Tuning – Scale Up Disk
Scale up the storage nodes: from 2 SATA disks to 4 SATA disks

  Workload     IOPS    RESP (ms)  Success Rate  Vs Before Tuning
  Small Write  723.34  696.17     100%          ~28%

[Charts: proxy NIC TX/RX throughput (KB/s) and storage CPU utilization (user%, sys%, iowait%).]
24. Tuning – Use Intel® SSD 320 Series for Account & Container
• Intel® SSDs can improve disk performance, but are too expensive to replace all SATA* disks
• Account & container data can be stored on SSD to improve performance

Special workload: a container owns too many objects, and writes continue…

Before (SATA):
  Workload  IOPS    RESP (ms)  Success Rate
  Special   245.19  303.19     100%

After (SSD for account & container):
  Workload  IOPS    RESP (ms)  Success Rate  Vs Before Tuning
  Special   298.13  292.23     100%          >20%
25. Swift* Tuning Summary
• Sample configuration
  – Hardware
    10GbE for the proxy node, or 10GbE for the load balancer & proxy node
    More disks per storage node
    SSDs for account & container data
  – Software
    Bind each IRQ to a dedicated core
    Increase memcached memory & concurrency
    Increase the NAT hash track table size
  – Swift
    Proxy workers: 64 (twice the CPU cores)
    Object workers: 16 (half the CPU cores)
    Account workers: 16 (half the CPU cores)
    Container workers: 16 (half the CPU cores)
    XFS inode size: 1024
    Memcached for authorization
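The worker counts above live in the Swift server configs. A minimal sketch, assuming the default /etc/swift paths and the 32-core proxy / storage nodes of this test bed:

```
# /etc/swift/proxy-server.conf (twice the CPU cores)
[DEFAULT]
workers = 64

# /etc/swift/object-server.conf (half the CPU cores;
# account-server.conf and container-server.conf alike)
[DEFAULT]
workers = 16
```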
26. Swift* Tuning Summary

  Workload     IOPS    RESP (ms)  Success Rate  Vs Baseline
  Small Read   7571.4  189.74     99.9%         350%
  Large Read   736.42  2678.9     99.49%        350%
  Small Write  723.34  696.17     100%          ~50%
  Large Write  121.38  3280.7     100%          ~220%

[Diagram: large-scale deployment sample.]
28. Ceph*: Architecture Overview
Ceph* uniquely delivers object, block, and file storage in one unified system. It is highly reliable, easy to manage, and free.

Three interfaces (serving APP, HOST/VM, and Client consumers):
1. Ceph FS – a POSIX-compliant distributed file system, with a Linux* kernel client and support for FUSE
2. Ceph RADOS Gateway (RADOSGW) – a bucket-based REST gateway, compatible with S3 and Swift
3. Ceph Block Devices (RBD) – a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

These sit on LIBRADOS, a library allowing apps to directly access RADOS, with support for C, C++, Java*, Python*, Ruby, and PHP. RADOS itself is a reliable, autonomic, distributed object store comprised of self-healing, self-managing, intelligent storage nodes.

Our focus is Ceph RBD.
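For orientation, creating and attaching an RBD image for a benchmark like the one that follows looks roughly like this. The pool/image names are illustrative; the commands are the standard `rbd` CLI, run on a node with cluster access:

```shell
rbd create --size 10240 rbd/bench-img   # 10 GB image (size is in MB) in the default pool
rbd map rbd/bench-img                   # expose it, e.g. as /dev/rbd0, via the kernel client
mkfs.xfs /dev/rbd0                      # make a file system on the emulated block device
mount /dev/rbd0 /mnt/rbd-block          # mount point used by the iozone runs below
```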
29. Ceph*: Arch Overview (Cont.)
Components:
• MDS (Metadata Server Cluster)
• OSD (Object Storage Cluster)
• MON (Cluster Monitors)
• Client

[Diagram: system architecture. Clients perform file I/O by communicating directly with OSDs. Each process can either link directly to a client instance or interact with a mounted file system.]
31. Workload & Baseline Result
• Workload
  – Benchmark tool: iozone v3.397
  – Single-client R/W testing:

    iozone -i 0 -i 1 -r X -s Y -f /mnt/rbd-block/iozone -Rb ./rbd-X-Y.xls -I -+r

    X is the record size, Y is the file size.
    -I: use O_DIRECT for all operations
    -+r: use O_RSYNC|O_SYNC for all operations
• Performance
  [Chart: 1-client R/W throughput (KB) vs file size (256M, 512M, 1G, 2G) for 1M/4M/16M record sizes; writes are limited by system network IO.]
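The single iozone invocation above generalizes to the sweep behind the chart. A sketch that first prints the command lines so they can be reviewed before being piped to `sh` (the mount point and output file names follow the slide):

```shell
# Generate one iozone command per (record size, file size) pair from the slide.
gen_iozone_cmds() {
  for rec in 1m 4m 16m; do
    for size in 256m 512m 1g 2g; do
      echo "iozone -i 0 -i 1 -r $rec -s $size" \
           "-f /mnt/rbd-block/iozone -Rb ./rbd-$rec-$size.xls -I -+r"
    done
  done
}

gen_iozone_cmds            # review the 12 commands; then: gen_iozone_cmds | sh
```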
33. Performance Tuning Practices
Step 2: Private Network for OSDs
Reason: Ceph* can be configured with a separate network across OSDs for internal data transportation (data redundancy copies), which offloads OSD outbound bandwidth.
Action: configure Ceph with a dedicated private network in ceph.conf:

  [osd]
  cluster network = 192.168.3.0/24
  public network = 10.0.0.0/24
  [osd.0]
  public addr = 10.0.0.19:6802
  cluster addr = 192.168.3.19

Result: slight boost for write.
[Chart: throughput (KB/s) at 1M/4M/16M record sizes, SSD vs SSD-Private; gains of roughly 1.02x–1.06x.]
34. Performance Tuning Practices
Step 3: 1GbE Network Adaptor Bonding
Reason: we may observe that the client's NIC bandwidth has been used up.
Action: configure the client to use adaptor bonding.
Result: slight boost for write.
[Chart: throughput (KB/s) at 1M/4M/16M record sizes, SSD-Private vs SSD-Private-Bonding; gains of roughly 1.02x–1.10x.]
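Bonding two 1GbE ports on the client can be sketched with iproute2. The interface names eth0/eth1 and the round-robin mode are assumptions, not from the deck; run as root and re-attach the IP address to bond0 afterwards:

```shell
modprobe bonding                                  # load the bonding driver
ip link add bond0 type bond mode balance-rr       # round-robin across slaves
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```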
35. Performance Tuning Practices
Step 4: Use 10GbE to Replace 1GbE
Reason: the emulated block device has high IO wait; NIC throughput is unbalanced.
Action: one way is to adjust the bonding load-balance algorithm; but given that full utilization of bonding is limited to 200 MB/s, here 10GbE is adopted directly.
Result: great boost in read.
[Chart: rewrite and read throughput (KB/s), 1G Bonding/SSD vs 10G/SSD; 1.02x for rewrite, 4.33x for read.]
38. GlusterFS*: Architecture
A scale-out NAS file system based on a stackable user-space design. Key concepts:
• Server
• Brick
• Client
• Sub-volume
• Volume

[Diagram: clients reach server-side Gluster volumes directly or through a storage gateway via NFS, CIFS (Samba), or RDMA; each volume is built from bricks on storage cloud nodes.]
44. GlusterFS*: Striped Volume Tuning
• GlusterFS* volume: type Striped, with volume options
• Hardware optimization: use Intel® SSD to replace HDD; use Intel® 10G NIC to replace the 1GbE NIC

[Charts: read (MB/s, peaking around 355 MB/s) and write (MB/s, peaking around 130 MB/s) throughput with network throughput overlaid, comparing four configurations; gains range from 1.13x to 3.25x over baseline.]
Baseline: volume options disabled
Options: relevant volume optimization options enabled
SSD: bricks on SSD
10G: both client and server use 10G NIC
45. Tuning Best Known Methods
• GlusterFS volume options optimization
  – Read large files
    io-thread-count: 64
    cache-size: 2GB
    cache-max-file-size and cache-min-file-size
  – Write large files
    write-behind-window-size: 1GB
    write-behind: on
    io-thread-count: 64
    flush-behind: on
• Hardware optimization
  – Use Intel® SSD to replace HDD
  – Use Intel® 10G NIC to replace the 1GbE NIC
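Applied with the standard `gluster volume set` CLI, the options above look like this. The volume name vol0 is an assumption; the keys are the GlusterFS performance translator options corresponding to the short names on the slide:

```shell
gluster volume set vol0 performance.io-thread-count 64
gluster volume set vol0 performance.cache-size 2GB
gluster volume set vol0 performance.write-behind on
gluster volume set vol0 performance.write-behind-window-size 1GB
gluster volume set vol0 performance.flush-behind on
```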
47. Summary
• Scale-out storage is one of the major new trends in data center storage evolution
• Intel® platforms and products can greatly increase the performance and expand the usage models of scale-out storage solutions
• Open source solutions generally need careful tuning before achieving reliable performance
48. Next Steps
Our plans:
• Scalability optimization for Ceph*/GlusterFS*
• SSD usage models
For the audience:
• Is scale-out storage suitable for you?
• Contact us!
49. Additional Sources of Information:
• Other Sessions
  – TECS003 - Lustre*: The Exascale File System, Now at Intel - Room 306B at 17:00
• Demos in the showcase
  – Teamsun* OpenStack* Swift* Scale-Out storage solution based on Intel 10GbE
  – Customer Application Case Study: Intel® Xeon Phi™ Platform After Porting and Tuning
  – Resource Scheduler & Performance Monitoring for Intel® Xeon® Processor & Intel Xeon Phi Hybrid Cluster
• More web-based info
  – http://www.intel.cn/content/www/cn/zh/ethernet-controllers/ethernet-controllers.html (Chinese)
  – http://www.intel.cn/content/www/cn/zh/solid-state-drives/solid-state-drives-ssd.html (Chinese)
  – http://www.intel.cn/content/www/cn/zh/intelligent-systems/embedded-software-tools-for-developers-to-debug-and-optimize.html (Chinese)
51. Legal Disclaimer
• Any software source code reprinted in this document is furnished under a software license and may only be used or copied
in accordance with the terms of that license.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to
whom the Software is furnished to do so, subject to the following conditions:
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
52. Intel's compilers may or may not optimize to the same degree for non-Intel
microprocessors for optimizations that are not unique to Intel microprocessors.
These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other
optimizations. Intel does not guarantee the availability, functionality, or
effectiveness of any optimization on microprocessors not manufactured by Intel.
Microprocessor-dependent optimizations in this product are intended for use with
Intel microprocessors. Certain optimizations not specific to Intel
microarchitecture are reserved for Intel microprocessors. Please refer to the
applicable product User and Reference Guides for more information regarding the
specific instruction sets covered by this notice.
Notice revision #20110804
53. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the
future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,”
“intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking
statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking
statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors
could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the
following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand
could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance
of Intel’s and competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns
including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial
conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could
negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by
a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult
to forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and
market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing
programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological
developments and to incorporate new features into its products. The gross margin percentage could vary significantly from
expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying
products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and
associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials
or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and
intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in
countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters,
infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and
compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's
products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures.
Intel’s current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The
succession and transition process may have a direct and/or indirect effect on the business and operations of the company. In
connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are
being considered for the CEO position), and keep employees focused on achieving the company’s strategic goals and objectives. Intel's
results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and
by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as
the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an
injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting
Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed
discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most
recent Form 10-Q, report on Form 10-K and earnings release.
Rev. 1/17/13