Accelerate Your AI Today
Intel® Xeon® Scalable Processors continue to be the foundation for
artificial intelligence, machine learning, and deep learning, and Intel
continues to focus on speeding up the ENTIRE data pipeline, not just
accelerating small chunks of it.
Many startups are creating AI hardware accelerators that promise huge
performance gains, but did you know that you can get a "software AI
accelerator" on Intel® Xeon® Scalable Processors that delivers up to
100x performance gains on machine learning (ML) workloads simply by
installing our free Intel® Distribution for Python?6
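What does that look like in practice? Here is a minimal sketch, assuming the scikit-learn-intelex patching interface that ships with the Intel® Distribution for Python; the dataset, estimator, and parameters are purely illustrative.

    # Minimal sketch: patch scikit-learn so existing estimators run on the
    # Intel-optimized oneDAL kernels (scikit-learn-intelex, bundled with the
    # Intel Distribution for Python).
    from sklearnex import patch_sklearn
    patch_sklearn()  # must be called before the scikit-learn imports below

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    # Illustrative dataset only; real workloads would load their own data.
    X, y = make_classification(n_samples=10_000, n_features=40, random_state=0)
    clf = SVC(kernel="rbf").fit(X, y)   # SVC training uses the optimized backend
    preds = clf.predict(X)              # SVC/kNN predict is one of the cited speedups

Because the patch is applied before scikit-learn is imported, existing notebooks and scripts typically need no other code changes.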
For deep learning (DL) workloads, 3rd Generation Intel® Xeon® Scalable
Processors are showing a greater than 10x improvement with
Intel® DL Boost and optimized software!
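On the TensorFlow side, the optimized path is enabled with a single environment flag. A minimal sketch follows, assuming stock TensorFlow 2.x (the flag is on by default in recent x86 builds); note that fully exploiting Intel® DL Boost's int8/VNNI instructions also requires a quantized model, which this sketch does not cover.

    # Minimal sketch: turn on the oneDNN-optimized kernels behind Intel's
    # TensorFlow speedups. The flag must be set before TensorFlow is imported.
    import os
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

    import numpy as np
    import tensorflow as tf

    # Illustrative image-recognition model; weights=None avoids a download.
    model = tf.keras.applications.ResNet50(weights=None)
    batch = np.random.rand(8, 224, 224, 3).astype("float32")
    _ = model.predict(batch)   # inference runs through the oneDNN kernels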
But how does that compare against the competition? Since most
data scientists don't run just a single AI workload, we looked at a recent
Kaggle survey and chose a broad range of popular machine and deep
learning models, including training and inference.
Not only did we outperform both AMD EPYC 7763 (codenamed
Milan) and Nvidia A100 GPUs on a majority of workloads, but we
also outperformed them across the geomean of these key customer
workloads. Long story short, customers choose Intel® Xeon® Scalable
Processors for their performance and TCO benefits, and, as you'll read
later, these processors deliver the performance they need!
Inferencing Performance with Software Optimizations7
on 3rd Generation Intel® Xeon® Scalable Processors (higher is better)
Continued hardware innovation and software optimizations drive AI performance gains on Intel® Xeon® Scalable Processors.
How Do We Do It?
• Up to 10-100x faster with Intel-optimized versions over default TensorFlow (image recognition) / Scikit-Learn (SVC & kNN predict)1
• Up to 74% faster gen-on-gen (natural language processing)2
• Up to 25x faster than AMD EPYC 7763 (object detection)3
• Up to 1.5x higher performance than AMD EPYC 7763 (Milan) across 20 key customer AI workloads4
• Up to 1.3x higher performance than Nvidia A100 across 20 key customer AI workloads5
How much extra money should you spend to save a little time? The answer: it all depends on how much
time we're talking about. It's an unrealistic picture of a data scientist's day to say that they run a
program and then just sit on their hands waiting for it to finish. They ingest, process, and experiment
with data to create accurate models and strategies. This process takes a lot of experimentation and time
… time measured in hours and days, not microseconds. Intel looks at the entire pipeline, not just one
aspect of it. The graphic below shows a standard machine learning (ML) pipeline and how data scientists
spend their time. There is a misconception in the industry that GPUs are required to handle this
workflow, but that view is not grounded in what a data scientist actually does on a daily basis.
A Day in the Life of a Data Scientist
So how do you compare the performance of different solutions for an end-to-end (E2E) ML pipeline?
We’ve tested many real E2E workflows that read data from a large dataset (Readcsv in the chart below),
iterate on the data multiple times to create a model (ETL and Model Training), and then run predictions
on the model (ML Time).
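As a rough sketch of those phases, the snippet below times each stage of a pipeline with the same shape. The file name and target column are hypothetical; the measured configurations in the footnotes use Modin for preprocessing alongside Intel-optimized Scikit-Learn and XGBoost.

    # Minimal sketch of the E2E phases: Readcsv, ETL, Model Training, ML Time.
    import time
    import pandas as pd        # swap in "import modin.pandas as pd" to parallelize ETL
    import xgboost as xgb

    t0 = time.time()
    df = pd.read_csv("census.csv")                          # Readcsv (hypothetical file)
    t1 = time.time()
    df = df.dropna()
    X, y = df.drop(columns=["target"]), df["target"]        # ETL (hypothetical column)
    t2 = time.time()
    model = xgb.XGBRegressor(tree_method="hist").fit(X, y)  # Model Training
    t3 = time.time()
    preds = model.predict(X)                                 # ML Time (prediction)
    t4 = time.time()
    print(f"read={t1-t0:.1f}s etl={t2-t1:.1f}s train={t3-t2:.1f}s predict={t4-t3:.1f}s")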
3rd Generation Intel® Xeon® Scalable Processors deliver 25% faster E2E data science at all phases of the
pipeline compared to 2nd Generation Intel® Xeon® Scalable Processors.8
But what about GPUs? Will we be waiting hours or days longer to get results? When you look at the
whole picture, 3rd Generation Intel® Xeon® Scalable Processors deliver performance competitive with
GPUs for this representative E2E workload. In fact, the difference in completion time is less than the
average time between eye blinks9, a far cry from what some may want you to believe!
End-to-End Machine Learning Performance10 (lower is better)
Consider This:
3rd Generation Intel® Xeon® Scalable Processors deliver competitive performance without the likely added cost and complexity of switching to a GPU platform.
Do I Always Need the
Highest Performance?
Get the performance you NEED by optimizing on the
Intel® Xeon® Scalable Processor hardware you already use and trust.
Performance Results:
	 1	 See [117] at www.intel.com/3gen-xeon-config. Results may vary.
	 2	 See [123] at www.intel.com/3gen-xeon-config. Results may vary.
	 3	 See [45] at www.intel.com/3gen-xeon-config. Results may vary.
	 4	 See [43] at www.intel.com/3gen-xeon-config. Results may vary.
	 5	 See [44] at www.intel.com/3gen-xeon-config. Results may vary.
	 6 	 Intel® Distribution for Python is available to optimize performance for all Intel data center CPUs.
	 7	 See [118] at www.intel.com/3gen-xeon-config. Results may vary.
	 8	 Hardware configuration for Intel® Xeon® Platinum 8380: 1-node, 2x Intel® Xeon® Platinum 8380 (40C/2.3GHz, 270W TDP) processor on Intel® Software
Development Platform with 512 GB (16 slots/ 32GB/ 3200) total DDR4 memory, ucode X55260, HT on, Turbo on, Ubuntu 20.04 LTS, 5.4.0-65-generic,
2x Intel® SSD D3-S4610 Series. Hardware configuration for Intel® Xeon® Platinum 8280: 1-node, 2x Intel® Xeon® Platinum 8280L processor on Intel®
Software Development Platform (28C) with 384GB (12 slots/32GB/2933MHz) total DDR4 memory, ucode 0x4003003, HT on, Turbo on, Ubuntu 20.04
LTS, 5.4.0-65-generic, 2x Intel® SSD DC S3520 Series. Software: Python 3.7.9, Pre-processing Modin 0.8.3, Omniscidbe v5.4.1, Intel Optimized Scikit-
Learn 0.24.1, OneDAL Daal4py 2021.2, XGBoost 1.3.3, Dataset source: IPUMS USA: https://usa.ipums.org/usa/, Dataset (size, shape): (21721922, 45),
Datatypes int64 and float64, Dataset size on disk 362.07 MB, Dataset format .csv.gz, Accuracy metric MSE: mean squared error; COD: coefficient of
determination, tested by Intel, and results as of March 2021.
	 9	 Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4043155/
	
10	 Nvidia A100 is 1.9 seconds faster than 3rd Gen Intel® Xeon® Scalable processor supporting Intel® DL Boost on Census end-to-end machine Learning
performance. Hardware configuration for Intel® Xeon® Platinum 8380: 1-node, 2x Intel® Xeon® Platinum 8380 (40C/2.3GHz, 270W TDP) processor
on Intel® Software Development Platform with 512 GB (16 slots/ 32GB/ 3200) total DDR4 memory, ucode X55260, HT on, Turbo on, Ubuntu 20.04
LTS, 5.4.0-65-generic, 4x Intel® SSD D3-S4610 Series, tested by Intel, and results as of March 2021. Hardware configuration for Nvidia A100: 1-node,
2-socket AMD EPYC 7742 (64C) with 512 GB (16 slots/ 32GB/ 3200) total DDR4 memory, ucode 0x8301034, HT on, Turbo on, Ubuntu 18.04.5 LTS,
5.4.0-42-generic, NVIDIA A100 (DGX-A100), 1.92TB M.2 NVMe, 1.92TB M.2 NVMe RAID. Software configuration for Intel® Xeon® Platinum 8380:
Python 3.7.9, Pre-processing Modin 0.8.3, Omniscidbe v5.4.1, Intel Optimized Scikit-Learn 0.24.1, OneDAL Daal4py 2021.2, XGBoost 1.3.3. Software
configuration for Nvidia A100: Python 3.7.9, Pre-processing CuDF 0.17, Intel Optimized Scikit-Learn Sklearn 0.24, OneDAL CuML 0.17, XGBoost
1.3.0dev.rapidsai0.17, Nvidia RAPIDS 0.17, CUDA Toolkit CUDA 11.0.221. Dataset source: IPUMS USA: https://usa.ipums.org/usa/, Dataset (size, shape):
(21721922, 45), Datatypes int64 and float64, Dataset size on disk 362.07 MB, Dataset format .csv.gz, Accuracy metric MSE: mean squared error; COD:
coefficient of determination, tested by Intel, and results as of March 2021.
	
11	 Configuration: 2-socket Intel® Xeon® E5-2650 v4 processor 24 cores HT OFF, Total Memory 256 GB (16x 16GB / 2133 MHz), Linux-3.10.0-693.21.1.el7.
x86_64-x86_64-with-redhat-7.5-Maipo, BIOS: SE5C610.86B.01.01.0024.021320181901, Intel® Deep Learning Deployment Toolkit version 2018.1.249,
Intel® MKL-DNN version 0.14. Patch disclaimer: Performance results are based on testing as of June 15th 2018 and may not reflect all publicly available
security updates. No product can be absolutely secure.
Performance varies by use, configuration and other factors.
Learn more at www.Intel.com/PerformanceIndex. Performance
results are based on testing as of dates shown in configurations
and may not reflect all publicly available updates. See backup
for configuration details. No product or component can be
absolutely secure. Your costs and results may vary. Intel
technologies may require enabled hardware, software or service
activation.
Code names are used by Intel to identify products, technologies,
or services that are in development and not publicly available.
These are not "commercial" names and not intended to function
as trademarks.
© Intel Corporation. Intel, the Intel logo, and other Intel marks
are trademarks of Intel Corporation or its subsidiaries. Other
names and brands may be claimed as the property of others.
GE engineers needed an inferencing solution
that could keep pace with their imaging
pipeline and was flexible enough to
deploy on different CT scanner models,
or even in the data center or cloud, all
without increasing their costs. They had four
unused Intel® Xeon® Processor cores in their
machines and needed to hit a goal of at least
100 images per second in order to keep up
with their imaging pipeline. In collaboration
with the Intel team and utilizing the
OpenVINO™ toolkit, GE was able to realize
high performance and low TCO on
Intel® Xeon® Scalable Processors, resulting
in a 14x speed increase compared to their
baseline solution and 5.9x above their
inferencing targets!11
Check out the white paper here.
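For readers curious what an OpenVINO™ deployment like GE's looks like in code, here is a minimal sketch using the current OpenVINO Python API (the original study predates it and used the earlier Inference Engine API); the model file and input shape are hypothetical placeholders.

    # Minimal sketch: CPU inference with the OpenVINO toolkit on Xeon cores.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("ct_model.xml")              # hypothetical OpenVINO IR model
    compiled = core.compile_model(model, device_name="CPU")
    output = compiled.output(0)

    image = np.random.rand(1, 1, 512, 512).astype("float32")  # placeholder CT slice
    result = compiled([image])[output]                    # run inference on the CPU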
Inferencing Throughput (higher is better)