Never-Ending Language Learner
“But Watson couldn’t distinguish between polite language and profanity — 
which the Urban Dictionary is full of” 
- Eric Brown (IBM)
Subverting Machine Learning 
for Fun And Profit 
Ram Shankar Siva Kumar, John Walton 
Email: Ram.Shankar@Microsoft.com; JoWalt@Microsoft.com
Goals 
• This talk: 
• Is a primer on Adversarial Machine Learning 
• Will show, through a sampling, how ML algorithms are vulnerable 
• Illustrates how to defend against such attacks 
• This talk IS NOT 
• An exhaustive review of all algorithms 
• End goal: Gain an intuitive understanding of ML algorithms and how 
to attack them
Agenda 
• Motivation to Attack ML systems 
• Practical Attacks and Defenses 
• Best Practices
ML is everywhere… 
“Machine Learning is shifting from an academic discipline to an 
industrial tool” – John Langford
In Security…!! 
“The only effective approach to 
defending against today’s ever-increasing 
volume and diversity of 
attacks is to shift to fully 
automated systems capable of 
discovering and neutralizing 
attacks instantly.” 
- Mike Walker (on DARPA 
Cyber Grand Challenge)
Traditional Programming: Data + Program -> Computer System -> Output 
Machine Learning: Data + Output -> Computer System -> Program 
Source: Lectures by Pedro Domingos
Things to note 
• For the program to be functional, the input data must be functional 
• What does a program/model look like? 
• Literally, a bunch of numbers and data points 
• The output model can be expressed in terms of its parameters:
Linear Regression: Number of Logons = 225 * Time + 875 (R² = 0.574) 
Non-Linear: Number of Logons = 982.23 * e^(0.1305 * Time) (R² = 0.6624) 
[Charts: logon counts over 8 time periods, with fitted linear and exponential curves]
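The linear model above really is just two numbers. A minimal sketch of recovering them with a least-squares fit, using made-up logon counts rather than the talk's data:

```python
import numpy as np

# Hypothetical logon counts over 8 time periods (illustrative numbers only)
time = np.arange(1, 9)
noise = np.array([50, -30, 20, -10, 40, -60, 10, -20])
logons = 225 * time + 875 + noise

# Ordinary least squares: the stored "model" is literally these two parameters
slope, intercept = np.polyfit(time, logons, 1)
print(f"Number of Logons = {slope:.0f} * Time + {intercept:.0f}")
```

Swap in real telemetry and the fitted parameters are all an attacker needs to predict, or game, the detector's expectations.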
Malicious Mindset 
• Data and parameters define the model 
• By controlling the data or parameters, you can change the model 
• Where do you find them? 
• Data 
• At the source 
• Collected in a big data store 
• Stored in the cloud (MLaaS) 
• Parameters: 
• Code repository
The mother lode 
Data is collected -> Data is within the anomaly detector’s purview -> Anomaly is significant for the detector -> Anomaly is surfaced! 
Source: Arun Viswanathan, Kymie Tan, and Clifford Neuman, Deconstructing the Assessment of Anomaly-based Intrusion Detectors, RAID 2013.
Putting it all together 
• Opportunity = ML is/will be everywhere 
• Prevalence = ML is/will be widely used in security 
• Ease = (most) ML algorithms can be easily subverted by controlling 
data/parameters 
• High rate of return = Once subverted, you can evade or even control 
the system 
Opportunity * Prevalence * Ease * High Rate of Return =
Agenda 
• Motivation to Attack ML systems 
• Practical Attacks and Defenses 
• Intuitive understanding of the algorithm 
• How the system looks before the attack 
• How the system looks after the attack 
• How to defend against these attacks 
• Takeaway – From evasion to total control of the system 
• Best Practices
About the dataset 
• Used the Enron Spam Dataset 
• Came out of the federal investigation of the Enron Corporation 
• Real-world corpus of spam and ham messages 
• 619,446 email messages belonging to 158 users. After cleaning 
(removing duplicate messages and discussion threads), you end up with 
200,399 messages.
Naïve Bayes Algorithm 

Word | P(Word|Spam) | P(Word|Ham) 
Assets | 0/3 | 2/3 
Assignment | 0/3 | 2/3 
Cialis | 3/3 | 0/3 
Group | 0/3 | 2/3 
Viagra | 1/3 | 0/3 
Valium | 2/3 | 0/3 

Choose whichever probability is higher: 
P(Spam|M) ∝ P(Spam) * P(W|Spam) 
P(Ham|M) ∝ P(Ham) * P(W|Ham) 

For a message M containing “Assets Assignment Group”: 
P(Spam|M) = 0.5 * (0/3) * (0/3) * (0/3) = 0 
P(Ham|M) = 0.5 * (2/3) * (2/3) * (2/3) ≈ 0.15 
Since 0.15 > 0, the message is more likely to be Ham.
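The worked example above can be checked in a few lines of plain Python. The word likelihoods and the 0.5 priors come from the slide's table; the message is assumed to be one containing the words assets, assignment, and group:

```python
# Per-word likelihoods from the slide's table (3 spam, 3 ham training messages)
p_word_spam = {"assets": 0/3, "assignment": 0/3, "cialis": 3/3,
               "group": 0/3, "viagra": 1/3, "valium": 2/3}
p_word_ham  = {"assets": 2/3, "assignment": 2/3, "cialis": 0/3,
               "group": 2/3, "viagra": 0/3, "valium": 0/3}

def score(words, prior, likelihood):
    # Naive Bayes: prior times the product of per-word likelihoods
    s = prior
    for w in words:
        s *= likelihood[w]
    return s

message = ["assets", "assignment", "group"]
p_spam = score(message, 0.5, p_word_spam)  # 0.5 * 0 * 0 * 0 = 0
p_ham = score(message, 0.5, p_word_ham)    # 0.5 * (2/3)**3 ≈ 0.148
print("ham" if p_ham > p_spam else "spam") # prints "ham"
```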
Before Attack 
• Built a vanilla Naïve Bayes classifier on the Enron email dataset (with 
some normalizations) 
• Goal: Given a new subject line, can I predict if it is spam or ham? 
• Testing on 20% of the data, you get a test accuracy of 62%
After the attack 
• Good Word Attack: introduce innocuous words into the message 
E.g.: Gas Meeting Expense Report Payroll 
-> Test accuracy dropped to 52.8% 
[Chart: false positive rate climbing as the number of benign words added grows from 0 to 30]
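A toy version of the good word attack, on a hand-rolled Naive Bayes with a tiny made-up corpus (not the Enron data), shows the mechanism. Laplace smoothing is added here, an implementation choice of this sketch, so unseen words don't zero out the product:

```python
# Tiny illustrative corpora; real attacks target much larger models
spam_docs = [["cialis", "viagra"], ["valium", "cialis"], ["cialis", "offer"]]
ham_docs = [["gas", "meeting"], ["expense", "report"], ["payroll", "meeting"]]

def likelihood(word, docs):
    # Fraction of documents containing the word, with Laplace smoothing
    count = sum(word in d for d in docs)
    return (count + 1) / (len(docs) + 2)

def spam_odds(words):
    # Likelihood ratio: > 1 means "more spam-like than ham-like"
    odds = 1.0
    for w in words:
        odds *= likelihood(w, spam_docs) / likelihood(w, ham_docs)
    return odds

msg = ["cialis", "offer"]
attacked = msg + ["gas", "meeting", "expense", "report", "payroll"]
print(spam_odds(msg))       # > 1: flagged as spam
print(spam_odds(attacked))  # < 1: the appended "good words" flip the verdict
```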
Takeaway 
• How to use this in the real world: spear phishing 
• By manipulating the input to the algorithm, we can increase the false 
positive rate 
• Make the system unusable!
Support Vector Machines – The Ferrari of ML 
• Immensely popular 
• Quite fast 
• Deliver solid performance 
• Widely used in classification settings 
In security settings, SVMs are beginning to gain 
popularity in the malware community. 
• Goal: Given a piece of code, is it malicious 
or benign?
Intuition 
Which is the right decision boundary?
SVM Intuition 
Choose the hyperplane that maximizes the 
margin between the positive and negative 
examples! 
The examples on the boundary are called 
support vectors!
Facts about SVMs 
• Output of an SVM = a set of weights + support vectors 
• Once you have the support vectors {special points in the 
training data}, the rest of the training data can be thrown away 
• Takeaway: A good part of the model is determined by the 
support vectors 
• Intuition: Controlling the support vectors should help us 
control the model
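A quick scikit-learn sketch (toy 2-D data, not a real malware feature set) makes this concrete: after training, only a handful of points define the boundary.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
benign = rng.randn(50, 2) + [2, 2]       # toy "benign" feature vectors
malicious = rng.randn(50, 2) + [-2, -2]  # toy "malicious" feature vectors
X = np.vstack([benign, malicious])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
# The rest of the training data could be discarded without changing predictions
print(len(clf.support_vectors_), "support vectors define the boundary,",
      "out of", len(X), "training points")
```

An attacker who can perturb or plant those few points moves the boundary itself.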
Going after support vectors
Takeaway 
• How it can be used in the real world: fool the malware classifier 
• Changes to the support vectors lead to changes in the decision boundary
Clustering 
• Widely used learning algorithm for 
anomaly detection
Attack Intuition 
Center 
Before Attack 
After Attack 
Attack Point 
to be included 
Source: Laskov, Pavel, and Marius Kloft. "A framework for quantitative security analysis of machine learning." Proceedings of the 2nd ACM Workshop on Security and Artificial Intelligence. ACM, 2009.
Takeaway 
• To attack the algorithm, we don’t change the parameter 
(centroid) -> we simply send in data as part of “normal” traffic 
• This increases the false negative rate
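The drift can be sketched in plain NumPy: a single-centroid detector with a finite training window, and an attacker who injects points just inside the "normal" radius so each one is accepted. The numbers are illustrative, following the Laskov and Kloft framework cited above:

```python
from collections import deque
import numpy as np

center = np.array([0.0, 0.0])
radius = 1.0                              # points within radius count as normal
target = np.array([5.0, 0.0])             # attack point we want accepted
window = deque([np.array([0.1, 0.0])], maxlen=10)  # finite training window

for _ in range(50):
    direction = target - center
    direction /= np.linalg.norm(direction)
    probe = center + 0.9 * radius * direction   # just inside the boundary
    window.append(probe)                        # accepted as "normal" traffic
    center = np.mean(window, axis=0)            # detector re-estimates centroid

print("target now looks normal:", np.linalg.norm(target - center) < radius)
```

Each probe is legitimate by the detector's own rule, yet after enough rounds the centroid has walked to where the attack point no longer looks anomalous.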
Summary of Attacks 
Algorithm | Result of Attack | What does this mean? 
Naïve Bayes | Increased false positive rate | You can make the system unusable 
K-means clustering | Increased false negative rate | You can evade detection 
SVM | Control of the decision boundary | You have full control of what gets alerted and what doesn’t
Ensembling – You can’t fool ‘em all 
- Build separate models to detect 
malicious activity 
- The models are chosen so that they are 
orthogonal 
- Each model independently assesses 
maliciousness 
- Results are combined using a separate 
function
• Used Gaussian Naïve Bayes and linear SVM in addition to Naïve Bayes 
• Used a simple majority voting method to combine the three outputs.
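A sketch of that three-model majority vote with scikit-learn (assumed available), on synthetic count data standing in for the Enron features:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.poisson(3, size=(400, 20)).astype(float)  # toy word-count features
y = (X[:, 0] + X[:, 1] > 6).astype(int)           # toy spam/ham label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ensemble = VotingClassifier(
    estimators=[("gnb", GaussianNB()),
                ("mnb", MultinomialNB()),
                ("svm", LinearSVC())],
    voting="hard")                                # simple majority vote
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```

An attacker now has to fool at least two orthogonal models at once, not just one.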
Using Robust Learning Methods 
• Intuition: Treat the tainted data points 
as outliers (presumably because of 
noise) 
Outlier?
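One simple realization of that intuition, as an illustrative median/MAD filter (the thresholds here are assumptions of this sketch, not from the talk): drop points that sit far from the bulk of the data before training.

```python
import numpy as np

rng = np.random.RandomState(1)
clean = rng.randn(100, 2)                      # legitimate training points
tainted = np.array([[8.0, 8.0], [9.0, 7.5]])   # injected poison points
X = np.vstack([clean, tainted])

# Distance of each point from the (robust) median of the data
median = np.median(X, axis=0)
dist = np.linalg.norm(X - median, axis=1)

# Median absolute deviation gives a cutoff the poison points can't drag around
mad = np.median(np.abs(dist - np.median(dist)))
keep = dist < np.median(dist) + 5 * mad

print("kept", int(keep.sum()), "of", len(X), "points; poison removed:",
      not keep[-1] and not keep[-2])
```

Because medians barely move when a few extreme points are added, the tainted points get filtered out while almost all legitimate data survives.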
Instead of | Consider 
Vanilla Naïve Bayes | Multinomial model (even better than the multivariate Bernoulli model) 
SVM | Robust SVM (handles feature noise and label noise) 
K-means with finite window | K-means with infinite window 
Logistic Regression | Robust Logistic Regression using shift parameters 
Vanilla PCA | Robust PCA with Laplacian threshold (Antidote)
Caution! 
• Pros: Well studied field with a gamut of choices 
• Optimization perspective 
• Game Theoretic perspective 
• Statistical perspective 
• Cons: 
• Some of these algorithms have higher computational complexity than standard 
algorithms 
Standard SVM: 10 minutes Robust SVM: 1 hr and 8 mins 
(Single node implementation, 50k data points, 20% test, no kernel ) 
• Requires a lot more tuning and babysitting
Agenda 
• Motivation to Attack ML systems 
• Practical Attacks and Defenses 
• Best Practices
Threat Modeling 
• Adversary Goal - Evasion? Poisoning? Deletion? 
• Adversary’s knowledge – Perfect Knowledge? Limited Knowledge? 
• Training set or part of it 
• Feature representation of each sample 
• Type of a learning algorithm and the form of its decision function 
• Parameters and hyper-parameters of the learned model 
• Feedback from the classifier; e.g., classifier labels for samples chosen by the 
adversary. 
• Attacker’s capability 
• Ability to modify – Complete or partial? 
Source: Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Poisoning attacks against support vector machines." arXiv preprint arXiv:1206.6389 (2012).
Table stakes 
• Secure log sources 
• Secure your storage space 
• Monitor data quality 
• Treat parameters and features as secrets 
• Don’t use publicly available datasets to train your system 
• When designing the system, avoid interactive feedback
3 Key Takeaways 
1) Naïve implementations of machine learning algorithms are 
vulnerable to attacks. 
2) Attackers can evade detections, cause the system to be unusable, or 
even control it. 
3) Trustworthy results depend on trustworthy data.
Thank you! 
- TwC: Tim Burell 
- Azure Security: Ross Snider, Shrikant 
Adhirkala, Sacha Faust Bourque, 
Bryan Smith, Marcin Olszewski, 
Ashish Kurmi, Lars Mohr, Ben 
Ridgway 
- O365 Security: Dave Hull, Chetan 
Bhat, Jerry Cochran 
- MSR: Jay Stokes, Gang Wang 
(intern) 
- LCA: Matt Sommer 
Source: http://www.lecun.org/gallery/libpro/20011121-allyourbayes/dsc01228-02-h.jpg

Mais conteúdo relacionado

Mais procurados

rsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morningrsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morning
Jeff Heaton
 

Mais procurados (20)

Navy security contest-bigdataforsecurity
Navy security contest-bigdataforsecurityNavy security contest-bigdataforsecurity
Navy security contest-bigdataforsecurity
 
Intern Poster Presentation
Intern Poster PresentationIntern Poster Presentation
Intern Poster Presentation
 
Machine Learning Algorithm & Anomaly detection 2021
Machine Learning Algorithm & Anomaly detection 2021Machine Learning Algorithm & Anomaly detection 2021
Machine Learning Algorithm & Anomaly detection 2021
 
Anomaly detection, part 1
Anomaly detection, part 1Anomaly detection, part 1
Anomaly detection, part 1
 
Simple math for anomaly detection toufic boubez - metafor software - monito...
Simple math for anomaly detection   toufic boubez - metafor software - monito...Simple math for anomaly detection   toufic boubez - metafor software - monito...
Simple math for anomaly detection toufic boubez - metafor software - monito...
 
rsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morningrsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morning
 
AI model security and robustness
AI model security and robustnessAI model security and robustness
AI model security and robustness
 
Anomaly Detection - Real World Scenarios, Approaches and Live Implementation
Anomaly Detection - Real World Scenarios, Approaches and Live ImplementationAnomaly Detection - Real World Scenarios, Approaches and Live Implementation
Anomaly Detection - Real World Scenarios, Approaches and Live Implementation
 
Anomaly Detection for Real-World Systems
Anomaly Detection for Real-World SystemsAnomaly Detection for Real-World Systems
Anomaly Detection for Real-World Systems
 
Today
TodayToday
Today
 
Anomaly detection
Anomaly detectionAnomaly detection
Anomaly detection
 
Anomaly detection with machine learning at scale
Anomaly detection with machine learning at scaleAnomaly detection with machine learning at scale
Anomaly detection with machine learning at scale
 
Anomaly Detection and Spark Implementation - Meetup Presentation.pptx
Anomaly Detection and Spark Implementation - Meetup Presentation.pptxAnomaly Detection and Spark Implementation - Meetup Presentation.pptx
Anomaly Detection and Spark Implementation - Meetup Presentation.pptx
 
Security evaluation of pattern classifiers under attack
Security evaluation of pattern classifiers under attack Security evaluation of pattern classifiers under attack
Security evaluation of pattern classifiers under attack
 
Security evaluation of pattern classifiers under attack
Security evaluation of pattern classifiers under attackSecurity evaluation of pattern classifiers under attack
Security evaluation of pattern classifiers under attack
 
An Introduction to Anomaly Detection
An Introduction to Anomaly DetectionAn Introduction to Anomaly Detection
An Introduction to Anomaly Detection
 
Machine Learning in Malware Detection
Machine Learning in Malware DetectionMachine Learning in Malware Detection
Machine Learning in Malware Detection
 
Protection Poker: An Agile Security Game
Protection Poker: An Agile Security GameProtection Poker: An Agile Security Game
Protection Poker: An Agile Security Game
 
Machine Learning for Malware Classification and Clustering
Machine Learning for Malware Classification and ClusteringMachine Learning for Malware Classification and Clustering
Machine Learning for Malware Classification and Clustering
 
Introduction to machine learning
Introduction to machine learningIntroduction to machine learning
Introduction to machine learning
 

Semelhante a Subverting Machine Learning Detections for fun and profit

BsidesLVPresso2016_JZeditsv6
BsidesLVPresso2016_JZeditsv6BsidesLVPresso2016_JZeditsv6
BsidesLVPresso2016_JZeditsv6
Rod Soto
 
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
BlueHat Security Conference
 
Understand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day ThreatsUnderstand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day Threats
Rahul Mohandas
 
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
BlueHat Security Conference
 
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
Mohammed Almeshekah
 

Semelhante a Subverting Machine Learning Detections for fun and profit (20)

Seminar Presentation | Network Intrusion Detection using Supervised Machine L...
Seminar Presentation | Network Intrusion Detection using Supervised Machine L...Seminar Presentation | Network Intrusion Detection using Supervised Machine L...
Seminar Presentation | Network Intrusion Detection using Supervised Machine L...
 
BsidesLVPresso2016_JZeditsv6
BsidesLVPresso2016_JZeditsv6BsidesLVPresso2016_JZeditsv6
BsidesLVPresso2016_JZeditsv6
 
Adversarial machine learning for av software
Adversarial machine learning for av softwareAdversarial machine learning for av software
Adversarial machine learning for av software
 
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
BlueHat Seattle 2019 || The good, the bad & the ugly of ML based approaches f...
 
Cybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and BadCybersecurity Challenges with Generative AI - for Good and Bad
Cybersecurity Challenges with Generative AI - for Good and Bad
 
High time to add machine learning to your information security stack
High time to add machine learning to your information security stackHigh time to add machine learning to your information security stack
High time to add machine learning to your information security stack
 
Primer on major data mining algorithms
Primer on major data mining algorithmsPrimer on major data mining algorithms
Primer on major data mining algorithms
 
aml.pdf
aml.pdfaml.pdf
aml.pdf
 
Understand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day ThreatsUnderstand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day Threats
 
Understand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day ThreatsUnderstand How Machine Learning Defends Against Zero-Day Threats
Understand How Machine Learning Defends Against Zero-Day Threats
 
Machine Duping 101: Pwning Deep Learning Systems
Machine Duping 101: Pwning Deep Learning SystemsMachine Duping 101: Pwning Deep Learning Systems
Machine Duping 101: Pwning Deep Learning Systems
 
Robustness Metrics for ML Models based on Deep Learning Methods
Robustness Metrics for ML Models based on Deep Learning MethodsRobustness Metrics for ML Models based on Deep Learning Methods
Robustness Metrics for ML Models based on Deep Learning Methods
 
Design and Development of an Efficient Malware Detection Using ML
Design and Development of an Efficient Malware Detection Using MLDesign and Development of an Efficient Malware Detection Using ML
Design and Development of an Efficient Malware Detection Using ML
 
Icacci presentation-isi-ransomware
Icacci presentation-isi-ransomwareIcacci presentation-isi-ransomware
Icacci presentation-isi-ransomware
 
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
BlueHat v18 || Crafting synthetic attack examples from past cyber-attacks for...
 
IDS - Analysis of SVM and decision trees
IDS - Analysis of SVM and decision treesIDS - Analysis of SVM and decision trees
IDS - Analysis of SVM and decision trees
 
Machine learning in computer security
Machine learning in computer securityMachine learning in computer security
Machine learning in computer security
 
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses -- The...
 
Analytics for large-scale time series and event data
Analytics for large-scale time series and event dataAnalytics for large-scale time series and event data
Analytics for large-scale time series and event data
 
Machine Learning techniques used in AI.
Machine Learning  techniques used in AI.Machine Learning  techniques used in AI.
Machine Learning techniques used in AI.
 

Último

CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
giselly40
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
 

Último (20)

Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 

Subverting Machine Learning Detections for fun and profit

  • 2.
  • 3. “But Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of” - Eric Brown (IBM)
  • 4. Subverting Machine Learning for Fun And Profit Ram Shankar Siva Kumar, John Walton Email: Ram.Shankar@Microsoft.com; JoWalt@Microsoft.com
  • 5. Goals • This talk: • Is a primer on Adversarial Machine Learning • Will show, through a sampling, how ML algorithms are vulnerable • Illustrates how to defend against such attacks • This talk IS NOT • An exhaustive review of all algorithms • End goal: Gain an intuitive understanding of ML algorithms and how to attack them
  • 6. Agenda • Motivation to Attack ML systems • Practical Attacks and Defenses • Best Practices
  • 7. ML is everywhere… “Machine Learning is shifting from an academic discipline to an industrial tool” – John Langford
  • 8. In Security…!! “The only effective approach to defending against today’s ever-increasing volume and diversity of attacks is to shift to fully automated systems capable of discovering and neutralizing attacks instantly.” - Mike Walker (on DARPA Cyber Grand Challenge)
  • 9. Computer System Data Program Output Computer System Data Output Program Traditional Programming Machine Learning Source: Lectures by Pedro Domingos
  • 10. Things to note about • For the program to be functional, input data must be functional • What does a program/model look like? • Literally, bunch of numbers and data points • The output model can be expressed in terms of parameters: Linear Regression y = 225x + 875 3,500 3,000 2,500 2,000 1,500 1,000 500 0 R² = 0.574 1 2 3 4 5 6 7 8 Number of Logons Time Non-Linear y = 982.23e0.1305x R² = 0.6624 3,500 3,000 2,500 2,000 1,500 1,000 500 0 1 2 3 4 5 6 7 8 Number of Logons = 225 * Time + 875 Number of Logons = 982*e 0.1305* (Time)
  • 11. Malicious Mindset • Data and parameters define the model • By controlling the data or parameters, you can change the model • Where do you find them? • Data • At the source • Collected in a big data store • Stored in the cloud (MLaaS) • Parameters: • Code repository
  • 12. The mother lode Data is collected Data is within anomaly detector’s purview Anomaly is significant for detector Anomaly is surfaced! Source: Arun Viswanathan, Kymie Tan, and Clifford Neuman, Deconstructing the Assessment of Anomaly-based Intrusion Detectors, RAID 2013.
  • 13.
  • 14. Putting it all together • Opportunity = ML is/will be everywhere • Prevalence = ML is/will be widely used in security • Ease = (most) ML algorithms can be easily subverted by controlling data/parameters • High rate of return = Once subverted, you can evade or even control the system Opportunity * Prevalence * Ease * High Rate of Return =
  • 15. Agenda • Motivation to Attack ML systems • Practical Attacks and Defenses • Intuitive understanding of the algorithm • How the system looks before the attack? • How the system looks after the attack? • How to defend from these attacks? • Takeaway – From Evasion to total control of the system • Best Practices
  • 16. About the dataset • Used Enron Spam Dataset • Came out of the Federal investigation of Enron corporation • Real world corpus of spam and ham messages. • 619,446 email messages belonging to 158 users. After cleaning it up (removing duplicate messages, discussion threads), you end up with 200,399 messages.
  • 17.
  • 18.
  • 19. Word P(Word|Spam) P(Word|Ham) Assets 0/3 2/3 Assignment 0/3 2/3 Cialis 3/3 0/3 Group 0/3 2/3 Viagra 1/3 0/3 Vallium 2/3 0/3 Naïve Bayes Algorithm Choose whichever probability is higher: 푃 푆푝푎푚 푀 ∝ 푃 푆푝푎푚 ∗ 푃(W|Spam) 푃 퐻푎푚 푀 ∝ 푃 퐻푎푚 ∗ 푃(W|Ham) P(Spam|M) = 0.5*(0/3)*(0/3)*(0/3) = 0 P(Ham|M) = 0.5*(2/3)*(2/3)*(2/3) = 0.14 Since 0.14 > 0 => Message is more likely to be Ham
  • 20. Before Attack • Built a vanilla Naïve Bayes classifier on Enron email dataset (with some normalizations) • Goal: Given a new subject, can I predict if it is spam or ham? • Testing on 20% of data, you get test accuracy of 62%
  • 21. After the attack • Good Word Attack: Introduce innocuous words in the message E.g: Gas Meeting Expense Report Payroll -> Test Accuracy dropped to 52.8% 100 80 60 40 20 0 0 10 20 30 False Positive Rate Number of Benign words added
  • 22. Takeaway • How to use in real-world: Spear phishing • By manipulating the input to the algorithm, we can increase the false positive rate • Make the system unusable!
  • 23. Support Vector Machines – The Ferrari of ML • Immensely popular • Quite fast • Deliver a solid performance • Widely used in classification setting In Security setting, beginning to gain popularity in the Malware community. • Goal: Given a piece of code, is it Malicious or benign?
  • 24. Intuition Which is the right decision boundary?
  • 25. SVM Intuition Choose the hyperplane, that maximizes the margin between the positive and negative examples! Those examples on the boundary are called support vectors!
  • 26. Facts about SVMs • Output of SVM = a set of weights + Support vectors • Once you have the support vectors {special points in the training data}, rest of the training data can be thrown away • Takeaway: A good part of the model, is determined by support vectors • Intuition: Controlling the support vectors, should help us to control the model
  • 28. Takeaway • How it can be used in real-world: Fool the malware classifier • Changes to support vectors, lead to changes in decision boundary
  • 29. Clustering • Widely used learning algorithm for anomaly detection
  • 30. Attack Intuition Center Before Attack After Attack Attack Point to be included Source:Laskov, Pavel, and Marius Kloft. "A framework for quantitative security analysis of machine learning." Proceedings of the 2nd ACM workshop on Security and artificial intelligence. ACM, 2009.
  • 31.
  • 32. Takeaway • In order to attack the algorithm, we don’t change the parameter (centroid) -> Simply send in data as part of “normal” traffic • Increased the false negative rate
  • 33. Summary of Attacks Algorithm Result of Attack What does this mean? Naïve Bayes Increased false positive rate You can make the system unusable K-means clustering Increased false negative rate You can evade detection SVM Control of the decision boundary You have full control of what gets alerted and what doesn’t
  • 34. Ensembling – You can’t fool ‘em all - Build separate models to detect malicious activity - The models are chosen so that they are orthogonal - Each model independently assess for maliciousness - Results are combining using a separate function
  • 35. • Used Gaussian Naïve Bayes, linear SVM in addition to Naïve Bayes • Used a simple majority voting method, to combine the three outputs.
  • 36. Using Robust Learning Methods • Intuition: Treat the tainted data points as outliers (presumably because of noise) Outlier?
  • 37. Instead of Consider Vanilla Naïve Bayes Multinomial Model (even better than multivariate Bernoulli model) SVM Robust SVM (feature noise, and label noise) K-means with finite window K-means with infinite window Logistic Regression Robust Logistic Regression using Shift Parameters Vanilla PCA Robust PCA with Laplcian Threshold (Antidote)
  • 38. Caution! • Pros: Well studied field with a gamut of choices • Optimization perspective • Game Theoretic perspective • Statistical perspective • Cons: • Some of these algorithms have higher computational complexity than standard algorithms Standard SVM: 10 minutes Robust SVM: 1 hr and 8 mins (Single node implementation, 50k data points, 20% test, no kernel ) • Requires a lot more tuning and babysitting
  • 39. Agenda • Motivation to Attack ML systems • Practical Attacks and Defenses • Best Practices
  • 40. Threat Modeling • Adversary Goal - Evasion? Poisoning? Deletion? • Adversary’s knowledge – Perfect Knowledge? Limited Knowledge? • Training set or part of it • Feature representation of each sample • Type of a learning algorithm and the form of its decision function • Parameters and hyper-parameters of the learned model • Feedback from the classifier; e.g., classifier labels for samples chosen by the adversary. • Attacker’s capability • Ability to modify – Complete or partial? Source:Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Poisoning attacks against support vector machines." arXiv preprint arXiv:1206.6389 (2012).
  • 41. Table stakes • Secure log sources • Secure your storage space • Monitor data quality • Treat parameters and features as secrets • Don’t use publicly available datasets to train your system • When designing the system, avoid interactive feedback
  • 42. 3 Key Takeaways 1) Naïve implementations of machine learning algorithms are vulnerable to attack. 2) Attackers can evade detection, render the system unusable, or even control it. 3) Trustworthy results depend on trustworthy data.
  • 43. Thank you! - TwC: Tim Burell - Azure Security: Ross Snider, Shrikant Adhirkala, Sacha Faust Bourque, Bryan Smith, Marcin Olszewski, Ashish Kurmi, Lars Mohr, Ben Ridgway - O365 Security: Dave Hull, Chetan Bhat, Jerry Cochran - MSR: Jay Stokes, Gang Wang (intern) - LCA: Matt Sommer Source: http://www.lecun.org/gallery/libpro/20011121-allyourbayes/dsc01228-02-h.jpg

Editor’s Notes

  1. NELL, or the Never-Ending Language Learner, is a research project at CMU, started by Tom Mitchell, that reads the web. It uses innovative semi-supervised learning techniques: it learns most facts on its own, with minimal human involvement. For instance, it automatically learned that broiled chicken is a type of meat. You can even follow it on Twitter and rate its confidence. One day, it learned that Donald Trump is a type of wig.
  2. The same problem plagued Watson. The researchers wanted to teach it slang, but Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of. Watson picked up some bad habits from reading Wikipedia as well. In tests it even used the word “bullshit” in an answer to a researcher’s query.
  3. By 2016, 25 percent of large global companies will have adopted big-data analytics for at least one security or fraud-detection use case. Defenders are key.
  4. Some things to note since the last slide: for the end program to be useful, the input data must be functional. So what does this program — or in machine-learning speak, model — actually look like? It is not some fancy equation; it is literally a bunch of numbers/data points. To illustrate this, we modeled the number of logons at various times; our input to the system is a time series of logons. With linear regression, you get a linear relationship, and the red numbers — the parameters — are what gets stored. When we did a non-linear regression, only the numbers changed.
  5. So, what are the takeaways? Data (like the time series of logons) and parameters (the resulting numbers) define the model. By controlling one or both of them, you can control the model. Quick digression: where do you find the data/parameters?
  6. So, now that we know an attacker can control the model or its parameters, what can he do with it? For this we will walk through how an anomaly detection system works. As the walkthrough shows, once the data is corrupted, the anomalies that get surfaced are also corrupted.
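The corruption effect in this walkthrough can be sketched with a hypothetical threshold-based anomaly detector (flag anything above mean + 3 standard deviations); the detector, baseline numbers, and injected values are all made up for illustration.

```python
# Sketch of poisoning a simple anomaly detector: injecting a few large
# "normal" points inflates the baseline so a real anomaly slips through.
import statistics

def is_anomaly(history, x):
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return x > mu + 3 * sd  # hypothetical 3-sigma threshold rule

baseline = [10, 12, 11, 9, 10, 11, 10, 12]
print(is_anomaly(baseline, 50))           # True: 50 is flagged

poisoned = baseline + [45, 48, 52]        # attacker inflates the baseline
print(is_anomaly(poisoned, 50))           # False: 50 now slips through
```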
  7. In fact, things get really bad. You can increase the false negative rate and evade detection. You can increase the false positive rate and frustrate the incident responder, or even take complete control of the system.
  8. Here is the dataset before the attack — one of the four datasets commonly used in spam research: CSDMC2010, SpamAssassin, LingSpam, and Enron-Spam. We built a vanilla Naïve Bayes classifier on the Enron email dataset (with some normalizations): 619,446 email messages belonging to 158 users. After cleaning it up (removing duplicate messages and discussion threads), you end up with 200,399 messages.
  9. Which is Spam, and which is ham?
  10. Intuition behind Naïve Bayes: conditional independence — that is the “naïve” part. It can also be thought of as a bag-of-words model.
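The bag-of-words Naïve Bayes intuition can be sketched in a few lines. The tiny corpus below is invented for illustration; the talk's experiments used the Enron email dataset.

```python
# Sketch of bag-of-words Naïve Bayes: word order is discarded and tokens
# are treated as conditionally independent given the class (the "naïve"
# assumption). Training messages here are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_msgs = ["win a free prize now", "free money click now",
              "meeting agenda attached", "lunch at noon tomorrow"]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_msgs, train_labels)
print(model.predict(["free prize click"]))  # per-word counts drive the decision
```

Because the decision depends only on per-word counts, an attacker who can inject labeled training messages can shift those counts — which is exactly the poisoning attack the talk demonstrates.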
  11. What is normalization? You don’t work with the raw maximum-likelihood estimate; you estimate the log of it (this makes the calculation easy). A message is labeled spam if it was marked as spam in Outlook.
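The reason for the log trick mentioned in this note: multiplying many small per-word probabilities underflows floating point, while summing their logs stays finite. The probabilities below are invented to show the effect.

```python
# Sketch of the log-probability trick: the raw product of many small
# likelihoods underflows to 0.0, but the log-sum stays finite.
import math

word_probs = [1e-5] * 400          # per-word likelihoods for a long message
naive = 1.0
for p in word_probs:
    naive *= p                     # underflows to exactly 0.0
log_score = sum(math.log(p) for p in word_probs)  # finite, comparable score

print(naive, log_score)
```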
  12. Spear-phishing evasion: attackers can be successful with techniques such as drive-by downloads.
  13. The attacker controls the input, and that is how he shifts the center point: controlling the input influences the parameters.
  14. (Funny picture from The Matrix.)
  15. SVM - http://jmlr.org/proceedings/papers/v20/biggio11/biggio11.pdf
  16. (Slide)
  17. A. Globerson and S. T. Roweis. "Nightmare at test time: robust learning by feature deletion." In William W. Cohen and Andrew Moore, eds., ICML, vol. 148 of ACM Int’l Conf. Proc. Series, pp. 353–360, 2006. Game-theoretic view: typically seek a Nash equilibrium, where neither player has an incentive to change his strategy. This does not mean that either player’s payoff is maximized.
  18. i.e., how real objects, such as emails and network packets, are mapped into the classifier’s feature space.
  19. Model – Data health metrics;