BSidesLondon 20th April 2011 - David Rook (@securityninja)
-----------------------
This demonstration-filled talk will start by discussing the problems with the security code review approaches most people follow and the reasons why I created Agnitio. This will include a look at existing manual and automated static analysis procedures and tools. The talk will then move on to the Principles of Secure Development and how the principles have been mapped to over 60 different checklist items in Agnitio.
---- for more about David go to
http://www.securityninja.co.uk/
---- for more about Agnitio go to
http://sourceforge.net/projects/agnitiotool/
Agnitio: it's static analysis, but not as we know it
1. David Rook. Agnitio: it’s static analysis, but not as we know it. SecurityBSides, London.
Slides 2–22: image-only slides (no text captured in this transcript).
23. Cross Site Scripting, Injection Flaws, Security Misconfiguration, Information Leakage, Race Condition, Broken Authentication, Session Management, Cross Site Request Forgery, Buffer Copy without Checking Size on Input, Insecure Direct Object Reference, Failure to Restrict URL Access, Insecure Cryptographic Storage, SQL Injection, Content Spoofing, Insufficient Authorisation, Insufficient Authentication, Abuse of Functionality, Predictable Resource Location, Unrestricted Upload of File with Dangerous Type, Failure to Preserve SQL Query Structure, Failure to Preserve Web Page Structure, Failure to Preserve OS Command Structure, URL Redirection to Untrusted Site, Insufficient Transport Layer Protection, Improper Limitation of a Pathname to a Restricted Directory, Improper Control of Filename for Include/Require Statement in PHP Program, Incorrect Permission Assignment for Critical Resource, Download of Code Without Integrity Check, Information Exposure Through an Error Message, Reliance on Untrusted Inputs in a Security Decision, Use of Hard-coded Credentials, Buffer Access with Incorrect Length Value, Improper Check for Unusual or Exceptional Conditions, Use of a Broken or Risky Cryptographic Algorithm, Missing Encryption of Sensitive Data, Missing Authentication for Critical Function, Integer Overflow or Wraparound, Improper Validation of Array Index, Incorrect Calculation of Buffer Size, Unvalidated Redirects and Forwards, Allocation of Resource Without Limits or Throttling, Improper Access Control. The principles of secure development.
Slides 24–26: image-only slides (no text captured in this transcript).
27. The principles of secure development, shown against the same vulnerability classes listed on slide 23: Input Validation, Output Validation, Error Handling, Authentication, Authorisation, Session Management, Secure Communications, Secure Storage, Secure Resource Access, Auditing and Logging.
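The slides present the principles as a list rather than as code; purely as a hypothetical illustration (not taken from Agnitio’s checklist) of how one principle, input validation, becomes a concrete check during a review, a whitelist-style validator might look like this:

```java
import java.util.regex.Pattern;

public class UsernameValidator {

    // Whitelist validation: accept only the characters and length we expect,
    // rather than trying to blacklist "bad" input.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,32}$");

    public static boolean isValid(String candidate) {
        return candidate != null && USERNAME.matcher(candidate).matches();
    }
}
```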
42. My “shoot for the moon” vision for Agnitio: “we pretty much need a Burp Pro equivalent for Static Analysis – awesome, powerful in the right hands, and completely affordable!” http://www.securityninja.co.uk/application-security/can-you-implement-static-analysis-without-breaking-the-bank/comment-page-1#comment-9777
Even the best tools can be "noisy", requiring manual intervention to identify the "real"/useful flaws; a high number of false positives is a guarantee with automated static analysis tools. A human reviewer can find vulnerabilities a piece of software can’t, specifically issues relating to business logic flaws and A+A. The human can analyse findings instantly, in context and with a "real" opinion on the actual risk of each finding; automated tools provide good coverage but fail to understand context and more subtle or complex bugs. The downside with manual ("human") reviews is that they rely on humans: even the best humans get tired and make mistakes, and a reviewer can only concentrate properly on code for a short period (relative to tools) before the review becomes of little use. In my opinion a skilled, trained human reviewer is better than a tool, but manual reviews, automated reviews and DAST all find different issues and you should try to combine them all. No silver bullet! SmartBear Software conducted the largest study ever done on code review: it encompassed 2,500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems.
Tools don’t get tired and can review a lot of code in one go; they can be left to run for hours or days without tiring and cover almost all of the code. They can help find issues earlier in the SDLC if ingrained in the development phase, which means a lower fix cost but not necessarily a lower total cost: if a tool produces a high number of false positives, finding the real issues might negate some or all of the savings gained from introducing the tool into the development phase. Tools are expensive to buy and should never be used out of the box; they need to be configured to meet your own requirements, which is not a five-minute job, and tuning will take months. As I mentioned on the previous slide, humans can find vulnerabilities that tools can’t. Tools can often give people a false sense of security: no issues found by the tool != no issues in the code, as we will see in the next few slides. There are lots of factors involved in getting adoption and buy-in: false positive and negative rates, integration into the process, integration into IDEs, making static analysis happen automatically, how you review the issues found, and figuring out which bugs matter. SmartBear Software conducted the largest study ever done on code review: it encompassed 2,500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems.
SQL Injection - FindBugs found it
Same SQL Injection but not found by CodePro Analytix
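The vulnerable code in the demo exists only as slide screenshots, which this transcript does not capture. Purely as an illustration (not the actual demo code), the pattern a Java static analyser such as FindBugs typically flags here is a query built by string concatenation, with a PreparedStatement as the usual fix:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: the username is concatenated straight into the SQL string,
    // so input such as ' OR '1'='1 changes the structure of the query.
    public ResultSet findUser(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM users WHERE username = '" + username + "'");
    }

    // Safer: a PreparedStatement keeps the query structure fixed and treats
    // the username purely as data.
    public ResultSet findUserSafely(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE username = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}
```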
Path Manipulation flaw not found by FindBugs
Same Path Manipulation flaw found by CodePro Analytix
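Again as a hypothetical sketch rather than the demo code, a path manipulation flaw of the kind the slides refer to usually looks like a user-supplied filename joined straight onto a base directory, with canonicalisation plus a prefix check as one common mitigation:

```java
import java.io.File;
import java.io.IOException;

public class ReportDownload {

    private static final String BASE_DIR = "/var/app/reports";

    // Vulnerable: a filename taken from the request is used directly, so a
    // value such as "../../etc/passwd" escapes the intended directory.
    public File openReport(String requestedName) {
        return new File(BASE_DIR, requestedName);
    }

    // Safer: resolve the canonical path and confirm it is still inside BASE_DIR.
    public File openReportSafely(String requestedName) throws IOException {
        File candidate = new File(BASE_DIR, requestedName);
        String canonicalBase = new File(BASE_DIR).getCanonicalPath();
        if (!candidate.getCanonicalPath().startsWith(canonicalBase + File.separator)) {
            throw new IOException("Path traversal attempt: " + requestedName);
        }
        return candidate;
    }
}
```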
Reflected XSS not found by FindBugs or CodePro Analytix
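As a rough illustration of the kind of reflected XSS both Java tools missed in the demo (the actual code is only in the slides), the pattern is simply writing a request parameter into the HTML response without output encoding:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GreetingServlet extends HttpServlet {

    // Vulnerable: the "name" parameter is echoed into the HTML response
    // without output encoding, so a payload such as <script>...</script>
    // supplied in the URL executes in the victim's browser.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String name = req.getParameter("name");
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Hello, " + name + "</p>");
        // Fix: HTML-encode the value (e.g. with an output-encoding library)
        // before writing it into the response.
    }
}
```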
RIPS finds the reflected XSS when we use the username value in the “echo”, but it does not flag storing the username and password in the clear.
RIPS again finds the reflected XSS when we use the username value in the “echo”, but it does not flag the credential storage, where the password is hashed without a salt.
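RIPS analyses PHP and the demo code isn’t reproduced in this transcript, but the weak storage pattern the slide describes (an unsalted, fast hash of the password) is language-independent; a hypothetical Java-flavoured sketch of it might look like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CredentialStore {

    // Weak: a fast, unsalted digest means identical passwords produce
    // identical hashes, and precomputed (rainbow table) lookups stay practical.
    public String hashPassword(String password) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
    // A stronger approach uses a salted, deliberately slow password hash
    // (bcrypt, scrypt or PBKDF2) rather than a bare digest.
}
```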
Security code reviews - a magical mystery tour or a real process-driven task? Ringo Starr recalled: "Paul had a great piece of paper, just a blank piece of white paper with a circle on it. The plan was: 'We start here, and we’ve got to do something here...' We filled it in as we went along."
Open source, hosted on SourceForge. Over 4,000 downloads from 80+ countries since November 2010. Agnitio is Latin for: recognition, knowledge.
Even if your process is good, it might not be smart.