More often than not, company executives ask the wrong questions about software security. This session will discuss techniques for changing the conversation about software security in order to encourage executives to ask the right questions – and provide answers that show progress towards meaningful objectives. Caroline will discuss a progression of software security capabilities and the metrics that correspond to different levels of maturity. She’ll discuss an approach for developing key metrics for your unique software security program and walk through a detailed example.
2. • “What most people do when faced with creating a
metrics program is calculate a few measurements that
seem interesting on the surface. This is the traffic light
approach that oversimplifies the data.
• Or they barrage the audience with a ton of detailed
metrics that overwhelm the reader.
• But for most organizations, none of that works.
• And what happens if you just do nothing? Then you
have little to no understanding of the effectiveness of
your AppSec program.”
– BSIMM Community Member, 2015
3. Agenda
1. Questions from Executives
2. AppSec Capabilities and Metrics
3. Common Metrics Scenarios
4. Developing Key Metrics
5. A Detailed Example
4. Questions from Executives
• More often than not, company executives ask the “wrong”
questions about AppSec:
• How does our bug count compare to that of our competitors?
• This data is often not available, and even if it is, it’s very
hard to find an “apples to apples” comparison.
• What’s our mean time to recover from a security incident?
• Mean time to recover is largely outside of your control. It
depends on the incident!
5. Questions from Executives
• What about when executives ask the “right” questions?
• “We’ve invested so much money into the AppSec program…
• What’s the impact on the firm’s risk posture?
• What value are we getting out of the dollars spent?”
• What kinds of questions are your executives asking about
your program?
• How do you respond? What challenges do you face in
answering their questions?
6. Why Metrics?
• Execs (and customers, auditors, regulators, etc.) want to
know about risk management.
• How do you talk about AppSec and risk management?
• Good software helps business.
• Bad software hurts business.
o We’re doing all of these things to make our software good and
prevent it from being bad.
7. Why Metrics?
• But how do we know we’re doing the right things? How
do we know if we’re doing enough? Too much? Too
little?
1. Start with risk management objectives
2. Ask questions about managing risk
3. Answer those questions with data based on the activities
you are doing.
8. Vocabulary
• Measurement vs. Metric – what’s the
difference?
• A measurement is the value of a specific characteristic of a given
entity
• A metric is the aggregation of one or more measurements to
create a piece of business intelligence.
o What is the question the metric answers?
o What is the decision the metric supports?
o What is the environmental context?
• What types of measurements do you collect from your
program?
• What types of questions and decisions do these
measurements help to support?
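The measurement-versus-metric distinction above can be sketched in code: raw measurements of individual entities are aggregated into one number that answers a business question. The entities, characteristic name, and values below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    entity: str          # the thing being measured, e.g. an application
    characteristic: str  # the specific characteristic, e.g. "scanned"
    value: float

def apps_scanned_metric(measurements: list[Measurement]) -> float:
    """Metric: what fraction of applications were scanned this period?

    Aggregates per-app 'scanned' measurements (1.0 = scanned, 0.0 = not)
    into a single piece of business intelligence.
    """
    scanned = [m for m in measurements if m.characteristic == "scanned"]
    if not scanned:
        return 0.0
    return sum(m.value for m in scanned) / len(scanned)

data = [
    Measurement("app-a", "scanned", 1.0),
    Measurement("app-b", "scanned", 0.0),
    Measurement("app-c", "scanned", 1.0),
]
print(apps_scanned_metric(data))  # 2 of 3 apps scanned
```

Each measurement alone ("app-b was not scanned") supports no decision; the aggregated metric does, once paired with the question it answers and the environmental context.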
10. AppSec Capabilities
Capability Maturity
1. Risk Identification (Find defects)
2. Policy Compliance (Require testing)
3. Risk Reduction (Fix defects)
4. Risk Prevention (Prevent defects)
How mature are the capabilities in your program?
11. AppSec Metrics
Capability Maturity / Data Availability
1. Defect Discovery Participation / Coverage Metrics
2. Policy Compliance Metrics
3. Effectiveness Metrics
4. Risk Prevention Metrics
How mature are your program metrics?
12. Scenario #1 - Explain the Incident
• An incident occurs.
• “Check the box” metrics are implemented, stay the
same, and fail to provide any useful information.
13. Scenario #2A - Vanity Metrics
• We have “metrics!”
• BUT – the AppSec Team cannot explain the impact of their
effort. Executive Management cannot make decisions
based on the information.
14. Scenario #2B - Lots of Effort, Little Reward
• AppSec counts a lot of things, shares those counts with
some people, and calls them metrics.
• Executive Management asks, “so what?” and AppSec
struggles to come up with a satisfactory response.
15. Scenario #2C - Charlie Brown Grown Up Speak
• Executive Management doesn’t understand what is
being presented by AppSec
• AppSec earns a reputation for being wasteful or simply
impossible to understand
16. Scenario #3 - Proactive Communication
• The AppSec Team explains AppSec in a way that is
understood by Executive Management.
• The AppSec Team provides context for metrics and
explains how to interpret the data, helping stakeholders
to understand the intended message.
17. The Cigital Approach
1. Identify Risk Management Objectives
2. Take an Inventory of Current and Planned Activities
3. Define Key Metrics
18. Why Metrics?
• Execs (and customers, auditors, regulators, etc.) want to
know about risk management.
• How do you talk about AppSec and risk management?
• Good software helps business.
• Bad software hurts business.
o We’re doing all of these things to make our software good and
prevent it from being bad.
19. 1. Identify Risk Management Objectives
Application Portfolio
We are not appropriately managing AppSec risk if we are not:
• Able to enumerate our current software portfolio
• Able to enumerate our deployed applications and databases
• Using an AppSec program to ensure, with sign-off, the
appropriate security posture for every application
• Assigning a “risk designator” to every software asset, software
project, software security defect, and data asset
• Managing risk across the entire portfolio
• Providing a complete risk picture for executive management
20. 1. Identify Risk Management Objectives
SSDLC
We are not appropriately managing AppSec risk if we are not:
• Guiding every software project through a Secure SDLC
• Ensuring appropriate levels of defect discovery are applied
• Ensuring defects are documented and remediated, and
variances are documented and tracked
• Tuning our Secure SDLC to reduce friction with engineering
• Moving efforts “left” in the Secure SDLC
• Analyzing the risk associated with hundreds of “medium”
security defects in production
• Using threat and attack intelligence to continually improve
21. 1. Identify Risk Management Objectives
Policies, Standards, and Outreach
We are not appropriately managing AppSec risk if we are not:
• Using a foundational governance structure of policies and
standards
• Incorporating every stakeholder in the software security
strategy
• Performing regular outreach to executives and to all
stakeholders
• Ensuring all stakeholders have the appropriate level of
training
22. 1. Identify Risk Management Objectives
Context: Software Environment and Vendors
We are not appropriately managing AppSec risk if we are not:
• Ensuring all adjacent IT, information, and data security
practices are sufficiently mature
• Establishing software security requirements with all software
vendors
23. 1. Identify Risk Management Objectives
Continuous Improvement
We are not appropriately managing AppSec risk if we are not:
• Aiming for a level of maturity beyond simple compliance with
external drivers
• Using customized metrics and KPIs to chart ongoing progress
24. 2. Create an Inventory of Current and Planned Activities
· Secure SDLC with Gates
· Satellite
· Metrics
· Portfolio Management
· Policy and Standards
· Vendor Management
· Defect Discovery: Design
· Defect Discovery: Fuzzing
· Defect Discovery: Penetration Testing
· Defect Discovery: Quality Assurance
· Defect Discovery: Code Review
· Defect Discovery: Research
· Defect Management
· Attack Intelligence
· Open Source Management
· Risk and Compliance
· Secure By Design
· AppSec Outreach
· Competency Management
· IT Operations
25. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• What percentage of the applications in the portfolio have
been reviewed and signed off, indicating an acceptable
level of security?
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
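The sign-off percentage above, sliced per dimension (risk ranking, tech stack, business unit, project type), can be sketched as a simple grouped calculation. The portfolio records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical portfolio inventory; each record carries the slicing
# dimensions and whether the app has been reviewed and signed off.
portfolio = [
    {"app": "payments", "risk": "high",   "signed_off": True},
    {"app": "reports",  "risk": "medium", "signed_off": False},
    {"app": "intranet", "risk": "low",    "signed_off": True},
    {"app": "checkout", "risk": "high",   "signed_off": False},
]

def signoff_pct_by(dimension: str, apps: list[dict]) -> dict[str, float]:
    """Percentage of apps signed off, grouped by the given dimension."""
    totals = defaultdict(int)
    signed = defaultdict(int)
    for a in apps:
        totals[a[dimension]] += 1
        signed[a[dimension]] += a["signed_off"]
    return {k: 100.0 * signed[k] / totals[k] for k in totals}

print(signoff_pct_by("risk", portfolio))
# {'high': 50.0, 'medium': 0.0, 'low': 100.0}
```

The same function answers the per-tech-stack or per-business-unit variant by changing the `dimension` argument, which is why one inventory with good dimension data supports the whole family of metrics on these slides.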
26. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• What percentage of software projects in the last 12
months have been reviewed and signed off, indicating an
acceptable level of security?
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
27. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• What percentage of software projects in the last 12
months did not go through the Secure SDLC?
• Per reason
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
28. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• What percentage of software projects in the last 12
months have passed all software security checkpoints?
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
29. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• What percentage of the applications have 1 or more
open exceptions for not passing a Secure SDLC gate?
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
30. 3. Define Key Metrics
• We are not appropriately managing AppSec risk if we are
not guiding every software project through a Secure
SDLC that determines whether the software is
acceptably secure
• For each security checkpoint in the Secure SDLC, what
is the average percentage of artifacts provided versus
expected across all software projects in the last 12
months?
• Per risk ranking
• Per tech stack
• Per business unit
• Per software project type
31. Evolve the Program, Evolve the Metrics
1. Identify Risk Management Objectives
2. Take an Inventory of Current and Planned Activities
3. Define Key Metrics
33. 1. Identify Risk Management Objectives
We may not be appropriately managing software security risk if we are not:
1. Using an SSI with full-time SSG to ensure, with sign-off, the appropriate
security posture for every application in the firm’s portfolio
2. Using a foundational governance structure of policies and standards and
measuring adherence to their requirements
3. Able to enumerate our current software portfolio, including open source
software
4. Able to enumerate our deployed applications and databases, including the
various kinds of PII processed and stored
5. Performing regular outreach on management issues by the SSG to
executives and on technical issues by the satellite to all stakeholders
6. Guiding every software project (whether in-house development, out-sourced
development, or COTS acquisition) through a Secure SDLC (an SDLC with
software security checkpoints) that determines whether the software is
acceptably secure
7. Assigning a “risk designator” to every software asset (application risk
ranking), software project (project impact assessment), software security
defect (defect severity rating), and data asset (data classification label)
34. 1. Identify Risk Management Objectives
We may not be appropriately managing software security risk if we are not:
8. Aiming for a level of maturity beyond simple compliance with external drivers
9. Managing risk across the portfolio rather than only managing budget by
neglecting portions of the portfolio
10. Establishing software security requirements with all software vendors,
including those whose software remotely processes sensitive data
11. Ensuring appropriate levels of defect discovery are applied to all software at
required checkpoints and also periodically regardless of whether it’s been
modified
12. Ensuring all software security defects are documented, all are remediated
according to policy, and all variances are documented and tracked
13. Using threat and attack intelligence to continually improve the Secure SDLC
and the portfolio
14. Providing a complete software portfolio risk picture for executive
management
35. 1. Identify Risk Management Objectives
We may not be appropriately managing software security risk if we are not:
15. Moving efforts “left” in the Secure SDLC to maximize prevention efforts
16. Analyzing the math associated with allowing dozens or even hundreds of
“medium” security defects in production while dropping everything to fix one
“high” defect
17. Tuning our Secure SDLC to both reduce friction with and work at the speeds
required by engineering
18. Using customized metrics and KPIs to chart ongoing progress
19. Incorporating every stakeholder in the software security strategy
20. Ensuring all stakeholders have the appropriate level of software security
training
21. Ensuring all adjacent IT, information, and data security practices are sufficiently
mature to not undermine software security efforts
36. What makes a metric?
• Metric Name – a unique, descriptive name that humans can understand
• Description – a short narrative explaining the metric and its importance
• Intended Audience – names the stakeholders for whom the metric is being created
• Question Answered – Write out the exact question the metric answers, given that it
may take several evolutions of the metric to fully answer the question or that it may
actually be the trend line that answers the question
o The question will likely also evolve multiple times as the stakeholders get a handle on
what’s actually important
• Component Measurements – Describe each of the metric’s component parts,
including each associated data source
o Include any useful comments about the data and its collection, such as whether it’s manual
or automated, whether it’s dependent upon a particular person, whether the data are reliable,
whether special access is required, and so on
• Metric Calculation – Give the formula for combining the components to create the
metric
o Many formulas may be as simple as “A over B”
37. What makes a metric?
• Update Cycle – Tell how often the metric is calculated
• Location – Tell where the metric can be found by those authorized to access it
• Expected Value Range – The acceptable upper and lower boundaries for the metric
o Upper Trigger Action – The action taken when the metric value rises above its
upper boundary
o Lower Trigger Action – The action taken when the metric value falls below its
lower boundary
• Expected Trend – Tell how the values are expected to move over time
o An upward trend may be good for some numbers and bad for others. There
may be a need for upper and lower values and triggers specific to the trend.
• Targets – Note the metric values expected to be achieved at specific times, if any
38. What makes a metric?
• Benchmark – Describe any reference point used for comparison.
o This might be a similar metric from another firm, the same metric from some
past time period (e.g., year-on-year), and so on
• Precision and Accuracy (optional) – Describe any known data capture issues
in these areas.
o Although the expectation is that data capture is always 100% precise and 100%
accurate, that often isn’t true. Document cases where it’s possible to precisely
capture data known to be inaccurate and where it may not be possible to
precisely capture the accurate data.
• Feedback Loop (optional) – Describe the periodic process by which a group
judges the metric in terms of usefulness, accuracy, and so on, and directs efforts to
make any required changes
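A few of the template attributes above (name, question answered, calculation, update cycle, expected range with trigger actions) might be captured in a structure like this sketch. All field values are illustrative, not prescribed by the template.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MetricDefinition:
    name: str
    description: str
    intended_audience: str
    question_answered: str
    calculation: Callable[[dict], float]  # formula over component measurements
    update_cycle: str
    expected_lower: float   # acceptable lower boundary
    expected_upper: float   # acceptable upper boundary

    def evaluate(self, components: dict) -> tuple[float, Optional[str]]:
        """Calculate the metric and report any boundary trigger action."""
        value = self.calculation(components)
        if value > self.expected_upper:
            return value, "upper trigger action"
        if value < self.expected_lower:
            return value, "lower trigger action"
        return value, None

# Hypothetical definition for a participation metric ("A over B").
pen_test_participation = MetricDefinition(
    name="Penetration Testing Participation",
    description="Share of applications pen tested in the period",
    intended_audience="Executive Management",
    question_answered="Are teams employing pen testing to discover risks?",
    calculation=lambda c: c["apps_pen_tested"] / c["total_apps"],
    update_cycle="quarterly",
    expected_lower=0.8,
    expected_upper=1.0,
)

value, action = pen_test_participation.evaluate(
    {"apps_pen_tested": 6, "total_apps": 10}
)
print(value, action)  # 0.6 falls below the lower boundary
```

Writing the definition down this explicitly forces the team to answer the template's questions (who is the audience, what is the formula, what happens at the boundaries) before the metric is ever reported.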
39. My Fitness Pal (iPhone App)
• I ask questions and make decisions about my health
every day
What should I eat for breakfast?
How much? How often?
What kind of exercise should I do?
For what length of time? How often?
• I can change my behavior by setting goals and
measuring progress
SMART goals
Specific, measurable, actionable, reasonable, time-based
40. Vocabulary
• Measurement vs. Metric – what’s the
difference?
o It is 67 degrees Fahrenheit in San Francisco
o I had 2 cups of coffee this morning
41. The Cigital Approach
1. Identify Objectives
Firm:
Publicly owned firm generates revenue primarily through 10
Internet-facing web applications.
Audience:
Executives
SSI Objectives:
• Achieve a defensible level of “due care” as expected by
various groups such as shareholders, the Board of Directors,
regulators, law enforcement, and the public.
• Do not allow into production bugs for which well-known
automated attacks exist.
42. The Cigital Approach
2. Create an Inventory
Data from SSI Inventory:
• The SSG uses static analysis during development and penetration testing
during QA to check for bugs for which well-known automated attacks exist.
The SSG has deployed a commercial static analysis tool with appropriate
rules enabled and runs the tool on applications during development.
• The SSG relies on external penetration testers who, as part of their
penetration testing service, use a commercial tool that performs dynamic
scanning to discover ~40 common vulnerabilities.
• The SSG has issued a policy that states development teams must fix any
exploitable software security bug discovered by an automated commercial
tool prior to the code going to production.
43. The Cigital Approach
3. Define Key Metrics
SSG Communication Objectives:
• Coverage: The scope of the SSI is all 10 Internet-facing web applications; however, only
eight are undergoing penetration testing during quality assurance and only five
currently receive static analysis during development. The SSG wants to increase the
Executive understanding of coverage for these software security activities.
• Policy Variance: Some application teams comply with the stated policy while others do
not. The SSG wants to increase the Executive understanding of policy compliance by
the application teams.
• Effectiveness: Depending on the level of coverage and policy compliance for each
application team, the effectiveness of the software security controls will vary. The SSG
wants to compare the effectiveness of the controls across the application portfolio by
looking at software security bugs discovered by an automated commercial tool that
are found versus fixed, as well as which are discovered post-production during a
software security incident.
44. The Cigital Approach
3. Define Key Metrics
Questions:
• What is the level of software security testing coverage across the revenue
generating web applications?
• Which application teams comply with the software security policy?
• How effective is the software security testing and defect management
capability, given the various levels of coverage and compliance?
45. The Cigital Approach
3. Define Key Metrics
Static Analysis Effectiveness
What is the effectiveness of preventing critical severity defects found through Fortify static
analysis from going into production?
Tool Efficacy = # Critical Severity Defects Fixed / # Critical Severity Defects Reported
Ineffectiveness Indicator
What percentage of software security defects found in production were also found prior to
production (but not addressed)?
Ineffectiveness Indicator
= # Software Security Defects found in production which were also found prior to production /
Total # Software Security Defects found in production
46. The Cigital Approach
3. Define Key Metrics
Penetration Testing Participation
Are revenue-generating development teams employing penetration testing to discover
risks?
Penetration Testing Participation [for time period] [by business unit]
= # Applications Pen Tested / # Total Applications
Static Analysis (Fortify) Participation
How many Fake Firm developers are using Fortify to scan their code for security defects?
% Code Scanned by Static Analysis = KLOC scanned / KLOC released
Software Security Defect Density in Production
What is the density of open exploitable critical, high, and medium severity software security
defects discovered by any automated commercial tool and allowed to go to production?
Software Security Defect Density in Production [by application]
= # Open Exploitable Critical, High, and Medium Severity Defects / KLOC
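These three formulas can likewise be sketched directly; the per-application figures below are made up.

```python
def participation(apps_tested: int, total_apps: int) -> float:
    """# Applications Pen Tested / # Total Applications."""
    return apps_tested / total_apps if total_apps else 0.0

def pct_code_scanned(kloc_scanned: float, kloc_released: float) -> float:
    """KLOC scanned / KLOC released, as a percentage."""
    return 100.0 * kloc_scanned / kloc_released if kloc_released else 0.0

def defect_density(open_defects: int, kloc: float) -> float:
    """Open exploitable critical/high/medium defects per KLOC."""
    return open_defects / kloc if kloc else 0.0

# Hypothetical figures for one firm and one application.
print(participation(8, 10))        # 0.8
print(pct_code_scanned(450, 600))  # 75.0
print(defect_density(12, 240))     # 0.05
```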
47. Security Metrics Phases
Phase 1: Discover & Define
• Software security objectives
• Activity inventory
• Key metrics definitions
• Guidance on visualization and outreach
Phase 2: Instrument Process & Populate Dashboard
• Identify process flows, data sources, and owners for each
• Obtain data and automate data collection
• Populate metrics
Phase 3: Communications Plan
• Visualize metrics
• Communicate metrics to the right people at the right time with the right visualizations
48. Security Metrics Phases
Phase 1: Discover & Define
Phase 2: Instrument Process & Populate Dashboard
Phase 3: Communications Plan
Option 1: Already have metrics, need help to get to the next level
Option 2: Don’t have metrics, want to get started quickly
This approach works for firms that already have metrics, and for firms that don’t.
49. Security Metrics Deliverables: Phase 1 (Discover & Define)
The deliverable for Phase 1 is a final report, including the items below:
• Executive Summary – High level overview of project goals, approach, and conclusions.
• Methodology Overview – Description of Cigital’s interview and artifact review-driven software security metrics development process.
• Software Security Context & Objectives – Based on client interviews, a description of the client’s unique external, internal, and organizational context and goals for the software security program.
• Key Metrics Definitions – # recommended metrics definitions customized to the client’s unique risk management view, current software security activities, and planned software security activities.
• Metrics Template – List and descriptions of the 14 attributes of a mature and comprehensive metrics definition.
50. Sample Schedule: Phase 1
Day 1 Day 2 Day 3 Day 4 Day 5
Days 1 and 2 (on-site)
• Cigital consultants go on-site to
deliver software security
instructor-led training (security
metrics theory)
• Data gathering is performed via
interviews and artifact review to
identify software security
objectives and inventory software
security activities.
Days 3, 4, and 5 (remote)
• Cigital consultants perform analysis and
develop an initial draft of the customized key
metrics definitions.
• A detailed review of the initial key metrics
definitions is conducted with the client and
feedback is obtained.
• Client feedback is incorporated into a final
report and presented in a read-out meeting.
• Each metrics engagement is unique and should be scoped individually.
• A Phase 1 metrics engagement will require a minimum of 1 week of effort.
• Clients can increase the depth and breadth of a Phase 1 engagement by
scheduling additional weeks of effort.
51. Security Metrics Deliverables: Phase 2 (Instrument Process & Populate Dashboard)
The deliverable for a Phase 2 engagement is a specification document and a deployed technology solution, including the items below:
* A Phase 2 metrics engagement assumes that Phase 1 has already been completed, either by Cigital or by the client.
• Process Flow (Document) – Based on client security metrics definitions and interviews, a description and diagram of current security activity process flows and supporting data sources with identified owners
• Architecture and API Specification (Document) – Recommended architecture and API specifications for data collection and key metrics implementation
• Stakeholder Roles and Responsibilities (Document) – A description of the roles and responsibilities required of the process and data source owners to support ongoing data collection, metrics calculations, and dashboard population
• Deployed Technology Solution – With support from the necessary client stakeholders, Cigital will build, test, and deploy the solution as described in the specification document
52. Sample Schedule: Phase 2
Week 1 Week 2 Week 3 Week 4
Weeks 1 and 2
• Review client security metrics
definitions and create detailed
documentation of relevant
security activity process flows and
data sources.
• Identify owners for process flows
and source data systems.
• Define architecture and API
specification for metrics
implementation
Weeks 3 and 4
• Define roles and responsibilities for
process and data source owners to
support on-going data collection,
metrics calculation, and dashboard
population.
• Secure stakeholder buy-in for
solution implementation
• Build, test, and deploy the solution
• Each metrics engagement is unique and should be scoped individually.
• Phase 2 schedule will be highly dependent on the complexity of
chosen client metrics and security activity processes.
53. Security Metrics Deliverables: Phase 3 (Communications Plan)
The deliverable for a Phase 3 engagement is a final report and a presentation, including the items below:
* A Phase 3 metrics engagement assumes that Phases 1 and 2 have already been completed, either by Cigital or by the client.
• Metrics Narratives and Visualizations (Presentation) – Cigital will create a custom presentation including the firm’s software security metrics, contextual narratives for each, and visualizations to meaningfully display the data.
• Report: Stakeholders, Objections, and Responses – A list and detailed description of the client’s software security stakeholders – the metrics recipients. A set of customized potential questions from stakeholders in response to the metrics, and recommended responses for the client to use in objection handling.
54. Sample Schedule: Phase 3
Day 1 Day 2 Day 3 Day 4 Day 5
Days 1 and 2
• Cigital will conduct interviews to
understand what the SSI owner is trying to
achieve with the SSI and how the metrics
and context shared around those metrics
tell that story.
• Cigital will also lead an interactive
discussion with the SSG on the roles and
perspectives of various metrics recipients
(software security stakeholders).
• If applicable, Cigital consultants will
conduct interviews with software security
stakeholders (metrics recipients) to obtain a
first hand perspective on software security
metrics and communications received to
date.
Days 3, 4, and 5
• Cigital consultants perform analysis and develop an initial
draft of the custom metrics presentation.
• Cigital consultants perform analysis and develop an initial
draft of the potential questions from stakeholders and
recommended responses.
• A detailed review of the presentation and report is
conducted with the client and feedback is obtained.
• Client feedback is incorporated into a final report and
presentation. Cigital presents the final report and
presentation in a read-out meeting.
• Each metrics engagement is unique and should be scoped
individually.
• A Phase 3 metrics engagement will typically require 1 week
of effort.
• Clients can increase the depth and breadth of a Phase 3
engagement by scheduling additional weeks of effort.
55. 1. Identify Risk Management Objectives
Firm Specific Context
• Existing and planned software security processes
• Existing definitions for data classification levels, application risk
classification levels, development project impact levels, security defect
severity levels, technology stacks.
• External environmental context for the SSI – e.g. regulatory or contractual
requirements, legal precedents in standards of due care, customer
demands, or market drivers
• Internal environmental context for the SSI – e.g. related business
objectives, culture, how decisions are made, how projects are funded, how
values are embedded and objectives are communicated
• The firm’s risk tolerance and the factors that affect risk tolerance
• Role and value of software in the organization
• Structure of software in the organization – how software is developed,
acquired, deployed
• Application portfolio inventory and status
• Purpose, impact, or desired outcome of the SSI – e.g. compliance,
improvement, marketing discriminator
56. One Dozen Software Security Metrics
1. Application Portfolio Visibility
• What parts of the application portfolio do we have visibility into
from a security perspective?
2. Application Portfolio Risk
• What parts of the application portfolio have the highest risk?
3. Testing Frequency by Risk Level
• How frequently do apps at different risk levels undergo security
testing?
57. One Dozen Software Security Metrics
4. Defect Discovery Participation
• Are teams employing [defect discovery method] to discover risks?
5. Defect Density by Risk Level
• What is the density of open critical severity defects by risk level?
6. Defect Density by Tech Type
• What is the density of open critical severity defects by technology
type?
7. Defect Management Effectiveness
• How many of the critical defects found actually get fixed?
8. Defect Remediation Timeframes
• What percentage of defects found are fixed within an appropriate
amount of time?
58. One Dozen Software Security Metrics
9. SSDLC (Secure Software Development Lifecycle)
Gates
• What percentage of software development projects pass all
required security gates?
10. Compliance Approval
• How much of the app portfolio has been reviewed for compliance
and approved?
11. Software Vendor Security
• How many of the software vendors have been reviewed for
security and approved?
12. Competency Management
• How many software developers have taken software security
training in the past year?
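Metric #8 above (remediation timeframes) could be computed along these lines; the defect records and the 30-day SLA window are hypothetical.

```python
from datetime import date

# Hypothetical defect tracker export: discovery date, fix date (None if
# still open), and severity.
defects = [
    {"found": date(2016, 1, 4),  "fixed": date(2016, 1, 20), "severity": "critical"},
    {"found": date(2016, 2, 1),  "fixed": date(2016, 4, 15), "severity": "critical"},
    {"found": date(2016, 3, 10), "fixed": None,              "severity": "critical"},
]

# Hypothetical remediation SLA, in days, per severity.
SLA_DAYS = {"critical": 30}

def pct_fixed_within_sla(defects: list[dict], sla_days: dict) -> float:
    """Percentage of defects fixed within their severity's SLA window.

    Unfixed defects count against the metric, matching the spirit of
    "fixed within an appropriate amount of time".
    """
    if not defects:
        return 0.0
    within = 0
    for d in defects:
        limit = sla_days[d["severity"]]
        if d["fixed"] is not None and (d["fixed"] - d["found"]).days <= limit:
            within += 1
    return 100.0 * within / len(defects)

print(pct_fixed_within_sla(defects, SLA_DAYS))  # one of three within SLA
```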
Editor’s Notes
We know, as application security professionals, that these are the wrong questions to ask.
1A. Even if you could convince your competitors to give you their bug counts, it’s likely that you are different in so many ways that it’s impossible to get any kind of useful “apples to apples” comparison. Your apps might be written in different languages, on different tech stacks, and maybe you have very different philosophies on the use of open source code and third party components.
1B. Your appsec defect discovery methods are likely different, and at different states of maturity. If you have a very mature static analysis program that is mandatory for your entire codebase and your competitor is doing penetration testing on only their most critical apps, it’s unlikely that even if you were sharing bug counts, that it would be meaningful in any significant way.
2. Beyond having an incident response plan in place, knowing who to call for what, and performing table top exercises for various types of incidents, mean time to recover from an incident is largely out of your control. You and the other teams involved in IR are likely doing your best in a response situation, and measuring mean time to recover isn’t going to help the situation.
Executives don’t know that these are the wrong questions because it is not their job to know. They hire folks like us to be experts so that they don’t have to. They trust us to tell them what they need to know, and this includes telling them what questions to ask and giving them data driven answers that can be used to make the best decisions for the program and for the firm.
Executives would ask:
Is our application security better this year?
What are we getting in return for the budget we gave you?
How do we compare with our peers?
To which we would respond with:
Embarrassed silence, or
Mumble something about how security is like insurance; you can’t measure the cost
Visibility
Visibility into the current status of an existing functional area or process
Education and a common language
Lexicon for the application security team to communicate with and educate stakeholders and sponsors
Improvement
Enable better management, promote informed decision-making, and drive change throughout the organization
Dan Geer – asymmetrical problem. Limited resources against an infinite number of attacks. At a program level, the question to answer for software security is what’s the best way to spend a limited budget?
Now in my current role at Cigital, I study and score activities in real-world software security initiatives, and it’s basically the opposite of theoretical. The BSIMM is a study of real-world software security initiatives organized so that firms can determine where they stand with their software security initiatives and how to evolve their efforts over time. A major distinction about the BSIMM is that it’s descriptive and not prescriptive.
BSIMM says, this is what people are actually doing. It does not say this is what you should do. I believe that particular question – what should I be doing? - depends on environmental context and it is unique for every organization.
Instrument processes as needed for ongoing data collection and measurement generation. This may take many months and require changes to tools, networks, firewalls, business relationships, job descriptions, databases, policies, people, culture, and much more. It’s hard, especially if you’re trying to retrofit a large firm with lots of silos and fiefdoms, but it is achievable.
Finally, the visualizations—often created by skilled graphic artists—required and desired by the metrics consumers will influence the direction of the metrics program. Over time, the firm will require significant automation of data capture, manipulation, correlation, and presentation.
Although everyone wishes it weren’t true, metric presentation—the look and feel—is often as important as the metric itself. The person with the prettiest, most understandable, and most actionable chart often gets the funding someone else needed more. It’s not necessarily about being the squeaky wheel, although that is sometimes true as well. It’s more about executives being able to understand what they’re funding and how it’s progressing even if they don’t understand exactly how that effort is making anything better.
The narrative accompanying a metric also matters. Its meaning must be instantly clear to the audience who must use it to make decisions. If a metrics presentation turns into a discussion about meaning, source data, methodology, accuracy, presentation style, color choice, objectivity, and so on, the audience has been lost and resources will end up elsewhere.
An incident occurs. Executive management asks,
Why / how did this happen?
What do we need to do to stop the bleeding?
How do we prevent recurrence?
What do I tell our customers / regulators / auditors?
AppSec scrambles to provide answers. Tactical point solutions are put in place.
“Check the box” metrics are implemented, stay the same, and fail to provide any useful information.
“Did we pen test all the apps that have regulated information this quarter? Can we show that in a pretty chart? Okay, great. See you again in 3 months.”
We have “metrics!”
There are 8 people who do AppSec, we used 109% of our budget this quarter, and we found 12 critical bugs.
This application has 6 defects, each of which has been assigned a 2 on a scale of 1-5. We’ll add that up, call it 12, and that means GREEN.
Please give me more resources?
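The naive “add up the severities” scoring described above can be sketched in a few lines. The threshold values and severity data here are hypothetical, chosen only to show how the traffic-light approach collapses six open defects into a comfortable GREEN:

```python
# A sketch of the naive traffic-light scoring criticized above.
# Thresholds and severity data are illustrative assumptions, not from
# any real program.

def traffic_light(severities, green_max=15, yellow_max=30):
    """Collapse per-defect severities (1-5) into a single color."""
    total = sum(severities)
    if total <= green_max:
        return "GREEN"
    if total <= yellow_max:
        return "YELLOW"
    return "RED"

app_defects = [2, 2, 2, 2, 2, 2]  # six defects, each rated 2 of 5
print(traffic_light(app_defects))  # GREEN, though six defects remain open
```

Note that six medium-ish defects and three severity-4 defects both score 12 and both come out GREEN, which is exactly the oversimplification the quote at the start of the session warns about.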
The AppSec Team cannot explain the impact of their effort.
Executive Management cannot make decisions based on the information.
“We put a lot of effort into defining, capturing data for, calculating, and presenting metrics.”
AppSec counts some things, shares those counts with some people, and calls them metrics
PowerPoint decks with hundreds of slides that simply list various counts
Executive Management asks, “so what?” and AppSec struggles to come up with a satisfactory response.
AppSec can’t easily explain why they present the data they do
Executives don’t know what to ask
Without context provided so decision-makers can meaningfully interpret the data, the recipients will interpret it however comes naturally to them.
A recipient might react positively or negatively, when in fact the response the AppSec Team intended and expected may have been the opposite
Executive Management doesn’t understand what is being presented by AppSec
The “metrics” are ignored or, worse, cause confusion and negatively impact the AppSec Team’s credibility.
After receiving and attempting (with no help or context for interpreting the information) to understand the data once or twice, Executive Management declines meetings from the AppSec Team or ignores emails containing reports
AppSec earns a reputation for being wasteful or simply impossible to understand
The AppSec Team explains AppSec in a way that is understood by Executive Management.
“Here’s the plan. I’ll keep you up to date on our progress and alert you to any risks or issues as they occur.”
The AppSec Team provides context for metrics and explains how to interpret the data, helping stakeholders to understand the intended message.
“We found 9 critical bugs this month.”
This was expected because we just rolled out a new defect discovery capability.
This is considered acceptable because the bugs were found in development, before production.
Remediation tasks have been assigned and it looks like the bugs will be fixed within the recommended time.
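The “9 critical bugs” example above pairs a count with the context needed to interpret it. One way to make that pairing a habit is to never pass around a bare number; a minimal sketch, with field names that are illustrative rather than from any particular tool:

```python
# A sketch of packaging a count together with its interpretation, per
# the "9 critical bugs" example above. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: int
    expectation: str      # why this value was anticipated
    interpretation: str   # what the audience should conclude
    next_steps: str       # what is being done about it

bugs = Metric(
    name="Critical bugs found this month",
    value=9,
    expectation="Expected: we just rolled out a new defect discovery capability.",
    interpretation="Acceptable: found in development, before production.",
    next_steps="Remediation assigned; on track for the recommended fix time.",
)
print(f"{bugs.name}: {bugs.value}. {bugs.expectation} {bugs.interpretation}")
```

The point of the structure is that the narrative fields are mandatory: a `Metric` without an expectation and an interpretation simply can’t be constructed, which mirrors the advice that a number should never reach executives without its context.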
Able to enumerate our current software portfolio, including open source software
Able to enumerate our deployed applications and databases, including the various kinds of PII processed and stored
Using an AppSec program to ensure, with sign-off, the appropriate security posture for every application in the portfolio
Assigning a “risk designator” to every software asset (application risk ranking), software project (project impact assessment), software security defect (defect severity rating), and data asset (data classification label)
Managing risk across the portfolio rather than only managing budget and neglecting portions of the portfolio
Providing a complete software portfolio risk picture for executive management
Guiding every software project (whether in-house development, out-sourced development, or COTS acquisition) through a Secure SDLC that determines whether the software is acceptably secure
Ensuring appropriate levels of defect discovery are applied to all software at required checkpoints and periodically thereafter, regardless of whether it has been modified
Ensuring all software security defects are documented, all are remediated according to policy, and all variances are documented and tracked
Moving efforts “left” in the Secure SDLC to maximize prevention efforts
Tuning our Secure SDLC to reduce friction with and work at the speeds required by engineering
Analyzing the risk associated with hundreds of “medium” security defects in production while dropping everything to fix one “high” defect
Using threat and attack intelligence to continually improve the Secure SDLC and the portfolio
We are not appropriately managing AppSec risk if we are not:
Using a foundational governance structure of policies and standards and measuring adherence to their requirements
Incorporating every stakeholder in the software security strategy
Performing regular outreach on management issues by the SSG to executives and on technical issues by the satellite to all stakeholders
Ensuring all stakeholders have the appropriate level of software security training
Ensuring all adjacent IT, information, and data security practices are sufficiently mature to not undermine software security efforts
Establishing software security requirements with all software vendors, including those whose software remotely processes sensitive data
We are not appropriately managing AppSec risk if we are not guiding every software project (whether in-house development, out-sourced development, or COTS acquisition) through a Secure SDLC that determines whether the software is acceptably secure
Per reason: Did not know about the Secure SDLC process, Bypassed Secure SDLC process to meet project deadlines, etc.
Per business unit: FIB, FIB, FIB, FIB, FIB, etc.
Per technology stack: web app, mobile, thick client, mainframe, embedded, etc.
Per application risk ranking: Critical, High, Medium, Low
Per time period: During the past quarter, During the past half, During the past year, etc.
Per software project type: Internal development, Bespoke development, COTS integration, FOSS integration, etc.
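The breakdowns above all apply the same operation to the same underlying records: count Secure SDLC exceptions per value of one dimension. A sketch with hypothetical exception records (the reasons, business units, and risk labels are placeholders):

```python
# A sketch of slicing Secure SDLC exception records along the
# dimensions listed above (reason, business unit, risk ranking, ...).
# Records and labels are hypothetical.
from collections import Counter

exceptions = [
    {"reason": "Did not know about the Secure SDLC process",
     "bu": "BU1", "risk": "High"},
    {"reason": "Bypassed Secure SDLC process to meet project deadlines",
     "bu": "BU2", "risk": "Critical"},
    {"reason": "Bypassed Secure SDLC process to meet project deadlines",
     "bu": "BU1", "risk": "Medium"},
]

def by_dimension(records, dim):
    """Count exception records per value of one breakdown dimension."""
    return Counter(r[dim] for r in records)

print(by_dimension(exceptions, "reason"))
print(by_dimension(exceptions, "bu"))
print(by_dimension(exceptions, "risk"))
```

Because every breakdown is just a different `dim` argument over the same records, adding a new slice (per time period, per project type) is a matter of tagging the records, not building a new report.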
Per risk ranking: Critical, High, Medium, Low
Per technology stack: web app, mobile, thick client, mainframe, embedded, etc.
Per business unit: BU1, BU2, BU3, etc.
Per software project type: Internal development, COTS integration, FOSS integration, etc.
Software security bugs discovered by an automated commercial tool
Not fixed or determined to be a false positive
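The measurement implied above is a filter over tool findings: count the bugs that are neither fixed nor triaged as false positives. A minimal sketch, with hypothetical status values:

```python
# A sketch of the "open findings" count implied above: tool-reported
# bugs that are neither fixed nor triaged as false positives.
# Status labels are assumptions, not from any specific scanner.
findings = [
    {"id": 1, "status": "fixed"},
    {"id": 2, "status": "false_positive"},
    {"id": 3, "status": "open"},
    {"id": 4, "status": "open"},
]

CLOSED = ("fixed", "false_positive")
open_findings = [f for f in findings if f["status"] not in CLOSED]
print(len(open_findings))  # 2
```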
SM: New SSI owner has some “must do” items and some “want to do” items. “Must do” items are either internal or external compliance, PCI, regulatory, HIPAA… internal might be a specific project or initiative they have going on – “we’re going to paint everything orange”
There should be a metric on “must do” items and “want to do” items. This might be a tag – special initiative that I want to track.
How much of my app portfolio am I covering? Which of my “must do’s” am I getting done? Which of my “want to do’s” are getting done?
Instead of Testing Frequency by Risk Level, something like Gate Progress… Gate Usage… People not bypassing the gates…
SM: We should get on the same page – looking across the 20 capabilities that we’ve defined as the legs of an SSI stool, could we mark each one of the metrics / measurement recommendations as belonging to a specific capability?
If yes, can prioritize them according to capabilities being built out.
SM: We will eventually, as we get more sophisticated, distinguish between “counts” and “metrics”
#12 is a measurement. Change the denominators and get a different meaning.
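The denominator point can be made concrete: the same numerator yields very different metrics depending on what you divide it by. The numbers and denominator choices below are hypothetical:

```python
# A sketch of the note above: one numerator (open critical bugs),
# several denominators, several different meanings. Values are
# hypothetical.
open_critical = 12

denominators = {
    "per application in portfolio": 300,
    "per application tested this quarter": 40,
    "per KLOC scanned": 1500,
}

for label, denom in denominators.items():
    print(f"{label}: {open_critical / denom:.4f}")
```

Dividing by the whole portfolio suggests broad health; dividing by applications actually tested describes discovery yield; dividing by code volume normalizes for size. Picking the denominator is picking the question the metric answers.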