How to Migrate Your Organization to a More Security-Minded Culture – From Development to Production to Partners
Transcript of a discussion on creating broader awareness of security risks and building a security-minded
culture across organizations and ecosystems.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable AI
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re
listening to BriefingsDirect.
Bringing broader awareness of security risks and building a security-minded culture within any
public or private organization has been a top priority for years. Yet halfway through 2021, IT
security remains as much a threat as ever -- with multiple major breaches and attacks costing
tens of millions of dollars occurring nearly weekly.
Why are the threat vectors not declining? Why, with all the tools and investment, are businesses
still regularly being held up for ransom or having their data breached? To what degree are
behavior, culture, attitude, and organizational dissonance to blame?
Stay with us now as we probe into these more human elements
of IT security with a leading chief information security officer.
To learn more about adjusting the culture of security to make
organizations more resilient, please join me in welcoming
Adrian Ludwig, CISO at Atlassian. Welcome, Adrian.
Adrian Ludwig: Hi, Dana. Glad to be here.
Gardner: Adrian, we are constantly bombarded with headlines
showing how IT security is failing. Yet, for many people, they
continue on their merry way -- business as usual.
Are we now living in a world where such breaches amount to acceptable losses? Are people not
concerned because the attacks are perceived as someone else’s problem?
Security on the back burner or in the forefront?
Ludwig: A lot of that is probably true, depending on whom you ask and what their state of mind
is on a given day. We’re definitely seeing a lot more than we’ve seen in the past. And there’s
some interesting twists to the language. What we’re seeing does not necessarily imply that
there is more exploitation going on or that there are more problems -- but it’s definitely the case
that we’re getting a lot more visibility.
I think it’s a little bit of both. There probably are more attacks going on, and we also have better
visibility into them.
Gardner: Isn’t security something we should all be thinking about, not just the CISOs?
Ludwig: It’s interesting how people don’t want to think about it. They appoint somebody, give
them a title, and then say that person is now responsible for making security happen.
But the reality is, within any organization, doing the right thing -- whether that be security,
keeping track of the money, or making sure that things are going the way you’re expecting -- is
a responsibility that’s shared across the entire organization. That’s something that we are now
becoming more accustomed to. The security space is realizing it’s not just about the security
folks doing a good job. It’s about enabling the entire organization to understand what’s important
to be more secure and making that as easy as possible. So, there’s an element of culture
change and of improving the entire organization.
Gardner: What’s making these softer approaches -- behavior, culture, management, and
attitude – more important now? Is there something about security technology that has changed
that makes us now need to look at how people think?
Ludwig: We’re beginning to realize that technology is not going to solve all our problems. When
I first went into the security business, the company I worked for, a government agency, still had
posters on the wall from World War II: “Loose lips sink ships.”
The idea of security culture is not
new, but the awareness is, across
organizations that any person could
be subject to phishing, or any
person could have their credentials
taken -- those mistakes could be
originating at any place in the
organization. That broad-based
awareness is relatively new. It probably helps that we’ve all been locked in our houses for the
last year, paying a lot more attention to the media, and hearing about attacks that have been
going on at governments, the hacking, and all those things. That has raised awareness as well.
Gardner: It’s confounding that people authenticate better in their personal lives. They don’t
want their credit cards or bank accounts pillaged. They have a double standard when it comes
to what they think about protecting themselves versus protecting the company they work for.
Your data, safer at home or at work?
Ludwig: Yes, it’s interesting. We used to think enterprise security could be more difficult from
the user experience standpoint because people would put up with it because it was work.
But the opposite might be true, that people are more self-motivated in the consumer space and
they’re willing to put up with something more challenging than they would in an enterprise.
There might be some truth to that, Dana.
Gardner: The passwords I use for my bank account are long and complex, and the passwords I
use when I’m in the business environment … maybe not so much. It gets us back to how you
think and your attitude for improved security. How do we get people to think differently?
Ludwig: There’s a few different things to consider. One is that the security people need to think
differently. It’s not necessarily about changing the behavior of every employee in the company.
Some of it is about figuring out how to implement critical solutions that provide security without
getting in the way.
There is a phrase, the paved path or road: making the secure way the easy way to do
something. When people started using YubiKey U2F [an open authentication standard that
enables internet users to securely access any number of online services with a single security
key] as a second-factor authentication, it was actually a lot easier than having to input your
password all over the place -- and it’s more secure.
That’s the kind of thing we’re looking for. How do we
enable enhanced security while also having a better
user experience? What’s true in authentication could be
true in any number of other places as well.
Second, we need to focus on developers. We need to make the developer experience more
secure and build more confidence and trustworthiness in the software we’re building, as well as
in the types of tools used to build.
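Ludwig’s first point -- that stronger authentication can also be easier than passwords -- is simple to see in code. U2F itself needs dedicated hardware and a browser stack, so as a simpler stand-in, here is a minimal TOTP one-time-code generator (RFC 6238, the scheme behind most authenticator apps) using only the Python standard library. It is an illustration, not anything Atlassian ships:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)       # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: at Unix time 59 the 8-digit SHA-1 code is 94287082
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

The user types six digits instead of a long password, and the server never stores anything a phisher can replay for more than 30 seconds -- the “paved path” idea of security that is also the easier path.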
Developers can find strength in vulnerability
Gardner: You brought up another point of interest to me. There’s a mindset that when you
hand something off in an organization -- it could be from app development into production, or
from product design into manufacturing -- people like to move on. But with security, that type of
hand-off can be a risk factor.
Beginning with developers, how would you change that hand-off? Should developers be thinking
about security in the same way that the IT production people do?
Ludwig: It’s tricky. Security is about having the whole system work the way that everybody
expects it to. If there’s a breakdown anywhere in that system, and it doesn’t work the way you’re
expecting, then you say, “Oh, it’s insecure.” But no one has figured out what all of those hidden
expectations are.
A developer expects the code they write isn’t going to have vulnerabilities. Even if they make a
mistake, even if there’s a performance bug, that shouldn’t introduce a security problem. And
there are improvements being made in programming languages to help with that.
Certain languages are highly prone to common security failures. I grew up using C and
C++. Security wasn’t something that was even thought of in the design of those languages.
With Java, a lot more security was thought of in the design of the language, so it’s intrinsically safer.
Does that mean there are no security issues that can happen if you’re using Java? No.
Similar types of expectations exist at other places in the development pipeline as well.
Gardner: I suppose another shift has been from applications developed to reside in a data
center, behind firewalls and security perimeters. But now -- with microservices, cloud-native
applications, and multiple application programming interfaces (APIs) being brought together
interdependently -- we’re no longer aware of where the code is running.
Don’t you have to think differently as a developer because of the way applications in production
are now architected and deployed?
Ludwig: Yes, it’s definitely made a big difference. We used to describe applications as being
monoliths. There were very few parts of the application that were exposed.
At this point, most applications are microservices. And
that means across an application, there might be 1,000
different parts of the application that are publicly
exposed. They all must have some level of security
checks being done on them to make sure that if they’re
handling an input that might be coming from the other
side of the world that it’s being handled correctly.
So, yes, the design and the architecture have definitely exposed a lot more of the app’s surface.
There’s been a bit of a race to make the tools better, but the architectures are getting more
complicated. And I don’t know, it’s neck and neck on whether things are getting more secure or
they’re getting less secure as these architectures get bigger and more exposed.
We have to think about that. How do we design processes to deal with that? How do you design
technology, and what’s the culture that needs to be in place? I think part of it is having a culture
of every single developer being conscious of the fact that the decisions they’re making have
security implications. So that’s a lot of work to do.
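The boundary check Ludwig describes -- every publicly exposed microservice endpoint validating input that may arrive from the other side of the world -- can be sketched minimally. This is an illustrative pattern, not Atlassian code; the schema and field names are invented:

```python
def validate_payload(payload: dict) -> dict:
    """Allow-list validation at a service boundary: reject unknown fields,
    missing fields, and wrong types before any business logic runs."""
    schema = {"user_id": int, "email": str}  # hypothetical endpoint schema
    unknown = set(payload) - set(schema)
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    for field, expected in schema.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], expected):
            raise ValueError(f"{field} must be {expected.__name__}")
    return payload

print(validate_payload({"user_id": 7, "email": "a@example.com"}))
```

In practice a schema library generates these validators from a declared model, but the principle is the same: each of the 1,000 exposed parts of the application applies the check itself, rather than trusting a monolith’s front door to have done it.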
Gardner: Another attitude adjustment that’s necessary is assuming that breaches are going to
happen and then stifling them as quickly as possible. It’s a different mindset, but it makes sense
to involve more people in looking for anomalies -- people willing to have their data and
behaviors examined for them.
Is there a needed cultural shift that goes with assuming you’re going to be breached and making
sure the damage is limited?
Assume the worst, prepare to limit the damage
Ludwig: Yes. A big part of the cultural shift is being comfortable taking feedback from anybody
that you have a problem and that there’s something that you need to fix. That’s the first step.
Companies should let anybody identify a security problem -- and that could be anybody inside
or outside of the company. Bug bounties. We’re in a bit of a revolution in terms of enabling
better visibility into potential security problems.
But once you have that sort of culture, you start thinking, “Okay. How do I actually monitor
what’s going on in each of the different areas?” With that visibility, exposure, and understanding
what’s going in and out of specific applications, you can detect when there’s something you’re
not expecting. That turns out to be really difficult if what you’re looking at is very big and very complex.
Decomposing an application down into
smaller pieces, being able to trace the
behaviors within those pieces, and
understanding which APIs each of those
different microservices is exposing turns
out to be really important.
If you combine decomposing applications into smaller pieces with monitoring what’s going on in
them and creating a culture where anybody can find a potential security flaw, surface it, and
react to it -- those are good building blocks for having an environment where you have a lot
more security than you would have otherwise.
Gardner: Another shift we’ve seen in the past several years is the advent of big data. Not only
can we manage big data quickly, but we can also do it at a reasonable cost. That has brought
about machine learning (ML) and movement to artificial intelligence (AI). So, now there’s an
opportunity to put another arrow in our quiver of tools and use big data ML to buttress our
security and provide a new culture of awareness as a result.
Ludwig: I think so. There are a bunch of companies trying to do that, to look at the patterns that
exist within applications, and understand what those patterns look like. In some instances, they
can alert you when there’s something not operating the way that is expected and maybe guide
you to rearchitecting and make your applications more efficient and secure.
There are a few different approaches being explored. Ultimately, at this point, most applications
are so complicated -- and have been developed in such a chaotic manner -- it’s impossible to
understand what’s going on inside of them. So it’s the right time to let the robots give it a shot
and see if we can figure it out by turning the machines on themselves.
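The pattern-based alerting described here can be reduced to a toy version: learn a baseline for a metric -- say, requests per minute to one API -- and flag observations that stray too far from it. Real products model far richer behavior; this z-score sketch, with invented traffic numbers, only illustrates the idea:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list, observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the mean of the learned baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

requests_per_minute = [100, 102, 98, 101, 99]   # hypothetical normal traffic
print(is_anomalous(requests_per_minute, 500))   # → True (possible scraping or abuse)
print(is_anomalous(requests_per_minute, 100))   # → False
```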
Gardner: Yes. Fight fire with fire.
Let’s get back to the culture of security. If you ask the people in the company to think differently
about security, they all nod their heads and say they’ll try. But there has to be a leadership shift,
too. Who is in charge of such security messaging? Who has the best voice for having the whole
company think differently and better about security? Who’s in charge of security?
C-suite must take the lead
Ludwig: Not the security people. That will be a surprise for a lot of people to hear me say that.
The reality is if you’re in security, you’re not normal. And the normal people don’t want to hear
from the not-normal person who’s paranoid that they need to be more paranoid.
That’s something it took me several years to realize. If the security person keeps saying, “The
sky is falling, the sky is falling,” people aren’t going to listen. They say, “Security is important.”
And the others reply, “Yes, of course, security is important to you, you’re the security guy.”
If the head of the business, or the CEO, consistently says, “We need to make this a priority.
Security is really important, and these are the people who are going to help us understand what
that means and how to execute on it,” then that ends up being a really healthy relationship.
The companies I’ve seen turn themselves around to become good at security are the ones such
as Microsoft, Google, or others where the CEO made it personal, and said, “We’re going to fix
this, and it’s my number-one priority. We’re going to invest in it, and I’m going to hire a great
team of security professionals to help us make that happen. I’m going to work with them and
enable them to be successful.”
Alternatively, there are companies where the CEO says, “Oh, the board has asked us to get a
good security person, so I’ve hired this person and you should do what he says.” That’s the path
to a disgruntled bunch of folks across the entire organization. They will conclude that security is
just lip service, it’s not that important. “We’re just doing it because we have to,” they will say.
And that is not where you want to end up.
Gardner: You can’t just talk the talk, you have to walk the walk and do it all the time, over and
over again, with a loud voice, right?
Ludwig: Yes. And eventually it gets quieter.
Eventually, you don’t need to have the top level
saying this is the most important thing. It
becomes part of the culture. People realize
that’s just the way – and it’s not that it’s just the
way we do things, but it is a number-one value for us. It’s the number-one thing for our
customers, too, and so culture shift ends up happening.
Gardner: Security mindfulness becomes the fabric within the organization. But to get there
requires change and changing behaviors has always been hard.
Are there carrots? Are there sticks? When the top echelon of the organization, public or private,
commits to security, how do you then execute on that? Are there some steps that you’ve
learned or seen that help people get incentivized -- or whacked upside the head, so to speak --
to follow through?
Talk the #$%&* security talk and listen up
Ludwig: We definitely haven’t gone for “whacked upside the head.” I’m not sure that works for
anybody at this point, but maybe I’m just a progressive when it comes to how to properly train
people.
What we have seen work is just talking about it on a regular basis, asking about the things that
we’re doing from a security standpoint. Are they working? Are they getting in your way?
Honestly, showing that there’s thoughtfulness and concern going into the development of those
security improvements goes a long way toward making people more comfortable with following
through on them.
A great example is … You roll out two-factor authentication, and then you ask, “Is it getting in
the way? Is there anything that we can do to make this better? This is not the be-all and end-all.
We want to improve this over time.”
That type of introspection by the security organization is
surprising to some people. The idea that the security team
doesn’t want it to be disruptive, that they don’t want to get in
the way, can go a long way toward it feeling as though these
new protections are less disruptive and less problematic
than they might otherwise feel.
Gardner: And when the organization is focused on developers? Developers can be, you know …
Ludwig: Ornery?
Gardner: “Ornery” works. If you can make developers work toward a fabric of security
mindedness and culture, you can probably do it to anyone. What have you learned on injecting
a better security culture within the developer corps?
Ludwig: A lot of it starts, again, at the top. You know, we have core values that invoke vulgarity
to both emphasize how important they are, but also how simple they are.
One of Atlassian’s values is, “Don’t fuck the customer.” And as a result of that, it’s very easy to
remember, and it’s very easy to invoke. “Hey, if we don’t do this correctly, that’s going to hurt
the customer.” We can’t let that happen as a top-level value.
We also have “Open company, no bullshit.” If somebody says, “I see a problem over here,” then
we need to follow up on it, right? There’s not a temptation to cover it up, to hide it, to pretend it’s
not an issue. It’s about driving change and making sure that we’re implementing solutions that
actually fix things.
There are countless examples of a feature that was built, and we really want to ship it, but it
turns out it’s got a problem and we can’t do it because that would actually be a problem for the
customer. So, we back off and go from there.
How you talk about security makes the difference
Gardner: Words are powerful. Brands are powerful. Messaging is powerful. What you just said
made me think, “Maybe the word security isn’t the right word.” If we use the words “customer
experience,” maybe that’s better. Have you found that? Is “security” the wrong word nowadays?
Maybe we should be thinking about creating an experience at a larger level that connotes
success and progress.
Ludwig: Super interesting. Apple doesn’t use the word “security” very much at all. As a
consumer brand, what they focus on is privacy, right? The idea that they’ve built highly secure
products is motivated by the users’ right to privacy and the users’ desire to have their
information remain private. But they don’t talk about security.
I always thought that was a really interesting decision on their part. When I was at Google,
we did some branding analysis, and we also came up with insights about how we talked about
security. It’s a negative from a customer’s standpoint. And so, most of the references that you’ll
see coming out of Google are security and privacy. They always attach those two things
together. It’s not a coincidence. I think you’re right that the branding is problematic.
Microsoft uses trustworthy, as in trustworthy computing. So, I guess the rest of us are a little bit
slow to pick up on that, but ultimately, it’s a combination of security and a bunch of other things
that we’re trying to enable to make sure that the products do what we’re expecting them to do.
Gardner: I like resilience. I think that cuts across these terms because it’s not just the security,
it’s how well the product is architected, how well it performs. Is it hardened, in a sense, so that it
performs in trying circumstances – even when there are issues of scale or outside threats, and
so forth. How do you like “resilience,” and how does that notion of business continuity come into
play when we are trying to improve the culture?
Ludwig: Yes, “resilience” is a pretty good term. It comes up in the pop psychology space as
well. You can try to make your children more resilient. Those are the ones that end up being the
most successful, right? It certainly is an element of what you’re trying to build.
A “resilient” system is one in which there’s an
understanding that it’s not going to be perfect.
It’s going to have some setbacks, and you need
to have it recoverable when there are setbacks.
You need to design with an expectation that
there are going to be problems. I still remember
the first time I heard about a squirrel shorting
out a data center and taking down the whole
data center. It can happen, right? It does happen. Or, you know, you get a solar event and that
takes down computers.
There are lots of different things that you need to build to recover from accidental threats, and
there are ones that are more intentional -- like when somebody deploys ransomware and tries to
take your pipeline offline.
Gardner: To be more resilient in our organizations, one of the things that we’ve seen with
developers and IT operations is DevOps. Has DevOps been a good lesson for broader
resilience? Is there something we can do with other silos in the organization to make them more
resilient?
DevOps derives know-how from experience
Ludwig: I think so. Ultimately, there are lots of different ways people describe DevOps, but I
think about taking what used to be a very big thing and acknowledging that you can’t
comprehend the complexity of that big thing. Choosing instead to embrace the idea that you
should do lots of little things, in aggregate, and that they’re going to end up being a big thing.
And that is a core ethos of DevOps, that each individual developer is going to write a little bit of
code and then they’re going to ship it. You’re going to do that over and over and over. You are
going to do that very, very, very quickly. And they’re going to be responsible for running their
own thing. That’s the operations part of the development. But the result is, over time, you get
closer to a good product because you can gain feedback from customers, you’re able to see
how it’s working in reality, and you’ll be able to get testing that takes place with real data. There
are lots of advantages to that. But the critical part of it, from a security standpoint, is it makes it
possible to respond to security flaws in near real-time.
Often, organizations just aren’t pushing code
frequently enough to be able to know how to fix
a security problem. They are like, “Oh, our next
release window is 90 days from now. I can’t
possibly do anything between now and then.”
Getting to a point where you have an improvement process that’s really flexible and that’s being
exercised every single day is what you get by having DevOps.
And so, if you think about that same mentality for other parts of your organization, it definitely
makes them able to react when something unexpected happens.
Gardner: Perhaps we should be looking to our software development organizations for lessons
on cultural methods that we can apply elsewhere. They’re on the bleeding edge of being more
secure, more productive, and they’re doing it through better communications and culture.
Ludwig: It’s interesting to phrase it that way because that sounds highfalutin, as if they
achieved it out of expertise and brilliance. What it really is, is the humbleness of realizing that
the compiler tells you your code is wrong every single day. There’s a new user bug every single
day. And eventually you get beaten down by all those, and you decide you’re just going to react
every single day instead of having this big thing build up.
So, yes, I think DevOps is a good example but it’s a result of realizing how many flaws there are
more than anything highfalutin, that’s for sure.
Gardner: The software doesn’t just eat the world; the software can show the world the new,
more secure way to work.
Ludwig: Yes, hopefully so.
Future security, best business practices inseparable
Gardner: Adrian, any thoughts about the future of better security, privacy, and resilience? How
will ML and AI provide more analysis and improvements to come?
Ludwig: Probably the most important thing going on right now in the context of security is the
realization by the senior executives and boards that security is something they need to be
proponents for. They are pushing to make it possible for organizations to be more secure. That
has fascinating ramifications all the way down the line.
If you look at the best security organizations, they know the best way to enable security within
their companies and for their customers is to make security as easy as possible. You get a
combination of the non-security executive saying, “Security is the number-one thing,” and at the
same time, the security executive realizes the number-one thing to implement security is to
make it as easy as possible to embrace and to not be disruptive.
And so, we are seeing faster investment in security that works because it’s easier. And I think
that’s going to make a huge difference.
There are also several foundational technology shifts that have turned out to be very
pro-security, which wasn’t why they were built -- but it’s turning out to be the case. For example, in
the consumer space the move toward the web rather than desktop applications has enabled
greater security. We saw a movement toward mobile operating systems as a primary
mechanism for interacting with the web versus desktop operating systems. It turns out that
those had a fundamentally more secure design, and so the risks there have gone down.
The enterprise has been a little slow, but I see the shift away from behind-the-firewall software
toward cloud-based and software as a service (SaaS) software as enabling a lot better security
for most organizations. Eventually, I think it will be for all organizations.
Those shifts are happening at the same
time as we have cultural shifts. I’m really
optimistic that over the next decade or two
we’re going to get to a point where security
is not something we talk about. It’s just
something built-in and expected in much
the same way as we don’t spend too much
time now talking about having access to
the Internet. That used to be a critical stumbling block. It’s hard to find a place now that doesn’t
or won’t soon have access.
Gardner: These security practices and capabilities become part-and-parcel of good business
conduct. We’ll just think of it as doing a good job, and those companies that don’t do a good job
will suffer the consequences and the Darwinian nature of capitalism will take over.
Ludwig: I think it will.
Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored
BriefingsDirect discussion on building security-minded cultures within public and private
organizations.
And we’ve learned how behavior, culture, attitude, and organizational shifts create both hurdles
and solutions for making businesses more intrinsically resilient by nature.
So, join me in thanking our guest, Adrian Ludwig, CISO at Atlassian. Thank you so much,
Adrian, I really enjoyed it.
Ludwig: Thanks, Dana. I had a good time as well.
Gardner: And a big thank you to our audience for joining this BriefingsDirect IT security culture
discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout
this series of Traceable AI-sponsored BriefingsDirect interviews.
Stay tuned for our next podcast in this series, with a deep-dive look at new security tools and
methods with Sanjay Nagaraj, Chief Technology Officer and Co-Founder at Traceable AI.
Look for other security podcasts and content at www.briefingsdirect.com.
Thanks again for listening. Please pass this along to your business community and do come
back for our next chapter.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.
Transcript of a discussion on creating broader awareness of security risks and building a security-minded
culture across organizations and ecosystems. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.
You may also be interested in:
● How API security provides a killer use case for ML and AI
● Securing APIs demands tracing and machine learning that analyze behaviors to head off attacks
● Rise of APIs brings new security threat vector -- and need for novel defenses
● Learn More About the Technologies and Solutions Behind Traceable.ai.
● Three Threat Vectors Addressed by Zero Trust App Sec
● Web Application Security is Not API Security
● Does SAST Deliver? The Challenges of Code Scanning.
● Everything You Need to Know About Authentication and Authorization in Web APIs
● Top 5 Ways to Protect Against Data Exposure
● TraceAI : Machine Learning Driven Application and API Security