Security concerns are often dealt with as an afterthought—the focus is on building a product, and then security features or compensating controls are thrown in after the product is nearly ready to launch. Why do so many development teams take this approach? For one, they may not have an application security team to advise them. Or the security team may be seen as a roadblock, insisting on things that make the product less user friendly, or in tension with performance goals or other business demands. But security doesn’t need to be a bolt-on in your software process; good design principles should go hand in hand with a strong security stance. What does your engineering team need to know to begin designing safer, more robust software from the get-go?
Drawing on experience working in application security with companies of various sizes and maturity levels, Wendy Knox Everette focuses on several core principles and provides some resources for you to do more of a deep dive into various topics. Wendy begins by walking you through the design phase, covering the concerns you should pay attention to when you’re beginning work on a new feature or system: encapsulation, access control, building for observability, and preventing LangSec-style parsing issues. This is also the best place to perform an initial threat model, which sounds like a big scary undertaking but is really just looking at the moving pieces of this application and thinking about who might use them in unexpected ways, and why.
She then turns to security during the development phase. At this point, the focus is on enforcing secure defaults, using standard encryption libraries, protecting from malicious injection, insecure deserialization, and other common security issues. You’ll learn what secure configurations to enable, what monitoring and alerting to put in place, how to test your code, and how to update your application, especially any third-party dependencies.
Now that the software is being used by customers, are you done? Not really. It’s important to incorporate what you learn about how customers actually interact with the product, as well as any security incidents, back into your design considerations for the next version. This is the time to dust off the initial threat model and update it, incorporating everything you learned along the way.
1. Security Engineering 101:
When good design &
security work together
Wendy Knox Everette - @wendyck
O’Reilly Software Architecture
June 13, 2019
2. Who am I?
Senior Security Advisor
in the Risk Advisory
Services group at
Leviathan Security
Group in Seattle.
Software developer &
security nerd.
@wendyck
3. What is security engineering,
isn’t it just secure coding
standards?
And how does secure
development fit into a
company’s broader security
program?
4. What are some of the big issues we’re
concerned about?
- Authentication and authorization
- Information disclosure
- Tampering, Repudiation
14. Design flaws: little to big
● Trusting user text input
● Trusting user headers sent in http
requests
● That API can only be called internally….
Are you sure?
● Installing 3rd party libraries (innocuous
now, can it turn malicious? How would you
know?)
● Rolling your own crypto
● Authentication: make your own? (what are
your authentication flows, how are you
storing creds?) vs using an OAUTH
(privacy risk for your user?)
16. Granular access controls help avoid the
anti-pattern of making every engineer at
your company an admin in your platform
17. Begin with the expectation of multiple
permission levels, as bolting a multi-level
structure onto a code base with a lot of
“if user_type > 1” checks is painful.
22. Don’t write complex
parsers
Language-theoretic security, or langsec,
describes one of the leading causes of application
security vulnerabilities today.
- Parsing or unparsing bugs
- Caused by software failing to
correctly handle inputs
- You have these in your code base
24. Parse errors
1. “insufficient recognition, where input-checking logic is unfit to validate a
program’s assumptions about inputs”
2. “Parser differentials, wherein two or more components of a system fail to
interpret input equivalently”
From The Seven Turrets of Babel: A Taxonomy of LangSec Errors and How to
Expunge Them http://langsec.org/papers/langsec-cwes-secdev2016.pdf
More LangSec papers & discussion: http://langsec.org/
25. Don’t roll your own parser
“But I just need to pull out this bit of data”........ 4 years later, we have a pile of
exploits duct-taped together into an essential piece of code that I can’t take down.
26. What causes these issues?
1. Input recognition and validation code scattered throughout your code
2. The code doesn’t match the programmers' assumptions about safety and
validity of data
-langsec.org
27. “for complex input languages the problem of
full recognition of valid or expected inputs
may be UNDECIDABLE, in which case no
amount of input-checking code or testing will
suffice to secure the program”
-http://langsec.org/
28. Please make our pen testers’ lives harder
https://twitter.com/perrymetzger/status/1092528170354573312
29. Don’t use poor
authentication
Do users log into your system?
How do you secure their
credentials and how do you
help them protect their
accounts?
30. Not just a good design principle, it makes your
code easier to update (to patch emergent
security issues), helps you build good
permissions models, and protects resources
from each other.
Don’t ignore
encapsulation
32. Language choice is not really security neutral
● Use after free and heap corruption vulnerabilities are super useful for building
exploits
● Memory safe languages like Rust protect you in ways that C can’t
● If you’d like to regularly be appalled by these issues, here’s the Twitter
account you want: https://twitter.com/@LazyFishBarrel
33. That’s a lot to cover!
Think of this as an introduction to the
types of security issues you should
consider as you design software.
There are too many resources to list
here, but the OWASP Top 10 and the
OWASP Mobile Top 10 are decent
places to start
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
34. OWASP Top 10
1. Injection
2. Broken Authentication
3. Sensitive Data Exposure
4. XML External Entities (XXE)
5. Broken Access Control
6. Security Misconfiguration
7. Cross-Site Scripting (XSS)
8. Insecure Deserialization
9. Using Components with Known Vulnerabilities
10. Insufficient Logging & Monitoring
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
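To make item 8, insecure deserialization, concrete: here is a minimal Python sketch of why formats like pickle are dangerous with untrusted input, and why a data-only format like JSON is a safer default. The `Payload` class is a hypothetical stand-in for attacker-controlled bytes.

```python
import json
import pickle

# pickle calls __reduce__ during load, letting attacker-controlled bytes
# invoke arbitrary callables. Here the "payload" merely builds a list,
# but it could just as easily call os.system.
class Payload:
    def __reduce__(self):
        return (list, ("pwn",))

evil = pickle.dumps(Payload())
print(pickle.loads(evil))  # ['p', 'w', 'n'] -- code we never wrote ran on load

# Safer default for untrusted input: a data-only format like JSON.
review = json.loads('{"user": "alice", "rating": 5}')
print(review["rating"])  # 5
```

The lesson generalizes to Java serialization, YAML `load`, and similar mechanisms: never deserialize untrusted bytes with a format that can construct arbitrary objects.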
37. At the implementation
stage
Now that you’ve established your security
standards, how do you make sure that the
code is produced to those standards?
38. Addressing
security early
and often
Do developers in your organization know what areas of
your code are security sensitive?
Are your engineers empowered to ask for security
reviews?
39. Build good
working
relationships
How are interactions
between developers
and security?
Do they think you just
write vulns all day long?
Do you think they
always say no?
How do you talk about
risks that you need to
take?
40. What do your conversations about risk
look like?
41. Document and
train
How do you enforce consistent
escaping of user input?
What are your access control
standards?
Do you have style guides that
help prevent LangSec-style
parsing issues?
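One way to enforce consistent escaping is to funnel all user-supplied text through a single shared helper. A minimal Python sketch, using the standard library's `html.escape` (the `render_comment` wrapper and its markup are illustrative, not from the talk):

```python
import html

def render_comment(user_text: str) -> str:
    # One shared escaping helper keeps the policy in a single place;
    # every call site uses it rather than ad hoc string handling.
    return '<p class="comment">' + html.escape(user_text, quote=True) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# <p class="comment">&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Documenting "always render user text through this function" is far easier to enforce in code review than "remember to escape everywhere."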
42. Role specific
security training
Start with some basic application
security training for everyone
Can your application security
engineers offer ongoing classes,
send people out to classes?
We’ve found that teaching PMs
some basics can be a force
multiplier: they will recognize
security-sensitive areas and
engage resources.
43. Ownership
Are engineers empowered to request
changes when they see a security impact?
Do your teams see customer reports? Do
you share pen test report findings? Bug
bounty findings?
44. Familiarity
Can you play OWASP JuiceShop or
another CTF to learn what security issues
look like?
● https://www.owasp.org/index.php/OWASP_Juice_Shop_Project
56. Clouds are so
secure!
Private keys & Data access -
these are both critical to the
security level of your cloud
hosted system
GitHub’s blog on their
scanning for auth tokens:
https://github.blog/2018-10-17-behind-the-scenes-of-github-token-scanning/
57. Backup files & configurations still around?
Do you store user uploaded data?
How often do you patch? How old is the longest
running process you have?
Are you running VMs when you should run
lambdas?
64. Monitoring &
alerting
How do you know the health of
your system?
Do you have enough logging for
useful incident response
capabilities?
65. What are you
logging?
Where?
Do you have logs of user
access to sensitive data?
Do you know where people
connected from?
What timezone are your
logs in? Are all your logs in
the same timezone?
Who has write access to
logs?
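The timezone questions above have a simple engineering answer: normalize every log line to UTC with an unambiguous timestamp format. A sketch using Python's standard `logging` module (the "audit" logger name and log fields are hypothetical):

```python
import logging
from datetime import datetime, timezone

# Force every log record to a UTC ISO-8601 timestamp so logs from
# different hosts and services correlate cleanly during incident response.
class UTCFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        return datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat()

handler = logging.StreamHandler()
handler.setFormatter(UTCFormatter("%(asctime)s %(levelname)s %(message)s"))
log = logging.getLogger("audit")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Record *who* touched sensitive data, and from where.
log.info("user=%s action=read_pii source_ip=%s", "alice", "203.0.113.7")
```

Shipping these records to an append-only store also addresses the write-access question: application hosts should emit logs, not be able to rewrite them.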
66. Change Management
How does your organization know what’s changed
in your production environment?
Who has the ability to make those changes?
67. “You can’t secure
what you don’t know
you own”
● What’s the process for creating
new internet exposed endpoints?
Don’t make this so onerous that
people route around it - the
Shadow IT problem can make
things worse
● What tools do people in your
organization use that can create
internet exposed endpoints
(Salesforce?)
70. How are non-emergent
security issues fixed?
If it’s not a sev 1 emergent issue, what’s your mean time
to rolling out a patch?
Do you track whether security issues on your backlog
are being exploited or impacting users?
71. How is technical debt
prioritized and tracked?
Do you constantly push out new features and
ignore older sections of your code?
Who is assigned maintenance work - is it the
least experienced developer on your team? Or
do you have one engineer who is always
saddled with fixing the oldest code?
72. Vulnerability management
(applying their patches)
How do you find out about updates to libraries you
incorporate?
● Github vulnerability alerts if you’re on Github
○ https://help.github.com/en/articles/about-security-alerts-for-vulnerable-dependencies
74. In summary….
Hopefully your team is doing a lot of these things, and hopefully you’ve gotten
some new ideas to dig into.
The biggest force multiplier we’ve found is to educate & empower everyone in
your organization to be responsible for the security of your environment.
75. Thank you & Questions
Please don’t forget to rate this session!
Wendy Knox Everette
@wendyck
Leviathan Security Group
https://www.leviathansecurity.com/
Speaker Notes
I’m a software developer & a security engineer, working as a senior security advisor at Leviathan Security Group in Seattle
I work with smaller cloud hosted companies on a variety of security topics- this can range from ensuring that their cloud environments are well secured to helping with software design and code reviews to designing risk management processes.
You can find me on twitter at @wendyck
We’re going to take a look at how to improve the security of the code that your team writes & how application security fits into a broader security stance. And we’re going to look at ways that security and development teams can work together.
Application security, which we’ll focus on, is distinct from other areas like network security, but is still a very broad field that covers everything from devsecops to coding flaws.
Most of you aren’t security experts here, and I can’t teach you everything about app sec in 40 minutes, so I’ll go through a few of the biggest concerns at a high level. Our goal here will really be to think about the types of problems that cause a lot of security vulnerabilities, and give you some starting points for more research.
So when we talk about application security, there are some broad classes of vulnerabilities that we’re concerned with like authenticating people and services, protecting sensitive information from leaking, and maintaining the integrity of the information in our systems.
Who can access things & how is a core security concern. If your system is a single public web page, you still have authorization concerns. You’ll need to restrict access to make changes to that page only to your authorized developers.
Spoofing vulnerabilities are flaws that allow users to take actions as other users. Facebook recently had a spoofing flaw in their “view your profile as” functionality. The “view as” feature was meant to allow users to see what information on their page was visible to others, but attackers could use it to steal access tokens of the user they were viewing as.
Credential stuffing is a newish attack, where attackers take username & password combinations leaked in prior data breaches and try them on different websites. Because so many people re-use passwords, this works surprisingly often. If the site doesn’t implement multifactor authentication or other protections, then the attackers can access the accounts.
Are you protecting your data from being read by users who shouldn’t have access to it? Can you assert that no one has made changes that they shouldn’t have permissions to perform?
What about information about your services? What about error messages your website produces?
Can attackers enumerate accounts on your system by entering an email address & password, and then checking the error message to see if it says “incorrect email” or “incorrect password”? This can allow attackers to determine whether an account exists on the system & is a form of information disclosure
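The fix for this enumeration channel is to return the same error regardless of which check failed. A minimal Python sketch (the plaintext `USERS` store is purely for illustration; real systems store salted hashes, never plaintext):

```python
import hmac

# Hypothetical user store for illustration only -- real systems store
# salted password hashes, never plaintext.
USERS = {"alice": "correct horse battery staple"}

def login(username: str, password: str) -> str:
    stored = USERS.get(username)
    # Same message whether the account is missing or the password is wrong,
    # so attackers can't enumerate valid accounts from the error text.
    if stored is None or not hmac.compare_digest(stored, password):
        return "Incorrect username or password"
    return "Welcome"

print(login("bob", "x"))    # unknown account
print(login("alice", "x"))  # bad password -- identical message
```

Note that response timing can leak the same information, which is one reason to use a constant-time comparison like `hmac.compare_digest`.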
There are many ways attackers can tamper with your system. For instance, web parameter tampering is a way that attackers can modify hidden input form fields, cookies, headers, or anything else sent over the wire to your web application. Cross Site Request Forgery is a form of this attack, where attackers craft a url that causes a system to make some change, like adding an address to an address book, and then getting an unsuspecting user’s browser to load that url & have the action occur
So let’s start with one of the most important things we can do- incorporate security concerns into the planning and design phase. Shifting security reviews as early in the process as possible is one of the best ways to better secure your systems. Even better is not just getting external reviews as an add-on, but starting to think about security as something each team is responsible for
In addition to capturing feature requirements, like what a new user sign up flow should look like, we should capture important information about the security aspects of the system. For a new user sign up, do we understand how to build the web forms to protect against XSS? How will new user credentials be stored? Do we integrate with an external authentication system? Try to make sure everyone understands these requirements before we start coding.
You might also have regulatory requirements, or security guarantees that you have to make to partners or other parties who you integrate with.These will probably affect the controls you need to incorporate and the features you have to support.
Before we come up with any more security requirements, we should do some threat modeling.
This sounds like a scary hacker thing, but really…. It’s just thinking creatively about how to misuse tools and poking at the assumptions that we’ve made. It’s a form of thinking creatively & it can be really fun.
So let’s do a simple threat model, just so that we can see how easy it can be. Let’s say that your website wants to start allowing customers to upload photos on reviews. Awesome! More engaging content! We’re going to dive in and start tweaking the layout, updating user prompts, and doing all the work to add this form to our website.
But what else are we allowing? Have we just looked at the happy path through the review writing process? How could this be abused?
What happens if non-photo files are uploaded? What if the user uploads photos but never publishes the reviews they’re attached to?
Do those files just sit on s3 forever? Can they be externally linked to? Can I use your photo hosting to host malware?
Do you ever scan over your file upload services and see what’s being stored? Do you look at access logs? Do you check the referrer headers to see what is linking to content that you host?
Threat modeling like this is sadly really too rare in most software development planning cycles. We’re driven by requirements and metrics, and there’s often no specific home in planning cycles for the consideration of how to break or misuse what we’re building. Oftentimes fraud and abuse are uncovered only after something has been live for a while. Or sometimes you have people on your team who can come up with these misuse patterns, but their bugs may get triaged out of releases. Making a point to surface and address these concerns early will make your products safer and more secure.
As we do this threat modeling, what are some other examples of misuse and abuse that we might want to think about? What are we trusting that we shouldn’t? What assumptions have we encoded into the system?
Trusting anything that comes into our system
Misconfigurations and Misunderstanding who can reach our systems
3rd party javascript libraries that get turned into malware droppers or bitcoin miners after they’ve achieved sufficient uptake
Designing your own encryption methods and using it to encrypt sensitive information
Using a login system without thinking about the risks and tradeoffs that your implementation brings with it
Let’s dig into a few of these areas where security design flaws lurk. Access Control is a big one that we need to think about early. Re-doing a poorly designed access control system later can be incredibly painful. Living with a weak one can mean playing whack a mole with security issues.
Do you have only regular users and then admins who can see everything on your platform? What if you now want to hire content moderators? Do you have to give them admin, even though they don’t need to see user payment information?
Some design considerations here to think about include checking authorization when you grant access to a resource. Do you need a network call, and how do you fail if that doesn’t succeed? What does the check look like?
You should design to support multiple permission levels even if you don’t build them all now, so that you don’t have to begin all over again when you add those content moderators.
Try to create one, or a small number, of access granting functions. Don’t rewrite this code over and over, because that causes bugs to slip in.
Separate business logic, if you can.
Try to avoid using numeric labels for permissions levels. When it’s late at night, and you have to fix a problem, will you remember if Admin is 0, or 1, or 2?
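The last three points can be sketched in a few lines of Python: named, ordered roles instead of bare integers, and one shared access-granting function instead of repeated ad hoc checks. The role names are hypothetical examples.

```python
from enum import IntEnum

# Named, ordered roles instead of magic numbers scattered through the code.
class Role(IntEnum):
    USER = 0
    MODERATOR = 1
    ADMIN = 2

# One access-granting function, called everywhere, instead of repeated
# "if user_type > 1" checks that invite copy-paste bugs.
def can_access(actor: Role, required: Role) -> bool:
    return actor >= required

assert can_access(Role.MODERATOR, Role.USER)
assert not can_access(Role.MODERATOR, Role.ADMIN)
```

When a new role is needed later, it slots into the enum; at 2 a.m. during an incident, `Role.ADMIN` is much harder to misread than `2`.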
Here’s a design consideration that’s closely related to access control. Unfortunately, this is one security control that engineers LOVE to work on. Writing software that does your encryption & decryption is really attractive to a lot of software developers.
Here’s the biggest design flaw around encryption: trying to do it yourself. Don’t roll your own crypto.
Check with your organization’s security experts about how to encrypt and decrypt information. Use standard encryption libraries. Do not implement your own algorithms to encrypt.
But seriously. Cryptography has to be tested and validated by experts. Tiny issues that are easy to miss can lead to an unraveling of the entire encryption scheme. If you have read about the way the Allies cracked the German enigma machine, it’s a great story and is a wonderful example of the way that numerous small flaws can lead to plaintext recovery
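As a concrete contrast to rolling your own: credential storage can lean entirely on vetted standard-library primitives. A sketch using Python's `hashlib.pbkdf2_hmac`; the iteration count is illustrative, so follow current guidance for your threat model.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # illustrative; tune per current guidance

def hash_password(password: str, salt=None):
    # Random per-user salt plus a deliberately slow, standard KDF --
    # never a home-grown scheme, never the password itself.
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

Every piece here (salt generation, KDF, comparison) is a reviewed primitive; the only code you own is the glue.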
“Don’t roll your own crypto” is well known. But parsing errors are still somewhat esoteric, despite underlying a large number of app sec issues.
Input handling bugs are an entire class of issue that encompasses classic vulnerabilities like XSS and SQL Injection. Almost all of these can be rolled up into the meta issue of lang sec parse & unparse bugs. Langsec.org defines these issues as a meta class of problems stemming from “ad hoc programming of input handling at all layers of network stacks”
Well known security vulnerabilities like Heartbleed are lang sec issues. If you’re writing code that has to interpret any form of input, you probably just code up some solution to extract what you need from it and move on. But these can break down with malicious inputs.
This tweet is from a thread that does a great job of explaining lang sec issues
Parsing errors usually fall into these two buckets.
The first is generally made worse by starting to work with the input that you have before you’ve finished parsing the entire input - this is basically at the root of the SQL injection class of bugs
Secondly, when you have two pieces of code that need to respond in the same way to a given input, but run different parsing code, some inputs may lead to different, and unexpected, outcomes
This should be as well known as “don’t roll your own crypto” - vulnerabilities lurk in underdeveloped parsers. And once you’ve written one, it can be extremely difficult to rip it out and replace it, leading to a never-ending cycle of patching, introducing new vulnerabilities, and patching again.
Langsec.org collects papers and other information about these issues, and they define two top level problems that lead to these bugs. First, scattering the parsing around your codebase makes it hard to fully validate before you take action; for an extremely simple example of this, think of SQL injection issues, where we start the query on the database before we’ve fully extracted the incoming query string and confirmed that it’s not malicious.
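The SQL injection example can be shown directly. A sketch using Python's built-in `sqlite3` (the table and the injection string are hypothetical): parameterized queries let the driver finish parsing the statement before the input is bound, so the input can never rewrite the query's structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('mallory', 1)")

user_input = "alice' OR '1'='1"

# Parameterized query: the statement is parsed first, the value bound after,
# so the input is only ever treated as data.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing

# The vulnerable pattern, for contrast: concatenation lets the input
# become part of the query language itself.
unsafe = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # every row comes back
```

The same principle applies to any query or command language: keep the code and the data in separate channels.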
Secondly, the code that you wrote doesn’t do what you think it does. It does only what you told it to do. This is kind of simplistic, because this is really the root cause of all software bugs, but it leads to nasty implications here
If the input you’re parsing is sufficiently complex, it’s possible that we can never fully validate it, especially as your input grammar approaches Turing completeness. You should try to make your acceptable inputs as minimal as possible where you can. Where you can’t, understand that this is going to be an issue and think about how to defend against it with mitigating controls, if you can.
From the same twitter thread - these vulnerabilities exist all over because people are permissive with their parsing, they use partially parsed inputs, and they make assumptions about how cases are handled when connecting components.
To wrap this up,
Inputs you expect should be fully defined during your design phase, ideally with a grammar
Fully validate all inputs before you take action on them in your code
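Both points can be sketched in a few lines of Python: define the accepted input language up front, recognize the entire input against it, and only then act. The order-ID grammar here is a hypothetical example.

```python
import re

# The accepted input language, defined up front as a (trivial) grammar:
# hypothetical order IDs like "ORD-123456".
ORDER_ID = re.compile(r"ORD-[0-9]{6}")

def parse_order_id(raw: str) -> str:
    # fullmatch recognizes the *entire* input -- no acting on a
    # partially-parsed prefix.
    if not ORDER_ID.fullmatch(raw):
        raise ValueError("input is not in the expected language")
    return raw

print(parse_order_id("ORD-123456"))  # accepted
try:
    parse_order_id("ORD-123456; DROP TABLE orders")
except ValueError as err:
    print("rejected:", err)
```

Keeping the grammar this simple (regular, not context-free or worse) is exactly the "minimal acceptable inputs" advice from the previous slide: the simpler the language, the more decidable full recognition is.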
Next, Authentication is another component that can be great to outsource if at all possible. Supporting SSO through SAML is a much better security choice than building your own stack to handle login, log out, password change, forgot password, account storage, and so forth.
User interfaces can have a big impact on security here. For instance, if you write your own login, do you have a password strength meter for users? Do you allow pasting text into the password field, so that users can use password managers?
Encapsulation is another good design pattern regardless of security concerns. It has a broad security impact across access controls, preventing information disclosure, and enabling you to work on small sections of your code. More modular code is more maintainable code, which helps you respond to security issues quickly and contain the impact.
Monitoring and alerting are important not just for catching outages or performance issues, but also to detect security incidents. This can require a lot of tuning, because you don’t want to be flooded with false positives. For instance, you don’t necessarily want to be paged every time someone performs a search with an angle bracket character. But you would want to be alerted if a web resource started making massive requests, well above your expected volume, against a database that holds your customer account information. That could be an attacker who found a sql injection bug and is exfiltrating your customer information.
This is a controversial topic, but if you’re designing a system from scratch, and you have the chance to use a memory safe language, think about doing so. Memory corruption bugs are loved by exploit developers, and starving them of these resources is a good way to improve the security of your product.
At the same time, be careful about choosing a trendy language that most of your programmers don’t know, as you can end up without the necessary expertise to keep it up to date if your experts leave.
This was a lot of content, and wasn’t nearly enough time to teach you how to design to handle all these challenges. As you begin to dive into these issues, the OWASP top 10 list can be a helpful guide.
These are the current web OWASP top 10. Many of the issues that we’ve discussed fit into these buckets, and these are a great start as a checklist of security issues to validate your systems against. When you design your security requirements for a new feature, take time to make sure that you’ve thought about whether any of these bugs could occur in the new code. If you’re writing a new search display page, you’ll want to think about XSS. But you also want to consider if you have enough logging. And you’ll want to make sure that the search engine you use isn’t one that’s vulnerable. And could the search surface information that the user shouldn’t have access to see?
It can be an incredibly useful exercise to sit down with your team & make up your own custom ranking of these issues. If you don’t use XML, but do get XSS issues, then you should move XSS higher in your personalized ranking.
Fixing security issues at the design phase is infinitely more effective than trying to mitigate them after the code is finished and nearly ready to deploy.
Every compensating control we don’t have to put in place, because the problem was caught early, is a big win.
So now that we’ve talked about the design phase, let’s discuss the fun part where we write code.
Developing security standards for your projects is great, but then oftentimes reality hits as deadlines loom and you’re working through sprint cycles to iterate and launch.
All the concerns we just went over are still going to come into play - security issues can’t just be designed away with a good set of requirements, unfortunately. But we can follow some practices that should have a security positive impact on our software.
Empowering your engineers to be responsible for the security of your code is one of the biggest wins. Do they have security experts to reach out to, and can they engage them when they need to?
How well does your development team and the security team work together?
Do you get value from pairing with your security team, or are they always a roadblock for you?
Do you know how to explain to the security team why you need to do something?
The security team should be helping you to make decisions about what security concerns to prioritize, but they need to understand what goals you’re trying to achieve.
Can you explain to them what metrics you’re seeking to affect, and what approaches you’ve already tried, and what problems you saw?
They should also be able to explain the security impact to you of various choices you’re considering. Do you want to use a new javascript library to assist with image resizing on your reviews page? What do they tell you about the risks of incorporating this into your website?
Make sure your team can find documentation on your security requirements and your standards. Ensure that they know what resources they can engage with. Also put effort into keeping this up to date, as new threats emerge.
Train your senior engineers to recognize patterns that cause security problems so that they can catch issues in code reviews.
Start with basic security training - that raises awareness - with all your developers. If you have some security enthusiasts in your engineering team, have them create an hour long training and give it every few months. Maybe it just presents the OWASP top 10.
Expand as you can.
Don’t forget people outside your core technical staff, like PMs or customer support engineers - they might not need the same technical training about parsing, but teaching them about user input issues can go a long way to empowering them to spot issues early
It doesn’t work to assume that someone else in the process owns the security of your code or your systems and to rely on external checks. One of the best ways to help your teams understand security impacts in the code they write is to familiarize them with real world reported issues and empower them to tackle similar concerns that come across in your codebase. If you have a bug bounty program, do they get involved with assigning an impact to reported issues? Do they have a way to see customer reports of bugs? If you have a Security Operations Center, or SOC, have they ever visited to see what alerts are being raised?
We’ve found that setting teams up with a purposely vulnerable system to hack on like Juice Shop is a great tool. They learn how easy some of these exploits are, and they can explore new vulnerabilities they haven’t seen before. There are many online CTFs beyond just Juice Shop, and you might consider giving teams time to play with them, or do a group project where you tackle them together.
Teamwork is really helpful here, because these can be discouraging if you don’t know the tricks for attacks just yet. Get several engineers together to research potential attacks and brainstorm how to iterate from an attack that works on one level to the next.
In addition to training, there are some tools we can put to use in our organizations to improve our security
If you have automated tests and deployment pipelines, static & dynamic analysis can be great tools to add in. They often don’t get used because of the worry of false positives and the tuning required, but they’re incredibly helpful.
Static analysis tools help ensure that secure coding policies are being followed. They’re language specific, so you’ll need to find one for the programming language that you use. They scan your source code for issues like numerical errors, input validation gaps, race conditions, path traversals, and pointer problems like bad dereferences
Dynamic analysis, in contrast, is a form of run-time verification.
These tools monitor application behavior for memory corruption, user privilege issues, and other critical security problems. Oftentimes they have to be trained by giving them a set of urls and credentials, or a fuzzing corpus.
These can be harder to hook into a deployment pipeline, but automating these against your builds on a regular basis will help to turn up issues
Code reviews are a really underrated tool. This is where you can enforce standards, such as using the same code to perform permission checks, making sure that XSS protections haven’t been accidentally disabled, and reviewing for adequate logging and error messages.
Pull in your security engineers for reviews when you work in sensitive areas. Work to teach your teams what to look for to prevent security issues. For instance, if they see parsers, or code that is trusting user input, or performing permissions checks in non-standard ways, what do they do?
This makes sense, right?
If you have to go into an unfamiliar area of your code base to fix an issue, can you cleanly make the fix without causing a regression? What if the change you made altered some property that provided a security assurance? How would you know?
This is almost always true. Users break everything. They fall way off the happy path.
So to take advantage of our users finding all those issues for us, we can launch software in controlled ways that enable us to detect and respond to problems.
Canary launches are pushed to just a few production hosts with heightened alerting on them, and help you detect issues early. If no emergent issues are detected, you can do a wider rollout.
Launch flags can help you slowly roll out code, then enable it for greater numbers of people. They’re also great if you detect an issue, as you can turn off just that feature instead of having to debate rolling back.
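One common way to implement a percentage rollout behind a flag is deterministic hash bucketing: the same user always lands in the same bucket, so you can widen a feature from 1% to 100% (or kill it) without users flapping in and out. A minimal sketch, with names of my own invention rather than any specific flag library:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically decide whether a user sees a feature.
    Hashing feature name + user id means each feature gets its own
    independent bucketing, and the decision is stable across requests."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Turning the feature off for everyone is then just setting `percent` to 0 in config, with no rollback required.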
The goal through all of this is to build more resilient code, and make it harder for developers to accidentally cause security issues. We want to make sure that we improve our processes and pick up the low hanging fruit. For instance, if you find that your engineers frequently cause XSS issues when they fix html entity display bugs, what could we do to break this loop?
We could create checklists for code reviews: is the sanitize function called correctly every time? And we can run static analysis and dynamic analysis tools that check for XSS. We can make sure that we test the code against XSS inputs before we launch it to the customer base. All of these are ways to catch an issue and stop the vulnerability from reaching production.
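As one example of that kind of pre-launch test, a regression test that feeds a known XSS payload through the rendering path catches the case where someone “fixes” an entity-display bug by removing the escaping step. A sketch using Python’s stdlib `html.escape` (the `render_comment` helper is hypothetical):

```python
import html

def render_comment(user_text: str) -> str:
    """Escape user input before interpolating it into HTML."""
    return f"<p>{html.escape(user_text)}</p>"

# Regression test: a classic XSS payload must never survive rendering.
payload = "<script>alert(1)</script>"
assert "<script>" not in render_comment(payload)
```

A handful of tests like this, run in CI, interrupt the fix-the-display-bug/reintroduce-XSS loop mentioned above.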
Security engineering isn’t just about software flaws.
No matter where your product lives, account lifecycle, configuration management, and protecting secrets are going to be a concern.
Hosting in the cloud removes many older security concerns, like building physical security controls to secure server farms. But it introduces a host of new ones.
“Just run it in AWS” is a really appealing solution, and for most systems it’s a great choice, security wise. But it brings new issues. For instance, would you know if someone checked your AWS secret access key into a public GitHub repo?
Are you sure that all the databases and S3 buckets that you have are protected from being accessed by the public?
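You can automate the simplest version of that bucket check. Here’s a hedged sketch that parses an S3 bucket policy document and flags the classic wildcard-principal mistake; real tooling like AWS Config or IAM Access Analyzer is far more thorough, and this only catches the most blatant case:

```python
import json

def allows_public_access(policy_json: str) -> bool:
    """Flag S3 bucket policies that grant access to everyone.
    Only detects the wildcard Principal ("*" or {"AWS": "*"}) --
    a deliberately narrow sketch, not a full policy analyzer."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            return True
    return False
```

Running something like this against every bucket on a schedule, and alerting on hits, turns “are you sure?” into a question with an answer.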
There are many more problems with cloud hosting that you should plan for. Some of these are old problems, like backups and patching.
Some are new ones: have you looked at the available cloud services and figured out how to use the ones that really fit your use case best, or are you just replicating your old data centers into cloud environments? If you can move something into a Lambda, you’re in a much better place from a security perspective. Although there are still access controls and account permissions to configure, a lot of the overhead around patching and securing machines is handled for you.
There are a lot of great tools out there that will help you lock down your cloud environments. Azure and AWS offer some great tooling, and there are independent ones like this great short questionnaire that help you get a handle on the level of risk you’re taking on based on your configurations.
This is a pretty typical response pattern for a small SaaS company. They use a bastion, but they don’t require 2FA on all GitHub accounts. They have CloudTrail turned on, but they’re not sure if anyone configured it.
This is a sample response for this set of answers to the questionnaire. It weights different question & answer sets differently, and keys off some interactions, as some configurations can become more dangerous in the absence of other controls. So this is a nice tool to play with just to see where you might be able to get some wins by tightening up some controls.
[Timing - go run this?]
One of the biggest wins you can make in a cloud environment is handling secrets and keys in a mature way.
How hard is it for you to rotate secrets? Do you use AWS Secrets Manager or AWS KMS?
And how quickly can you offboard an engineer’s access when they leave your company?
Could an engineer getting phished put your root AWS credentials at risk?
This is definitely true at many more SaaS companies than you would expect.
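A pattern that makes rotation practical is to never hand code a long-lived copy of a secret: look it up at runtime through a short-lived cache, so a rotated value is picked up within minutes. A minimal sketch under stated assumptions -- the backend here is just environment variables for illustration, where in production the lookup would call AWS Secrets Manager or KMS, and the function names are my own:

```python
import os
import time

# Rotation-friendly secret access: callers never hold a long-lived
# copy; the cache expires so a rotated value is picked up on the
# next lookup after TTL_SECONDS.
_CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 300

def get_secret(name: str) -> str:
    now = time.monotonic()
    cached = _CACHE.get(name)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]
    value = os.environ[name]  # stand-in for a Secrets Manager call
    _CACHE[name] = (now, value)
    return value
```

The same indirection helps with offboarding: revoking access in one place (the secrets backend) cuts off every consumer, instead of chasing credentials copied into config files.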
Now let’s move to launching our secure code.
Once we’ve launched, we need to be able to detect issues and respond appropriately. Here’s where our logging and alarming are critical. How do we know if there are issues in the system? Who gets alerted and how?
If there was a security incident, could you reconstruct what happened?
Logging configuration is an entire topic of its own. If you don’t do incident response tabletops or respond to real issues, you may not realize how little you can reconstruct from the logs in your environment.
Even worse, if all your systems log in different time zones, you may need to do a lot of processing to normalize the data and construct timelines to find out what happened.
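The normalization itself is simple if your timestamps carry offsets; the pain is discovering mid-incident that they don’t. A sketch of building one UTC timeline from mixed-zone logs, using stdlib `datetime` (the sample timestamps are made up):

```python
from datetime import datetime, timezone

def to_utc(stamp: str) -> datetime:
    """Normalize an ISO-8601 timestamp with an offset to UTC, so log
    lines from differently configured systems sort into one timeline."""
    return datetime.fromisoformat(stamp).astimezone(timezone.utc)

# Two events from hosts logging in different zones; sorting by the
# raw strings would put them in the wrong order.
events = ["2023-05-01T09:15:00-04:00", "2023-05-01T13:05:00+00:00"]
timeline = sorted(events, key=to_utc)
```

The cheaper fix is upstream: configure every system to log in UTC in the first place, so there is nothing to normalize.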
What do the permissions on your logs look like? Could an attacker modify them, hiding what they did? How would you detect this?
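One well-known answer to the detection question is making logs tamper-evident. A toy sketch of hash chaining, where each entry includes a hash of the previous one, so editing any earlier line breaks every hash after it -- assuming you ship the final hash somewhere the attacker can’t reach, like a separate logging account:

```python
import hashlib

def chain_logs(lines: list[str]) -> list[str]:
    """Prefix each log line with a hash chaining it to all prior lines."""
    prev = "0" * 64
    chained = []
    for line in lines:
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        chained.append(f"{prev} {line}")
    return chained

def verify_chain(chained: list[str]) -> bool:
    """Recompute the chain; any edited or reordered line breaks it."""
    prev = "0" * 64
    for entry in chained:
        digest, line = entry.split(" ", 1)
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        if digest != prev:
            return False
    return True
```

In practice you’d lean on managed options (append-only log shipping, CloudTrail log file validation) rather than rolling this yourself, but the principle is the same.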
Change management is a really critical and often neglected security control. Maturing this area is one of the process improvements to focus on.
Could you detect a backdoor that one of your programmers inserted into your codebase? How many people have to sign off on each code change before it reaches production? Can you look back and determine who made the commits that launched on a given day, and what tests and code reviews were done on them?
Security engineers love to claim that asset management is the key to protecting everything. As software developers, we can sometimes think that this doesn’t apply to our area. But your teammates probably create new microservices or new marketing blogs. These are new endpoints; do you track them?
And does your organization allow you or other employees to create tools through Salesforce or similar applications that can expose data to the internet?
So wrapping up, now that we’ve launched our software and it’s live, we still have security concerns to handle.
Do you have the ability to easily push updates to your production environments?
Many teams have good oncall processes for handling high priority bugs that emerge, but lower priority issues linger in their backlog and never end up getting addressed. Even worse, some issues can be logged into a backlog at a time when they’re low risk, but then an exploit is released that takes advantage of them, or another bug is found that can be chained with the original issue to create a higher severity problem. Do you have some way to discover these events, and reprioritize?
This has a major impact on your security stance. Many teams give technical debt cleanup work to the least experienced team members, while others outsource it, and others make just a few veteran coders primarily responsible for tackling issues in the oldest sections of the code. None of these strategies will really set you up for success.
It’s not just code that you wrote that needs to be updated. Third party libraries that you depend on can have security issues in them that are patched, and you’ll need to track and manage the rollout of these updates.
Do you have expected timelines for pushing these patches to your production systems? Do you patch in your test environments?
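The core of tracking third-party patches is just comparing what you’ve pinned against known-patched minimums. A minimal sketch of that comparison -- the package names and versions below are invented for illustration, and real tooling (pip-audit, Dependabot, and similar) pulls the minimums from advisory databases rather than a hardcoded dict:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn '2.3.1' into (2, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical pinned dependencies and advisory-patched minimums.
pinned = {"somelib": "2.3.1", "otherlib": "1.0.0"}
patched_minimums = {"somelib": "2.3.4"}

def needs_update(pins: dict[str, str], minimums: dict[str, str]) -> list[str]:
    """List pinned packages that fall below a known-patched version."""
    return [name for name, version in pins.items()
            if name in minimums
            and parse_version(version) < parse_version(minimums[name])]
```

Running a check like this in CI, against a live advisory feed, is what turns “we should patch” into a concrete, prioritized list with a timeline attached.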