Webinar Security: Apps of Steel
Martin Gandar: Welcome to our security seminar on creating apps of
steel. I'm Martin Gandar, Strategic Marketing Director of Service2Media,
and I'm going to start proceedings by highlighting some of the main issues
around app security. I'll end up asking our Security Director, Derk
Tegeler, a really, really hard question, which, once he's answered, we'll
then throw open the floor, take the questions that you've been sending us
on instant messaging.
So let's start off by looking at the state of the market. Basically, the
feature race on these tiny devices drives business priorities that
penalize security: changes that we just can't keep up with, and the
problems that they cause.
Security is all about the technical stuff. Policies, standards, processes,
best practices and procedures. You've heard it all before. The aim here
being to achieve a certain level of trust. Trust, however, as you're
probably aware, is really a matter of perception, and my perception is
that it's really, really poor.
Security fixes don't even reach [inaudible 01:03] devices. Apps are
themselves insecure. Networks can't be trusted, PKI is broken, DNS is a bit
of a joke, at least in terms of security, and Android alone counts, what,
129 Trojans and spyware apps, and we're not even thinking about things like
Carrier IQ and similar semi-legal initiatives that have dubious security
issues associated with them.
So, who has to deal with this? Mobile designers and developers or mobile
manufacturers? Although most manufacturers make security fixes available
through mobile OS updates, end users often don't get them. Even if the
update hits the market as an available software upgrade, which is rarely
the case, frankly, the end user has a choice, and it's a bit of a scary
one. Do they take it, or not? It's not a trivial thing to do, so they
often don't.
Although the trend towards smarter and easier update mechanisms is clearly
there, and that improves the security landscape, the end result is still
that millions of devices are left out there, unpatched. Well, what does
that mean for those honorable people who are trying to build decent apps
that are secure? Frankly, ignorance and sloppy designs often lead to
insecure apps.
We've seen apps that don't bother at all with transmission security, and
passwords and other sensitive information being transmitted using plain
HTTP. Or encrypted communication through HTTPS, but without actual
certificate validation, enabling any rogue operator to mount a
man-in-the-middle attack quite easily.
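To make the point concrete, here is a minimal sketch (in Python, as an aside, not from the webinar) of what proper client-side certificate validation looks like; switching either check off is exactly what enables the man-in-the-middle attack just described.

```python
import ssl

# Minimal illustrative sketch: a TLS client context that actually
# validates the server certificate and the hostname. Turning either
# check off is what enables man-in-the-middle attacks.
context = ssl.create_default_context()

# These are already the secure defaults; they are set explicitly here
# to show what must never be switched off in production code.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

assert context.verify_mode == ssl.CERT_REQUIRED  # certificate is checked
assert context.check_hostname                    # hostname must match cert
```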
Apps are, basically, compressed software packages. Remove that gift wrap
and you'll find the images, the logos and the icons inside them, but you'll
also find actual code. JavaScript, for example. This can all be easily
modified, repackaged and redistributed as a clone. Even object code can be
reverse-engineered and replaced. In most cases, detection of this is pretty
straightforward for people like ourselves, but impossible for the average
end user to see.
Then, of course, there's the app stores. What help do they give us? Well,
traditional secure software application distribution happened in a very
controlled manner, directly from the manufacturer to the end user. With a
modern app store, the chain of trust here is broken. Applications are
delivered to a third party, which can refuse or delay distribution, which
in itself could be a business risk, of course, because it means you may not
get an app out in time, etc., etc. Technically, they can modify and replace
the intended application. Of course, this is not in the interest of the app
stores. They're not going to do this. The key here is that the
manufacturer, you, has lost control over the last mile.
Then, of course, there are those people out there to get us with malware.
F-Secure mentions 717 threats in its latest quarterly report, distributed
across all the different device types, but probably most on Android, to be
honest. And Symbian counts the most, at 528, I think. Most of these
are Trojans, meaning that a significant control of the mobile device has
been achieved by ill-intentioned people or organizations. And once they're
on that device, malware is nanometers away from your app and its storage.
We won't spend a lot of time on that subject, because it's a big one, but
just be aware that the problem is here, it's real and it's growing.
Remember Carrier IQ last year? I mentioned it earlier. Although ill
intention hasn't really been proven, it did arrive on millions of devices,
capturing keyboard events and URLs and sending information to some servers.
We don't know entirely what it sent, but, technically, it was and is still
possible to recover accounts and passwords and match them to URLs. In
the wrong hands, this of course causes a great deal of trouble on a massive
scale.
Let's move on to another major subject, a major problem, which is the
networks. Most of your apps communicate with back-end systems. Ignoring for
a short moment reliability issues there, it seems a simple and seamless
process. You send a packet from your app to your server, it sends a
response, and everything works fine. Yet these packets flow through
multiple layers of systems and third party organizations, all with their
own motivations and business objectives.
A simple example. In order to send a packet to, let's say,
backend.example.com, the mobile device will request a destination IP
address from a DNS server, which is just not under your control; it is
probably run by the operator. So, network operators and other last-mile
Internet providers have their own set of problems, which they address with
caching, filtering, image resizing, compression, etc.
The problem here, similar to the app distribution problem, is that third
parties have access to your content and you need to trust them and you have
no control or recourse over anything they do wrong. You have to trust them
to deliver your content with a reasonable amount of confidentiality and
integrity.
One of the reasons for using HTTP, and preferably HTTPS, is that some
operators filter, that is, block, direct TCP connections based on port
assignments. From a technical perspective, it's trivial to circumvent these
limitations, but not on a global scale.
So when you're talking about the operators, what can they do to you? Well,
operators use bandwidth limitation techniques, effectively slowing down
your apps. Operators can add, remove or modify HTTP headers. Operators can
also add, remove or modify ads. Operators fail to protect your
communications adequately if they use old encryption mechanisms, among
other things.
By the way, if you want to know a lot more about that, and the options
there, look at the Germany-based Chaos Computer Club. They have some great,
enlightening material on the subject.
Operators can also redirect traffic using a combination of techniques; DNS,
transparent proxies, etc. And they're often required by law, in particular,
in various local countries which are less open than we are, maybe, to
implement content filtering or blocking mechanisms.
Still on networks, but now Wi-Fi, which is another problem entirely. A
variety of techniques allow malicious intent to proliferate: spoofing,
pretending to be a public hot spot, and other tricks to achieve
man-in-the-middle type attacks. The consequence is that information,
credentials and transactions can be eavesdropped on and/or modified.
The solution is often sought in the HTTPS protocol. However, both DNS and
PKI are vulnerable, especially if you're connected to the Internet through
one organization that controls both.
So where does that leave us? What's the conclusion to all this? Well, in
short, as the developer, you're responsible for the confidentiality and
integrity of the data you send, store and process. The end users of your
apps cannot be expected to know everything and act with a serious, security-
conscious mindset. They can't do it. It's your job to protect them.
So, the really tough question I'm going to ask Derk is: how do we make apps
of steel that are decently secure, without massive overhead and effort?
It's really tricky. Over to you, Derk.
Derk Tegeler: Right. Thanks for that, Martin. It's a tall order for
today. Okay, I've got a lot of ground to cover, so please send questions
and comments through the IM function. I'll try to cover them all at the
end.
Mitigation strategies; let's first have a look at a few known
vulnerabilities. The Android OS has a disastrous history of problems. Some
relate to the design by Google. However, most are attributable to OEM
manufacturers. I'll take two small examples.
Zero permission apps on Android have read access to the SD card. So don't
store anything precious there.
On some devices, the Android Internet permission gives read access to lots
of information, including stack traces of every running thread. They might
as well have included a memory dump to make things simpler.
The keychain is broken on iOS, so don't use the native crypto API; use a
proven alternative. This was exposed by the Fraunhofer Institute quite
some time ago and has, to my knowledge, not been fixed. It's good crypto
stuff from Apple, but bad key management.
BlackBerry; multiple vulnerabilities in the WebKit-based browsers, the
PDF distillers, the [inaudible 09:32] file system, etc. The RIM
[inaudible 09:35] system is more complex, as it includes BES, the
BlackBerry Enterprise Server, and, without stopping at the availability
issue, multiple vulnerabilities have been found, mostly in the rendering
area.
Windows Phone 7; for once, Microsoft did a reasonably good job in terms of
security, despite a few early issues. However, phones with the mobile
Microsoft operating systems are manufactured by OEMs, who often include
extra functionality or simply need to add software to the OS to drive the
hardware. Alex Plaskett from MWR InfoSecurity labs has shown relevant and
exploitable vulnerabilities.
In short, mobile operating systems have lots of issues, and many of these
will remain with us for years to come. Just assume that all devices are
insecure, period.
Okay, let's turn our attention to software development organizations for a
moment. Do you have a change control board? And if you do, are detailed
accounts of impact and risk assessments available for review and audit? My
principal advice to any software development organization is this: read
through the ISO 27001 documents and make sure your organization is process-
based and that risks are managed explicitly. Add accountability, good
documentation, and every audit will be smooth, enjoyable and very
instructive.
Reserve a budget for security. You will end up spending more money if you
do not. Security audits, or worse, a vulnerability discovered after the end
of the development project, will force you to fix stuff that would have
been cheaper, less time-consuming and with significantly less image impact.
Risk analysis; I'll present a quantitative approach shortly, but you may
ask yourself, "Why bother? And why bother with security in the first
place?" If money is involved, as in a banking app, you already know that
you need to do something about security. If not, please do a PIA, a privacy
impact analysis. Most countries have strict legislation regarding the
processing, storage and transmission of privacy-related information. Make
sure you conform to, or exceed, current legislation.
Multi-channel everywhere. You've most certainly heard about multi-factor
authentication. In essence, you'd like to use different, independent
delivery channels for sensitive information. For example, a new debit card
is to be picked up at a bank, but the initial code to enable the card is
sent by mail, snail mail, that is, making it difficult for an unauthorized
person to hold both items necessary to make transactions at the same time.
Hardware one-time password, or OTP, devices are another method of making
it difficult for an ill-intentioned person to hold all the elements
required to log in or confirm a transaction.
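As an aside (not part of the platform being discussed), the algorithm behind most hardware OTP tokens, HOTP from RFC 4226, can be sketched in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226), the algorithm behind
    most hardware OTP tokens. Illustrative sketch only."""
    # HMAC-SHA1 over the counter encoded as an 8-byte big-endian integer.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the secret "12345678901234567890".
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Each press of the token button advances the counter, so a captured code cannot be replayed later.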
Firstly, mobile applications must be easy to use if they stand any chance
of adoption on the market. This implies that multi-factor authentication
becomes difficult to implement, as your end users do not want to carry OTP
devices at all times, for example.
Secondly, different technical channels, for example, SMS and HTTPS, often
terminate on the same device. If the device is compromised, it is
plausible that all these channels are compromised as well. You do raise the
security bar, as it is less probable that both SMS and HTTPS are
compromised at the same time, but be aware that it is not sufficient in
some cases.
Okay, let's move on to design. This is a complex field, requiring lots of
discipline, knowledge and experience. I'd like to go over some of the
basics, giving some handles towards better and more secure design.
Information classification; in order to get some more insight and provide
the coders with some ground rules, make an inventory of the information
flowing through or being processed by the app. Next, identify where
the information is accessible. Finally, assign rules for every single item
identified: for processing, for transmission and for storage.
For example, news, which is public, non-sensitive stuff, is processed both
on the client and the server. It may be transmitted and stored in the
clear, for instance, using plain HTTP. Please note that there is still a
risk here: in a banking app, repeated reading of an article concerning the
Facebook IPO may hint at a possible intention to buy shares.
However, rules for a password are totally different. Passwords are
processed both on the client and on the server. Transmission of a password
in the clear is prohibited, so use a secure, non-replayable protocol, such
as TLS-SRP. Storage on the server should be permanent and encrypted,
preferably using a one-way (hashing) mechanism. Storage on the client
should be encrypted and non-permanent, preferably for a duration no
longer than the session.
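As a sketch of the one-way server-side storage rule just described, here is a Python example using PBKDF2 from the standard library; the salt size and iteration count are illustrative, not prescriptive:

```python
import hashlib
import hmac
import os

# Sketch of one-way password storage: the password itself is never kept,
# only a salted, slow hash of it.
def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest    # persist both; the hash cannot be reversed

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```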
Risk analysis. Similarly to the information classification, an inventory of
all the risks makes the need for mitigation measures apparent and, more
importantly, makes the explicit acceptance of residual risks by all
stakeholders possible.
Let me explain. By analyzing the application and its dependencies, risk
elements become apparent. Let's start with a simple risk quantification.
You assign a number between one and five to the technical impact, the image
impact and the likelihood. Multiply them all to obtain a quantified risk. A
more complex risk quantification involves money. Replace the arbitrary
impact figure with a realistic amount in your currency. Replace the
likelihood multiplier with a realistic percentage.
Let's take a simple example; a standard banking app uses IP-based network
communications. So, risk can be decomposed into elements, namely, impact
and likelihood. The impact can be decomposed into technical impact, this
translates into an exploitable vulnerability, or an image impact which
translates into damage in the public opinion area. Unless the identified
risk is completely eliminated, the impact never changes. Only the
likelihood can be diminished by mitigation.
So, in this case, the technical impact is 5, the image impact is 5 as well,
the likelihood is very high, so also 5. So the quantified risk becomes 125.
This is the maximum.
As a mitigation, we can use HTTPS, and this translates into a reduced
likelihood after mitigation, to which we'll assign the number 2. So your
residual risk is 50, and you have the choice to accept it or not.
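The quantification just described is a single multiplication; the sketch below simply encodes the one-to-five scoring scheme and reproduces the banking-app numbers:

```python
# Sketch of the simple risk quantification described: technical impact,
# image impact and likelihood each scored 1-5, multiplied together.
def quantified_risk(technical_impact: int, image_impact: int,
                    likelihood: int) -> int:
    for score in (technical_impact, image_impact, likelihood):
        assert 1 <= score <= 5, "scores run from one to five"
    return technical_impact * image_impact * likelihood

# The banking-app example: maximum impact and maximum likelihood...
print(quantified_risk(5, 5, 5))  # 125
# ...and the residual risk once mitigation reduces the likelihood to 2.
print(quantified_risk(5, 5, 2))  # 50
```

Note that mitigation only changes the likelihood factor; the two impact factors stay at 5 unless the risk is eliminated entirely.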
Threat modeling is a technical risk analysis of your app or service. Once a
design emerges, a threat model can be made. You identify the key assets,
you decompose the application into key components. By the way, this helps
you to compartmentalize your code, which is always a good idea. You
identify and categorize the threats to each asset or component, you rate
the threat based on the risk ranking and, finally, you develop mitigation
strategies that are implemented in design code and test cases. A cursory
alternative is to trace the high-risk information through the app and
identify possible weaknesses. I don't recommend it, but it's always better
than nothing.
If you use third-party libraries or depend on the availability and
integrity of data offered elsewhere, you run the risk that someone else
introduces a bug or a vulnerability that will show up in your app. Use good
logging to cover your backside, but more importantly, please make a
thorough analysis of your dependencies and code defensively. Check your
inputs and your outputs. Check if your data is coming from the right place,
etc.
Another good method to improve the security of your app is to use security
levels. Transactions can be coupled with a security level. For example, if
you'd like to activate an app, you could enforce the use of a corporate
network and the use of an OTP password generator. During this heightened
security context, you could generate and transfer keys, activate accounts,
etc. For financial transactions, you could allow small transactions with a
password or a PIN, or enforce the use of an OTP password device for larger
transactions.
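A sketch of such per-transaction security levels in Python; the threshold amount, currency units and level names are hypothetical, chosen only to illustrate the idea:

```python
from enum import Enum

# Hypothetical security levels: small transactions need only a PIN,
# larger ones require a one-time-password device.
class AuthLevel(Enum):
    PIN = 1
    OTP_DEVICE = 2

OTP_THRESHOLD = 500  # currency units; illustrative only

def required_auth(amount: float) -> AuthLevel:
    """Map a transaction amount to the authentication it requires."""
    return AuthLevel.PIN if amount <= OTP_THRESHOLD else AuthLevel.OTP_DEVICE

print(required_auth(50).name)    # PIN
print(required_auth(5000).name)  # OTP_DEVICE
```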
Finally, good key management. I'll pass on commenting about the recent
Yahoo! private key disclosure through the Axis Chrome extension. That
example is really extreme. Having said that, the ability of malicious
parties to decrypt information is rarely the result of a broken algorithm.
A bad implementation of an algorithm has more chance of hitting the
headlines.
Most important is your key management. My advice: make it very
difficult, or preferably impossible, to guess or capture keys. If you must
store them, encrypt them. If you must transmit them, do so using
encryption, preferably with bidirectional authentication.
Software development. Let's start with secure coding standards. Identify
and document best practices for your ecosystem. Make it something lively.
Involve all your developers and capture new findings from code reviews and
other identified vulnerabilities.
Defensive programming. Most trouble comes from a lack of input validation.
This, in turn, is due mainly to a misplaced trust in the quality of the
work of someone else. Just a simple example to illustrate the problem. Two
developers are working together. One writes the client code, and the other
concentrates on the server part. The server guy is under time pressure,
resulting in an encoding error. The client expects news articles encoded
in UTF-8, but receives Latin-1 data, and posts the data without further
verification, resulting in strange-looking news articles.
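The missing defensive check can be sketched as follows; the function name and error handling are hypothetical, the point is simply to validate the encoding instead of trusting the server blindly:

```python
# Sketch of the defensive check the client was missing: confirm the
# payload really is UTF-8 before rendering it.
def decode_article(payload: bytes) -> str:
    try:
        return payload.decode("utf-8")
    except UnicodeDecodeError:
        # Reject (or log and quarantine) rather than render garbage.
        raise ValueError("article is not valid UTF-8; refusing to render")

print(decode_article("café".encode("utf-8")))   # café
try:
    decode_article("café".encode("latin-1"))    # é is 0xE9, invalid UTF-8
except ValueError as exc:
    print(exc)
```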
Don't leak. One may forget to clean up debugging code before going live,
inadvertently disclosing sensitive data. Audit and transaction logs may
contain exploitable information.
And a tip, you may use the SD card on a mobile device to store information
temporarily. After use, you dutifully overwrite the location, thinking the
information is lost forever. Nothing is further from the truth, as tiny
storage controllers pseudo-randomize the actual location of the data on
the chip, and there are lots of tools out there to read the chips, byte by
byte.
Finally, document; in the code, in the release notes, and in all other
documentation. This enables proper collaborative development and proper
maintenance. It enables you to hand your code over to a new team that
needs to take over from you.
Finally, think about the app life cycle. Hopefully, your applications will
go through lots of different versions and have a very long lifetime.
Finally, let's have a look at our life cycle management platform in
action. We actively prevent tampering with the app once it is published or
present on the mobile device. Our apps are packaged in such a way that a
runtime component decrypts the business logic before running it. We also
provide our developers with tools to simplify the use of some common
security-related activities in the code. I'll come back to this shortly.
Secure coding standards give our partners a strong basis. I've touched on
the importance of good, secure coding standards. To get our partners
started, we share our own standards.
We don't limit sharing our experience and know-how to our standards. We
give our partners access to templates, best practices and code snippets
maintained by our own developers. We encourage our partners to share their
experience through our development site. Together, we are all stronger.
Interactive APIs; I'll just cover very quickly the APIs that we offer to
our developers. We have an authentication manager, which provides you with
a means to verify the app's authenticity. We have cryptographic functions,
Base64 encoding and decoding. We have SHA-1 and SHA-256 hashing APIs. We
have AES encryption and decryption, and we can generate keys. And, finally,
we offer HTTPS support, including OAuth.
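For readers who want to try the equivalent primitives outside the platform, Python's standard library offers Base64 and SHA hashing directly (AES and OAuth require third-party packages and are omitted from this sketch):

```python
import base64
import hashlib

# Base64 encoding and decoding round-trip.
token = base64.b64encode(b"apps of steel")
print(token)                    # b'YXBwcyBvZiBzdGVlbA=='
print(base64.b64decode(token))  # b'apps of steel'

# SHA-1 and SHA-256 digests of the same input.
print(hashlib.sha1(b"apps of steel").hexdigest())
print(hashlib.sha256(b"apps of steel").hexdigest())
```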