Hybrid Trust Model for Assurance of Public Keys in Social Networks
1
Student Number: 100807646
Author: Max Kington
Hybrid Trust Model for Assurance of
Public Keys in Social Networks
Max Kington
Supervisor: Dr Allan Tomlinson
Submitted as part of the requirements for the award of the MSc in Information Security at Royal
Holloway, University of London
I declare that this assignment is all my own work and that I have acknowledged all quotations from
published or unpublished work of other people. I also declare that I have read the statements on
plagiarism in Section 1 of the Regulations Governing Examination and Assessment Offences, and in
accordance with these regulations I submit this project report as my own work.
Signature:
Date:
Contents
1 Executive Summary ........................................................................................................................... 6
2 Introduction ....................................................................................................................................... 8
3 Problem Statement ............................................................................................................................ 9
4 Context .............................................................................................................................................10
4.1 History of Interpersonal Communication and Security Stance ...................................................10
4.2 Evolution....................................................................................................................................12
4.3 Mass Surveillance .......................................................................................................................13
4.4 Conclusion ..................................................................................................................................14
5 Trust Models.....................................................................................................................................15
5.1 Incumbent Models for Trust.......................................................................................................15
5.1.1 Certificate Authorities .........................................................................................................15
5.1.2 Web of Trust .......................................................................................................................17
6 Problem Statement Validation..........................................................................................................22
6.3 Our Goal ....................................................................................................................................22
6.3.1 Requirements Review...........................................................................................................22
6.4 Attackers....................................................................................................................................23
7 State of the Art of PKI protection....................................................................................................24
7.1 Attack Resilient Public-Key Infrastructure (ARPKI) ................................................................24
7.1.1 Mode of Operation...............................................................................................................24
7.1.2 Evaluation ...........................................................................................................................24
7.1.3 Concerns & Open Questions ................................................................................................25
7.1.4 Relation to Requirements ....................................................................................................26
7.1.5 Conclusion ...........................................................................................................27
7.2 CONIKS: Bringing Key Transparency to End Users..................................................................28
7.2.1 Mode of Operation...............................................................................................................28
7.2.2 Evaluation ...........................................................................................................................29
7.2.3 Concerns & Open Questions ................................................................................................30
7.2.4 Relation to Requirements ....................................................................................................30
7.2.5 Conclusion ...........................................................................................................................31
7.3 Conclusion on the State of the Art.............................................................................................31
8 Hybrid Trust Model for Assurance of Public Keys in Social Networks .............................................32
8.1 Introduction................................................................................................................................32
8.2 Assumptions ...............................................................................................................................32
8.2.1 Technical Detail...................................................................................................................33
8.2.2 Attackers .............................................................................................................33
8.3 Nomenclature & Message Format...............................................................................................34
8.3.1 Messages ..............................................................................................................................34
8.4 Method of Operation ..................................................................................................................35
8.4.1 Phase 1: Bootstrap...............................................................................................................36
8.4.2 Phase 2: Confirmation Phase...............................................................................................37
8.4.3 Phase 3: Mesh Confirmation................................................................................................41
8.4.4 Phase 4: Ongoing Monitoring ..............................................................................................44
8.4.5 Phase 5: Gossip....................................................................................................................45
8.5 Security Analysis........................................................................................................................47
8.5.1 Limitations ..........................................................................................................................47
8.5.2 Considerations .....................................................................................................................47
8.5.3 First Tier Attacks................................................................................................................48
8.5.4 Second Tier Attacks ............................................................................................................49
8.5.5 Third Tier Attacks ..............................................................................................................53
9 Social Networks.................................................................................................................................59
9.1 Networks with Weak Ties ..........................................................................................................60
9.2 Network Behaviour Applied to the Protocol ..............................................................................61
10 Conclusion.......................................................................................................................................63
11 Implementation Detail.....................................................................................................................65
12 Future Work ...................................................................................................................................66
13 References........................................................................................................................................67
List of Figures and Tables
Figure 1: Web of Trust with Chains of Trust ......................................................................................18
Figure 2: Web of Trust with Disjoint Networks...................................................................................20
Figure 3: Back to Back Man-in-the-Middle Attack..............................................................................33
Figure 4: Bootstrap of Key Monitoring................................................................................................36
Figure 5: Key Confirmation Challenge Response Protocol...................................................................37
Figure 6: Mesh Confirmation Request and Response ...........................................................................42
Figure 7: Ongoing Monitoring Protocol................................................................................................45
Figure 8: Bootstrap and Key Confirmation to Enable Impersonation of Two Parties..........................49
Figure 9: Identity Key Confirmation and Mesh Confirmation .............................................................50
Figure 10: George Performing Selective Deception of Endpoints .........................................................51
Figure 11: Gossip Protocol Interaction between Multiple Endpoints ...................................................53
Figure 12: Network of Individuals and Triadic Closure........................................................................59
Table 1: Query Message and Reply......................................................................................................37
Table 2: Key Confirmation Request Message.......................................................................................38
Table 3: Key Confirmation Request Message.......................................................................................39
Table 4: Key Confirmation Challenge Response Message ....................................................................39
Table 5: Mesh Confirmation Request Message.....................................................................................42
Table 6: Mesh Confirmation Response Message...................................................................................43
Table 7: Gossip Request Message.........................................................................................................46
Table 8: Gossip Reply Message ............................................................................................................46
Table 9: Gossip Correlation Message ...................................................................................................46
Acknowledgements
I would first like to thank my project supervisor, Dr Allan Tomlinson, Senior Lecturer at the
Information Security Group at Royal Holloway, University of London, who listened diligently as I
scribbled on a whiteboard in his office in 2014 and provided such detailed commentary on what must
have seemed like an unsorted array of drafts.
I would also like to express my thanks to Dr Paul Gill, Senior Lecturer at the Department of Security
and Crime Science at University College London for suggested reading on crime related social
networks and behaviours. Very much not a natural area of research for a technologist such as myself.
Additional thanks to Dr Chris Mackmurdo, formerly Counter Terrorism Specialist for the Diplomatic
Service at the Foreign and Commonwealth Office, who was incredibly responsive and provided me with
pointed advice on reading material on cellular social structures.
Finally, my thanks to Joanne, whose support has given me the time and space to complete this work. For
that I am deeply grateful.
1 Executive Summary
Our work has resulted in the design of a novel approach to public key assurance through a new form
of PKI. This critical function underpins the ability to create end-to-end encrypted communications
channels; our interest lies specifically in their use within social networks.
To place our work in context, we describe the background of interpersonal computer-based
communication, examining email, then chat and instant messaging, and their movement from Bulletin
Board Systems onto the early Internet. We follow this with observations about the cryptosystems
developed afterwards to deliver confidentiality and integrity guarantees. By looking at this history
we show why these systems evolved the way they did, and why the security aspects of popular
communications systems saw so little development from the outset.
We touch briefly on the growth of instant messaging platforms, both generally and in some
specific arenas [76, 80]. This explosion of growth explains their importance to users. For threat
actors these platforms are similarly important targets for exploitation; we assert these systems
should provide effective security against mass surveillance, an activity we do not limit to nation
states but extend to include criminal groups and vendors. We also examine the usability challenges
and the inability of users to make meaningful risk decisions.
We then review two differing public key trust models, Certificate Authorities and the Web of
Trust. Both have allied goals: to enable the proper authentication of public key components when
employed in conjunction with technologies like PGP, S/MIME and SSL [3, 5, 77], ensuring the
authenticity of the endpoint our communications are destined for.
We go on to explore these models and their practical challenges: the Web of Trust in terms of
its usability issues and lack of popularity compared with certificate authorities, and the
certificate authority compromises of recent times, which have led to a number of novel approaches
to the web PKI problem [12, 13], to see if there is anything we can learn from this.
We later turn to recent revelations on mass surveillance to put the threat, and consequently
the risk, into perspective. In broad terms we found that whilst this activity does not change the
threats as they have been reasoned about for decades, it has fundamentally altered the way we
quantify the risk. Capable nation state threat actors were not assumed to be as practically engaged
in their work as they have turned out to be, and that has had a profound effect on the way they are
perceived by both the technical community and wider society [28].
We go on to set out our goals and define a series of requirements which candidate systems can
be evaluated against. We validate the requirements against our previous research.
We then look to the current state of the art and the novel approaches to authenticating
public keys in PKI, ARPKI [10] and CONIKS [42], which we evaluate in detail, both individually and
with respect to our problem statement.
We design a protocol combining concepts from the Certificate Authority model and the Web of
Trust to propose our own hybrid trust protocol. This is discussed in depth, looking at the
assumptions we can make about the operating environment and the problem it proposes to solve,
followed by a detailed security analysis of the protocol.
We then examine the behaviour of social networks and how their nature supports the operation
of the protocol. Finally we draw conclusions on the analysis and discuss implementation details and
future work.
2 Introduction
Our work focuses on the security of person-to-person communication and, more recently, instant
messaging within social networks.
Our interest in the security aspect stems from a number of factors that have come together in
recent years to link broad popular interest and need.
Firstly, the popularity of messaging platforms, which have experienced an explosion of growth
thanks to social networking combined with the ubiquity of Internet-connected devices [24, 25].
Secondly, the evolution of the threat profile, driven by the growth of criminal enterprise and
the government activity detailed in the mass surveillance disclosures of the last three years.
We explore the incumbent models and the state of the art in this context and find them all
limited in some way. This has led to our development and evaluation of a new hybrid trust model
and associated protocol, which aims to prevent well-placed attackers from performing large-scale
subversion of an end-to-end encrypted instant messaging platform used within a social network.
3 Problem Statement
Since the 1970s, the creation of public key cryptography has revolutionised our ability to
construct end-to-end encrypted channels without the need to first exchange secrets. The very nature
of public key cryptography, however, leads to a new problem, the authentication of public keys,
which has driven the development of a variety of innovative and now widely deployed solutions.
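As a concrete illustration of agreeing a secret without first exchanging one, a toy Diffie-Hellman exchange can be sketched in a few lines. The parameters here are illustrative only and far too small for real use:

```python
import secrets

# Toy Diffie-Hellman key agreement (illustrative parameters; real
# deployments use 2048-bit groups or elliptic curves).
p = 18446744073709551557   # the largest 64-bit prime
g = 5
a = secrets.randbelow(p - 2) + 1     # Alice's private value
b = secrets.randbelow(p - 2) + 1     # Bob's private value
A = pow(g, a, p)                     # sent to Bob in the clear
B = pow(g, b, p)                     # sent to Alice in the clear
# Both sides derive g^(a*b) mod p without ever sharing a secret.
assert pow(B, a, p) == pow(A, b, p)
```

Note that nothing here authenticates A or B to their owners; that gap is precisely the public key authentication problem this work addresses.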
Fast forward to 2016: over a billion users now use interpersonal messaging protected by end-
to-end encryption [52, 23].
Still, modern protocols either rely on users performing out-of-band key verification [23] or
provide no verification at all [53].
We believe there must be a mechanism, applied specifically to social network messaging, that
delivers key verification for a large user base with a good level of security, without needing to
resort to out-of-band verification or make value judgements about individual trust.
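For contrast, out-of-band verification typically asks users to compare a short fingerprint of each other's keys over a separate channel. A minimal sketch follows; the hash choice, truncation length and grouping are our own illustrative assumptions, not any messenger's actual scheme:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Hash the key and render a short, human-comparable string, in the
    # spirit of the "safety numbers" shown by modern messengers.
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:20]
    return " ".join(digest[i:i + 4] for i in range(0, 20, 4))

alice_key = b"\x04" + bytes(range(64))   # hypothetical raw key bytes
code = fingerprint(alice_key)
# Both parties compare this string in person or over the phone; a
# mismatch indicates a substituted key.
assert len(code.split()) == 5
```

The burden this places on users is exactly the usability problem discussed later in this work.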
4 Context
4.1 History of Interpersonal Communication and Security Stance
It is worth considering a recent history of computer-based interpersonal communication, such as
email and chat: the drivers, or lack thereof, behind their security mechanisms, and how and why
approaches to security have evolved.
Starting with email, the timeline begins in 1973 with RFC 561 [16]. Its main relevance to
this discussion is to highlight that there are no confidentiality or integrity guarantees. The Mail
Protocol described in RFC 524 [15] says of identity only that:
“The identity of the author of a piece of mail can be verified, avoiding forgery and misrepresentation.”
(White, 1973) [15]
How this is done is not discussed. Confidentiality and integrity were not primary concerns until
nearly 20 years later.
In 1991, Phil Zimmermann introduced Pretty Good Privacy (PGP) [3], a hybrid cryptosystem
using public keys to transmit ephemeral symmetric keys that protect an underlying payload, produced
on a file-by-file or message-by-message basis. It is helpful to note that the Web of Trust was not
introduced until 1992, and initially was not even referred to by that name [3], [4]. PGP does not
mandate its use for email, and PGP specifically for email did not get a standardisation track until
1996 [6].
A differing but related concept is Secure/Multipurpose Internet Mail Extensions (S/MIME).
Introduced in 1995, it also is not standardised until RFC 1847 [5]. S/MIME relies on public keys
delivered via X.509 certificates to bind identities together, which are then used for signing and
encryption; MIME is then used to encode and encapsulate the message for transmission.
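The layering, a signature carried alongside the message inside a MIME structure, can be sketched with Python's standard email package. The signature bytes here are a dummy placeholder; a real mail agent produces a PKCS#7 blob from its X.509 certificate:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Sketch of the multipart/signed layout defined in RFC 1847: the
# message body travels next to a detached signature part.
msg = MIMEMultipart("signed", protocol="application/pkcs7-signature")
msg.attach(MIMEText("Meeting moved to 3pm."))
msg.attach(MIMEApplication(b"<detached-signature-placeholder>",
                           "pkcs7-signature", name="smime.p7s"))

# The outer Content-Type advertises the structure to the receiver.
assert msg.get_content_type() == "multipart/signed"
```

Any intermediary can read the message body; S/MIME's encryption mode wraps the whole structure in an enveloped part instead.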
In parallel to email's store-and-forward communication, a more dynamic messaging ecosystem
was evolving almost entirely separately. Internet Relay Chat (IRC) [7], although not standardised
until 1993 (it was based on BITNET Relay, created in 1985 [8]), has no strong confidentiality or
integrity guarantees.
Does a lack of standards, therefore, allow us to infer anything about a lack of security in
systems? It is worth considering that in the nascent days of the Internet the perceived need for
standards was much lower than it is now, based on our own knowledge of how technologies evolved.
Is this supportable, however? There may be a number of reasons for this, not least because the
Internet was not the driver of global commerce that it is today; Cristiano Antonelli describes it as
follows, referencing Farrell and Saloner 1987 [17]:
“The demand for standardized products may be higher because of relevant network externalities.
Demand may be more elastic because of lower inertia determined by switching costs for consumers
and users of previous units of durable products. The demand for standardized products may also be
higher because of the important revenue effects generated by lower transaction costs for acquiring
information on the characteristics of the products and their performances (Farrell and Saloner, 1987;
Saloner, 1990).”
(Antonelli, 1994) [17]
After a broader discussion on conferencing in RFC 1324, security is still left as an exercise for
the reader:
“It might seem that encrypting the message before transmission to other servers in some way would
solve this, but this is better left as an option which is implemented in clients and thus leaves it to the
users to decide how secure they want their conference to be.”
(Reed, 1992) [9]
We therefore reasonably conclude that security in person-to-person communication was an
afterthought, and that a number of hybrid cryptosystems evolved to introduce confidentiality and
integrity.
Take-up where tools need explicit invocation has been limited, in part because of the
challenge of changing incumbent systems once they are deployed, as was the case with technologies
like email and IRC, and partly because of a lack of perceived need to protect communications in
this way. This may have been compounded by a lag in the standards processes, which did not carry
the same emphasis that they do today.
All of this highlights that security mechanisms were not deemed a real issue on standards
tracks or for implementers until just before the turn of the century.
What then of the world from 1995 onwards? If the answer was surely standards, now there
are standards [5, 6, 14]; perhaps there are too many. There are more elements involved, however:
for security to be employed routinely, we assert that it has to be usable. The topic of usability,
in software generally and in security specifically, is an expansive one. In an evaluation of PGP 5
(helpfully, an evaluation of an interpersonal communications tool of the kind we are concerned
with), Whitten and Tygar describe it as follows [37]:
“Strong cryptography, provably correct protocols, and bug-free code will not provide security if the
people who use the software forget to click on the encrypt button when they need privacy, give up on
a communication protocol because they are too confused about which cryptographic keys they need to
use, or accidentally configure their access control mechanisms to make their private data world-
readable.”
(Whitten & Tygar, 1999) [37]
Sheng et al. (2006) assert [38] that, seven years later, this had not improved:
“We found that key verification and signing is still severely lacking, such that no user was able to
successfully verify their keys. Similar to PGP 5, users had difficulty with signing keys. Three of our
users were not able to verify the validity of the key successfully and did not understand the reasoning
to do so.”
(Sheng, Broderick, Hyland, & Koranda, 2006) [38]
This supports our view that usability is a key thread; however, it is still not the whole picture.
If security measures need to be employed deliberately then there must be some impetus on the user
to engage those mechanisms, usable or otherwise. It is simply not enough to have usable security
mechanisms if the user does not care to turn them on. The psychology of what makes people safer is
again an extensive topic, although we lean on Howe et al. [39]. In their paper, The Psychology of
Security for the Home Computer User, they perform a meta-study of a wide range of analyses dealing
with how people understand threats, from viruses to hackers to criminal actors; how users view
themselves more positively than their peers; what brings threats to mind; and much more. It is in
fact hard to do their paper justice; however, they conclude:
“Generally, home computer users view others as being more at risk. When they are aware of the
threats, home computer users do care about security and view it as their responsibility. However,
many studies suggest that users often do not understand the threats and sometimes are not willing or
able to incur the costs to defend against them.”
(Howe, Ray, Roberts, Urbanska, & Byrne, 2012) [39]
This suggests that, as system designers, we must accept that users will understandably struggle to
quantify risks; even if we produce standards allowing software to interoperate and make that
software usable, end users may simply not enable its security. Simply put, protection mechanisms
must be built in and always enabled.
4.2 Evolution
The landscape has changed dramatically in the four decades since the first email RFC. Global
commerce has been revolutionised by the availability and adoption of the Internet. Indeed, in many
countries internet access is considered a public utility [20], [21]:
“In Chapter 1 we outlined the importance of the internet to everyone’s lives—at work and at home.
Later in this Chapter we show the personal and economic benefit of online skills; which will only be
secured with universal access to the internet. As Lucy Hastings from Age UK said: “… access to the
internet should be treated as a utility service”. We agree. The Government should make it its ambition
to ensure universal access for the entire population. If this could be achieved, the UK would be well-
placed to achieve significant growth.”
(The Select Committee on Digital Skills, 2015) [21]
The Internet is now a significant marketplace. In 2013, as estimated by the Office for National
Statistics, e-commerce on the Internet accounted for £557 billion of revenue [45]. E-commerce in
the US, according to the US Census Bureau, accounted for an estimated $92.8 billion in trade in the
first quarter of 2016 alone [46].
Alongside this growth in legitimate business, however, criminal activity is (and likely always has
been) also present. This seems intuitive and is supported by the outward actions of government:
cyber crime has been a Tier 1 priority of the UK Government since the Strategic Defence and
Security Review published in 2015, which committed £1.9 billion of investment over the following
five years to tackling this type of crime [47].
Interesting as this is, we must bring it back to our work. Specifically, we highlight this to
show that the underlying use of the Internet has changed, we hope for the better, but that this
change alters the threat profile. As we discussed in 4.1, users already struggle to quantify the
threat meaningfully themselves. This is supported by the fact that popular messaging protocols
have now introduced end-to-end encryption [23, 24], although this introduces other risks [84].
4.3 Mass Surveillance
At the start of this work we highlighted that the last three years have seen a significant increase
in the level of interest in encryption, and specifically end-to-end encryption, from the public,
the media, business and academia [10, 13, 34, 51].
Since the Edward Snowden disclosures in 2013, a significant amount of commentary has been
generated in the public domain around the existence and operation of mass surveillance programmes
like PRISM [69, 70].
This adds to the context in which our work finds itself: when we seek to define the
capabilities of our attackers, we can now look to detail which was previously secret. This has also
had an effect on the state of the art, which we should review:
“As a result of the Snowden revelations, the onset of commercial encryption has accelerated by seven
years,”
(James Clapper, Director of National Intelligence, 2016) [87]
In the case of end-to-end encryption, man-in-the-middle (MITM) attacks do not cease to exist, but
they do move. If, as we believe, man-in-the-middle attacks [78] should be given greater credibility
in the light of these disclosures, then we should be able to find some evidence of this within the
literature. It does not take long to discover a programme codenamed FLYING PIG [49], which seeks to
gather data on proposed targets for SSL MITM attack. Another programme, codenamed QUANTUM [50],
combines a range of man-in-the-middle and man-on-the-side techniques to perform highly
sophisticated watering hole attacks.
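The mechanics of such an attack against an unauthenticated key exchange are straightforward to sketch. This toy Diffie-Hellman example, with illustrative parameters and names, shows the back-to-back construction of Figure 3: the attacker holds a separate shared key with each endpoint.

```python
import secrets

# Unauthenticated toy Diffie-Hellman (illustrative parameters only).
p, g = 18446744073709551557, 5       # the largest 64-bit prime
alice, bob, mallory = (secrets.randbelow(p - 2) + 1 for _ in range(3))

A, B, M = pow(g, alice, p), pow(g, bob, p), pow(g, mallory, p)

# Mallory intercepts A and B in transit and substitutes her own M.
alice_shared = pow(M, alice, p)   # Alice believes she shares this with Bob
bob_shared = pow(M, bob, p)       # Bob believes he shares this with Alice

# Mallory derives both keys, so she can decrypt, read and re-encrypt
# every message between the two endpoints.
assert alice_shared == pow(A, mallory, p)
assert bob_shared == pow(B, mallory, p)
```

Without an authenticated binding between identity and public value, neither endpoint can detect the substitution; this is the gap PKI exists to close.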
All of this taken together seems to have spurred on the evolution of systems deploying
encryption. Nevertheless, this activity has produced a hesitance amongst users to go in search of
information and altered the way people behave. The predicted "chilling" effect on society brought
about by these programmes now appears to be in evidence. The most recent study we could find
concludes [48]:
“The results in this case study, however, provide empirical evidence consistent with chilling effects on
the activities of Internet users due to government surveillance.”
(Penney, J. W., 2016) [48]
Some argue that the disproportionality of this needs to be confronted head on [41]:
“The threats to privacy online are increasing and with them the risks to freedom of expression.
However, there has been a growing fight back with journalists exposing surveillance programmes, civil
society challenging mass surveillance and companies that have strengthened privacy protections in
their products. Most importantly, since the Snowden revelations, hundreds of millions of individual
internet users have taken steps to protect their privacy online.”
(Emmerson, 2015) [41]
We think a more nuanced view is needed, and that in many cases arguments in absolutes are
unhelpful. Fundamentally, we view as paramount the role of the courts in balancing the objectives
of governments in performing their obligation to protect society.
The same author does comment on legitimate surveillance arising from the need to combat
crime and protect national security, although to our mind there is still insufficient discussion of
how to strike the balance, simply an assertion that the scales have tipped too far in the eyes of some [41]:
“Governments can have legitimate reasons for using communications surveillance, for example to
combat crime or protect national security. However because surveillance interferes with the rights to
privacy and freedom of expression, it must be done in accordance with strict criteria: surveillance
must be targeted, based on reasonable suspicion, undertaken in accordance with the law, necessary to
meet a legitimate aim and be conducted in a manner that is proportionate to that aim, and non-
discriminatory.”
(Emmerson, 2015) [41]
4.4 Conclusion
With an understanding of this context we should be able to identify where the need for protection
arises from, how it has changed and where the gaps lie. Popular communication mechanisms have
evolved to support end-to-end encryption [23, 24] albeit years after the academic literature tackled
this very problem [82].
We don’t seek to dwell on the point of mass surveillance; it is a practical reality and it informs the threat profile. It is something we look to our protocol, academically, to provide protection from; we make no value judgement on mass surveillance itself, a matter we feel is for wider society to consider.
We conclude by stating that end-to-end encryption provides significant protection against mass surveillance, but that in this environment industrialised MITM attacks are a real threat without robust PKI.
5 Trust Models
5.1 Incumbent Models for Trust
There are a number of existing models which aim to provide assurance of public keys. We focus on two, Certificate Authorities (CA) and the Web of Trust (WoT), examining how they work and some of their limitations.
5.1.1 Certificate Authorities
Certificate Authorities are trusted third parties that issue certificates binding identities to public keys and certifying the authenticity of that binding; they are a popular mechanism for distributing key material. CAs are arguably the most widely deployed PKI as they underpin Secure Sockets Layer
(SSL) and its successor Transport Layer Security (TLS) traffic on the World Wide Web. Google
state that 77% of all of their traffic is encrypted [55] with others predicting that 70% of all internet
traffic will be encrypted by the end of 2016 [56]. CAs as they stand are central to this encryption.
Certificate Authorities work by signing (and thereby certifying) the authenticity of an
identity and key binding. The CA signing keys themselves are distributed by an out of band
mechanism, typically coming in a software package or being pre-installed with an operating system.
This is where the trust relationship for a CA is formed.
For Certificate Authorities to function, we must trust the CA to sign certificates only where the binding between an identity and a public key is genuine. Once a certificate has been issued it can be used until it expires. In the case of X.509 certificates, they are typically issued with a lifespan of between 3 months and 2 years, after which a client with access to a broadly accurate clock should no longer trust them.
In the instance where private keys are lost or stolen, the key owner can approach the certificate authority and request that it publish a revocation, either by putting the certificate fingerprint on a Certificate Revocation List (CRL) [79] or in responses to Online Certificate Status Protocol (OCSP) requests [85]. Owing to the centralised nature of a CA we have that option, assuming we can perform (or re-perform) some kind of identity verification. We note, however, that CRLs are not the only method of certificate revocation [81].
Crucially, certificates are not simply file formats: as signed and trusted documents they provide a chain of trust to prevent MITM attacks.
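The chain-of-trust check can be illustrated with a minimal sketch. The types and the signature check below are toy stand-ins of our own devising, not a real X.509 implementation: a client accepts a certificate only if every link in the chain validates, every certificate is inside its validity window, and the chain terminates at a pre-installed root.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Cert:
    subject: str
    public_key: str      # toy stand-in for real key material
    issuer: str
    signature: str       # toy stand-in for a real cryptographic signature
    not_before: datetime
    not_after: datetime

def toy_verify(cert: Cert, issuer_key: str) -> bool:
    # Stand-in for real signature verification: here the "signature" is
    # just the issuer's key joined to the subject name.
    return cert.signature == issuer_key + "/" + cert.subject

def chain_is_trusted(chain: list, roots: dict, now: datetime) -> bool:
    """Walk the chain leaf-first: every certificate must be inside its
    validity window and signed by the next certificate's key, and the
    final certificate must match a pre-installed trusted root."""
    for cert, issuer in zip(chain, chain[1:]):
        if not (cert.not_before <= now <= cert.not_after):
            return False
        if cert.issuer != issuer.subject or not toy_verify(cert, issuer.public_key):
            return False
    root = chain[-1]
    if not (root.not_before <= now <= root.not_after):
        return False
    return roots.get(root.subject) == root.public_key
```

A client shipping a `roots` dictionary of pre-installed CA keys models the out-of-band distribution of CA signing keys described above.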
Whilst helpful for revocation, this centralised trust is perhaps the most significant issue with the CA model. If an illegitimate certificate is issued for a given identity, that identity is then open to impersonation and MITM attack. This can happen in a number of ways:
 CA fooled into issuing a certificate to an imposter
 CA deliberately issues a fraudulent certificate
 CA signing key stolen
In the normal model, in all three of these cases there is complete trust in the CA. These attacks have been detected in the wild, especially in relation to CAs that support the Web.
Arguably the most serious of these was the DigiNotar breach, detected in 2011. FOX-IT, a
security consultancy, were commissioned by the Dutch government to analyse the breach; DigiNotar
certificates were used for some Dutch government services. Their report states [57]:
“The signing of 128 rogue certificates was detected on July 19th during the daily routine security check. These certificates were revoked immediately;
 During analysis on July 20th the generation of another 129 certificates was detected. These were also revoked on July 21th;
 Various security measures on infrastructure, system monitoring and OCSP validation have been taken immediately to prevent further attacks.
 More fraudulent issued certificates were discovered during the investigation and 75 more certificates were revoked on July 27th.
 On July 29th a *.google.com certificate issued was discovered that was not revoked before. This certificate was revoked on July 29th.
 DigiNotar found evidence on July 28th that rogue certificates were verified by internet addresses originating from Iran.”
(DigiNotar Certificate Authority breach “Operation Black Tulip”, FOX-IT Report) [57]
Whilst DigiNotar is perhaps the most comprehensive and well-documented breach, Comodo, another CA, was also breached, resulting in a number of other fraudulent certificates. A number of other mistakes and issues have led to similar problems [58, 59, 60, 61].
5.1.1.2 Certificate Transparency
In response to these attacks, Google began work on Certificate Transparency [34]. Certificate
Transparency works by introducing three new concepts to allow the monitoring of the behaviour of a
CA to detect the existence (however that occurs) of fraudulent certificates.
 Certificate Logs: Certificate logs can be appended to by anyone observing a certificate and are
inserted into a Merkle Hash Tree [73] which provides an integrity guarantee to the log. Logs
are maintained by log servers, run by CAs and third parties.
 Certificate Monitors: Monitor the log servers and ensure that they contain the right
certificates. They take periodic snapshots of the logs and look for suspicious certificates,
namely fraudulent certificates, ones with unusual metadata or ones which grant too many
rights.
 Certificate Auditors: Certificate auditors perform a similar role to monitors, but check the certificates they are presented with by websites against what appears in the log server, additionally sending the certificates they are presented with to the certificate log for addition.
The expectation is that certificate auditors be built into browser clients and that both CAs and
auditors will add certificates they either observe or issue to publicly available logs.
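The integrity guarantee of a certificate log rests on the Merkle Hash Tree. The following is a simplified sketch of root computation and inclusion-proof verification for a power-of-two number of leaves (real logs use the more elaborate structure specified for Certificate Transparency in RFC 6962):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root over a power-of-two list of leaf entries
    (padding and odd sizes omitted for brevity)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root
    starting from a single leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sibling = index ^ 1          # the other child of our parent node
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

A monitor or auditor holding only the signed root can thus check any single certificate's presence using a proof of O(log n) hashes, without downloading the whole log.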
We can see how it attempts to expose the three attack instances we discussed above when applied to the Web. Where a CA is fooled into issuing a fraudulent certificate, the certificate is issued to the public log and could be detected by a monitor, in this case one acting on behalf of the genuine domain owner. Regardless of whether the certificate is used for a highly selective MITM attack or not, there is a chance of detection. Where a CA deliberately issues a fraudulent certificate, or the CA signing key is stolen (and we presume the person issuing the certificate fails to add it to the log), when the certificate is presented to the client the audit feature in the browser will detect that it isn’t present in the log server; the feature will also add the fraudulent certificate, which can then be detected by a monitor.
The concept of certificate transparency is an interesting one and, if applied for our purposes, warrants further consideration.
An issue of note is the effect of peering between multiple certificate authorities on the Web. A typical browser on a modern operating system may trust in the region of 30 different Certificate Authorities, and many more intermediate ones. This isn’t the case in a single centralised system. It makes the concept of Certificate Pinning [13], specifying that only a given CA can issue certificates for a domain, less relevant to us, as a situation like the one that led to the banning of a Chinese CA by Google [60] would not occur.
5.1.1.3 Conclusion
Certificate Authorities have been hugely successful. Only in the last five years have we seen a serious uptick in the number of attacks on the infrastructure, driven, we believe, in part by the increasing deployment of encryption by default in many products and services. Google has been central to this adoption, in no small part through a policy which ranks HTTPS websites higher in search results [62], thereby incentivising a move to encryption.
5.1.2 Web of Trust
An alternative approach to a Certificate Authority is the Web of Trust (WoT) [3].
“As time goes on, you will accumulate keys from other people that you may want to designate as
trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will
gradually accumulate and distribute with their key a collection of certifying signatures from other
people, with the expectation that anyone receiving it will trust at least one or two of the signatures.
This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.”
(Zimmermann, P. 1994) [3]
The Web of Trust was first introduced by Phil Zimmermann in 1992 to support PGP [3]. The basic
premise is that users maintain a key ring containing the public keys of others. These keys can have
varying degrees of trust. They can be trusted directly by the person who owns the keychain or they
can be signed by another person who in turn is trusted.
[Figure content: six user certificates with signing arrows. Frank signs Bill’s and Mary’s certificates; Bill signs Tim’s; Tim signs Bob’s; Bob and Mary each sign Alice’s (public key DE3453). A dashed line connects Frank to Alice, whose certificate he wishes to verify.]
Figure 1: Web of Trust with Chains of Trust
Figure 1 depicts six user certificates (represented as circles) as a directed graph, Alice, Bob, Tim,
Frank, Mary and Bill. The arrows show the relationships between those people and who has signed
certificates. Frank has signed Bill’s certificate, Bill has signed Tim’s and both Mary and Bob have
signed Alice’s.
This has required these users to make an affirmative decision to sign the certificates of others.
The PGP literature talks about the idea of “Key Signing Parties”, where people meet in person to
verify keys used to sign certificates out of band.
If we look at Figure 1, note the dashed line between Frank and Alice. If Frank wishes to send secure messages to Alice and obtains her certificate, either by asking her for it, finding it on her website or obtaining it from a key server, he must make a decision about how much he trusts it. He may opt to trust the certificate directly if he knows Alice; perhaps he can phone her and ask for the particulars of the certificate.
The WoT is designed to allow a path of trust to a certificate through peers, where no one
person holds any commanding authority in asserting the identity (or claimed identity) of the person
who issues the certificate. In our example, Frank, seeking to validate Alice’s certificate, has two paths to it. Frank trusts Bill, as he has signed Bill’s certificate. Frank in turn fetches Tim’s certificate and, if he is to trust this path, must by extension trust Bill’s trust in Tim, likewise Tim’s trust of Bob and finally Bob’s trust of Alice.
Ideally we want to find the shortest chain of trust that we can, to minimise the number of entities we need to trust implicitly. Approaches to locating short chains of trust are an active area of research [83]. We can see from this chain that we need to trust each of the intermediate signatories along the connection. The concept of a “trusted introducer” is designed to shorten these chains and improve trust by reducing the number of people that need to be trusted.
In Figure 1 Alice’s certificate is also signed by Mary. Mary’s certificate is in turn signed by Frank, so in this case there are two different routes to Alice’s certificate.
This is an important point: the decentralized approach of the WoT doesn’t exist simply to avoid the need for a centralised authority; it also increases the resiliency of the network. If it later transpires that Bill has been compromised, that chain of trust is broken unless Frank is willing or able to close the loop with either Tim or Bob; in any case he may simply be better off trusting Alice’s certificate directly after verification. With multiple paths, Frank still has a trustable path to Alice through Mary. This leads to an obvious issue: we need to trust all of the people along the path of trust.
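Locating a short chain of trust amounts to a shortest-path search over the directed graph of signatures. A minimal breadth-first sketch, using the hypothetical signature relationships of Figure 1:

```python
from collections import deque

# signer -> identities whose certificates they have signed (from Figure 1)
signs = {
    "Frank": ["Bill", "Mary"],
    "Bill": ["Tim"],
    "Tim": ["Bob"],
    "Bob": ["Alice"],
    "Mary": ["Alice"],
}

def shortest_trust_path(start, target):
    """Breadth-first search over the signature graph: the first path
    found is the shortest, i.e. the one requiring the fewest
    intermediate signatories to be trusted."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in signs.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of trust exists
```

For Frank verifying Alice this returns the two-hop path through Mary rather than the four-hop path through Bill, Tim and Bob.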
The other difficulty is key rotation. When Alice wishes to introduce a new key she must let the rest of the system know. Early versions of PGP didn’t specify expiry dates on certificates, which meant that users could continue using public key material to send messages indefinitely. This was rectified; however, the WoT still has an issue with revocation, and again the solution is not straightforward [81].
This may seem to be a desirable model but as we saw with Certificate Authorities, there are
also downsides.
5.1.2.1 Sybil Attacks
An issue with any decentralized model is the lack of a single authoritative source: an entity that can categorically say what is true and what is not. In distributed systems there are numerous protocols for enabling system components to come to a consensus about the state of a system [64], [65], the most well-known of which is probably Paxos [66]. Whilst consensus algorithms have been used to great effect to build fault-tolerant systems, agreeing on a single truth securely where active attackers are involved is hard [67]. In fault-tolerant system consensus the aim is to allow a cluster to determine which nodes have failed. The key difference is that nodes are expected to be mistaken because they are faulty, not because they are lying.
This may appear to be a separate conversation but it is important, because this lack of a central authority presents a vulnerability that affects decentralized systems: Sybil attacks [63]:
“Peer-to-peer systems often rely on redundancy to diminish their dependence on potentially hostile
peers. If distinct identities for remote entities are not established either by an explicit certification
authority (as in Farsite [3]) or by an implicit one (as in CFS [8]), these systems are susceptible to
Sybil attacks, in which a small number of entities counterfeit multiple identities so as to compromise a
disproportionate share of the system.”
(Douceur, 2002) [63]
[Figure content: two disjoint signature networks. Network A (top) contains Bill, Tim, Bob and an Alice with public key DE3453, signed by Bob. Network B (bottom) contains Peter, Harry and Charlie, all of whom sign a second Alice with public key AB1234. Frank is connected to neither network.]
Figure 2: Web of Trust with Disjoint Networks.
Figure 2 shows a number of system participants in two networks; Network B at the bottom has four
distinct identities, Peter, Alice, Charlie and Harry. Network A at the top of the diagram also has four
identities, Bill, Tim, Bob and another Alice with a distinct public key. To illustrate a Sybil attack, in
this example, Network B’s members are all controlled by the same entity and Alice is an imposter.
Network A’s four members are genuine and the real Alice is a member of that group. Now consider Frank, who is not connected to either network; he wishes to communicate with Alice. If this were an environment using a PGP key server, Frank would be able to find both keys claiming to be Alice’s (DE3453 and AB1234). The difficulty here is that the prima facie evidence suggests AB1234 is ‘more trusted’, if Frank cannot make a judgement call about Peter, Charlie and Harry in Network B.
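The danger can be made concrete with a toy trust metric. Suppose Frank naively prefers whichever candidate key has the most signatures; the Sybil network wins, since forged identities cost the attacker nothing (the keys and addresses below are the hypothetical ones from Figure 2):

```python
# Candidate keys claiming the identity alice@alice.com, each with the
# identities that have signed it.
candidates = {
    "DE3453": ["Bob@bob.com"],                      # genuine key (Network A)
    "AB1234": ["Peter@peter.com", "Harry@harry.com",
               "Charlie@charlie.com"],              # Sybil-backed key (Network B)
}

def pick_by_endorsements(candidates):
    """Naive metric: prefer the key with the most signatures. Because
    Sybil identities are free to create, this metric is trivially
    gamed by whoever controls the most fake signers."""
    return max(candidates, key=lambda key: len(candidates[key]))
```

The metric selects AB1234, the imposter's key, which is exactly the prima facie judgement Frank would be led to.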
5.1.2.2 Conclusion
If we apply the Web of Trust to a system like WhatsApp, where the identifiers are mobile phone numbers, a malicious user could claim an identity and create an entirely false user base to support it (with a billion users we take the view this is almost a certainty). This would require human users to make a determination about which key to trust.
It also means that users would need to endorse keys in some way, either being prompted to sign keys they are presented with or opting to select trusted introducers. We fear this is likely to be error prone: as we discussed in 4.1, we view poor usability as a factor leading to poor security decisions, as well as users not really understanding the risks.
The WoT as applied to a social network appears at first glance a natural fit, but the risks mean that on its own it may be less than ideal.
6 Problem Statement Validation
In 2016 we feel there is still a need for a trust model that works well for a human user and does not rely on the user being technically savvy, or on performing individual key authentication via out-of-band mechanisms. It must be consistently available yet protect its users from its operators compromising the system. It must also be compatible with a closed system, one controlled and operated by a single entity. Certificate Authorities meet these requirements well but have to be kept honest. The Web of Trust’s strengths and weaknesses stem from its decentralized nature: key revocation is difficult, and finding people who will be diligent in attesting to the authenticity of keys, which underpins the entire system, is critical. This requires users to become involved in the process of signing the keys of others. The biggest issue to our mind is the susceptibility to Sybil attacks in disjoint networks, especially for systems based around an identity the user does not own [23], [24] and where there is limited sanction for those who engage in such activity if identities are not verified in some way.
6.3 Our Goal
To define a protocol which requires no specific intervention from users to allow public key verification in support of end-to-end encrypted channels, and which is resistant to mass surveillance.
We should, however, be more concrete and specify a set of requirements against which we can evaluate potential schemes. Let us also consider primary requirements alongside additional, less critical needs.
The protocol should;
1.1 Enable an end user to verify the authenticity of another end user’s public key component
1.2 Enable an end user to detect the presence of a fraudulent public key component purporting to
have been published by themselves.
1.3 Prevent the system operator, or anyone who can compromise the Certificate Authority or
compel the Certificate Authority operator to assist them, from introducing fraudulent keys
AND using those keys to mount successful man in the middle attacks
1.4 Provide protection in a closed ecosystem, namely where all of the system components are
owned and operated by the same entity
Additionally;
2.1 Be constructed in a way that allows external monitoring of endpoint software.
6.3.1 Requirements Review
Requirements 1.1, 1.2 and 1.3 are core aims; we assert that without these we have a protocol of little use. 1.3 is also slightly subtle; we want to be able to prevent MITM attacks, not simply detect them.
Requirement 1.4 exists because, if the protocol is going to be of any real use, it has to work for existing systems. For obvious commercial reasons, incumbent platforms which reach into the billion-plus user range are closed systems [24], [25], [52].
Requirements 1.4 and 2.1 are linked; in a closed system where one entity is in control of the entire ecosystem we don’t only assume, but know, that the entity is also producing the endpoint software. If our threat actors are as capable as we assert in section 4.3, then legal compulsion may extend to the software the user runs, as we have seen [33]. This is a distinct requirement because we enter the realms of directed surveillance.
6.4 Attackers
Attackers for the purposes of the evaluation are assumed to be well-motivated and technically astute.
It is also assumed they have the power to compel anyone in the system to assist them to the best of
their ability through whatever means they have at their disposal [32, 33].
We assume the following about our attackers:
 Attackers can compromise the software applications and infrastructure running on the server
side
 Attackers can shape traffic and direct messages away from their intended targets or drop
them selectively
 Attackers can build in features that are of use to them, allowing those features to run indefinitely on the server
 Attackers cannot modify messages sent from uncompromised endpoints
 Attackers cannot (or do not want to) compromise the applications running on some or all of
the participating endpoints
We make the point about the attacker not wanting to compromise endpoints, even where practical, because of the risk of discovery while interfering with both server and client components of the system. An implementation which makes tampering detectable through clear and observable protocol behaviour would be a useful addition.
7 State of the Art of PKI protection
There has been significant work in the area of PKI protection in recent years [86]. We evaluate two differing methods, both developed after the Snowden disclosures. The first, Attack Resilient Public-Key Infrastructure, has received a reasonable amount of academic attention. The other, CONIKS, has also garnered significant interest, although as yet there is an absence of critical analysis.
7.1 Attack Resilient Public-Key Infrastructure (ARPKI)
ARPKI, Attack Resilient Public-Key Infrastructure [10], is a proposed model to introduce a provable level of transparency to a certificate authority, verified using the TAMARIN prover [11]. It aims to provide a robust approach to the problem of CA compromise.
7.1.1 Mode of Operation
ARPKI introduces a number of system actors: two Certificate Authorities (CAs) and an Integrity Log Server (ILS). The design is such that even where n-1 entities are compromised the system maintains integrity.
In ARPKI, the domain owner registers a public key with one CA (CA1) but also nominates a
second CA (CA2) and an ILS. CA1 performs the role of verifier and attests the authenticity of the
public key and issues an Attack Resilient Certificate (ARCert). CA1 also transmits a record of its
activity to ILS1. ILS1 in turn is responsible for synchronizing its view of the world with any other
ILS instances in the system and for ensuring that other ILS instances are diligent in recording the
ARCert in their records. The ILS then publishes signed copies of their integrity trees which are in
turn verified by CA2 and any other optional verifiers engaged in the system.
At the end of registration the domain is in possession of an ARCert signed by CA1, CA2 and ILS1. A user wishing to initiate a secure connection to the domain obtains the ARCert and validates the signatures against CA1, CA2 and ILS1.
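The client-side acceptance check at the end of registration can be sketched as follows. The message format and the signature check are toy stand-ins of our own, not the actual ARPKI protocol messages; the point is that a connection proceeds only if all nominated parties have signed the same binding.

```python
# Trusted verification keys for the parties nominated at registration
# (hypothetical names and key values).
trusted_keys = {"CA1": "K_ca1", "CA2": "K_ca2", "ILS1": "K_ils1"}

def toy_verify(message: str, signature: str, key: str) -> bool:
    # Stand-in for real signature verification.
    return signature == key + "|" + message

def arcert_valid(arcert: dict) -> bool:
    """Accept an ARCert only if CA1, CA2 and ILS1 have all signed the
    same identity-to-key binding, so n-1 compromised parties are not
    enough to forge an acceptable certificate."""
    binding = arcert["domain"] + ":" + arcert["public_key"]
    return all(
        party in arcert["signatures"]
        and toy_verify(binding, arcert["signatures"][party], trusted_keys[party])
        for party in ("CA1", "CA2", "ILS1")
    )
```

Because all three signatures are required, an attacker must compromise CA1, CA2 and ILS1 together to pass off a fraudulent binding, which is the n-1 resilience property described above.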
To help assess the relevance of this work in relation to our own problem statement it is worth
examining their adversary model to see if the threat environment relates to that of our own or if it is
indeed attempting to solve a different problem.
In their research, Basin et al. state their assumptions [10]:
 Attackers can control the network
 Attackers can compromise some long-term secrets
 Not all parties can be compromised
7.1.2 Evaluation
The authors’ assumptions are useful and relevant to our problem statement; however, two important considerations arise. Firstly, the nature of compromise is not discussed, which leaves an open question: by what means could actors within the system choose, or be compelled, to behave dishonestly? This is important because, if the actors operating system components can be identified, the assumption that not all parties can be compromised may not be a reasonable one; for example, where a finite number of components are operated by the same organisation, or appear in the same or cooperating jurisdictions.
Next, we consider the desired security properties:
 Connection Integrity: Clients establishing connections must be assured that they are
communicating with the legitimate owner of a domain
 Only legitimate certificates are registered
 Only legitimate updates occur to certificates
 Attacks must be publicly visible
The first property should underpin all models within a PKI, regardless of approach and method of
implementation. The second and third properties are also important as these are critical points of
attack. Illegitimate certificates are a major concern and both certificate transparency [12] and
certificate pinning [13] have been proposed as methods to mitigate that threat which we have
previously discussed.
The final property of interest is attack visibility. Basin et al. state: “If an adversary successfully
launches an attack against the infrastructure by compromising entities, the attack should become
publically visible for detection”.
(Basin et al., 2014) [10]
The criterion that the compromise must be publicly visible introduces an interesting question: which actor in the system will do the detection, how are they incentivised to monitor, and what action will they take when such detection occurs?
7.1.3 Concerns & Open Questions
The design leads to some implicit system requirements which, whilst acknowledged, receive little practical treatment. These lie chiefly at the intersection of security systems design and distributed systems engineering.
In terms of concerns, we wonder who, in the real world, runs the Certificate Authorities and, perhaps more critically as they provide the checks and balances, the ILS servers? The authors acknowledge that if CAs and ILSes were able and willing to maintain separate lists then an attack exists, but they appear to have discounted this. That is fair in terms of internal consistency, as their assumptions state that not every entity within the system is open to compromise. We note that entities are organisational and not individual. Accordingly we question whether this is realistic in the face of a nation state actor with the ability to issue legally binding requirements [32], [33]. Furthermore, the authors state that this concern is negated by virtue of the fact that it would be quickly detected, but crucially they don’t explain how.
“Given two disjoint sets of CAs, where one set is honest and the other is compromised, if a domain
successfully registered a certificate for itself using the honest CAs, we would like to guarantee that no
bogus certificate can be registered for that domain by the adversary. But, if all the ILSes are
compromised and willing to keep two separate logs, then the adversary can register an ARCert for the
domain using the disjoint set of compromised CAs and ARPKI would not prevent this attack.
However, this attack is highly likely to be detected quickly, and all the dishonest ILSes and CAs can be
held accountable.”
(Basin et al., 2014) [10]
If these lists could be maintained and delivered only to the target of an attack, then the attack becomes practical. There appears to be an inherent assumption that dishonest CAs and ILSes would continue to publish their dishonest activity to the wider world for monitoring. We believe this is significantly flawed, especially with our greater understanding of what the attacker is capable of [50].
Furthermore, there is an assumption that not all CAs and ILSes are in the same operating network; this is not the environment our system operates in.
7.1.4 Relation to Requirements
Finally, we review ARPKI against the requirements we set out in section 6.3, addressed inline:
Q: Can ARPKI enable an end user to verify the authenticity of another end user’s public key component?
A: Yes, ARPKI enables an end user to leverage an ILS, or group of ILSes, to verify the authenticity of the public key component.
Q: Does ARPKI enable the end user to detect the presence of a fraudulent public key component purporting to have been published by them?
A: Yes, a client can monitor their own public key and would know if someone was fraudulently issuing a public key; it also allows a user to determine if they have been deliberately omitted from the directory.
Q: Does ARPKI prevent an attacker who can compel the system operator to assist them from
introducing fraudulent keys AND using those keys to mount successful man in the middle attacks?
A: No, it does not, even assuming, as highlighted in 7.1.2, that a single system operator isn’t operating all components. If we take a strict interpretation of ‘system operator’, then the answer is certainly not, because an attacker can compel multiple system owners to become involved in the fraud, as per [10]:
“…if all the ILSes are compromised and willing to keep two separate logs, then the adversary can
register an ARCert for the domain using the disjoint set of compromised CAs and ARPKI would not
prevent this attack.”
(Basin et al., 2014) [10]
Q: Does ARPKI provide protection in a closed ecosystem, namely where all of the system components
are owned and operated by the same entity?
A: No, and this is perhaps the most profound observation in our evaluation. ILS servers need to exist as part of a cooperative, but separate and open, environment to be effective.
It therefore does not meet our requirements.
7.1.5 Conclusion
ARPKI has some interesting ideas and the role of an ILS as a form of certificate monitor having a
specific role for monitoring keys is an interesting one that warrants further consideration.
Applied to the problem statement of a social network, ARPKI has less applicability, especially given the core role of nominated third parties operating ILSes visible to all. In a closed network, the operator of an ILS would likely be the CA, which should not be the case in this system. For large peer-to-peer systems it is not clear who could perform this role; however, full credit must be given to the authors for some excellent work, which appears to have much better applicability to the CA problem of the World Wide Web.
We do however note that the presence of a formal verification method is a good design feature, and the ability to hold entities publicly accountable is an excellent component of the protocol, however difficult in a closed system.
7.2 CONIKS: Bringing Key Transparency to End Users
CONIKS is an end-user key verification service designed specifically for use with end-to-end encrypted
communications channels [42].
Fundamentally, CONIKS creates a novel hash-chain-based directory of key bindings. These
key bindings are snapshotted at a recurring interval; the hash chain is signed by the directory
provider and published to clients, who monitor this chain and compare their own binding to
ensure the correctness of that binding (non-equivocation). Once the signed binding is published,
it exists as proof that the directory is correct or has been compromised (or compelled to
publish a fraudulent key).
7.2.1 Mode of Operation
CONIKS creates an ecosystem with four documented actors; we highlight their assumptions below:
• Identity Providers – an Identity Provider can be thought of as a CA. CONIKS refers to
Identity Providers as being responsible for issuing name-to-key bindings within their namespace.
There is also an assumption that some other PKI manages the distribution of signing keys for
the providers themselves.
• Clients – Clients are the client software run on a user's trusted device. The authors make
the point that CONIKS does not address the problem of compromise of the client endpoints.
Clients monitor the consistency of their own bindings but nobody else's.
The next point is rather striking, however [42]:
“We also assume clients have network access which cannot be reliably blocked by
their communication provider. This is necessary for whistleblowing if a client detects
misbehaviour by an identity provider (more details in s.4.2).
CONIKS cannot ensure security if clients have no means of communication that is
not under their communication provider’s control.²”
² “Even given a communication provider who also controls all network access, it may be
possible for users to whistleblow manually by reading information from their device
using a channel such as physical mail or sneakernet, but we will not model this in
detail”
(Melara, Blankstein, Bonneau, Freedman, et al., 2015) [42]
We return to this in more detail later.
• Auditors – Auditors verify that the Identity Provider is not equivocating, that is, modifying
the directory and publishing the results, either publishing false keys or removing legitimate
ones. Auditors track the chain of signed snapshots. They also publish and gossip with other
auditors, and all clients serve as auditors for their own Identity Provider.
• Users – Users are listed separately because varying security levels are available depending
on a user's own threat posture, described by the differing local policies users may choose to
operate under.
In the centre of the system an Identity Provider manages a set of name-to-key bindings in a Merkle
Prefix Tree [73]. On a recurring basis (some system-specific time interval) the identity provider
generates a non-repudiable “snapshot” of the directory by digitally signing the root of the Merkle
prefix tree to form a Signed Tree Root (STR). Clients can then use an STR to check the consistency
of key bindings in a highly efficient manner that scales to very large directories. Each STR contains
the hash of the previous STR, which creates a linear history of the directory (CONIKS section 3
[42]).
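The linked-snapshot idea can be sketched as follows. This is a minimal illustration rather than CONIKS itself: the HMAC-based `sign` stands in for the provider's real digital signature, and the key material, genesis value and directory contents are all our own invention.

```python
import hashlib, hmac

PROVIDER_KEY = b"provider-signing-key"  # stand-in for the provider's private key
GENESIS = b"\x00" * 32                  # our own choice of genesis back-pointer

def sign(data: bytes) -> bytes:
    # Toy signature: an HMAC stands in for the provider's digital signature.
    return hmac.new(PROVIDER_KEY, data, hashlib.sha256).digest()

def make_str(tree_root: bytes, prev_hash: bytes) -> dict:
    # Each Signed Tree Root commits to the directory root AND the previous STR.
    return {"root": tree_root, "prev": prev_hash, "sig": sign(tree_root + prev_hash)}

def str_hash(s: dict) -> bytes:
    return hashlib.sha256(s["root"] + s["prev"] + s["sig"]).digest()

def verify_chain(chain: list) -> bool:
    # An auditor replays the chain: every STR must be correctly signed and must
    # point at the hash of its predecessor, yielding one linear history.
    prev = GENESIS
    for s in chain:
        if s["prev"] != prev:
            return False
        if not hmac.compare_digest(s["sig"], sign(s["root"] + s["prev"])):
            return False
        prev = str_hash(s)
    return True

chain, prev = [], GENESIS
for epoch in range(3):  # three snapshot intervals
    root = hashlib.sha256(b"directory-state-%d" % epoch).digest()
    chain.append(make_str(root, prev))
    prev = str_hash(chain[-1])

assert verify_chain(chain)
chain[1]["root"] = hashlib.sha256(b"forged").digest()  # rewrite history...
assert not verify_chain(chain)                         # ...and the chain breaks
```

The point of the back-pointer is visible in the final two lines: altering any historical snapshot invalidates every later link, so a provider cannot quietly rewrite the past.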
The directory also has the appealing property of being privacy preserving: a private index is
computed as an unpredictable function of the username and a nonce. This means the directory
doesn't leak data about names; furthermore, at the index the user's public key is not published but
rather a hash of the key and a nonce. This prevents an attacker from enumerating the directory
with a known set of public keys.
To verify a given public key, an authentication path through the tree is published as a ‘route’
to the relevant entry in the directory. This allows users to check that they have been included in the
view of the directory which was subsequently snapshotted in an STR. Users can additionally check
both for spurious keys and for whether they have been dropped from the directory, as they can
compute their own routes.
Key lookups (CONIKS s.4.1.2) are done by requesting the public key for a given user; using
the authors’ example, Bob requests Alice’s key. The identity provider returns the public key, the
authentication path and the STR. Bob is now in possession of an STR, Alice’s public key and an
authentication path within a given STR snapshot, which Bob can verify.
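The authentication-path check can be illustrated with a hand-built four-leaf tree. The domain-separation prefixes and helper names below are our own, not CONIKS' exact encoding; the idea is only that a leaf plus its sibling hashes suffice to recompute, and thereby check, the signed root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(value: bytes) -> bytes:
    return h(b"\x00" + value)         # domain-separated leaf hashing

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)  # domain-separated interior hashing

def root_from_path(leaf_value: bytes, path) -> bytes:
    """Recompute the root from a leaf and its authentication path.

    `path` lists (sibling_hash, sibling_is_left) pairs from leaf level upward;
    matching the result against the signed root proves the leaf's inclusion.
    """
    acc = leaf_hash(leaf_value)
    for sibling, sibling_is_left in path:
        acc = node_hash(sibling, acc) if sibling_is_left else node_hash(acc, sibling)
    return acc

# A four-leaf tree built by hand; we verify the path for the third leaf.
l = [leaf_hash(x) for x in (b"leaf0", b"leaf1", b"leaf2", b"leaf3")]
n01, n23 = node_hash(l[0], l[1]), node_hash(l[2], l[3])
root = node_hash(n01, n23)

path_for_leaf2 = [(l[3], False), (n01, True)]
assert root_from_path(b"leaf2", path_for_leaf2) == root   # included
assert root_from_path(b"forged", path_for_leaf2) != root  # not included
```

The path is logarithmic in the directory size, which is what makes the check scale to very large directories.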
7.2.2 Evaluation
The fact that identity providers are responsible for issuing bindings within their namespace seems
appropriate. Without some authority, users would need to make discerning judgements about where
name-to-key bindings come from; this is consistent with a CA model. Additionally, the authors
don’t deal with the compromise of the endpoint software, which again seems reasonable to us.
Ultimately, if the client endpoint is untrusted then we cannot rely on anything it purports to do
unless we are able to monitor its behaviour effectively.
We take issue with the need for clients to have network access that cannot be reliably blocked
by their communication provider. The CONIKS authors state that this is necessary for
whistleblowing. It is not the assumption itself we take issue with but the implied notion that
this activity will then reach anyone for whom it is useful. Where equivocation is now provable,
does this prevent an attack from being mounted on a given user or set of users? Whilst no
signature method can prevent the introduction of integrity violations, signatures can be used to
allow violations to be detected. We think this concept of publicly visible proof of equivocation is
useful; it does not, however, prevent a fraudulent key from being used to impersonate a user. A
MITM attack could still be mounted, except that the client may know it is possibly occurring;
there is still nothing in the message that allows the client to know it has happened.
The authors discuss the idea of a divergent STR:
“An identity provider may attempt to equivocate by presenting diverging views of the name-to-key
bindings in its namespace to different users. Because CONIKS providers issue signed, chained
“snapshots” of each version of the key directory, any equivocation to two distinct parties must be
maintained forever or else it will be detected by auditors who can then broadcast non-repudiable
cryptographic evidence, ensuring that equivocation will be detected with high probability (see
Appendix B for a detailed analysis).”
(Melara, Blankstein, Bonneau, Felten, & Freedman, 2015) [42], Section 2.2
If we revisit our attackers from s.6.4: “Attackers can build in features that are of use to them,
therefore allowing them to run indefinitely”; we assert that equivocation could be maintained forever.
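The detection logic that makes long-lived equivocation risky can be sketched as follows: any two correctly signed but differing STRs for the same epoch are, together, non-repudiable evidence. The HMAC “signature” and the epoch encoding are illustrative stand-ins for the provider's real signing scheme.

```python
import hashlib, hmac

PROVIDER_KEY = b"provider-signing-key"  # stand-in for the provider's signing key

def sign_str(epoch: int, root: bytes) -> bytes:
    return hmac.new(PROVIDER_KEY, epoch.to_bytes(8, "big") + root,
                    hashlib.sha256).digest()

def valid(epoch: int, root: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, sign_str(epoch, root))

def is_equivocation_proof(epoch, str_a, str_b) -> bool:
    # Two correctly signed but different STRs for the same epoch prove, jointly,
    # that the provider presented two views of history to different parties.
    (root_a, sig_a), (root_b, sig_b) = str_a, str_b
    return valid(epoch, root_a, sig_a) and valid(epoch, root_b, sig_b) \
        and root_a != root_b

honest = hashlib.sha256(b"view-shown-to-most-users").digest()
forged = hashlib.sha256(b"view-shown-to-alice-only").digest()
proof = ((honest, sign_str(5, honest)), (forged, sign_str(5, forged)))

assert is_equivocation_proof(5, proof[0], proof[1])
assert not is_equivocation_proof(5, proof[0], proof[0])  # same view: no proof
```

Note that this only yields evidence after the fact; as we argue above, it does not by itself stop the attack from being mounted.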
7.2.3 Concerns & Open Questions
There are a number of open questions, some of which are acknowledged as future work. One, how
the broadcast of STRs is done, is left out of scope. This strikes us as strange because a client’s
ability to inform the world of equivocation is absolutely at the heart of CONIKS’ strengths, as is
the related concept of a gossip protocol, which is also strangely unexplained.
It is also not clear to us how the correctness of messages into and out of the client can be verified.
In a divergent STR environment, the ability to provide proof of equivocation must be extracted from
the application. This is not an insurmountable problem, and it relates to gossip, but a solution is
still missing; it frustrates us that this is absent.
7.2.4 Relation to Requirements
In structuring our conclusion as to its applicability let us finally review CONIKS in the context of the
goals we laid out for ourselves in 6.3:
Q: Does CONIKS enable one end user to verify the authenticity of another end user’s public key
component?
A: Not on its own. The protocol would allow Alice to obtain a fraudulent public key for Bob, obtain
a false STR (which could act as a proof of equivocation) and verify the key. Alice could then use this
key to begin communication.
Q: Does CONIKS enable the end user to detect the presence of a fraudulent public key component
purporting to have been published by them?
A: Yes, assuming the fraudulent key is published to them as well. Where divergent STRs are possible
then we require another mechanism to detect their use.
Q: Does CONIKS prevent an attacker who can compel the system operator to assist them from
introducing fraudulent keys AND using those keys to mount successful man in the middle attacks?
A: No, it does not. The critical part is that a back-to-back man-in-the-middle attack can be mounted
if a divergent view of STRs can be maintained. We note that it is the introduction of fraudulent
keys and the ability to use them to mount attacks that we take together in our requirement.
Q: Could CONIKS provide protection in a closed ecosystem, namely where all of the system
components are owned and operated by the same entity?
A: Yes, it could, but again we require some other, otherwise unspecified, mechanism to allow
users to spread these divergent STRs.
7.2.5 Conclusion
We feel CONIKS has some genuinely interesting and novel ideas; the privacy-preserving concepts,
for instance, are, in the current climate, to be applauded. We particularly like the idea of differing
security policies.
We think the authors must tackle the practical engineering problems associated with the
deployment of such systems, going beyond the creation of cryptographic protocols to evaluate in
detail their proposed behaviour in the real world. We saw in 4.3 that adversaries are not attacking
only the cryptography but also the practical implementations of systems to achieve their goals.
The most fundamental issue is this: if the attacker can drop whistleblowing messages or
prevent them from being received, the system provides no protection. How whistleblowing messages
are received by clients receives almost no commentary; this, we feel, is a serious omission. We also
do not believe that users need to be totally isolated to prevent the transmission of whistleblowing
messages, merely selectively isolated.
7.3 Conclusion on the State of the Art
Both ARPKI and CONIKS attempt to tackle the trustworthiness of a CA. They do this by
introducing additional opportunities for monitoring the behaviour of a CA. ARPKI is a model that
may work well when different entities manage the system components; in our case, however, we
expect that not to be true. CONIKS turns users into monitors but leaves out how they will
communicate a warning when something fraudulent is detected. Both schemes, we feel, fall short of
an ideal solution and motivate the need for something new.
8 Hybrid Trust Model for Assurance of Public Keys in Social
Networks
8.1 Introduction
The fundamental premise of our model is that participants cooperate to monitor the Certificate
Authority to ensure its honesty in a system where there is a single centralised Trusted Third Party
(TTP). This enables it to be applied to user-to-user messaging systems like WhatsApp [23] and Apple
iMessage [24]. In these systems the vendor is responsible for creating the software that runs on
devices, the protocol (including end-to-end encryption) and for operating the infrastructure. For
both WhatsApp and Apple iMessage the trusted third party should perhaps be described not so
much as a third party but rather as the only party. In these cases, public key components are made
available in a queryable directory; they act as a single CA.
In our model we adopt elements of Certificate Transparency [34] and combine them with a
peer to peer, Web of Trust [3] like behaviour where we trust our friends to become monitors and
auditors accumulating keys they can share with us for verification.
In a system operated by a single entity with a large user base, a centralised CA allows the
operator to maintain control of the ecosystem. This control, we assert, is of commercial importance
when it comes to generating revenue. Based on the amounts of money involved in acquisitions in
recent years, there must be significant expectation of the value of these products [1], [2], especially
if the owners expect to sell on the asset.
This centralised source of trust within this closed ecosystem must however be kept honest for
the security of the systems’ participants.
For user to user messaging systems, operators must ensure the availability of the CA to
enable users to locate public key information. As an engineering challenge we assert that this is a
reasonable model (based on real world examples). This is on par with ensuring the availability of
other critical components like message brokers to enable the storage and forwarding of messages. The
CA can also deal with key revocation and with the transmission and availability of successive public
keys. The TTP can specify authentication requirements for proof of identity in a way that is
relevant to the user base, e.g. multi-factor authentication, email access, etc.
8.2 Assumptions
Our system allows people to cooperate to detect the publication of fraudulent public keys. To enable
this we make a number of assumptions about the environment our trust model operates within.
1. Participants can discover each other through a shared class of identifier e.g. a telephone
number, username or other value that has meaning within the network
2. Participants can communicate with each other over an electronic communications channel
3. The social networks with which people want to communicate already exist in some form, e.g.
contact lists on devices or friend lists from social networking tools like Weibo and Facebook
8.2.1 Technical Detail
For completeness we are explicit about other protective measures for safeguarding protocol messages.
Transport-level encryption should be employed between the software running on a client device, the
Certificate Authority and the Message Broker. This provides confidentiality and prevents tampering
with, and replay of, protocol messages by someone outside the infrastructure. We imagine that this
is supported by the typical mechanisms already available, including code signing and Transport
Layer Security (TLS) [74, 75]. The root CA signing key is embedded within the Endpoint
distribution to allow CA message signatures to be verified.
Messages contain directional identities to prevent reflection attacks and random challenges to prevent
replay of messages by attackers at the Message Broker.
8.2.2 Attackers
In 6.4 we discussed the presumed attackers’ capabilities based on our discussion of the recent
disclosures of nation-state capabilities in 4.3. As we discuss the protocol messages and behaviour,
we should keep in mind what we consider to be the attackers’ goals as well as their capabilities.
This way we can focus on what they are likely to do in pursuit of those goals.
We view the main attacker’s goals in relation to our protocol as being to:
1. Impersonate a user
2. Read plaintext messages sent between two users
3. By impersonation read plaintext messages sent between two users
To be explicit, by impersonation we mean that an attacker pretends to be a genuine user in their
entirety, generating and receiving messages. By reading messages we mean simply that: reading
messages passed between two users. The last point is where the attacker performs a MITM attack
by virtue of being an imposter, where relevant completing protocol runs with each party
independently.
Figure 3: Back-to-Back Man-in-the-Middle Attack [Alice (Genuine) ↔ Ian (Impersonating Bob);
Ian (Impersonating Alice) ↔ Bob (Genuine)]
Figure 3 shows Ian doing a back-to-back MITM attack by completing independent protocol runs with
Alice and Bob.
We also assume that no new cryptographic attack is available to the attacker and the
signature operations or asymmetric encryption operations are secure. We also assume that the
attacker has not stolen any private key component.
8.3 Nomenclature & Message Format
It is worth specifying a few key terms at this point:
Identity – Identifier with some intrinsic meaning within the network, e.g. name, email address or
telephone number.
Identity Key – Public key component which is used for operations to prove identity. This is the
public key which, in our model is published by the CA and can be queried by other users.
Certificate Authority – Infrastructure component responsible for publishing and responding to queries
for Identity Keys.
Endpoint – Software running on a user device, e.g. a smartphone or desktop.
Message Broker – Infrastructure responsible for routing messages between Endpoints.
Users – People who use the Endpoint.
Source – The source of an Identity Key obtained by an Endpoint. This can be Prior Knowledge, Key
Confirmation, Gossip or CA.
8.3.1 Messages
Furthermore, we describe the message format as follows, using an example:

Alice → sign(Alice, IK_Alice, Bob, IK_Bob, challenge_Alice)_IK_Alice → Bob

The preceding structure describes a message sent from Alice to Bob. The contents within sign() are
signed using Alice’s Identity Key, described as IK_Alice. All Identity Keys are described as
IK_Owner or, in the case of fraudulent keys representing a user, as IK_Fake_Owner.
Curly braces are used to denote a set {item1, item2} or {class}.
Identity is a globally visible unique identifier which is bound to the endpoint. Alice in the example is
Alice’s identity, for example a telephone number. IK_Alice is the Identity Key for Alice; IK_Bob is
the Identity Key for Bob. Note that, depending on the message direction, the perception of the
key is important to the protocol: Alice may know her own Identity Key, but she holds only a belief
about the Identity Key of Bob, obtained from some source.
challenge_Alice is a challenge issued by Alice to Bob; again, the direction of the message is
important. challenge_Alice is the original challenge returned in the opposite direction in the
following message, but signed so as to prove possession of the Identity Key.
Also note that we specify Identity and Identity Key in protocol messages; this is to simplify
explanation of the important components of each message. In reality some lightweight certificate
would be used.
8.4 Method of Operation
The first role of a new client after it joins the system is to start building its long term memory of
Identity Keys.
Important: The registration of the Identity Key’s public component is deliberately left out of scope.
The reason is that this process must be compatible with the nature of the identity, like an email
address as a wholly owned identity, or where the identity may be considered ‘on loan’, like a
telephone number, even if the ‘rental’ has been on a long-term basis (the author has, at the time of
writing, rented their mobile telephone number for over 17 years). This should not be mistaken for
triviality; the lifecycle of an identity is an important design consideration but will be system specific.
This Identity value is determined by the underlying system the protocol is applied to, but we
stipulate that identity must be consistent across all Endpoints, Message Brokers and the CA.
One of the core tenets is that participants take an active role in monitoring the keys that
directories publish for members of their social network. In its simplest form this involves querying
the CA for the Identity Keys of all of the participants a client is interested in. This is a concept
related to certificate transparency [34], which turns participants into both certificate monitors and
certificate auditors.
Understanding the perspective that others have of a User’s Identity Key underpins the system.
We note that this implicitly exposes to the CA the relationships a participant has within a
given social network. We view this as an acceptable price to pay for the following reason: in a
system where the store-and-forward facility is operated by the same entity as the CA, and
considering the attack models previously discussed, we assume that the well-positioned attacker can
view all of the message traffic flow at the message broker. It is therefore reasonable to assume that
learning these relationships would be well within their capabilities in any case, regardless of this
disclosure. We return later to the distinction between placing a watch on all known Identity Keys
versus a subset, and the resultant trade-offs.
The protocol consists of five logical phases:
1. Bootstrap
2. Confirmation
3. Mesh Confirmation
4. Ongoing Monitoring
5. Gossip protocol
To summarise at a high level, Bootstrap is where an Endpoint collates identities of interest and
queries the CA for the associated Identity Keys. Confirmation is a bi-directional challenge response
process performed between two Endpoints to confirm keys. Mesh confirmation involves an Endpoint
asking other Endpoints for their knowledge of a given Identity Key. Ongoing Monitoring is the
process of keeping an up-to-date knowledge of Identity Keys by re-querying the CA and performing
confirmation in the event of changes. The Gossip Protocol attaches requests for Identity Key data to
normal (non-key related) communication to frustrate the selective removal of those messages.
8.4.1 Phase 1: Bootstrap
Goal: The goal of the bootstrap phase is for every endpoint to build an inventory of identities for
monitoring and obtain an initial view of those identities and associated Identity Keys.
Result: Endpoint has knowledge of key data obtained from the Certificate Authority
At bootstrap, the first action is to collect identities which are relevant to the system. As we
discussed earlier, the identifier, e.g. telephone number or email address, needs to be useful to the
social network, allowing other known participants to be located.
Collection of identities may involve querying some pre-existing database such as an address
book, contact list or remotely provided friend list. The application then queries the CA for the
Identity Keys of all the identities it has collected.
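The collection-and-query flow might be sketched as follows, under some loud simplifying assumptions: the CA's directory is modelled as a dictionary, its signature as an HMAC stand-in, and the identifiers and the "unconfirmed" state label are illustrative, not part of the protocol's wire format.

```python
import hashlib, hmac
from itertools import count

CA_KEY = b"ca-signing-key"  # stand-in: root CA key embedded in the Endpoint
DIRECTORY = {               # stand-in for the CA's queryable directory
    "+441234567890": b"ik-bob",
    "carol@example.org": b"ik-carol",
}

def ca_sign(data: bytes) -> bytes:
    return hmac.new(CA_KEY, data, hashlib.sha256).digest()

query_ids = count(1)  # sequential; carried as a 16-bit value in the protocol

def bootstrap(contact_book):
    """Collate identities, query the CA, verify the reply, store the bindings."""
    query_id = next(query_ids) % (1 << 16)
    identities = sorted(contact_book)  # Message 1: query({identities}, queryId)
    # Message 2: the CA returns the bindings it knows, signed with the queryId.
    bindings = {i: DIRECTORY[i] for i in identities if i in DIRECTORY}
    payload = repr((sorted(bindings.items()), query_id)).encode()
    signature = ca_sign(payload)
    # The Endpoint checks the CA signature before committing to long-term memory.
    if not hmac.compare_digest(signature, ca_sign(payload)):
        raise ValueError("CA reply failed verification")
    return {i: {"key": k, "state": "unconfirmed", "source": "CA"}
            for i, k in bindings.items()}

memory = bootstrap(["+441234567890", "carol@example.org", "dan@example.org"])
assert memory["+441234567890"]["key"] == b"ik-bob"
assert "dan@example.org" not in memory  # no binding published for Dan
```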
Figure 4: Bootstrap of Key Monitoring [Alice: 1. Collates known identities; 2. Obtains identity
keys; 3. Queries Identity Keys from the Certificate Authority; 4. Identity Keys returned; 5. Stores
keys in long-term memory]
Figure 4 shows the bootstrap of a network participant: collating the identifiers which are relevant to
the network, then obtaining Identity Keys and storing them.
Message 1: Alice → query({identities}, queryId) → CertificateAuthority
Message 2: CertificateAuthority → sign({identity, identityKey}, queryId)_IK_CA → Alice
Message Function: Query for a set of Identity Keys from the CA
Aim: To obtain a list of identities and keys from the CA for ongoing monitoring and use
Table 1: Query Message and Reply

For an attacker to begin impersonation they would need to provide an Identity Key for which they
have the corresponding private key; otherwise they will not be able to sign messages proving
authenticity.
At the conclusion of this phase, Alice has a list of Identity Keys for the identities she has
queried from the CA, which she does not yet trust. QueryId is a 16-bit sequential number which
allows the Endpoint to correlate distinct query requests.
8.4.2 Phase 2: Confirmation Phase
Goal: To interactively communicate with a user for whom you have discovered an Identity Key.
Result: Both parties now understand each other’s perspectives of Identity Keys and from where they
were obtained.
The next phase is to perform a challenge-response protocol with all of the users who are first-degree
connections; that is, anyone whose Identity Keys are being monitored by the user because of an
existing relationship. This is to force a potential attacker into beginning their subversion of pairs of
Identity Keys from the very outset of the system. We describe this in more detail in 8.5.
Figure 5: Key Confirmation Challenge Response Protocol [Alice and Bob: 1. Key Confirmation;
2. Verify Key; 3. Confirmation Response & Challenge; 4. Challenge Response. Alice and Dave, via
the Certificate Authority: i. Key Confirmation; ii. Fetch Identity Key for Alice; iii. Identity Key
Response; iv. Verify Key; v. Confirmation Response & Challenge; vi. Challenge Response]
Figure 5 shows the Key Confirmation protocol being between Alice and Bob as well as between Alice
and Dave. Steps (1-4) show what happens when Bob already has knowledge of Alice’s Identity Key.
Steps (i-vi), show a subsequent run where Alice seeks confirmation of Dave’s Identity Key but where
Dave is not monitoring Alice’s Identity Key already.
The specific nature of the challenge-response protocol is important. It performs entity
authentication and includes information which provides defences against other attacks.
8.4.2.1 Key Confirmation Request: Outbound
Message 1: Alice → sign(Alice, IK_Alice, Bob, IK_Bob, challenge_Alice)_IK_Alice → Bob
Message Function: To provide Bob with Alice’s identity and her Identity Key; also to show Bob
what Alice believes Bob’s identity and Identity Key are; lastly, to issue a challenge in a block signed
by Alice.
Aim: To allow Bob to learn, and add to his long-term memory, Alice’s perspective of herself, and to
allow Bob to detect any discrepancy in Alice’s perspective of him.
Table 2: Key Confirmation Request Message
The purpose of this message is to provide Bob with a perspective on how his identity is viewed by
others. This helps Bob understand whether others are using incorrect keys, either maliciously
provided or perhaps out of date, leading to a key refresh. It also provides Bob with the key that
Alice is claiming is hers, which he records. Alice also issues a challenge, a 256-bit value chosen at
random, so that when the reply is returned Alice knows that it was in direct response to her request;
this avoids a replay of messages. Alice also includes Bob’s Identity Key in a known position in the
message to avoid reflection attacks. The use of the long-term memory of keys stored by participants
will become apparent when we examine the gossip protocol.
In a correctly functioning system, Bob first computes the signature for comparison from any
prior knowledge he has of Alice. In an ideal setting Alice and Bob have a strong tie in the social
sense and there is a good chance that Bob is already monitoring Alice’s keys. Having verified the
signature, Bob can now do some additional verification on the payload. First of all Bob will be able
to see that Alice’s claimed Identity Key is the same as the one Bob has retrieved from the CA. Bob
can also verify that Alice has his correct key; if she does not then when he replies to the challenge
Alice’s signature verification will fail.
After these are correctly verified Bob also checks that the keys which Alice has obtained for
Bob are also correct and can therefore assume that the directory is publishing Bob’s correct keys.
If an attacker wishes to impersonate Alice to Bob, he must drop this message because he
cannot allow Bob to see Alice’s real Identity Key, and must replace it with his own. An attacker will
not be able to do this unless he is also able to get Bob to trust a fraudulent key, enabling the
attacker to re-sign the message and continue the protocol run; otherwise, when Bob verifies the
signature of the message he will detect that it has been tampered with and fail to complete the
protocol. The attacker will also not complete the challenge response, and Alice will have a key in an
unconfirmed state; more on this in 8.4.2.4.
8.4.2.2 Key Confirmation Message: Response
Message 2 Bob sign(Bob, IK_Bob, Alice, IK_Alice, challenge
Alice
, challenge
Bob,
source)
IK_Bob
 Alice
Message function To provide Alice, Bobs perspective of himself and his perspective of Alice and
to return Alice’s challenge and issue Bobs own challenge
Aim To allow Alice to detect any discrepancy in Bob’s perspective and for Alice to
verify Bob is involved in the live protocol run
Table 3: Key Confirmation Request Message
The purpose of this message is for Bob to confirm that Alice’s perspective of Bob is correct and that
Bob has confirmed this in response to Alice’s confirmation request.
Bob replies with his own response, returns the challenge and what he believes to be Alice’s
Identity Key. He also tells Alice where he got the key from.
Alice now gets to understand what Bob sees for the same reasons discussed previously. If
both views are consistent, we have achieved our goal of assurance of purpose of the Identity Keys and
can run additional protocols to construct end-to-end encrypted channels. The keys have been
mutually authenticated and we have been able to verify that keys being published by the directory
are correct and up to date.
For the confirmation phase of Alice and Dave’s keys the protocol run is the same, except that
Dave, who was not already monitoring Alice, must first obtain Alice’s Identity Key from the CA. In
doing so Dave adds this key to his long-term memory for ongoing monitoring.
Importantly, key confirmation messages should include whether the key has been obtained
from the directory or retrieved from the client’s existing memory (this is the source attribute).
For an attacker to impersonate Bob to Alice he must prevent this message from reaching
Alice, unless he can supply a fraudulent Identity Key to Alice in place of Bob’s. If the
message reaches Alice she will see Bob’s real Identity Key and detect any fraudulent key she has
been supplied.
It is important to note here that if an attacker wishes to impersonate both parties in a back-
to-back MITM attack, both Alice and Bob need to have been lied to about both of their
keys. It is a fundamental goal of this challenge-response phase to force the attacker to begin his
subversion of both Alice and Bob from the very beginning of their communication. Where the
attacker can do this, he can complete any end-to-end encryption protocol based on these identities
and read Alice and Bob’s communication.
8.4.2.3 Key Confirmation Message: Challenge Response
Message 3 Alice sign(challenge
Bob
)
IK_Alice
 Bob
Message function To return Bob’s challenge
Aim To allow Bob to know Alice is involved in the live protocol run
Table 4: Key Confirmation Challenge Response Message
Finally, the response to Bob’s challenge is returned, signed by Alice, which Bob verifies using the
key from the initial message. With this complete we have now performed mutual authentication of
Identity Keys. This may seem a little redundant, but it is important: although Alice seemingly
initiated the request and Bob replied, Alice also volunteered information in the first message for
authentication. This requirement to respond allows Bob to know that this was a live request from
Alice and not a replay.
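The liveness property of the two challenge round-trips can be sketched in isolation from the signatures; the class and method names below are illustrative bookkeeping, not protocol messages.

```python
import os

class Party:
    """Minimal challenge bookkeeping for one confirmation run (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.outstanding = None  # the challenge we issued and await back

    def issue_challenge(self) -> bytes:
        self.outstanding = os.urandom(32)  # fresh 256-bit value per run
        return self.outstanding

    def accept_response(self, returned: bytes) -> bool:
        # A response only counts against the live, unanswered challenge.
        ok = self.outstanding is not None and returned == self.outstanding
        self.outstanding = None  # each challenge can be answered at most once
        return ok

alice, bob = Party("Alice"), Party("Bob")
# Message 1 carries challenge_Alice; Message 2 returns it and adds challenge_Bob;
# Message 3 returns challenge_Bob, completing mutual proof of liveness.
c_alice = alice.issue_challenge()
c_bob = bob.issue_challenge()
assert alice.accept_response(c_alice)  # Alice verifies Message 2
assert bob.accept_response(c_bob)      # Bob verifies Message 3
assert not bob.accept_response(c_bob)  # a replayed Message 3 is rejected
```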
An optional extension would be for the party performing key confirmation to state for how long
they have been observing the key (in the case of Dave, a newly obtained key could be represented
by the value 0), or potentially even to reply with a chain of previous keys.
We leave this out of scope for the moment because of the non-trivial complexity it introduces:
understanding system time and resolving clock timing in distributed systems is a research area in
and of itself, as noted in [40]:
“In a distributed system, it is sometimes impossible to say that one of two events occurred first. The
relation "happened before" is therefore only a partial ordering of the events in the system. We have
found that problems often arise because people are not fully aware of this fact and its implications.”
(Lamport, 1978) [40]
8.4.2.4 Key Confirmation & Key State
At the end of the confirmation phase, Alice can have keys in one of three states:
 Confirmed
 At Risk
 Untrusted
Confirmed keys are those for which both parties have been able to perform mutual key confirmation.
At-risk keys are keys which have not been confirmed. Untrusted keys are those
where some element of the protocol has highlighted a discrepancy; they therefore should not be
used.
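These three states can be captured as a small state machine. The transition rules below follow the description above; the function and state names are our illustrative choices, not part of a reference implementation.

```python
from enum import Enum

class KeyState(Enum):
    CONFIRMED = "confirmed"   # mutual key confirmation completed
    AT_RISK = "at_risk"       # confirmation not (yet) completed
    UNTRUSTED = "untrusted"   # the protocol highlighted a discrepancy

def next_state(current: KeyState, *, confirmed: bool, discrepancy: bool) -> KeyState:
    """Apply one confirmation-phase outcome to a key's state."""
    if discrepancy:
        return KeyState.UNTRUSTED      # a disputed key must not be used
    if current is KeyState.UNTRUSTED:
        return KeyState.UNTRUSTED      # untrusted is terminal in this sketch
    return KeyState.CONFIRMED if confirmed else KeyState.AT_RISK

# A key starts at risk, is mutually confirmed, then a later discrepancy
# demotes it to untrusted.
state = KeyState.AT_RISK
state = next_state(state, confirmed=True, discrepancy=False)   # CONFIRMED
state = next_state(state, confirmed=True, discrepancy=True)    # UNTRUSTED
```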
We should briefly discuss the protocol's approach to ensuring the greatest security, which may
be at odds with existing systems. In a store-and-forward environment, participants may not be online
to complete confirmation. In this case those Identity Keys could be considered at risk: it is
quite plausible that the messages were dropped by the attacker.
In the case of WhatsApp [23], the protocol has been designed to transmit everything in
message headers, which allows the recipient to construct an encrypted session and decrypt the
transmitted message.
“After building a long-running encryption session, the initiator can immediately start sending
messages to the recipient, even if the recipient is offline. Until the recipient responds, the initiator
includes the information (in the header of all messages sent) that the recipient requires to build a
corresponding session.”
(WhatsApp Inc. 2016 WhatsApp Encryption Overview Technical white paper) [23]
In our case, we would want to prevent this from happening until bidirectional key confirmation has
been completed. If this is not done, unilateral impersonation is possible and an attacker can read
messages destined for the original recipient.
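The gating described above can be sketched as follows. This is a minimal model under our own assumptions: a per-peer outbox that holds plaintext until bidirectional confirmation completes (a real implementation would encrypt and transmit rather than append to a list).

```python
from collections import deque

class Session:
    """Holds outbound messages until the peer's Identity Key is confirmed."""

    def __init__(self) -> None:
        self.peer_confirmed = False
        self.outbox = deque()   # deferred messages, oldest first
        self.sent = []          # stand-in for "encrypted and transmitted"

    def send(self, plaintext: str) -> None:
        if self.peer_confirmed:
            self.sent.append(plaintext)
        else:
            # Unlike a header-based immediate send, we defer until
            # bidirectional key confirmation has completed.
            self.outbox.append(plaintext)

    def on_key_confirmed(self) -> None:
        self.peer_confirmed = True
        while self.outbox:                  # flush deferred messages in order
            self.sent.append(self.outbox.popleft())

session = Session()
session.send("hello")          # deferred: peer not yet confirmed
session.on_key_confirmed()     # mutual confirmation completes; flush
session.send("world")          # now sent immediately
```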
8.4.3 Phase 3: Mesh Confirmation
Goal: To obtain the perspectives of peers within a given network on the Identity Keys of other users.
Result: Endpoints are able to obtain other perspectives on their own Identity Keys and those of others
and detect discrepancies
Once Key Confirmation has been completed with the intended recipient, Mesh
Confirmation begins. Mesh Confirmation is where Alice calls upon other members of the system to help with
Key Confirmation. This is an important step which forces an attacker to take ever greater risks to
subvert the system.
During Mesh Confirmation, Alice will call upon other system actors, Fred, Hillary and James,
to confirm keys. The Mesh Confirmation message will contain the identity of a user, the Identity Key
as observed by the requestor, and a response.
The other important point of note is that when an Endpoint is called upon to perform Key
Confirmation, if it was not already monitoring that key binding it will now begin to do so.
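A mesh confirmation response can be sketched as below. The message fields mirror the description above (user identity, the Identity Key as observed by the requestor, and a response); the class and reply values are our illustrative naming, not a wire format from the protocol.

```python
from dataclasses import dataclass, field

@dataclass
class MeshPeer:
    """A peer (e.g. Fred, Hillary or James) asked to confirm a key binding."""
    name: str
    observed: dict = field(default_factory=dict)   # identity -> Identity Key seen
    monitoring: set = field(default_factory=set)   # bindings now being watched

    def mesh_confirm(self, identity: str, key_as_seen_by_requestor: bytes) -> str:
        # A peer called upon to confirm begins monitoring the binding
        # even if it had not been doing so before.
        self.monitoring.add(identity)
        mine = self.observed.get(identity)
        if mine is None:
            return "unknown"     # no prior observation to compare against
        return "match" if mine == key_as_seen_by_requestor else "mismatch"

fred = MeshPeer("Fred", observed={"bob": b"IK-bob-1"})
assert fred.mesh_confirm("bob", b"IK-bob-1") == "match"       # agrees
assert fred.mesh_confirm("bob", b"IK-bob-2") == "mismatch"    # discrepancy
assert fred.mesh_confirm("carol", b"IK-carol") == "unknown"   # never seen
assert "carol" in fred.monitoring   # Fred now watches Carol's binding too
```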
thesis

  • 1. Hybrid Trust Model for Assurance of Public Keys in Social Networks 1 Student Number: 100807646 Author: Max Kington Hybrid Trust Model for Assurance of Public Keys in Social Networks Max Kington Supervisor: Dr Allan Tomlinson Submitted as part of the requirements for the award of the MSc in Information Security at Royal Holloway, University of London I declare that this assignment is all my own work and that I have acknowledged all quotations from published or unpublished work of other people. I also declare that I have read the statements on plagiarism in Section 1 of the Regulations Governing Examination and Assessment Offences, and in accordance with these regulations I submit this project report as my own work. Signature: Date:
  • 2. Hybrid Trust Model for Assurance of Public Keys in Social Networks 2 Contents 1 Executive Summary ........................................................................................................................... 6 2 Introduction ....................................................................................................................................... 8 3 Problem Statement ............................................................................................................................ 9 4 Context .............................................................................................................................................10 4.1 History of Interpersonal Communication and Security Stance ...................................................10 4.2 Evolution....................................................................................................................................12 4.3 Mass Surveillance .......................................................................................................................13 4.4 Conclusion ..................................................................................................................................14 5 Trust Models.....................................................................................................................................15 5.1 Incumbent Models for Trust.......................................................................................................15 5.1.1 Certificate Authorities .........................................................................................................15 5.1.2 Web of Trust .......................................................................................................................17 6 Problem Statement Validation..........................................................................................................22 6.3 Our Goal 
....................................................................................................................................22 6.3.1 Requirements Review...........................................................................................................22 6.4 Attackers....................................................................................................................................23 7 State of the Art of PKI protection....................................................................................................24 7.1 Attack Resilient Public-Key Infrastructure (ARPKI) ................................................................24 7.1.1 Mode of Operation...............................................................................................................24 7.1.2 Evaluation ...........................................................................................................................24 7.1.3 Concerns & Open Questions ................................................................................................25 7.1.4 Relation to Requirements ....................................................................................................26 7.1.4 Conclusion ...........................................................................................................................27 7.2 CONIKS: Bringing Key Transparency to End Users..................................................................28 7.2.1 Mode of Operation...............................................................................................................28 7.2.2 Evaluation ...........................................................................................................................29 7.2.3 Concerns & Open Questions ................................................................................................30 7.2.4 Relation to Requirements 
....................................................................................................30 7.2.5 Conclusion ...........................................................................................................................31 7.3 Conclusion on the State of the Art.............................................................................................31 8 Hybrid Trust Model for Assurance of Public Keys in Social Networks .............................................32 8.1 Introduction................................................................................................................................32
  • 3. Hybrid Trust Model for Assurance of Public Keys in Social Networks 3 8.2 Assumptions ...............................................................................................................................32 8.2.1 Technical Detail...................................................................................................................33 8.2.1 Attackers .............................................................................................................................33 8.3 Nomenclature & Message Format...............................................................................................34 8.3.1 Messages ..............................................................................................................................34 8.4 Method of Operation ..................................................................................................................35 8.4.1 Phase 1: Bootstrap...............................................................................................................36 8.4.2 Phase 2: Confirmation Phase...............................................................................................37 8.4.3 Phase 3: Mesh Confirmation................................................................................................41 8.4.4 Phase 4: Ongoing Monitoring ..............................................................................................44 8.4.5 Phase 5: Gossip....................................................................................................................45 8.5 Security Analysis........................................................................................................................47 8.5.1 Limitations ..........................................................................................................................47 8.5.2 Considerations 
.....................................................................................................................47 8.5.3 First Tier Attacks................................................................................................................48 8.5.4 Second Tier Attacks ............................................................................................................49 8.5.5 Third Tier Attacks ..............................................................................................................53 9 Social Networks.................................................................................................................................59 9.1 Networks with Weak Ties ..........................................................................................................60 9.2 Network Behaviour Applied to the Protocol ..............................................................................61 10 Conclusion.......................................................................................................................................63 11 Implementation Detail.....................................................................................................................65 12 Future Work ...................................................................................................................................66 13 References........................................................................................................................................67
  • 4. Hybrid Trust Model for Assurance of Public Keys in Social Networks 4 List of Figures and Tables Figure 1: Web of Trust with Chains of Trust ......................................................................................18 Figure 2: Web of Trust with Disjoint Networks...................................................................................20 Figure 3: Back to Back Man-in-the-Middle Attack..............................................................................33 Figure 4: Bootstrap of Key Monitoring................................................................................................36 Figure 5: Key Confirmation Challenge Response Protocol...................................................................37 Figure 6: Mesh Confirmation Request and Response ...........................................................................42 Figure 7: Ongoing Monitoring Protocol................................................................................................45 Figure 8: Bootstrap and Key Confirmation to Enable Impersonation of Two Parties..........................49 Figure 9: Identity Key Confirmation and Mesh Confirmation .............................................................50 Figure 10: George Performing Selective Deception of Endpoints .........................................................51 Figure 11: Gossip Protocol Interaction between Multiple Endpoints ...................................................53 Figure 12: Network of Individuals and Triadic Closure........................................................................59 Table 1: Query Message and Reply......................................................................................................37 Table 2: Key Confirmation Request Message.......................................................................................38 Table 3: Key Confirmation Request 
Message.......................................................................................39 Table 4: Key Confirmation Challenge Response Message ....................................................................39 Table 5: Mesh Confirmation Request Message.....................................................................................42 Table 6: Mesh Confirmation Response Message...................................................................................43 Table 7: Gossip Request Message.........................................................................................................46 Table 8: Gossip Reply Message ............................................................................................................46 Table 9: Gossip Correlation Message ...................................................................................................46
  • 5. Hybrid Trust Model for Assurance of Public Keys in Social Networks 5 Acknowledgements I would first like to thank my project supervisor, Dr Allan Tomlinson, Senior Lecturer at the Information Security Group at Royal Holloway, University of London who listened diligently as I scribbled on a whiteboard in 2014 in his office and provided such detailed commentary on what must have seemed like an unsorted array of drafts. I would also like to express my thanks to Dr Paul Gill, Senior Lecturer at the Department of Security and Crime Science at University College London for suggested reading on crime related social networks and behaviours. Very much not a natural area of research for a technologist such as myself. Additional thanks to Dr Chris Mackmurdo formerly Counter Terrorism Specialist for the Diplomatic Service at the Foreign and Commonwealth Office who was incredibly responsive and provided me pointed advice on reading material on cellular social structures. Finally the support of Joanne who has given me the time and space to complete this work. For that I am deeply grateful.
1 Executive Summary

Our work has resulted in the design of a novel approach to public key assurance, providing a new approach to PKI. This critical function underpins the ability to create end-to-end encrypted communications channels; our interest lies specifically in their use within social networks.

To place our work in context, we describe the background of interpersonal computer-based communication, examining email and then chat and instant messaging, and their movement from Bulletin Board Systems onto the early Internet. We follow this with observations about the cryptosystems which were developed afterwards to deliver confidentiality and integrity guarantees. We demonstrate why systems evolved the way they did by looking at the history, and why there was a lack of development of the security aspects of popular communications systems from the outset.

We touch briefly on the growth of instant messaging platforms generally and in some specific arenas [76, 80]. This explosion of growth explains their importance to users. For threat actors these platforms are similarly important targets for exploitation; we assert these systems should provide effective security against mass surveillance, an activity we do not limit to nation states but extend to include criminal groups and vendors. We also examine the usability challenges and the inability of users to make meaningful risk decisions.

We then review two differing public key trust models, Certificate Authorities and the Web of Trust. Both share an allied goal: to enable the proper authentication of public key components when employed in conjunction with technologies like PGP, S/MIME and SSL [3, 5, 77], ensuring the authenticity of the endpoint our communications are destined for.
We go on to explore these models and their practical challenges: the Web of Trust in terms of its usability issues and lack of popularity when compared with certificate authorities; additionally, the certificate authority compromises of recent times, which have led to a number of novel approaches to the web PKI problem [12, 13], and what we can learn from them.

We later turn to recent revelations on mass surveillance to put the threat, and consequently the risk, into perspective. In broad terms we found that whilst this activity doesn't change the threats as they have been reasoned about for decades, it has fundamentally altered the way we quantify the risk. Capable nation state threat actors were not assumed to be practically engaged in this work in the way they have turned out to be, and that has had a profound effect on the way they are perceived by both the technical community and wider society [28].

We go on to set out our goals and define a series of requirements against which candidate systems can be evaluated, validating those requirements against our previous research. We then look to the current state of the art and the novel approaches to authenticating public keys in PKI, ARPKI [10] and CONIKS [42], which we evaluate in detail, both individually and with respect to our problem statement.

We design a protocol combining concepts from the Certificate Authority model and the Web of Trust to propose our own hybrid trust protocol. This is discussed in depth, looking at the assumptions we can make about the operating environment and the problem it proposes to solve, before we perform a detailed security analysis of the protocol.
We then validate that the behaviour and nature of social networks support the operation of the protocol. Finally we draw conclusions from the analysis, and discuss implementation details and future work.
2 Introduction

Our work focuses on the security of person-to-person communication and, more recently, instant messaging within social networks. Our interest in the security aspect stems from a number of factors that have come together in recent years to create broad popular interest and need. Firstly, the popularity of messaging platforms, which have experienced an explosion of growth because of social networking combined with the ubiquity of Internet-connected devices [24, 25]. Secondly, the evolution of the threat profile due to the growth of criminal enterprise and government activity, as detailed in the mass surveillance disclosures of the last three years. We explore the incumbent models and the state of the art in this context and find them all limited in some way or other. This has led to our development and evaluation of a new hybrid trust model and associated protocol, which aims to prevent well-placed attackers from performing large-scale subversion of an end-to-end encrypted instant messaging platform used within a social network.
3 Problem Statement

Since the 1970s, the creation of public key cryptography has revolutionised our ability to construct end-to-end encrypted channels without the need to first exchange secrets. The very nature of public key cryptography leads to a new problem, the authentication of public keys, and to the development of a variety of innovative and now widely deployed solutions to it. Fast forward to 2016: over a billion users now use interpersonal messaging protected by end-to-end encryption [52, 23]. Still, modern protocols rely on providing users with a way to perform out-of-band key verification [23], or none at all [53]. We believe there must be a mechanism, applied specifically to social network messaging, that provides key verification for a large user base with a good level of security, without needing to resort to out-of-band verification or to value judgements about individual trust.
4 Context

4.1 History of Interpersonal Communication and Security Stance

It is worth considering a recent history of computer-based interpersonal communication, such as email and chat; the drivers, or lack thereof, behind their security mechanisms; and how and why approaches to security have evolved.

Starting with email, we begin the timeline in 1973 with RFC 561 [16]. Its main relevance to the discussion is to highlight that there are no confidentiality or integrity guarantees. The Mail Protocol described in RFC 524 [15] says of identity only that:

“The identity of the author of a piece of mail can be verified, avoiding forgery and misrepresentation.” (White, 1973) [15]

How this is done is not discussed. Confidentiality and integrity were not primary concerns until nearly 20 years later. In 1991, Phil Zimmermann introduced Pretty Good Privacy (PGP) [3], a hybrid cryptosystem using public keys to transmit ephemeral symmetric keys which protect an underlying payload, produced on a file-by-file or message-by-message basis. It is helpful to note that the Web of Trust wasn't introduced until 1992 (and initially wasn't even referred to as the Web of Trust) [3], [4]. PGP doesn't mandate that it is used for email, and PGP specifically for use with email doesn't get a standardization track until 1996 [6].

A differing but related concept is Secure/Multipurpose Internet Mail Extensions (S/MIME). Introduced in 1995, it also isn't standardised until RFC 1847 [5]. S/MIME relies on public keys delivered via X.509 certificates to bind identities together, which are then used for signing and encryption; MIME is used to encode and encapsulate the message for transmission.

In parallel to email as store-and-forward communication, a more dynamic messaging ecosystem was evolving almost entirely separately.
Internet Relay Chat (IRC) [7], although standardized in 1993 (it was based on BITNET Relay, created in 1985 [8]), has no strong confidentiality or integrity guarantees. Does, therefore, a lack of standards allow us to infer anything about a lack of security in systems? It's worth considering that in the nascent days of the internet the perceived need for standards was much lower than it is now, based on our own knowledge of how technologies evolved. Is this supportable, however? There may be a number of reasons for it, not least because the internet was not the driver of global commerce that it is today. Cristiano Antonelli describes it as follows, referencing Farrell and Saloner 1987 [17]:

“The demand for standardized products may be higher because of relevant network externalities. Demand may be more elastic because of lower inertia determined by switching costs for consumers and users of previous units of durable products. The demand for standardized products may also be higher because of the important revenue effects generated by lower transaction costs for acquiring information on the characteristics of the products and their performances (Farrell and Saloner, 1987; Saloner, 1990).” (Antonelli, 1994) [17]
Even after a broader discussion on conferencing in RFC 1324, security is still left as an exercise for the reader:

“It might seem that encrypting the message before transmission to other servers in some way would solve this, but this is better left as an option which is implemented in clients and thus leaves it to the users to decide how secure they want their conference to be.” (Reed, 1992) [9]

We therefore reasonably conclude that security in person-to-person communication was an afterthought, and that a number of hybrid cryptosystems evolved to introduce confidentiality and integrity. Uptake of tools which need explicit invocation has been limited, in part because of the challenge of changing incumbent systems once they are deployed, as was the case with technologies like email and IRC, and in part because of the lack of a perceived need to protect communications in this way. This may have been compounded by a lag in the standards processes, which did not have the same emphasis that they do today. All of this highlights that security mechanisms were not deemed a real issue on standards tracks or for implementers until just before the turn of the century.

What then of the world from 1995 onwards? If the answer was standards, there are now standards [5, 6, 14]; perhaps there are too many. There are more elements involved, however: for security to be employed routinely, we assert that it has to be usable. Usability, in software generally and in security specifically, is an expansive topic.
In an evaluation of PGP 5 (helpfully, an evaluation of exactly the kind of interpersonal communications tool we are concerned with), Whitten and Tygar describe it as follows [37]:

“Strong cryptography, provably correct protocols, and bug-free code will not provide security if the people who use the software forget to click on the encrypt button when they need privacy, give up on a communication protocol because they are too confused about which cryptographic keys they need to use, or accidentally configure their access control mechanisms to make their private data world-readable.” (Whitten & Tygar, 1999) [37]

Sheng et al. [38] assert that this had not improved seven years later:

“We found that key verification and signing is still severely lacking, such that no user was able to successfully verify their keys. Similar to PGP 5, users had difficulty with signing keys. Three of our users were not able to verify the validity of the key successfully and did not understand the reasoning to do so.” (Sheng, Broderick, Hyland, & Koranda, 2006) [38]

This supports our view that usability is a key thread; however, it is still not the whole picture. If security measures need to be employed deliberately then there must be some impetus on the user to engage those mechanisms, usable or otherwise. It is simply not enough to have usable security mechanisms if the user does not care to turn them on. The psychology of what makes people safer is again an extensive topic, for which we lean on Howe et al. [39]. In their paper, The Psychology of Security for the Home Computer User, they perform a meta-study of a wide range of analysis dealing
with how people understand threats from viruses to hackers to criminal actors; how users view themselves more positively than their peers; what brings threats to mind; and much more. It is in fact hard to do their paper justice; however, they conclude:

“Generally, home computer users view others as being more at risk. When they are aware of the threats, home computer users do care about security and view it as their responsibility. However, many studies suggest that users often do not understand the threats and sometimes are not willing or able to incur the costs to defend against them.” (Howe, Ray, Roberts, Urbanska, & Byrne, 2012) [39]

This suggests that, as system designers, we must accept that users will understandably struggle to quantify risks: even if we produce standards allowing software to interoperate, and make it usable for end users, they may simply not enable it. Simply put, protection mechanisms must be built in and always enabled.

4.2 Evolution

The landscape has changed dramatically in the intervening 45 years since the first email RFC. Global commerce has been revolutionised by the availability and adoption of the Internet. Indeed, in many countries internet access is considered a public utility [20], [21]:

“In Chapter 1 we outlined the importance of the internet to everyone’s lives—at work and at home. Later in this Chapter we show the personal and economic benefit of online skills; which will only be secured with universal access to the internet. As Lucy Hastings from Age UK said: “… access to the internet should be treated as a utility service”. We agree. The Government should make it its ambition to ensure universal access for the entire population. If this could be achieved, the UK would be well-placed to achieve significant growth.” (The Select Committee on Digital Skills, 2015) [21]

The Internet is now a significant marketplace.
In 2013, as estimated by the Office for National Statistics, e-commerce on the Internet accounted for £557 billion of revenue [45]. E-commerce in the US, according to the US Census Bureau, accounted for an estimated $92.8 billion in trade in the first quarter of 2016 alone [46]. With this growth in legitimate business, however, criminal activity is (and likely always has been) also present. This seems intuitive and is supported by the outward actions of government: cyber crime has been a Tier 1 priority of the UK Government since the Strategic Defence and Security Review published in 2015, which laid out a commitment to invest £1.9 billion over the following five years for government to tackle this type of crime [47]. As interesting as this is, we must bring it back to our work: we highlight it to show that the underlying use of the Internet has changed, we hope for the better, but that this change alters the threat profile. As we discussed in 4.1, users already struggle to quantify the threat meaningfully themselves. This is supported by the fact that popular messaging protocols have recently introduced end-to-end encryption [23, 24], albeit introducing other risks [84].
4.3 Mass Surveillance

At the start of this work we highlighted that the last three years have seen a significant increase in the level of interest in encryption, and specifically end-to-end encryption, from the public, the media, business and academia [10, 13, 34, 51]. Since the Edward Snowden disclosures in 2013, a significant amount of commentary has been generated in the public domain around the existence and operation of mass surveillance programmes like PRISM [69, 70]. This adds to the context in which our work finds itself: when we seek to define the capabilities of our attackers, we can now look to detail which was previously secret. This has also had an effect on the state of the art, which we should review:

“As a result of the Snowden revelations, the onset of commercial encryption has accelerated by seven years.” (James Clapper, Director of National Intelligence, 2016) [87]

In the case of end-to-end encryption, man-in-the-middle (MITM) attacks don't cease to exist, but they do move. If, as we believe, MITM attacks [78] should be given greater credibility in the light of these disclosures, then we should be able to find some evidence of this within the literature. It doesn't take long to discover a programme codenamed FLYING PIG [49], which seeks to gather data on proposed targets for SSL MITM attack. Another programme, codenamed QUANTUM [50], combines a range of man-in-the-middle and man-on-the-side techniques to perform highly sophisticated watering hole attacks. All of this taken together seems to have spurred on the evolution of systems deploying encryption. Nevertheless, this activity has produced a hesitance amongst users to go in search of information and has altered the way people behave. The predicted “chilling” effect on society brought about by these programmes now appears to be in evidence.
The most recent study we could find concludes [48]:

“The results in this case study, however, provide empirical evidence consistent with chilling effects on the activities of Internet users due to government surveillance.” (Penney, 2016) [48]

Some argue that the disproportionality of this needs to be confronted head on [41]:

“The threats to privacy online are increasing and with them the risks to freedom of expression. However, there has been a growing fight back with journalists exposing surveillance programmes, civil society challenging mass surveillance and companies that have strengthened privacy protections in their products. Most importantly, since the Snowden revelations, hundreds of millions of individual internet users have taken steps to protect their privacy online.” (Emmerson, 2015) [41]
We think that a more nuanced view is needed and that in many cases arguments in absolutes are unhelpful. Fundamentally, we view as paramount the role of the courts in balancing the objectives of governments in performing their obligation to protect society. The same author does comment on legitimate surveillance arising from the need to combat crime and protect national security, although to our mind there is still insufficient discussion of how to strike the balance, simply that the scales have tipped too far in the eyes of some [41]:

“Governments can have legitimate reasons for using communications surveillance, for example to combat crime or protect national security. However because surveillance interferes with the rights to privacy and freedom of expression, it must be done in accordance with strict criteria: surveillance must be targeted, based on reasonable suspicion, undertaken in accordance with the law, necessary to meet a legitimate aim and be conducted in a manner that is proportionate to that aim, and non-discriminatory.” (Emmerson, 2015) [41]

4.4 Conclusion

With an understanding of this context we should be able to identify where the need for protection arises, how it has changed and where the gaps lie. Popular communication mechanisms have evolved to support end-to-end encryption [23, 24], albeit years after the academic literature tackled this very problem [82]. We don't seek to dwell on the point of mass surveillance; it is a practical reality and it informs the threat profile. It is something we academically look to our protocol to provide protection from; we make no value judgement on mass surveillance itself, a matter we feel is for wider society to consider.
We conclude by stating that end-to-end encryption provides significant protection against mass surveillance, but that in this environment industrialised MITM attacks are a real threat without robust PKI.
5 Trust Models

5.1 Incumbent Models for Trust

There are a number of existing models which aim to provide assurance of public keys. We focus on two, Certificate Authorities (CAs) and the Web of Trust (WoT), and examine how they work and some of their limitations.

5.1.1 Certificate Authorities

Certificate Authorities are trusted third parties that issue certificates which bind identities to public keys and certify the authenticity of that binding; they are a popular mechanism for distributing key material. CAs are arguably the most widely deployed PKI, as they underpin Secure Sockets Layer (SSL) and its successor Transport Layer Security (TLS) traffic on the World Wide Web. Google state that 77% of all of their traffic is encrypted [55], with others predicting that 70% of all internet traffic will be encrypted by the end of 2016 [56]. CAs as they stand are central to this encryption.

Certificate Authorities work by signing (and thereby certifying) the authenticity of an identity-to-key binding. The CA signing keys themselves are distributed by an out-of-band mechanism, typically coming in a software package or being pre-installed with an operating system. This is where the trust relationship for a CA is formed: for Certificate Authorities to function, we must trust the CA to only sign certificates where the binding between an identity and a public key is genuine. Once issued, a certificate can be used until it expires. In the case of X.509 certificates, they are typically issued with a lifespan of between 3 months and 2 years, after which a client with access to a broadly accurate clock should no longer trust them.
In the instance where private keys are lost or stolen, the owner can approach the certificate authority and request that they publish a revocation, either by putting the certificate fingerprint on a Certificate Revocation List (CRL) [79] or in responses to Online Certificate Status Protocol (OCSP) requests [85]. Owing to the centralised nature of a CA we have that option, assuming we can perform (or re-perform) some kind of identity verification. We note that CRLs are not the only method of certificate revocation [81].

Crucially, certificates are not simply file formats; as signed and trusted documents they provide a chain of trust to prevent MITM attacks. Whilst helpful for revocation, this centralised trust is perhaps the most significant issue with the CA model. If an illegitimate certificate is issued for a given identity, that identity is then open to impersonation and MITM attack. This can happen in a number of ways:

• The CA is fooled into issuing a certificate to an imposter
• The CA deliberately issues a fraudulent certificate
• The CA signing key is stolen

In the normal model, in all three of these cases there is complete trust in the CA. These attacks have been detected in the wild, especially in relation to CAs that support the Web.
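The CA issuance and verification steps described above can be illustrated with a minimal sketch. This is a toy model, not an implementation of X.509: the identity names are hypothetical, and an HMAC with a shared verification key stands in for the CA's asymmetric signature purely to keep the example self-contained. It shows the two checks a relying client performs: that the CA's signature over the identity-to-key binding verifies, and that the certificate is inside its validity window.

```python
# Toy model of the CA trust relationship: the CA signs an identity/key
# binding; a client that trusts the CA's key later verifies that binding
# and the validity window before accepting the certificate.
import hashlib
import hmac
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Certificate:
    identity: str        # e.g. "alice@alice.com" (hypothetical)
    public_key: str      # the key being certified
    not_before: datetime
    not_after: datetime
    signature: bytes     # CA's signature over the fields above

def _tbs(identity: str, public_key: str, nb: datetime, na: datetime) -> bytes:
    # "To be signed" bytes: a canonical encoding of the certified fields.
    return f"{identity}|{public_key}|{nb.isoformat()}|{na.isoformat()}".encode()

def ca_issue(ca_key: bytes, identity: str, public_key: str,
             lifetime_days: int = 90) -> Certificate:
    # A fixed issue time keeps the sketch deterministic.
    nb = datetime(2016, 1, 1)
    na = nb + timedelta(days=lifetime_days)
    sig = hmac.new(ca_key, _tbs(identity, public_key, nb, na), hashlib.sha256).digest()
    return Certificate(identity, public_key, nb, na, sig)

def client_verify(ca_key: bytes, cert: Certificate, now: datetime) -> bool:
    # Accept only if the CA's signature checks out AND the certificate
    # is within its validity window.
    expected = hmac.new(ca_key, _tbs(cert.identity, cert.public_key,
                                     cert.not_before, cert.not_after),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, cert.signature):
        return False
    return cert.not_before <= now <= cert.not_after
```

A tampered binding (changing the identity after issuance) or an expired certificate fails `client_verify`, which is precisely why the three attack cases above all target the CA itself: an attacker who can obtain a genuine CA signature defeats both checks.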
Arguably the most serious of these was the DigiNotar breach, detected in 2011. FOX-IT, a security consultancy, were commissioned by the Dutch government to analyse the breach; DigiNotar certificates were used for some Dutch government services. Their report states [57]:

“The signing of 128 rogue certificates was detected on July 19th during the daily routine security check. These certificates were revoked immediately;
• During analysis on July 20th the generation of another 129 certificates was detected. These were also revoked on July 21th;
• Various security measures on infrastructure, system monitoring and OCSP validation have been taken immediately to prevent further attacks.
• More fraudulent issued certificates were discovered during the investigation and 75 more certificates were revoked on July 27th
• On July 29th a *.google.com certificate issued was discovered that was not revoked before. This certificate was revoked on July 29th.
• DigiNotar found evidence on July 28th that rogue certificates were verified by internet addresses originating from Iran.” (DigiNotar Certificate Authority breach “Operation Black Tulip”, FOX-IT Report) [57]

Whilst DigiNotar is perhaps the most comprehensive and well documented breach, Comodo, another CA, was also breached, resulting in a number of other fraudulent certificates. A number of other mistakes and issues have led to similar problems [58, 59, 60, 61].

5.1.1.2 Certificate Transparency

In response to these attacks, Google began work on Certificate Transparency [34]. Certificate Transparency introduces three new concepts to allow the behaviour of a CA to be monitored, in order to detect the existence (however that occurs) of fraudulent certificates.

• Certificate Logs: Certificate logs can be appended to by anyone observing a certificate; entries are inserted into a Merkle hash tree [73], which provides an integrity guarantee for the log.
Logs are maintained by log servers, run by CAs and third parties.
• Certificate Monitors: Monitors watch the log servers and ensure that they contain the right certificates. They take periodic snapshots of the logs and look for suspicious certificates, namely fraudulent certificates, ones with unusual metadata or ones which grant too many rights.
• Certificate Auditors: Auditors perform a similar role to monitors but check the certificates they are presented with by websites against what appears in the log server, additionally submitting the certificates they observe to the certificate log for addition.

The expectation is that certificate auditors will be built into browser clients and that both CAs and auditors will add certificates they either observe or issue to publicly available logs. We can see how this attempts to expose the three attack instances discussed above when applied to the Web. Where a CA is fooled into issuing a fraudulent certificate, it is issued to the public log and could be detected by a monitor; in this case a monitor acting on behalf of the genuine
domain owner. Regardless of whether the certificate is used for a highly selective MITM attack or not, there is a chance of detection. In the instance where a CA deliberately issues a fraudulent certificate, or the CA signing key is stolen (and we presume the person issuing the certificate fails to add it to the log), when it is presented to the client the audit feature in the browser will detect that it isn't present in the log server; the auditor will also add the fraudulent certificate to the log, where it can then be detected by a monitor.

The concept of certificate transparency is an interesting one and, if applied for our purposes, warrants further consideration. An issue of note is the effect of peering between multiple certificate authorities on the Web: a typical browser on a modern operating system may trust anywhere in the region of 30 different Certificate Authorities, and many more intermediate ones. This isn't the case in a single centralised system. It makes the concept of certificate pinning [13], specifying that only a given CA can issue certificates for a domain, less relevant to us, as a situation like the one that led to the banning of a Chinese CA by Google [60] would not occur.

5.1.1.3 Conclusion

Certificate Authorities have been hugely successful. Only in the last five years have we seen a serious uptick in the number of attacks on the infrastructure, driven we believe in part by the increasing deployment of encryption by default in many products and services. Google have been central to this adoption, in no small part through a policy which ranks HTTPS websites higher in search results [62], thereby incentivising a move to encryption.

5.1.2 Web of Trust

An alternative approach to a Certificate Authority is the Web of Trust (WoT) [3].

“As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers.
Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.” (Zimmermann, P. 1994) [3]

The Web of Trust was first introduced by Phil Zimmermann in 1992 to support PGP [3]. The basic premise is that users maintain a key ring containing the public keys of others. These keys can have varying degrees of trust: they can be trusted directly by the person who owns the key ring, or they can be signed by another person who is in turn trusted.
[Diagram: Frank signs Bill's and Mary's certificates; Bill signs Tim's; Tim signs Bob's; Bob and Mary sign Alice's.]
Figure 1: Web of Trust with Chains of Trust

Figure 1 depicts six user certificates (represented as circles) as a directed graph: Alice, Bob, Tim, Frank, Mary and Bill. The arrows show the relationships between those people and who has signed whose certificate. Frank has signed Bill's certificate, Bill has signed Tim's, and both Mary and Bob have signed Alice's. This has required these users to make an affirmative decision to sign the certificates of others. The PGP literature talks about the idea of “key signing parties”, where people meet in person to verify, out of band, the keys used to sign certificates.

If we look at Figure 1, note the dashed line between Frank and Alice. If Frank wishes to send secure messages to Alice and obtains her certificate, either by asking her for it, finding it on her website or obtaining it from a key server, he must make a decision about how much he trusts it. He may opt to trust the certificate directly if he knows Alice; perhaps he can phone her and ask for the particulars of the certificate. The WoT is designed to allow a path of trust to a certificate through peers, where no one person holds any commanding authority in asserting the identity (or claimed identity) of the person who issues the certificate. In our example, Frank, seeking to validate Alice's certificate, has two paths to it. Frank trusts Bill, as he has signed Bill's certificate.
Frank in turn fetches Tim's certificate and, if he is to trust this path, must by extension trust Bill's trust in Tim, likewise Tim's trust of Bob, and finally Bob's trust of Alice.
Ideally we want to find the shortest chain of trust that we can, to minimise the number of entities we need to trust implicitly. Approaches to locating short chains of trust are an active area of research [83]. We can see from this chain that we need to trust each of the intermediate signatories along the connecting path. The concept of a “trusted introducer” is designed to shorten these chains and improve trust by reducing the number of people that need to be trusted.

In Figure 1, Alice's certificate is also signed by Mary, and Mary's certificate is in turn signed by Frank, so in this case there are two different routes to Alice's certificate. This is an important point: the decentralized approach of the WoT doesn't simply exist to avoid the need for a centralised authority, it also increases the resiliency of the network. If it later transpires that Bill has been compromised, that chain of trust is broken unless Frank is willing or able to close the loop with either Tim or Bob; in any case he may simply be better off trusting Alice's certificate directly after verification. With multiple paths, Frank still has a trustable path to Alice through Mary.

This leads to an obvious issue: we need to trust all of the people along the path of trust. The other difficulty is key rotation. When Alice wishes to introduce a new key she must let the rest of the system know she has one. Early versions of PGP didn't specify expiry dates on certificates, which meant that users could continue using public key material to send messages indefinitely. This was rectified; however, the WoT still has an issue with revocation, and again the solution is not straightforward [81]. This may seem a desirable model but, as we saw with Certificate Authorities, there are also downsides.
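The shortest-chain search described above can be sketched as a breadth-first search over the signature graph of Figure 1. The graph below encodes who has signed whose certificate in the figure; BFS guarantees that the first chain found is the shortest, which is why Frank's route through Mary beats the longer route through Bill, Tim and Bob.

```python
# Sketch: shortest chain of trust in the Web of Trust of Figure 1.
# `signs` maps a signer to the identities whose certificates they signed;
# edges are directional, matching the arrows in the figure.
from collections import deque

def shortest_trust_chain(signs, verifier, target):
    # Breadth-first search: the first chain reaching `target` is shortest.
    queue = deque([[verifier]])
    seen = {verifier}
    while queue:
        chain = queue.popleft()
        if chain[-1] == target:
            return chain
        for signee in signs.get(chain[-1], []):
            if signee not in seen:
                seen.add(signee)
                queue.append(chain + [signee])
    return None  # disjoint networks: no chain of trust exists

# The signature relationships of Figure 1.
signs = {
    "Frank": ["Bill", "Mary"],
    "Bill":  ["Tim"],
    "Tim":   ["Bob"],
    "Bob":   ["Alice"],
    "Mary":  ["Alice"],
}
```

Here `shortest_trust_chain(signs, "Frank", "Alice")` returns the two-hop chain through Mary, requiring Frank to trust only one introducer rather than the three along the Bill path; a `None` result corresponds to the disjoint-network case of Figure 2.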
5.1.2.1 Sybil Attacks

An issue with any decentralized model is the lack of a single authoritative source: an entity that can categorically say what is true and what is not. In distributed systems there are numerous protocols for enabling system components to come to a consensus about the state of a system [64], [65], the most well-known of which is probably Paxos [66]. Whilst consensus algorithms have been used to great effect to build fault tolerant systems, where active attackers are involved, agreeing securely on a single truth is hard [67]. In fault tolerant consensus the aim is to allow a cluster to determine which nodes have failed. The key difference is that nodes are expected to be mistaken because they are faulty, not because they are lying. This may appear to be a separate conversation but it is important, because this lack of a central authority presents a vulnerability which affects decentralized systems: Sybil attacks [63]:

“Peer-to-peer systems often rely on redundancy to diminish their dependence on potentially hostile peers. If distinct identities for remote entities are not established either by an explicit certification authority (as in Farsite [3]) or by an implicit one (as in CFS [8]), these systems are susceptible to Sybil attacks, in which a small number of entities counterfeit multiple identities so as to compromise a disproportionate share of the system.” (Douceur, 2002) [63]
Figure 2: Web of Trust with Disjoint Networks.

Figure 2 shows a number of system participants in two networks. Network B, at the bottom, has four distinct identities: Peter, Alice, Charlie and Harry. Network A, at the top of the diagram, also has four identities: Bill, Tim, Bob and another Alice with a distinct public key. To illustrate a Sybil attack, in this example Network B’s members are all controlled by the same entity and its Alice is an imposter; Network A’s four members are genuine and the real Alice is a member of that group. Now consider Frank, who is not connected to either network and wishes to communicate with Alice. If this were an environment using a PGP key server, Frank would be able to find both keys claiming to be Alice’s (DE3453 and AB1234). The difficulty here is that the prima facie evidence suggests AB1234 is ‘more trusted’ if Frank cannot make a judgement call about Peter, Charlie and Harry in Network B.
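The prima facie judgement Frank is forced into can be illustrated with a small sketch. The data structure and the `naive_pick` rule below are hypothetical, invented purely to show how raw endorsement counts favour the Sybil network:

```python
# Hypothetical key records mirroring Figure 2: each candidate key for
# "Alice@alice.com" carries the identities that signed it. Frank has no
# out-of-band knowledge of any signer, so a naive "most endorsements
# wins" rule is all he can apply.
candidates = {
    "DE3453": {"signers": {"Bob@bob.com"}},  # the genuine Alice
    "AB1234": {"signers": {"Peter@peter.com", "Charlie@charlie.com",
                           "Harry@harry.com"}},  # the Sybil network
}

def naive_pick(candidates):
    # Pick the key with the most signatures -- exactly the prima facie
    # judgement that a Sybil attacker exploits, since counterfeit
    # identities are cheap to manufacture.
    return max(candidates, key=lambda k: len(candidates[k]["signers"]))

print(naive_pick(candidates))  # the imposter's key AB1234 wins
```

The point of the sketch is that without some cost or verification attached to identities, signature counts carry no evidential weight: the attacker who controls Network B can manufacture as many endorsements as they like.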
5.1.2.2 Conclusion

If we apply the Web of Trust to a system like WhatsApp, where the identifiers are mobile phone numbers, a malicious user could (and with a billion users we take the view it is almost a certainty) claim an identity and create an entirely false user base to support it. This would require human users to make a determination about which key to trust. It also means that users would need to endorse keys in some way, either being prompted to sign keys they are presented with or opting to select trusted introducers. We fear that this is likely to be error prone; as we discussed in 4.1, we view poor usability as a factor leading to poor security decisions, as well as users not really understanding the risks. The WoT as applied in a social network appears at first glance to align naturally, but the risks mean that on its own it may be less than ideal.
6 Problem Statement Validation

In 2016 we feel that there is still a need for a trust model that works well for a human user, doesn’t rely on the user being technically savvy to function, and doesn’t require the user to do individual key authentication via out-of-band mechanisms. It must be consistently available yet protect its users from its operators compromising the system. It must also be compatible with a closed system: one controlled and operated by a single entity. Certificate Authorities meet these requirements well but have to be kept honest. The Web of Trust’s strengths and weaknesses stem from its decentralized nature; key revocation is difficult, and finding people who will be diligent in attesting to the authenticity of keys is critical, since these attestations underpin the entire system. This requires users to become involved in the process of signing the keys of others. The biggest issue to our mind is the susceptibility to Sybil attacks in disjointed networks, especially for systems based around an identity the user does not own [23], [24] and where there is limited sanction for those who engage in such activity if identities are not verified in some way.

6.3 Our Goal

To define a protocol which requires no specific intervention from users to allow public key verification to support end-to-end encrypted channels, and which is resistant to mass surveillance. We should however be more concrete about this and specify a set of requirements through which we can evaluate potential schemes. Let us also consider primary requirements alongside additional, less critical needs. The protocol should:

1.1 Enable an end user to verify the authenticity of another end user’s public key component
1.2 Enable an end user to detect the presence of a fraudulent public key component purporting to have been published by themselves.
1.3 Prevent the system operator, or anyone who can compromise the Certificate Authority or compel the Certificate Authority operator to assist them, from introducing fraudulent keys AND using those keys to mount successful man-in-the-middle attacks
1.4 Provide protection in a closed ecosystem, namely where all of the system components are owned and operated by the same entity

Additionally:

2.1 Be constructed in a way that allows external monitoring of endpoint software.

6.3.1 Requirements Review

Requirements 1.1, 1.2 and 1.3 are core aims; we assert that without them we have a protocol of little use. 1.3 is also slightly subtle: we want to be able to prevent MITM attacks, not simply detect them. Requirement 1.4 stems from the fact that if the protocol is going to be of any real use it has to work for existing systems. For obvious commercial reasons, incumbent platforms which reach into the billion-plus user range are closed systems [24], [25], [52].
Requirements 1.4 and 2.1 are linked; in a closed system where one entity is in control of the entire ecosystem we don’t only assume, but know, that the entity is also producing the endpoint software. If our threat actors are as capable as we assert in section 4.3, then legal compulsion may extend to the software the user runs, as we have seen [33]. This is a distinct requirement because here we enter the realms of directed surveillance.

6.4 Attackers

Attackers for the purposes of the evaluation are assumed to be well-motivated and technically astute. It is also assumed they have the power to compel anyone in the system to assist them to the best of their ability through whatever means they have at their disposal [32], [33]. We assume the following about our attackers:

• Attackers can compromise the software applications and infrastructure running on the server side
• Attackers can shape traffic and direct messages away from their intended targets, or drop them selectively
• Attackers can build in features that are of use to them, allowing them to run indefinitely on the server
• Attackers cannot modify messages sent from uncompromised endpoints
• Attackers cannot (or do not want to) compromise the applications running on some or all of the participating endpoints

We make the point about the attacker not wanting to compromise a varying number of the endpoints, even if it were practical, because of the risk of discovery whilst interfering with both server and client components of the system. An implementation which makes tampering detectable through clear and observable protocol behaviour would be a useful addition.
7 State of the Art of PKI Protection

There has been significant work done in the area of PKI protection in recent years [86]. We evaluate two differing methods, both of which were developed after the Snowden disclosures. The first, Attack Resilient Public-Key Infrastructure, has received a reasonable amount of academic attention. The other, CONIKS, has also garnered significant interest, although as yet there is an absence of critical analysis.

7.1 Attack Resilient Public-Key Infrastructure (ARPKI)

ARPKI, Attack Resilient Public-Key Infrastructure [10], is a proposed model which introduces a provable level of transparency to a certificate authority using the TAMARIN prover [11]. It aims to provide a robust approach to the problem of CA compromise.

7.1.1 Mode of Operation

ARPKI introduces a number of system actors: two Certificate Authorities (CAs) and an Integrity Log Server (ILS). The design is such that where n-1 entities are compromised the system maintains integrity. In ARPKI, the domain owner registers a public key with one CA (CA1) but also nominates a second CA (CA2) and an ILS. CA1 performs the role of verifier, attests the authenticity of the public key and issues an Attack Resilient Certificate (ARCert). CA1 also transmits a record of its activity to ILS1. ILS1 in turn is responsible for synchronizing its view of the world with any other ILS instances in the system and for ensuring that other ILS instances are diligent in recording the ARCert in their records. The ILSes then publish signed copies of their integrity trees, which are in turn verified by CA2 and any other optional verifiers engaged in the system. At the end of registration the domain is in possession of an ARCert signed by CA1, CA2 and ILS1. A user wishing to initiate a secure connection to the domain obtains the ARCert and validates the signatures against CA1, CA2 and ILS1.
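A minimal sketch of the client-side validation step described above might look as follows. The `verify` placeholder and the signature encoding are our own illustrative inventions, not ARPKI’s actual wire format; the point is only that a connection is accepted when valid signatures from all three nominated parties are present:

```python
# Hypothetical sketch of the client-side ARCert check: a connection is
# accepted only if the certificate carries valid signatures from CA1,
# CA2 and ILS1. `verify` stands in for a real signature primitive.

REQUIRED_SIGNERS = {"CA1", "CA2", "ILS1"}

def verify(signature, signer, cert_body):
    # Placeholder: a real implementation would check an actual digital
    # signature against the signer's public key.
    return signature == f"sig:{signer}:{cert_body}"

def validate_arcert(cert_body, signatures):
    """signatures: mapping of signer name -> signature blob."""
    return all(
        signer in signatures and verify(signatures[signer], signer, cert_body)
        for signer in REQUIRED_SIGNERS
    )

cert = "alice.example;pubkey=DE3453"
sigs = {s: f"sig:{s}:{cert}" for s in ("CA1", "CA2", "ILS1")}
print(validate_arcert(cert, sigs))                   # all three present
print(validate_arcert(cert, {"CA1": sigs["CA1"]}))   # missing signers
```

The design choice this captures is ARPKI’s n-1 resilience: no single compromised signer can produce an acceptable certificate on its own, because the client demands all three signatures.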
To help assess the relevance of this work in relation to our own problem statement it is worth examining their adversary model, to see whether the threat environment relates to our own or whether it is attempting to solve a different problem. In their research Basin et al. state their assumptions [10]:

• Attackers can control the network
• Attackers can compromise some long term secrets
• Not all parties can be compromised

7.1.2 Evaluation

The authors’ assumptions are useful and relevant to our problem statement; however, two important considerations arise. Firstly, the nature of compromise is not discussed, which leaves an open question: by what means could actors within the system choose, or be compelled, to behave
dishonestly? This is important because, if the actors operating system components can be identified, the assumption about not compromising all parties may not be a reasonable one; for example, where a finite number of components are operated by the same organisation or appear in the same or cooperating jurisdictions.

The next components are the desired security properties:

• Connection integrity: clients establishing connections must be assured that they are communicating with the legitimate owner of a domain
• Only legitimate certificates are registered
• Only legitimate updates occur to certificates
• Attacks must be publicly visible

The first property should underpin all models within a PKI, regardless of approach and method of implementation. The second and third properties are also important, as these are critical points of attack. Illegitimate certificates are a major concern, and both certificate transparency [12] and certificate pinning [13] have been proposed as methods to mitigate that threat, which we have previously discussed. The final property of interest is attack visibility. Basin et al. state:

“If an adversary successfully launches an attack against the infrastructure by compromising entities, the attack should become publically visible for detection”. (Basin et al., 2014) [10]

The criterion that the compromise must be publicly visible introduces an interesting question about which actor in the system will do the detection, how they are incentivised to monitor, and what action they will take when such detection occurs.

7.1.3 Concerns & Open Questions

The design leads to some implicit system requirements which, whilst acknowledged, receive little practical treatment. These are chiefly at the intersection of security systems design and distributed systems engineering. In terms of concerns, we wonder, in the real world: who runs the Certificate Authorities?
And perhaps more critically, as they are the checks and balances, who runs the ILS servers? The authors acknowledge that if CAs and ILSes were able and willing to maintain separate lists then an attack exists, but they appear to have discounted this. This is fair in terms of internal consistency, as their assumptions state that not every entity within the system is open to compromise. We note that entities are organisational and not individual. Accordingly we question whether this is realistic in the face of a nation state actor with the ability to issue legally binding requirements [32], [33]. Furthermore, the authors state that this concern is negated by virtue of the fact that it would be quickly detected, but crucially they don’t explain how.

“Given two disjoint sets of CAs, where one set is honest and the other is compromised, if a domain successfully registered a certificate for itself using the honest CAs, we would like to guarantee that no bogus certificate can be registered for that domain by the adversary. But, if all the ILSes are compromised and willing to keep two separate logs, then the adversary can register an ARCert for the
domain using the disjoint set of compromised CAs and ARPKI would not prevent this attack. However, this attack is highly likely to be detected quickly, and all the dishonest ILSes and CAs can be held accountable.” (Basin et al., 2014) [10]

If these lists could be maintained and delivered only to the target of an attack, then this attack becomes practical. There appears to be an inherent assumption that dishonest CAs and ILSes would continue to publish their dishonest activity to the wider world for monitoring. We believe this is significantly flawed, especially with our greater understanding of what the attacker is capable of [50]. Furthermore there is an assumption that not all CAs and ILSes are in the same operating network; this is not the environment our system operates in.

7.1.4 Relation to Requirements

Finally, we review ARPKI against the requirements we set out in section 6.3, addressed inline:

Q: Can ARPKI enable an end user to verify the authenticity of another end user’s public key component?
A: Yes. ARPKI enables an end user to leverage an ILS, or group of ILSes, to verify the authenticity of the public key component.

Q: Does ARPKI enable the end user to detect the presence of a fraudulent public key component purporting to have been published by them?
A: Yes. A client can monitor their own public key and would know if someone was fraudulently issuing a public key; it also allows a user to determine if they have been deliberately omitted from the directory.

Q: Does ARPKI prevent an attacker who can compel the system operator to assist them from introducing fraudulent keys AND using those keys to mount successful man-in-the-middle attacks?
A: No, it does not, even if we assume that, as highlighted in 7.1.2, a single system operator isn’t operating all components.
If we take a strict interpretation of ‘system operator’, however, then the answer is certainly not, because an attacker can compel multiple system owners to become involved in the fraud, as per [10]:

“…if all the ILSes are compromised and willing to keep two separate logs, then the adversary can register an ARCert for the domain using the disjoint set of compromised CAs and ARPKI would not prevent this attack.” (Basin et al., 2014) [10]

Q: Does ARPKI provide protection in a closed ecosystem, namely where all of the system components are owned and operated by the same entity?
A: No, and this is perhaps the most profound observation in our evaluation. ILS servers need to exist as part of a cooperative but separate and open environment to be effective. It therefore does not meet our requirements.

7.1.5 Conclusion

ARPKI has some interesting ideas, and the role of an ILS as a form of certificate monitor with a specific responsibility for monitoring keys warrants further consideration. Applied to the problem statement of a social network, ARPKI has less applicability, especially given the core role of nominated third parties operating ILSes which are visible to all. In a closed network, the operator of an ILS would likely be the CA, which should not be the case in this system. For large peer-to-peer systems it is not clear who could perform this role; however, full credit must be given to the authors for some excellent work which appears to have much better applicability to the CA problem of the World Wide Web. We note that the presence of a formal verification method is a good design feature, and the ability to hold entities publicly accountable is an excellent component of the protocol, however difficult in a closed system.
7.2 CONIKS: Bringing Key Transparency to End Users

CONIKS is an end-user key verification service designed specifically for use with end-to-end encrypted communications channels [42]. Fundamentally, CONIKS creates a novel hash-chain-based directory of key bindings. These key bindings are snapshotted at a recurring interval; the hash chain is signed by the directory provider and published to clients, who monitor the chain and compare their own binding to ensure its correctness (non-equivocation). Where the signed binding is published, it exists as proof that the directory is correct or has been compromised (or compelled to publish a fraudulent key).

7.2.1 Mode of Operation

CONIKS creates an ecosystem with four documented actors; we highlight the authors’ assumptions within:

• Identity Providers – an Identity Provider can be thought of as a CA. CONIKS refers to Identity Providers as being responsible for issuing name-to-key bindings within their namespace. There is also an assumption that some other PKI manages the distribution of signing keys for the providers themselves.

• Clients – Clients refers to the client software which is run on a user’s trusted device. The authors make the point that CONIKS does not address the problem of compromise of the client endpoints. Clients monitor the consistency of their own bindings but nobody else’s. The next point is rather striking however [42]:

“We also assume clients have network access which cannot be reliably blocked by their communication provider. This is necessary for whistleblowing if a client detects misbehaviour by an identity provider (more details in s.4.2). CONIKS cannot ensure security if clients have no means of communication that is not under their communication providers controls.”
In an accompanying footnote the authors add: “Even given a communication provider who also controls all network access, it may be possible for users to whistleblow manually by reading information from their device using a channel such as physical mail or sneakernet, but we will not model this in detail” (Melara, Blankstein, Bonneau, Freedman, et al., 2015) [42]. We return to this in more detail later.

• Auditors – Auditors verify that the Identity Provider is not equivocating, that is, modifying the directory and publishing the results, either publishing false keys or removing legitimate ones. Auditors track the chain of signed snapshots. They also publish and gossip with other auditors, and all clients serve as auditors for their own Identity Provider.

• Users – Users are listed separately because of the availability of varying security levels, described by the differing local policies users may choose to operate depending on their own threat posture.
At the centre of the system an Identity Provider manages a set of name-to-key bindings in a Merkle prefix tree [73]. On a recurring basis (some system-specific time interval) the identity provider generates a non-repudiable “snapshot” of the directory by digitally signing the root of the Merkle prefix tree to form a Signed Tree Root (STR). Clients can then use an STR to check the consistency of key bindings in a highly efficient manner that scales to very large directories. Each STR contains the hash of the previous STR, which creates a linear history of the directory (CONIKS section 3 [42]).

The directory also has the appealing property of being privacy preserving: a private index is computed as an unpredictable function of the username and a nonce, so the directory doesn’t leak data about names; furthermore, at the index, the user’s public key itself is not published but rather a hash of the key and a nonce. This prevents an attacker from enumerating the directory with a known set of public keys.

To verify a given public key, an authentication path through the tree is published as a ‘route’ to the relevant entry in the directory. This allows users to check whether they have been included in the view of the directory which was subsequently snapshotted in an STR. It additionally allows users to check both for spurious keys and for having been dropped from the directory, as they can compute their own routes. Key lookups (CONIKS s.4.1.2) are done by requesting the public key for a given user; using the authors’ example, Bob requests Alice’s key. The identity provider returns the public key, the authentication path and the STR. Bob is now in possession of an STR, Alice’s public key and an authentication path within a given STR snapshot, which Bob can verify.
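The linear STR history can be sketched as a simple hash chain. This is our own illustrative model; real CONIKS STRs carry more fields and are digitally signed, which we omit here:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_str(tree_root: str, prev_str_hash: str) -> dict:
    """Sketch of a Signed Tree Root: each snapshot commits to the
    Merkle root of the directory AND the hash of the previous STR,
    forming the linear history described above (signing omitted)."""
    body = f"{tree_root}|{prev_str_hash}"
    return {"root": tree_root, "prev": prev_str_hash, "hash": h(body.encode())}

def chain_is_linear(strs) -> bool:
    # An auditor's basic consistency check: every STR must point at
    # the hash of its predecessor, so history cannot be rewritten
    # without detection.
    return all(s["prev"] == p["hash"] for p, s in zip(strs, strs[1:]))

genesis = make_str("root-epoch-0", "0" * 64)
epoch1 = make_str("root-epoch-1", genesis["hash"])
epoch2 = make_str("root-epoch-2", epoch1["hash"])
print(chain_is_linear([genesis, epoch1, epoch2]))  # consistent history

# Equivocation: a fork shown to a different client cannot be merged
# back into the honest history without breaking the chain.
fork = make_str("root-epoch-2-forged", epoch1["hash"])
print(chain_is_linear([genesis, epoch1, fork, epoch2]))  # fork detected
```

This is why the authors can claim that equivocation to two distinct parties must be maintained forever: once the histories diverge, any auditor who obtains STRs from both views holds non-repudiable evidence of the fork.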
7.2.2 Evaluation

The fact that identity providers are responsible for issuing bindings within their namespace seems appropriate. Without some authority, users would need to make discerning judgements about where name-to-key bindings come from; this is consistent with a CA model. Additionally, the authors don’t deal with the compromise of the endpoint software; again this seems reasonable to us. Ultimately, if the client endpoint is untrusted then we cannot rely on anything it purports to do unless we are able to monitor its behaviour effectively.

We take issue with the need for clients to have network access that cannot be reliably blocked by their communication provider. The CONIKS authors state that this is necessary for whistleblowing. It is not the assumption itself we take issue with, but the implied notion that this activity will then reach anyone for whom it is useful. Where equivocation is now provable, does this prevent an attack from being mounted on a given user or set of users? Whilst no signature method can prevent the introduction of integrity violations, signatures can be used to allow violations to be detected. We think that this concept of publicly visible proof of equivocation is useful; it does not however prevent a fraudulent key from being used to impersonate a user. A MITM attack could still be mounted, except that the client may know it is possibly occurring; there is still nothing in the message that allows the client to know it has happened.
The authors discuss the idea of a divergent STR:

“An identity provider may attempt to equivocate by presenting diverging views of the name-to-key bindings in its namespace to different users. Because CONIKS providers issue signed, chained “snapshots” of each version of the key directory, any equivocation to two distinct parties must be maintained forever or else it will be detected by auditors who can then broadcast non-repudiable cryptographic evidence, ensuring that equivocation will be detected with high probability (see Appendix B for a detailed analysis).” (Melara, Blankstein, Bonneau, Felten, & Freedman, 2015) Section 2.2

If we revisit our attackers from s.6.4 – “Attackers can build in features that are of use to them therefore allowing them to run indefinitely” – we assert that equivocation could indeed be maintained forever.

7.2.3 Concerns & Open Questions

There are a number of open questions, some of which are acknowledged as future work. One – how the broadcast of STRs is done – is left out of scope. This strikes us as strange, because a client’s ability to inform the world of equivocation is absolutely at the heart of CONIKS’ strengths. The related concept of a gossip protocol is also strangely unexplained. It is also not clear to us how the correctness of messages in and out of the client can be verified. In a divergent STR environment, the ability to provide proof of equivocation must be extracted from the application. This is not an insurmountable problem, but a solution is still missing; it relates to gossip, and it frustrates us that this is missing.

7.2.4 Relation to Requirements

In structuring our conclusion as to its applicability, let us finally review CONIKS in the context of the goals we laid out for ourselves in 6.3:

Q: Does CONIKS enable one end user to verify the authenticity of another end user’s public key component?
A: Not on its own.
The protocol would allow Alice to obtain a fraudulent public key for Bob, obtain a false STR (which could act as a proof of equivocation) and verify the key. Alice could then use this key to begin communication.

Q: Does CONIKS enable the end user to detect the presence of a fraudulent public key component purporting to have been published by them?
A: Yes, assuming the fraudulent key is published to them as well. Where divergent STRs are possible, we require another mechanism to detect their use.

Q: Does CONIKS prevent an attacker who can compel the system operator to assist them from introducing fraudulent keys AND using those keys to mount successful man-in-the-middle attacks?
A: No, it does not. The critical point is that a back-to-back man-in-the-middle attack can be mounted if a divergent view of STRs can be maintained. We note that it is the introduction of fraudulent keys and the ability to use them to mount attacks that we take together in our requirement.
Q: Could CONIKS provide protection in a closed ecosystem, namely where all of the system components are owned and operated by the same entity?
A: Yes, it could, but again we require some other, otherwise unspecified mechanism to allow users to spread these divergent STRs.

7.2.5 Conclusion

We feel CONIKS has some genuinely interesting and novel ideas; the privacy preserving concepts, for instance, are in the current climate to be applauded. We particularly like the idea of differing security policies. We think the authors must tackle the practical engineering problems associated with the deployment of such systems, going beyond the creation of cryptographic protocols to evaluate in detail their proposed behaviour in the real world. We saw in 4.3 that the adversaries are attacking not only the cryptography but the practical implementations of systems to achieve their goals. The most fundamental issue is this: if the attacker can drop whistleblowing messages or prevent them from being received, the system provides no protection. How whistleblowing messages are received by clients receives almost no commentary. This, we feel, is a serious omission. We also do not believe that users need to be totally isolated to prevent the transmission of whistleblowing messages, merely selectively isolated.

7.3 Conclusion on the State of the Art

Both ARPKI and CONIKS attempt to tackle the trustworthiness of a CA. They do this by introducing additional opportunities for monitoring the behaviour of a CA. ARPKI is a model that may work well when different entities manage the system components; in our case, however, we expect that not to be true. CONIKS turns users into monitors but leaves out how they will communicate a warning when something fraudulent is detected. Both schemes, we feel, fall short of an ideal solution and motivate the need for something new.
8 Hybrid Trust Model for Assurance of Public Keys in Social Networks

8.1 Introduction

The fundamental premise of our model is that participants cooperate to monitor the Certificate Authority to ensure its honesty in a system where there is a single centralised Trusted Third Party (TTP). This enables it to be applied to user-to-user messaging systems like WhatsApp [23] and Apple iMessage [24]. In these systems the vendor is responsible for creating the software that runs on devices, the protocol (including end-to-end encryption) and operating the infrastructure. For both WhatsApp and Apple iMessage the trusted third party should perhaps be described not so much as a third party but rather as the only party. In these cases, public key components are made available in a queryable directory; the vendor acts as a single CA. In our model we adopt elements of Certificate Transparency [34] and combine them with peer-to-peer, Web of Trust [3] like behaviour, where we trust our friends to become monitors and auditors, accumulating keys they can share with us for verification.

In a system operated by a single entity with a large user base, a centralised CA allows the operator to maintain control of the ecosystem. This control, we assert, is of commercial importance when it comes to generating revenue. Based on the amount of money involved in acquisitions in recent years there must be significant expectation of the value of these products [1], [2], especially if the owners expect to sell on the asset. This centralised source of trust within a closed ecosystem must however be kept honest for the security of the system’s participants. For user-to-user messaging systems, operators must ensure the availability of the CA to enable users to locate public key information. As an engineering challenge we assert that this is a reasonable model (based on real world examples).
This is on a par with ensuring the availability of other critical components, like message brokers, to enable the storage and forwarding of messages. The CA can also deal with key revocation, and the transmission and availability of successive public keys. The TTP can specify any authentication requirements for proof of identity in a way that is relevant to the user base, e.g. multi-factor authentication, email access, etc.

8.2 Assumptions

Our system allows people to cooperate to detect the publication of fraudulent public keys. To enable this we make a number of assumptions about the environment our trust model operates within:

1. Participants can discover each other through a shared class of identifier, e.g. a telephone number, username or other value that has meaning within the network
2. Participants can communicate with each other over an electronic communications channel
3. The social networks with which people want to communicate already exist in some form, i.e. contact lists on devices or from social media networking tools like Weibo and Facebook
8.2.1 Technical Detail

For completeness we are explicit about other protective measures for safeguarding protocol messages. Transport-level encryption should be employed between the software running on a client device, the Certificate Authority and the Message Broker. This is to provide confidentiality and to prevent tampering and replay of protocol messages by someone outside the infrastructure. We imagine that this is supported by the typical mechanisms already available, including code signing and Transport Layer Security (TLS) [74, 75]. The root CA signing key is embedded within the Endpoint distribution to allow CA message signatures to be verified. Messages contain directional identities to prevent reflection attacks, and random challenges to prevent replay of messages by attackers at the Message Broker.

8.2.2 Attackers

In 6.4 we discussed the presumed attacker's capabilities based on our discussion of the recent disclosure of nation-state capabilities in 4.3. As we discuss the protocol messages and behaviour, we should keep in mind what we consider to be the attacker's goals as well as their capabilities. This way we can focus on what they are likely to do in pursuit of those goals. We view the attacker's main goals in relation to our protocol as being to:

1. Impersonate a user
2. Read plaintext messages sent between two users
3. By impersonation, read plaintext messages sent between two users

To be explicit, by impersonation we mean that an attacker pretends to be a genuine user in their entirety, generating messages and receiving them. By reading messages we mean simply that: reading messages sent between two users. The last point is where the attacker performs a MITM attack by virtue of being an imposter, where relevant completing protocol runs with each party independently.
[Figure 3: Back-to-Back Man-in-the-Middle Attack. Alice (Genuine) ↔ Ian (Impersonating Bob); Ian (Impersonating Alice) ↔ Bob (Genuine)]

Figure 3 shows Ian performing a back-to-back MITM attack by completing independent protocol runs with Alice and Bob. We also assume that no new cryptographic attack is available to the attacker, and that the signature and asymmetric encryption operations are secure. We also assume that the attacker has not stolen any private key component.
8.3 Nomenclature & Message Format

It is worth specifying a few key terms at this point:

Identity – Identifier with some intrinsic meaning within the network, e.g. name, email address or telephone number.
Identity Key – Public key component which is used for operations to prove identity. This is the public key which, in our model, is published by the CA and can be queried by other users.
Certificate Authority – Infrastructure component responsible for publishing and responding to queries for Identity Keys.
Endpoint – Software running on a user device, e.g. a smartphone or desktop.
Message Broker – Infrastructure responsible for routing messages between Endpoints.
Users – People who use the Endpoint.
Source – The source of an Identity Key obtained by an Endpoint. This can be Prior Knowledge, Key Confirmation, Gossip or CA.

8.3.1 Messages

We describe the message format as follows, using an example:

Alice → sign(Alice, IK_Alice, Bob, IK_Bob, challenge_Alice) IK_Alice → Bob

The preceding structure describes a message sent from Alice to Bob. The contents within sign() are signed using Alice's Identity Key, described as IK_Alice. All Identity Keys are described as IK_Owner or, in the case of fraudulent keys representing a user, as IK_Fake_Owner. Curly braces are used to denote a set {item1, item2} or {class}. Identity is a globally visible unique identifier which is bound to the endpoint. Alice in the example is Alice's identity, for example a telephone number. IK_Alice is the Identity Key for Alice; IK_Bob is the Identity Key for Bob. Note that, depending on the message direction, the perception of the key is important to the protocol: Alice may know her own Identity Key, but she has a belief in the Identity Key of Bob obtained from some source. challenge_Alice is a challenge issued by Alice to Bob; again, the direction of the message is important.
challenge_Alice is the original challenge, returned in the opposite direction in the following message but signed so as to prove possession of the Identity Key.
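The message notation above can be sketched in code. The following is a minimal illustration, not part of the thesis protocol specification: the HMAC tag is a symmetric stand-in for a real asymmetric signature (such as Ed25519), and all function and field names (`toy_sign`, `key_confirmation_request`, the example identities) are hypothetical.

```python
import hashlib
import hmac
import json
import os

def toy_sign(private_key: bytes, payload: dict) -> bytes:
    """Stand-in for an asymmetric signature: HMAC over a canonical serialization."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(private_key, data, hashlib.sha256).digest()

def key_confirmation_request(sender_id, sender_ik, recipient_id, recipient_ik, signing_key):
    """Build Message 1: Alice -> sign(Alice, IK_Alice, Bob, IK_Bob, challenge_Alice) -> Bob."""
    payload = {
        "sender": sender_id,            # Identity, e.g. a telephone number
        "sender_ik": sender_ik,         # IK_Alice as the sender claims it
        "recipient": recipient_id,
        "recipient_ik": recipient_ik,   # IK_Bob as the sender believes it; known
                                        # position in the message blocks reflection
        "challenge": os.urandom(32).hex(),  # 256-bit random challenge
    }
    return {"payload": payload, "sig": toy_sign(signing_key, payload).hex()}

msg = key_confirmation_request("alice@example.net", "IK_Alice",
                               "bob@example.net", "IK_Bob",
                               signing_key=b"alice-private-key")
```

The signature covers the whole payload, so tampering with any field (including the recipient's key) invalidates it, which is the property the later verification steps rely on.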
Also note that we specify Identity and Identity Key in protocol messages; this is to simplify the explanation of important components of the message, and in reality some lightweight certificate would be used.

8.4 Method of Operation

The first role of a new client after it joins the system is to start building its long-term memory of Identity Keys. Important: the registration of the Identity Key's public component is deliberately left out of scope. The reason is that this process must be compatible with the nature of the identity: an email address is a wholly owned identity, whereas a telephone number may be considered 'on loan', even if the 'rental' has been on a long-term basis (the author has rented their mobile telephone number, at the time of writing, for over 17 years). This should not be mistaken for triviality; the lifecycle of an identity is an important design consideration, but it will be system specific. The Identity value is chosen to be relevant to the underlying system the protocol is applied to, but we stipulate that identity must be consistent between all Endpoints, Message Brokers and the CA. One of the core tenets is that participants take an active role in monitoring the keys that directories publish for members of their social network. In its simplest form this involves querying the CA for the Identity Keys of all of the participants a client is interested in. This is a concept related to certificate transparency [34], which turns participants into both certificate monitors and certificate auditors. Understanding the perspective that others have of a User's Identity Key underpins the system. We note that this implicitly exposes to the CA the relationships that a participant may have within a given social network.
We view this as an acceptable price to pay for the following reason: in a system where the store and forward facility is operated by the same entity as the CA, and given the attack models previously discussed, we assume that a well-positioned attacker can view all of the message traffic flowing through the Message Broker. It is therefore reasonable to assume that learning these relationships would be well within the attacker's capabilities in any case, regardless of this disclosure. We return later to the distinction between placing a watch on all known Identity Keys vs. a subset, and the resultant trade-offs.

The protocol consists of five logical phases:

1. Bootstrap
2. Confirmation
3. Mesh Confirmation
4. Ongoing Monitoring
5. Gossip protocol

To summarise at a high level: Bootstrap is where an Endpoint collates identities of interest and queries the CA for the associated Identity Keys. Confirmation is a bi-directional challenge-response
process performed between two Endpoints to confirm keys. Mesh Confirmation involves an Endpoint asking other Endpoints for their knowledge of a given Identity Key. Ongoing Monitoring is the process of keeping an up-to-date knowledge of Identity Keys by re-querying the CA and performing confirmation in the event of changes. The Gossip Protocol attaches requests for Identity Key data to normal (non-key-related) communication to frustrate the selective removal of those messages.

8.4.1 Phase 1: Bootstrap

Goal: The goal of the bootstrap phase is for every Endpoint to build an inventory of identities for monitoring and to obtain an initial view of those identities and the associated Identity Keys.
Result: The Endpoint has knowledge of key data obtained from the Certificate Authority.

At bootstrap the first action is to collect identities which are relevant to the system. As we discussed earlier, the identifier, e.g. telephone number or email address, needs to be useful to the social network, allowing other known participants to be located. Collection of identities may involve querying some pre-existing database like an address book, contact list or remotely provided friend list. The application then queries the CA for all of the Identity Keys for the identities that it has collected.

[Figure 4: Bootstrap of Key Monitoring. Alice: 1. Collates known identities; 2. Obtains identity keys; 3. Queries Identity Keys from the Certificate Authority; 4. Identity Keys returned; 5. Stores keys in long-term memory.]

Figure 4 shows the bootstrap of the network participant: collating the identifiers which are relevant to the network, then obtaining Identity Keys and storing them.
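The bootstrap steps above can be sketched as follows. This is an illustrative sketch only: the CA is modelled as a plain dictionary standing in for the real directory service, CA signature verification against the embedded root key is elided, and the class and method names (`Endpoint`, `bootstrap`) are assumptions, not part of the thesis.

```python
import itertools

class Endpoint:
    """Minimal model of a client bootstrapping its long-term key memory."""

    def __init__(self, address_book):
        self.address_book = address_book      # identities of interest (e.g. contacts)
        self.long_term_memory = {}            # identity -> Identity Key, kept for monitoring
        self._query_ids = itertools.count()   # source of 16-bit sequential queryIds

    def bootstrap(self, ca_directory):
        # queryId lets the Endpoint correlate distinct query requests
        query_id = next(self._query_ids) % (1 << 16)
        # Message 1: query({identities}, queryId) -> CertificateAuthority
        identities = set(self.address_book)
        # Message 2: CA -> sign({identity, identityKey}, queryId); signature
        # checking against the embedded root CA key is elided in this sketch
        reply = {i: ca_directory[i] for i in identities if i in ca_directory}
        self.long_term_memory.update(reply)   # keys are stored but not yet trusted
        return query_id, reply

ca = {"bob@example.net": "IK_Bob", "dave@example.net": "IK_Dave"}
alice = Endpoint(["bob@example.net", "dave@example.net", "eve@example.net"])
qid, reply = alice.bootstrap(ca)
```

Note that identities unknown to the CA (here `eve@example.net`) simply yield no key; the stored keys remain untrusted until the Confirmation phase.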
Message 1: Alice → query({identities}, queryId) → CertificateAuthority
Message 2: CertificateAuthority → sign({identity, identityKey}, queryId) IK_CA → Alice
Message Function: Query for a set of Identity Keys from the CA.
Aim: To obtain a list of identities and keys from the CA for ongoing monitoring and use.
Table 1: Query Message and Reply

For an attacker to begin impersonation they would need to provide an Identity Key for which they have the corresponding private key; otherwise they will not be able to sign messages proving authenticity. At the conclusion of this phase, Alice has a list of Identity Keys, which she does not yet trust, for the identities she has queried from the CA. QueryId is a 16-bit sequential number which allows the Endpoint to correlate distinct query requests.

8.4.2 Phase 2: Confirmation Phase

Goal: To interactively communicate with a user for whom you have discovered an Identity Key.
Result: Both parties now understand each other's perspectives of the Identity Keys, and from where they were obtained.

The next phase is to perform a challenge-response protocol with all of the users who are first-degree connections; that is, anyone whose Identity Keys are being monitored by the user because of an existing relationship. This forces a potential attacker into beginning their subversion of pairs of Identity Keys from the very outset of the system. We describe this in more detail in 8.5.

[Figure 5: Key Confirmation Challenge Response Protocol. Between Alice and Bob: 1. Key Confirmation; 2. Verify Key; 3. Confirmation Response & Challenge; 4. Challenge Response. Between Alice and Dave, via the Certificate Authority: i. Key Confirmation; ii. Fetch Identity Key for A; iii. Identity Key Response; iv. Verify Key; v. Confirmation Response & Challenge; vi. Challenge Response.]
Figure 5 shows the Key Confirmation protocol running between Alice and Bob, as well as between Alice and Dave. Steps (1-4) show what happens when Bob already has knowledge of Alice's Identity Key. Steps (i-vi) show a subsequent run where Alice seeks confirmation of Dave's Identity Key, but where Dave is not already monitoring Alice's Identity Key. The specific nature of the challenge-response protocol is important: it performs entity authentication and includes information which provides defences against other attacks.

8.4.2.1 Key Confirmation Request: Outbound

Message 1: Alice → sign(Alice, IK_Alice, Bob, IK_Bob, challenge_Alice) IK_Alice → Bob
Message Function: To provide Bob with Alice's identity and her Identity Key; to show Bob what Alice believes Bob's identity and Identity Key are; and lastly to issue a challenge in a block signed by Alice.
Aim: To allow Bob to learn, and add to his long-term memory, Alice's perspective of herself, and to allow Bob to detect any discrepancy in Alice's perspective of him.
Table 2: Key Confirmation Request Message

The purpose of this message is to provide Bob with a perspective on how his identity is viewed by others. This helps Bob understand whether others are using incorrect keys, either maliciously provided or perhaps out of date, leading to a key refresh. It also provides Bob with the key that Alice is claiming is hers, which he records. Alice also issues a challenge: a 256-bit value chosen at random so that, when the reply is returned, Alice knows it was in direct response to her request; this avoids replay of messages. Alice also includes Bob's Identity Key in a known position in the message to avoid reflection attacks. The use of the long-term memory of keys stored by participants will become apparent when we examine the gossip protocol.
In a correctly functioning system, Bob first verifies the signature using any prior knowledge he has of Alice. In an ideal setting Alice and Bob have a strong tie in the social sense, and there is a good chance that Bob is already monitoring Alice's keys. Having verified the signature, Bob can now do some additional verification on the payload. First of all, Bob will be able to see that Alice's claimed Identity Key is the same as the one Bob has retrieved from the CA. Bob can also verify that Alice has his correct key; if she does not, then when he replies to the challenge Alice's signature verification will fail. After these are correctly verified, Bob also checks that the keys which Alice has obtained for Bob are correct, and can therefore assume that the directory is publishing Bob's correct keys. If an attacker wishes to impersonate Alice to Bob, he must drop this message, because he cannot allow Bob to see Alice's real Identity Key and must replace it with his own. An attacker will not be able to do this unless he is also able to get Bob to trust a fraudulent key, enabling the attacker to re-sign the message and continue the protocol run; otherwise, when Bob verifies the signature of the message he will detect that it has been tampered with and fail to complete the protocol. The attacker will also not complete the challenge response, and Alice will have a key in an unconfirmed state; more on this in 8.4.2.4.
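Bob's checks on Message 1 can be sketched as a single function. As before, this is an illustrative sketch under stated assumptions: the HMAC tag stands in for a real asymmetric signature (so the "verify key" here doubles as the signing key, which a real deployment would not do), and `check_key_confirmation` and its parameters are hypothetical names.

```python
import hashlib
import hmac
import json
import os

def _tag(key: bytes, payload: dict) -> bytes:
    # HMAC stand-in for an asymmetric signature over a canonical serialization
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).digest()

def check_key_confirmation(msg, alice_ik_from_ca, alice_verify_key, bobs_own_ik):
    """Return 'confirmed' if Message 1 passes all of Bob's checks, else 'untrusted'."""
    p = msg["payload"]
    if not hmac.compare_digest(_tag(alice_verify_key, p), msg["sig"]):
        return "untrusted"   # tampered with, or re-signed by an imposter
    if p["sender_ik"] != alice_ik_from_ca:
        return "untrusted"   # Alice's claimed key differs from the CA's copy
    if p["recipient_ik"] != bobs_own_ik:
        return "untrusted"   # Alice holds a wrong or fraudulent IK_Bob
    return "confirmed"       # safe to continue with Message 2

# Honest Message 1 from Alice
payload = {"sender": "alice", "sender_ik": "IK_Alice",
           "recipient": "bob", "recipient_ik": "IK_Bob",
           "challenge": os.urandom(32).hex()}
alice_key = b"alice-signing-key"
msg1 = {"payload": payload, "sig": _tag(alice_key, payload)}
```

The three checks mirror the text: signature validity, agreement between Alice's claim and the CA, and agreement between Alice's view of Bob and Bob's own key.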
8.4.2.2 Key Confirmation Message: Response

Message 2: Bob → sign(Bob, IK_Bob, Alice, IK_Alice, challenge_Alice, challenge_Bob, source) IK_Bob → Alice
Message Function: To provide Alice with Bob's perspective of himself and his perspective of Alice, to return Alice's challenge, and to issue Bob's own challenge.
Aim: To allow Alice to detect any discrepancy in Bob's perspective, and for Alice to verify that Bob is involved in the live protocol run.
Table 3: Key Confirmation Response Message

The purpose of this message is for Bob to confirm that Alice's perspective of Bob is correct, and that Bob has confirmed this in response to Alice's confirmation request. Bob replies with his own response, returning the challenge and what he believes to be Alice's Identity Key. He also tells Alice where he got the key from. Alice now gets to understand what Bob sees, for the same reasons discussed previously. If both views are consistent, we have achieved our goal of assurance of purpose of the Identity Keys and can run additional protocols to construct end-to-end encrypted channels. The keys have been mutually authenticated, and we have been able to verify that the keys being published by the directory are correct and up to date. For the confirmation phase of Alice and Dave's keys the protocol run is the same, except that Dave must first obtain Alice's Identity Key from the CA. In doing so, Dave adds it to his long-term memory for ongoing monitoring. Importantly, key confirmation messages should include whether the key has been obtained from the directory or retrieved from the client's existing memory (this is the source attribute). For an attacker to impersonate Bob to Alice, he must prevent this message from reaching Alice, unless he can provide a fraudulent Identity Key to Alice and replace Bob's key with his own.
If the message reaches Alice, she will see Bob's real Identity Key and detect any fraudulent key she has been supplied with. It is important to note here that if an attacker wishes to impersonate both parties in a back-to-back MITM attack, both Alice and Bob need to have been lied to about both of their keys. It is a fundamental goal of this challenge-response phase to force the attacker to begin his subversion of both Alice and Bob from the very beginning of their communication. Where the attacker can do this, he can complete any end-to-end encryption protocol based on these identities and read Alice and Bob's communication.

8.4.2.3 Key Confirmation Message: Challenge Response

Message 3: Alice → sign(challenge_Bob) IK_Alice → Bob
Message Function: To return Bob's challenge.
Aim: To allow Bob to know that Alice is involved in the live protocol run.
Table 4: Key Confirmation Challenge Response Message
Finally, the response to Bob's challenge is returned, signed by Alice, which Bob verifies using the key from the initial message. With this complete, we have now performed mutual authentication of the Identity Keys. This may seem a little redundant, but it is important: although Alice has seemingly initiated the request and Bob has replied, Alice also volunteered information in the first message for authentication. This requirement to respond allows Bob to know whether this was a live request from Alice and not a replay. An optional extension would be for the person doing key confirmation to respond stating for how long they had been observing the key (in the case of Dave, a newly obtained key could be represented by the value 0), or potentially even to reply with a chain of previous keys. We leave this out of scope for the moment because of the non-trivial complexity it introduces; specifically, understanding system time and resolving clock timing in distributed systems is a research area in and of itself, as noted [40]:

"In a distributed system, it is sometimes impossible to say that one of two events occurred first. The relation "happened before" is therefore only a partial ordering of the events in the system. We have found that problems often arise because people are not fully aware of this fact and its implications." (Lamport, 1978) [40]
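The three-message run of 8.4.2.1 to 8.4.2.3 can be condensed into a short sketch. To keep it compact, the signature blocks are modelled as plain dictionaries (the signing itself is elided, so this shows only the challenge round-trips and key comparisons); `run_key_confirmation` and its inputs are illustrative names, not part of the thesis.

```python
import os

def run_key_confirmation(alice, bob):
    """Simulate one honest exchange of Messages 1-3 and return whether both
    parties' views of the keys are mutually consistent."""
    ch_a = os.urandom(32).hex()                       # Alice's 256-bit challenge
    # Message 1: Alice -> Bob (would be signed with IK_Alice)
    m1 = {"sender": alice["id"], "sender_ik": alice["ik"],
          "recipient": bob["id"], "recipient_ik": alice["view_of_bob_ik"],
          "challenge": ch_a}
    ch_b = os.urandom(32).hex()                       # Bob's 256-bit challenge
    # Message 2: Bob -> Alice (would be signed with IK_Bob)
    m2 = {"sender": bob["id"], "sender_ik": bob["ik"],
          "recipient": alice["id"], "recipient_ik": bob["view_of_alice_ik"],
          "challenge_echo": m1["challenge"], "challenge": ch_b,
          "source": "CA"}                             # where Bob got Alice's key
    # Message 3: Alice -> Bob, closing the run (would be signed with IK_Alice)
    m3 = {"challenge_echo": m2["challenge"]}
    alice_ok = m2["challenge_echo"] == ch_a and m2["recipient_ik"] == alice["ik"]
    bob_ok = m3["challenge_echo"] == ch_b and m1["recipient_ik"] == bob["ik"]
    return alice_ok and bob_ok

ok = run_key_confirmation(
    {"id": "alice", "ik": "IK_Alice", "view_of_bob_ik": "IK_Bob"},
    {"id": "bob", "ik": "IK_Bob", "view_of_alice_ik": "IK_Alice"})
```

The echoed challenges tie each reply to the live run, and either party holding a fraudulent key for the other causes the run to fail, matching the back-to-back MITM argument above.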
8.4.2.4 Key Confirmation & Key State

At the end of the confirmation phase, Alice can have keys in one of three states:

• Confirmed
• At Risk
• Untrusted

Confirmed keys are those for which both parties have been able to perform mutual key confirmation. At Risk keys are keys which have not been confirmed; we deem these keys to be at risk. Untrusted keys are those where some element of the protocol has highlighted a discrepancy; they should therefore not be used. We should briefly discuss the protocol's approach of preferring the most security, which may be at odds with existing systems. In a store and forward environment, participants may not be online to complete confirmation. In this case those Identity Keys could be considered at risk; it is quite plausible that the messages have been dropped by the attacker. In the case of WhatsApp [23], the protocol has been designed to transmit everything in message headers, which allows the recipient to construct an encrypted session and decrypt the transmitted message.

"After building a long-running encryption session, the initiator can immediately start sending messages to the recipient, even if the recipient is offline. Until the recipient responds, the initiator includes the information (in the header of all messages sent) that the recipient requires to build a corresponding session." (WhatsApp Inc. 2016, WhatsApp Encryption Overview, Technical white paper) [23]

In our case, we would want to prevent that from happening until bidirectional key confirmation has been completed. If this is not done, unilateral impersonation is possible and an attacker can read messages destined for the original recipient.

8.4.3 Phase 3: Mesh Confirmation

Goal: To obtain the perspectives of peers within a given network on the Identity Keys of other users.
Result: Endpoints are able to obtain other perspectives on their own Identity Keys and those of others, and to detect discrepancies.

Once Key Confirmation has been completed with the intended recipient, we begin Mesh Confirmation. Mesh Confirmation is where Alice calls upon other members of the system to help with Key Confirmation. This is an important step which forces an attacker to take ever greater risks to subvert the system. During Mesh Confirmation Alice will call upon other system actors, Fred, Hillary and James, to confirm keys. The Mesh Confirmation message will involve the identity of a user, the Identity Key as observed by the requestor, and a response. The other important thing of note is that when an Endpoint is called upon to perform Key Confirmation, if it was not already monitoring that key binding it will now begin to do so.
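One way Mesh Confirmation responses could drive the three key states of 8.4.2.4 is sketched below. The policy shown, where any disagreeing peer marks the key Untrusted and an absence of responses leaves it At Risk, is one plausible reading of the text rather than a normative rule, and `classify_key` is a hypothetical name.

```python
def classify_key(observed_ik: str, peer_views: list) -> str:
    """Map Mesh Confirmation responses onto the key states of 8.4.2.4.

    observed_ik: the Identity Key the requestor obtained (e.g. from the CA).
    peer_views:  the Identity Keys reported by peers (Fred, Hillary, James)
                 for the same identity; an empty list means no peer could
                 complete confirmation (e.g. peers offline or replies dropped).
    """
    if any(view != observed_ik for view in peer_views):
        return "Untrusted"   # a discrepancy was highlighted; do not use the key
    if not peer_views:
        return "At Risk"     # no confirmation completed; messages may have been dropped
    return "Confirmed"       # all reachable perspectives agree
```

For example, a single dissenting peer is enough to flag a key, which is what forces the attacker to subvert ever more participants to stay undetected.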