The Resurrecting Duckling: Security Issues For Ad-Hoc Wireless Networks
1 Introduction
photographer wants to record a voice note with the picture, the camera must incorporate digital audio hardware; in the future, the camera might let him speak
into his digital audio recorder or cellphone. Each device, by becoming a network
node, may take advantage of the services offered by other nearby devices instead
of having to duplicate their functionality.
The three main constraints on Piconet, and on similar systems which support
ad-hoc networks of battery operated personal devices, are as follows:
Peanut CPU: the computing power of the processor in the node is typically
small, so large computations are slow.
Battery power: the total energy available to the node is a scarce resource. The
node likes to go to sleep whenever possible. It is not desirable to use idle
time to perform large computations in the background.
High latency: to conserve power, nodes are off most of the time and only
turn on their receiver periodically. Communicating with such nodes involves
waiting until they next wake up.
2 Availability
Availability means ensuring that the service offered by the node will be available
to its users when expected. In most non-military scenarios, this is the security
property of greatest relevance for the user. All else counts little if the device
cannot do what it should.
Battery exhaustion attacks are a real threat, and are much more powerful than better-known denial of service threats such as CPU exhaustion; once the battery runs out, the attacker can stop and walk away, leaving the victim disabled. We call this technique the sleep deprivation torture attack.
For any public access server, there is necessarily a tension between the con-
trasting goals of being useful to unknown users and not succumbing to vandals.
Whereas some applications can restrict access to known principals, in others
(such as web servers and name servers) this is infeasible since the very usefulness of the service comes from its being universally available.
If a server has a primary function (such as sending the outside temperature
to the meteorological office every hour) and a distinct auxiliary function (such as
sending the current temperature to anyone who requests it) then these functions
can be prioritised; a reservation mechanism can ensure that the higher priority use receives a guaranteed share of the resource regardless of the number of requests generated by the lower priority uses. (The highest priority use of all may be battery management: if one can estimate fairly accurately the amount of usable energy remaining, then the service can be monitored and managed, provided that the process does not itself consume too much of the resource it is intended to conserve.)
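
To make the reservation idea concrete, the following Python sketch shows one way a node might ration its estimated remaining energy between priority classes; the names (EnergyBudget, can_serve) and the share values are our own illustrative assumptions, not a description of Piconet's actual mechanism.

    class EnergyBudget:
        """Toy reservation scheme: each service class gets a guaranteed
        fraction of the estimated remaining energy, so low-priority
        requests cannot starve the primary function."""
        def __init__(self, remaining_joules, shares):
            # shares: fraction of the budget reserved per class; must sum to 1
            assert abs(sum(shares.values()) - 1.0) < 1e-9
            self.allowance = {c: remaining_joules * f for c, f in shares.items()}

        def can_serve(self, service_class, cost_joules):
            # Grant the request only if that class still has budget left.
            if self.allowance.get(service_class, 0.0) >= cost_joules:
                self.allowance[service_class] -= cost_joules
                return True
            return False

    budget = EnergyBudget(remaining_joules=500.0,
                          shares={"battery_mgmt": 0.05, "primary": 0.75,
                                  "auxiliary": 0.20})
    # The hourly report to the meteorological office draws on the primary
    # share; ad-hoc temperature queries are refused once the auxiliary
    # share is spent.
    print(budget.can_serve("primary", 0.2))    # True
    print(budget.can_serve("auxiliary", 0.1))  # True, until the 20% share is gone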
3 Authenticity
It follows that the length of any validity period (whether for certificates or
access control lists) will be a trade-off between timeliness and convenience. But
relying on expiration dates imposes on the nodes the extra cost of running a
secure clock—otherwise the holder of an expired certificate might reset a node’s
clock to a time within the validity period. As many Piconet nodes would not normally have an onboard clock, the classical approach to authentication is suspect.
Thankfully, there is a better way.
whom it was issued, who for this purpose might be wearing a very short range
radio ring: at present, in the USA, a large number of the firearm injuries sus-
tained by policemen come from stolen police guns. Similar considerations might
apply to more substantial weapon systems, such as artillery, that might fall into
enemy hands.
3.4 Imprinting
During the imprinting phase, as we said, a shared secret is established between
the duckling and the mother. Again, we might think that this is easy to do: if at least one of the two principals involved can perform the expensive public key operations (decrypt and sign), the other device simply generates a random secret, encrypts it under the public key of the powerful device, and gets back a signed confirmation.
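
As an illustration only (the next paragraph explains why many nodes cannot afford it), the following sketch implements that exchange with RSA-OAEP and RSA-PSS from the Python cryptography package; the choice of RSA, the key size and the variable names are our assumptions, not part of the scheme described above.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The mother (the powerful device) owns a key pair and can decrypt and sign.
    mother_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    mother_pub = mother_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # The duckling generates a random secret and encrypts it for the mother.
    shared_secret = os.urandom(32)
    wrapped = mother_pub.encrypt(shared_secret, oaep)

    # The mother recovers the secret and returns a signed confirmation.
    recovered = mother_key.decrypt(wrapped, oaep)
    confirmation = mother_key.sign(recovered, pss, hashes.SHA256())

    # The duckling checks the confirmation against the mother's public key;
    # verify() raises InvalidSignature if the confirmation is forged.
    mother_pub.verify(confirmation, shared_secret, pss, hashes.SHA256())
    assert recovered == shared_secret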
But many of our nodes lack the ability to do public key, and even if they
did it would still not help much. Suppose that a doctor picks up a thermometer
and tries to get his palmtop to do a Diffie-Hellman key exchange with it over
the air. How can he be sure that the key has been established with the right
thermometer? If both devices have screens, then a hash of the key might be
displayed and verified manually; but this is bad engineering: it is both tedious and error-prone, in an environment that can tolerate neither. We are not likely
to want to give a screen to every device; after all, sharing peripherals is one of
the goals of ad-hoc networking.
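
For concreteness, here is a sketch of the screen-based check just described, using X25519 from the Python cryptography package (our choice; the text does not prescribe a particular Diffie-Hellman variant): both devices derive the same secret and display a short digest of it for the doctor to compare by eye.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    palmtop_priv = X25519PrivateKey.generate()
    thermo_priv = X25519PrivateKey.generate()

    # Over-the-air Diffie-Hellman: each side combines its private key with
    # the other's public key and obtains the same shared secret.
    palmtop_secret = palmtop_priv.exchange(thermo_priv.public_key())
    thermo_secret = thermo_priv.exchange(palmtop_priv.public_key())

    def short_fingerprint(secret: bytes) -> str:
        # A few hex digits shown on each screen for manual comparison.
        return hashlib.sha256(secret).hexdigest()[:6]

    assert short_fingerprint(palmtop_secret) == short_fingerprint(thermo_secret)

As the text notes, this presumes a display on every device and a human willing to compare digits without error.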
In many applications, there will only be one satisfactory solution, and we
advocate its use generally as it is effective, cheap and simple: physical contact.
When the device is in the pre-birth state, simply touching it with an electrical
contact that transfers the bits of a shared secret constitutes the imprinting. No
cryptography is involved, since the secret is transmitted in plaintext, and there
is no ambiguity about which two entities are involved in the binding.
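
The state machine implied by this scheme fits in a few lines; the class and method names below are purely illustrative, but they capture the rule that a plaintext secret is accepted over the contact interface only in the pre-birth state.

    import os

    class Duckling:
        def __init__(self):
            self.secret = None              # None means pre-birth: imprintable

        def contact_imprint(self, secret_bits):
            # Imprinting happens over the physical contact, in plaintext;
            # it is refused once the duckling already has a mother.
            if self.secret is not None:
                raise PermissionError("already imprinted; kill the duckling first")
            self.secret = secret_bits

        def die(self):
            self.secret = None              # back to the imprintable state

    mother_secret = os.urandom(16)          # the mother generates the ignition key
    thermometer = Duckling()
    thermometer.contact_imprint(mother_secret)   # imprinting by physical touch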
Note that an imprinted duckling may still interact with principals other than
its mother—it just cannot be controlled by them. In our medical application, we
would usually want the thermometer to report the patient’s temperature to any
device in the ward which asked for it. Only in exceptional circumstances (such as
a celebrity patient, or a patient with a socially stigmatised condition) would the
patient require encrypted communications to a single doctor’s PDA. So should
we also have an option of imprinting the device with a cleartext access control
list (and perhaps the patient’s name), rather than an ignition key?
This brings us back to the issue raised at the end of section 1.2, namely
how we might enable a single device to support security mechanisms of differing
strength. The solution that we favour is to always bootstrap by establishing a
shared secret and to use strong cryptography to download more specific policies
into the node. The mother can always send the duckling an access control list
or whatever in a message protected by the shared secret. Having a key in place
means that the mother can change its mind later; so if the patient is diagnosed
HIV positive and requests secure handling of his data from then on, the doctor
does not have to kill and reinitialise all the equipment at his bedside. In general,
it appears sound policy to delegate from a position of strength.
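
As a sketch of this bootstrapping step (not a protocol specified here), the imprinted secret can be used directly as a key for authenticated encryption, under which the mother pushes a cleartext policy to the duckling; AES-GCM from the Python cryptography package and the example policy below are our own assumptions.

    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    ignition_key = os.urandom(32)        # established at imprinting time
    policy = {"report_temperature_to": "any ward device",
              "accept_control_from": "mother only"}

    aead = AESGCM(ignition_key)
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, json.dumps(policy).encode(), b"policy-update")

    # The duckling authenticates and decrypts the policy with the same key;
    # a message tampered with by anyone but the mother makes decrypt() raise.
    received = json.loads(aead.decrypt(nonce, ciphertext, b"policy-update"))
    assert received == policy

If the policy later changes (the celebrity patient, say), the mother simply sends a new message under the same key rather than killing and reinitialising the device.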
4 Integrity
So far we have seen that denial of service, the goals of authentication, and
the mechanisms for identifying other principals are surprisingly different in an
ad-hoc network. Is there any role for the more conventional computer security
mechanisms? The answer appears to be a qualified yes when we look at integrity.
Integrity means ensuring that the node has not been maliciously altered.
The recipient wants to be sure that the measurements come from the genuine
thermometer and not from a node that has been modified to send out incorrect
temperature values (maybe so as to disrupt the operation of the recipient’s
nuclear power plant).
in a bogus device; but if they meet the much weaker requirement of tamper-
evidentness (say with sealed enclosures), a forger will not be able to produce an
intact seal on the bogus device. So we will have confidence in a certificate which
we receive protected under an ignition key that we shared successfully with a
device whose seal was intact. (This is the first example we know of a purely
“bearer” certificate: it need not contain a name or even a pseudonym.) We will
now discuss this in more detail.
For nodes to be useful, there has to be a way to upload software into them, if
nothing else during manufacture; in many applications we will also want to do
this after deployment. So we will want to prevent opponents from exploiting the
upload mechanism, whatever it is, to infiltrate malicious code, and we will want
to be able to detect whether a given node is running genuine software or not.
Neither of these goals can be met without assuming that at least some core bootstrap portion of the node escapes tampering. The validity of such an assumption will depend on the circumstances: the expected motivation and ability of the attackers, and the effort spent not just on protecting the node with tamper-resistance mechanisms and seals, but on inspection, audit and other system controls.
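
The following sketch illustrates the assumption rather than any concrete design: a public key held in the untampered bootstrap core is used to check a signature over every uploaded image, so that only genuine software is installed. Ed25519 via the Python cryptography package and the function names are our choices for illustration.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()    # held by the manufacturer
    BOOT_ROM_PUBKEY = vendor_key.public_key()    # baked into the bootstrap core

    def upload_firmware(image: bytes, signature: bytes) -> bool:
        # The (assumed untamperable) bootstrap refuses unsigned or altered code.
        try:
            BOOT_ROM_PUBKEY.verify(signature, image)
        except InvalidSignature:
            return False
        # install the image here
        return True

    genuine = b"thermometer firmware image"
    assert upload_firmware(genuine, vendor_key.sign(genuine))
    assert not upload_firmware(b"trojaned image", vendor_key.sign(genuine))

If the bootstrap core itself can be tampered with, the check proves nothing, which is exactly the point made above.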
5 Confidentiality
We find that we have little to say about confidentiality other than remarking
that it is pointless to attempt to protect the secrecy of a communication without
first ensuring that one is talking to the right principal. Authenticity is where the
real issues are and, once these are solved, protecting confidentiality is simply a
matter of encrypting the session using whatever key material is available.
If covert or jam-resistant communications are required, the key material can be used to initialise spread-spectrum or frequency-hopping communication. Note that, in the absence of shared key material and an accurate time source, such techniques are problematic during the important initial resource discovery phase, in which devices try to determine which other nodes are nearby.
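
A minimal sketch of such keyed hopping, assuming an HMAC-based derivation and an arbitrary 79-channel band (neither of which is prescribed by the text): both parties compute the channel for each time slot from the shared key, which is exactly why the key and a common clock must already be in place.

    import hashlib
    import hmac
    import os

    NUM_CHANNELS = 79          # illustrative band; not a Piconet parameter

    def channel_for_slot(shared_key: bytes, slot: int) -> int:
        # Both nodes derive the same pseudo-random channel for each time slot;
        # without the key, an eavesdropper cannot predict the hop sequence.
        digest = hmac.new(shared_key, slot.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") % NUM_CHANNELS

    key = os.urandom(32)
    hop_sequence = [channel_for_slot(key, slot) for slot in range(5)]
    print(hop_sequence)        # identical on both nodes given the same key and clock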
6 Conclusions
We examined the main security issues that arise in an ad-hoc wireless network
of mobile devices. The design space of this environment is constrained by tight
bounds on power budget and CPU cycles, and by the intermittent nature of
communication. This combination makes much of the conventional wisdom about
authentication, naming and service denial irrelevant; even tamper resistance is
not completely straightforward.
There are interesting new attacks, such as the sleep deprivation torture, and
limitations on the acceptable primitives for cryptographic protocols. However,
there are also new opportunities opened up by the model of secure transient association, which we believe may become increasingly important in real networking
applications.
The contribution of this paper was to spell out the new problems and opportunities, and to offer a new way of thinking about the solution space—the
resurrecting duckling security policy model.
7 Acknowledgements
We thank Alan Jones for suggesting the wireless thermometer, a prototype of
which had just been built in the context of Piconet, as a minimal but still
meaningful practical example.
References
1. Ross Anderson and Markus Kuhn. Tamper resistance—a cautionary note. In
Proc. 2nd USENIX Workshop on Electronic Commerce, 1996.
2. Ross Anderson and Markus Kuhn. Low cost attacks on tamper resistant devices. In
Mark Lomas et al., editor, Security Protocols, 5th International Workshop Proceedings, volume 1361 of Lecture Notes in Computer Science, pages 125–136. Springer-
Verlag, 1997.
3. Infrared Data Association. http://www.irda.org/.
4. Frazer Bennett, David Clarke, Joseph B. Evans, Andy Hopper, Alan Jones, and
David Leask. Piconet: Embedded mobile networking. IEEE Personal Communications, 4(5):8–15, October 1997.
5. Kenneth J. Biba. Integrity considerations for secure computer systems. Technical
Report MTR-3153, MITRE Corporation, April 1975.
6. HomeRF Working Group. http://www.homerf.org/.
7. Jaap Haartsen, Mahmoud Naghshineh, Jon Inouye, Olaf J. Joeressen, and Warren
Allen. Bluetooth: Visions, goals, and architecture. ACM Mobile Computing and
Communications Review, 2(4):38–45, October 1998.
8. IEEE. IEEE standard for a high performance serial bus. IEEE Standard 1394,
1995.
9. Roger G. Johnston and Anthony R.E. Garcia. Vulnerability assessment of security
seals. Journal of Security Administration, 20(1):15–27, June 1997.
10. Konrad Lorenz. Er redete mit dem Vieh, den Vögeln und den Fischen (King
Solomon’s ring). Borotha-Schoeler, Wien, 1949.
11. Sun Microsystems. http://java.sun.com/features/1998/03/rings.html.
12. Kevin J. Negus, John Waters, Jean Tourrilhes, Chris Romans, Jim Lansford, and
Stephen Hui. HomeRF and SWAP: Wireless networking for the connected home.
ACM Mobile Computing and Communications Review, 2(4):28–37, October 1998.
13. Bluetooth SIG. http://www.bluetooth.com/.