Multicast is a communication primitive of significant utility in today's networked world. Roughly speaking,
multicast enables a node in a network to efficiently send a single unit
of data to an arbitrary set of recipient nodes.
By using multicast,
a node is spared the cost of sending a separate copy of the data to every target
recipient; it need only send one copy, and the network itself replicates and forwards
that copy in a manner that conserves overall bandwidth. The following
picture illustrates this with an example:
[Figure: transmitting to multiple recipients using individual point-to-point connections]
Multicast is different from broadcast - the latter applies only to the situation in which the target recipients include all nodes in the network (in the above picture, this would mean transmitting to all yellow nodes as well) and is thus only a special case of multicast. As one would imagine, multicast is useful in implementing many network applications that involve communication in groups, e.g. video conferences, Internet-based PayTV networks and online games. With an increase in the popularity of such applications, the importance of multicast is also increasing.
As in point-to-point communication, one of the most important security concerns in
multicast is maintaining privacy of communication. How do multiple network users who
are in the same multicast "group" exchange data such that no one outside the group can decipher what
is being sent?
A natural way to guarantee this would be to have users share a common key, called the
group key, and to require that all multicast
transmissions from any user within the group be encrypted using that key.
If we are guaranteed that the group key is known
only to group members, then such an encryption protocol
would trivially solve the privacy problem.
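To make this concrete, here is a toy sketch of encryption under a shared group key, using only Python's standard library. The keystream construction below is purely illustrative (an assumption of this sketch, not a vetted cipher); a real system would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only --
    # not a vetted cipher construction.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def group_encrypt(group_key: bytes, message: bytes) -> bytes:
    # Encrypt-then-MAC under the shared group key.
    nonce = os.urandom(16)
    pad = keystream(group_key, nonce, len(message))
    ct = bytes(m ^ p for m, p in zip(message, pad))
    tag = hmac.new(group_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def group_decrypt(group_key: bytes, blob: bytes) -> bytes:
    # Anyone holding the group key can decrypt; anyone else cannot.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(group_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong group key or corrupted message")
    pad = keystream(group_key, nonce, len(ct))
    return bytes(c ^ p for c, p in zip(ct, pad))
```

Every member holding the group key can both send and read; the hard part is keeping that key itself in sync with the changing membership.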
This, however, is not as easy as it sounds. What makes the task challenging is the fact that
multicast groups are highly dynamic - users leave and/or join them at various (a priori unknown)
instants. For example, the subscription base of an Internet pay-per-view service changes
every hour and naturally, the service
provider would like to keep its transmissions secret from any user who is
not currently subscribed to its service (even though such a user could be a subscriber
in the future or in the past).
The problem essentially boils down to designing a protocol that securely distributes
a group key to all members and that also updates this key
over time such that at any moment, all and only the current members
can compute its value.
More generally, such a key distribution protocol should guarantee that for any instant
t, the group key at time t
appears "as good as random" to
users who are
not in the group at time t,
and that this property be satisfied even if those users collude with one another.
The above problem, often referred to as multicast key distribution, or multicast encryption, is well-studied in the literature, and there has been a significant amount of research on this topic in the last decade. The problem was originally introduced by Fiat and Naor back in 1993 (although in their paper, it was defined for the specific setting in which all users are assumed to be stateless, a formulation referred to as broadcast encryption). Most existing protocols for multicast encryption use a centralized model of key distribution: it is assumed that there exists a group center, say Alice, who shares an individual "long-lived" key with every potential user of the multicast channel (see figure on the left) and who uses these keys to distribute the group key securely to all legitimate participants. (The long-lived key for any user i can be created when i first joins the group, and shared using standard techniques based on public-key cryptography.) A simple protocol for group key distribution in this model would be the following (see figure on the right): whenever a group membership change occurs (a user leaves or joins the multicast group), Alice generates a fresh key k and sends k encrypted under the long-lived keys of each of the current members. In the figure, users depicted in green are the current members, and so Alice sends k encrypted under their long-lived keys.
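The simple rekeying protocol just described can be sketched in a few lines of Python. The class and method names here are illustrative choices, not taken from any particular paper, and the XOR-with-hash `encrypt` helper merely stands in for a real symmetric cipher.

```python
import hashlib
import os

def encrypt(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a real symmetric cipher (e.g., AES-GCM):
    # XOR with a hash-derived pad under a fresh nonce.
    nonce = os.urandom(16)
    pad = hashlib.sha256(key + nonce).digest()[:len(msg)]
    return nonce + bytes(m ^ p for m, p in zip(msg, pad))

class GroupCenter:
    """Alice: shares a long-lived key with each user, rekeys on every change."""

    def __init__(self):
        self.long_lived = {}           # user -> long-lived key
        self.members = set()           # current group members
        self.group_key = os.urandom(16)

    def register(self, user):
        # Run once, when a user is first seen; in practice this key is
        # shared using standard public-key techniques.
        self.long_lived[user] = os.urandom(16)

    def rekey(self):
        # Fresh group key; one ciphertext per current member.
        self.group_key = os.urandom(16)
        return [(u, encrypt(self.long_lived[u], self.group_key))
                for u in sorted(self.members)]

    def join(self, user):
        self.members.add(user)
        return self.rekey()

    def leave(self, user):
        self.members.discard(user)
        return self.rekey()
```

Note the cost: every membership change triggers one ciphertext per current member, which is why much of the literature focuses on reducing this linear rekeying cost.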
While a lot of work has been done on improving the efficiency of multicast encryption protocols, the effort devoted to rigorously analyzing the security of these improvements has been meager. Protocols are typically analyzed using a symbolic model of computation, one in which keys and messages are treated as abstract data types and cryptographic primitives as symbolic operations performed on these data types. Moreover, it is assumed that these operations have perfect security properties and that malicious parties know nothing about their internal workings, nor can they exploit such knowledge in any way to subvert the protocol.
Symbolic security analysis treats cryptographic
primitives in a blackbox manner, assuming that
they have perfect security properties.
For example, if the adversary sees the encryption
of a message M under a key K and does
not know K, then he learns
nothing about M (nor anything about K).
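The symbolic (Dolev-Yao style) analysis itself amounts to computing a deduction closure: starting from what a party initially knows, keep opening any ciphertext whose key is already known. A minimal sketch, with ciphertexts modeled as tuples and key names like LL1 (a long-lived key) and GK2 (a group key) chosen purely for illustration:

```python
def derivable(initial, ciphertexts):
    # Closure of the adversary's knowledge under symbolic decryption:
    # a ciphertext ("enc", k, payload) opens iff k is already known.
    known = set(initial)
    changed = True
    while changed:
        changed = False
        for _tag, key, payload in ciphertexts:
            if key in known and payload not in known:
                known.add(payload)
                changed = True
    return known
```

For instance, if the center rekeys from GK1 to GK2 after user 3 leaves, sending ("enc", "LL1", "GK2") and ("enc", "LL2", "GK2"), then the evicted user, knowing only LL3 and the old key GK1, can still open old traffic encrypted under GK1 but can never derive GK2. The inductive proof of symbolic security is exactly the claim that this closure never contains the current group key for any non-member.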
Such a model of computation leads to simple, tractable proofs of security; for example, in the case of multicast encryption, one only needs to show, inductively, that non-members at any instant cannot recover the group key at that instant using repeated application of the symbolic decryption operation (starting from the set of keys known to them initially). In practice, however, cryptographic primitives are not perfect; they only have computational security properties (meaning that computationally bounded attackers cannot break them with non-negligible probability), and a conclusion from such an inductive analysis may not apply to actual implementations of protocols. Cryptographic operations like encryption can leak "partial" information about the involved messages and/or keys, and malicious users could potentially exploit such partial information leakage in order to break the protocol. So, is a protocol that has been proven secure in the symbolic model guaranteed to withstand real, computational attacks?
As you might expect, the answer is "no". In the following paper, we survey 11 multicast protocols from the literature and find that 9 of these protocols were published with only symbolic proofs of security. (Typically, the symbolic proofs were given using only high-level, informal arguments.) We uncover security weaknesses in 7 out of these 9 protocols and show that group keys in these protocols could be completely compromised in the face of computational attacks (even if the underlying primitives are instantiated securely).
Our security definition takes the form of a game against a computationally bounded adversary, Eve. Eve is given the power to make group membership changes and also to corrupt protocol participants at will. She can choose to corrupt multiple participants and acquire their internal state. (This models the ability of malicious users to collude with each other.) She is also given access to all information ever sent by Alice on the network. Her goal is to be able to distinguish between the group key for any instant when all group members are honest (i.e., not corrupted by Eve) and a purely random, independently-generated key. If we can prove that no matter what Eve does (as long as it takes polynomial time to do so), she can achieve this goal only with negligible probability, then the protocol would be called secure.
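This definition can be phrased as a game; the sketch below is a pseudocode-style rendering in Python, with the protocol and adversary interfaces (join, leave, state_of, choose_actions, guess) invented here for illustration.

```python
import os
import random

def security_game(protocol, eve, n_users):
    # One run of the key-indistinguishability experiment (sketch).
    transcript = []        # everything Alice ever sends on the network
    corrupted = set()      # users whose internal state Eve has acquired

    # Phase 1: Eve adaptively drives membership changes and corruptions.
    for action, user in eve.choose_actions(n_users):
        if action == "join":
            transcript.append(protocol.join(user))
        elif action == "leave":
            transcript.append(protocol.leave(user))
        elif action == "corrupt":
            corrupted.add(user)
            transcript.append(protocol.state_of(user))

    # Challenge: the real current group key, or a fresh random string.
    b = random.randrange(2)
    challenge = protocol.group_key if b == 0 else os.urandom(len(protocol.group_key))

    # Eve wins if she guesses b. Security: no polynomial-time Eve wins
    # with probability non-negligibly better than 1/2, for instants at
    # which no current member is corrupted.
    return eve.guess(transcript, challenge) == b
```

A protocol is secure under this definition if, over many runs, every efficient Eve wins only about half the time.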
The Best of Both Worlds. Proving security of protocols with respect to such a definition of security is not easy; to do this for nine protocols is bound to be quite time-consuming, too. To make the task easier, we propose taking the following approach. We first ask: Under what conditions does a protocol that is symbolically secure, also satisfy the above definition of security? As already discussed, not every protocol that is symbolically secure is provably secure against computational adversaries. However, by imposing certain simple syntactic constraints on the protocol, it is possible to bridge this gap between the symbolic and computational worlds. In the following two papers, we define a set of sufficient conditions, which we call safety conditions, and prove that if a protocol satisfies these conditions and is proven secure in the symbolic model of computation, then it is guaranteed to be computationally secure as well (provided, of course, that the underlying primitives are implemented in a computationally secure manner).
What is most important about the above results is that they considerably simplify the task of proving protocols secure. Given these results, protocols need not be proven (computationally) secure from scratch---if one can show that a protocol is symbolically secure (this can be done using simple inductive arguments) and also verify that it satisfies all our safety conditions (or that it can be made to satisfy them without significant modification), then computational security follows "automatically"! This gives us the best of both worlds---the ease of performing symbolic security proofs and the rigor of computational security definitions.
Our work is part of a broad area of research initiated by Abadi and Rogaway which tries to establish connections between symbolic and computational notions of security in the same manner that we do for multicast encryption above. While the result of Abadi and Rogaway is applicable to a large class of encryption protocols, it can be used to analyze security only against passive eavesdroppers. Our results involve a more general adversarial model, where the adversary (Eve) is given the power to adaptively influence the execution of the protocol and to corrupt protocol participants as well. (In addition, we also consider encryption protocols that make use of pseudorandom generators, or PRGs.) Our results could potentially be used to analyze security of any cryptographic task involving a combined usage of encryption and PRGs, and where security against adaptive adversaries is the desired goal; multicast encryption is a natural application that requires security analysis of this type.
In the following paper, we show that for a large class of encryption protocols (which includes most multicast encryption protocols in the literature), one can, in fact, argue about security against adaptive corruptions without resorting to non-standard models or non-standard notions of encryption, and obtain reasonably strong guarantees on the strength of the protocol against adaptive attacks.
Currently, the above result is applicable only to protocols that use encryption; extending this result to incorporate PRGs as well is part of our ongoing work.
 Amos Fiat and Moni Naor. Broadcast Encryption. Proceedings of CRYPTO, 1993.
 Ran Canetti, Juan Garay, Gene Itkis, Daniele Micciancio, Moni Naor, and Benny Pinkas. Multicast Security: A Taxonomy and Some Efficient Constructions. Proceedings of INFOCOM, 1999.
 Martin Abadi and Phillip Rogaway. Reconciling Two Views of Cryptography. Journal of Cryptology, Volume 15(2):103-127, 2002.
 Adi Shamir. How to Share a Secret. Communications of the ACM, Volume 22(11), 1979.