Academia.eduAcademia.edu

Concurrent oblivious transfer

Proceedings 41st Annual Symposium on Foundations of Computer Science

We consider the problem of designing an efficient oblivious transfer (OT) protocol that is provably secure in a concurrent setting, i.e., where many OT sessions may be running concurrently with their messages interleaved arbitrarily. Known OT protocols use zero-knowledge proofs, and no concurrent zero-knowledge proofs are known that use less than a poly-logarithmic number of rounds (at least without requiring a pre-processing phase, a public random string, an auxiliary string, timing constraints, or pre-distributed public keys). We introduce a model for proving security of concurrent OT protocols, and present a protocol that is proven secure in this model based on the Decisional Diffie-Hellman problem. The protocol is efficient, requiring only a slightly non-constant number of rounds.

Concurrent Oblivious Transfer J UAN A. G ARAY P HILIP M AC K ENZIE Bell Laboratories – Lucent Technologies 600 Mountain Ave. Murray Hill, NJ 07974, USA garay,philmac ✁ @research.bell-labs.com Abstract We consider the problem of designing an efficient oblivious transfer (OT) protocol that is provably secure in a concurrent setting, i.e., where many OT sessions may be running concurrently with their messages interleaved arbitrarily. Known OT protocols use zero-knowledge proofs, and no concurrent zero-knowledge proofs are known that use less than a poly-logarithmic number of rounds (at least without requiring a pre-processing phase, a public random string, an auxiliary string, timing constraints, or pre-distributed public keys). We introduce a model for proving security of concurrent OT protocols, and present a protocol that is proven secure in this model based on the Decisional DiffieHellman problem. The protocol is efficient, requiring only a slightly non-constant number of rounds. 1. Introduction Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced by Rabin [45]. Informally, an OT protocol is a two-party protocol in which Alice transmits a bit ✂ to Bob, Bob receives the bit with probability ☎✄ , and Alice does not learn whether Bob receives the bit. A vari☎ ✝ ✆ -OT, has Alice transmitting ant of the✂✠problem, called ✟☛✡☞✂ ✂✏✎ to Bob, Bob✄ choosing which bit to retwo bits ✞ ✂ ✎ ✄✍✌ ceive, Alice not learning , and Bob not learning . OT, ✑ ☎ ✝ ✄✠✒ ✆ -OT, and other variants have been studied extensively ✄ (e.g., [1, 2, 3, 5, 10, 9, 11, 22, 48, 19, 25, 30, 39, 37]) and have been used in many cryptographic protocols (e.g., [38, 43, 8]). Beaver [2] was the first to formally define security for OT in a stand-alone setting (hereafter referred to as stand-alone OT). He defined it in relation to an ideal system for OT in which Alice and Bob communicate through a trusted host that performs the OT operation, and in which all communication to and from the trusted host is absolutely private. To prove a real OT protocol is secure, one must be able to simulate the real protocol using an ideal OT protocol. The idea is that an adversary that breaks the real protocol can be used to create an adversary that breaks the ideal protocol, which by definition is impossible. Concurrent composition of protocols. When constructing cryptographic protocols for the real world, and in particular, when they are meant to be run over the Internet, they need to be secure even when instances of the protocols are run concurrently. There has been a great deal of work on securing authentication and key exchange protocols in a concurrent setting (e.g., [4, 6, 7, 24, 35, 36, 42, 44, 49]). 1 There has also been recent work on zero-knowledge protocols in a concurrent setting. Dwork, Naor and Sahai [27] considered the general case of concurrent zero knowledge in an asynchronous setting, and showed that assuming bounds on the relative speed of the processors, one can construct four-round zero-knowledge arguments for any language in NP. This result was improved by Dwork and Sahai [28], who showed that the timing assumption can be reduced to a pre-processing phase. The timing assumptions were later eliminated by Di Crescenzo and Ostrovsky [18], who presented a constant-round concurrent zero-knowledge protocol (proofs and arguments) with preprocessing for all languages in NP. Recently, Damgård [21] presented constant-round concurrent zero-knowledge protocols assuming an auxiliary string is available to all players (see [21]), and Canetti et al. [12] presented constantround concurrent zero-knowledge protocols in the publickey model. With no pre-processing or timing assumptions—i.e., the “vanilla” model, Kilian, Petrank and Rackoff [41] showed that if a language ✓ has a four-round proof-of-membership, and one can black-box simulate polynomially many asynchronous concurrent proofs, then ✓✕✔ BPP. The number of rounds in this lower bound has recently been increased to 1 This work was preceded by many attacks on previous protocols that were not designed with careful consideration of the concurrent setting. seven by Rosen [47]. Regarding upper bounds, Richardson and Kilian [46] presented a concurrent zero-knowledge proof for this setting but they required a polynomial number of rounds (polynomial in the concurrency). More recently, Kilian and Petrank [40] have reduced the number of rounds to poly-logarithmic. To our knowledge, there has been no previous work on OT in a concurrent setting (hereafter referred to as concurrent OT). On the other hand, there have been protocols which are meant to be run in a concurrent setting and which are based on OT [43, 8]. In these papers, OT is essentially assumed to be a black-box in which concurrently-running protocols have no effect on each other. However, all OT protocols in the literature have used zero-knowledge proofs, which, as discussed above, may not necessarily be secure when run concurrently! In fact, the only ones shown to be secure require at least a poly-logarithmic number of rounds (in the concurrency), and thus would be very costly in practice. ☎ ✝ Our results. In this paper we focus on ✆ -OT.2 We ✄ extend Beaver’s stand-alone security model for OT [2] to ☎ ✝ ✆ -OT in a concurrent setting. We then present a protocol ☎ ✝ ✄ for concurrent ✆ -OT in the standard model (i.e., no tim✄ ing constraints, pre-processing, auxiliary strings, etc.) and prove its security. Our protocol is based on a protocol due to Bellare and Micali [5], and is the first protocol to be proven secure without using zero-knowledge protocols (i.e., using only witness-hiding protocols). The security of our protocol is based on the Decisional Diffie-Hellman problem 3 and has a number of rounds (and number of exponentiations) which is only slightly non-constant. (Contrast this to the most efficient concurrent zero-knowledge protocol known which requires a poly-logarithmic number of rounds.) The technical contributions that underlie our results are as follows. A first contribution is to show an OT protocol that does not need to use a zero-knowledge protocol in order to be provably secure. This is not trivial, and in fact, the Bellare-Micali protocol that we extend is the only one that we have found that allows this. A major difficulty (especially in the concurrent setting), is to allow simulations without the zero-knowledge property of the ZK-protocols. That is, if a party must commit to certain values (or to the knowledge of certain values using a proof of knowledge), then our OT simulator must also, since the proof is not zeroknowledge. A second contribution is the development of a new proof of representations, in which some part of the representation ✝ for ✆ ✁ -OT imply protocols for other variants of OT [15]. 3 Most known (provable) OT protocols are based on specific complexity assumptions; in [33] an OT protocol is given based on any trapdoor permutation. 2 Protocols must be guaranteed to not be zero. This turns out to be very useful in our protocol, and may be useful in other contexts. A third contribution is the development of proofs of knowledge that work in a concurrent setting, even if we cannot guarantee perfect indistinguishability between normal execution and execution after the extractor “rewinds.” 4 We do this using a slightly non-constant number of standard three-round proofs of knowledge, and show that the extraction works with high probability for at least one of the standard proofs of knowledge. 2. Basic Cryptographic Concepts ✂ We say that a function ☎✞ ✄ is negligible if ✂ for any ✌ ✆✞✝✠✟☛✡ ✞✌☞ , there is an ✄✎✍ such that for ✄✑✏✒✄✎✍ , ✞☛✄ ✕✗✖ ✆✞✝✠✟☎✡ ✌ ✌✔✓ ✞☛✄ . ✌ Discrete logs and the DDH problem. For a prime ✘ , we will work in a group ✙✛✚ of order ✘ with generator ✜ . For a specific example, we could use a group formed by integer ✕ be prime, modular arithmetic as follows. Let ✆✣✢✤✟ ✘✦✥ and let ✧✩✪ ★ be the multiplicative group modulo ✆ , with ✜ a generator of order ✘ in ✧✦✪ ★ . Then ✙ ✚ would be the subgroup generated by ✜ . ✡ For ✫ ✢ ✜✭✬ , we use DL ✞☎✫ ✜ ✢✯✮ to denote the dis✌ crete log of✡ ✫ with respect to ✜ . Additionally, for ✰ ✢ ✜✞✱ , ✢ let ✲✩✳ ✞☛✫ ✰ ✜✭✬ ✱ . We assume the hardness of the De✌ cision Diffie-Hellman problem (DDH). In this problem, it is the goal of the polynomial-time adversary to distinguish the following two distributions with a non-negligible advantage over a random guess: ✡ ✡ ✡ ✡ ✞☎✫ ✰ ✧ with ✫ ✰ ✧ ✔✵✴✶✙✷✚ , and 1. ✞☎✫ 2. ✡ ✰ ✡ ✌ ✲✸✳ ✞☎✫ ✡ ✰ ✌ ✌ with ✫ ✡ ✔✎✴✹✙✷✚ . ✰ ✡ ✡ ✡ ✡ Representations. We let REP ✞ ✡ ✞✺✜ ✜ ☎ ☞✻☞✼☞ ✜✾✽ denote ✌ ✌ ✄ a representation of ✡ over multiple bases/generators, i.e., ✡✿✢ ✜❁❀❃❂ ✜✭☎ ❀✗❄ ☞✼☞✻☞❅✜ ✽ ❀✗❆ . In our protocols, we will need to prove ✄ knowledge of representations over multiple bases, but with the extra condition that the representation over one of them ✡ ✡ ✡ ✡ to deis non-zero. We use NZ-REP ❇❉❈✠✞ ✡ ✞✺✜ ✜ ☎ ☞✼☞✼☞ ✜ ✽ ✄ ✌ ✌ note a representation over the multiple bases, with the exponent on ✜✾❊ being non-zero. 3. Model and Definition of Secure OT ☎ ✝ Our model used to prove security of ✆ -OT in a concurrent setting is an extension of the model✄ used for standalone OT by Beaver [2]. 4 In particular, to avoid nested and interleaved rewinds in the concurrent setting, which could blow up the running time of the simulation, the extractor operates a rewind execution differently than the normal simulation operation. ☎ ✝ Security for ✆ -OT is defined☎ ✝ using an ideal system, ✄ which describes the service (of ✆ -OT) that is to be provided, and a real system, which✄ describes the world in which the protocol participants and adversaries work. The ideal system should be defined such that an “ideal world adversary” cannot (by definition) break the security. Then, intuitively, a proof of security would show that anything an adversary can do in the real system can also be done in the ideal system, and thus it would follow that the protocol is secure in the real system. The ideal system. As described above, the ideal system described by Beaver for stand-alone OT includes two play✂ ers, Alice and Bob, and a trusted host. Alice sends a bit ✡☞✂ ✡ to the trusted host, who then sends ✞ or ✞ to Bob ✌ ✌ with equal probability. Also, either player can send a quit message to the trusted host to abort the protocol. We assume all communication to or from ☎ ✝ the trusted host is absolutely private. Extending this to ✆ -OT we would first have Bob ✄ send a bit ✑ to the trusted host, then have Alice send two bits ✂ ✟ ✡☞✂ ✞ to the trusted host, and finally have the trusted host ✄ ✌ ✂ ✎ send bit to Bob. Security is considered with respect to a cheating Bob or a cheating Alice who tries to gain information that is not allowed. One can assume the bits that Alice and Bob send to the trusted host are a function of an environment , the distribution of which is known to the adversary. 5 (Often can be thought of as a random string drawn from some known distribution.) ☎ ✝ The ideal system for concurrent ✆ -OT has multiple Al✄ ice and Bob instances that send bits to, and receive bits from, the trusted host. Again the bits sent to the trusted host are assumed to be a function of an environment . Security is considered with respect to a set of cheating Bob instances or a set of cheating Alice instances.6 ☎ ✝ In both the stand-alone and the concurrent model of ✆ -OT, once Bob receives a bit from the trusted host, both ✄ the index of the bit received and the bit itself is output. This models the adversary somehow obtaining these values, and is necessary for proving the security of these protocols in a “black-box” fashion so they can be used within larger protocols. Because in the ideal system, (1) the sender in the ideal system cannot see the bit ✑ sent by the receiver, and (2) the ✂ ✎ receiver cannot see the other bit sent by the sender, it ☎ ✝ ✄ ✒ should be clear that the ideal ✆ -OT is completely secure. ✕ ✁ ✁ ✁ ✄ 5 Note that we specifically do not assume Alice or Bob has committed to any bits. This would be called “committed” or “verifiable” oblivious transfer [16, 17] and is beyond the scope of this paper. 6 This is similar to the model for concurrent zero-knowledge, in which the adversary may corrupt either the provers (when considering soundness), or the verifiers (when considering zero-knowledge) but not some of each. See Section 6 for more discussion on this issue. The real system. Now we (briefly) describe the real system in which we assume an OT protocol runs. As in the ideal system there will be multiple Alice and Bob instances. In the real system these are defined as state machines. Each instance starts in some initial state, and may transform its state based on its input bits (which could be computed as a function of an environment ), and after that, only when it receives a message. At that point it updates its state and possibly sends a response message As in the ideal system, security is considered with respect to a set of cheating Bob instances or a set of cheating Alice instances. ✁ Definition of security. Security for oblivious transfer is defined as follows: 1. Completeness: whenever any real world adversary faithfully delivers messages between an Alice and Bob instance the oblivious transfer completes successfully. 2. Simulatability for Bob (Alice): for every efficient real world adversary that corrupts a set of Alice (Bob) instances, there exists an efficient ideal world advercorrupting a corresponding set of Alice (Bob) sary instances such that produces a transcript of messages and adversary view that is computationally indistinguishable from that produced by . ✂ ✂✛★ ✂ ★ ✂ 4. An OT Protocol without Zero-Knowledge Proofs ☎ ✝ We arrive at our concurrent ✆ -OT protocol through a ✄ series a transformations. ☎ ✝ We start by describing a noninteractive (stand-alone) ✆ -OT protocol due to☎ Bellare and ✝ ✄ Micali [5]. We transform it into an interactive ✆ -OT proto✄ col that uses witness-hiding proofs of knowledge, which we ☎ ✝ then (in Section 5) turn into a concurrent ✆ -OT protocol with slightly non-constant number of rounds.✄ ✆ 4.1. The Bellare-Micali ✆ ✜ ☎ ✝ ✄ -OT protocol ✄ ✧✦✪ ★ ✧✩✪ ★ Let prime , generator of , and element ✔ be known to all users; it is assumed that nobody knows the discrete log of . Bob chooses his secret and public keys ✡ ✡ ✡ ✎ ✔ as follows: he picks ✑ ✔ and at ✎ ✎ random, and sets and . His public key ✄✠✡ ✒ ✎ ✟ ✡ is ✞ and his secret key is ✞ ✑ . Anybody can check ✌ ✄ ✌ that Bob’s public key is correctly formed, by checking that ✟ . Since the discrete log of is not known, Bob ✟ ✄ should not know the discrete logs of both and (as✄ suming finding discrete logs is infeasible); moreover, the public key does not reveal which of the two discrete logs ✂ ✟☛✡ ✂ . The protocol Bob knows. Alice has a pair of bits ✞ ✄ ✡ ✡ ✡ ✡ ✢ ✄ ✞✕ ✝ ✮ ✟☎ ☞✻☞✼☞ ✡✆ ✠✆☛ ✝ ✆ ☎ ✡ ✢ ✜✭✬✌☞ ✡ ✢✎❇✑✍ ✏ ☞ ✮ ✄ ✡ ✡ ✄✍✌ Alice ✂✁☎✄✝✆✞✁ Bob ☛✡ ✁✠✟ ✟ ✄ ✆ ✆✖✕☎✕✗✕✗✆☛✘✚✙✜✛✣✢ ✁✍✌✏✎✒✑✔✓ ✄✦✥★✧✪✩✪✫✣✆ ✥✬✧✪✩ ✆ ✤ ✁ ❂ ✥✬✮ ✩ ✭ ✭ ✄ ✁ ✁ ❂ ✄✰✆ ✆✖✲✱✢✰✳ Picks ✯ ✯ ✁ ✌ ✎ ✑✱✓ s.t. ✴ ✭ ✄✰✆ ✄✖✵✶✥✷✁✗✄✰✆✗✴ ✭ ✆ ✵✶✥✬✁ ✁ ✯ ✁ ✁ ✯ Picks ☞ ☞ ✤ Computes ✪ ✩ ✫ ✄✍✥✬✮ ✆ ✤ ✄ ✆ ✤ ✆ ✁ ✯ ✄ ✆ ✯ ✁ ✸ ✭ ✻ Figure 1. The Bellare-Micali non-interactive OT protocol; Bob’s public key is ✞ ✡ is shown in Figure 1. In the figure, ✽ ✮ ✡✿✾ denotes the inner product mod 2 of the strings ✮ and ✡ , and ❀ ✢❂❁ ✆❃❁ . Bellare and Micali note that to prove security, a Bob must also provide a zero knowledge proof of knowledge of his secret key. However, no proof of security is given. Actually, it is not clear how to prove security in Beaver’s security model, although including the zero-knowledge proof of knowledge seems to make their protocol “halfway” provably secure (that is, simulatable for the case when the adversary corrupts Bob). This proof would be based on the computational Diffie-Hellman problem (and uses the GoldreichLevin theorem [34]). 4.2. The new stand-alone OT protocol ✥ ✤✺✻✹ ☞ Computes ✁ ✥✼✴ ✆ ✵ ✻ ✭ ✻ ✡ ✯ ✻ ✟ ✡ ✡ and his secret key is ✞ ✑ ✄ ✌ ✡ ✮ ✎ ✌ keys. To our knowledge, a proof of knowledge for this type of special representation is new and might be of more general interest. Proof of knowledge of the special representation. For simplicity, we present here the POK of the particular representation that will be used in our OT protocol: over three bases, with one representation being non-zero; we defer the general cases (1 out of ❀ , several bases out of ❀ ) to the full version of the paper. The protocol is shown in Figure 2. representation Lemma 1 The protocol of Figure 2 is a witnessindistinguishable proof of knowledge protocol for ✡ ✡ ✡ NZ-REP❅ ✞ ✡ ✞✺✜ ✄ ✄ . ✌ ✌ We now present ☎ ✝ a modification of the protocol above that achieves secure ✆ -OT. Our protocol only uses a witness✄ hiding proof of knowledge (POK) [31] in place of the zeroknowledge POK from Bob to Alice that would be used in the protocol above. It also uses another witness-hiding POK from Alice to Bob after the first transfer in the protocol. The protocol is shown in Figure 3. In the figure, POK[ ☞ ] stands for a (witness-hiding) proof of knowledge protocol, namely, a so-called ❄ -protocol [23] consisting of three moves: an initial message from the prover (in this case, Bob), a challenge, and a response. (Messages, of course, could be combined; we show them separately for clarity.) The proofs are of disjunctions and conjunctions of statements. Efficient proofs of monotone (boolean) composition of knowledge statements were given in [14, 32]. Before describing the protocol more thoroughly, we first explain a particular construction that our protocol uses, namely, proofs of knowledge of representations over multiple bases, and, in particular, with the extra condition that the representation over one of them✡ is non-zero. (These ✡ ☎ ✡ ✡ ✡ ✞ ✞✺✜ ✜ ☞✼☞✼☞ ✜✾✽ and representations are denoted REP ✡ ✡ ✡ ✡ ✌ ✌ ✄ NZ-REP❇ ❈✍✞ ✡ ✞✺✜ ✜ ☎ ☞✼☞✻☞ ✜✾✽ , resp.—see Section 2.) This ✌ ✌ ✄ type of proof is needed for Alice to have a guarantee that Bob only knows the discrete log of one of the two public Proof Completeness is straightforward. For soundness, using the knowledge✡ extractor associated with the proof ✡ of knowledge, let ✞☛❆ ❆ ✍ ❆ ✍ ✍ be the representation extracted ✌ ✄ ❇☛❊ ❊ ); from the first part of the proof (i.e., ✡ ✢ ✜❈❇ ✄❉❇☛❊☎❋ ✡ ✡ ✢ ✞❍● ● ✍ ● ✍ ✍ the representation extracted from the second ( ■ ✡✿❏ ❏ ❏ ✌ ✜ ❊ ✄ ❊ ❊ ); ❑ the ✡discrete log extracted from the third (▲ ✢ ✄ ❀ ); and ✞☛▼ ▼❁✍ the representation extracted from ✖ ✌ the fourth ( ■ ▲ ✢ ✜✿◆ ✄✜◆ ❊ ). If ❆✌✍ ✍P✢ ❖ then the assertion holds, so assume otherwise. From proofs 2–4 we obtain ■ ✢ ✡❈❏ ✜ ❏ ❊ ✄ ❏ ❊ ❊ ✢ ✜◗◆✌❉ ✄ ◆✖❊☛✄ ❀ , which yields ✡❈❏ ✢ ❏ ❏ ❊ ✄ ◆✖❊ ✒ ❊ ❊ ✄ ❀ . There are two cases: ✜ ◆ ✒ ✢ ●❘❖ : Then ✡ ✢ ✜❚❙✗❯✺❱ ❊ ✄❲❙ ❊ ❯✺❱ ❊ ❊ ✄❨❳ . Since ▲❩✢ ❖ ❱ ❱ ❱ ✢ , this is a proper representation for ✡ . ❑✷❖ and thus ❱ ❊ ❊ ❯❬❙ ❊ , and thus the verifier ✜ ❱ ❊ ❯❬❙ ✄ : Then ✄ ✢ ❳ ❳ can compute a proper representation for ✡ (say, ✡ ✢ ✜❭❇ ✒ ❱ ❊ ❯❬❙ ✄✒❇☛❊ ✒ ❱ ❊ ❊ ❯❬❙ ❊ ✄ ). ❳ ❳ To prove witness indistinguishability, note that each proof of knowledge of a representation is witness indistinguishable, so even if the discrete logs of ✡ , ■ , ▲ , ✄ and ✄ are known (say ❪ , ❪ ☎ , ❪✱❫ , ❴ , and ❴✭✍ respectively), the verifier ✄ is only able to learn the following restrictions (all ❵❘❛◗❜ ✘ ): ● ✢ ✕ ❪ ✄ ✢ ✮ ✥❝❴❭❞ ✥❝❴ ✍ ❞ ✍ ✡ ✂✁☎✄✆✁ ✞ ✞ ✞ ✠✟☛✡ Picks ✝ ✝ ✝ ☞ ✍✌ ✍✌ ✌ ✏ ✄✁ ✌ Verifier Prover ✂✮ ✥★✧ ✹ ❊ ✆ ✎ ✮✿✆ ☞ ✑✏✓✒ ✆ ✥❲✲ ✂✮◗✆✗ ✧◗✆ ✕✖ ✥✬✮ ☎ ✢ ❪ ❫ ✢ ❪ ✥P● ✍ ✥ ● ✄ ✜✛ ❴◗● ✍ ✍ ✧◗✆ ✡ ❴ ✍ ❞ ✍ ● ✡ ❊ ✌ ✎ ❊❊ ✟ ❊ ✄ ✟✞✟ AND AND ✟ AND ❨✆✖ ✧◗✆ ☞ ✔✏ ✄ ☞✘✗ ✏ POK[(REP✆✖✂✮◗✆ REP ❨✆ DL REP ✟✞✟ ✚✙ ✟✞✟ ✡ Figure 2. Proof of knowledge of NZ-REP ❅ ✞ ❪ ✧ ✷✥✼ ✆ ✆ ✟ ✡ This system of equations has a solution for any ✞ ✮ ❞ ❞✾✍ ✌ satisfying ❪ ✢✯✮ ✥ ❴❭❞✷✥ ❴❁✍ ❞✠✍ . Note that no hardness as✄ sumptions are needed. ✢ ☎ ✝ The protocol. The new (stand-alone) ✆ -OT protocol is shown in Figure 3. The basic idea is the✄ same as that in the Bellare-Micali protocol. Alice gives Bob a value ✄ for which Bob does not know the discrete log (this ✄ corresponds to the value in the Bellare-Micali protocol), and forces Bob to commit to two values, one of them containing the value ✄ . However, we want to avoid ZK protocols, so everything is done with representations. This disallows ✡ ✖ trick which forces Bob the use of the simple ✞☎✜ ✬ ✜❁✬ ✌ to use in one of his two values, and thus Bob has to do a more complicated “NZ-REP” protocol, and include it in a more complicated boolean POK expression. On the Alice side, any reduction from Diffie-Hellman will require the value that the simulator sends for Alice in the first message to include a value for which the simulator does not know ✡ the discrete log. This is why we send two values ✞ ✄ , ✌ and simply prove knowledge of a representation of ✄ over and ✜ . This allows the simulator to incorporate a value with an unknown discrete log into . ✄ ✄ ✄ ✄ ✄ ✄ ✡ ✞✺✜ ✡ ✡ ✄ ✄ ✌ ✌ essentially by constructing a simulator that simulates the user instances in the real system using corresponding user instances in an ideal system.) Say plays the role of real-Alice (Alice in the real system). Then ★ plays the role of ideal-Alice and simulates real-Bob for . ★ accepts and ✄ and then runs the knowledge✡ extractor for the first POK. From this it re✡ ✢ ❏ ❏ ✞☎✄ ✞☎✜ , say ✄ ✜ ❊ . Then ✛★ chooses ceives REP ✌ ✌ ✡ ✟ ✢ ✮ ✡ ✮ ✡ ✢ ❏ ❏ ✢ ✍ ✔ ✴ ✧ ✚ and sets ✜✭✬ and ✄ ✒ ❊ ✜✭✬✝❊ ✒ ✄ ✜❁✬✝❊ . ¿From this point on, ✿★ follows the real-Bob protocol ✟☛✡ ✟☛✡ ✡ until the last step. When it receives ❴ ✡ ❴ , it uses its ☛ ✟ ✡ ✂✠✟ ✄ ✄ ✡ ✡ knowledge of DL ✞ ✜ and DL ✞ ✜ to compute both ✂ ✂✠✟☛✡ ✂ ✌ ✄ ✌ and . It sends ✞ to the trusted host, and has ideal✄ ✄✍✌ Bob send its ✑ value to the trusted host. (Note that ★ does not know the bit ✑ at this point.) The trusted host sends ✂✏✎ ✡ ✂ ✎ ideal-Bob the requested bit , and the pair ✞ ✑ is output. ✌ This will be perfectly indistinguishable from what would be output in the real world. ✂ ✂ ✂ ✂ ✄ ✄ ✄ ✂ ✄ ✤✣ ✂ ✤✣ ✂ Now say plays the role of real-Bob. Then ★ plays the role of ideal-Bob and simulates real-Alice. ★ follows the protocol for real-Alice, except that using the extractor associated with the second proof of knowledge, it extracts ✟ ✡ from (who is playing the discrete log ✮ of one of ✞ ✡ ✡ ✄ ✌ ✡ ✡ real-Bob), and a representation ✞ ✮ ✍ ❞ ❞✠✍ of the other one. ✌ If the extraction fails, or if the extractor extracts the discrete logs and representations of both, then ★ aborts. (Note that the extraction fails only with probability ✚✄ , and the other case will happen only with negligible probability because would essentially be computing discrete logs, as shown ✡ ✕ ✎ below.) Say ★ extracts the discrete log of ✡ for ✑ ✔ . that wants. Then ★ knows that ✑ is the index✂ of the bit ✂ ✟ ✛★ has ideal-Alice send her bits and to the trusted ✄ host, and then sends ✑ to the trusted host as ideal-Bob. (Note ✂✍✟ ✂ know the bits and at this point.) ✿★ that ★ doesn’t ✂✍✎ ✄ receives back from the trusted host, so now ✿★ knows ✂ ✂ ✂ ✂ ✂ Theorem 1 Assuming the hardness of DDH, the protocol of Figure 3 is a secure stand-alone OT protocol. ✂ Proof [Sketch] Verification of the completeness requirement is straightforward. For simulatability, we will transform a real-world adversary into an ideal-world adversary ★ that attacks the ideal system and outputs a transcript which is computationally indistinguishable from the one produced by attacking the real system. (We do this ✂ ✂ ✂ ✝ ✂ ✂ ☎ ✂ ✂ ✂ ✂ ✂ Alice ✂✁✗✄✰✆✠✁ ✞ ✆ ✞✞ ✆ Picks ✤ ✤ Computes ✄ ✥ ✤ ✟ ✌✏✎ ✥✬✧ ✆ ✟ ✡ ❊❊ POK[REP(✄ POK[(DL( ✕✖ ✁ ✕ ✮ ✂✮ ✆✖ ✧◗✆ ✁ ) AND NZ-REP ☎ OR ✂✮✣✄✔✆✖ ✧◗✆ ) AND NZ-REP☎ ✄ ✆ ✆ ✄ ✟ ✞✆ ✆ Picks ✁ ✁ ✮ Computes ✁ ✮✣✄✝✆ ✧ ✆ ✧ )] ✟ ✥★✧ ✁✄✂ ✻ ✆ ✯ ✻ ✞ ✯ ✥★✧ ✌ ✎ ✁✤✄ ✁ ✹ ❊ ✟☛✡ ✹ ✆ ❊ )) )) ] ✁✍✌ ✎ ✄ ✥✬✧ ✩✪✫ ✆ ❂ ✟ ✡ ✌ ✎ ✯ ✯ ✁ Picks ✁ ✄ ✥❲✲ If✄✍✥✬✮ ✩✪✫ then ✭ ✮ ✟ ✖ ✟ ✡ ✄ ✆ Picks ☞ ☞ Computes ✤ ✥★✧✪✩ ✄✰✆ (DL( ✸ ✆✖ ✧◗✆ ✮✣✄✰✆ ✮ ✎ ✤ ✁ ✄ ✆ ✧ ❊ Bob ☛✡ ✁ ✟ ✄ ✁ else ✁ ✥❲✲ ✁ If ✥✬✮ ✩ then ✭ ✄✦✥★✧ ✭ ✁ else ✭ ✁ ✁ ❂ ✥★✧ ✫ ✁ ✤ ❂ ✄✰✆ ✭ ✄✰✆ ✤ ✁ ✆ ✭ ✁ ✸ ✥ ✻ ✁ else Figure 3. A stand-alone ✆ ✂ ✎ ✂ ✎ ✣ ✎ ✎ , but not . ✛★ then computes ❴ and ✎ as in✣ the✎ real ✄✠✒ and . protocol, but chooses random values for ❴ ✂ ✄✠✒ ✆ ✄✠✒ If there is a distinguisher that can distinguish the transcript of from the transcript of ★ , then we can create a ✆ ✍ for DDH as follows. Take a DDH chaldistinguisher ✡ ✡ ✢ lenge: ✞☎✫ ✰ ✧ and set ✫ . Run the protocol as ✌ real-Alice against an adversary who plays the role of real✎ Bob, but extract the discrete log ✮ of ✡ and representation ✡ ✡ ✎ ✮ ✡ as in the simulation above, where ✑ is ✞ ✍ ❞ ❞✠✍ of ✌ ✄✠✒ the bit that wants. If the extractor extracts the discrete ✆ logs and representations of both, then ✍ simply computes the discrete log of ✫ and computes the correct answer to the DDH challenge directly. If the extraction fails, or ex✟ tracts both discrete logs and representations of both ✡✆ and ✡ but is not able to compute the discrete log of ✫ , ✍ an✄ swers “Random” and stops. (This happens with probability ☎ at most ✚✄ ✥ ✚✄ ✢ ✚ .) ✂ ✟ ✂ Now choose bits and according to the distribu✄ tion expected by (for instance, from a distribution that ). In the last step,✣ choose depends on the environment ✎ ✎ ✢ ✢ a random ❞❃✍ , set ❴ ✰ and either set ✂ ✂ ✄ ✂ ✂ ✂ ✁ ✄✠✒ ✄✠✒ ☎ ✝ ✄ ✭ If ✤ ✻✹ ✁ ✥❲✲ ✻ ✥ ✻ then ✓ -OT protocol ✂ ✎ ✣ ✕ ✎ ✢ if ✆ , or set ✆ randomly if ✄✠✒ ✄✠✒ . Now run on the transcript. If answers “real ✆ ✄✠✒ protocol,” then answer “True DH.” If answers “simulated protocol,” then answer “Random.” ✆ ✆ answers “real” and Let -real be the event that ✆ ✆ -sim the event that answers “simulated,” and let ❞ ✢ ✍✏✎ ✆ ✍✑✎ ✆ ✆ ❁ ✢ ❁ ✞ -real ”real” and let ❞ ✍ ✞ -real “sim” . Since ✌ ✌ distinguishes with non-negligible probability, we have that for some non-negligible ✒ : ✂ ✧✞✝✱❊✠☎ ✟ ❊✠✡☛✻ ✟ ✰☞✝✖❊ ❊✌✟☎❊✠✡✵✝ ✬ ❊ ✎ ✢ ✍✏✎ ✒✔✓ ✢ ❞ ✠ ✞ ✆ -real ❁ ”real” ❞ ✍ ✛ ✠ ✌ ✍✏✎ ✞ ✆ -real ❁ “sim” ✌ Let ✕ be the event that the extraction fails, and let ✕ ☎ be ✄ the event that the extraction succeeds but we cannot com✡ ✜ . Let pute ✑ , and furthermore cannot compute DL ✞✗✖ ✍✑✎ ✍✑✎ ✌ ❀ ✢ ▲ ✞✘✕ ✞✘✕ ☎ . Let ✕❨❫ be the event that and let ▲ ✍ ✢ ✌ ✄ ✌ the extraction succeeds and allows the✍✏simulator to com✡ ✎ ✞✘✕ ❫ . Note that pute DL ✞☛✫ ✜ directly. Let ▲✩✍ ✍ ✢ ✡ ✌ ✌ ✢ ▲ , ▲ ✍ , and ▲ ✍ ✍ are the same whether ✧ ✲✸✳ ✞☎✫ ✰ or ☎ ✡ ✌ ✢ ✧❂❖ ✲✩✳ ✞☎✫ ✰ , and that ▲✹✥ ▲ ✍ ✓ ✚ . Also note that when ✡ ✌ ✢ ✧ ✲✩✳ ✞☎✫ ✰ , Alice’s responses are exactly as in the real ✌ protocol, and when ✧ ✢ ❖ ✲✸✳ ☎✫ ✰ , Alice’s responses are exactly as in the simulated protocol. Therefore, ✡ ✞ ✌ ✍✏✎ ✆ ✍ ✞ ✢ is correct ✆ -real ❁ ✧ ✢ ✆ -sim ❁ ✧❂✢ ❖ ✍✏✎ ✍ ✎ ✏ ✍✏✎ ✍✏✎ ✍✏✎ ✗✕ ✗✕ ✗✕ ✄ ☎ ☛❞ ☛▲ ✲✸✳ ❁ ✢ ✧❂❖ ✲✸✳ ❫ ✠ ✠ ✌ ▲ ☎✫ ✰ ✞ ▲ ✕ ✠ ❞ ✥ ✌ ☎✫ ✰ ✞ ✌ ✌ ✡ ☎✫ ✰ ✞ ✁ ☛✄✂ ✠ ✌ ✍ ▲ ☛✄✂ ✠ ✠ ☛ ✥ ❞✠✍ ▲ ✕ ✢ ✧ ❂ ❖ ✲✩✳ ✢ ✧❂❖ ✲✩✳ ✡ ✲✩✳ ☎✫ ✰ ✞ ✥ ✌ ✌ ✡ ✲✸✳ ☎✫ ✰ ✞ ✥ ✌ ✌ ✡ ☎✫ ✰ ✥ ✞ ✌ ✌ ✌ ✌ ✡ ☎✫ ✞ ✰ ✥ ✥ 5.1. Proof of security ✁ ✕ ☛✄✂ ✍✍ ▲ ✕ ✌ ✢ ✢ ✧ ❂ ❖ ✌ ✌ ✍ ☛ ✠ ✥ ✠ ▲ ✞ ✧ ✕ ✍✍ ▲ ✞ ✞ ✌ ✞ ✁ ✠ ✍ ✒ ✕ ✌ ✍✏✎ ✏✍ ✎ ✡ ▲ ✠ ❞ ✍ ✥ ✞ ☛ ✲✸✳ ✌ ✠ ✕ ✞ ✰ ✌ ✢ ✧ ❂ ❖ ✞ ☎✫ ✞ ✡ ❁ ✞ ✞ ☛ ✡ ✲✩✳ ✞ ✞ ✢ ✍ ✎ ✑ ✍✏✎ ✌ ✞ ✥ ✍✍ ▲ ▲ Bob, and thus must transmit both bits (drawn from a simulated distribution) in the correct way. Given that the views seen by the adversary running against the extractor and the normal simulator are computationally indistinguishable, but nevertheless different, the basic argument that the extractor finishes in polynomial time does not hold. To get around this problem, we make sure the extractor for a given POK fails with polynomially small probability, and then after a non-constant number of independent attempts fails with only negligible probability. ✍ ✡ ✘ which is non-negligibly more than 1/2. Thus, if there is a distinguisher that can distinguish the transcript from the real world from that of the ideal world, then the DDH distinguisher constructed above would have a non-negligible advantage over a random choice. This completes the proof of the theorem. ✢ 5. The Concurrent OT Protocol We prove security by specifying a simulator that uses an ideal system to simulate adversary operations in a real system, so that the transcript of an adversary attacking the simulator is computationally indistinguishable from the transcript of an adversary attacking the real system. Let ✡ be the number of operations the adversary performs in the real system. ✡ must be polynomial in the security parameter ❀ , and without loss of generality we will ❀ . assume ✡ The Alice simulator. Here we describe the actions of the simulator for each possible operation of a “Bobcontrolling” adversary in the real protocol. We name the actions of the real user instances as follows: Alice0 Alice instance action to start the protocol (i.e., an Alice0 action for a real Alice instance is sending a message , and sending the first message in the first POK). ✄ Alice-challenge Alice instance action to respond to the challenge in the first POK. ✕ Alice instance action on receiving the initial Alice ● message in ✆✞✝✠✟ ❏ . Alice ● Alice instance action on receiving the response message in ✆✞✝✠✟ ❏ . The simulator performs actions as follows: ✞ ☎ ✝ The protocol for concurrent -OT is obtained from the protocol of Figure 3 by running sequentially ☎ proofs of knowledge of the same statement from Bob to Alice instead of just one, where ☎ is any non-constant (i.e., positive, increasing) function of the security parameter. (Note that the POK from Alice to Bob, i.e., the first POK in the protocol, is still only run once.) For a given user instance and ✕ ☎ , we use ✆✞✝✠✟ ❏ to denote the ● th POK for ● from Bob to Alice in the sequence for that user instance. For efficiency, the initial message in each POK can be combined with the final message from the preceding POK. To simplify exposition, however, we will assume that the messages remain separate. The reason we run the sequential proofs of knowledge is that the extractor for the proof of knowledge is not able to run exactly the same execution as the normal simulator, at least not without having to deal with possible nested rewinding and interleaving of proofs of knowledge. In particular, the normal simulator will always transmit random noise for the bit that cannot be read by Bob (because this bit is unknown to the normal simulator), while the extractor will not know the index of the bit that will be read by ✆ ✄ ✆☎ ✔ ✡ ✡ ✛ ✛ ✛ ✝ ✄ ✡ ✌ ✡ ✞ ✡ ✞ ☛ ✌ ✌ 1. ☛ ✠ for ✓✼● ✓☛☎ Alice0, Alice-challenge, Alice ❍● ✕ : The simulator behaves just as a real Alice instance would. ✡ ✞ ✕ ✌ 2. for ✓❂● ✓☞☎ : The simulator starts ✡ Alice ● separate look-ahead simulations starting at the current operation and running until the adversary stops. (Recall that the adversary performs at most ✡ operations total.) In each one, the simulator does not use the ideal system, but simply generates inputs for user instances from the correct input distribution, and behaves as in the real protocol. In each look-ahead, a random challenge is used for this proof of knowledge (i.e., ✆✞✝✠✟ ❏ in this Alice instance), and we say that the look-ahead succeeds if a valid response for that challenge is received. Otherwise, we say the look-ahead fails. ✡ ✕ ✕ ✞ ✌ ☎ After all look-ahead simulations are finished, the simulator proceeds as a real Alice instance would (i.e., the simulator generates and sends a challenge for the POK). ulator simply aborts.) The simulator sends that value ✑ to the trusted host in the ideal system, and has the ✂ ✟ ✂ ideal-Alice instance send her bits and . (Note that ✂ ✟ ✂ ✄ the simulator does not know and✂ at this point.) ✎ ✄ The simulator then receives the bit . ✎ ✎ and ✎ as in the Now the simulator generates ✎ and uniformly real protocol, and draws ✄✠✒ ✄ ✒ from ✙ ✚ . The simulator sends these values to Bob. Question: Why can’t we use a simple extractor (that succeeds with all but negligible☎ probability) for these POKs instead of the look-ahead simulations? ❴ ✡ Answer: A simple extractor would rewind after obtaining the last message of the POK from the adversary. But how do we perform the rewind phase with the exact same distribution as the normal simulation? This seems to require us to know the ✑ values used by Bob instances in concurrent executions, which leads to more rewinding in other POKs and thus the same type of concurrency problems encountered in concurrent ZK proofs. On the other hand, if the distributions are not exactly equal in the normal simulation and the rewind phase, we do not know how to construct an expected polynomial time extractor that succeeds (either because it extracted the desired knowledge, or because the normal simulation never received an answer in the POK) with all but negligible probability, even though the difference in success probabilities in the two types of simulation may be negligible. For instance, say the normal simulation succeeds with ✕✠✖✁ ✘ and the look-ahead succeeds probability ✕✗✖ with probability ✘ . The difference between these probabilities is negligible, but the expected simulation time would be about ✘ , which is super-polynomial. Since we cannot obtain negligible ☎ probability of success in the extractor, we run look-aheads (which are similar to rewind phases, but run before the normal simulation instead of after) to make the probability of failure polynomially small, and then run a non-constant number of POKs to force the probability of all POKs failing to be negligible. ✡ 3. ☎ ✡ : If all look-aheads for all POKs have Alice ✞ ✌ failed, or if all the look-aheads that succeed use exactly the same challenge as the simulator used in the normal (non-look-ahead) simulation, the simulator aborts the simulation. Otherwise, the simulator ex✡ ✡ ✠✍ tracts the values ✮ and ✞ ✮ ✍ from two responses ✌ to two distinct challenges in one of the POKs. The simulator can then determine the value ✑ used by the adversary. (If it cannot, because it✟ obtains discrete logs and representations for both ✡ and ✡ , the sim☛ ❞ ❞ ✄ ❴ ✣ ✣ The Bob simulator. The actions of the simulator against an Alice-controlling adversary in the real protocol: Bob0 Bob instance action on receiving the first message from Alice. Bob1 Bob instance action on receiving the first message of the first POK. ✡ ✕ Bob instance action to start . Bob ✞ ✡ ✌ Bob ✞ Bob instance action to respond to the challenge ✌ . in Bob-final Bob instance action to respond to the last message from Alice. The simulator performs the actions as follows: ❍● ❍● ✆✞✝✠✟ 1. ✆✞✝✠✟ ❏ ☛ ❍● ✡ Bob0, Bob ✞ ✕ ● ☎ ❏ ❍● ✡ for , and Bob ✞ for ✌ : The simulator behaves just as a real Bob instance would. ✼● ✕ ✓ ☎ ✓ ☛ ✌ ✓ ✓ ☛ 2. Bob1: The simulator saves the state of the execution for a later rewind to this point. Then the simulator generates and sends a challenge, as a real Bob instance would. 3. Bob ✞ : If the POK succeeds, Bob rewinds to the ✌ Bob1 step, and uses the knowledge extractor to ex✡ ✡ . After rewinding, the simulator tract REP ✞☛✄ ✞☎✜ ✌ ✌ performs a type of look-ahead simulation as the Alice simulator did above that uses no interaction with the ideal world and no further rewinding. Essentially, the look-ahead simulation chooses the ✑ value for each Bob instance according to the conditional distribution of ✑ values expected by the adversary given all previously revealed ✑ values. The reason that this works is that 3.1 Bob instances that have gotten to step ✟ ✕ ✡ ✕ Bob ✞ have already computed ✡ and ✡ ✌ ✄ values such that the simulator knows the discrete log ✂ of both (see below), and can thus ✎ compute sent from Alice for either ✑ ✢ ✕ ✢ or ✑ , and 3.2 Bob instances that have not gotten to step ✟ ✕ ✡ ✕ are free to set their ✡ and ✡ acBob ✞ ✌ ✄ cording to the ✑ chosen by the simulator. Thus the look-ahead simulation performed during rewinding is perfectly indistinguishable from the normal simulation, and thus the extraction can succeed in ✕ ✡ ✕ ✄ expected polynomial time (as in a stand-alone proofof-knowledge). ✡ ✡ After extracting REP ✞☛✄ ✞✺✜ , the simulator sets ✟ ✡ ✢ ✡ ✢ ❏ ❏ ✌✢ ✌ ✜❁✬ and ✄ ✜❁✬✝❊ , i.e., it knows ✒ ❊✺✜✭✬✝❊ ✒ ✄ the discrete log of both. ✄ ✄ Bob-final:✟ Using its knowledge of the discrete logs ✂ ✟ ✡ of both and ✡ , the simulator determines the bits ✂ ✄ and sent by Alice (controlled by the adversary), and ✄ sends them to the trusted host as the ideal-Alice instance. Then the simulator has the ideal-Bob instance send its bit ✑ to the trusted host. (Note that the ad✑ at this point.) The ideal-Bob versary does not know ✡ ✂ ✎ instance outputs ✞ ✑ . ✌ This completes the description of the Bob simulator. ✆ ✍✏✎ 4. Theorem 2 Assuming the hardness of DDH, the protocol described in Section 5 is a secure Concurrent OT protocol. Proof [Sketch] First consider the case of an Alicecontrolling adversary . Let ✿★ be defined as running on the simulator described above. (Thus ★ is an adversary for the ideal system.) Since the simulation of Bob-instances is perfectly indistinguishable from a set of real-Bob instances, if could break the real system, then ★ could break the ideal system, which is impossible. Now consider the case of a Bob-controlling . Let ★ be defined as running on the simulator described above. (Again, ★ is an adversary for the ideal system.) If there ✆ is a distinguisher that can distinguish the transcript of (running on the real system) from the transcript of ★ , then ✆ we can create a distinguisher ✡ ✡✍ for DDH as follows. Take ✢ ✞☛✫ ✰ ✧ a DDH challenge: ▼ . Whenever necessary, ✡✌ ✡ ✖✞✍ ✖ ✍ ✍ generate a new DDH instance ✞ ✖ which is a DH❀ ❀ ❀ ✌ triple if and only if the challenge is. (This can be done as described in [49].) Run against the simulator described above with the following modifications: When simulating Alice0, set ✢ ✖ (for a new ❑ ). ❀ ✡ When simulating Alice ✞ ✌ ✂✠✟☛✡☞✂ 1. Choose from the correct input distribution ✄ (conditioned on all previous inputs). ✂ ✂ ✂ ✂ ✂ ✢ ❞ -real ❁ ”real” ✆ ✞ ✒✔✓ ❞ ✠ ✍ ✍✏✎ -real ❁ “sim” ✆ ✞ ✠ ✌ ✌ ✛ Let ✕ be the event that an extraction fails, and let ✕ ☎ ✄ be the event that an extraction succeeds but we cannot com✡ ✜ . Let pute ✑ , and furthermore cannot compute DL ✞✗✖ ✌ ❀ ✕ ❫ be the event that an extraction succeeds and allows the ✡ ✡☎✄ ✕ ✡ ✢ simulator to compute DL ✞✗✖ ✡ ✜ directly. For ✂ , ✍✏✎ ✍✑✎ ✌ ❀ ✢ ❁ ✢ ✢ ❁ ✢ let ▲ ❊ ✡ ✞✘✕ ❊ ✧ ✲✩✳ ✞☛✫ ✰ and let ▲ ❊✍ ✞✗✕ ❊ ✧ ❖ ✌ ✌ . (In contrast to the proof for the standalone pro✲✸✳ ✞☎✫ ✰ ✆ ✡ ✌ ✌ tocol, ▲ ❊ may not be equal to ▲✩❊ ✍ .) Note that ▲ ☎ ▲ ☎ ✍ ✓ ✚ . ✡ Also note that when ✧ ✢ ✲✸✳ ✞☎✫ ✰ , Alice’s responses are ✡ ✌ exactly as in the real protocol, and when ✧ ✢ ❖ ✲✩✳ ✞☛✫ ✰ , ✌ Alice’s responses are exactly as in the simulated protocol. Therefore, ☛ ✍✏✎ ✞ ✆ ✍ is correct ✌ ✆ ❁ ✢ ✞ -real ✧ ✍✏✎ ✆ ❁ ✢ ✞ -sim ✧❂❖ ✍✏✎ ✢ ✂ ✂ ✆ Let -real be the event that answers “real” and ✆ -sim the event that answers “simulated,” and let ❞ ✆ ✢ ✍✏✎ ✆ ✍✑✎ ✆ ❁ ✢ ❁ ✞ -real ”real” and ❞✠✍ ✞ -real “sim” . Since ✌ ✌ distinguishes with non-negligible probability, we have that for some non-negligible ✒ : ✆ ✍✏✎ ✂ ✢ ✧ ✄ ❁ ✞✘✕ ☎ ✧ ✍✏✎ ✂ ❁ ✞✘✕ ✂ ✍✏✎ ✞✘✕❨❫ ✂ ❖ ✢ ❖ ✞☛❞ ✲✩✳ ✞☛✫ ✡ ✡ ▲ ✰ ✌ ✌ ✰ ✄ ▲ ❫ ✠ ✌ ✌ ✍✑✎ ✞ ✞ ✧ ✕ ❞ ✠ ✍ ▲ ✠ ✍ ✄ ✞☛▲ ✍ ✄ ✕ ❞ ✢ ✄ ▲ ☎✍ ✥ ✥ ☛ ☛ ❞✠✍ ✠ ☛ ▲ ✲✸✳ ❖ ✲✩✳ ✞☛✫ ✢ ❖ ✲✩✳ ✞☛✫ ✞☎✫ ✡ ✡ ✰ ✡ ✰ ✰ ✌ ✌ ✥ ✌ ✌ ✥ ✌ ✌ ✰ ✥ ✌ ✌ ✥ ✥ ☛ ✂ ▲ ❫✍ ✕ ✌ ✥ ☛ ✂ ☛ ✂ ✠ ✞ ❂ ✧ ❖ ✢ ✡ ▲ ❫ ✥✷▲ ❫✍ ✥ ✌ ✞☎✫ ✲✸✳ ✕ ✌ ✠ ✕ ✁ ✂ ▲ ☎✍ ✠ ✢ ✧ ✢ ✧ ✁ ✞ ✞ ✍✏✎ ✌ ✌ ✁ ▲ ☎ ✠ ✌ ✌ ✍✑✎ ✰ ✌ ✂ ✠ ✞☛✫ ✍✏✎ ✰ ✡ ✞☎✫ ✲✸✳ ✲✩✳ ✡ ✞☎✫ ✲✸✳ ▲ ☎ ✠ ✄ ☛ ☎ ✂ ✎ If ❀ ✣ ✢ ✕ ✎ , set ❴ ✢ ✣ ✖✞✍ ▲ ✒ ✥ ✠ ☛ ☛ ✄ ☛ ✎ ✢ and ✄✠✒ ✄✠✒ ✄ ✒ ✎ ❀ ✞✗✖✞✍ ✍ ✝✖✠❊ ✟☎❊✠✡ ✟☛✞✗✞ ✖ ✍ ✝✖❊ ❊ ✟ ❊✌✡✵✬✰❊ . Otherwise set ❴ and ✄ ✒ ❀ ✌✎ ❀ ✌ randomly. (This makes the protocol equiv✄✠✒ alent to the real protocol if and only if the DDH challenge is a DH-triple.) ✆ ✆ If answers “real protocol,” then guess “DH-triple;” if answers “simulated protocol,” then guess “random triple;” ✆ if the simulator for ✍ aborts because the extraction failed, ✑ could not be determined and furthermore or because ✡ ✜ DL ✞ ✖ could not be determined, then guess “DH-triple;” ✆ ✌ ❀ ✍ aborts because ✑ could not be deterif the simulator for✡ ✁ ✢ ✜ could be (say it is ), then if ✞✗✖ ✍ mined but DL ✞ ✖ ✌ ❀ ❀ ✌ ✖ ✍ ✍ , guess “DH-triple” else guess “random triple.” 2. ✕ ✡ ✡ ✠ ✘ which is non-negligibly more than 1/2, as long as ▲ is neg✄ ligible. Claim 1 ▲ ✄ is negligible. is the probability that in one of the at most Proof ▲ ✄ Alice-Bob instances, every set of look-aheads in the proofs fail. We will show that the probability of any set of ☎ ✕✗✖ (independently look-aheads failing for a given POK is of any other set of look-aheads), ☎✞✝ and thus the probability of ✖ the simulator aborting is , which is negligible. First, for a given POK in the simulation, note that the probability that gives a response in the simulation is the ✡ ☎ ✡ ✂ ✡ ✡ same as the probability that gives a response in the lookahead, since they both have the same distribution (because ✢ ✧ ✲✩✳ ☛✫ ✰ ). Let ✆ be this probability. Then the probability of getting a response in the simula✕ ✆ ❄ look-aheads fail is at most ✆ tion, when all of ✕✗✖ . This completes the proof of the claim. This completes the proof of the theorem. ✂ ✡ ✞ ✌ ✆ ☎ ✠ ✡ ✞ ✓ ☎ ✌ ✡ ✢ ✢ 6. Summary and Open Problems In this paper we introduced a model for proving security of concurrent OT protocols, and presented a protocol that is proven secure in this model based on the DDH problem. The protocol is efficient, requiring only a slightly nonconstant number of rounds. One immediate problem is to design a concurrent OT protocol with a constant number of rounds. One could also consider a stronger adversarial model for concurrent OT in which an adversary may simultaneously corrupt parties playing Alice and Bob roles, and may modify messages sent between honest parties. This model could also be considered for concurrent ZK, but a more basic model is generally used to highlight the fundamental problems underlying concurrency in ZK protocols. We have chosen to use a more basic model for concurrent OT for the same reason—to highlight the fundamental problems underlying concurrency in OT protocols. The stronger adversarial model (for either concurrent OT or concurrent ZK) generally requires the use of non-malleable primitives [26] to prove security, and thus causes protocols to be more complex and less efficient. References [1] D. Beaver. Security, Fault Tolerance, and Communication Complexity. PhD thesis, Harvard University, 1990. [2] D. Beaver. How to break a “secure” oblivious transfer protocol. In Advances in Cryptology—EUROCRYPT 92, volume 658 of Lecture Notes in Computer Science, pages 285–296. Springer-Verlag, 24–28 May 1992. [3] D. Beaver. Equivocable oblivious transfer. In Advances in Cryptology—EUROCRYPT 96, volume 1070 of Lecture Notes in Computer Science, pages 119–130. SpringerVerlag, 12–16 May 1996. [4] M. Bellare, R. Canetti, and H. Krawczyk. A modular approach to the design and analysis of authentication and key exchange protocols. In STOC’98 [50], pages 419–428. [5] M. Bellare and S. Micali. Non-interactive oblivious transfer and applications. In CRYPTO’89 [20], pages 547–557. [6] M. Bellare and P. Rogaway. Entity authentication and key distribution. In Advances in Cryptology—CRYPTO ’93, volume 773 of Lecture Notes in Computer Science, pages 232– 249. Springer-Verlag, 22–26 Aug. 1993. [7] R. Bird, I. Gopal, A. Herzberg, P. Janson, S. Kutten, R. Molva, and M. Yung. Systematic design of two-party authentication protocols. In Advances in Cryptology— CRYPTO ’91, volume 576 of Lecture Notes in Computer Science, pages 44–61. Springer-Verlag, 1992, 11–15 Aug. 1991. [8] M. Boyarsky. Public-key cryptography and password protocols: The multi-user case. In CCS’99 [13], pages 63–72. [9] G. Brassard, C. Crepeau, and J. Robert. Information theoretic reductions among disclosure problems. In 27th Annual Symposium on Foundations of Computer Science, pages 168–173, Toronto, Ontario, Canada, 27–29 Oct. 1986. IEEE. [10] G. Brassard, C. Crépeau, and J.-M. Robert. All-ornothing disclosure of secrets. In Advances in Cryptology— CRYPTO ’86, volume 263 of Lecture Notes in Computer Science, pages 234–238. Springer-Verlag, 1987, 11–15 Aug. 1986. [11] C. Cachin. On the foundations of oblivious transfer. In Advances in Cryptology—EUROCRYPT 98, volume 1403 of Lecture Notes in Computer Science, pages 361–374. Springer-Verlag, 1998. [12] R. Canetti, O. Goldreich, S. Goldwasser, and S. Micali. Resettable zero-knowledge. STOC 2000. [13] Proceedings of the Sixth Annual ACM Conference on Computer and Communications Security, 1999. [14] R. Cramer, I. Damgård, and B. Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In Advances in Cryptology—CRYPTO ’94, volume 839 of Lecture Notes in Computer Science, pages 174–187. Springer-Verlag, 21–25 Aug. 1994. [15] C. Crépeau. Equivalence between two flavours of oblivious transfers. In Advances in Cryptology—CRYPTO ’87, volume 293 of Lecture Notes in Computer Science, pages 350–354. Springer-Verlag, 1988, 16–20 Aug. 1987. [16] C. Crépeau. Verifiable disclosure of secrets and applications (abstract). In Advances in Cryptology—EUROCRYPT 89, volume 434 of Lecture Notes in Computer Science, pages 150–154. Springer-Verlag, 1990, 10–13 Apr. 1989. [17] C. Crépeau, J. van de Graaf, and A. Tapp. Committed oblivious transfer and private multi-party computation. In Advances in Cryptology—CRYPTO ’95, volume 963 of Lecture Notes in Computer Science, pages 110–123. SpringerVerlag, 27–31 Aug. 1995. [18] G. D. Crescenzo and R. Ostrovsky. On concurrent zeroknowledge with pre-processing. In Wiener [51], pages 485– 502. [19] G. D. Crescenzo, R. Ostrovsky, and S. Rajagopalan. Conditional oblivious transfer and timed-release encryption. In EuroCrypt’99 [29], pages 74–89. [20] Advances in Cryptology—CRYPTO ’89, volume 435 of Lecture Notes in Computer Science. Springer-Verlag, 1990, 20– 24 Aug. 1989. [21] I. Damgård. Efficient concurrent zero-knowledge in the auxiliary string model. In Advances in Cryptology— EUROCRYPT ’2000, Lecture Notes in Computer Science, pages 418–430. Springer-Verlag, 14–18 May 2000. [22] I. Damgård, J. Kilian, and L. Salvail. On the (im)possibility of basing oblivious transfer and bit commitment on weakened security assumptions. In EuroCrypt’99 [29], pages 56– 73. [23] I. B. Damgård. On the existence of bit commitment schemes and zero-knowledge proofs. In CRYPTO’89 [20], pages 17– 27. [24] W. Diffie, P. van Oorschot, and M. Wiener. Authentication and authenticated key exchanges. Designs, Codes and Cryptography, 2:107–125, 1992. [25] Y. Dodis and S. Micali. Lower bounds for oblivious transfer reductions. In EuroCrypt’99 [29], pages 42–55. [26] D. Dolev, C. Dwork, and M. Naor. Non-malleable cryptography (extended abstract). In Proceedings of the Twenty Third Annual ACM Symposium on Theory of Computing, pages 542–552, New Orleans, Louisiana, 6–8 May 1991. [27] C. Dwork, M. Naor, and A. Sahai. Concurrent zeroknowledge. In STOC’98 [50], pages 409–418. [28] C. Dwork and A. Sahai. Concurrent zero-knowledge: Reducing the need for timing constraints. In H. Krawczyk, editor, Advances in Cryptology—CRYPTO ’98, volume 1462 of Lecture Notes in Computer Science, pages 442–457. Springer-Verlag, 17–21 Aug. 1998. [29] Advances in Cryptology—EUROCRYPT 99, volume 1592 of Lecture Notes in Computer Science. Springer-Verlag, 1999. [30] S. Even, O. Goldreich, and A. Lempel. A randomized protocol for signing contracts (extended abstract). In Advances in Cryptology: Proceedings of Crypto 82, pages 205–210. Plenum Press, New York and London, 1983, 23–25 Aug. 1982. [31] U. Feige and A. Shamir. Witness indistinguishable and witness hiding protocols. In Proceedings of the Twenty Second Annual ACM Symposium on Theory of Computing, pages 416–426, Baltimore, Maryland, 14–16 May 1990. [32] J. Garay, M. Jakobsson, and P. MacKenzie. Abuse-free optimistic contract signing. In Wiener [51], pages 449–466. [33] O. Goldreich. Secure multi-party computation. manuscript. Available from http://www.wisdom.weizmann. ac.il/˜oded/PS/prot.ps. [34] O. Goldreich and L. A. Levin. A hard-core predicate for all one-way functions. In Proceedings of the Twenty First Annual ACM Symposium on Theory of Computing, pages 25–32, Seattle, Washington, 15–17 May 1989. [35] S. Halevi and H. Krawczyk. Public-key cryptography and password protocols. In Proceedings of the Fifth Annual ACM Conference on Computer and Communications Security, pages 122–131, 1998. [36] D. Harkins and D. Carrel. The internet key exchange. RFC 2409, 1998. [37] L. Harn and H. Lin. Noninteractive oblivious transfer. Electronics Letters, 26(10):635–636, 1990. [38] J. Kilian. Founding cryptography on oblivious transfer. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pages 20–31, Chicago, Illinois, 2–4 May 1988. [39] J. Kilian, S. Micali, and R. Ostrovsky. Minimum resource zero-knowledge proofs (extended abstract). In CRYPTO’89 [20], pages 545–546. [40] J. Kilian and E. Petrank. Concurrent zero-knowledge in poly-logarithmic rounds (extended abstract). Available from the Cryptology ePrint Archive, http://eprint.iacr.org/2000/013. [41] J. Kilian, E. Petrank, and C. Rackoff. Lower bounds for zero knowledge on the internet. In 39th Annual Symposium on Foundations of Computer Science, pages 484–492. IEEE, Nov. 1998. [42] S. Lucks. Open key exchange: How to defeat dictionary attacks without encrypting public keys. In Proceedings of the Workshop on Security Protocols, 1997. [43] M. Naor and B. Pinkas. Oblivious transfer and polynomial evaluation. In Proceedings of the Thirty-first Annual ACM Symposium on Theory of Computing, pages 245–254, Atlanta, Georgia, 1–4 May 1999. [44] R. Needham and M. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, 21(12):993–999, 1978. [45] M. O. Rabin. How to exchange secrets by oblivious transfer. Technical Report Tech. Memo TR-81, Aiken Computation Laboratory, Harvard University, 1981. [46] R. Richardson and J. Kilian. On the concurrent composition of zero-knowledge proofs. In EuroCrypt’99 [29], pages 415–431. [47] A. Rosen. A note on the round complexity of concurrent zero-knowledge. In M. Bellare, editor, Advances in Cryptology—CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pages 451–468. Springer-Verlag, 20– 24 Aug. 2000. [48] A. D. Santis and G. Persiano. Public-randomness in public-key cryptography (extended abstract). In Advances in Cryptology—EUROCRYPT 90, volume 473 of Lecture Notes in Computer Science, pages 46–62. Springer-Verlag, 1991, 21–24 May 1990. [49] V. Shoup. On formal models for secure key exchange. In CCS’99 [13], invited talk. Available from http://www.shoup.net/papers/skey.ps.Z. [50] Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, Dallas, Texas, 23–26 May 1998. [51] M. Wiener, editor. Advances in Cryptology—CRYPTO ’99, volume 1666 of Lecture Notes in Computer Science. Springer-Verlag, 15–19 Aug. 1999.