KASTEL
Oblivious Pseudo-Random Functions via
Garbled Circuits
Master's Thesis of
Sebastian Faller
1939715
at the Department of Informatics
Institute for Theoretical Computer Science (ITI)
Reviewer: Prof. Dr. Jörn Müller-Quade
Second reviewer: Prof. Dr. Thorsten Strufe
Advisor: M.Sc. Astrid Ottenhues
Second advisor: M.Sc. Johannes Ernst
3 September 2021 – 3 March 2022
Karlsruher Institut für Technologie
Fakultät für Informatik
Postfach 6980
76128 Karlsruhe
I declare that I have developed and written the enclosed thesis completely by myself, and
have not used sources or means without declaration in the text.
Karlsruhe, 03.03.2022
....................................
(Sebastian Faller)
Abstract
An Oblivious Pseudo-Random Function (OPRF) is a protocol that allows two parties, a
server and a user, to jointly compute the output of a Pseudo-Random Function (PRF).
The server holds the key for the PRF and the user holds an input on which the function
shall be evaluated. The user learns the correct output while the inputs of both parties
remain private. If the server can additionally prove to the user that several executions of
the protocol were performed with the same key, we call the OPRF verifiable.
One way to construct an OPRF protocol is by using generic tools from multi-party
computation, like Yao's seminal garbled circuits protocol. Garbled circuits allow two
parties to evaluate any boolean circuit, while the input that each party provides to the
circuit remains hidden from the respective other party. An approach to realizing OPRFs
based on garbled circuits was mentioned, e.g., by Pinkas et al. (ASIACRYPT 09). However,
OPRFs are used as a building block in various cryptographic protocols. This frequent usage
in conjunction with other building blocks calls for a security analysis that takes composition,
i.e., the usage in a bigger context, into account.
In this work, we give the first construction of a garbled-circuit-based OPRF that is secure
in the universal composability model by Canetti (FOCS 01). This means the security of our
protocol holds even if the protocol is used in arbitrary execution environments, even under
parallel composition. We achieve a passively secure protocol that relies on authenticated
channels, the random oracle model, and the security of oblivious transfer. We use a
technique from Albrecht et al. (PKC 21) to extend the protocol to a verifiable OPRF by
employing a commitment scheme. The two parties compute a circuit that only outputs a
PRF value if a commitment opens to the right server-key.
Further, we implemented our construction and compared the concrete efficiency with
two other OPRFs. We found that our construction is over a hundred times faster than a
recent lattice-based construction by Albrecht et al. (PKC 21), but not as efficient as the
state-of-the-art protocol from Jarecki et al. (EUROCRYPT 18), based on the hardness of
the discrete logarithm problem in certain groups. Our efficiency-benchmark results imply
that, under certain circumstances, generic techniques such as garbled circuits can achieve
substantially better performance in practice than some protocols specifically designed for
the problem.
Büscher et al. (ACNS 20) showed that garbled circuits are secure in the presence of
adversaries using quantum computers. This fact combined with our results indicates that
garbled-circuit-based OPRFs are a promising way towards efficient OPRFs that are secure
against those quantum adversaries.
Zusammenfassung
Eine Oblivious Pseudo-Random Function (OPRF) ist ein Protokoll, das es einem Server
und einem Nutzer erlaubt, gemeinsam die Ausgabe einer Pseudozufallsfunktion (PRF) zu
berechnen. Der Server besitzt den Schlüssel, unter welchem die Funktion ausgewertet wird.
Der Nutzer besitzt einen Eingabewert, an dem die Funktion ausgewertet wird. Der Nutzer
erhält die korrekte Ausgabe, während keine Partei die Eingabe der anderen erfährt. Kann der
Server dem Nutzer zusätzlich beweisen, dass in mehreren Protokollausführungen derselbe
Schlüssel verwendet wurde, so nennen wir die OPRF verifizierbar. Eine Möglichkeit, ein
OPRF-Protokoll zu konstruieren, ist, generische Techniken aus dem Bereich der sicheren
Mehrparteienberechnung, wie Yaos Garbled Circuits, zu verwenden. Garbled Circuits
erlauben es zwei Parteien, gemeinsam einen beliebigen booleschen Schaltkreis auszuwerten,
wobei die Eingaben beider Parteien geheim bleiben. Die Möglichkeit, eine OPRF mithilfe
von Garbled Circuits zu erhalten, wurde z.B. von Pinkas et al. (ASIACRYPT 09) erwähnt.
Allerdings werden OPRFs oft als Baustein in größeren Protokollen verwendet. Dieser
häufige Einsatz in Verbindung mit anderen Bausteinen erfordert eine Sicherheitsanalyse,
die Komposition, also die Verwendung in größerem Kontext, mit einbezieht.
In dieser Arbeit geben wir die erste Konstruktion einer OPRF an, die auf Garbled Circuits
basiert und deren Sicherheit gleichzeitig im Universal Composability-Modell von Canetti
(FOCS 01) bewiesen ist. Das bedeutet, unsere Sicherheitsanalyse ist auch dann noch
aussagekräftig, wenn das Protokoll in beliebigen Umgebungen, sogar unter paralleler
Komposition eingesetzt wird. Wir erhalten ein passiv sicheres Protokoll, das unter der
Annahme von authentifizierten Kanälen, des Random-Oracle-Modells und der Sicherheit
eines Oblivious-Transfer-Protokolls sicher ist. Wir setzen eine von Albrecht et al. (PKC
21) vorgeschlagene Technik ein, um unser Protokoll zu einer verifizierbaren OPRF zu
erweitern. Wir verwenden dazu ein Commitment Verfahren. Die Parteien berechnen einen
leicht veränderten Schaltkreis, der nur dann die PRF Ausgabe erzeugt, wenn sich ein
Commitment auf den Schlüssel des Servers korrekt öffnen lässt.
Zusätzlich haben wir unsere Konstruktion implementiert und vergleichen die Effizienz
mit zwei weiteren OPRF Konstruktionen. Die Experimente zeigen, dass unsere OPRF
mehr als 110-mal schneller ist als die Gitter-basierte OPRF von Albrecht et al. (PKC
21). Unsere Konstruktion ist allerdings nicht so effizient wie die OPRF von Jarecki et al.
(EUROCRYPT 18), die auf der Schwierigkeit der Berechnung diskreter Logarithmen basiert.
Unsere Experimente zeigen, dass unter bestimmten Umständen generische Techniken
wie Garbled Circuits eine wesentlich bessere Effizienz erreichen können als speziell auf
den Anwendungsfall zugeschnittene Protokolle. Büscher et al. (ACNS 20) haben gezeigt,
dass Garbled Circuits sicher gegen Angreifer sind, die im Besitz von Quantencomputern
sind. Nimmt man diese Tatsache mit unseren Ergebnissen zusammen, zeigt sich, dass
Garbled Circuit-basierte OPRFs ein wichtiger Schritt auf dem Weg zu effizienten und
gleichzeitig gegen derartige Quantenangreifer sicheren OPRFs sind.
Contents
Abstract
Zusammenfassung
1. Introduction
1.1. Contribution
1.2. Related Work
1.2.1. Diffie-Hellman-Based OPRFs
1.2.2. MPC-Based OPRFs
1.2.3. OPRFs from Post-Quantum Assumptions
1.3. Outline
2. Preliminary
2.1. Notation
2.2. Pseudo-Random Functions
2.3. Commitment Schemes
2.4. Universal Composability
2.5. Oblivious Transfer
2.6. Garbled Circuits
2.6.1. Boolean Circuits
2.6.2. Yao's Garbled Circuits
2.6.3. Garbling Schemes
2.6.4. Free-XOR
2.6.5. Half-Gates
2.7. Security of OPRFs
2.7.1. Simulation-Based Security
2.7.2. Universally Composable OPRFs
3. Construction
3.1. Adversarial Model
3.2. Security Notion
3.3. The Main Construction
3.4. Some Remarks on the Construction
3.5. Proving Security
4. Verifiability
4.1. Adapting the Construction
4.2. Proving Verifiability
5. Comparison of Concrete Efficiency
5.1. Garbled-Circuit-Based OPRF
5.1.1. Implementing the Garbling Scheme
5.1.2. Implementing the Protocol Parties
5.2. The 2HashDH Protocol
5.3. Lattice-based OPRF
5.4. Benchmarks
5.4.1. Running Time
5.4.2. Network Traffic
6. Conclusion
Bibliography
A. Appendix
A.1. Implementing the Hash to Curve Algorithm
A.2. Advanced Encryption Standard
A.2.1. Key Expansion
A.2.2. Add Round Key
A.2.3. Sub Bytes
A.2.4. Shift Rows
A.2.5. Mix Columns
A.3. Naor-Pinkas-OT
A.4. Actively Secure Garbled Circuits
A.4.1. Cut-and-Choose
A.4.2. Authenticated Garbling
A.5. Acronyms
List of Figures
1.1. Sketch of the Oblivious Pseudo-Random Function (OPRF) Functionality.
1.2. Usual Authentication With Password and Salt.
2.1. The Hiding Experiment.
2.2. The Binding Experiment.
2.3. Sketch of the Universal Composability Security Experiment.
2.4. Sketch of the 1-Out-of-2 Oblivious Transfer (OT) Functionality.
2.5. The Ideal Functionality FOT0 From [Can+02].
2.6. The Ideal Functionality FMOT From [Cho+13].
2.7. Our Ideal Functionality FOT.
2.8. The Garbled Circuit Protocol.
2.9. The Simulation-Based Privacy Game From [BHR12, Fig. 5].
2.10. The Simulation-Based Obliviousness Game From [BHR12, Fig. 5].
2.11. The Authenticity Game From [BHR12, Fig. 5].
2.12. The Procedures for Garbling a Function 𝑓.
2.13. The Ideal Functionality FAUTH From [Can00].
2.14. The Ideal Functionality FRO.
2.15. The Ideal Functionality FOPRF From [JKX18].
3.1. The Ideal Functionality FOPRF Inspired by [JKX18].
3.2. Our GC-OPRF Construction in the FOT, FRO, FAUTH-Hybrid Model.
3.3. Reduction on the Privacy Property of the Garbling Scheme.
3.4. Reduction on the PRF Property.
3.5. The Simulator Sim Part I. Simulation of Messages From FOPRF.
3.6. The Simulator Sim Part II. Simulation of Protocol Messages and the First Random Oracle 𝐻1.
3.7. The Simulator Sim Part III. Simulation of FOT.
3.8. The Simulator Sim Part IV. Simulation of the Second Random Oracle 𝐻2.
4.1. The Ideal Functionality FVOPRF Inspired by [BKW20; JKX18].
4.2. Our Verifiable VGC-OPRF Construction Part I.
4.3. Our Verifiable VGC-OPRF Construction Part II.
4.4. The Major Changes to Get a Simulator Sim for FVOPRF.
5.1. Overview of GC-OPRF.
5.2. Overview of 2HashDH.
5.3. Overview of the Benchmark Results.
5.4. Comparison of the Measured Running Times.
5.5. Comparison of the Measured Network Traffic.
A.1. Hash to Curve Algorithm.
A.2. Simplified Shallue-van de Woestijne-Ulas Mapping.
A.3. The Ideal Functionality FPre From [WRK17].
1. Introduction
A Pseudo-Random Function (PRF) is a function F : {0, 1}𝑚 × {0, 1}𝑛 → {0, 1}𝑙 , where F
takes a key 𝑘 ∈ {0, 1}𝑚 and an input value 𝑥 ∈ {0, 1}𝑛 and outputs a value 𝑦 ∈ {0, 1}𝑙 , and
where 𝑚, 𝑛, 𝑙 ∈ N are parameters that depend on the security parameter 𝜆 ∈ N. If the key
is chosen uniformly at random, the output of the function must be indistinguishable from
a uniformly random value. However, such a conventional PRF must be evaluated by a
single party, which knows 𝑘 as well as 𝑥. In certain settings, a stronger primitive might
be desirable. Imagine two parties where one party holds 𝑘 and the other party holds 𝑥. If
the two parties want to compute a pseudo-random value but hide their inputs from each
other, a normal PRF is not a solution. One party would need to send its input to the other
party in order to evaluate the PRF. This problem can be tackled by using an Oblivious
Pseudo-Random Function (OPRF). An OPRF for a certain PRF consists of two parties that
interact to jointly compute an output of the PRF. One party, called the server, holds the key
𝑘 of the PRF, and the other party, called the user, holds the input value 𝑥. In the end, the
user learns the output value 𝑦 = F𝑘 (𝑥), but nothing about the key 𝑘. The server obtains no
additional information from the interaction. In particular, it learns nothing about the user's
input 𝑥. The notion just described is also called a strong OPRF. For certain
applications, it might be too restrictive. Instead of demanding that the user learns only the
output value 𝑦 = F𝑘 (𝑥) but nothing about the key 𝑘, one can demand the following: The
user learns nothing about the key 𝑘 that would help in calculating further PRF outputs
𝑦′ = F𝑘 (𝑥′) for 𝑥′ ≠ 𝑥. This is called a weak or relaxed OPRF. An OPRF is called verifiable
if the server can prove to the user that the “right” key 𝑘 was used. More precisely, the
server convinces the user that the same server key was used in all interactions with the
client. Figure 1.1 depicts the general idea behind an OPRF execution.
Figure 1.1.: Sketch of the Oblivious Pseudo-Random Function (OPRF) Functionality.
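To make the PRF interface above concrete, the following minimal sketch instantiates F with HMAC-SHA-256. This instantiation is an assumption of the sketch, chosen only for illustration; it is not the PRF garbled later in this work.

```python
import hashlib
import hmac
import os

def prf(key: bytes, x: bytes) -> bytes:
    """F_k(x): a PRF, here instantiated with HMAC-SHA-256 (m = 256, l = 256)."""
    return hmac.new(key, x, hashlib.sha256).digest()

k = os.urandom(32)            # key k, chosen uniformly at random
y = prf(k, b"some input")     # deterministic given (k, x)
assert y == prf(k, b"some input")
assert y != prf(k, b"other input")
```

Without knowledge of 𝑘, the outputs are indistinguishable from uniformly random strings; with 𝑘 and 𝑥, anyone can recompute 𝑦.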
Motivation Conventional PRFs are a very useful and well-established building block.
They can be used e.g. for digital signatures and message authentication [BG90], checking
the correctness of memory [Blu+91], tracing sources of data leakage [CFN94], and many
more. However, there are scenarios in which the additional possibility to evaluate the PRF
obliviously between two parties is beneficial. Consider for example a typical password
authentication on a website. Nowadays, most websites avoid storing the password of a
client in the clear. Instead, a random value, the so-called “salt”, is generated. The user's
password and the salt are hashed, and only the hash value and the salt are stored by the
webserver. To authenticate itself, the user sends the password to the server, who in turn
recomputes the hash and compares it to the stored value. This is depicted in Figure 1.2.
Figure 1.2.: Usual Authentication With Password and Salt.
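The registration and login flow of Figure 1.2 can be sketched as follows. SHA-256 serves as a stand-in for the hash 𝐻; real deployments would prefer a deliberately slow password hash.

```python
import hashlib
import os

def register(pw: str):
    """Server side: draw a random salt s and store (H(pw || s), s)."""
    s = os.urandom(16)
    h = hashlib.sha256(pw.encode() + s).digest()
    return h, s

def login(pw: str, stored) -> bool:
    """Server side: recompute the hash from the sent password and compare."""
    h, s = stored
    return hashlib.sha256(pw.encode() + s).digest() == h

record = register("hunter2")     # stored by the webserver
assert login("hunter2", record)
assert not login("wrong", record)
```

Note that the password still travels to the server in the clear at every login, which is exactly the weakness discussed next.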
Clearly, the user has to send its password over the network every time the user wants
to authenticate itself to the server. If an adversary can eavesdrop on this authentication,
the cleartext password allows the adversary to try the password at another service, where
the user likely has used a similar password. Even if the communication is secured via
a Transport Layer Security (TLS) channel, the password might get stolen, e.g., because TLS
certificates of servers can get stolen. This problem can be avoided by using OPRFs.
The idea to use an OPRF for password-authentication lies at the heart of a construction,
called OPAQUE [JKX18]. OPAQUE is an asymmetric Password Authenticated Key Exchange
(aPAKE) that allows a user to authenticate itself to a server using a password. If the
authentication is successful, an ephemeral session key is exchanged. The whole interaction
only requires the user to send its password in the clear once, at the first registration.
Roughly speaking, the gist of OPAQUE is to let the user receive a pseudo-random value
𝑦 = F𝑘 (𝑥), where the key 𝑘 comes from the server and the input 𝑥 to the PRF is the user's
password. This pseudo-random value is then employed by the user to decrypt further
information from the server, i.e., asymmetric keys for an authenticated key exchange that
were generated at registration.
Additionally, there is a plethora of other interesting applications for OPRFs, including
private set intersection [JL09], password-protected secret sharing [JKK14; Jar+16], secure
keyword search [Fre+05], secure data de-duplication [KBR13], and privacy-preserving
lightweight authentication mechanisms [Dav+18].
As OPRFs are often used as building blocks to solve more complex cryptographic tasks,
it would be desirable to have a security analysis that takes composition into account. The
Universal Composability (UC) model by Canetti [Can01] offers such security guarantees.
That means in particular that security proofs in the UC-model remain meaningful, even if
the analyzed protocol is used in arbitrary contexts and might be executed in parallel or
with correlated inputs. Over the last years, several works investigated the security of OPRF
protocols in that model. To the best of our knowledge, Jarecki, Kiayias, and Krawczyk
[JKK14] were the first to define an ideal verifiable OPRF functionality in the UC-model.
Subsequent works [Jar+16; JKX18; BKW20] enhanced and modified the definition. Jarecki
et al. [Jar+16] and Jarecki, Krawczyk, and Xu [JKX18] dispensed with the verifiability
property.
Realization via Garbled Circuits The above described strong OPRF functionality can actu-
ally be seen as a problem in the field of Multi-Party Computation (MPC). The goal of an
OPRF is to securely compute the two-party functionality
(𝑘, 𝑥) ↦→ (⊥, F𝑘 (𝑥)),
where ⊥ denotes that the server receives no output. One of the most famous protocols
to solve this task is Yao's garbled circuits protocol [Yao86]. Garbled circuits are an active field of
research. The main idea is that two parties, Alice and Bob, want to compute a commonly
known boolean circuit 𝐶 on two input strings 𝑥, 𝑦 ∈ {0, 1}𝑛, where 𝑥 is only known to
Alice and 𝑦 is only known to Bob. At the end of the protocol, both parties should learn
the output of 𝐶 (𝑥, 𝑦), i.e., the circuit evaluated on the two input strings. However, Alice
should “not learn anything” about 𝑦 and similarly, Bob should not learn anything about 𝑥.
To the best of our knowledge, the apparent idea to realize an OPRF by using garbled
circuits was first described by Pinkas et al. [Pin+09]. However, we believe that a second
look at the idea is beneficial for several reasons:
• OPRFs are usually used as a building block to build more powerful cryptographic
protocols, see [JL09; JKK14; Jar+16; Fre+05; KBR13; Dav+18]. While e.g. [LP07]
consider security of garbled circuits in the simulation-based model of [Can98], we
prefer a treatment in the more current Universal Composability (UC) framework of
[Can01]. UC security offers strong security guarantees under composition. We can
also rely on more recent work on the formulation of an idealized OPRF functionality
by [Jar+16; JKX18]. We even argue without formal proof why the construction
of [Pin+09] does not satisfy the OPRF notion of [JKX18], i.e., does not UC-realize
their ideal OPRF functionality.
• The recent advances in the field of MPC have brought further improvements in the
concrete efficiency of garbled circuits, most notably, the work of [ZRE15]. This
allows for even more efficient implementations of the mentioned OPRF protocol
than described by [Pin+09].
• Pinkas et al. [Pin+09] do not consider verifiability of the OPRF. We adapt ideas from
[Alb+21] to achieve a verifiable OPRF.
Another point that makes an OPRF construction from garbled circuits interesting is
the fast progress in the field of quantum computing over the last years. Recently, Arute
et al. [Aru+19] claimed that they reached quantum supremacy for the first time. This
means they computed a problem on a quantum computer that would have taken a sig-
nificantly larger amount of time on a classical computer. Some researchers suggest that
practical quantum computing could be possible in the next two decades [Mos18; Bau+16].
Even if these estimates were over-optimistic, they make further progress in this field of
research conceivable. Quantum computers pose serious threats to classical cryptographic
constructions because the seminal work of Shor [Sho94] shows that the discrete logarithm
problem and the integer factorization problem can be solved efficiently by a quantum com-
puter. Therefore, it is necessary to further investigate post-quantum secure cryptographic
building blocks, i.e., building blocks that are secure against adversaries using quantum
computers.
Büscher et al. [Büs+20] showed that garbled circuits are secure in the presence of
adversaries using quantum computers, so-called quantum adversaries. Intuitively, this
is because garbled circuits rely on symmetric cryptography and Oblivious Transfer (OT)
and quantum adversaries have no substantial advantage over conventional computers in
breaking those primitives. Thus, garbled circuits are promising for providing a way of
achieving post-quantum secure OPRFs. Over the last decades, several works improved the
efficiency of garbled circuits dramatically, see Section 2.6. It is therefore an interesting
research question, whether a garbled-circuit-based OPRF will perform comparably or even
better than constructions that are directly based on presumably post-quantum secure
assumptions, such as the lattice-based construction by Albrecht et al. [Alb+21].
1.1. Contribution
In this work, we construct the first garbled-circuit-based OPRF that is secure under uni-
versal composition [Can01]. We argue informally why the garbled-circuit-based OPRF by
[Pin+09] does not UC-realize ideal OPRF functionalities like [Jar+16; JKX18] and show
how to overcome their limitation by introducing a further programmable random oracle
as in [JKX18]. We implemented the protocol and compared its concrete efficiency to the
OPRF protocols of [JKX18] and [Alb+21].
Technical Overview From a high point of view, our protocol follows the idea of Pinkas
et al. [Pin+09] that can be sketched as follows: If the server and the user participate in a
secure two-party computation, where the jointly computed circuit is a PRF, the resulting
protocol is an OPRF. However, we additionally introduced two hash functions. The first
hash function allows the user to hash an input string of arbitrary length to the input size
of the PRF. The second hash function is applied to the output of the garbled circuit and
to the original user input. Both hash functions will be modeled as random oracles. The
random oracles are crucial for the security proof, as both allow the simulator in the proof
to obtain information about the current simulated execution. But even more importantly,
we will need to program the second random oracle in certain situations. Roughly speaking,
this is because ideal OPRF functionalities in the style of [JKX18] compare the outputs of
the OPRF protocol with truly random values. But the UC-framework requires that the
compared output values are indistinguishable, even if the OPRF is used as a building block
in bigger contexts. That “bigger context” is modeled in the UC-framework by the so-called
“environment” machine. But if the environment somehow knew the input 𝑥 and the key 𝑘,
it could recompute F𝑘 (𝑥) itself, since a PRF is completely deterministic. Thus, the simulator in the
proof must be able to “adjust” the output, so it still “looks like the random output” of the
ideal functionality. That can be done by programming the second random oracle. We will
elaborate further on this in Section 3.4.
An execution of our protocol can be sketched as follows:
• The server chooses a uniformly random key 𝑘.
• The user hashes its input 𝑝 and receives 𝑥 = 𝐻1 (𝑝). It then requests a garbled circuit
from the server by sending Garble to the server.
• The server garbles the circuit of a PRF and creates input labels for its key as well as
for each possible input bit of the user. The server sends the garbled circuit, the key
labels, and additional information that is needed to evaluate the circuit to the user.
• The user and the server jointly execute a 1-out-of-2 OT for each input bit of the user.
The user sends the respective bit as choice bit and the server sends the two possible
labels as the message. The user obtains only the labels for his input 𝑥.
• The user evaluates the garbled circuit on the labels for his input and the labels for
the server's key 𝑘. It receives an output 𝑦 and computes 𝜌 = 𝐻2 (𝑝, 𝑦). The user outputs
𝜌.
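The steps above can be sketched end to end. In this sketch, garbling, the OTs, and the circuit evaluation are abstracted into a direct PRF call, and HMAC-SHA-256 plus hash-based stand-ins for the random oracles 𝐻1, 𝐻2 are assumptions of the sketch, not the instantiations used in the actual construction.

```python
import hashlib
import hmac
import os

H1 = lambda p: hashlib.sha256(b"H1" + p).digest()            # first random oracle
H2 = lambda p, y: hashlib.sha256(b"H2" + p + y).digest()     # second random oracle
PRF = lambda k, x: hmac.new(k, x, hashlib.sha256).digest()   # the circuit's PRF, abstracted

# Server: choose a uniformly random key k.
k = os.urandom(32)

# User: hash the input p down to the circuit's input size, x = H1(p).
p = b"user input"
x = H1(p)

# Garbling, the per-bit OTs, and the garbled-circuit evaluation are abstracted:
# at the end the user holds y = F_k(x) and has learned nothing else about k.
y = PRF(k, x)

# User: output rho = H2(p, y).
rho = H2(p, y)
assert rho == H2(p, PRF(k, H1(p)))
```

The two random oracles are exactly the hooks the simulator needs in the security proof: 𝐻1 lets it observe inputs, and 𝐻2 is where outputs can be programmed.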
This is also depicted graphically in Figure 5.1. We prove that the protocol from above
UC-realizes an ideal OPRF functionality. We use a slightly simplified version of the
functionality from [JKX18] for our proof. This means, for instance, that our protocol can be
used directly to instantiate the Password-Protected Secret Sharing protocol from [Jar+16].
To achieve verifiability, we use a technique proposed by Albrecht et al. [Alb+21]. We
assume that the server publishes a commitment 𝑐 on its key, serving as an “identifier” of that key.
Now, we do not only garble the circuit of a PRF but a circuit that outputs the PRF output
only if the “right key is used”. The circuit takes the user's input and the commitment 𝑐
as inputs from the user. The server provides its key and the opening information for 𝑐 as
input to the circuit. The new circuit calculates the PRF output, but only if the provided
commitment correctly opens to 𝑘. As this verification of the commitment is “hard-wired”
into the garbled circuit, the user still learns no additional information about 𝑘. But it can
be sure that the received output is from the server that can open 𝑐.
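A minimal sketch of the augmented circuit's logic follows, with a hash-based commitment as a stand-in for the commitment scheme. In the actual construction this check is hard-wired into the garbled circuit, so the user never sees 𝑘 or the opening information in the clear.

```python
import hashlib
import hmac
import os

def commit(k: bytes):
    """Hash-based commitment: c = H(k || d), with opening information d."""
    d = os.urandom(32)
    c = hashlib.sha256(k + d).digest()
    return c, d

def circuit(x: bytes, c: bytes, k: bytes, d: bytes):
    """The augmented circuit: output F_k(x) only if c correctly opens to k."""
    if hashlib.sha256(k + d).digest() != c:
        return None                                   # ⊥: commitment does not open
    return hmac.new(k, x, hashlib.sha256).digest()    # F_k(x)

k = os.urandom(32)
c, d = commit(k)                  # server publishes c as identifier of its key
x = b"hashed user input"
assert circuit(x, c, k, d) is not None                # right key: PRF output
assert circuit(x, c, os.urandom(32), d) is None       # wrong key: no output
```

The user supplies (𝑥, 𝑐) and the server supplies (𝑘, 𝑑); a correct output therefore certifies that the server used the committed key.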
Concrete Efficiency For the implementation, we used a C++ framework called the EMP-
Toolkit by Wang, Malozemoff, and Katz [WMK16]. We answer the question of how well
Toolkit from Wang, Malozemoff, and Katz [WMK16]. We answer the question of how well
our OPRF performs in comparison to the current state-of-the-art protocol, called 2HashDH,
by [JKK14; Jar+16; JKX18] and the lattice-based protocol by Albrecht et al. [Alb+21]. We
assess the efficiency of the implementation in terms of running time and communication
cost, i.e., the amount of data that has to be sent over the network. We performed our
experiments on a conventional consumer laptop and did not take network latency into
account. Our experiments show a noticeable gap in running time to the lattice-based
construction of [Alb+21]. Our construction is over 110 times faster than the lattice-based
protocol. As we explain in Section 5.3, this comparison has to be taken with a grain of salt.
The experiments further show that 2HashDH by [JKK14; Jar+16; JKX18] is still about 50
times faster than our construction and requires less than 100 B of communication. This is
not surprising as the protocol merely needs to exchange two points of an elliptic curve.
However, with a running time of about 65 ms and traffic of about 250 kB our protocol is
still in a reasonable efficiency range.
Implications of the Results Our experiments show that even though we employed the
“generic” garbled circuit protocol, the resulting construction was still significantly more
efficient than a special-purpose protocol based on lattices. This is somewhat surprising as
garbled circuits allow to evaluate any boolean circuit privately. The main reason for this
might be that garbled circuits are a matured cryptographic tool that was optimized several
times, see Section 2.6, while Albrecht et al. [Alb+21] claim that their protocol is the first
lattice-based Verifiable Oblivious Pseudo-Random Function (VOPRF). However, to reach a
reasonable range of efficiency, there still seems to be a long way to go for lattice-based
OPRFs.
Conversely, it is plausible that our garbled-circuit-based construction is secure in the
presence of adversaries with quantum computers, i.e., post-quantum secure, if an appro-
priate post-quantum secure OT protocol is chosen. The post-quantum security of garbled
circuits was formally proven by Büscher et al. [Büs+20], which makes the post-quantum
security of our protocol conceivable, even though we left a formal proof to future work.
Considering the benchmark results, we see garbled-circuit-based OPRFs as promising
candidates for practically efficient OPRFs that are secure in the presence of adversaries
with quantum computers.
1.2. Related Work
First, we give a quick overview of other OPRF constructions in the literature. We divided
them into three categories, depending on the underlying techniques of the protocols.
1.2.1. Diffie-Hellman-Based OPRFs
It is a well-known fact that a PRF can be constructed from a Pseudo-Random Generator
(PRG) by using a tree construction, see for instance [BS20]. However, as this construction is
not necessarily efficient, it is rather of theoretical interest. More efficient PRF constructions
rely on the computational hardness of certain problems. We will first focus on PRFs that
assume the hardness of variations of the Diffie-Hellman assumption. To the best of our
knowledge, there are three such PRF constructions for which there is an associated OPRF
protocol in the literature.
We start with the PRF introduced by [Jar+16; JKK14; JKX18]. It is the most important
of the Diffie-Hellman-based PRFs for this work. The underlying PRF can be formulated as
f_k^{2HashDH}(x) = H_2(x, H_1(x)^k),

where H_1 : {0, 1}* → G and H_2 : {0, 1}* × G → {0, 1}^n are modeled as random oracles
and G is a group of prime order q for which a “one-more” version of the Decisional
Diffie-Hellman (DDH) assumption holds. The corresponding OPRF protocol,
presented in [Jar+16; JKK14; JKX18] uses a technique which is sometimes referred to as
“blinded exponentiation”. This technique was first used in the context of blind signatures,
see [Cha83]. The main idea is that the user chooses some random r ∈ Z_q and sends g^r to
the server. In turn, the server calculates b := (g^r)^k and sends it back to the user. As the
user knows r, it can calculate b^{1/r} = g^k. Thus, it received g^k without revealing the actual
value of 𝑔 to the server. By combining this idea with the two random oracles, one gets
the protocol 2HashDH, depicted in Figure 5.2. This protocol is extremely efficient and the
security is analyzed by [Jar+16; JKK14; JKX18] in the UC-framework by [Can01]. There is
an ongoing effort to standardize this protocol by the Crypto Forum Research Group. See
[Dav+22] for the current draft.
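The blinded-exponentiation flow above can be sketched in a few lines. The following is purely illustrative: it uses a tiny, insecure subgroup of Z_p^* and a simplified hash-to-group map, whereas real instantiations of 2HashDH use a prime-order elliptic-curve group and a proper hash-to-curve function.

```python
import hashlib, secrets

# Toy group: order-q subgroup of Z_p^* with p = 2q + 1 (insecurely small!).
p, q, g = 1019, 509, 4

def h1(x: bytes) -> int:
    """Toy hash-to-group map (real protocols need a proper hash-to-curve)."""
    e = int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % (q - 1)
    return pow(g, e + 1, p)          # exponent in [1, q-1]: never the identity

def h2(x: bytes, elem: int) -> bytes:
    return hashlib.sha256(b"H2" + x + elem.to_bytes(4, "big")).digest()

def prf(k: int, x: bytes) -> bytes:
    """The underlying PRF f_k(x) = H_2(x, H_1(x)^k), computed in the clear."""
    return h2(x, pow(h1(x), k, p))

# Oblivious evaluation via blinded exponentiation:
k = secrets.randbelow(q - 1) + 1     # server's key
x = b"user input"
r = secrets.randbelow(q - 1) + 1     # user: blinding exponent
a = pow(h1(x), r, p)                 # user -> server: H_1(x)^r
b = pow(a, k, p)                     # server -> user: (H_1(x)^r)^k
y = h2(x, pow(b, pow(r, -1, q), p))  # user unblinds: H_2(x, H_1(x)^k)
assert y == prf(k, x)                # matches an in-the-clear evaluation
```

The assertion at the end checks that the blinding factor r cancels out, i.e., the user obtains exactly the value an in-the-clear evaluation would give.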
Another PRF was introduced by Naor and Reingold [NR04, Construction 4.1]. It is
defined as follows: Let p, q be primes such that q | p − 1. Let n ∈ N, k = (a_0, . . . , a_n) ∈ Z_q^{n+1},
and g ∈ Z_p^* be an element of order q. The Naor-Reingold PRF with key k on input
x = (x_1, . . . , x_n) ∈ {0, 1}^n is defined as

f_k^{NR}(x) = g^{a_0 · ∏_{i=1}^{n} a_i^{x_i}}.
Freedman et al. [Fre+05] proposed a constant-round OPRF protocol for the Naor-
Reingold PRF that uses OT and the idea of blinded exponentiation, similar to [Jar+16;
JKX18].
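For concreteness, the Naor-Reingold PRF can be evaluated as follows. This is a toy sketch with illustrative parameters that are far too small to be secure; the point is that the exponent arithmetic happens modulo the group order q.

```python
import secrets

p, q = 1019, 509        # toy primes with q | p - 1 (insecurely small)
g = 4                   # element of order q in Z_p^*

def nr_prf(key, x):
    """f_k(x) = g^(a_0 * prod_{i : x_i = 1} a_i) for key = (a_0, ..., a_n)."""
    a0, *rest = key
    exp = a0
    for a_i, x_i in zip(rest, x):
        if x_i:
            exp = (exp * a_i) % q   # exponents live in Z_q, the group order
    return pow(g, exp, p)

n = 8
key = [secrets.randbelow(q - 1) + 1 for _ in range(n + 1)]
x = [1, 0, 1, 1, 0, 0, 1, 0]
y = nr_prf(key, x)
assert y == nr_prf(key, x)          # deterministic for a fixed key
```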
The third PRF, introduced by Dodis and Yampolskiy [DY05, Sec. 4.2], is defined as
follows: Let G = ⟨g⟩ be a cyclic group of order q and let k ∈ Z_q be uniformly random. The
Dodis-Yampolskiy PRF on input x ∈ Z_q is defined as

f_k^{DY}(x) = g^{1/(x+k)}.
The security of the PRF is based on the so-called Decisional 𝑞-Diffie-Hellman Inversion
Problem (𝑞-DHI). Jarecki and Liu [JL09] and Belenkiy et al. [Bel+08] gave protocols to
obliviously evaluate the above PRF. Both protocols employ a homomorphic encryption
scheme, e.g. Paillier [Pai99].
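The Dodis-Yampolskiy PRF is equally short to evaluate; note that the inversion 1/(x + k) takes place in Z_q. Again a toy sketch with insecure parameters.

```python
import secrets

p, q, g = 1019, 509, 4   # toy parameters: q | p - 1, g of order q in Z_p^*

def dy_prf(k: int, x: int) -> int:
    s = (x + k) % q                  # exponent arithmetic lives in Z_q
    if s == 0:
        raise ValueError("x + k = 0 (mod q): PRF undefined on this input")
    return pow(g, pow(s, -1, q), p)  # g^(1/(x+k))

k = secrets.randbelow(q - 1) + 1     # uniformly random key
x = 42 if (42 + k) % q else 43       # dodge the single undefined input
y = dy_prf(k, x)
assert pow(y, (x + k) % q, p) == g   # (g^(1/(x+k)))^(x+k) = g
```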
All three Diffie-Hellman-based OPRFs share the limitation that Shor's algorithm
[Sho94] will render them insecure if sufficiently strong quantum computers become
available. Additionally, the security proofs of [JL09; Bel+08] and [Fre+05] do not consider
composition. In particular, they do not analyze their protocols in the UC-model of Canetti
[Can01], as [Jar+16; JKK14; JKX18] and we do in our work.
1.2.2. MPC-Based OPRFs
The second category of OPRF protocols relies on techniques from Multi-Party Computation
(MPC).
Pinkas et al. [Pin+09] argue that it is possible to realize an OPRF by using Yao's garbled
circuits, see Section 2.6.3. Garbled circuits allow two parties to jointly evaluate any boolean
circuit, while the input of each party is hidden from the respective other party. If the
calculated circuit is a description of a PRF, the resulting output is the desired pseudo-
random value. The privacy requirement for the OPRF is satisfied as the garbled circuit
protocol guarantees the privacy of the inputs. Pinkas et al. [Pin+09] do not give a formal
proof of security. However, they refer to the general proof for garbled circuit security
in the presence of active adversaries of Lindell and Pinkas [LP07]. The simulation-based
proof of [LP07] uses the framework of Canetti [Can98] that even considers composition
to a certain extent. Albrecht et al. [Alb+21] sketch an idea of how to achieve verifiability
from a garbled-circuit-based OPRF.
Kolesnikov et al. [Kol+16] choose a different MPC-based approach. They use efficient OT
extensions, see [Ish+03], to instantiate something close to an OPRF protocol. The security
notion they define is called batched, related-key OPRF (BaRK-OPRF). This notion is very
similar to usual OPRFs. However, there are certain differences. The word “batched” means
that the user can query pseudo-random output for 𝑚 ∈ N different input values 𝑟 1, . . . , 𝑟𝑚 .
Each pseudo-random answer will be calculated using a different PRF key. “Related key”
means that each PRF key comprises two components (k, k_i), and within one batch of
input values r_1, . . . , r_m, i.e., within one protocol execution, the first component of the PRF
key stays the same. Therefore, all pseudo-random outputs are calculated under related keys.
Kolesnikov et al. [Kol+16] observe that an OT of random messages can be interpreted
as a very simple OPRF. Concretely, if an OT-sender sends uniformly random messages
𝑚 0, 𝑚 1 ∈ {0, 1}𝜆 via OT and the receiver chooses one of them via a choice bit 𝑏 ∈ {0, 1},
the performed protocol is an OPRF for the PRF
F : {0, 1}^{2λ} × {0, 1} → {0, 1}^λ;   F_{(m_0, m_1)}(b) = m_b.
They improve the OT extension protocols from Ishai et al. [Ish+03] and Kolesnikov
and Kumaresan [KK13] and achieve an efficient OT extension protocol for 1-out-of-𝑛 OT
for exponentially large 𝑛 ∈ N. By combining this with the above idea, one gets an OPRF
with input domain {1, . . . , 𝑛}. They analyze the security of their protocol in the UC-model
of [Can01], as we did for our construction. A further similarity between our protocols
is that both rely only on the security of OT and symmetric cryptography. In contrast
to BaRK-OPRF, our construction does not enforce keys to be related. Servers can use
completely independent keys for different OPRF executions.
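The OT-as-OPRF observation can be made explicit in a few lines; here the OT protocol itself is abstracted away and only the key/input/output roles are shown.

```python
# Random-OT viewed as a minimal OPRF: the sender's two random messages form
# the PRF key (m0, m1), the receiver's choice bit is the PRF input, and the
# OT output m_b is the PRF value. The OT protocol itself is abstracted away.
import secrets

LAMBDA = 16  # bytes, i.e. lambda = 128 bits

def F(key, b):
    m0, m1 = key
    return m1 if b else m0          # F_{(m0,m1)}(b) = m_b

key = (secrets.token_bytes(LAMBDA), secrets.token_bytes(LAMBDA))  # sender side
b = 1                                                             # receiver side
received = F(key, b)   # what the receiver would obtain from the random OT
assert received == key[1]
```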
1.2.3. OPRFs from Post-Quantum Assumptions
To the best of our knowledge, there are two OPRF constructions that directly rely on
presumably post-quantum secure assumptions in the literature. Presumably post-quantum
secure means that the cryptographic community currently does not believe that a
quantum computer is significantly more efficient in breaking those assumptions than
a conventional computer. This distinguishes post-quantum secure assumptions such
as Short Integer Solution (SIS), Learning With Errors (LWE), or the problem of finding
isogenies between supersingular elliptic curves from integer factorization or the Discrete
Logarithm (DLOG) problem.
In fact, both constructions claim that they are even verifiable OPRFs. The first construc-
tion was proposed by Albrecht et al. [Alb+21]. It relies on the hardness of the decision
version of the Ring-LWE assumption and the one-dimensional version of the SIS assump-
tion, which was introduced by [BV15]. In contrast to our construction, their construction
only needs two rounds of communication. However, the concrete efficiency is clearly
worse, as relatively large parameters must be chosen.
The second construction was proposed by Boneh, Kogan, and Woo [BKW20] and is
based on the hardness of certain isogeny problems. However, the construction was recently
“broken” by Basso et al. [Bas+21]. They found that certain assumptions on isogenies made
by [BKW20] did not actually hold. Basso et al. [Bas+21] further argue that there is no
straightforward way to fix the construction. In conclusion, there is only the lattice-based
construction from [Alb+21] left that directly relies on post-quantum secure assumptions.
However, there might be another way to achieve a post-quantum secure OPRF, namely
to use one of the MPC-based constructions. The rough idea would be to instantiate the
protocol with a post-quantum secure OT protocol, as the rest of the security relies on
symmetric encryption. This method might suffice, as quantum computers appear to have
no substantial advantage in breaking symmetric cryptography, see e.g. [BNS19; Amy+16].
While the security of the protocol from [Alb+21] provably holds in the Quantum-accessible
Random Oracle Model (QROM) [Bon+11], i.e., in the presence of adversaries that can send
superposition queries to the random oracle, we leave it to future work to formally prove
the post-quantum security of one of the MPC-based constructions.
1.3. Outline
First, in Chapter 2 we will recall necessary definitions and introduce important techniques
that we will apply later. Then we present and discuss our construction in Chapter 3.
Of particular interest might be the proof of security in the UC-model,
which can be found in Section 3.5. In Chapter 4, we discuss which changes must be
applied to our construction and to the security proof to achieve a VOPRF. We discuss
the implementation of our protocol and the comparison of efficiency to other OPRFs in
Chapter 5. We summarize our results and propose directions for future work on the topic
in Chapter 6. Finally, in Appendix A, we present some techniques that are not necessary
to understand this work, but that might be of interest to some readers.
2. Preliminary
2.1. Notation
We write 𝜆 for the security parameter. We always assume that all algorithms take 𝜆
as implicit parameter. We call a probabilistic Turing machine Probabilistic Polynomial
Time (PPT) if its running time is bounded by a polynomial in λ. By x ←$ S we denote
that x is chosen uniformly at random from the set S. We write y ← A(x) if the randomized
algorithm A(x) outputs y on input x. We write x ∥ y for the concatenation of the strings x
and y. We use O(·), o(·), Θ(·), Ω(·), and ω(·) for asymptotic notation. We say a function
is negligible in λ if it asymptotically falls faster than the inverse of any polynomial in λ.
Particularly when describing simulators, we use ⟨·⟩ for records made by the simulator.
We will use ∃⟨x⟩ (or ∄⟨y⟩) to express that the simulator goes through its records and
checks if there is a matching record ⟨x⟩ (or there is no matching record ⟨y⟩). Whenever
the behavior of an ideal functionality on the receipt of a certain message is not explicitly
defined, we assume that the functionality ignores the message.
2.2. Pseudo-Random Functions
A Pseudo-Random Function is a function that produces “random looking” output values.
More precisely, the function is indexed by a key k, sometimes called the “seed”. If the key
is chosen uniformly at random, the function maps input values to output values in such
a way that it is indistinguishable whether the output values come from the pseudo-
random function or from a truly random function. In that sense, one could see a PRF as a
Pseudo-Random Generator “with random access” to the generated pseudo-random values.
Indeed, it is possible to construct a PRF from a PRG [BS20]. The security is defined via a
PPT distinguisher D that either gets oracle access to F(k, ·) for some randomly chosen key
k ∈ {0, 1}^m or to a truly random function RF. The goal of D is to tell those situations apart.
Definition 1 (Pseudo-Random Function) [KL15, Def. 3.25] Let n := n(λ) and m := m(λ) be
polynomial in λ. Let F : {0, 1}^m × {0, 1}^n → {0, 1}^n be a function family such that there is
a polynomial-time algorithm that takes k ∈ {0, 1}^m and x ∈ {0, 1}^n and outputs F(k, x).
We say F is a pseudo-random function if the advantage defined as

Adv^{PRF}_F(D, λ) := | Pr[ D^{F(k,·)}(1^λ) = 1 ] − Pr[ D^{RF(·)}(1^λ) = 1 ] |

is negligible for every PPT distinguisher D, where the first probability is taken over
uniform choices of k ∈ {0, 1}^m and the randomness of D, and the second probability is
taken over uniform choices of RF ∈ {f : {0, 1}^n → {0, 1}^n} and the randomness of D.
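In practice, Definition 1 is often instantiated heuristically, e.g. under the assumption that HMAC-SHA256 behaves as a PRF. The sketch below also shows the two oracles that a distinguisher D has to tell apart, with the truly random function sampled lazily.

```python
# Heuristic PRF instantiation and the oracle interface from Definition 1
# (assumption: HMAC-SHA256 is a PRF; the random function is lazily sampled).
import hmac, hashlib, secrets

def F(k: bytes, x: bytes) -> bytes:
    return hmac.new(k, x, hashlib.sha256).digest()

def make_real_oracle():
    k = secrets.token_bytes(32)          # hidden, uniformly random key
    return lambda x: F(k, x)

def make_random_oracle():
    table = {}                           # lazy sampling of a random function
    def RF(x: bytes) -> bytes:
        if x not in table:
            table[x] = secrets.token_bytes(32)
        return table[x]
    return RF

# The distinguisher D gets one of the two oracles and must tell them apart;
# PRF security says its advantage in doing so is negligible.
oracle = make_real_oracle() if secrets.randbits(1) else make_random_oracle()
y1, y2 = oracle(b"query-1"), oracle(b"query-1")
assert y1 == y2   # both oracles implement a fixed function
```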
Definition 2 (Pseudo-Random Permutation) [KL15, Sec. 3.5.1] Let n := n(λ) and m := m(λ)
be polynomial in λ. Let F : {0, 1}^m × {0, 1}^n → {0, 1}^n be a function family such that there
is a polynomial-time algorithm that takes k ∈ {0, 1}^m and x ∈ {0, 1}^n and outputs F(k, x).
Let Perm_n denote the set of all permutations of {0, 1}^n. We say F is a pseudo-random
permutation if the advantage defined as

Adv^{PRP}_F(D, λ) := | Pr[ D^{F(k,·)}(1^λ) = 1 ] − Pr[ D^{RP(·)}(1^λ) = 1 ] |

is negligible for every PPT distinguisher D, where the first probability is taken over
uniform choices of k ∈ {0, 1}^m and the randomness of D, and the second probability is
taken over uniform choices of RP ∈ Perm_n and the randomness of D.
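A classical way to turn a PRF into a PRP is the Luby-Rackoff (Feistel) construction. The following toy sketch builds a three-round Feistel permutation on 64-byte blocks from HMAC-SHA256, assumed to behave as a PRF; it is an illustration of the construction, not hardened production code.

```python
import hmac, hashlib, secrets

BLOCK = 64   # we permute 64-byte blocks, split into two 32-byte halves

def f(k: bytes, x: bytes) -> bytes:
    """Round function: HMAC-SHA256, assumed to behave as a PRF."""
    return hmac.new(k, x, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def encrypt(keys, block: bytes) -> bytes:
    L, R = block[:32], block[32:]
    for k in keys:                       # one Feistel round per round key
        L, R = R, xor(L, f(k, R))
    return L + R

def decrypt(keys, block: bytes) -> bytes:
    L, R = block[:32], block[32:]
    for k in reversed(keys):             # undo the rounds in reverse order
        L, R = xor(R, f(k, L)), L
    return L + R

keys = [secrets.token_bytes(32) for _ in range(3)]   # 3 rounds (Luby-Rackoff)
m = secrets.token_bytes(BLOCK)
assert decrypt(keys, encrypt(keys, m)) == m          # it is a permutation
```

Three rounds suffice for a PRP; four rounds are needed for a strong PRP (one that also resists inverse-oracle queries).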
2.3. Commitment Schemes
Intuitively, a commitment scheme allows one to create a value c, called the commitment,
on a message. The commitment hides the message but can later only be opened to the
original message. The commitment is opened by using the so-called opening information,
which should be kept secret until the commitment has to be opened.
For the sake of simplicity, we will assume in this work that the messages, the commit-
ments and the opening information are bit strings.
Definition 3 (Commitment Scheme) [BS20, Sec. 8.12] A Commitment Scheme consists
of two efficient algorithms COM = (Commit, Unveil). For 𝑛, 𝑙, 𝑡 ∈ Θ(𝜆), the Commit
algorithm takes a message 𝑚 ∈ {0, 1}𝑛 and outputs (𝑐, 𝑟 ) ∈ {0, 1}𝑙 × {0, 1}𝑡 . We call 𝑐 the
commitment on 𝑚 and 𝑟 the opening information. The Unveil algorithm takes a commitment
𝑐 ∈ {0, 1}𝑙 , a message 𝑚 ∈ {0, 1}𝑛 and the opening information 𝑟 ∈ {0, 1}𝑡 and outputs
either 0 or 1, where we interpret output 1 as “𝑐 correctly opens to message 𝑚”.
We require a commitment scheme to have correctness. By that we mean

∀m ∈ {0, 1}^n : ∀(c, r) ← Commit(m) : Pr[Unveil(c, m, r) = 1] = 1.
The commitment should not reveal any information about the committed message. One
could also say that the commitment should hide the message from anyone who is not in
possession of the opening information. We formalize this in the following definition. We
define the security over a security experiment where the adversary A plays against a
challenger C.
Definition 4 (Hiding) [BS20, Sec. 8.12] Let COM = (Commit, Unveil) be a commitment
scheme. COM is computationally hiding if for every PPT adversary A we have

Adv^{Hiding}_{COM}(A, λ) := | Pr[ Exp^{Hiding}_{COM,A}(λ) = 1 ] − 1/2 | ≤ negl(λ),

where Exp^{Hiding}_{COM,A}(λ) is the experiment depicted in Figure 2.1, negl(·) is some negligible
function, and the probability is taken over the randomness of C and A.
Exp^{Hiding}_{COM,A}(λ)
• A sends two messages m_0, m_1 ∈ {0, 1}^n to C.
• The challenger chooses a bit b ∈ {0, 1} uniformly at random, computes
  (c, r) ← Commit(m_b), and sends c to A.
• A takes the input c and outputs a guess b′ ∈ {0, 1}.
• The experiment outputs 1 iff b = b′.
Figure 2.1.: The Hiding Experiment.
Exp^{Binding}_{COM,A}(λ)
• A outputs (c, m_0, r_0, m_1, r_1).
• The experiment outputs 1 iff it holds that
  Unveil(c, m_0, r_0) = 1,
  Unveil(c, m_1, r_1) = 1, and
  m_0 ≠ m_1.
Figure 2.2.: The Binding Experiment.
Additionally to the hiding property, we require that no efficient adversary should be
able to lie about the message on which he committed. In other words, it should be hard for
an adversary to first commit on some message and later open the commitment to another
message. We will formalize this notion in the next definition.
Definition 5 (Binding) [BS20, Sec. 8.12] Let COM = (Commit, Unveil) be a commitment
scheme. COM is computationally binding if for every PPT adversary A that outputs a
5-tuple (c, m_0, r_0, m_1, r_1) with c ∈ {0, 1}^l, m_0, m_1 ∈ {0, 1}^n, r_0, r_1 ∈ {0, 1}^t, we have

Adv^{Binding}_{COM}(A, λ) := Pr[ Exp^{Binding}_{COM,A}(λ) = 1 ] ≤ negl(λ),

where Exp^{Binding}_{COM,A}(λ) is the experiment depicted in Figure 2.2, negl(·) is some negligible
function, and the probability is taken over the randomness of A.
There are also statistical and perfect variants of the notions of hiding and binding but
we will not need them in this work.
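A simple construction matching Definition 3 is the folklore hash-based commitment Commit(m) = (H(m ∥ r), r). Its hiding and binding properties are heuristic here: they hold if the hash function is modeled as a random oracle (respectively is collision-resistant). The sketch assumes fixed-length messages to avoid ambiguity in the concatenation.

```python
# Hash-based commitment sketch: c = H(m || r) with fresh randomness r.
# Hiding/binding are heuristic (random-oracle model / collision resistance).
import hashlib, secrets

MSG_LEN = 16   # fixed message length, so m || r splits unambiguously

def commit(m: bytes):
    assert len(m) == MSG_LEN
    r = secrets.token_bytes(32)                 # opening information
    c = hashlib.sha256(m + r).digest()          # commitment
    return c, r

def unveil(c: bytes, m: bytes, r: bytes) -> int:
    return 1 if hashlib.sha256(m + r).digest() == c else 0

c, r = commit(b"sixteen bytes!!!")
assert unveil(c, b"sixteen bytes!!!", r) == 1   # correctness
assert unveil(c, b"other message...", r) == 0   # wrong message is rejected
```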
2.4. Universal Composability
Often, cryptographic protocols are not used in isolation but are combined to serve a
greater functionality. However, the security of protocols is not always conserved under
composition. A classical result is for example that the parallel composition of two zero-
knowledge protocols is in general not a zero-knowledge protocol [GK90]. Universal
Composability, introduced by Canetti [Can01] is a notion of security that solves this
problem. The original paper by Canetti [Can01] was revisited several times. In the
following, we always refer to the version from 2020 [Can00]. Protocols that are secure in
the UC model can be composed and preserve their security. This is stated more formally
in the composition theorem from Canetti [Can00, Theo. 22].
The rough idea of the UC-security experiment is to compare an ideal world with the real
world, similar to a “stand-alone” simulation-based proofs. This is conceptually visualized
in Figure 2.3.
Figure 2.3.: Sketch of the Universal Composability Security Experiment.
In the ideal world, we do not regard the actual protocol but rather an idealized
functionality F. The gist, however, is that all interactions between parties are orchestrated
by a so-called environment machine E. The environment machine can be thought of as the
“bigger context” of the protocol execution, e.g., when the protocol is used as subroutine in
another protocol. In contrast to a normal distinguisher in a stand-alone security notion,
the environment can adaptively interact with the protocol parties.
In the real world, the environment machine E interacts with the real-world adversary
A and with the real protocol parties of a protocol 𝜋.
In the ideal world, the protocol parties are replaced by “Dummy-Parties”. These parties
do nothing except forwarding all input directly to F.
Additionally, the idealized functionality F and the environment E interact with a
simulator S, who plays the role of the real-world adversary. The job of S is to simulate
an execution of 𝜋 for E that looks like the real-world execution. If no PPT environment
machine can tell both worlds apart, the protocol 𝜋 UC-emulates (or UC-realizes) the ideal
functionality F. We will define this more formally (but still simplified) in the following.
First, we define the notion of UC-emulation. This notion will in turn allow us to define the
realization of an ideal functionality.
Definition 6 (UC-Emulation) [Can00, Def. 1] Let EXEC_{π,A,E}(z) denote the random variable
over the local random choices of all parties of π, of E, and of A that describes an output of
E on input z ∈ {0, 1}* when running protocol π with adversary A. Let EXEC_{π,A,E} denote
the probability ensemble {EXEC_{π,A,E}(z)}_{z ∈ {0,1}*}.
We say a protocol π UC-emulates a protocol φ if for all PPT adversaries A there is a
PPT simulator S such that for all environment machines E it holds that

EXEC_{π,A,E} ≈_c EXEC_{φ,S,E}.

If this is the case, we write π ≥ φ.
This notion captures more general situations than just the real-ideal comparison men-
tioned above. One can e.g. give two protocols 𝜋, 𝜙 with 𝜋 ≥ 𝜙, where 𝜋 and 𝜙 are
both “real-world” protocols. We now define what we mean exactly by realizing an ideal
functionality.
Definition 7 (UC-Realization of a Functionality) [Can00, Def. 2] Let IDEAL_F denote the
protocol that consists of a machine F, the ideal functionality, and m Dummy-Parties
D_1, . . . , D_m, where m is the number of parties that interact with F. The Dummy-Parties
only relay input to F, relay output from F to E, and ignore all backdoor-tape messages.
We say a protocol π UC-realizes a functionality F if π UC-emulates IDEAL_F.
We would like to emphasize here that security in the UC model is always defined relative
to an ideal functionality. Consequently, care must be taken when specifying F.
Security in the UC model is a strong notion. However, even some very simple
functionalities cannot be achieved in the UC model without additional assumptions. The
most famous result is the impossibility of bit-commitments in the plain model. Without
additional assumptions, like a Common Reference String (CRS) or a Public-Key Infrastruc-
ture (PKI), there exists no protocol that UC-emulates the bit-commitment functionality
Fcom [CF01, Theorem 6].
As mentioned in Section 1.2.1, [Jar+16; JKX18] defined ideal functionalities, describing
the desired security of OPRFs in the UC model. They also constructed protocols that
UC-realize the ideal functionalities. We will discuss them in Section 2.7.
The Dummy Adversary Canetti [Can00] also shows that Definition 7 can be simplified by
using the Dummy Adversary. Instead of considering all PPT adversaries A, it is sufficient
to consider only the simplest adversary possible. The Dummy Adversary D takes all
messages it receives from the environment E and forwards them without any change to
the concerned protocol party. The other way around, if D receives messages from any
protocol party, it forwards them directly to the environment.
This sounds contradictory at first glance, as we restrict ourselves to a very special
adversary. However, the intuition for this fact is simple. The goal of the environment is to
distinguish whether it is “talking” to the adversary and a real protocol execution of π or
to the simulator and the ideal protocol execution with F. Now if A did not forward
a message to E, that can only make E's task harder, as its view of the interaction is “not
complete”. Analogously, if the adversary interacts with any party without the environment
knowing, or if the adversary does not interact with a party even though E instructed it
to do so, this only makes the task harder for E. Canetti [Can00, Claim 11] proves this
formally.
2.5. Oblivious Transfer
Oblivious Transfer (OT), introduced in 1981 by Rabin [Rab05], is fundamental for many
cryptographic tasks, including Yao's garbled circuits. OT allows a sender to transfer one
of two messages to a receiver. The receiver can choose whether it wants the first or the
second message. The security guarantee for the sender is that the receiver does not learn
anything about the message that was not chosen. The security guarantee for the receiver
is that the sender does not learn anything about the choice of the receiver. This is sketched
graphically in Figure 2.4. The above described protocol is also known as 1-out-of-2 OT. In
more generality, one can define a 1-out-of-m OT, where the sender sends m ∈ N messages
x_1, . . . , x_m. One can also distinguish between “bit-OT”, where the messages of the sender
are single bits x_0, x_1 ∈ {0, 1}, and “string-OT”, where the messages are bit strings
x_0, x_1 ∈ {0, 1}^ℓ of some length ℓ ∈ N. In the case of garbled circuits, OT allows the garbling party to provide the
evaluating party with the necessary input labels without learning which input bits the
evaluating party actually chooses. That means, we need to perform 𝑛 ∈ N executions of a
1-out-of-2 string-OT protocol, one for each of the 𝑛 input bits of the circuit. As we are
interested in universally composable OTs, we briefly recall two ideal functionalities for
UC-secure OT and discuss their differences before we define the functionality we will use
in this work.
Receiver(𝑏) Sender(𝑥 0, 𝑥 1 )
𝑏 𝑥 0, 𝑥 1
OT
𝑥𝑏
Figure 2.4.: Sketch of the 1-Out-of-2 OT Functionality.
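A classical way to realize 1-out-of-2 OT uses Diffie-Hellman-style blinding. The following toy sketch follows the structure of Chou and Orlandi's "simplest OT", instantiated over an insecurely small group and without any of the checks needed for actual (let alone UC) security.

```python
# Toy 1-out-of-2 string-OT in the style of Chou-Orlandi's "simplest OT"
# (illustration only: tiny group, no input validation, not UC-secure).
import hashlib, secrets

p, q, g = 1019, 509, 4   # order-q subgroup of Z_p^* (insecurely small!)

def H(elem: int) -> bytes:
    return hashlib.sha256(elem.to_bytes(4, "big")).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

x0, x1 = secrets.token_bytes(32), secrets.token_bytes(32)   # sender's messages
choice = 1                                                  # receiver's bit

# Sender, round 1: publish A = g^a
a = secrets.randbelow(q - 1) + 1
A = pow(g, a, p)

# Receiver, round 2: B encodes the choice bit, blinded by t
t = secrets.randbelow(q - 1) + 1
B = pow(g, t, p) if choice == 0 else (A * pow(g, t, p)) % p
k_R = H(pow(A, t, p))                 # receiver's key = H(g^(a*t))

# Sender, round 3: derive one key per message and encrypt both
k0 = H(pow(B, a, p))                              # matches k_R iff choice = 0
k1 = H(pow((B * pow(A, p - 2, p)) % p, a, p))     # matches k_R iff choice = 1
c0, c1 = xor(x0, k0), xor(x1, k1)

# Receiver output: only the chosen ciphertext decrypts correctly
received = xor(c1 if choice else c0, k_R)
assert received == (x1 if choice else x0)
```

The receiver's key k_R equals exactly one of the sender's keys, determined by the choice bit, so the other message stays hidden; conversely, B is a uniformly random group element for either choice, hiding the bit from the sender.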
The first functionality, FOT⁰, was introduced by Canetti et al. [Can+02] and used e.g. by Peikert,
Vaikuntanathan, and Waters [PVW08]. It considers one sender and one receiver. We
present the functionality (for the special case of 1-out-of-2 OT) in Figure 2.5. With this func-
tionality, each sender-receiver pair requires its own CRS. Additionally, this functionality
does not inform the adversary explicitly if messages are sent to the functionality.
The second functionality, FMOT, is used e.g. by Choi et al. [Cho+13]. They claim that
it was introduced by Canetti [Can00]; however, we could not find a version of [Can00]
where this functionality appears. Thus, we present the ideal functionality FMOT for
multi-session OT as defined by [Cho+13] in Figure 2.6. In contrast to the first functionality,
the adversary is explicitly informed, whenever a message is sent to the functionality. This
is done by ( Send, h𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝑃𝑖 , 𝑃 𝑗 i) messages from the ideal functionality to the adversary.
A second difference seems to be the ability of the adversary to delay messages. Con-
cretely, the result of the OT is only transferred to the receiver P_j after the adversary sent
(Received, ⟨sid, ssid, P_i, P_j⟩, P_j) to the functionality. However, in the UC framework the
adversary is always able to delay messages. Thus, this message is more of a syntactic
difference between the two functionalities. The last difference is the number of parties. The
second functionality in Figure 2.6 allows for up to n parties. This might seem more suited
for our case at first glance. By employing the second functionality, one only needs
to generate one CRS for all OT executions. But the subtle problem that occurs is that
the number of parties n needs to be fixed at the beginning of the protocol execution. In
contrast, in our OPRF scenario we allow (honest) parties to join the protocol at any time.
If we wanted to use FMOT in such a dynamic context, we would need to instantiate
a new protocol instance and thus generate a new CRS whenever more than n parties
join the execution. To avoid those problems, we rather opt to use a “single sender single
receiver” functionality, like the one in Figure 2.5. Note that the generation of a new CRS for every
OT execution is not problematic, as we are in the Random Oracle Model (ROM). We can
just generate a CRS for each OT execution by “hashing” the session id and the prefix of the
protocol execution using the random oracle.
For the sake of clarity, we augment the original FOT⁰ functionality from [Can+02] with
explicit messages allowing the adversary to delay the execution. We stress again that this does
not give the adversary additional power but rather makes properties of the UC framework
more explicit. We describe our functionality FOT in Figure 2.7.
Functionality FOT⁰
FOT⁰ proceeds as follows, running with an oblivious transfer sender T, a
receiver R, and an adversary S.
• Upon receiving a message (Sender, sid, x_0, x_1) from T, where each x_j ∈ {0, 1}^m,
  record (x_0, x_1). (The length m of the strings is fixed and known to all parties.)
• Upon receiving a message (Receiver, sid, i) from R, where i ∈ {0, 1}, send (sid, x_i)
  to R and sid to S and halt. If no (Sender, . . . ) message was previously sent, then send nothing
  to R.
Figure 2.5.: The Ideal Functionality FOT⁰ From [Can+02].
2.6. Garbled Circuits
2.6.1. Boolean Circuits
A boolean circuit is model of computation like the Turing machine. They can be seen as a
mathematical abstraction of actual electrical circuits that are used to build processors. We
define them as follows:
Definition 8 (Boolean Circuit) [AB09, Def. 6.1], [BHR12, Sec. 2.3] For 𝑛, 𝑚 ∈ N, a boolean
circuit 𝐶 with 𝑛 inputs and 𝑚 outputs is a directed acyclic graph. It contains 𝑛 nodes with
no incoming edges; called the input nodes and 𝑚 nodes with no outgoing edges, called the
output nodes. All other nodes are called gates. In our case, they are labeled with either
Functionality FMOT
FMOT interacts with parties P_1, . . . , P_n and an adversary Sim and proceeds as follows:
• Upon receiving a message (Send, ⟨sid, ssid, P_i, P_j⟩, ⟨x_0, x_1⟩) from P_i, where each
  x_j ∈ {0, 1}^m, record ⟨ssid, P_i, P_j, x_0, x_1⟩. Reveal (Send, ⟨sid, ssid, P_i, P_j⟩) to the
  adversary. Ignore further (Send, . . . ) messages from P_i with the same ssid.
• Upon receiving a message (Receive, ⟨sid, ssid, P_i, P_j⟩, b) from P_j, where b ∈ {0, 1},
  record the tuple ⟨ssid, P_i, P_j, b⟩ and reveal (Receive, ⟨sid, ssid, P_i, P_j⟩) to the adver-
  sary. Ignore further (Receive, . . . ) messages from P_j with the same ssid.
• Upon receiving a message (Sent, ⟨sid, ssid, P_i, P_j⟩, P_i) from the adversary, ignore
  the message if ⟨ssid, P_i, P_j, x_0, x_1⟩ or ⟨ssid, P_i, P_j, b⟩ is not recorded; otherwise re-
  turn (Sent, ⟨sid, ssid, P_i, P_j⟩) to P_i; ignore further (Sent, ⟨sid, ssid, P_i, P_j⟩, P_i) mes-
  sages from the adversary.
• Upon receiving a message (Received, ⟨sid, ssid, P_i, P_j⟩, P_j) from the adver-
  sary, ignore the message if ⟨ssid, P_i, P_j, x_0, x_1⟩ or ⟨ssid, P_i, P_j, b⟩ is not
  recorded; otherwise return (Received, ⟨sid, ssid, P_i, P_j⟩, x_b) to P_j; ignore further
  (Received, ⟨sid, ssid, P_i, P_j⟩, P_j) messages from the adversary.
Figure 2.6.: The Ideal Functionality FMOT From [Cho+13].
XOR or AND. For a gate g, G(g) ∈ {XOR, AND} yields the function corresponding to the
label. Gates always have two inputs and arbitrary fan-out.
Further, [BHR12] use the convention that all wires of the circuit are numbered. If 𝑞 ∈ N
is the number of gates, then r = n + q is the number of wires. The number of the outgoing
wire(s) of a gate serves as the number of the gate. Further, they assume that the numbering is
ordered in the following sense. Let A : Gates → Wires give the first input wire of a gate
and B : Gates → Wires give the second input wire of a gate. Then for all
g ∈ Gates it holds that A(g) < B(g) < g. By using this convention, the evaluation of the circuit
can be defined as follows:
Definition 9 (Circuit Evaluation) A boolean circuit is evaluated by iterating over all gates
g ∈ Gates in their order and setting a := A(g), b := B(g), x_g := G(g)(x_a, x_b). The output
of the circuit is x_{n+q−m+1} ∥ . . . ∥ x_{n+q}.
Note that 𝑥𝑎 and 𝑥𝑏 are well-defined, as the circuit is ordered.
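Definition 9 translates directly into code. The dictionary-based circuit representation below is a hypothetical encoding of the convention above: wires 1..n carry the inputs, each gate g > n has input wires A[g] < B[g] < g, and G[g] is its boolean operation.

```python
# Evaluating an ordered boolean circuit as in Definition 9, using dicts
# A, B (input wires per gate) and G (gate operation) as a hypothetical encoding.
from operator import and_, xor

def evaluate(n, m, A, B, G, inputs):
    x = {i + 1: bit for i, bit in enumerate(inputs)}   # wires 1..n are inputs
    gates = sorted(G)                  # ordered numbering: A[g] < B[g] < g
    for g in gates:
        x[g] = G[g](x[A[g]], x[B[g]])
    r = n + len(gates)                 # total number of wires
    return [x[w] for w in range(r - m + 1, r + 1)]   # last m wires are outputs

# Example: half-adder computing (x1 XOR x2, x1 AND x2); gates are 3 and 4.
A = {3: 1, 4: 1}
B = {3: 2, 4: 2}
G = {3: xor, 4: and_}
assert evaluate(2, 2, A, B, G, [1, 1]) == [0, 1]   # sum = 0, carry = 1
```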
2.6.2. Yao's Garbled Circuits
Garbled Circuits were introduced by Andrew Yao. According to [BHR12], the original
idea stems from an oral presentation of [Yao86]. Later, several works, e.g. Goldreich,
Micali, and Wigderson [GMW87] and Beaver, Micali, and Rogaway [BMR90], described
the protocol in more detail. Since their emergence, they went from a purely theoretical
Functionality FOT
FOT proceeds as follows, running with an oblivious transfer sender S, a
receiver R, and an adversary A.
• Upon receiving a message (OT-Send, 𝑠𝑖𝑑, (𝑥 0, 𝑥 1 )) from 𝑆, where each 𝑥 𝑗 ∈ {0, 1}𝑚 ,
record h𝑠𝑖𝑑, 𝑥 0, 𝑥 1 i. Reveal (OT-Send, 𝑠𝑖𝑑) to the adversary. Ignore further
(OT-Send, . . . ) messages from 𝑆 with the same 𝑠𝑖𝑑.
• Upon receiving a message (OT-Receive, 𝑠𝑖𝑑, 𝑏) from 𝑅, where 𝑏 ∈ {0, 1} record
the tuple h𝑠𝑖𝑑, 𝑏i and reveal (OT-Receive, 𝑠𝑖𝑑) to the adversary. Ignore further
(OT-Receive, . . . ) messages from 𝑅 with the same 𝑠𝑖𝑑.
• Upon receiving a message (OT-Sent, 𝑠𝑖𝑑) from the adversary, ignore the message
if h𝑠𝑖𝑑, 𝑥 0, 𝑥 1 i or h𝑠𝑖𝑑, 𝑏i is not recorded; Otherwise return (OT-Sent, 𝑠𝑖𝑑) to 𝑆;
Ignore further (OT-Sent, 𝑠𝑖𝑑, . . . ) messages from the adversary.
• Upon receiving a message (OT-Received, 𝑠𝑖𝑑) from the adversary ignore
the message if h𝑠𝑖𝑑, 𝑥 0, 𝑥 1 i or h𝑠𝑖𝑑, 𝑏i is not recorded; Otherwise return
(OT-Received, 𝑠𝑖𝑑, 𝑥𝑏 ) to 𝑅; Ignore further (OT-Received, 𝑠𝑖𝑑, . . . ) messages from
the adversary.
Figure 2.7.: Our Ideal Functionality FOT .
construct to a practically interesting and powerful cryptographic tool. Garbled Circuits
allow two parties, Alice and Bob, to jointly evaluate a boolean circuit. The circuit takes a
secret input from Alice and a secret input from Bob. After the execution, both parties (or
only one of them) learn the output.
From an abstract point of view, the protocol works as follows: Alice encodes her input
and the boolean circuit in a way, such that Bob can evaluate the circuit on the encoded
input, but learns nothing about the input. Alice sends the encoded input and circuit to
Bob. Then she encodes the possible input bits for Bob. Bob uses an OT protocol to get his
encoded input from Alice, while Alice learns nothing about the input of Bob. Bob then
evaluates the circuit and gets an encoded output. Both parties get the result by decoding
this output.
More precisely, Alice “garbles” the boolean circuit. This means she assigns random bit
strings of length proportional to the security parameter to each possible input bit. These
so-called labels hide the actual inputs. Next, for every gate of the boolean circuit, she
encrypts the output of the gate with the corresponding input labels as keys. This means she
performs one encryption for each row of the truth table of the gate. Finally, she permutes
the order of the rows, so the order of the ciphertexts does not reveal information on the
outcome of the gate. After that, she sends the garbled circuit to Bob, together with the
input labels for her input. Then, Alice and Bob perform a 1-out-of-2 OT for each input bit
of Bob. With the OTs, Bob gets the labels for his input bits, while Alice learns nothing
about his input. Now Bob can evaluate the circuit gate by gate, as he has the garbled
circuit and both sets of input labels. Again, he proceeds gate by gate. He tries to decrypt
the output label by using the input labels. For exactly one row, the decryption succeeds
and Bob receives the output label of the gate. In the textbook version, the encryption must
ensure that Bob can detect decryption with wrong keys. Eventually, Bob obtains the
output labels from evaluating the output gates. Now, he can, e.g., send the output labels
back to Alice. Alice knows the mapping of the output labels to actual output values and
thus learns the result of the computation. This is depicted in Figure 2.8.
Alice(𝑥)                                       Bob(𝑦)
(𝐹, 𝑒, 𝑑) ← Garble(C)
𝑋 ← Encode(𝑒, 𝑥)
𝑌 [0] ← Encode(𝑒, 0^𝑛)
𝑌 [1] ← Encode(𝑒, 1^𝑛)
              ──── (𝐹, 𝑋) ───▶
   OT: Alice inputs (𝑌 [0], 𝑌 [1]), Bob inputs 𝑦, Bob obtains 𝑌 [𝑦]
                                               𝑍 = Eval(𝐹, 𝑋, 𝑌 [𝑦])
              ◀──── 𝑍 ────
𝑧 = Decode(𝑑, 𝑍)
Figure 2.8.: The Garbled Circuit Protocol.
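As a toy illustration of the textbook garbling described above, the following sketch garbles a single and-gate with hash-based one-time pads and a zero-tag so that decryption with wrong keys is detectable. All names and the encryption trick are our own illustrative choices, not the scheme used later in this thesis.

```python
import hashlib
import os
import random

def pad(ka, kb):
    # Hash-based "encryption pad" for one garbled row (toy construction).
    return hashlib.sha256(ka + kb).digest()

def garble_and_gate():
    # Two random 16-byte labels per wire: index 0 encodes bit 0, index 1 bit 1.
    lab = {w: [os.urandom(16) for _ in range(2)] for w in "abc"}
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            # Append a zero-tag so the evaluator can recognize a successful decryption.
            out = lab["c"][va & vb] + b"\x00" * 16
            ct = bytes(x ^ y for x, y in zip(pad(lab["a"][va], lab["b"][vb]), out))
            rows.append(ct)
    random.shuffle(rows)  # permute rows so their order leaks nothing
    return lab, rows

def evaluate(rows, ka, kb):
    p = pad(ka, kb)
    for ct in rows:
        pt = bytes(x ^ y for x, y in zip(p, ct))
        if pt[16:] == b"\x00" * 16:  # wrong keys yield a non-zero tag (w.h.p.)
            return pt[:16]
    raise ValueError("no row decrypted")

lab, rows = garble_and_gate()
for va in (0, 1):
    for vb in (0, 1):
        # Holding one label per input wire, the evaluator learns exactly
        # the output label of a AND b and nothing else.
        assert evaluate(rows, lab["a"][va], lab["b"][vb]) == lab["c"][va & vb]
```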
The original construction offers only passive security. This means that the protocol is
secure as long as both parties follow the protocol description but try to gain additional
information. If both parties follow the protocol, they cannot learn anything more from the
interaction than the output. Obviously, this no longer holds if the garbling party Alice
deviates from the protocol. The garbler could, for example, simply garble a different circuit,
even one that leaks information on the evaluator's input.
There are several possibilities to transform a passively secure garbled circuit protocol
into an actively secure one. The most common technique is called cut-and-choose. It was
first used in the context of blind signatures by Chaum [Cha83] and was later adapted to
the garbled circuit setting by Mohassel and Franklin [MF06] and Lindell and Pinkas [LP07].
The idea is to garble the circuit many times with independent randomness. The evaluator
then randomly challenges the garbler to “reveal” some of the circuits, i.e., to show the
randomness used in their generation, to prove that the garbling was done correctly. The
evaluator accepts the garbling only if all of the checks were successful. Wang, Ranellucci,
and Katz [WRK17] introduced a technique called authenticated garbling. The approach is
to combine authenticated secret sharing (e.g., that of Bendlin et al. [Ben+11]) with garbled
circuits to achieve active security. Finally, Goldreich, Micali, and Wigderson [GMW87]
introduced a generic approach to transform any passively secure protocol into an actively
secure one. However, as this approach is very generic, it is likely too inefficient for our
purposes.
Several works improved the efficiency of garbled circuits. First, Beaver, Micali, and
Rogaway [BMR90] introduced the point-and-permute technique. By assigning two addi-
tional bits to a ciphertext, the evaluator can directly identify the entry in the truth table
that has to be decrypted. Therefore, the number of decryptions per gate is reduced
from four to one, as the evaluator does not have to try decrypting every row of the truth
table. Naor, Pinkas, and Sumner [NPS99] and Pinkas et al. [Pin+09] further reduced the
number of encryptions necessary to garble the circuit. The most notable advances are
the techniques called free-xor and half-gates. Free-xor was introduced by Kolesnikov and
Schneider [KS08] and allows one to garble a circuit in such a way that xor-gates cost no
additional encryption. This is particularly useful as common circuits, like, e.g., the Advanced
Encryption Standard (AES), contain far more xor-operations than and-
operations. Half-gates [ZRE15] reduces the number of encryptions needed to encode
an and-gate from four to only two. Recently, Rosulek and Roy [RR21] even
enhanced the half-gates technique, circumventing the lower bound proven in [ZRE15].
They introduce a technique called slicing and dicing. With that technique, xor-gates are still
free and and-gates cost 1.5𝜆 + 5 bits per gate, where 𝜆 is the security parameter. Free-xor
and half-gates can be combined to offer very efficient garbling.
Over the years, several frameworks were developed to facilitate the real-world imple-
mentation of garbled circuits. The first implementation was the Fairplay library [Mal+04].
As this library is relatively old, it is merely of historical interest. Other libraries, like that
of Kreuter, shelat, and Shen [KsS12], do not feature the optimizations provided by half-gates [ZRE15].
Therefore we will use the emp-toolkit library by Wang, Malozemoff, and Katz [WMK16]
in this work. This C++ library offers a method to implement garbled circuits that are
passively and actively secure. Additionally, most recent optimizations like free-xor and
half-gates are implemented. The downside is that the framework is barely documented.
2.6.3. Garbling Schemes
Bellare, Hoang, and Rogaway [BHR12] defined an elegant abstraction of the above-de-
scribed protocol and gave a thorough analysis of the security properties offered by variants of
these algorithms. In their work, they use the side-information function Φ. Given a circuit 𝑓 ,
this function outputs certain information about the circuit. Depending on the desired
level of security, one can define Φ differently. E.g., Φ(𝑓 ) = 𝑓 would mean that all parties
learn the whole description of the circuit. In a more restrictive setting, one could also
demand, e.g., that Φ(𝑓 ) = 𝑛, where 𝑛 is the number of input bits of 𝑓 . However, in this
work we will always assume the first case, i.e., that the circuit description is public. We
render their definition here:
Definition 10 (Garbling Scheme) [BHR12, Sec. 3.1] A garbling scheme is a tuple G =
(Gb, En, De, Ev, ev), where Gb is probabilistic and the remaining algorithms are deter-
ministic. Let 𝑓 ∈ {0, 1}∗ be a description of the function we want to garble. The function
ev(𝑓 , ·) : {0, 1}𝑛 → {0, 1}𝑚 denotes the actual function we want to garble. (𝑛 and 𝑚 must
be efficiently computable from 𝑓 .) On input 𝑓 and a security parameter 𝜆 ∈ N, algorithm
Gb returns a triple of strings (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , 𝑓 ). String 𝑒 describes an encoding function,
En(𝑒, ·), that maps an initial input 𝑥 ∈ {0, 1}𝑛 to a garbled input 𝑋 = En(𝑒, 𝑥). String 𝐹
describes a garbled function, Ev(𝐹, ·), that maps each garbled input 𝑋 to a garbled output
𝑌 = Ev(𝐹, 𝑋 ). String 𝑑 describes a decoding function, De(𝑑, ·), that maps a garbled output
𝑌 to a final output 𝑦 = De(𝑑, 𝑌 ). □
The security properties defined in [BHR12] are privacy, obliviousness, and authenticity.
Intuitively, privacy means that no efficient adversary can calculate anything from the
garbled circuit 𝐹 , the input labels 𝑋 and the decoding information 𝑑 that the adversary
could not have calculated from the output value 𝑦 and the side-information Φ(𝑓 ) alone. In
particular, the adversary cannot “break” the garbling scheme to get the input value of one
of the parties. Obliviousness is a related notion. The intuition is that no efficient adversary
can calculate anything from the garbled circuit 𝐹 and the input labels 𝑋 that the adversary
could not have calculated from the side-information Φ(𝑓 ) alone.
Note that, in contrast to privacy, the adversary is not given the decoding information
𝑑. Consequently, the adversary should not be able to produce an output just from the
garbled circuit 𝐹 and the input labels 𝑋 . This is reflected in the fact that the simulator in
the security experiment in Figure 2.10 does not get the output value 𝑦 ← ev(𝑓 , 𝑥). The last
notion is authenticity. The idea behind this notion is that the only output one should be
able to produce using the garbled circuit is 𝑦 = De(𝑑, Ev(𝐹, 𝑋 )).
Bellare, Hoang, and Rogaway [BHR12] gave a game-based security definition, as well as
a simulation-based security definition for the first two properties. As Zahur, Rosulek, and
Evans [ZRE15] use the simulation-based notions, we will only render the simulation-based
definitions and the definition of authenticity here.
Definition 11 (Privacy) [BHR12, Sec. 3.4] For a simulator S, we define the advantage of
adversary A in the security experiment defined in Figure 2.9 as

Adv^prv.sim_G,Φ,S (A, 𝜆) ≔ 2 Pr[PrvSim^A_G,Φ,S = 1] 1.

A garbling scheme has privacy if for every PPT adversary A there is a simulator S such
that

Adv^prv.sim_G,Φ,S (A, 𝜆) ≤ negl(𝜆),

for a negligible function negl(·). □
Definition 12 (Obliviousness) [BHR12, Sec. 3.5] For a simulator S, we define the advantage
of adversary A in the security experiment defined in Figure 2.10 as

Adv^obv.sim_G,Φ,S (A, 𝜆) ≔ 2 Pr[ObvSim^A_G,Φ,S = 1] 1.

A garbling scheme has obliviousness if for every PPT adversary A there is a simulator S
such that

Adv^obv.sim_G,Φ,S (A, 𝜆) ≤ negl(𝜆),

for a negligible function negl(·). □
Game PrvSim_G,Φ,S
• The challenger C chooses a bit 𝑏 ∈ {0, 1} uniformly at random.
• A sends a function 𝑓 : {0, 1}𝑛 → {0, 1}𝑚 and input 𝑥 ∈ {0, 1}𝑛 to C.
• If 𝑥 ∉ {0, 1}𝑛 the challenger sends ⊥ to A.
Else if 𝑏 = 1 the challenger sets (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , 𝑓 ) and 𝑋 ← En(𝑒, 𝑥).
Else the challenger calculates 𝑦 ← ev(𝑓 , 𝑥) and simulates (𝐹, 𝑋, 𝑑) ←
S(1𝜆 , 𝑦, Φ(𝑓 )).
Finally, C sends (𝐹, 𝑋, 𝑑) to A.
• A outputs a bit 𝑏′.
• The game outputs 1 iff 𝑏′ = 𝑏.
Figure 2.9.: The Simulation-Based Privacy Game From [BHR12, Fig. 5].
Definition 13 (Authenticity) [BHR12, Sec. 3.6] We define the advantage of adversary A in
the security experiment defined in Figure 2.11 as

Adv^aut_G (A, 𝜆) ≔ Pr[Aut^A_G = 1].

A garbling scheme has authenticity if for every PPT adversary A it holds that

Adv^aut_G (A, 𝜆) ≤ negl(𝜆),

for a negligible function negl(·). □
2.6.4. Free-Xor
The technique proposed by Kolesnikov and Schneider [KS08] is one of the most important
advances on the efficiency of garbled circuits. The technique allows one to garble a
circuit in such a way that xor-gates come with no additional data, i.e., no encrypted gate
labels that have to be sent over the network. The gist of [KS08] is the following: One
defines the input labels of a xor-gate as 𝑋 [1] = 𝑋 [0] ⊕ Δ and 𝑌 [1] = 𝑌 [0] ⊕ Δ, where
Δ is some secret constant known to the garbling party and 𝑋 [0], 𝑌 [0] are random labels,
and one defines 𝑍 [0] = 𝑋 [0] ⊕ 𝑌 [0]. Then the evaluating party can calculate the output of the
xor-gate locally. This is because for 𝑏 1, 𝑏 2 ∈ {0, 1} we have

𝑋 [𝑏 1 ] ⊕ 𝑌 [𝑏 2 ] = 𝑍 [𝑏 1 ⊕ 𝑏 2 ],

and the evaluating party can compute the xor of the two input labels without further
information from the garbling party. Kolesnikov and Schneider [KS08] argue that even
though the labels are not chosen independently anymore in the above case, the garbling is
still secure. This technique is particularly useful as the circuit descriptions of some
“real-world” functions contain a relatively high amount of xor-gates. For example, AES
can be realized with 28216 gates, of which 55% are xor-gates [Pin+09].
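The free-xor label relation can be checked directly. The following sketch (with an arbitrary toy label length; all names are illustrative) verifies that XOR-ing any two held input labels yields the correct output label, with no ciphertext transmitted for the gate.

```python
import os

LABEL_LEN = 16  # toy label length in bytes

def xor(u, v):
    return bytes(x ^ y for x, y in zip(u, v))

# Garbler: one global secret offset Delta; each 1-label is the 0-label XOR Delta.
delta = os.urandom(LABEL_LEN)
X = [os.urandom(LABEL_LEN)]
X.append(xor(X[0], delta))
Y = [os.urandom(LABEL_LEN)]
Y.append(xor(Y[0], delta))
# Output labels of the xor-gate: Z[0] = X[0] XOR Y[0], Z[1] = Z[0] XOR Delta.
Z = [xor(X[0], Y[0])]
Z.append(xor(Z[0], delta))

# Evaluator: XOR of the two input labels is the correct output label,
# since X[b1] ^ Y[b2] = X[0] ^ Y[0] ^ (b1 ^ b2)*Delta = Z[b1 ^ b2].
for b1 in (0, 1):
    for b2 in (0, 1):
        assert xor(X[b1], Y[b2]) == Z[b1 ^ b2]
```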
Game ObvSim_G,Φ,S
• The challenger C chooses a bit 𝑏 ∈ {0, 1} uniformly at random.
• A sends a function 𝑓 : {0, 1}𝑛 → {0, 1}𝑚 and input 𝑥 ∈ {0, 1}𝑛 to C.
• If 𝑥 ∉ {0, 1}𝑛 the challenger sends ⊥ to A.
Else if 𝑏 = 1 the challenger sets (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , 𝑓 ) and 𝑋 ← En(𝑒, 𝑥).
Else the challenger simulates (𝐹, 𝑋 ) ← S(1𝜆 , Φ(𝑓 )).
Finally, C sends (𝐹, 𝑋 ) to A.
• A outputs a bit 𝑏′.
• The game outputs 1 iff 𝑏′ = 𝑏.
Figure 2.10.: The Simulation-Based Obliviousness Game From [BHR12, Fig. 5].
Game Aut_G
• A sends a function 𝑓 : {0, 1}𝑛 → {0, 1}𝑚 and input 𝑥 ∈ {0, 1}𝑛 to C.
• If 𝑥 ∉ {0, 1}𝑛 the challenger sends ⊥ to A.
The challenger sets (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , 𝑓 ) and 𝑋 ← En(𝑒, 𝑥).
C sends (𝐹, 𝑋 ) to A.
• A sends 𝑌 to C.
• The game outputs 1 iff De(𝑑, 𝑌 ) ≠ ⊥ and 𝑌 ≠ Ev(𝐹, 𝑋 ).
Figure 2.11.: The Authenticity Game From [BHR12, Fig. 5].
2.6.5. Half-Gates
Zahur, Rosulek, and Evans [ZRE15] proposed an optimization to Yao's garbled circuits
that reduces the cost of each and-gate by half. This huge improvement is particularly
interesting as it can be combined with the free-xor optimization technique. The key idea
is to split a single and-gate into two “half-gates” that are easier to handle.
To understand the technique, one has to consider the following. Let us assume we want
to garble a gate 𝑐 = 𝑎 ∧ 𝑏. We further assume that the free-xor technique as in Section 2.6.4
is used. Then we have labels 𝐶, 𝐶 ⊕ 𝑅, 𝐴, 𝐴 ⊕ 𝑅, 𝐵, and 𝐵 ⊕ 𝑅 for this gate, where 𝑅 is
the free-xor offset and 𝐴, 𝐵, 𝐶 are the labels encoding zero. If we assume that the garbler
(somehow) already knows the value of 𝑎, it would be easy to garble the gate. If 𝑎 = 0, the
garbler could just garble a gate that outputs constant 0, and for 𝑎 = 1 the garbler could
garble an “identity” gate, i.e., a gate that always outputs 𝑏. So the garbler has to produce
two encryptions:

𝐻 (𝐵) ⊕ 𝐶
𝐻 (𝐵 ⊕ 𝑅) ⊕ 𝐶          (if 𝑎 = 0)
𝐻 (𝐵 ⊕ 𝑅) ⊕ 𝐶 ⊕ 𝑅      (if 𝑎 = 1)

This is the first “half-gate”. For the second “half-gate”, we adapt this idea to the evaluator
side. Consider again an and-gate 𝑐 = 𝑎 ∧ 𝑏. But this time, we assume that the evaluator
(somehow) already knows the bit 𝑎. If the evaluator knows the value of 𝑎, it can behave
differently in evaluating the circuit. For 𝑎 = 0, the evaluator has to receive the label 𝐶, as
the output is always zero. If 𝑎 = 1, the output of the gate depends on the value of 𝑏. It is
sufficient for the evaluator to learn the label Δ ≔ 𝐶 ⊕ 𝐵. By xoring either 𝐵 or 𝐵 ⊕ 𝑅 onto Δ,
the evaluator will receive the right output label, i.e., either 𝐶 or 𝐶 ⊕ 𝑅. This means the
“half-gate” of the evaluator is comprised of two encryptions:

𝐻 (𝐴) ⊕ 𝐶
𝐻 (𝐴 ⊕ 𝑅) ⊕ 𝐶 ⊕ 𝐵.

One can further use the garbled-row-reduction optimization [NPS99] to reduce the
number of encryptions for each of the “half-gates” to just one. To finally put those two
halves together, we use the fact that for any 𝑟 ∈ {0, 1} we have

𝑐 = 𝑎 ∧ 𝑏
  = 𝑎 ∧ ((𝑟 ⊕ 𝑟 ) ⊕ 𝑏)
  = (𝑎 ∧ 𝑟 ) ⊕ (𝑎 ∧ (𝑟 ⊕ 𝑏)).

If we let the garbler choose a uniformly random value 𝑟 ∈ {0, 1}, we can regard the
and-gate (𝑎 ∧ 𝑟 ) as the garbler's “half-gate”. Obviously, 𝑟 is known to the garbler. We can
further regard (𝑎 ∧ (𝑟 ⊕ 𝑏)) as the evaluator's “half-gate”, if we can transfer the value of
(𝑟 ⊕ 𝑏) to the evaluator. This can be done via the choice bit of the point-and-permute
technique, see [BMR90]. Intuitively, (𝑟 ⊕ 𝑏) does not leak any information about 𝑏 to the
evaluator, as 𝑏 is masked by the uniformly random value 𝑟 . The xor-gate that is used to
combine the two halves is free.
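The boolean identity underlying the split into two half-gates can be verified exhaustively:

```python
# Exhaustive check of the identity behind half-gates:
# a AND b = (a AND r) XOR (a AND (r XOR b)), for any masking bit r.
for a in (0, 1):
    for b in (0, 1):
        for r in (0, 1):
            garbler_half = a & r          # the garbler knows r
            evaluator_half = a & (r ^ b)  # the evaluator learns the masked bit r XOR b
            assert (garbler_half ^ evaluator_half) == (a & b)
```

Since r is uniform and independent of b, the evaluator's view r XOR b reveals nothing about b, matching the intuition above.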
We recall the details in Figure 2.12. For this figure, we adhered to the notation of [ZRE15]
and denote by 𝑥ˆ the vector (𝑥 0, . . . , 𝑥𝑛 ), for some 𝑛 ∈ N. Further, NextIndex is a stateful
procedure that simply increments an internal counter. Zahur, Rosulek, and Evans [ZRE15]
show that their scheme satisfies the simulation-based notions of obliviousness and privacy,
see Section 2.6.3.
2.7. Security of OPRFs
2.7.1. Simulation-Based Security
Freedman et al. [Fre+05] defined the security of OPRFs using the real-world/ideal-world
paradigm. They define two notions of OPRF, namely strong-OPRF and relaxed-OPRF (later
also called weak OPRF). The first definition requires that the user learns nothing about
the server's key. Though this is the intuitive property that we want from an OPRF, this
definition is too strong to capture some efficient protocols. For example, if the user receives
a value from the server and finally applies a hash function to that value to obtain the
final PRF output, the user obviously learned more from the server than just the PRF
output: it learned a hash-preimage of the PRF output. E.g., the constructions from Jarecki,
Krawczyk, and Xu [JKX18] and Jarecki et al. [Jar+16] or Kolesnikov et al. [Kol+16] do
not satisfy the strong-OPRF notion because of their application of a hash function. Thus,
[Fre+05] define a relaxed version of OPRF. [Fre+05] give only a brief definition of relaxed-OPRF.
We work out the details in the following:
Definition 14 (Relaxed-OPRF) [Fre+05, Def. 6] A two-party protocol 𝜋 between a user U
and a server S is said to be a relaxed-OPRF if there exists some PRF family 𝑓𝑘 such that 𝜋
correctly realizes the following functionality:
• Inputs: User holds an input 𝑥 ∈ X and server a key 𝑘 ∈ K,
• Output: User outputs 𝑓𝑘 (𝑥) and server outputs nothing,
and if the following properties hold:
• User privacy: There exists a PPT machine Sim such that for every key 𝑘 ∈ K and
every input 𝑥 ∈ X it holds that

{𝑣 | 𝑣 = viewS ⟨S(𝑘), U(𝑥)⟩𝜋 }𝜆 ≈𝑐 {𝑣 | 𝑣 ← Sim(1𝜆 , 𝑘)}𝜆 .

• Server privacy: We demand that for any malicious PPT adversary A playing the role of
the client there exists a PPT simulator Sim such that for all inputs ((𝑥 1, 𝑥 2, . . . , 𝑥𝑛 ), 𝑤)
it holds that

{(𝑣, 𝑓𝑘 (𝑥 1 ), 𝑓𝑘 (𝑥 2 ), . . . , 𝑓𝑘 (𝑥𝑛 )) | 𝑘 ←$ K, 𝑣 = outA ⟨S(𝑘), A (𝑤)⟩}
≈𝑐 {(Sim(𝑓𝑘 (𝑤)), 𝑓𝑘 (𝑥 1 ), 𝑓𝑘 (𝑥 2 ), . . . , 𝑓𝑘 (𝑥𝑛 )) | 𝑘 ←$ K},

where S is an honest server, viewP ⟨A(𝑥), B(𝑦)⟩𝜋 denotes the view of party P ∈
{A, B} when protocol 𝜋 is executed between A with input 𝑥 and party B with input 𝑦,
and outP ⟨A(𝑥), B(𝑦)⟩𝜋 denotes the output of party P in that interaction. □
Gb(1^𝜆 , 𝑓 ):
  R ←$ {0, 1}^(𝜆1) ∥ 1
  for i ∈ Inputs(𝑓 ) do
    W_i^0 ←$ {0, 1}^𝜆
    W_i^1 ≔ W_i^0 ⊕ R
    e_i ≔ W_i^0
  // In topological order
  for i ∉ Inputs(𝑓 ) do
    {a, b} ≔ GateInputs(𝑓 , i)
    if i ∈ XorGates(𝑓 ):
      W_i^0 ≔ W_a^0 ⊕ W_b^0
    else
      (W_i^0, T_G^i, T_E^i) ≔ GbAnd(W_a^0, W_b^0)
      F_i ≔ (T_G^i, T_E^i)
    endif
    W_i^1 ≔ W_i^0 ⊕ R
  for i ∈ Outputs(𝑓 ) do
    d_i ≔ lsb(W_i^0)
  return (F̂, ê, d̂)

private GbAnd(W_a^0, W_b^0):
  p_a ≔ lsb(W_a^0), p_b ≔ lsb(W_b^0)
  j ≔ NextIndex(), j′ ≔ NextIndex()
  // First half-gate
  T_G ≔ H(W_a^0, j) ⊕ H(W_a^1, j) ⊕ p_b·R
  W_G^0 ≔ H(W_a^0, j) ⊕ p_a·T_G
  // Second half-gate
  T_E ≔ H(W_b^0, j′) ⊕ H(W_b^1, j′) ⊕ W_a^0
  W_E^0 ≔ H(W_b^0, j′) ⊕ p_b·(T_E ⊕ W_a^0)
  // Combine halves
  W^0 ≔ W_G^0 ⊕ W_E^0
  return (W^0, T_G, T_E)

Ev(F̂, X̂):
  for i ∈ Inputs(F̂) do
    W_i ≔ X_i
  // In topological order
  for i ∉ Inputs(F̂) do
    {a, b} ≔ GateInputs(F̂, i)
    if i ∈ XorGates(F̂):
      W_i ≔ W_a ⊕ W_b
    else
      s_a ≔ lsb(W_a), s_b ≔ lsb(W_b)
      j ≔ NextIndex(), j′ ≔ NextIndex()
      (T_G^i, T_E^i) ≔ F_i
      W_G^i ≔ H(W_a, j) ⊕ s_a·T_G^i
      W_E^i ≔ H(W_b, j′) ⊕ s_b·(T_E^i ⊕ W_a)
      W_i ≔ W_G^i ⊕ W_E^i
    endif
  for i ∈ Outputs(F̂) do
    Y_i ≔ W_i
  return Ŷ

En(ê, x̂):
  for e_i ∈ ê do
    X_i ≔ e_i ⊕ x_i·R
  return X̂

De(d̂, Ŷ):
  for d_i ∈ d̂ do
    y_i ≔ d_i ⊕ lsb(Y_i)
  return ŷ

Figure 2.12.: The Procedures for Garbling a Function 𝑓 .
Functionality FAUTH
• Upon invocation, with input (Send, 𝑚𝑖𝑑, 𝑅, 𝑚) from 𝑆, send backdoor message
(Sent, 𝑚𝑖𝑑, 𝑆, 𝑅, 𝑚) to the adversary.
• Upon receiving backdoor message (ok, 𝑚𝑖𝑑): If not yet generated output, then
output (Sent, 𝑚𝑖𝑑, 𝑆, 𝑅, 𝑚) to 𝑅.
Figure 2.13.: The Ideal Functionality FAUTH From [Can00].
Functionality FRO
Upon receipt of a message 𝑥 ∈ 𝐴, if there is a record ⟨𝑥, 𝑦⟩, return 𝑦. Else draw 𝑦 ∈ 𝐵
uniformly at random, record ⟨𝑥, 𝑦⟩, and return 𝑦.
Figure 2.14.: The Ideal Functionality FRO .
2.7.2. Universally Composable OPRFs
2.7.2.1. Authenticated Channels
In this work we will use the notion of authenticated channels. Intuitively, this means that
a sender of a message can be sure that only the intended receiver (or no one, in case the
message is lost) receives the message. Additionally, the sender can be sure that the message
was not altered by an adversary. We demand that those requirements hold only as long
as both parties follow the protocol.
Canetti [Can00] defines authenticated communication via the ideal functionality depicted
in Figure 2.13.
2.7.2.2. The UC Framework and Random Oracles
A random oracle is an (over-)idealization of a hash function. Assuming the existence
of a random oracle often allows one to prove the security of cryptographic constructions that are
more efficient than their “plain-model” counterparts. In a real-world implementation,
the random oracle will be replaced by a cryptographic hash function. While there are
examples where this replacement does not preserve security (see [CGH98]), the random
oracle model is still regarded as a useful heuristic. A random oracle 𝐻 : 𝐴 → 𝐵 maps
elements from set 𝐴 to elements of set 𝐵. It can be queried by all parties. If the random
oracle receives an input query 𝑥 ∈ 𝐴 for the first time, it draws a uniformly random output
value 𝑦 ∈ 𝐵 and outputs this value. The oracle also stores the tuple ⟨𝑥, 𝑦⟩. If the random
oracle receives the query 𝑥 again, it outputs 𝑦 and does not draw a new value.
In the UC framework, the random oracle is modeled as an ideal functionality. We
describe such a functionality in Figure 2.14. However, for the sake of convenience, we will
notate the random oracle in our work like a “conventional hash function” and not like an
ideal functionality.
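The lazy-sampling behavior of a random oracle is straightforward to model. The class below is an illustrative sketch of FRO (names and output length are our own choices), not part of any framework.

```python
import os

class RandomOracle:
    """Lazy-sampling model of F_RO: a fresh query x gets a uniformly random
    answer y, the pair <x, y> is recorded, and repeated queries for x
    return the recorded y."""

    def __init__(self, out_len=32):
        self.table = {}        # records <x, y>
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            # First query for x: draw y uniformly at random and record it.
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]

ro = RandomOracle()
y = ro.query(b"password")
assert ro.query(b"password") == y   # consistent on repeated queries
assert ro.query(b"other") != y      # fresh queries are independent (w.h.p.)
```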
2.7.2.3. OPRF in the UC Model
We recall the security notion defined in [JKX18]. The security is defined in the UC
framework, see Section 2.4. We describe the ideal functionality FOPRF of [JKX18] in
Figure 2.15; in Section 3.2 we introduce a slightly simplified functionality that we use for
our construction.
The intuition of the functionality is that users interact with servers in several sessions.
A session is indexed by an id 𝑠𝑖𝑑 and belongs to one user and one server. An honest server
uses the same key for the whole session 𝑠𝑖𝑑. The user can request an output of the PRF by
interacting with the server in a subsession, identified by 𝑠𝑠𝑖𝑑. The user starts the request
of an output F𝑘 (𝑥) by sending (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S′, 𝑥) to FOPRF . S′ denotes the server from
which the user wants to get the output. In other words, the user specifies the function
𝑓𝑘 (·) from which the output should be taken, only that the user doesn't know the value 𝑘
but rather specifies the server that holds 𝑘. As we assume that a server only holds one 𝑘
for every session 𝑠𝑖𝑑, the ideal functionality denotes its internal function as F𝑠𝑖𝑑,S (·). The
function is seen as an initially empty table that gets lazily filled with randomly drawn
values.
A server can consent to the interaction with the user by sending (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′)
to FOPRF . Finally, the adversary can send (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, 𝑖) to FOPRF to indicate
that the user U can receive the requested output. However, FOPRF gives the adversary
the means to tamper with the output by specifying an identity 𝑖. This 𝑖 indicates from
which function F𝑠𝑖𝑑,𝑖 (·) the output should actually be chosen. If the adversary sends
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S′), where S′ is the server from the user's (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S′, 𝑥)
message, the interaction yields exactly the output that the user requested. But if 𝑖 ≠ S′,
the request is “detoured” and the user receives an output from a different table, namely
F𝑠𝑖𝑑,𝑖 (·). The identity 𝑖 does not need to correspond to an existing protocol party, but can
be any identity label, e.g., any bit string of a predefined length.
The above might give the impression that FOPRF undermines the security of OPRF proto-
cols realizing FOPRF : if the adversary could arbitrarily detour queries, it could, e.g., answer all
queries with just one function F𝑠𝑖𝑑,S (·). This problem is solved via the ticket counter tx(·).
With this counter, FOPRF keeps track of the number of OPRF outputs that a server consented
to generate and the number of outputs of that server that were actually delivered. Every time the server
consents to giving OPRF output by sending (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S), the counter tx(S)
is incremented. If an output from S is delivered to a user by a (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S)
message, the counter tx(S) is decremented. If the counter is zero but an output is requested
by a (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message, FOPRF ignores this message.
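The ticket-counter bookkeeping can be sketched as follows; the class and method names are our own illustration of the mechanism, not part of the functionality's formal description.

```python
from collections import defaultdict

class TicketCounter:
    """Sketch of the tx(.) bookkeeping in F_OPRF: each SndrComplete from a
    server adds one 'ticket'; delivering an output attributed to that server
    consumes one; delivery requests without remaining tickets are ignored."""

    def __init__(self):
        self.tx = defaultdict(int)

    def sndr_complete(self, server):
        # Server consents to one more OPRF evaluation: increment tx(S).
        self.tx[server] += 1

    def rcv_complete(self, server):
        # Adversary asks to deliver an output attributed to this server.
        if self.tx[server] == 0:
            return False   # ignored: no remaining consent from the server
        self.tx[server] -= 1
        return True        # output may be delivered

tc = TicketCounter()
tc.sndr_complete("S")
assert tc.rcv_complete("S") is True    # one ticket, one delivery
assert tc.rcv_complete("S") is False   # a second delivery is blocked
```

This caps the number of adversarially attributed outputs per server at the number of evaluations the server actually consented to.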
FOPRF also allows offline evaluation of functions, by sending (OfflineEval, 𝑠𝑖𝑑, 𝑖, 𝑥) to
FOPRF . This is possible in four cases:
1. If the server 𝑖 is corrupted. This models the fact that the adversary learns the PRF
key 𝑘 by corrupting a server. When the adversary knows 𝑘, it can evaluate F𝑘 (·) at
arbitrary points.
2. If the server itself wants to evaluate the function, it can do that, as it knows its own
key.
3. A real-world adversary can always just make up random output values. This is reflected
by the fact that the adversary can send offline evaluation requests for identities 𝑖
that do not correspond to an existing party. For these “virtual corrupt identities”, the
adversary can query output values arbitrarily often.
4. If the server is compromised, we are in a similar situation as in the case of corruption.
Note that FOPRF models several users and several servers interacting with each other.
This is rather unusual for a UC functionality, as it makes the security analysis more
complicated. However, modeling the functionality with only one user and one server
has a drawback. The 2HashDH construction by [Jar+16; JKK14] relies on two hash functions. More
formally speaking, 2HashDH UC-realizes FOPRF in the RO-hybrid model. Now, if different
users wanted to query pseudo-random values from the same server and thus the
same function F𝑘 (·), this would not be possible, as the random oracles 𝐻 1𝑠𝑖𝑑 , 𝐻 2𝑠𝑖𝑑 are different
for every session and thus the PRF F𝑘 (𝑥) = 𝐻 2𝑠𝑖𝑑 (𝑥, (𝐻 1𝑠𝑖𝑑 (𝑥))𝑘 ), too.
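To make the mechanics of 2HashDH concrete, the following toy sketch works in the multiplicative group Z_p^* with exponent blinding. The prime, the hash-to-group mapping, and all names are illustrative assumptions only, not a secure instantiation.

```python
import hashlib
import secrets
from math import gcd

# Toy 2HashDH sketch: F_k(x) = H2(x, H1(x)^k) in Z_p^* (p a Mersenne prime).
P = 2**127 - 1

def H1(x: bytes) -> int:
    # Toy hash-to-group: map x to an element of Z_p^*.
    return int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % (P - 1) + 1

def H2(x: bytes, g: int) -> bytes:
    return hashlib.sha256(b"H2" + x + g.to_bytes(16, "big")).digest()

def prf(k: int, x: bytes) -> bytes:
    # Direct evaluation, possible only with knowledge of the key k.
    return H2(x, pow(H1(x), k, P))

# Oblivious evaluation: the user blinds H1(x) with a random exponent r, the
# server exponentiates with k, the user unblinds with r^{-1} mod (P - 1).
k = secrets.randbelow(P - 1) + 1          # server's key
x = b"correct horse"
while True:
    r = secrets.randbelow(P - 1) + 1
    if gcd(r, P - 1) == 1:                # r must be invertible mod P - 1
        break
a = pow(H1(x), r, P)                      # user -> server (hides x)
b = pow(a, k, P)                          # server -> user (hides k)
y = pow(b, pow(r, -1, P - 1), P)          # user unblinds: H1(x)^k
assert H2(x, y) == prf(k, x)
```

The server sees only the uniformly blinded element a, and the user sees only H1(x)^k, which reveals the key no more than the PRF output itself does.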
Functionality FOPRF
Public Parameters: PRF output-length 𝑙, polynomial in the security parameter 𝜆.
Conventions: For every 𝑖, 𝑥, value F𝑠𝑖𝑑,𝑖 (𝑥) is initially undefined, and if an undefined value
F𝑠𝑖𝑑,𝑖 (𝑥) is referenced then FOPRF assigns F𝑠𝑖𝑑,𝑖 (𝑥) ←$ {0, 1}𝑙 .
Initialization:
On (Init, 𝑠𝑖𝑑) from S, if this is the first Init message for 𝑠𝑖𝑑, set tx = 0 and send
(Init, 𝑠𝑖𝑑, S) to A. From now on, use tag “S” to denote the unique entity which sent
the Init message for session id 𝑠𝑖𝑑. Ignore all subsequent Init messages for 𝑠𝑖𝑑.
Server Compromise:
On (Compromise, 𝑠𝑖𝑑, S) from A, mark S as Compromised. If S is corrupted, it is
marked as Compromised from the beginning. Note: Message (Compromise, 𝑠𝑖𝑑, S)
requires permission from the environment.
Offline Evaluation:
On (OfflineEval, 𝑠𝑖𝑑, 𝑖, 𝑥) from P ∈ {S, A}, send (OfflineEval, 𝑠𝑖𝑑, F𝑠𝑖𝑑,𝑖 (𝑥)) to P if
any of the following hold: (i) S is corrupted, (ii) P = S and 𝑖 = S, (iii) P = A and 𝑖 ≠ S,
(iv) P = A and S is marked as Compromised.
Evaluation:
• On (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S′, 𝑥) from P ∈ {U, A}, send (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S′) to A.
On prfx from A, ignore this message if prfx was used before. Else record
⟨𝑠𝑠𝑖𝑑, P, 𝑥, prfx⟩ and send (Prefix, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, prfx) to P.
• On (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from S, send (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S) to A.
On prfx′ from A, send (Prefix, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, prfx′) to S. If there is a record
⟨𝑠𝑠𝑖𝑑, P, 𝑥, prfx⟩ for P ≠ A and prfx ≠ prfx′, change it to ⟨𝑠𝑠𝑖𝑑, P, 𝑥, OK⟩. Else
set tx++.
• On (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, 𝑖) from A, ignore this message if there is no
record ⟨𝑠𝑠𝑖𝑑, P, 𝑥, prfx⟩ or if (𝑖 = S, tx = 0 and prfx ≠ OK). Else send
(EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, F𝑠𝑖𝑑,𝑖 (𝑥)) to P and if (𝑖 = S and prfx ≠ OK) then set tx−−.
Figure 2.15.: The Ideal Functionality FOPRF From [JKX18].
3. Construction
3.1. Adversarial Model
For the sake of clarity, we state our assumptions about the adversary:
We will implement an OPRF with garbled circuits. As “textbook versions” of garbled
circuits offer only security against passive, i.e., semi-honest adversaries, we will restrict
our construction to these adversaries. This means the adversary follows the protocol
honestly but tries to learn additional information from its view on the protocol execution.
Further, we restrict ourselves to a model of static corruption. This means the adversary
can choose to gain control over certain parties only at the start of the protocol. If a party is
corrupted, we assume that the adversary learns the party's input, the content of the party's
random tape, and all messages received by the party. The adversary can send messages in
the name of a corrupted party as long as the messages adhere to the protocol.
3.2. Security Notion
We will not use exactly the same formulation of the ideal OPRF functionality FOPRF defined
in Section 2.7. We will use a slightly simplified version, described in Figure 3.1. Note that
this version does not capture adaptive compromise, as we only assume static corruption. For
the sake of simplicity, we also omit the prefixes used in the original functionality.
3.3. The main construction
Let 𝑚, 𝑛 ∈ Ω(𝜆) and 𝐹 : {0, 1}𝑚 × {0, 1}𝑛 → {0, 1}𝑛 be a PRF with the additional property
that for every 𝑘 ∈ {0, 1}𝑚 it holds that 𝐹𝑘 (·) : {0, 1}𝑛 → {0, 1}𝑛 is a permutation. In our
real-world implementation, described in Chapter 5, we instantiate this function with AES.
We will garble the circuit C that describes 𝐹 to construct our OPRF.
The user runs with its password 𝑝𝑤 ∈ {0, 1}∗ as input. The password is hashed to an 𝑛-
bit value, so we can use it as input to C. Our construction involves two hash functions
𝐻 1 : {0, 1}∗ → {0, 1}𝑛 and 𝐻 2 : {0, 1}∗ × {0, 1}𝑛 → {0, 1}𝑙 , where 𝑙 ∈ Ω(𝜆). We will model
these hash functions as random oracles. The server takes no input. Initially, for each
session, it chooses a key 𝑘 ∈ {0, 1}𝑚 uniformly at random. The PRF that is computed by
the OPRF protocol is

F𝑘 (𝑝𝑤) ≔ 𝐻 2 (𝑝𝑤, C𝑘 (𝐻 1 (𝑝𝑤))).
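For reference, the function F_k can be computed directly by a party knowing k. In the sketch below, a hash-based Feistel permutation stands in for the AES circuit C_k (an assumption made for self-containedness, since the thesis instantiates C with AES), and the hash functions are SHA-256-based toys.

```python
import hashlib

N = 16  # block size in bytes, i.e., n = 128; the real construction uses AES

def H1(pw: bytes) -> bytes:
    # Toy stand-in for H1 : {0,1}* -> {0,1}^n
    return hashlib.sha256(b"H1" + pw).digest()[:N]

def H2(pw: bytes, y: bytes) -> bytes:
    # Toy stand-in for H2 : {0,1}* x {0,1}^n -> {0,1}^l (here l = 256)
    return hashlib.sha256(b"H2" + pw + y).digest()

def C(k: bytes, x: bytes) -> bytes:
    # Toy keyed permutation: a 4-round Feistel network with SHA-256 round
    # functions, standing in for AES_k(.); any Feistel network is invertible.
    L, R = x[:N // 2], x[N // 2:]
    for i in range(4):
        f = hashlib.sha256(bytes([i]) + k + R).digest()[:N // 2]
        L, R = R, bytes(u ^ v for u, v in zip(L, f))
    return L + R

def F(k: bytes, pw: bytes) -> bytes:
    # The PRF jointly computed by the OPRF: F_k(pw) = H2(pw, C_k(H1(pw)))
    return H2(pw, C(k, H1(pw)))

out = F(b"\x01" * 16, b"hunter2")
assert out == F(b"\x01" * 16, b"hunter2")   # deterministic in (k, pw)
assert out != F(b"\x02" * 16, b"hunter2")   # key-dependent (w.h.p.)
```

In the protocol, neither party evaluates F like this: the user never learns k and the server never learns pw; only the garbled evaluation of C connects the two hashes.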
In our description of the protocol, the server garbles the circuit and the user evaluates the
circuit. The user starts an execution of the protocol by hashing its input 𝑝𝑤. The obtained
value 𝑥 = 𝐻 1 (𝑝𝑤) will be used as the user's input to the circuit. The user then requests
Functionality FOPRF
Initialization:
For each value 𝑖 and each session 𝑠𝑖𝑑, the table 𝑇𝑠𝑖𝑑 (𝑖, ·) is initially undefined.
Whenever 𝑇𝑠𝑖𝑑 (𝑖, 𝑥) is referenced below while it is undefined, draw 𝑇𝑠𝑖𝑑 (𝑖, 𝑥) ←$ {0, 1}𝑙 .
On (Init, 𝑠𝑖𝑑) from S, if this is the first Init message for 𝑠𝑖𝑑, set tx(S) = 0 and send
(Init, 𝑠𝑖𝑑, S) to A. From now on, use “S” to denote the unique entity which sent the
Init message for 𝑠𝑖𝑑. Ignore all subsequent Init messages for 𝑠𝑖𝑑.
Offline Evaluation:
On (OfflineEval, 𝑠𝑖𝑑, 𝑖, 𝑥) from P ∈ {S, A}, send (OfflineEval, 𝑠𝑖𝑑,𝑇𝑠𝑖𝑑 (𝑖, 𝑥)) to P if
any of the following hold: (i) S is corrupted and 𝑖 = S, (ii) P = S and 𝑖 = S, (iii) P = A
and 𝑖 ≠ S.
Online Evaluation:
• On (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) from P ∈ {U, A}, record ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝𝑤⟩ and send
(Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S) to A.
• On (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from S, increment tx(S) or set to 1 if previously
undefined, send (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S) to A.
• On (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, 𝑖) from A, retrieve ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝𝑤⟩, where P ∈ {U, A}.
Ignore this message if at least one of the following holds:
There is no record ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝𝑤⟩.
𝑖 = S but tx(S) = 0.
S is honest but 𝑖 ≠ S.
Send (EvalOut, 𝑠𝑖𝑑, 𝑇𝑠𝑖𝑑 (𝑖, 𝑝𝑤)) to P. If 𝑖 = S set tx(𝑖)−−.
Figure 3.1.: The Ideal Functionality FOPRF Inspired by [JKX18].
a garbled circuit from the server by sending (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) to the server. The server
will proceed by calculating the garbled circuit, using half-gates, as described in Figure 2.12.
In particular, it encodes its own key as input for the circuit. It sends the garbled circuit,
the input labels of the key, and the decoding information to the user. The user and the
server perform 𝑛 parallel 1-out-of-2-OTs in order to equip the user with the wire labels
for its desired input 𝑥 = 𝐻 1 (𝑝𝑤). Next, the user can evaluate the garbled circuit on the
encoded inputs 𝑋 and 𝐾 and receives an output label 𝑌 . This label can be decoded to obtain
the output value of the circuit 𝑦. Finally, the user hashes its input and the output of the
circuit again to obtain the output 𝜌 = 𝐻 2 (𝑝𝑤, 𝑦). We describe the OPRF more precisely in
Figure 3.2. We denote by 𝑚𝑖𝑑 the session id of each FAUTH session, i.e., each sent message.
We assume that 𝑚𝑖𝑑 contains the session id 𝑠𝑖𝑑 and the subsession id 𝑠𝑠𝑖𝑑 as a substring.
When we talk about the labels generated by Gb, we will write 𝑋 [0] (or 𝑋 [1], resp.) to
denote that the label is an encoding of 0 (or 1, resp.). When 𝑏 ∈ {0, 1}𝑛 , we will also write
𝑋 [𝑏] to denote the string of labels 𝑋 [𝑏 1 ] ∥ . . . ∥ 𝑋 [𝑏𝑛 ].
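In the clear, the function the two parties jointly evaluate is 𝜌 = 𝐻 2 (𝑝𝑤, C𝑘 (𝐻 1 (𝑝𝑤))). The following minimal, non-oblivious reference computation uses SHA-256 as a stand-in both for the random oracles 𝐻 1, 𝐻 2 and for the PRF circuit C; all three stand-ins are our assumptions for illustration, not the instantiation used in the thesis:

```python
import hashlib

# Stand-ins (assumptions): the thesis models H1 and H2 as random oracles
# and C as the Boolean circuit of a PRF.
def H1(pw: bytes) -> bytes:
    return hashlib.sha256(b"H1|" + pw).digest()

def C(k: bytes, x: bytes) -> bytes:
    # Keyed stand-in for the PRF circuit C_k evaluated on x = H1(pw).
    return hashlib.sha256(b"C|" + k + b"|" + x).digest()

def H2(pw: bytes, y: bytes) -> bytes:
    return hashlib.sha256(b"H2|" + pw + b"|" + y).digest()

def oprf_output(k: bytes, pw: bytes) -> bytes:
    """What the user ends up with after evaluating the garbled circuit."""
    y = C(k, H1(pw))      # in the protocol, obtained by evaluating the garbled circuit
    return H2(pw, y)      # final hash binds the output to the user's input
```

In the protocol itself the user never sees 𝑘 and the server never sees 𝑝𝑤; the garbled circuit and the OTs replace the direct call to C.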
3.4. Some Remarks on the Construction
In the following, we give some remarks on the construction and explain decisions on the
protocol design.
Who garbles? We believe that the above-described approach could easily be adapted
to feature switched roles of garbler and evaluator. More precisely, we believe that it is
also possible to construct a similar OPRF protocol where the user garbles the circuit and
the server evaluates the circuit. However, we decided to let the server garble the circuit
because our construction only has passive security. If the protocol were implemented
in a real-world scenario, it is a more realistic assumption that a server behaves in an
honest-but-curious way than to assume that a user behaves that way. A server might be
maintained by a company that would fear economic damage if malicious behavior of its
servers were uncovered, while arbitrary users on the internet are likely to behave maliciously.
Nonetheless, we would always recommend using protocols that feature security against
active adversaries for real-world scenarios. If it were possible to achieve an actively
secure OPRF protocol from garbled circuits, it might even be beneficial to switch roles. If
the user has to “invest” computation time on the creation of a garbled circuit, this decreases
the threat of Denial-of-Service (DoS) attacks on the server.
On the Need for the Second Hash Function One might ask why we need a second hash
function 𝐻 2 in the definition of our pseudo-random function F𝑘 (𝑥) = 𝐻 2 (C𝑘 (𝐻 1 (𝑥))). At
first glance it even seems to weaken our results, as the construction in Figure 3.2 is
only a weak OPRF, see Section 2.7. One could conclude that if the user did not have to
hash the output of the garbled circuit, we would achieve a strong OPRF, as the user does
not learn anything more than the PRF output, instead of learning the 𝐻 2 pre-image of the
actual output. The pseudo-randomness would follow from the fact that C is a PRF. That
would lead to the OPRF described by Pinkas et al. [Pin+09]. The problem with this lies in
the definition of the ideal functionality, see Figure 2.15, and the strong notion of universal
3. Construction
S on (Init, 𝑠𝑖𝑑) from E
If this is the first (Init, 𝑠𝑖𝑑) message from E
𝑘 ←$ {0, 1}𝑚 , record ⟨𝑘, 𝑠𝑖𝑑⟩
U on (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) from E
𝑥 ← 𝐻 1 (𝑝𝑤)
send (Send, 𝑚𝑖𝑑, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) to FAUTH
S on (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from E
if already received (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) :
goto GarbleCircuit
else
ignore this message
S on (Sent, 𝑚𝑖𝑑, U, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) from FAUTH
if already received (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) :
GarbleCircuit :
if ∄ ⟨𝑘, 𝑠𝑖𝑑⟩ :
ignore this message
(𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , C)
(𝑋 [0𝑛 ] ∥ 𝐾) ≔ En(𝑒, 0𝑛 ∥ 𝑘)
(𝑋 [1𝑛 ] ∥ 𝐾) ≔ En(𝑒, 1𝑛 ∥ 𝑘)
send (Send, 𝑚𝑖𝑑′, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) to FAUTH
for 𝑖 ∈ {1, . . . , 𝑛} :
send (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])) to FOT
else
ignore this message
U on (Sent, 𝑚𝑖𝑑, S, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) from FAUTH
if already received (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) :
wait for (OT-Sent, (𝑠𝑠𝑖𝑑, 1)), . . . , (OT-Sent, (𝑠𝑠𝑖𝑑, 𝑛)) from FOT
for 𝑖 ∈ {1, . . . , 𝑛} :
send (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 ) to FOT
else
ignore this message
U on (OT-Received, (𝑠𝑠𝑖𝑑, 1), 𝑋 1 ), . . . , (OT-Received, (𝑠𝑠𝑖𝑑, 𝑛), 𝑋𝑛 ) from FOT
if already received (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)) :
𝑌 ≔ Ev(𝐹, 𝑋 ∥ 𝐾)
𝑦 ≔ De(𝑑, 𝑌 )
𝜌 ← 𝐻 2 (𝑝𝑤, 𝑦)
output (EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝜌) to E
else
ignore this message
Figure 3.2.: Our GC-OPRF Construction in the FOT, FRO, FAUTH -Hybrid Model.
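To make the Gb/En/Ev/De interfaces used in Figure 3.2 concrete, here is a toy garbling of a single AND gate, using the classical four-row construction with a redundancy tag rather than the half-gates scheme the server actually runs; all names and parameters are ours:

```python
import hashlib, os, random

def H(a: bytes, b: bytes) -> bytes:
    # Hash used as a key-derivation function for the row pads.
    return hashlib.sha256(a + b"|" + b).digest()

def Gb():
    """Garble one AND gate. Returns (F, e, d) in the garbling-scheme syntax."""
    wa, wb, wc = ([os.urandom(16), os.urandom(16)] for _ in range(3))
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            pad = H(wa[a], wb[b])
            plain = wc[a & b] + b"\x00" * 16   # output label plus 16-byte zero tag
            rows.append(bytes(p ^ q for p, q in zip(pad, plain)))
    random.shuffle(rows)                       # hide which row encodes which inputs
    e = (wa, wb)                               # encoding information
    d = {wc[0]: 0, wc[1]: 1}                   # decoding information
    return rows, e, d

def En(e, a: int, b: int):
    wa, wb = e
    return wa[a], wb[b]

def Ev(F, A: bytes, B: bytes) -> bytes:
    for row in F:
        dec = bytes(p ^ q for p, q in zip(H(A, B), row))
        if dec[16:] == b"\x00" * 16:           # redundancy tag identifies the right row
            return dec[:16]
    raise ValueError("no row decrypted")

def De(d, Y: bytes) -> int:
    return d[Y]
```

The evaluator learns only one label per wire, so it can open exactly one row; the decoding table 𝑑 then maps the resulting output label back to a bit.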
composability. We argue in an informal way why the Pinkas et al. [Pin+09] OPRF protocol
does not UC-realize the ideal functionality FOPRF from Figure 2.15.
For a newly queried value, FOPRF draws a fresh output value uniformly at random. This
means that the OPRF output of the real protocol must be indistinguishable from a truly
random function for every environment. Indeed, we assume that the garbled circuit is a
PRF so the output of the circuit should be indistinguishable from random values. But this
does not hold if the PRF key is known. Let us imagine an environment that corrupted a
server. That means the environment knows the key 𝑘 of that server. Next, the environment
could query a value χ = 𝐻 1 (𝑥) and, as the description of the circuit C is public, the
environment can calculate 𝑦 = C𝑘 (χ). Now the environment can start a protocol execution
between an honest user with input 𝑥 and the corrupted server with key 𝑘. In the ideal
world, the functionality FOPRF will draw a uniformly random value as output for the user.
However, that output will be independent of the output 𝑦 that the environment calculated
beforehand, making it easy to distinguish the real and the ideal world. So a simulator
needs some way to manipulate the output accordingly. One might think of programming
the RO for 𝐻 1 . However, this does not seem to suffice, as 𝐻 1 (𝑥) can only be programmed
once, while an environment could easily repeat the above experiment for several corrupted
servers with different keys but with the same input 𝑥. The solution we use is to introduce
the second hash function 𝐻 2 . This hash function allows the simulator to program the
output of the circuit to fit the outputs generated by FOPRF .
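The programming technique the simulator relies on can be pictured with a minimal lazily sampled, programmable random oracle; this sketch is ours and only illustrates why a point that has already been answered can no longer be programmed:

```python
import os

class ProgrammableRO:
    """Lazily sampled random oracle with explicit programming (16-byte outputs)."""
    def __init__(self):
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(16)   # fresh uniform output for new points
        return self.table[x]

    def program(self, x: bytes, value: bytes) -> bool:
        if x in self.table:                  # point already fixed: programming fails
            return False
        self.table[x] = value
        return True
```

Once 𝐻 1 (𝑥) has been answered, it cannot be re-programmed for a second corrupted key; 𝐻 2 , in contrast, takes the pair (𝑝𝑤, 𝑦) as input, so each key 𝑘 yields a distinct programmable point (𝑝𝑤, C𝑘 (𝐻 1 (𝑝𝑤))).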
On the Need for Authenticated Channels In the proof of security in Section 3.5, we assume
authenticated channels. This is necessary, as otherwise we could not rely on the semi-
honest nature of messages sent to the simulator. By assuming that all parties behave
honest-but-curious, we explicitly do not mean the adversary. In this model, the adversary
could still send, e.g., malformed circuits in lieu of the honestly generated circuit from the
server. To really get to a setting where the simulator can be sure of all the messages being
benign, we must make this additional assumption.
One could argue that the assumption of authenticated channels renders our construction
impractical for many settings. For instance, if the OPRF is used for password-based
authentication, as we discussed in Chapter 1, one might not necessarily expect to already
have an authenticated channel. But in fact, authenticated channels are already established
in many practical scenarios! Typically, a user would connect to a server over a TLS
channel, and thus, at least the server is authenticated via digital certificates. A user can
also authenticate itself to the server with a certificate. We even expect that the security of
our construction holds if only the server authenticates itself. This already guarantees that
the garbled circuit was actually generated by the party with which the user intends to
communicate.
If the server implements an OPRF protocol for its own password-based authentication
mechanism, our protocol is still useful. Imagine for example a typical internet forum.
Users will connect to the website via Hypertext Transfer Protocol Secure (HTTPS) but
then use a username and password to log in to their forum account. The big security
benefit that the users password is protected even if the server is compromised should
be motivation enough to use a protocol like OPAQUE [JKX18]. Clearly, a protocol that
assumes authenticated channels cannot be used to establish a TLS session. But TLS relies
mostly on a PKI and certificates instead of password-based authentication.
3.5. Proving Security
In order to prove that GC-OPRF actually UC-emulates FOPRF in the FOT, FRO, FAUTH -hybrid
model, we have to compare the views of two protocol executions. More precisely, for every
adversary A we must specify a simulator Sim such that for every environment E we have:
EXEC_{IDEAL_{FOPRF}, Sim, E} ≈𝑐 EXEC_{GC-OPRF, A, E} ,
where IDEALFOPRF denotes the ideal protocol execution.
As discussed in Section 2.4, we will only consider a Dummy-Adversary A. We construct
the simulator as in Figures 3.5 to 3.8. For the sake of readability, we split the description of
Sim into four figures. We denote parties with a hat, e.g. P̂, if it is clear from the context
that they are corrupted. We write ∃⟨𝑟⟩ as shorthand for “Sim checks if a record ⟨𝑟⟩ exists”.
Some Intuition on the Simulator Before we give a formal proof, we would like to give some
intuition on the simulator in Figures 3.5 to 3.8. First, note that in the formulation of
the UC security experiment in Section 2.4, the simulator Sim replaces the adversary A.
That means all messages the environment sends to A will be received by Sim. We also
assume that the real-world adversary A is a dummy adversary, as elaborated in Section 2.4.
Nonetheless, we write in Figures 3.5 to 3.8 as if there was a party “A”. By this we mean
the messages Sim receives from E addressed to A or messages that Sim sends to E acting
as A.
As always in the UC model, the simulator answers all queries addressed to ideal function-
alities that were present in the real world. As we are working in the FOT, FRO, FAUTH -hybrid
model, Sim has to simulate FOT and FAUTH . Remember that a random oracle is strictly
speaking an ideal functionality, too. We just do not notate it like that for the sake of
convenience for the reader. Thus, Sim must also answer queries to the random oracles 𝐻 1 ,
and 𝐻 2 . In the ideal world of the UC security experiment all honest parties just forward
the input they receive from the environment E to the ideal functionality. If they receive
output from the ideal functionality, they forward this output to E. However, the adversary
can send messages on behalf of corrupted parties, meaning the adversary gets instructed
to do so by the environment.
From a high-level viewpoint, the simulator can be summarized as follows: For honest servers,
the simulator chooses internally a PRF key 𝑘 and follows the protocol exactly as a real
server would do with key 𝑘. For an honest user, the simulator requests a garbled circuit
from the server and simulates the request of input labels via OT. Note here, that Sim
does not know the input of the user. It can simulate the messages anyway as Sim does
also act as FOT . Then Sim receives a garbled circuit and input labels but for every input
bit 𝑥𝑖 ∈ {0, 1}, Sim receives both labels 𝑋𝑖 [0] and 𝑋𝑖 [1], again because Sim simulates FOT .
Sim requests an output for the user from FOPRF . Now, FOPRF makes the user output some
uniformly random value, and Sim programs 𝐻 2 (𝑝, 𝑦) accordingly. As we will see, correct
programming is non-trivial.
𝐻 2 must be programmed because the output of a user in the real world is always the
output of 𝐻 2 (𝑝, 𝑦) for some values 𝑝 and 𝑦. However, in the ideal world, the output for
honest users is generated by the ideal functionality FOPRF . Hence, the simulator must
ensure that the output generated by FOPRF and 𝐻 2 (𝑝, 𝑦) coincide for values of 𝑝 and 𝑦
that can occur in an execution of the protocol. Sim can query output from FOPRF but this
has to be done carefully, as FOPRF maintains a ticket counter that ensures that no more
PRF values can be received than server executions were performed. Especially, Sim must
somehow identify a server holding the key 𝑘 that mapped 𝑝 to 𝑦 = C𝑘 (𝐻 1 (𝑝)). We argue
in the following, why Sim has to do this.
Let us assume Sim would always choose the same server identity 𝑖 to receive its out-
put from. Clearly, FOPRF would ignore the requests of Sim as soon as E queried
(Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S′, 𝑥) for a server S′ ≠ 𝑖 for which E also sent (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S′).
This is because (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S′) increments the ticket counter tx(S′) of FOPRF
by one. In contrast, if Sim queries output from FOPRF by sending (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, 𝑖),
that decrements the ticket counter tx(𝑖) of 𝑖 and not tx(S′). Remember that FOPRF ignores a
RcvCmplt request when a ticket counter tx(𝑖) would be decremented below 0 for some 𝑖.
One might be tempted to try the other extreme instead. What happens if the simulator
uses a completely new identity 𝑖 for each and every new query? We will call this 𝑖 a
“corrupt virtual identity”. By that we mean the following: Sim can query PRF output for
a server identity 𝑖 from FOPRF if this 𝑖 is no identity of an actual server of the session.
See OfflineEval point (iii) in Figure 3.1. These corrupt virtual identities do not have a
ticket counter. By sending (OfflineEval, 𝑠𝑖𝑑, 𝑖, 𝑥) to FOPRF , the simulator receives
the entry 𝑇𝑠𝑖𝑑 (𝑖, 𝑥) from FOPRFs table. This identity 𝑖 must not correspond to an actually
existing server in that session 𝑠𝑖𝑑. Why cant Sim create a new such corrupt virtual identity
for every 𝐻 2 query it receives? Consider the following counter example:
E chooses a corrupted server S with key 𝑘 ∈ {0, 1}𝑚 . The circuit C is publicly known,
so E can precompute C𝑘 (𝑥 0 ) = 𝑦0 and C𝑘 (𝑥 1 ) = 𝑦1 where 𝑥 0 = 𝐻 1 (𝑚 0 ) and 𝑥 1 = 𝐻 1 (𝑚 1 )
for two messages 𝑚 0, 𝑚 1 ∈ {0, 1}∗ . Now E lets A query 𝐻 2 (𝑚 0, 𝑦0 ) from Sim. Note that
there was no protocol execution so far and hence, Sim has neither received nor calculated a
garbled circuit (𝐹, 𝐾, 𝑑). As we assumed in the beginning, Sim now creates a new “virtual
corrupt identity”. In other words, Sim creates a new identity 𝑖 for which no prior queries
to FOPRF exist. Now, Sim sends (OfflineEval, 𝑠𝑖𝑑, 𝑖, 𝑚 0 ) to FOPRF . As 𝑖 is no identity
of an actual server, FOPRF will answer with (OfflineEval, 𝑠𝑖𝑑, 𝜌 0 ≔ 𝑇𝑠𝑖𝑑 (𝑖, 𝑚 0 )). Like
we assumed in the beginning, Sim programs 𝐻 2 (𝑚 0, 𝑦0 ) ≔ 𝜌 0 . Now E repeats this for
𝐻 2 (𝑚 1, 𝑦1 ). Sim will query (OfflineEval, 𝑠𝑖𝑑, 𝑖′, 𝑚 1 ) to FOPRF for 𝑖′ ≠ 𝑖 and will receive
(OfflineEval, 𝑠𝑖𝑑, 𝜌 1 ≔ 𝑇𝑠𝑖𝑑 (𝑖′, 𝑚 1 )) and set 𝐻 2 (𝑚 1, 𝑦1 ) ≔ 𝜌 1 . Next, suppose E starts a
protocol execution between the server S and an honest user U with input 𝑚𝑏 , where
𝑏 ∈ {0, 1} is a secret bit, only E knows. As S is corrupted, E will send a garbled circuit
(𝐹, 𝐾, 𝑑) and input labels 𝑋 1 [0], 𝑋 1 [1], . . . , 𝑋𝑛 [0], 𝑋𝑛 [1] to Sim. Sim has no information
about 𝑥𝑏 , as the honest Us input is “protected” by the security of the OT protocol, see
Figure 2.7, and the privacy of the garbled circuit, see Definition 11. However, Sim must
produce an output for U by sending some message (RcvCmplt, . . . ) to FOPRF , because an
honest user in the real world would also output something after receiving the garbled circuit
and the labels. Sim could create a new “virtual corrupt identity” 𝑖″. However, FOPRFs
answer 𝑇𝑠𝑖𝑑 (𝑖″, 𝑚𝑏 ) would be different from 𝜌𝑏 with high probability, as 𝑇𝑠𝑖𝑑 (𝑖″, 𝑚𝑏 ) is a
39
3. Construction
uniformly random value. Alternatively, Sim could go through all prior 𝐻 2 (·, ·) queries and
check for each query (𝛼, 𝛽) if 𝛽 = De(𝑑, Ev(𝐹, 𝑋 [𝐻 1 (𝛼)] ∥ 𝐾)). Intuitively, that indicates
that the key 𝐾 “maps” 𝐻 1 (𝛼) to 𝛽, i.e., C𝑘 (𝐻 1 (𝛼)) = 𝛽 if 𝐾 encodes 𝑘. Sim would find that
it already received a query 𝐻 2 (𝑚𝑏 , 𝑦𝑏 ), such that 𝑦𝑏 = De(𝑑, Ev(𝐹, 𝑋 [𝐻 1 (𝑚𝑏 )] ∥ 𝐾)). The
problem is that Sim would also find the second query 𝐻 2 (𝑚 1−𝑏 , 𝑦1−𝑏 ) for which it holds
that 𝑦1−𝑏 = De(𝑑, Ev(𝐹, 𝑋 [𝐻 1 (𝑚 1−𝑏 )] ∥ 𝐾)). In that case, Sim must guess 𝑏. Because if
𝑏 = 0, the result must be queried as (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, 𝑖) and if 𝑏 = 1, the result must
be queried as (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, 𝑖′). If Sim guesses wrong, E can distinguish this
protocol execution from a real execution: if we denote by ⟨U(𝑚𝑏 ), S⟩U the output
of U on input 𝑚𝑏 when interacting with server S, E sees that 𝐻 2 (𝑚𝑏 , 𝑦𝑏 ) ≠ ⟨U(𝑚𝑏 ), S⟩U .
As Sim has no information about 𝑏, this happens with probability 1/2. This example makes
clear why care must be taken when programming the random oracle 𝐻 2 .
Our strategy for programming 𝐻 2 (𝑝, 𝑦) is the following: If Sim receives a query, it looks
up the corresponding 𝐻 1 query 𝐻 1 (𝑝) = χ. If no such query exists, Sim can safely set
𝐻 2 (𝑝, 𝑦) to a uniformly random value. If such a query exists, Sim knows the input value
χ for the circuit. Now, it checks if there either was an honest server or a corrupted server,
such that 𝑦 = C𝑘 (χ) holds for the key 𝑘 of one of the servers. For an honest server, Sim
requests the output value from FOPRF by sending a RcvCmplt message and for a corrupted
server, Sim requests the output value from FOPRF with an OfflineEval message.
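This three-case dispatch can be sketched as follows; the record shapes and the two callbacks into FOPRF are simplifying assumptions of ours, with eval_via_foprf standing for the Eval/RcvCmplt round trip and offline_eval for the OfflineEval query:

```python
import os

def program_H2(p, y, H1_table, honest_records, corrupted_keys, C,
               eval_via_foprf, offline_eval):
    """Decide how Sim programs H2(p, y); returns the programmed value.

    H1_table:       dict p -> chi (already answered H1 queries)
    honest_records: list of (server_id, decode_and_eval) where
                    decode_and_eval(chi) = De(d, Ev(F, X[chi] || K))
    corrupted_keys: dict server_id -> key k (known under static corruption)
    C:              the public PRF circuit, called as C(k, chi)
    """
    chi = H1_table.get(p)
    if chi is None:
        return os.urandom(16)                 # Case 1: H1(p) undetermined, fresh value
    for server_id, decode_and_eval in honest_records:
        if decode_and_eval(chi) == y:         # Case 2: honest server's circuit maps chi to y
            return eval_via_foprf(server_id, p)
    for server_id, k in corrupted_keys.items():
        if C(k, chi) == y:                    # Case 3: corrupted server's key maps chi to y
            return offline_eval(server_id, p)
    return os.urandom(16)                     # irrelevant pair: fresh value
```

The two FOPRF callbacks are where the ticket-counter discipline discussed above matters: only the Case 2 path spends an honest server's ticket.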
Proof Strategy In the ideal world the environment can control the execution by sending
messages to the parties in the following ways:
• Honest user U: The environment E sends (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) messages to U. User
U transmits this message to FOPRF and outputs (EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝜌) to E.
• Honest server S:
 E sends (Init, 𝑠𝑖𝑑) to S. Server S transmits this message to FOPRF , which sends
(Init, 𝑠𝑖𝑑, S) to A.
E sends (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) to S. Server S forwards this message to
FOPRF . The functionality FOPRF forwards this message to A.
• Dummy adversary A:
The environment can send (Send, 𝑚𝑖𝑑, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)), and (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 )
to A. The adversary A acts as corrupted user Û and forwards these messages
to Sim. A sends all responses it receives to E.
 The environment can send (Send, 𝑚𝑖𝑑, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))), and (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1]))
to A. The adversary A acts as corrupted server Ŝ and A forwards these mes-
sages to Sim. Again, A sends all responses it receives to E.
 The environment can send (OT-Sent, (𝑠𝑠𝑖𝑑, 𝑖)), (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖)), and
(ok, 𝑚𝑖𝑑) to A. The adversary A will send these messages to Sim, acting as
adversary.
The view of the environment E is comprised of all messages that E receives as a reaction
to one of the messages above. The following messages form the view of the environment:
• (EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝜌) from U as response to an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) message.
• (Sent, 𝑚𝑖𝑑, U, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) from A when A acts as server and receives this
message, formatted as being sent from a user via FAUTH .
• (Sent, 𝑚𝑖𝑑, S, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) from A when A acts as user and receives this
message, formatted as being sent from a server via FAUTH .
• (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖)) from A when a server sends two messages to Sim, who acts as
FOT .
• (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) from A when a user sends a choice bit to Sim, who acts as
FOT .
• (OT-Sent, 𝑠𝑖𝑑) from A when A acts as server and sent (OT-Send, 𝑠𝑖𝑑, (𝑋 0, 𝑋 1 )) to
Sim before. Sim acts as FOT .
• (OT-Received, 𝑠𝑖𝑑, 𝑥𝑏 ) from A when A acts as user and sent (OT-Receive, 𝑠𝑖𝑑, 𝑏)
to Sim before. Sim acts as FOT .
• Responses to 𝐻 1 (·) and 𝐻 2 (·, ·) queries from A.
Our goal in the following proof is to argue why the above-described view of the
environment in the ideal world is computationally indistinguishable from the view of the
environment in the real world. We construct a simulator such that each message in the
real world has a directly corresponding message in the ideal world. Loosely speaking,
the simulator creates messages that “look the same” as in the real world. For instance,
Sim sends a message (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) that is formatted exactly like a (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)
message sent by the user in the real world. Further, Sim ensures that the messages
are sent in the same circumstances, i.e., at the same time. For example, Sim will send
(Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) when an honest user is invoked by E with (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤), as
this is how the real-world user would react. The main idea is that the view in the real
world is indistinguishable from the view in the ideal world, if each message in the real
world is indistinguishable from its corresponding message in the ideal world.
We cannot analyze the protocol with a single distinction of cases in the style of “(1) both
parties are honest, (2) only user is corrupted, (3) only server is corrupted, (4) both parties
are corrupted.” This is because the ideal functionality in Figure 3.1 and also the one in Figure 2.15 from
[JKX18] handle multiple users interacting with multiple servers. Therefore, we will only
consider one simulator Sim that has to keep records of messages it gets to “dynamically”
decide for each message which situation Sim must simulate.
Formal Proof
Theorem 1. Let the garbling scheme G = (Gb, En, De, Ev, ev) have privacy, as defined in
Definition 11. Let C denote the boolean circuit of a PRF. Then GC-OPRF UC-realizes FOPRF
in the FOT, FAUTH, FRO -hybrid model.
Proof. As explained above, we will argue for each message that E receives why it is
indistinguishable for E whether the message comes from a real protocol execution or the
ideal execution with the simulator.
Responses to OT messages
• (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖)) from A when a server sends two messages to Sim:
This message is exactly formatted as an OT-Send message from the functional-
ity FOT . Further, Sim behaves exactly like FOT in sending those messages. Con-
cretely, on a message (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])), Sim stores the labels and
informs the adversary that two labels were sent but not which labels by sending
(OT-Send, (𝑠𝑠𝑖𝑑, 𝑖)) to A. This is exactly the behavior of FOT , as described in Fig-
ure 2.7. Therefore, E cannot distinguish whether this message comes from the real-
or the ideal execution.
• (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) from A when a user sends a choice bit to Sim:
A similar comparison as above shows that Sim behaves exactly like the original FOT .
That means, on a message (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑏), Sim stores the choice bit 𝑏 and
informs the adversary that a choice bit was received, but not which bit. Therefore, E
cannot distinguish whether this message comes from the real or the ideal execution.
• (OT-Sent, 𝑠𝑖𝑑) from A when A acts as server and sent (OT-Send, 𝑠𝑖𝑑, (𝑋 0, 𝑋 1 )) to
Sim before:
Again, Sim behaves like FOT when creating those messages. Namely, upon receiving
a message (OT-Sent, 𝑠𝑖𝑑) from the adversary, Sim ignores the message if ⟨𝑠𝑖𝑑, 𝑥 0, 𝑥 1 ⟩
or ⟨𝑠𝑖𝑑, 𝑏⟩ is not recorded; otherwise Sim sends (OT-Sent, 𝑠𝑖𝑑) to Ŝ. Therefore, E
cannot distinguish whether this message comes from the real or the ideal execution.
• (OT-Received, 𝑠𝑖𝑑, 𝑥𝑏 ) from A when A acts as user and sent (OT-Receive, 𝑠𝑖𝑑, 𝑏)
to Sim before:
These are the only messages on which Sim sometimes behaves differently than
FOT . The messages are received by Sim when the adversary “allows the delivery” of
OT-messages to the OT-receiver. If Sim recorded a choice bit 𝑥𝑖 ≠ ⊥, it means that
A sent (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 ) before and Sim answers the query like FOT would
do. In particular, those queries do not stem from the simulation of a protocol run
with an honest user.
Sim does behave differently than FOT in the case when there are 𝑛 OT-Received
messages (OT-Received, (𝑠𝑠𝑖𝑑, 1)), . . . , (OT-Received, (𝑠𝑠𝑖𝑑, 𝑛)) with the same value
𝑠𝑠𝑖𝑑, see line 71 in Figure 3.7. The condition means that a complete set of input labels
was sent via OT. If further a record ⟨𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)⟩ exists with the same 𝑠𝑠𝑖𝑑,
all information for one OPRF execution was exchanged between server and user. We
stress that we assume in this proof that a corrupted server will never send a modified
circuit 𝐹 0 ≠ 𝐹 , modified decoding information 𝑑 0 ≠ 𝑑, or a modified encoded key
𝐾 0 ≠ 𝐾, where 𝐹, 𝐾, 𝑑 are the outputs of Gb and En. Otherwise, the adversary could
easily garble a different circuit than 𝐹 without the user noticing it. This weakness
is inherent to “textbook” garbled circuit constructions, see Section 2.6. Further, we
know that these labels belong to an interaction with an honest user, as no value
𝑥𝑖 ≠ ⊥ was recorded. In the real protocol, a user would evaluate the garbled circuit
and output the result as soon as it received all necessary input labels via FOT . Thus,
the simulator must also produce an output for honest users. The simulator retrieves
the server identity S connected to 𝑠𝑠𝑖𝑑. Sim sends (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) to
FOPRF . The functionality FOPRF will ignore this message in any of the three following
cases:
1. There is no record ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝⟩.
2. 𝑖 = S but tx(S) = 0.
3. S is honest but 𝑖 ≠ S.
The ignore condition of Item 1 cannot occur, as Sim found a record ⟨𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)⟩.
Sim does only create this record if a corresponding ⟨Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩ record was
found. That record in turn is only created when an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message
was received from FOPRF . We argue in Lemma 1 why the condition of Item 2 occurs
at most with negligible probability. The third condition in Item 3 can indeed occur.
However, as we assume passive corruption and authenticated channels, a real-world
user would also ignore a message (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)) that is not from the designated
server.
If the RcvCmplt message is not ignored in line 74 in Figure 3.7, the ideal functionality
will choose a random value 𝜌 according to its internal random function associated
to S as output for U and U will output 𝜌. We will examine the distribution of 𝜌 in
the paragraph “Honest User Output” on Page 45.
Responses to Protocol Messages
• (Sent, 𝑚𝑖𝑑, U, Ŝ, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) from A when A acts as server and receives this
message, formatted as being sent from a user via FAUTH :
Sim simulates the behavior of FAUTH , meaning it informs A that a message is being
sent via FAUTH and waits for the delivery until A sent (ok, 𝑚𝑖𝑑). The message
(Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) is sent by Sim as a reaction to an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, Ŝ) message
from FOPRF , because a real user would also start a protocol execution by requesting a
garbled circuit from Ŝ. The message itself contains only the session- and subsession id;
it is identical in both executions. Thus, we see that Es view on the (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)
message in the real world is indistinguishable from this message created by Sim.
• (Sent, 𝑚𝑖𝑑, S, Û, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) from A when A acts as user and receives this
message, formatted as being sent from a server via FAUTH :
This message is created by Sim, when Sim received a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) mes-
sage. If the user of the subsession is corrupted, Sim also expects a (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)
message from the user, as in a real execution, the server only starts garbling a circuit
when it received both messages. In a subsession with an honest user, Sim can simulate
a (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) message itself. The garbled circuit 𝐹 and the decoding infor-
mation 𝑑 are calculated in the same way in both worlds, using Gb(1𝜆 , C). The only
difference is the encoded key 𝐾. In the ideal world, 𝐾 is an encoding of a random
value 𝑘, which is chosen for the honest server S by Sim. In the real world, 𝐾 is an
encoding of the PRF-key 𝑘 of that server. However, in both cases, 𝑘 is a uniformly
random value in {0, 1}𝑚 and in both experiments, 𝑘 is encoded via En. Therefore,
the two experiments are distributed identically.
Responses of the Random Oracles:
𝐻 1 (·) queries:
In the real world, a random oracle chooses a uniformly random output for every
fresh query and stores this random value as “hash” of the input. On further queries,
that stored value is returned. The simulator answers the calls to 𝐻 1 exactly as a real
random oracle would do, with uniformly random values ∈ {0, 1}𝑛 .
𝐻 2 (·, ·) queries:
In the following, we will only argue why the simulated 𝐻 2 is indistinguishable from
the original 𝐻 2 in the real execution. As we have seen at the beginning of Section 3.5,
the random oracle 𝐻 2 must also be compared to the users output. We defer this
discussion to the next paragraph. We distinguish the following cases:
Case 1: There is no record ⟨𝐻 1, 𝑝, χ⟩ found: The random oracle is programmed with a
uniformly random value. In this case, Sim behaves like the real random oracle.
Case 2: Records ⟨𝐻 1, 𝑝, χ⟩ and ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩ exist, such that De(𝑑, Ev(𝐹, 𝑋 [χ] ∥ 𝐾)) =
𝑦: In that case, the value 𝑦 was calculated with the garbled circuit of an honest
server, with overwhelming probability. That means the simulator can query
FOPRF for the correct output value by choosing an unused subsession id 𝑠𝑠𝑖𝑑′
and calling (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, S, 𝑝) and (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S). If the ideal func-
tionality does not answer, Sim aborts. Remember that FOPRF does only ignore
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, 𝑖) messages in one of the following three cases:
a) There is no record ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝⟩.
b) 𝑖 = S but tx(S) = 0.
c) S is honest but 𝑖 ≠ S.
We prove in Lemma 1 that the condition in Item b) happens at most with negligi-
ble probability. Further, the first and the third abort conditions, Items a) and c),
can not occur in this case, as Sim itself sends the message (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, S, 𝑝)
to FOPRF just before sending (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S).
𝐻 2 (𝑝, 𝑦) is then programmed to the output 𝜌 of FOPRF . This is, by the definition
of FOPRF , a uniformly random value.
Case 3: There is a record ⟨𝐻 1, 𝑝, χ⟩ but no record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩
exists, such that De(𝑑, Ev(𝐹, 𝑋 [χ] ∥ 𝐾)) = 𝑦: In that case, Sim checks the keys of
all corrupted parties 𝑘 Ŝ . Note that Sim knows those keys, as we assume static
corruption only and the adversary learns all the randomness of a corrupted
party. If there is such a corrupted server with key 𝑘 Ŝ such that C𝑘Ŝ (χ) = 𝑦, the
simulator can use its ability to offline evaluate PRFs of corrupted parties.
Thus, Sim will program 𝐻 2 (𝑝, 𝑦) to the output of the offline evaluation. This
will, again, be a uniformly random value 𝜌 ∈ {0, 1}𝑙 . If no such key exists,
𝐻 2 (𝑝, 𝑦) is set to a uniformly random value, as from a real random oracle.
Honest User Output (EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝜌) from U as response to an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤)
message:
In the real world, 𝜌 is calculated as 𝜌 = 𝐻 2 (𝑝, De(𝑑, Ev(𝐹, 𝑋 k 𝐾))), where (𝐹, 𝐾, 𝑑) was
generated by the server and 𝑋 are the labels received via OT for 𝑥 = 𝐻 1 (𝑝). In the ideal
world, 𝜌 is chosen uniformly at random by FOPRF if a fresh (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝) message
was sent. Remember that FOPRF keeps an internal table 𝑇𝑠𝑖𝑑 (𝑖, ·) for possible server IDs 𝑖. If
an honest user with input 𝑝 interacts with S, the functionality FOPRF will send 𝜌 = 𝑇𝑠𝑖𝑑 (S, 𝑝)
as output for the honest user. The simulator must produce the same output 𝜌 for 𝐻 2 (𝑝, 𝑦)
if 𝑦 = C𝑘 (𝐻 1 (𝑝)) holds for Ss key 𝑘. We therefore have to compare the output of 𝐻 2 with
the outputs of FOPRF . We distinguish the following cases in simulation of 𝐻 2 :
Case 1: There is no record ⟨𝐻 1, 𝑝, χ⟩ found: Sim only needs to program the random oracle
if 𝑝 and 𝑦 do occur in a protocol execution. More precisely, if 𝑦 = C𝑘 (𝐻 1 (𝑝)) holds
for some servers key 𝑘. That is because in this case FOPRF can eventually output a
value 𝜌 as the output of an honest user with input 𝑝 interacting with a server with
key 𝑘. In other words, if there is a server with key 𝑘 such that 𝑘 “maps” 𝐻 1 (𝑝) to
𝑦, then there can be a protocol execution that leads to a query 𝐻 2 (𝑝, 𝑦) where Sim
must program 𝐻 2 . We will call a query (𝑝, 𝑦) relevant if there is a server with key 𝑘,
such that 𝑦 = C𝑘 (𝐻 1 (𝑝)). In the following, we bound the probability for the event
that (𝑝, 𝑦) becomes relevant, when 𝐻 1 (𝑝) is not determined yet.
Let 𝑡 ∈ N be the number of servers in the protocol execution. Let 𝑘 1, . . . , 𝑘𝑡 be the
keys used by the servers and let 𝑛 ∈ Ω(𝜆) be the output length of C. We assumed
in the beginning that C𝑘𝑖 (·) is a permutation for every 𝑖 ∈ {1, . . . , 𝑡 }. Thus, if we
choose some uniformly random input 𝑥 ∈ {0, 1}𝑛 , we get that C𝑘𝑖 (𝑥) ∈ {0, 1}𝑛 is
uniformly random. If 𝐻 1 (𝑝) is not queried yet, we have for every 𝑖 ∈ {1, . . . , 𝑡 } and
every 𝑦 ∈ {0, 1}𝑛 :
Pr [C𝑘𝑖 (𝐻 1 (𝑝)) = 𝑦] ≤ 1/2𝑛 ,
where the probability is taken over the random output of 𝐻 1 . This follows from the
fact that C𝑘𝑖 (·) is a permutation.
We have for every tuple (𝑝, 𝑦) ∈ {0, 1}∗ × {0, 1}𝑛 where 𝐻 1 (𝑝) was not queried yet:
Pr [(𝑝, 𝑦) becomes relevant] = Pr [⋃𝑖=1,...,𝑡 (C𝑘𝑖 (𝐻 1 (𝑝)) = 𝑦)]
    ≤ ∑𝑖=1,...,𝑡 Pr [C𝑘𝑖 (𝐻 1 (𝑝)) = 𝑦]
    = 𝑡 · Pr [C𝑘1 (𝐻 1 (𝑝)) = 𝑦]
    ≤ 𝑡/2𝑛 ,
3. Construction
where the probability is taken over the randomness of 𝐻 1 (𝑝). As 𝑡 is polynomial in
𝜆 and we assume 𝑛 ∈ Ω(𝜆), a tuple (𝑝, 𝑦) becomes relevant at most with negligible
probability if 𝐻 1 (𝑝) was not queried yet. Thus, Sim can assign a uniformly random
value to 𝐻 2 (𝑝, 𝑦).
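Since each C𝑘𝑖 (·) is a permutation, a fixed output 𝑦 has exactly one preimage, so a uniformly random 𝑥 = 𝐻 1 (𝑝) hits 𝑦 with probability exactly 1/2^𝑛, and the union bound over 𝑡 keys yields 𝑡/2^𝑛. A minimal Python sketch of this counting argument on toy parameters (random permutations stand in for C𝑘𝑖 ; all names and sizes are illustrative):

```python
import random

n = 8                        # toy block length; the proof needs n in Omega(lambda)
t = 16                       # number of servers/keys, polynomial in lambda
domain = list(range(2 ** n))
y = 42                       # arbitrary fixed target output

# Random permutations stand in for C_{k_1}(.), ..., C_{k_t}(.).
perms = []
for i in range(t):
    p = domain.copy()
    random.Random(i).shuffle(p)
    perms.append(p)

# A permutation has exactly one preimage of y, so Pr_x[perm(x) = y] = 1/2^n.
assert all(sum(1 for x in domain if p[x] == y) == 1 for p in perms)

# Union bound: at most t inputs x make (p, y) "relevant" for some key.
relevant = sum(1 for x in domain if any(p[x] == y for p in perms))
assert relevant <= t         # hence Pr[(p, y) relevant] <= t / 2^n
```

The counting makes the union bound concrete: each key contributes exactly one “bad” preimage, so at most 𝑡 of the 2^𝑛 possible hash values make (𝑝, 𝑦) relevant.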
Case 2: Records ⟨𝐻 1, 𝑝, 𝑥⟩ and ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩ exist such that De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) =
𝑦:
In this case, the value 𝑥 is the output of the random oracle 𝐻 1 on input 𝑝. The tuple
(𝑝, 𝑦) is relevant, because the key of an honest server produces the output 𝑦 when
the input 𝑥 is provided to the circuit. Sim knows to which server the key belongs, as
the record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩ explicitly contains the server id S. The
simulator Sim sends (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, S, 𝑝) to FOPRF for a new subsession id 𝑠𝑠𝑖𝑑′. That
means, Sim initiates a new protocol execution and itself requests the output value
𝜌 = 𝑇𝑠𝑖𝑑 (S, 𝑝) from FOPRF . Next, Sim can safely send the (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S)
message without decreasing the ticket counter of S below 0. Intuitively, this is
because the key of an honest server and the input labels of an honest user are hidden
from E. We prove this in Lemma 1. The random oracle 𝐻 2 (𝑝, 𝑦) is programmed to
the answer 𝜌 of FOPRF . The programming ensures that E will get the same output
𝜌 = 𝐻 2 (𝑝, 𝑦) when invoking an execution of the protocol between an honest user with
input 𝑝 and the honest server that generated (𝐹, 𝐾, 𝑑).
Case 3: There is a record ⟨𝐻 1, 𝑝, 𝑥⟩ but no record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩ exists
such that De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦:
In that case, the value 𝑥 is the output of the random oracle 𝐻 1 on input 𝑝, but no
honest server key maps 𝑥 to 𝑦 = C𝑘 (𝑥). Thus, Sim checks the keys 𝑘 Ŝ of all corrupted
servers. If one of the keys 𝑘 Ŝ is such that C𝑘Ŝ (𝑥) = 𝑦 holds, Sim will use its
ability to offline-evaluate the corrupted servers' tables 𝑇𝑠𝑖𝑑 ( Ŝ, ·). The simulator Sim sends
(OfflineEval, 𝑠𝑖𝑑, Ŝ, 𝑝) to FOPRF and receives the answer (OfflineEval, 𝑠𝑖𝑑, 𝜌) from
FOPRF . Note that Sim will always receive an answer in this case, as Ŝ is the identity
of a corrupted server.
Sim programs 𝐻 2 (𝑝, 𝑦) to the output 𝜌 of the offline evaluation. E will get the same
𝜌 as output from an execution of the protocol between a user with input 𝑝 and the
corrupted server with key 𝑘 Ŝ .
If there are multiple such keys, i.e., the condition in line 101 of Figure 3.8 is true, Sim
aborts. This happens at most with negligible probability, as we prove in Lemma 2.
If no such key exists, 𝐻 2 (𝑝, 𝑦) is set to a uniformly random value, as in this case 𝑦
does not correspond to any protocol execution, i.e., (𝑝, 𝑦) is not relevant. □
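The three cases above can be condensed into one dispatch routine. The Python sketch below mirrors the structure of Figure 3.8; the dictionaries and helpers (H1, H2, honest_servers, corrupted_keys, PRF, oprf_table) are illustrative stand-ins for the simulator's records and for FOPRF , not part of the actual construction:

```python
import secrets

def program_H2(p, y, H1, H2, honest_servers, corrupted_keys, PRF, oprf_table, l=32):
    """Answer a query H2(p, y) the way Sim does (cf. Figure 3.8)."""
    if (p, y) in H2:                       # output already defined
        return H2[(p, y)]
    if p not in H1:
        # Case 1: H1(p) is undetermined, so (p, y) is relevant only with
        # negligible probability; answer uniformly at random.
        rho = secrets.token_bytes(l)
    else:
        x = H1[p]
        honest = [S for S, k in honest_servers.items() if PRF(k, x) == y]
        corrupt = [S for S, k in corrupted_keys.items() if PRF(k, x) == y]
        if honest:
            # Case 2: an honest server's key maps x to y; fetch
            # rho = T_sid(S, p) from F_OPRF in a fresh subsession.
            rho = oprf_table(honest[0], p)
        elif len(corrupt) > 1:
            # Two corrupted keys collide on x; abort (negligible by Lemma 2).
            raise RuntimeError("fail")
        elif corrupt:
            # Case 3: offline-evaluate the corrupted server's table.
            rho = oprf_table(corrupt[0], p)
        else:
            rho = secrets.token_bytes(l)   # (p, y) is not relevant
    H2[(p, y)] = rho
    return rho
```

Programming the oracle through the table lookup in Cases 2 and 3 is what makes the real-world hash output 𝐻 2 (𝑝, 𝑦) agree with the ideal-world table entry 𝑇𝑠𝑖𝑑 (S, 𝑝).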
Lemma 1. Let the garbling scheme G = (Gb, En, De, Ev, ev) have privacy, as defined in
Definition 11. When interacting with the simulator in Figures 3.5 to 3.8, for each server S
the probability that a (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S) message for P ∈ {U, A} is sent while the
ideal functionality's ticket counter tx(S) is 0 is negligible. That means, only with negligible
probability does FOPRF ignore a RcvCmplt message because the ticket counter is 0.
3.5. Proving Security
Proof. The ticket counter tx(S) is only increased by (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) messages
from S to FOPRF , i.e., by invocations of the server by E. The counter is decreased by
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) messages from Sim to FOPRF . The simulator from Figures 3.5
to 3.8 sends (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) messages in two cases. We will regard them separately:
Case 1: Sim received a query 𝐻 2 (𝑝, 𝑦) and has records ⟨𝐻 1, 𝑝, 𝑥⟩ and ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩
such that De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦, i.e., the condition in line 85 in Figure 3.8 is true:
As Sim found the record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩, we can be sure that a
(SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) message was sent by E to Sim. This means the counter
tx(S) was increased at least once before the circuit was garbled. This holds because
Sim only stores the record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩ when it received
a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) message from FOPRF .
Next, we know that De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦 holds. In that case, Sim can safely
assume that the server S that created (𝐹, 𝐾, 𝑑) is the server for which Sim must query
an OPRF output 𝜌 = 𝑇𝑠𝑖𝑑 (S, 𝑝) from FOPRF . We argue in Lemma 2 that another
key 𝑘′ ≠ 𝑘 could lead to the same result 𝑦 with at most negligible probability.
The (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) messages in line 74 of Figure 3.7 are only sent to
produce an output of honest users. If the user is corrupted, that implies that there
cannot be a message (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) produced by Sim in response to an
(OT-Received, (𝑠𝑠𝑖𝑑, 𝑖)) message, in line 74 of Figure 3.7.
If the user is honest, we show in Lemma 3 that the situation we currently argue about,
i.e., Sim received a query 𝐻 2 (𝑝, 𝑦) and has records ⟨𝐻 1, 𝑝, 𝑥⟩ and ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩
such that De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦, happens at most with negligible probability.
In conclusion, sending (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S) in line 89 of Figure 3.8 as a
consequence of an 𝐻 2 (𝑝, 𝑦) query will decrease the ideal functionality's counter tx(S) by
one. Another (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) is sent in line 74 of Figure 3.7 at most
with negligible probability. Querying the same tuple 𝐻 2 (𝑝, 𝑦) again won't result in
a second (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S) message in line 89 of Figure 3.8, as the output
of 𝐻 2 (𝑝, 𝑦) is already defined. Thus, the counter is only decreased by one if it was
increased at least by one before with a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) message to FOPRF .
Case 2: Sim received all 𝑛 messages (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖)) and a garbling (𝐹, 𝐾, 𝑑) for a sub-
session 𝑠𝑠𝑖𝑑, where all the recorded OT-requests 𝑥𝑖 are ≠ ⊥, i.e., the condition in lines
7173 of Figure 3.7 is true:
We know that the user already received a garbling (𝐹, 𝐾, 𝑑), as either the clause in line
72 or the clause in line 73 of Figure 3.7 is true. We assume passive adversaries, which
implies that a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) message was already sent to FOPRF . Else,
the server would not have created the garbling (𝐹, 𝐾, 𝑑). This means, the counter
tx(S) is only decreased by one with a (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message in line 74
of Figure 3.7 if it is increased at least once before by a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)
message to FOPRF .
We argue why there cannot be another (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S) message with P ∈
{U, A} for the same 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑 and label S. There cannot be another (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S)
message sent in line 74 of Figure 3.7 for the same subsession 𝑠𝑠𝑖𝑑. This holds,
because we argue about the case where Sim simulates the behavior of an honest user.
Sim only sends (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) once, at the moment when all 𝑛 input
labels are received by the user. If Sim receives further labels for the same 𝑠𝑠𝑖𝑑, that will
not trigger a second (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message for this 𝑠𝑠𝑖𝑑. Remember that
there are only two situations in which Sim sends (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S) with P ∈
{U, A}. The first one is the situation where all 𝑛 messages (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖))
were received. This is the situation we currently reason about. The second one is
when an 𝐻 2 (𝑝, 𝑦) query is received and it turns out that the corresponding key 𝑘 belongs
to an honest server. We argue why the second situation can happen at most with
negligible probability. All recorded OT-requests 𝑥𝑖 are ≠ ⊥. Thus, the corresponding
(OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) messages were simulated by Sim for an honest user. But the
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S) messages in line 89 of Figure 3.8 are only sent if the
subsession is executed with an honest server. Again, it follows from Lemma 3 that
the (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S) message in line 89 of Figure 3.8 is sent at most with
negligible probability. □
Lemma 2. For 𝑚, 𝑛, 𝑙 ∈ Ω(𝜆), let the function F : {0, 1}𝑚 × {0, 1}𝑛 → {0, 1}𝑙 be a PRF. Let
𝑡 ∈ N be polynomial in 𝜆. For every 𝑥 ∈ {0, 1}𝑛 and uniformly random, independently
drawn keys 𝑘 1, . . . , 𝑘𝑡 ∈ {0, 1}𝑚 , there exist indices 𝑖, 𝑗 ∈ {1, . . . , 𝑡 } with 𝑖 ≠ 𝑗 such that
F𝑘𝑖 (𝑥) = F𝑘 𝑗 (𝑥) with at most negligible probability in 𝜆.
Proof. We start with the simpler case that the first index is 𝑖 = 1. In other words, we
bound the probability that there is a key in 𝑘 2, . . . , 𝑘𝑡 , such that F𝑘1 (𝑥) = F𝑘 𝑗 (𝑥). For
𝑥 ∈ {0, 1}𝑛 , we consider the following sequence of hybrid experiments: In the first
experiment 𝐸 1 , the experiment chooses uniformly random keys 𝑘 2, . . . , 𝑘𝑡 ∈ {0, 1}𝑚 and
outputs 1 iff there is one 𝑗 ∈ {2, . . . , 𝑡 } such that F𝑘1 (𝑥) = F𝑘 𝑗 (𝑥). 𝐸 2 is defined as
above, except that the second value F𝑘2 (𝑥) is replaced by a uniformly random value
𝑦2 ∈ {0, 1}𝑙 . Now, for every 𝑟 ∈ {3, . . . , 𝑡 }, we define the experiments 𝐸𝑟 as follows: The
experiment chooses uniformly random values 𝑦2, . . . , 𝑦𝑟 ∈ {0, 1}𝑙 and uniformly random
keys 𝑘𝑟 +1, . . . , 𝑘𝑡 ∈ {0, 1}𝑚 . The experiment outputs 1 iff F𝑘1 (𝑥) = 𝑦 𝑗 for 𝑗 ∈ {2, . . . , 𝑟 } or
F𝑘1 (𝑥) = F𝑘 𝑗 (𝑥) for 𝑗 ∈ {𝑟 +1, . . . , 𝑡 }. Finally, 𝐸𝑡 is the experiment, where all values 𝑦2, . . . , 𝑦𝑡
are uniformly random. We get by a union bound that 𝐸𝑡 outputs 1 with probability

Pr [𝐸𝑡 = 1] = Pr [ ⋃_{𝑗=2}^{𝑡} (𝑦 𝑗 = F𝑘1 (𝑥)) ] ≤ (𝑡 − 1)/2^𝑙 ,
where the probability is taken over the random choices of 𝑦2, . . . , 𝑦𝑡 . Note that F𝑘1 (𝑥)
is constant here. Assume, by way of contradiction, that experiment 𝐸 1 outputs 1 with
noticeable probability. Then there is an index 𝑁 ∈ {1, . . . , 𝑡 − 1} such that the difference
Δ ≔ |Pr [𝐸 𝑁 = 1] − Pr [𝐸 𝑁 +1 = 1] | is noticeable. We construct a
distinguisher D for the PRF security experiment, see Definition 1. D proceeds as the
experiment 𝐸 𝑁 but instead of using 𝑘 𝑁 to calculate F𝑘𝑁 (𝑥), the distinguisher D queries 𝑥
from its PRF oracle and receives an output 𝑦. If the oracle answers with a PRF output
𝑦, the output of D is exactly distributed as in 𝐸 𝑁 . If the oracle answers with a truly
random output, the output of D is distributed as in 𝐸 𝑁 +1 . Thus, by our assumption, D has
a noticeable advantage Δ in the PRF experiment, which is a contradiction to F being a PRF.
This concludes the hybrid argument.
Above, we bounded the probability that there is another key whose output collides with
that of 𝑘 1 . With a completely analogous reduction, we get a similar inequality for every 𝑘 1, . . . , 𝑘𝑡 .
Hence, we have for all 𝑥 ∈ {0, 1}𝑛 that the probability that there are 𝑖, 𝑗 ∈ {1, . . . , 𝑡 } with
𝑖 ≠ 𝑗 such that F𝑘𝑖 (𝑥) = F𝑘 𝑗 (𝑥) is

Pr [ ⋃_{𝑖=1}^{𝑡} ⋃_{𝑗=1, 𝑗≠𝑖}^{𝑡} (F𝑘𝑖 (𝑥) = F𝑘 𝑗 (𝑥)) ] ≤ 𝑡 (𝑡 − 1)/2^𝑙 + negl(𝜆).
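The bound 𝑡 (𝑡 − 1)/2^𝑙 can be sanity-checked concretely: for polynomially many independently drawn keys and 𝑙 ∈ Ω(𝜆), a collision on a fixed input is overwhelmingly unlikely. A small Python sketch with HMAC-SHA256 truncated to 𝑙 = 64 bits as a PRF stand-in (the deterministically derived keys are only for reproducibility of the demo):

```python
import hashlib
import hmac

t, l_bytes = 64, 8          # t keys, l = 64-bit PRF outputs

def F(key: bytes, x: bytes) -> bytes:
    """PRF stand-in: HMAC-SHA256 truncated to l bits."""
    return hmac.new(key, x, hashlib.sha256).digest()[:l_bytes]

x = b"fixed input"
# t "uniformly random" keys, derived deterministically for the demo.
keys = [hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(t)]
outputs = [F(k, x) for k in keys]

# Union bound: Pr[exists i != j with F_{k_i}(x) = F_{k_j}(x)] <= t*(t-1)/2^l,
# roughly 2^-52 here, so no collision should occur.
assert len(set(outputs)) == t
```

With 𝑡 = 64 keys and 64-bit outputs the collision probability is about 2^−52, which matches the lemma's claim that Sim's abort in line 101 of Figure 3.8 is a negligible-probability event.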
Lemma 3. Let the garbling scheme G = (Gb, En, De, Ev, ev) have privacy, as defined in
Definition 11. Let C be the boolean circuit of a PRF, as defined in Definition 1. Suppose
the adversary A initiates an OPRF execution between an honest server and an honest user
with input 𝑝. Then A can send a query 𝐻 2 (𝑝, 𝑦) with 𝑦 ∈ {0, 1}𝑛 such that C𝑘 (𝐻 1 (𝑝)) = 𝑦
at most with negligible probability, where 𝑘 ∈ {0, 1}𝑚 is the key of the honest server.
Proof. Without loss of generality, we can assume that A requested 𝑥 = 𝐻 1 (𝑝) for the user
input 𝑝 ∈ {0, 1}∗ . Further, we assume that A received a garbled circuit, an encoded key,
and decoding information (𝐹, 𝐾, 𝑑) that were created by Sim in simulating an honest
server. We know that A received no labels for the user input 𝑥, as Sim simulated the OT for
an honest user.
Assume, by way of contradiction, that A calculates an output 𝑦 ∈ {0, 1}𝑛 such that
De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦 with noticeable probability 𝑃. First, we construct an adversary
B that plays the privacy experiment as in Figure 2.9 and communicates with A as if B
was the simulator. As we assume that the garbling scheme has privacy, we will get the
existence of a simulator SimPRF for the privacy experiment. We will use this simulator
SimPRF and the adversary B to construct a second adversary BPRF that will have noticeable
success probability in distinguishing the PRF C from a truly random function, which is a
contradiction to our assumption that C satisfies Definition 1 of a PRF.
B plays the UC-security experiment with A. Let 𝑡 ∈ N be the number of subsessions
that A invokes between an honest server and an honest user. The adversary B initially
chooses an index 𝑖 ∈ {1, . . . , 𝑡 } uniformly at random. B behaves like our normal simulator
from Figures 3.5 to 3.8, except when A initiates a subsession between an honest server and
an honest user. If that session is the 𝑖 th of those sessions, B behaves as follows: When
B receives an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message and a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S) message
from FOPRF , B must simulate the honest server. It chooses a uniformly random key
𝑘 ∈ {0, 1}𝑚 and a uniformly random value 𝑥 0 ∈ {0, 1}𝑛 . The second value 𝑥 0 can be
seen as a “mock” input to the privacy challenger Cprivacy . Note that 𝑥 0 and the actual
hash value 𝑥 = 𝐻 1 (𝑝) are chosen independently. B answers queries to 𝐻 1 (𝑝) as usual by
choosing 𝑥 ∈ {0, 1}𝑛 uniformly at random and storing h𝐻 1, 𝑝, 𝑥i. The adversary B sends
(𝑥′, 𝑘, C) to Cprivacy . The privacy challenger chooses 𝑏 ∈ {0, 1} uniformly at random. If
𝑏 = 1, it calculates (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , C) and (𝑋˜ , 𝐾) = En(𝑒, 𝑥′ ∥ 𝑘). If 𝑏 = 0, it calculates
𝑦′ = ev(C, 𝑥′ ∥ 𝑘) = C𝑘 (𝑥′). Next, Cprivacy runs the simulator SimPRF on input 𝑦′. The
simulator SimPRF outputs (𝐹, 𝑋˜ , 𝐾, 𝑑). In both cases 𝑏 = 1 and 𝑏 = 0, the challenger Cprivacy
sends (𝐹, 𝑋˜ , 𝐾, 𝑑) to B. Now, B uses this garbled circuit to simulate the honest server.
That means, B sends (𝐹, 𝐾, 𝑑) to A, formatted as if S sent it to U via FAUTH . Note that 𝑋˜
is not sent to A, as our actual OPRF simulator from Figures 3.5 to 3.8 would also not do
that. Finally, B checks for every 𝐻 2 query (𝑝, 𝑦) from A whether 𝑦 = C𝑘 (𝐻 1 (𝑝)) holds. Only if
that is the case, B outputs 1; else it outputs 0. We depict the reduction in Figure 3.3.
A → B: 𝐻 1 query 𝑝
B: 𝑥 ←$ {0, 1}𝑛 ; answer 𝐻 1 (𝑝) = 𝑥
A → B (via FOPRF ): Eval and SndrComplete
B: 𝑘 ←$ {0, 1}𝑚 ; 𝑥′ ←$ {0, 1}𝑛
B → Cprivacy : (𝑥′, 𝑘, C)
Cprivacy , case 𝑏 = 1: (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , C); (𝑋˜ , 𝐾) = En(𝑒, 𝑥′ ∥ 𝑘)
Cprivacy , case 𝑏 = 0: 𝑦′ = C𝑘 (𝑥′); (𝐹, 𝑋˜ , 𝐾, 𝑑) ← SimPRF (𝑦′)
Cprivacy → B: (𝐹, 𝑋˜ , 𝐾, 𝑑)
B: do nothing with 𝑋˜
B → A: (𝐹, 𝐾, 𝑑)
A → B: 𝑦
B: output 1 iff 𝑦 = C𝑘 (𝑥)
Figure 3.3.: Reduction on the Privacy Property of the Garbling Scheme.
In the case where the challenger Cprivacy chose 𝑏 = 1, the view of A is identically
distributed as in a normal OPRF execution with our simulator Sim. That holds, because
𝑘 ∈ {0, 1}𝑚 is also chosen uniformly at random and 𝐹 and 𝑑 are also calculated as (𝐹, 𝑒, 𝑑) ←
Gb(1𝜆 , C). The calculation of those values is completely independent of the value 𝑥′. The
encoded key is calculated as (𝑋˜ , 𝐾) = En(𝑒, 𝑥′ ∥ 𝑘), but the value of 𝐾 depends only
on 𝑒 and not on 𝑥′. With probability 1/𝑡, the adversary B chooses the right index 𝑖 of the
execution where A succeeds in calculating 𝑦 such that 𝑦 = C𝑘 (𝐻 1 (𝑝)) holds. By our
assumption, this means that B outputs 1 with probability 𝑃/𝑡, which is noticeable. Now,
the privacy of the garbling scheme guarantees us that a simulator SimPRF exists that makes
B output 1 with noticeable probability 𝑃′ in the case 𝑏 = 0. We now show in a second
reduction that we can build an adversary BPRF that uses SimPRF and A as subroutines and
that distinguishes between a PRF and a truly random function with noticeable probability.
Like B above, the adversary BPRF plays the UC-security experiment with A. The
adversary BPRF chooses an index 𝑖 ∈ {1, . . . , 𝑡 } uniformly at random, where 𝑡 ∈ N is
the number of subsessions of honest users with honest servers. For the 𝑖 th subsession,
when BPRF receives an (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) message and a (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S)
message from FOPRF , the adversary BPRF must simulate the honest server. BPRF chooses a
uniformly random value 𝑥ˆ ∈ {0, 1}𝑛 and sends 𝑥ˆ to the PRF challenger CPRF . The challenger
CPRF chooses a bit 𝑏′ ∈ {0, 1} uniformly at random. If 𝑏′ = 1, the challenger calculates
𝑦ˆ = C𝑘′ (𝑥ˆ) for some uniformly random 𝑘′ ∈ {0, 1}𝑚 . If 𝑏′ = 0, the challenger CPRF sets
𝑦ˆ = RF(𝑥ˆ), where RF ∈ {𝑓 : {0, 1}𝑛 → {0, 1}𝑛 } is chosen uniformly at random. CPRF sends
𝑦ˆ to BPRF . The adversary BPRF calls SimPRF on input 𝑦ˆ and receives (𝐹, 𝑋˜ , 𝐾, 𝑑) as output.
BPRF simulates a message to A as if the honest server sent (𝐹, 𝐾, 𝑑) to the honest user via
FAUTH . The adversary A answers with a value 𝑦¯. Now, BPRF checks for every 𝐻 2 query
(𝑝, 𝑦¯) whether 𝑦¯ = C𝑘 (𝐻 1 (𝑝)) holds. Only if that is true, BPRF outputs 1; else it outputs 0. We
depict the reduction in Figure 3.4.
Suppose that BPRF chose the correct index 𝑖, i.e., the subsession in which A is successful
in sending the query (𝑝, 𝑦¯). That happens with probability 1/𝑡. In case 𝑏′ = 1, the view of
SimPRF is exactly distributed as in the privacy experiment with B above. By our assumption
on SimPRF , the environment A has noticeable probability 𝑃′ to send a query (𝑝, 𝑦¯) such
that 𝑦¯ = C𝑘 (𝐻 1 (𝑝)). That means, the overall success probability of BPRF in this case
is 𝑃′/𝑡, which is noticeable. In case 𝑏′ = 0, the value 𝑦ˆ ∈ {0, 1}𝑛 is uniformly random.
That means in particular that SimPRF 's output (𝐹, 𝑋˜ , 𝐾, 𝑑) is stochastically independent of
C𝑘 (𝐻 1 (𝑝)). In that case, the input (𝐹, 𝐾, 𝑑) gives A information-theoretically no advantage
in guessing C𝑘 (𝐻 1 (𝑝)). Consequently, A outputs (𝑝, 𝑦¯) such that 𝑦¯ = C𝑘 (𝐻 1 (𝑝)) at most
with probability 2^−𝑛 . This is a contradiction to the PRF property, as BPRF outputs 1 with
noticeable probability in the case 𝑏′ = 1. In conclusion, no such simulator SimPRF can
exist, which is a contradiction to the assumed privacy of the garbling scheme. Thus, the
assumed adversary A cannot exist. □
A → BPRF : 𝐻 1 query 𝑝
BPRF : 𝑥 ←$ {0, 1}𝑛 ; answer 𝐻 1 (𝑝) = 𝑥
A → BPRF (via FOPRF ): Eval and SndrComplete
BPRF : 𝑥ˆ ←$ {0, 1}𝑛
BPRF → CPRF : 𝑥ˆ
CPRF , case 𝑏′ = 1: 𝑘′ ←$ {0, 1}𝑚 ; 𝑦ˆ = C𝑘′ (𝑥ˆ)
CPRF , case 𝑏′ = 0: RF ←$ {𝑓 : {0, 1}𝑛 → {0, 1}𝑛 }; 𝑦ˆ = RF(𝑥ˆ)
CPRF → BPRF : 𝑦ˆ
BPRF : (𝐹, 𝑋˜ , 𝐾, 𝑑) ← SimPRF (𝑦ˆ)
BPRF → A: (𝐹, 𝐾, 𝑑)
A → BPRF : 𝑦¯
BPRF : output 1 iff 𝑦¯ = C𝑘 (𝑥)
Figure 3.4.: Reduction on the PRF Property.
Initialization
1 : for all corrupted servers 𝑆ˆ with key 𝑘𝑆ˆ :
2 : record ⟨𝑘𝑆ˆ , 𝑆ˆ⟩
On (Init, 𝑠𝑖𝑑, S) from FOPRF
3 : If this is the first (Init, S, 𝑠𝑖𝑑) message from FOPRF
4 : 𝑘 ←$ {0, 1}𝑚 ; record ⟨S, 𝑠𝑖𝑑, 𝑘⟩
On (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) from FOPRF
5: // simulate sending (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) on behalf of U to S via FAUTH
6: send (Sent, 𝑚𝑖𝑑, U, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) to A.
7 : record ⟨Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩
8 : on (ok, 𝑚𝑖𝑑) from A if S is corrupted :
9 : send (Sent, 𝑚𝑖𝑑, U, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) to S
10 : if U is honest and ∃⟨SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩ :
11 : goto label SimulateGarbling
On (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S) from FOPRF
12 : if U is corrupted and ∄⟨receivedGarble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩ :
13 : record ⟨SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩
14 : elseif U is honest and ∄⟨Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩ :
15 : record ⟨SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩
16 : else
17 : SimulateGarbling :
18 : // simulate receiving (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from U via FAUTH
19 : send (Sent, 𝑚𝑖𝑑, Û, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) to A.
20 : on (ok, 𝑚𝑖𝑑) from A :
21 : search for recorded tuple ⟨S, 𝑠𝑖𝑑, 𝑘⟩
22 : if ∄⟨S, 𝑠𝑖𝑑, 𝑘⟩ :
23 : 𝑘 ←$ {0, 1}𝑚 ; record ⟨S, 𝑠𝑖𝑑, 𝑘⟩
24 : (𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , C)
25 : (𝑋 [0𝑛 ] ∥ 𝐾) ≔ En(𝑒, 0𝑛 ∥ 𝑘); (𝑋 [1𝑛 ] ∥ 𝐾) ≔ En(𝑒, 1𝑛 ∥ 𝑘)
26 : // simulate sending (F,K,d) from S to Û via FAUTH
27 : send (Sent, 𝑚𝑖𝑑, S, Û, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) to A
28 : on (ok, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from A :
29 : send (Sent, 𝑚𝑖𝑑, S, Û, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) to Û
30 : record ⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩
31 : // simulate sending labels 𝑋 [0], 𝑋 [1] via FOT .
32 : for 𝑖 = 1, . . . , 𝑛 :
33 : record ⟨(𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])⟩
34 : send (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖)) to A
Figure 3.5.: The Simulator Sim Part I. Simulation of Messages From FOPRF .
On (Send, 𝑚𝑖𝑑, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) from A on behalf of Û
35 : if ∄⟨SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩ :
36 : record ⟨receivedGarble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑⟩
37 : else
38 : goto label SimulateGarbling
On (Send, 𝑚𝑖𝑑, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) from A on behalf of Ŝ to U
39 : // Simulator gets message (𝐹, 𝐾, 𝑑) from Ŝ to U via FAUTH
40 : send (Sent, 𝑚𝑖𝑑, Ŝ, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑))) to A
41 : on (ok, 𝑚𝑖𝑑) from A :
42 : if @hGarble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑i :
43 : ignore this message
44 : record h𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)i
45 : // simulator requesting OT labels
46 : for 𝑖 = 1, . . . , 𝑛 :
47 : send (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) to A
48 : record h(𝑠𝑠𝑖𝑑, 𝑖), ⊥i
On a query 𝑝 to 𝐻 1 (·)
49 : if ∃⟨𝐻 1, 𝑝, 𝑥⟩ :
50 : return 𝑥
51 : else
52 : 𝑥 ←$ {0, 1}𝑛
53 : record ⟨𝐻 1, 𝑝, 𝑥⟩
54 : return 𝑥
Figure 3.6.: The Simulator Sim Part II. Simulation of Protocol Messages and the First
Random Oracle 𝐻 1 .
On (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])) from A to FOT on behalf of Ŝ
55 : record ⟨Ŝ, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])⟩
56 : send (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖)) to A.
57 : ignore further (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), . . . ) messages
On (OT-Sent, (𝑠𝑠𝑖𝑑, 𝑖)) from A to FOT
58 : if ∄⟨Ŝ, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])⟩ :
59 : ignore this message
60 : else
61 : send (OT-Sent, (𝑠𝑠𝑖𝑑, 𝑖)) to Ŝ
62 : ignore further (OT-Sent, (𝑠𝑠𝑖𝑑, 𝑖)) messages
On (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 ) from A to FOT on behalf of Û
63 : record ⟨(𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 ⟩
64 : send (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) to A
65 : ignore further (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖)) messages
On (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖)) from A to FOT
66 : if ∄⟨S, (𝑠𝑠𝑖𝑑, 𝑖), (𝑋𝑖 [0], 𝑋𝑖 [1])⟩ or ∄⟨(𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖 ⟩ :
67 : ignore this message
68 : elseif 𝑥𝑖 ≠ ⊥
69 : send (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖), 𝑋𝑖 [𝑥𝑖 ]) to Û
70 : else
71 : if ∀𝑟 ∈ {1, . . . , 𝑛} \ {𝑖} ∃⟨OT-Received, 𝑠𝑠𝑖𝑑, 𝑟 ⟩
72 : and (∃⟨𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑)⟩
73 : or ∃⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩) :
74 : send (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S) to FOPRF
75 : else
76 : record ⟨OT-Received, 𝑠𝑠𝑖𝑑, 𝑖⟩
Figure 3.7.: The Simulator Sim Part III. Simulation of FOT .
On a new query (𝑝, 𝑦) to 𝐻 2 (·, ·)
77 : if ∃⟨𝐻 2, 𝑝, 𝑦, 𝜌⟩ :
78 : return 𝜌
79 : else
80 : if ∄⟨𝐻 1, 𝑝, 𝑥 = 𝐻 1 (𝑝)⟩ :
81 : 𝜌 ←$ {0, 1}𝑙 and record ⟨𝐻 2, 𝑝, 𝑦, 𝜌⟩
82 : return 𝜌
83 : else
84 : // check all simulated honest servers S:
85 : if ∃⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾, 𝑑), 𝑋 [0𝑛 ], 𝑋 [1𝑛 ]⟩, s.t. De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) = 𝑦 :
86 : // De(𝑑, Ev(𝐹, 𝑋 [𝑥] ∥ 𝐾)) means C𝑘 (𝑥) for the garbled 𝑘
87 : choose a new 𝑠𝑠𝑖𝑑′
88 : send (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, S, 𝑝) to FOPRF
89 : send (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, S) to FOPRF
90 : if FOPRF does not answer :
91 : output fail and abort
92 : else
93 : receive (EvalOut, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, 𝜌) from FOPRF
94 : record ⟨𝐻 2, 𝑝, 𝑦, 𝜌⟩
95 : return 𝜌
96 : else
97 : // check all corrupt servers 𝑆ˆ with key 𝑘𝑆ˆ :
98 : if ∄⟨𝑘𝑆ˆ , 𝑆ˆ⟩ s.t. C𝑘𝑆ˆ (𝑥) = 𝑦 :
99 : 𝜌 ←$ {0, 1}𝑙 and record ⟨𝐻 2, 𝑝, 𝑦, 𝜌⟩
100 : return 𝜌
101 : elseif there are multiple 𝑘𝑆ˆ : C𝑘𝑆ˆ (𝑥) = 𝑦 :
102 : output fail and abort
103 : else
104 : retrieve ⟨𝑘𝑆ˆ , 𝑆ˆ⟩
105 : send (OfflineEval, 𝑠𝑖𝑑, 𝑆ˆ, 𝑝) to FOPRF
106 : receive (OfflineEval, 𝑠𝑖𝑑, 𝜌) from FOPRF
107 : record ⟨𝐻 2, 𝑝, 𝑦, 𝜌⟩
108 : return 𝜌
Figure 3.8.: The Simulator Sim Part IV. Simulation of the Second Random Oracle 𝐻 2 .
4. Verifiability
An OPRF is said to have verifiability if the user can, roughly speaking, be sure that a
server does not switch keys between several OPRF evaluations with the user. Thus, the
outputs that the user received are all sampled from a fixed PRF 𝐹𝑘 (·), where 𝑘 is the fixed
key of the server. To make this notion useful, we must relax our requirements on
the passively secure parties: if all parties always follow the protocol, the same server will
always choose the same key. Therefore, every passively secure OPRF is trivially also
a VOPRF.
We will assume in this section that corrupted servers may decide to choose a new key
𝑘′ ∈ {0, 1}𝑚 at will. By this, we consider strictly stronger adversaries than in Section 3.5. We
still require that the adversaries behave honestly in garbling the circuit. That means we
assume that every circuit 𝐹 that is sent by a corrupted server to a user is calculated as
(𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , VC), where VC is the circuit of the protocol description.
4.1. Adapting the Construction
In this section, we introduce the ideal functionality FVOPRF that captures the above security
requirement rigorously. FVOPRF is depicted in Figure 4.1. The main difference to the
ideal functionality FOPRF in Figure 3.1 is the message (Param, S, 𝜋) from the adversary
A to FVOPRF . The adversary A can send this message for a server identity S to set the
identificator of that server. An identificator is some information that is published by the
server as “fingerprint” of its key. A client will use this identificator to specify from which
server it queries output. That means for the ideal functionality that FVOPRF keeps a table
params of all server identities and their associated identificators. Note that the adversary is
even allowed to choose the identificator for an honest server. If the adversary later allows
the delivery of an output value to a user by sending (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, 𝜋) to FVOPRF ,
the adversary has to specify the identificator 𝜋 of the server from whose table 𝑇𝑠𝑖𝑑 (S, ·) the
output should be taken. In other words, instead of specifying a server id 𝑖 like in FOPRF
of Figure 3.1, the adversary A specifies 𝜋 to receive an output from a certain server. The
mechanism for offline evaluation is adapted accordingly, such that an identificator has to
be specified to receive the output of an offline evaluation.
Albrecht et al. [Alb+21] sketch an idea to construct a verifiable OPRF with garbled
circuits. Their idea can be directly applied to our construction:
Let 𝐻 3 : {0, 1}∗ → {0, 1}𝜆 be a third hash function. In an initialization phase, the
server draws a uniformly random value 𝑟 ∈ {0, 1}𝜆 and publishes the “fingerprint”
𝑘¯ = 𝐻 3 (𝑘 ∥ 𝑟 ).
Now the definition of the protocol is changed in that the circuit jointly calculated by
both parties will no longer be just a PRF, but will be the following function:
Functionality FVOPRF
For each value 𝑖 and each session 𝑠𝑖𝑑, the table 𝑇𝑠𝑖𝑑 (𝑖, ·) is initially undefined.
Whenever 𝑇𝑠𝑖𝑑 (𝑖, 𝑥) is referenced below while it is undefined, draw 𝑇𝑠𝑖𝑑 (𝑖, 𝑥) ←$ {0, 1}𝑙 .
Initialization:
• On (Init, 𝑠𝑖𝑑) from S, if this is the first Init message for 𝑠𝑖𝑑, set tx(S) = 0 and
send (Init, 𝑠𝑖𝑑, S) to A. From now on, use “S” to denote the unique entity which
sent the Init message for 𝑠𝑖𝑑. Ignore all subsequent Init messages for 𝑠𝑖𝑑.
• On (Param, S, 𝜋) from A, if params [S] is undefined, then set params [S] = 𝜋.
Offline Evaluation:
On (OfflineEval, 𝑠𝑖𝑑, 𝑐, 𝑝) from P ∈ {S, A}, send (OfflineEval, 𝑠𝑖𝑑,𝑇𝑠𝑖𝑑 (𝑖, 𝑝)) to P if
there is no entry params [𝑖] = 𝑐 and P = A or if there is an entry params [𝑖] = 𝑐 and any
of the following hold: (i) S is corrupted, (ii) P = S.
Online Evaluation:
• On (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝) from P ∈ {U, A}, record h𝑠𝑠𝑖𝑑, S, P, 𝑝i and send
(Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, S) to A.
• On (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from S, increment tx(S), send
(SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S) to A.
• On (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, P, 𝜋) from A, retrieve ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝⟩ (where P ∈ {U, A}).
Ignore this message if one of the following holds:
There is no record ⟨𝑠𝑠𝑖𝑑, S, P, 𝑝⟩.
There exists an honest server S such that params [S] = 𝜋, but tx(S) = 0.
S is honest but 𝜋 ≠ params [S].
Send (Eval, 𝑠𝑖𝑑, 𝑇𝑠𝑖𝑑 (𝑖, 𝑝)) to P. If params [S] = 𝜋, decrement tx(S).
Figure 4.1.: The Ideal Functionality FVOPRF Inspired by [BKW20; JKX18].
VC((𝑘¯, 𝑥), (𝑘, 𝑟 ))
𝑦 ≔ C𝑘 (𝑥)
𝑏 ≔ (𝑘¯ = 𝐻 3 (𝑘 ∥ 𝑟 ))
𝑣 ≔ ((1 − 𝑏) · ⊥) + (𝑏 · 𝑦)
C𝑘 is still a boolean circuit that calculates a permutation 𝐹 that is a PRF as defined in
Definition 1. The server now has to provide its secret key 𝑘 and its random value 𝑟 to the
garbled circuit, and the client has to provide its input 𝑥 and the fingerprint 𝑘¯ of the server
from which it wants to retrieve the result. The circuit does not only compute the output
of the PRF but also checks whether the key has the claimed fingerprint. Only if that is true,
the PRF result is output.
We can express this idea in a slightly more general way, by saying that the server calculates
a commitment (𝑐, 𝑟 ) ← Commit(𝑘) as a “fingerprint” or identificator, where 𝑟 is the
opening information of the commitment 𝑐. So the circuit that will be garbled is the
following:
VC((𝑐, 𝑥), (𝑘, 𝑟 ))
𝑦 ≔ C𝑘 (𝑥)
𝑏 ≔ Unveil(𝑐, 𝑘, 𝑟 )
𝑣 ≔ ((1 − 𝑏) · ⊥) + (𝑏 · 𝑦)
If the commitment 𝑐 can be opened to the value 𝑘 using the decommitment information 𝑟 ,
the PRF output is returned. Else, an error symbol ⊥ is returned.
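Outside the garbled-circuit setting, the behavior of VC can be sketched as an ordinary function: compute the PRF, verify the commitment, and release the output only on success. The hash-based commitment and the HMAC PRF stand-in below are illustrative choices for Commit, Unveil, and C𝑘 , not the instantiation fixed by the protocol:

```python
import hashlib
import hmac
import secrets

def commit(k: bytes):
    """Hash-based commitment sketch: c = H(k || r) with fresh randomness r."""
    r = secrets.token_bytes(32)
    return hashlib.sha256(k + r).digest(), r

def unveil(c: bytes, k: bytes, r: bytes) -> bool:
    """b = Unveil(c, k, r): does the commitment c open to the key k?"""
    return hmac.compare_digest(c, hashlib.sha256(k + r).digest())

def prf(k: bytes, x: bytes) -> bytes:
    """Stand-in for the PRF circuit C_k(x)."""
    return hmac.new(k, x, hashlib.sha256).digest()

def VC(c: bytes, x: bytes, k: bytes, r: bytes):
    """v = C_k(x) if c opens to k, else the error symbol (None here)."""
    y = prf(k, x)
    return y if unveil(c, k, r) else None

k = secrets.token_bytes(16)
c, r = commit(k)
x = b"user input"
assert VC(c, x, k, r) == prf(k, x)                    # honest key: output released
assert VC(c, x, secrets.token_bytes(16), r) is None   # switched key: error symbol
```

The two assertions reflect the verifiability guarantee: a server that switches to a key other than the committed one can only make the circuit emit the error symbol, never a valid-looking PRF output.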
Applying this idea to Figure 3.2, we get the protocol depicted in Figures 4.2 to 4.3. Unusually
for the UC framework, we do not work in the FCom -hybrid model. This is because
we need the fact that we can express the unveil algorithm Unveil as a boolean circuit. We
require that COM = (Commit, Unveil) is computationally hiding and computationally
binding. For the sake of simplicity, we assume that Commit outputs values in {0, 1}𝜆 . We
denote the labels for the input (𝑘, 𝑟 ) by 𝐾𝑅 and the labels for the input (𝑐, 𝑥) by 𝐶𝑋 . There
are only three major differences between this construction and the GC-OPRF construction from
Figure 3.2. The first is, of course, that the server now garbles the boolean circuit VC above.
The second is that the server creates a commitment 𝑐 when it receives an initialization
message. The third is the hash function 𝐻 2 (·, ·, ·): the user also hashes the commitment of
the server by sending (𝑝, 𝑦, 𝑐) to 𝐻 2 .
4.2. Proving Verifiability
In large part, the simulator for this proof works analogously to the simulator in Figures 3.5
to 3.8 for proving that GC-OPRF in Figure 3.2 UC-emulates FOPRF . Therefore, we only
elaborate on the differences. We depict the routines with the essential differences in
Figure 4.4. To make it easier for the reader to spot the differences between Figure 4.4 and
the simulator from Figures 3.5 to 3.8, we marked all the lines that contain essential changes
between the two simulators with a gray background.
S on (Init, 𝑠𝑖𝑑) from E
If this is the first (Init, 𝑠𝑖𝑑) message from E
𝑘 ←$ {0, 1}𝑚
(𝑐, 𝑟 ) ←$ Commit(𝑘)
record ⟨S, 𝑐, 𝑟, 𝑘⟩
// Send identificator to U via FAUTH
send (Send, 𝑚𝑖𝑑, U, (Init, 𝑐)) to FAUTH
U on (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝) from E
𝑥 ≔ 𝐻 1 (𝑝)
send (Send, 𝑚𝑖𝑑, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) to FAUTH
S on (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) from E
if already received (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) :
goto GarbleCircuit
else
ignore this message
S on (Sent, 𝑚𝑖𝑑, U, S, (Garble, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑)) from FAUTH
if already received (SndrComplete, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑) :
GarbleCircuit :
if ∄⟨S, 𝑐, 𝑟, 𝑘⟩ :
ignore this message
(𝐹, 𝑒, 𝑑) ← Gb(1𝜆 , VC)
(𝐶𝑋 [0] ∥ 𝐾𝑅) ≔ En(𝑒, 0𝜆+𝑛 ∥ 𝑘 ∥ 𝑟 )
(𝐶𝑋 [1] ∥ 𝐾𝑅) ≔ En(𝑒, 1𝜆+𝑛 ∥ 𝑘 ∥ 𝑟 )
send (Send, 𝑚𝑖𝑑′, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾𝑅, 𝑑))) to FAUTH
for 𝑖 ∈ {1, . . . , 𝜆 + 𝑛} :
send (OT-Send, (𝑠𝑠𝑖𝑑, 𝑖), (𝐶𝑋𝑖 [0], 𝐶𝑋𝑖 [1])) to FOT
else
ignore this message
Figure 4.2.: Our Verifiable VGC-OPRF Construction Part I.
U on (Sent, 𝑚𝑖𝑑, S, U, (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾𝑅, 𝑑))) from FAUTH
if already received (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝𝑤) :
wait for (OT-Sent, (𝑠𝑠𝑖𝑑, 1)), . . . , (OT-Sent, (𝑠𝑠𝑖𝑑, 𝜆 + 𝑛)) from FOT
for 𝑖 ∈ {1, . . . , 𝜆} :
send (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑐𝑖 ) to FOT
for 𝑖 ∈ {𝜆 + 1, . . . , 𝜆 + 𝑛} :
send (OT-Receive, (𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖−𝜆 ) to FOT
else
ignore this message
U on {(OT-Received, (𝑠𝑠𝑖𝑑, 𝑖), 𝐶𝑋𝑖 )}𝑖=1,...,𝜆+𝑛 from FOT
if already received (𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾𝑅, 𝑑)) :
𝑌 ≔ Ev(𝐹, 𝐶𝑋 ∥ 𝐾𝑅)
𝑦 ≔ De(𝑑, 𝑌 )
if 𝑦 = ⊥ :
abort
𝜌 ≔ 𝐻 2 (𝑝𝑤, 𝑦, 𝑐)
output (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, 𝜌) to E
else
ignore this message
Figure 4.3.: Our Verifiable VGC-OPRF Construction Part II.
Not depicted are obvious changes, e.g., that the simulator now has to garble the
adapted circuit VC and that input labels are also created for the inputs 𝑐 and 𝑟 .
The first major difference is that Sim now has to react differently to (Init, 𝑠𝑖𝑑, S)
messages from FVOPRF . These messages are sent to Sim when a new honest server is
initialized by the environment. In that case, Sim draws a uniformly random key 𝑘 and
commits to this key. The ideal functionality allows the adversary to choose the identificator
𝑐 of a server, so Sim records the key and the commitment randomness corresponding to
this server and sends (Param, S, 𝑐) to FVOPRF . Finally, this 𝑐 is also given as the output of
the honest server S. Sim keeps records ⟨hon, 𝑐, 𝑘, 𝑟 ⟩ for all identificators of honest servers.
A second difference is the reaction on an (Init, 𝑐) messages from A on behalf of
some corrupted server Ŝ. In this case, Sim just forwards the adversarys choice of an
identificator 𝑐 to FVOPRF and records this identificator 𝑐. Sim keeps records hcorr, 𝑐, Ŝi for
all identificators of corrupted servers.
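For intuition, the commitment interface the simulator relies on (Commit producing a pair (c, r), Unveil checking an opening) can be sketched as follows. This is a toy illustration with loud caveats: std::hash is NOT a cryptographic hash, so the sketch has neither the hiding nor the binding property the proof needs; a real instantiation would commit via a cryptographic hash, e.g. c = SHA3(k ∥ r). All names are our own, not from the thesis code.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Toy sketch of the Commit/Unveil interface (cf. Definitions 3 and 4).
// CAUTION: std::hash is not a cryptographic hash; this sketch is neither
// hiding nor binding and only illustrates the interface shape.
using Commitment = std::size_t;

// Commit(k) with explicit commitment randomness r; returns the commitment c.
Commitment commit(const std::string& k, const std::string& r) {
    return std::hash<std::string>{}(k + "|" + r);
}

// Unveil(c, k, r) = 1 iff the pair (k, r) opens the commitment c.
bool unveil(Commitment c, const std::string& k, const std::string& r) {
    return commit(k, r) == c;
}
```

Correctness in the sense of Definition 3 holds by construction: an honestly generated (c, r) always opens to the committed key.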
We also changed the response of the simulator to receiving a complete set of input labels
via (OT-Received, (𝑠𝑠𝑖𝑑, 1)), . . . , (OT-Received, (𝑠𝑠𝑖𝑑, 𝜆 + 𝑛)). As before, if the requests for
the labels were just simulated by Sim, i.e., there are records ⟨(𝑠𝑠𝑖𝑑, 𝑖), ⊥⟩ for all 𝑖 ∈ {1, . . . , 𝜆 + 𝑛},
it means that Sim must produce an output for the honest user. The simulator now uses the additional
power of the garbled circuit that allows Sim to check whether the encoded key 𝐾 and the encoded
opening information 𝑅 can be opened to the commitment 𝑐. If the
garbled circuit 𝐹 does not output ⊥, the simulator Sim requests output from FVOPRF via
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, U, S, 𝑐). Otherwise, Sim makes U abort the execution.
Finally, we consider the changes in the responses to 𝐻2 queries. The most notable change
is that 𝑐 is now a third argument to the hash function. This allows Sim to send RcvCmplt
and OfflineEval messages with 𝑐 as identificator to FVOPRF. Sim keeps a list of honest and
a list of corrupted servers, i.e., of their identificators. If 𝑐 is in the list of honestly initialized
servers, Sim knows the corresponding key 𝑘 and can validate C𝑘(𝐻1(𝑝)) = 𝑦. This can be seen
in line 32 of Figure 4.4. This case is analogous to the case in Figure 3.8, line 85, where Sim finds
the key of an honest server.
If 𝑐 is in the list of corrupted servers, Sim does not know the key corresponding to
𝑐. Remember that we assume in this chapter that A may choose different keys. As the
server is corrupted, Sim cannot safely call (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, A, 𝑐) for this server, as
a corresponding (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, A, 𝑐) message might already have been generated
as a result of all labels being received via OT. But in that case, Sim can safely send an
(OfflineEval, 𝑠𝑖𝑑, 𝑐, 𝑝) message, as the server is corrupted. This does not influence the
ticket counter of the server.
The proof of indistinguishability between EXEC^IDEAL_{FVOPRF, Sim, E} and EXEC_{VGC-OPRF, A, E}
now works in many parts analogously to the proof in Section 3.5. We will only elaborate
on the important differences and argue why the output of an honest user in the ideal world
is indistinguishable from the output of an honest user in the real world. We start with the
differences in the proof:
• The most important difference in the proof is that the commitment 𝑐 now has to
be taken into account. The intuition is the following. As the commitment scheme
COM is computationally hiding, it is safe for the simulator to send (Init, 𝑐) for a
simulated 𝑘 and a commitment (𝑐, 𝑟) ← Commit(𝑘) on that key to a potentially
corrupted U. An adversary A that could calculate some information about the key
𝑘 from 𝑐, where (𝑐, 𝑟) ← Commit(𝑘), would break the computationally hiding property of
COM.
• A similar statement to Lemma 1 can be proven by using the computationally hiding
property of the commitment scheme. The main idea is that if A provokes
Sim into sending (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, 𝑐) in line 34 of Figure 4.4, then the server
with identificator 𝑐 must be an honest server. If A is able to make Sim send
(RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑐) in line 20 of Figure 4.4, we are in an OPRF execution with
an honest user. By a similar reduction, we obtain a statement like Lemma 3, which
says that for an honest server and an honest user, A can at most with negligible
probability query 𝐻2(𝑝, 𝑦, 𝑐) such that 𝑐 is a commitment on the key 𝑘 of the honest
server and C𝑘(𝐻1(𝑝)) = 𝑦. This is because if the server is honest, A can at most with
negligible probability calculate 𝑦. The adversary A knows two pieces of information
that depend on 𝑘. The first is 𝑐. But if this would help A in calculating 𝑦, we could
construct an adversary against the hiding property of COM, see Definition 4. The
second piece of information is the garbling (𝐹, 𝐾𝑅, 𝑑). If that would help A, we could
construct an adversary against the privacy of the garbling scheme, see Definition 11.
As A does not have any information on 𝑘, the best chance to compute 𝑦 = C𝑘(𝐻1(𝑝))
is by guessing, as C is a PRF as defined in Definition 1.
Honest User Output As already discussed in Section 3.5, in the real world, 𝜌 is calculated
as 𝜌 = 𝐻2(𝑝, De(𝑑, Ev(𝐹, 𝐶𝑋 ∥ 𝐾𝑅)), 𝑐), where (𝐹, 𝐾𝑅, 𝑑) was generated by the server and
𝐶𝑋 are the labels received via OT for 𝑥 = 𝐻1(𝑝) and the identificator commitment 𝑐. In
the ideal world, 𝜌 is chosen uniformly at random by FVOPRF if a fresh (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑝)
message was sent. If an honest user with input 𝑝 interacts with S, the functionality FVOPRF
will send 𝜌 = 𝑇𝑠𝑖𝑑(S, 𝑝) as output for the honest user. The simulator must produce the
same output 𝜌 for 𝐻2(𝑝, 𝑦, 𝑐) if 𝑦 = C𝑘(𝐻1(𝑝)) and Unveil(𝑐, 𝑘, 𝑟) = 1 hold for S's key 𝑘
and opening information 𝑟. Therefore, we have to compare the output of 𝐻2 with the
outputs of FVOPRF. We distinguish the following cases in the simulation of 𝐻2:
Case 1: There is no record ⟨𝐻1, 𝑝, 𝑥⟩ found: Sim only needs to program the random oracle if
𝑝, 𝑦, and 𝑐 occur in a protocol execution. More precisely, if 𝑦 = C𝑘(𝐻1(𝑝)) holds
for some key 𝑘, where Unveil(𝑐, 𝑘, 𝑟) = 1 holds for some opening information 𝑟. That
is because in this case FVOPRF can eventually output a value 𝜌 as the output of an
honest user with input 𝑝 and identificator 𝑐 interacting with a server with key 𝑘 and
opening information 𝑟. We will call a query (𝑝, 𝑦, 𝑐) relevant if there is a key 𝑘 and an
opening information 𝑟 such that 𝑦 = C𝑘(𝐻1(𝑝)) and Unveil(𝑐, 𝑘, 𝑟) = 1. In the following,
we bound the probability for the event that (𝑝, 𝑦, 𝑐) becomes relevant when 𝐻1(𝑝)
is not determined yet.
All keys 𝑘1, . . . , 𝑘𝑡 of honest servers are chosen independently. However, this time we
also have to consider maliciously chosen keys from corrupted servers. The adversary
A could choose keys 𝑘̂1, . . . , 𝑘̂𝑠 that are somehow correlated. However, that does not
affect the following statement: Let 𝑡 ∈ N be the number of servers in the protocol
execution. Let 𝑘1, . . . , 𝑘𝑡 be the uniformly random and independently drawn keys
On (Init, S, 𝑠𝑖𝑑) from FVOPRF
 1 : If this is the first (Init, S, 𝑠𝑖𝑑) message from FVOPRF
 2 :     𝑘 ←$ {0, 1}^𝑚
 3 :     (𝑐, 𝑟) ← Commit(𝑘)
 4 :     record ⟨hon, 𝑐, 𝑘, 𝑟⟩
 5 :     send (Param, S, 𝑐) to FVOPRF
 6 :     send (Init, 𝑐) as message from S to U

On (Send, 𝑚𝑖𝑑, U, (Init, 𝑐)) from A on behalf of Ŝ
 7 : send (Send, 𝑚𝑖𝑑, Ŝ, U, (Init, 𝑐)) to A
 8 : on (ok, 𝑚𝑖𝑑) from A :
 9 :     record ⟨corr, 𝑐, Ŝ⟩
10 :     send (Param, Ŝ, 𝑐) to FVOPRF

On (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖)) from A to FOT
11 : if ∄⟨(𝑠𝑠𝑖𝑑, 𝑖), (𝐶𝑋𝑖[0], 𝐶𝑋𝑖[1])⟩ or ∄⟨(𝑠𝑠𝑖𝑑, 𝑖), 𝑥𝑖⟩ :
12 :     ignore this message
13 : elseif 𝑥𝑖 ≠ ⊥
14 :     send (OT-Received, (𝑠𝑠𝑖𝑑, 𝑖), 𝐶𝑋[𝑥𝑖]) to Û
15 : else
16 :     if ∀𝑡 ∈ {1, . . . , 𝑛} \ {𝑖} ∃⟨OT-Received, 𝑠𝑠𝑖𝑑, 𝑡⟩ :
17 :         and (∃⟨𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾𝑅, 𝑑)⟩
18 :         or ∃⟨S, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, (𝐹, 𝐾𝑅, 𝑑), 𝐶𝑋[0^𝑛], 𝐶𝑋[1^𝑛]⟩) :
19 :         if De(𝑑, Ev(𝐹, 𝐶[𝑐] ∥ 𝑋[0] ∥ 𝐾𝑅)) ≠ ⊥ :
20 :             send (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑, S, 𝑐) to FVOPRF
21 :         else
22 :             ignore this message
23 :     else
24 :         record ⟨OT-Received, 𝑠𝑠𝑖𝑑, 𝑖⟩

On a new query (𝑝, 𝑦, 𝑐) to 𝐻2(·, ·, ·)
25 : if ∃⟨𝐻2, 𝑝, 𝑦, 𝑐, 𝜌⟩ :
26 :     return 𝜌
27 : else
28 :     if ∄⟨𝐻1, 𝑝, 𝑥 = 𝐻1(𝑝)⟩ :
29 :         𝜌 ←$ {0, 1}^𝑙 and record ⟨𝐻2, 𝑝, 𝑦, 𝑐, 𝜌⟩
30 :         return 𝜌
31 :     else
32 :         if ∃⟨hon, 𝑐, 𝑘, 𝑟⟩ and C𝑘(𝑥) = 𝑦 :
33 :             send (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, 𝑝) to FVOPRF
34 :             send (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, 𝑐) to FVOPRF
35 :             if FVOPRF does not answer :
36 :                 output fail and abort
37 :             else
38 :                 receive (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, 𝜌) from FVOPRF
39 :                 record ⟨𝐻2, 𝑝, 𝑦, 𝑐, 𝜌⟩
40 :                 return 𝜌
41 :         elseif ∃⟨corr, 𝑐, Ŝ⟩
42 :             send (OfflineEval, 𝑠𝑖𝑑, 𝑐, 𝑝) to FVOPRF
43 :             receive (OfflineEval, 𝑠𝑖𝑑, 𝜌) from FVOPRF
44 :             record ⟨𝐻2, 𝑝, 𝑦, 𝑐, 𝜌⟩
45 :             return 𝜌
46 :         else
47 :             𝜌 ←$ {0, 1}^𝑙 and record ⟨𝐻2, 𝑝, 𝑦, 𝑐, 𝜌⟩
48 :             return 𝜌

Figure 4.4.: The Major Changes to Get a Simulator Sim for FVOPRF.
used by the honest or corrupted servers. Let C be the PRF function calculated by
VC and let 𝑛 ∈ Ω(𝜆) be the output length of C. We assumed in the beginning that
C𝑘𝑖(·) is a permutation. Thus, if we choose some uniformly random input 𝑥 ∈ {0, 1}^𝑛,
we get that C𝑘𝑖(𝑥) ∈ {0, 1}^𝑛 is uniformly random. If 𝐻1(𝑝) is not queried yet, we
have for every 𝑖 ∈ {1, . . . , 𝑡} and every 𝑦 ∈ {0, 1}^𝑛:

    Pr[C𝑘𝑖(𝐻1(𝑝)) = 𝑦] ≤ 1/2^𝑛,

where the probability is taken over the random output of 𝐻1. Thus, we get by a
union bound that the probability for a key to make (𝑝, 𝑦, 𝑐) relevant is at most 𝑡/2^𝑛,
which is negligible.
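Written out, the union bound in Case 1 is the following chain (restating the argument above in one display; no new assumptions):

\[
\Pr_{H_1}\bigl[\exists\, i \in \{1,\dots,t\} : \mathsf{C}_{k_i}(H_1(p)) = y \bigr]
\;\le\; \sum_{i=1}^{t} \Pr_{H_1}\bigl[\mathsf{C}_{k_i}(H_1(p)) = y \bigr]
\;\le\; t \cdot 2^{-n},
\]

which is negligible in 𝜆, since 𝑛 ∈ Ω(𝜆) and the number of servers 𝑡 is polynomial in 𝜆.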
Case 2: Records ⟨𝐻1, 𝑝, 𝑥⟩ and ⟨hon, 𝑐, 𝑘, 𝑟⟩ exist such that C𝑘(𝑥) = 𝑦:
In this case, the value 𝑥 is the output of the random oracle 𝐻1 on input 𝑝. As the
commitment scheme COM is correct, we have that Unveil(𝑐, 𝑘, 𝑟) = 1, because Sim
calculated 𝑐 and 𝑟 as (𝑐, 𝑟) ← Commit(𝑘), see Definition 3. The tuple (𝑝, 𝑦, 𝑐) is
relevant, because the key of an honest server produces the output 𝑦 when the input 𝑥
is provided to the circuit, and the circuit does not output ⊥, because Unveil(𝑐, 𝑘, 𝑟) = 1.
Thus, Sim programs 𝐻2(𝑝, 𝑦, 𝑐). The simulator Sim sends (Eval, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, S, 𝑝) to
FVOPRF for a new subsession id 𝑠𝑠𝑖𝑑′. That means Sim initiates a new protocol
execution and itself requests the output value 𝜌 = 𝑇𝑠𝑖𝑑(S, 𝑝) from FVOPRF. We argued
in Section 4.2 why a similar statement to Lemma 1 holds. Thus, Sim can safely send
the (RcvCmplt, 𝑠𝑖𝑑, 𝑠𝑠𝑖𝑑′, A, 𝑐) message without decreasing the ticket counter of S
to 0. The random oracle 𝐻2(𝑝, 𝑦, 𝑐) is programmed to the answer 𝜌 of FVOPRF. The
programming ensures that E will get the same output 𝜌 = 𝐻2(𝑝, 𝑦, 𝑐) when invoking
an execution of the protocol between an honest user with input 𝑝 and the honest
server with identificator 𝑐.
Case 3: There are records ⟨𝐻1, 𝑝, 𝑥⟩ and ⟨corr, 𝑐, Ŝ⟩:
In that case, the value 𝑥 is the output of the random oracle 𝐻1 on input 𝑝, but 𝑐 is the
identificator of a corrupted server. The simulator Sim sends (OfflineEval, 𝑠𝑖𝑑, Ŝ, 𝑝)
to FVOPRF and receives the answer (OfflineEval, 𝑠𝑖𝑑, 𝜌) from FVOPRF. Sim programs
𝐻2(𝑝, 𝑦, 𝑐) to the output 𝜌 of the offline evaluation. Sim does not check whether the key
𝑘 on which 𝑐 is a commitment even outputs 𝑦. In fact, Sim does not even
know that key, as A just sent (Param, Ŝ, 𝑐) to Sim. But Sim knows that sending
(OfflineEval, 𝑠𝑖𝑑, 𝑐, 𝑝) will not affect the ticket counter. And if for some key 𝑘 and
some opening information 𝑟 with Unveil(𝑐, 𝑘, 𝑟) = 1 it would hold that 𝑦 ≠ 𝑦′ = C𝑘(𝑥),
it would mean that the tuple (𝑝, 𝑦, 𝑐) is not relevant, i.e., there will be no real-world
execution of the protocol where an honest user would query 𝐻2(𝑝, 𝑦, 𝑐). An
honest user would query 𝐻2(𝑝, 𝑦′, 𝑐) instead. In that case, Sim has unnecessarily
programmed 𝐻2. But as the programming was done with a uniformly random value,
the output of 𝐻2 is still indistinguishable from a uniformly random value.
Contrarily, if 𝑐 is actually a commitment on a key 𝑘 such that A knows opening
information 𝑟 such that Unveil(𝑐, 𝑘, 𝑟) = 1 and 𝑦 = C𝑘(𝑥), the oracle 𝐻2(𝑝, 𝑦, 𝑐) is
programmed to the right 𝜌. That is because the commitment scheme is computationally
binding. That means A can at most with negligible probability find another
key 𝑘′ and opening information 𝑟′ such that Unveil(𝑐, 𝑘′, 𝑟′) = 1. That means for an
OPRF execution between an honest user with input 𝑝 and a corrupted server, the
server can send a garbling (𝐹, 𝐾𝑅′, 𝑑) at most with negligible probability such that
⊥ ≠ De(𝑑, Ev(𝐹, 𝐶𝑋 ∥ 𝐾𝑅′)), where 𝐶𝑋 are the labels for 𝑐 and 𝑥 = 𝐻1(𝑝), and 𝐾𝑅′
are the labels for 𝑘′ and 𝑟′.
Case 4: If 𝑐 was never “registered” as an identificator via a (Param, S, 𝑐) message, 𝐻2(𝑝, 𝑦, 𝑐) is set
to a uniformly random value. In this case, no user received an (Init, 𝑐) message. Thus,
no honest real-world user will input this 𝑐 to the random oracle 𝐻2.
5. Comparison of Concrete Efficiency
As we were interested in the concrete efficiency of our construction, we implemented it and
compared it to other OPRF protocols. For the implementation, we leveraged a C++ framework
called EMP-Toolkit [WMK16]. Further, we implemented a version of the state-of-the-art
OPRF protocol, 2HashDH, by [JKK14; Jar+16; JKX18]. Finally, we also compared the two
former protocols to the lattice-based protocol of Albrecht et al. [Alb+21]. This protocol was
already implemented by [Alb+21]. The main goal was to compare the concrete efficiency
of different OPRFs on the same computer. All source code described below can be found
in the GitHub repository github.com/SebastianFaller/OPRF-Garbled-Circuits. The
benchmark results refer to the version of commit ad35dbf01dc8bf4f09f2bd839aa36bda042675e6.
5.1. Garbled-Circuit-Based OPRF
We will introduce our implementation in two steps. First, we present the implementation
of a generic garbling scheme reminiscent of the formal definition from Section 2.6.3. The
source code for the garbling scheme was written in collaboration with the supervisors of
the thesis.
5.1.1. Implementing the Garbling Scheme
The EMP-Toolkit is a framework that offers various routines for the efficient calculation of
garbled circuits and other cryptographic building blocks, such as hashing and symmetric
encryption. To the best of our knowledge, almost all relevant garbled circuit optimizations,
including Free-XOR [KS08] and Half-Gates [ZRE15], are implemented. Only the newly
published Three-Halves technique from Rosulek and Roy [RR21] is not yet implemented.
As an example of its usage, we show in the following how garbling a circuit can be realized in
C++ using EMP-Toolkit.
EMP-Toolkit processes circuits that are described in Bristol Format [Arc+]. Bristol
Format is a specification of how to encode algebraic or boolean circuits. EMP-Toolkit
already has a description of AES in Bristol Format built-in.
EMP-Toolkit allows loading a circuit from a Bristol Format file via the class BristolFormat.
This can be done by using the constructor of the class. The statement BristolFormat cf(
circuit_filename.c_str()); constructs a Bristol Format circuit object named cf when given a
path to the file as the string circuit_filename. We'd like to emphasize that this is not the garbled
circuit yet, but rather a description of the plain boolean circuit. EMP-Toolkit defines the
type HalfGateGen<T>. By creating an object of this type, the programmer determines:
• With which optimizations the circuit will be calculated. HalfGateGen is the class that
implements “Half-Gates”, but there is, e.g., a class PrivacyFreeGen that implements a
garbling scheme that is even more efficient but has no privacy.
• Where the garbled circuit is written to. This is done via the type parameter T. EMP-
Toolkit has several input-output classes that can be specified as a type parameter.
For instance, if T is NetIO, the garbled circuit is directly sent over the network. If
FileIO is chosen, the circuit is written to a file on the machine. We would like to note here
that solving this problem via C++ templates might not be an optimal choice, as it
is elusive to programmers which types might be used as type parameters without
thoroughly knowing the framework. Further, it prevents changing the desired
behavior dynamically at runtime. A better solution would have been the “strategy” design
pattern [Gam10].
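The “strategy” alternative mentioned in the last bullet could look like the following minimal, self-contained sketch. All names below (IOStrategy, MemoryIO, Garbler, emit_gate) are our own illustration and only mimic the role of EMP-Toolkit's IO classes; this is not the framework's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the "strategy" design pattern for choosing the garbling output
// destination at runtime instead of via C++ templates. Illustrative names,
// not EMP-Toolkit's API.
struct IOStrategy {
    virtual void send_data(const void* data, std::size_t len) = 0;
    virtual ~IOStrategy() = default;
};

// One concrete strategy: buffer the garbled circuit in local memory
// (the role that MemIO plays in Listing 5.4).
struct MemoryIO : IOStrategy {
    std::vector<uint8_t> buffer;
    void send_data(const void* data, std::size_t len) override {
        const uint8_t* p = static_cast<const uint8_t*>(data);
        buffer.insert(buffer.end(), p, p + len);
    }
};

// The garbler receives its IO strategy at construction time; swapping in a
// network-backed strategy would not require re-instantiating any template.
struct Garbler {
    IOStrategy* io;
    explicit Garbler(IOStrategy* io) : io(io) {}
    // Emit one 32-byte garbled-gate table to whatever sink was configured.
    void emit_gate(const uint8_t* table) { io->send_data(table, 32); }
};
```

With this design, the same Garbler object could stream to memory during tests and to the network in production, selected at runtime rather than at compile time.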
To generate a garbling, we also have to assign random labels to all input bits. The basic
unit of computation in EMP-Toolkit is the type block. By default, all garbling routines
offer 128 bits of security, so a block has 128 bits. That means each input label will be one
block of pseudo-random data. We generate these blocks by using EMP-Toolkit's PRG:
prg.random_block(input, n); fills the block array input with 𝑛 pseudo-random blocks of data.
The same has to be done for the output labels. Afterwards, the circuit can be garbled by
using cf.compute(output, input_1, input_2);, where input_1, input_2, and output are the above
calculated arrays.
The listing in Listing 5.1 shows the whole code for garbling a circuit. Note that the listing
also shows how further values as the encoding information and decoding information are
computed.
template <class IOType>
void garble(IOType* io, vector<block>* encoding_info, vector<bool>* decoding_info, const string& circuit_filename) {
    HalfGateGen<IOType>::circ_exec = new HalfGateGen<IOType>(io);
    BristolFormat cf(circuit_filename.c_str());
    encoding_info->resize(cf.n1+cf.n2+1);
    decoding_info->resize(cf.n3);
    block* input_1 = new block[cf.n1];
    block* input_2 = new block[cf.n2];
    block* output = new block[cf.n3];
    PRG prg;
    prg.random_block(input_1, cf.n1);
    prg.random_block(input_2, cf.n2);
    //garble the circuit
    cf.compute(output, input_1, input_2);
    //write decoding info
    for(int i=0; i<cf.n3; i++) {
        (*decoding_info)[i] = getLSB(output[i]);
    }
    //write encoding info
    (*encoding_info)[0] = ((HalfGateGen<IOType>*) HalfGateGen<IOType>::circ_exec)->delta;
    for (int i=0; i<cf.n1; i++) {
        (*encoding_info)[i+1] = input_1[i];
    }
    for (int i=0; i<cf.n2; i++) {
        (*encoding_info)[cf.n1+1+i] = input_2[i];
    }
    //Clean up
    delete HalfGateGen<IOType>::circ_exec;
    delete[] input_1;
    delete[] input_2;
    delete[] output;
}

Listing 5.1: Garbling a Circuit Using EMP-Toolkit
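Two conventions of the half-gates garbling show up in Listing 5.1: all wires share one global Free-XOR offset delta, and the decoding information is the least significant bit of each output wire's 0-label (cf. getLSB). A minimal self-contained sketch of these two conventions follows, with 64-bit words standing in for EMP's 128-bit blocks; this is illustrative only, not EMP-Toolkit code.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// A wire's two labels under Free-XOR: label(1) = label(0) XOR delta, where
// delta is one global offset shared by all wires of the circuit.
struct Wire {
    uint64_t label0;  // label encoding bit 0
    uint64_t delta;   // global Free-XOR offset
    uint64_t label(int bit) const { return bit ? (label0 ^ delta) : label0; }
};

// Point-and-permute: forcing lsb(delta) = 1 guarantees that the two labels
// of every wire differ in their least significant bit.
uint64_t sample_delta(std::mt19937_64& prg) { return prg() | 1u; }

// Decoding information for one output wire: the lsb of its 0-label.
bool decoding_bit(const Wire& w) { return w.label0 & 1; }

// Recover the plaintext bit from an evaluated output label and the
// decoding bit: the bit is 1 exactly when the lsbs disagree.
bool decode(uint64_t out_label, bool d) { return (out_label & 1) != d; }
```

This is why a single bit per output wire suffices as decoding information in Listing 5.1: the evaluator only needs to compare least significant bits.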
Similar to the above listing, one can implement a whole garbling interface, inspired
by the formal definition from Section 2.6.3. We get the interface in Listing 5.2. The
function void garble(...) is as described above. This function corresponds to the function
(𝐹, 𝑒, 𝑑) = Gb(1𝜆 , 𝑓 ) of Section 2.6.3.
The function void encode(...) takes a vector input of boolean values as argument. This
vector contains all input bits that shall be encoded. The vector encoding_info contains the
encoding information that was output by void garble(...). The final labels will be stored
in the vector encoded_input. This function corresponds to the function 𝑋 = En(𝑒, 𝑥) of
Section 2.6.3.
The function void evaluate(...) takes an object for handling input and output as the
first argument. The garbled circuit itself  meaning 𝐹 in terms of Section 2.6.3  will be
read from this object. The function also takes some input labels encoded_input and a string
circuit_filename that points to the Bristol Format description of the circuit as arguments.
The circuit will be evaluated on the specified input labels, and the result will be stored in the vector
encoded_output. This function corresponds to the function 𝑌 = Ev(𝐹, 𝑋) of Section 2.6.3.
The function void decode(...) takes the encoded_output that was calculated by void evaluate
(...) and the decoding_info calculated by void garble(...) and stores the final output as
vector of boolean values in output. This function corresponds to the function 𝑦 = De(𝑑, 𝑌 )
of Section 2.6.3.
template <class IOType>
void garble(IOType* io, vector<block>* encoding_info, vector<bool>* decoding_info, const string& circuit_filename);

void encode(vector<emp::block>* encoded_input, const vector<bool>& input, const vector<emp::block>& encoding_info);

template <class IOType>
void evaluate(IOType* io, vector<block>* encoded_output, const vector<block>& encoded_input, const string& circuit_filename);

void decode(vector<bool>* output, const vector<emp::block>& encoded_output, const vector<bool>& decoding_info);

Listing 5.2: Garbling Scheme Interface
5.1.2. Implementing the Protocol Parties
We employed the above scheme to implement the protocol parties for GC-OPRF. We used
AES as concrete instantiation for the circuit C in our implementation. An overview of the
protocol flow can be found in Figure 5.1.
U(𝑝)                                    S
ℎ ← 𝐻1(𝑝)                               𝑘 ←$ {0, 1}^𝑚

            ------ Garble ------>
                                        (𝐹, 𝑒, 𝑑) ← Gb(1^𝜆, C)
                                        𝐾 ← En(𝑒, 𝑘)
                                        𝑋[0] ← En(𝑒, 0^𝑛)
                                        𝑋[1] ← En(𝑒, 1^𝑛)
            <----- (𝐹, 𝐾, 𝑑) -----

            <-- OT(𝑋[0], 𝑋[1]; ℎ) -->   // user obtains 𝑋[ℎ]

𝑌 = Ev(𝐹, 𝐾 ∥ 𝑋[ℎ])
𝑦 = De(𝑑, 𝑌)
𝜌 ← 𝐻2(𝑝, 𝑦)

Figure 5.1.: Overview of GC-OPRF.
We modeled the protocol by creating a class for the user and a class for the server.
The user has four member functions that allow interaction with the server. The function
bool* eval(string pwd, int ssid) hashes the password of the user using SHA3 and
returns the input bits for the circuit as bool*. The function void receiveLabels(bool*
choices, block* encoded_user_input) uses EMP-Toolkit's OT interface to exchange the labels
for the user input. For the sake of simplicity, we did not implement a provably UC-secure
OT protocol but resorted to using the already implemented Naor-Pinkas OT [NP01]
from EMP-Toolkit. We describe the protocol for the interested reader in Appendix A.3.
With void receiveKeyAndDecoding(block* encoded_key, bool* decoding_info), the user receives
the labels for the key and the decoding information via OT. Note that in contrast to
the actual protocol description in Figure 3.2, the garbling of the circuit 𝐹 is sent after
the key labels and the decoding information 𝐾, 𝑑. This is because EMP-Toolkit consumes
the garbled circuit directly from the network interface when the circuit is evaluated.
Finally, uint8_t* onLabelsReceived(int ssid, const block* encoded_user_input, const
block* encoded_key, const bool* decoding_info) evaluates the circuit, decodes the output, and
hashes the output with SHA3. This is depicted in Listing 5.3. Note that certain constants
such as ip_addr and AES_KEY_SIZE are defined elsewhere. We will not go into the details of each
called function, as they simply use the garbled circuit scheme described above and
the hash function SHA3.
NetIO user_io(ip_addr, port); // User is the OT-Receiver.
User<NetIO> u(sid, &user_io);

bool* current_h = u.eval(password, ssid);

// Receive garbled encoded key and decoding info
block encoded_key[AES_KEY_SIZE];
bool decoding_info[AES_INPUT_SIZE];
u.receiveKeyAndDecoding(encoded_key, decoding_info);

// Request labels via OT for H_1(password)
block labels[AES_INPUT_SIZE];
u.receiveLabels(current_h, labels);

// Evaluate the circuit and hash the output
uint8_t* output = u.onLabelsReceived(ssid, labels, encoded_key, decoding_info);

Listing 5.3: User Execution of GC-OPRF
To create a server, one needs to choose a uniformly random key. The most important
method of the server is void onGarble(int ssid, vector<block>* encoded_ones, vector<block>*
encoded_zeroes). This function creates a garbled circuit, decoding information, and input
labels. Note that we first use the EMP class MemIO to garble the circuit. This is again because
the garbling routines produce direct output to the network interface when the circuit is
garbled. Letting this output go to local memory instead of the network interface facilitates
the sending of the remaining data.
void onGarble(int ssid, vector<block>* encoded_ones, vector<block>* encoded_zeroes){
    //Write garbled circuit to memory first, so other data is sent first
    MemIO* mem_io = new MemIO();
    vector<block> encoding_info;
    vector<bool> decoding_info;
    garble(mem_io, &encoding_info, &decoding_info, circuit_filename);

    vector<bool> input_zeros(AES_INPUT_SIZE, false);
    //append key
    input_zeros.insert(input_zeros.end(), key.begin(), key.end());
    encode(encoded_zeroes, input_zeros, encoding_info);
    vector<block> encoded_key = vector<block>(encoded_zeroes->begin() + AES_INPUT_SIZE, encoded_zeroes->begin() + (AES_INPUT_SIZE+AES_KEY_SIZE));

    encoded_zeroes->resize(AES_INPUT_SIZE);

    vector<bool> input_ones(AES_INPUT_SIZE, true);
    encode(encoded_ones, input_ones, encoding_info);

    // Send everything to the user
    sendKeyAndDecoding(encoded_key, decoding_info);

    sendLabelsOverOT(*encoded_zeroes, *encoded_ones);

    sendGarbledCircuitFromMem(mem_io);
}

Listing 5.4: Server's Response to a Garble Message
5.2. The 2HashDH Protocol
For the implementation of the 2HashDH protocol from [Jar+16; JKK14], we relied on the
OPENSSL library [OPENSSL] in version 1.1.1. OPENSSL is a commercial-grade open-source
library for cryptography and secure communication and is already installed on
most Linux systems. In particular, we used its algorithms for elliptic curve cryptography
to instantiate the 2HashDH protocol. The protocol is depicted at a high level in Figure 5.2.
Note that we will use additive group notation in this chapter.
U(𝑥)                                    S
ℎ ← 𝐻1(𝑥)                               𝑘 ←$ Z𝑞
𝑟 ←$ Z𝑞
𝑎 ≔ 𝑟 · ℎ       ------ 𝑎 ------>
                <----- 𝑏 -------        𝑏 ≔ 𝑘 · 𝑎
𝑦 = (1/𝑟) · 𝑏
𝑧 = 𝐻2(𝑥, 𝑦)

Figure 5.2.: Overview of 2HashDH.
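The blinding algebra of Figure 5.2 can be checked with a toy, insecure stand-in for the elliptic curve group: the additive group Z_q for a prime q. The function names and the parameter q below are our own illustration, not part of the thesis implementation; the only point is the unblinding identity (1/r) · (k · (r · h)) = k · h.

```cpp
#include <cassert>
#include <cstdint>

// Toy, INSECURE stand-in for the elliptic curve group of Figure 5.2.
// In Z_q, "scalar multiplication of a point" is plain modular multiplication.
const uint64_t q = 1000000007;  // prime "group order" (illustrative)

uint64_t mul(uint64_t a, uint64_t b) { return (a % q) * (b % q) % q; }

// Modular inverse via Fermat's little theorem: r^(q-2) mod q.
uint64_t inv(uint64_t r) {
    uint64_t result = 1, base = r % q, e = q - 2;
    while (e > 0) {
        if (e & 1) result = mul(result, base);
        base = mul(base, base);
        e >>= 1;
    }
    return result;
}

// One blinded evaluation: the user blinds h with r, the server multiplies
// by its key k, and the user unblinds; the server never sees h itself.
uint64_t eval_2hashdh(uint64_t h, uint64_t k, uint64_t r) {
    uint64_t a = mul(r, h);   // user -> server
    uint64_t b = mul(k, a);   // server -> user
    return mul(inv(r), b);    // unblinded result k*h
}
```

Replacing Z_q by the NIST P-256 group, mul by EC_POINT_mul, and inv by BN_mod_inverse recovers the structure of Listing 5.5.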
A slightly simplified listing of the code for the 2HashDH user can be seen in Listing 5.5.
Note that we still used EMP-Toolkit for the network communication. This dependency
could easily be removed to make our implementation more portable.
The user starts by initializing a group. We decided to use the NIST P256 curve [Sta19],
as it offers 128 bits of security, which is comparable to the garbled circuit implementation
of EMP-Toolkit. The group is initialized by EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1).
Usually in OPENSSL, one also needs to specify a pointer to the BN_CTX structure, which can
be described as a buffer for certain calculations. Next, by calling EC_GROUP_get_curve(), one
gets the parameters of the elliptic curve. In particular, we are interested in the order of
the underlying field, as we will need this value later to invert the blinding value r. The
first important step of the protocol is to hash the input string pwd to a point on the elliptic
curve. Note that this is by far the most involved part of the protocol. We will elaborate on
it in Appendix A.1. After obtaining a point g from the password, the user chooses a random
blinding value by calling BN_rand_range(r, field_order). With EC_POINT_mul(ec_group, a, NULL
, g, r, bn_ctx), the product 𝑟 · 𝑔 is calculated, using additive group notation. The resulting
point is then sent to the server. The server will execute similar code to multiply the received
point with its key and send it back. Now the user calculates the inverse of 𝑟 ∈ F with
BN_mod_inverse(oneOverR, r, field_order, bn_ctx) and multiplies this with the received point
from the server by calling EC_POINT_mul(ec_group, y, NULL, b, oneOverR, bn_ctx). Finally, the
resulting point is hashed using SHA3.
// Create group object for NIST P256 curve
EC_GROUP* ec_group = EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1);
BN_CTX* bn_ctx = BN_CTX_new();
EC_GROUP_precompute_mult(ec_group, bn_ctx);
// 32 byte for field element and one for encoding byte
const int ec_point_size_comp = 33;

// order needed to create and invert r
BIGNUM* field_order = BN_new();
EC_GROUP_get_curve(ec_group, field_order, NULL, NULL, bn_ctx);

EC_POINT* g = hash_to_curve(pwd, ec_group, bn_ctx);

BIGNUM* r = BN_new();
//Choose random r
BN_rand_range(r, field_order);
EC_POINT* a = EC_POINT_new(ec_group);
// a = g*r
EC_POINT_mul(ec_group, a, NULL, g, r, bn_ctx);

uint8_t buf[ec_point_size_comp];
// Convert point to raw binary data
EC_POINT_point2oct(ec_group, a, POINT_CONVERSION_COMPRESSED, buf, ec_point_size_comp, bn_ctx);

user_io.send_data(buf, ec_point_size_comp);

// Receive b from server
EC_POINT* b = EC_POINT_new(ec_group);
user_io.recv_data(buf, ec_point_size_comp);
BIGNUM* oneOverR = BN_new();
BN_mod_inverse(oneOverR, r, field_order, bn_ctx);
EC_POINT* y = EC_POINT_new(ec_group);
// y = (1/r)*b
EC_POINT_mul(ec_group, y, NULL, b, oneOverR, bn_ctx);

// Hash the resulting point
EC_POINT_point2oct(ec_group, y, POINT_CONVERSION_COMPRESSED, buf, ec_point_size_comp, bn_ctx);
uint8_t hashTwo[32]; // 32 bytes sha3 output

sha3_256(hashTwo, buf, ec_point_size_comp);

Listing 5.5: User for 2HashDH
5.3. Lattice-based OPRF
Protocol                  | Avg. Runtime [ms]  | Network Traffic [kB]
Our work                  | 66.05 ± 7.69       | 241.541
2HashDH [JKX18]           | 1.39 ± 0.48        | 0.066
Albrecht et al. [Alb+21]  | 7406.008 ± 54.890  | 513.254 ± 0.170

Figure 5.3.: Overview of the Benchmark Results

To get a plausibly post-quantum secure OPRF protocol as another comparison to our
construction, we chose the lattice-based OPRF from Albrecht et al. [Alb+21]. In their work,
Albrecht et al. [Alb+21] implemented a proof of concept of their protocol in SageMath
[Ste05]. For the sake of simplicity, all zero-knowledge proofs were left out in the
implementation. We benchmarked this SageMath implementation. However, the comparison to
our protocol can only be seen as a rough estimate of actual efficiency. On the one hand,
an implementation of the protocol in C++, as we wrote for our construction and 2HashDH,
would lead to significantly better performance. That is because SageMath is an interpreted
language, based on Python, while C++ is compiled. On the other hand, the performance
impact of zero-knowledge proofs in a lattice-based setting can be enormous. Albrecht et al.
[Alb+21, Sec 5.3] estimate that using the state-of-the-art lattice-based zero-knowledge
proof from [Yan+19] would result in more than 2^40 bits of communication. Another point
that makes this comparison less reliable is the estimation of lattice parameters. In general,
it is considered a non-trivial task to choose appropriate lattice parameters in order
to achieve a required security level. We opted to choose similar parameters as for the
National Institute of Standards and Technology (NIST) post-quantum competition algorithm
NewHope [Alk+16, Protocol 3], as both claim a security level of 128 bits, which is the
same security level we have for the 2HashDH implementation and the implementation of
our construction. Namely, we let the lattice dimension 𝑛 = 1024, chose the prime modulus
𝑞 as a 14-bit prime, and let the rounding modulus 𝑝 = 3. Note that this parameter choice is
likely over-optimistic and should not be considered for real-world implementations of the
protocol. Albrecht et al. [Alb+21] estimate the parameters for their scheme themselves,
and their estimations are far more pessimistic. They suggest 𝑛 = 16384, a prime modulus
𝑞 with around 256 bits, and a rounding modulus that is polynomial in 𝜆. We tried these
parameters for our benchmarks, but SageMath would abort the execution of even a single
protocol instance, i.e., one exchanged PRF value, with a failure message. We believe that
the test laptop for our benchmarks does not have enough memory. Therefore, we fell back
to the smaller parameters mentioned above.
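For intuition, the “rounding modulus” 𝑝 enters through a rounding step of the form round_p(x) = ⌊(p/q) · x⌉ mod p, which maps a value in Z_q down to Z_p, as used in rounding-based lattice PRFs such as the one underlying [Alb+21]. A minimal sketch of this step with our benchmark-style parameters (a 14-bit prime q and p = 3); this is an illustration of the rounding operation only, not the benchmarked SageMath code.

```cpp
#include <cassert>
#include <cstdint>

// Rounding step of rounding-based lattice PRFs:
// round_p(x) = floor(p*x/q + 1/2) mod p, computed in integers as
// floor((2*p*x + q) / (2*q)) mod p.
const uint64_t q = 12289;  // 14-bit prime modulus (the NewHope prime)
const uint64_t p = 3;      // rounding modulus

// Map x in Z_q to its nearest scaled value in Z_p.
uint64_t round_to_p(uint64_t x) {
    return ((2 * p * (x % q) + q) / (2 * q)) % p;
}
```

Because q is much larger than p, small additive noise in x almost never changes round_to_p(x), which is what lets both parties arrive at the same PRF output despite the noisy lattice computation.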
5.4. Benchmarks
We tested the three implementations on an Intel Core i5-5200U CPU @ 2.20GHz × 4 on the
local network interface. We measured the running time in milliseconds that each program
needs from the invocation of an OPRF session until the user has calculated the output. The
server used the same PRF key for all executions. We also measured the amount of data
that the protocols exchange over the network, meaning data sent from user to server and
vice versa. We summarized the results in Figure 5.3.
(Bar charts of average runtime [ms] over n = 1000 runs; values as in Figure 5.3.)
(a) Running times of GC-OPRF (66.05 ± 7.69 ms) and 2HashDH [JKX18] (1.39 ± 0.48 ms).
(b) Running times of GC-OPRF, 2HashDH [JKX18], and the lattice-based protocol of [Alb+21] (7406.01 ± 54.89 ms).
Figure 5.4.: Comparison of the Measured Running Times.
5.4.1. Running Time
We depicted the results for the running time measurement in Figure 5.4. We measured
an average running time of 66.05 ms for our own GC-OPRF protocol, with a standard
deviation of 7.69 ms. We measured an average running time of 1.39 ms for 2HashDH, with
a standard deviation of 0.48 ms. With under two milliseconds, the 2HashDH protocol by
[JKX18] was about 50 times faster than our construction. This is not surprising as the
protocol merely needs to exchange two points on an elliptic curve. We found a noticeable
difference in running time compared to the lattice-based construction of [Alb+21]. We measured an
average running time of 7559.70 ms for the [Alb+21] protocol, with a standard deviation
of 184.61 ms. Our construction is over 110 times faster than the lattice-based protocol. We
note that this difference might shrink slightly when the communication goes over a
high-latency Wide Area Network (WAN), because the protocol from [Alb+21] requires only
two rounds of communication, while our construction requires four rounds.
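The speed-up factors quoted above follow directly from the measured mean running times:

```python
# Measured average running times in milliseconds (n = 1000 runs each).
gc_oprf  = 66.05    # our garbled-circuit OPRF
two_hash = 1.39     # 2HashDH [JKX18]
lattice  = 7559.70  # lattice-based OPRF [Alb+21], simplified

print(round(gc_oprf / two_hash, 1))  # 47.5 -> 2HashDH is about 50 times faster
print(round(lattice / gc_oprf, 1))   # 114.5 -> ours is over 110 times faster
```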
5.4.2. Network Traffic
We depicted the results for the network traffic measurement in Figure 5.5. Our construction
sends 241.541 kB of data over the network. 2HashDH by [JKX18] sends only two points
on the NIST P-256 curve, which is exactly 66 B. We measured about 513.474 kB of network
traffic, with a standard deviation of 96 kB, for the lattice-based protocol by [Alb+21]. Note
that the network traffic of our GC-OPRF implementation and of 2HashDH are constant
values, while there are slight variations in the measurements for the protocol of
[Alb+21]. This is because the transmitted value in the protocol is a random element in a
cyclotomic ring modulo some prime number. SageMath automatically compresses those
elements if possible, which leads to a varying size.
5. Comparison of Concrete Efficiency
[Bar chart omitted. Measured network traffic over n = 1000 runs: Garbled-Circuit-OPRF 241.54 ± 0.00 kB, 2HashDH 0.07 ± 0.00 kB, lattice-based without ZK 513.25 ± 0.17 kB.]
Figure 5.5.: Comparison of the Measured Network Traffic.
The measured network traffic for our GC-OPRF implementation matches our theoretical
estimates. We estimated around 230 kB of traffic for our construction as follows: According
to [Arc+], the employed AES circuit has 6400 AND gates. Each AND gate requires two
ciphertexts to be transmitted. The ciphertext used in EMP-Toolkit is 128 bits long. Thus,
we have 32 B of data for each AND gate. This makes 6400 · 32 B = 204 800 B. Additionally,
we have 128 executions of a Naor-Pinkas OT. This OT protocol is DLOG-based, and
EMP-Toolkit implements a variant with elliptic curves. EMP-Toolkit uses the same NIST
P-256 elliptic curve for OT as we did for 2HashDH. However, it represents a group
element uncompressed, as 65 B of data. We reduced this cost to 33 B in our 2HashDH
implementation by using a compressed representation from OpenSSL [OPENSSL]: one
does not need to store both an x- and a y-coordinate for a point on an elliptic curve; it is
sufficient to store the x-coordinate and the sign of the y-coordinate. A single 1-out-of-2 OT
requires the transfer of three group elements and two ciphertexts (see our description of
Naor-Pinkas OT in Appendix A.3). Again, a ciphertext is 128 bits, i.e., 16 B, long. All OT
executions sum up to 128 · (3 · 65 B + 2 · 16 B) = 29 056 B. In total, we have
204 800 B + 29 056 B = 233 856 B. We assume that the difference from the actually measured
value comes from metadata and other overhead produced by EMP-Toolkit.
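The estimate above can be reproduced in a few lines, using only the figures stated in the text:

```python
# Garbled circuit: 6400 AND gates, two 128-bit (16 B) ciphertexts per gate.
and_gates = 6400
gc_bytes = and_gates * 2 * 16          # 204 800 B

# 128 Naor-Pinkas OTs: 3 uncompressed P-256 points (65 B) + 2 ciphertexts (16 B) each.
ot_bytes = 128 * (3 * 65 + 2 * 16)     # 29 056 B

total = gc_bytes + ot_bytes
print(gc_bytes, ot_bytes, total)       # 204800 29056 233856
```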
6. Conclusion
In this work, we investigated the security of a garbled-circuit-based OPRF in the UC-
framework [Can01]. To realize an ideal OPRF functionality in the style of Jarecki, Krawczyk,
and Xu [JKX18], we augmented the “straightforward” construction of Pinkas et al. [Pin+09]
with a second hash function. This second hash function was modeled as a random oracle
and allowed the simulator in the proof to “program” the output of the random oracle to
the values that the ideal functionality outputs. The resulting protocol is secure against
passive adversaries.
We further used a technique proposed by Albrecht et al. [Alb+21] to make our OPRF
verifiable. We changed the garbled circuit such that the user now provides a commitment
to the key of the server. The server provides the key and the opening information to the
circuit. Only if the commitment correctly opens to the key of the server does the garbled
circuit output a pseudo-random value.
We implemented a prototype of our protocol and of the state-of-the-art OPRF protocol
2HashDH by [Jar+16; JKK14; JKX18]. We compared the two implementations to a simplified
implementation of the lattice-based OPRF by Albrecht et al. [Alb+21]. The experiments
showed that our construction is significantly faster than the lattice-based protocol. We
also found that our construction is not as efficient as the DLOG-based 2HashDH protocol.
Nonetheless, the efficiency is still in a reasonable range, with a running time of around
65 ms and around 250 kB of network traffic. This indicates that circuit-based OPRF protocols
might be a promising candidate for post-quantum secure OPRFs.
Future Work Our security proof holds only for passive, i.e., honest-but-curious, adversaries.
This is a common assumption in cryptography, but it arguably does not capture realistic
scenarios. Hence, a proof considering active adversaries is desirable.
We also expect that there is room for improvement concerning the choice of the circuit
that is calculated by the garbling scheme. In our work, we assumed the circuit to be a PRF.
But then the actual PRF that is calculated by the OPRF protocol is 𝑓𝑘 (𝑥) = 𝐻 2 (𝑥, F𝑘 (𝐻 1 (𝑥))),
where F is a PRF. One could suspect that modeling 𝐻 2 as a random oracle already introduces
enough entropy into the output of the function. We leave it to future work whether weaker
assumptions on the circuit are sufficient to still achieve a secure protocol.
Additionally, we believe that more experimental insight would be beneficial. It would be
good to also take network latency into account for the experiments. Concretely, this means
that the protocols should also be tested over a Local Area Network (LAN) and a WAN. It
would also be interesting to compare our protocol with the batched, related-key OPRF of
Kolesnikov et al. [Kol+16], as this protocol is also circuit-based and relies on OT.
Besides, we argued that circuit-based OPRFs are promising candidates for post-quantum
secure OPRFs. To strengthen this claim, it would be desirable to also implement our
protocol using a presumably post-quantum secure OT protocol, e.g., [PVW08]. This
would show whether the “price” for post-quantum security is still in a reasonable range.
Such a construction would need to be proven secure in the QROM.
Finally, it would be interesting to see whether our construction's need to program the
random oracle is inherent to UC-secure OPRFs. Hesse [Hes20] showed that aPAKE cannot
be achieved without a programmable random oracle. As Jarecki, Krawczyk, and Xu [JKX18]
carved out the close connection between aPAKE and OPRF, it is an intriguing question
whether one can, by connecting both works, show that UC-secure OPRFs require a
programmable random oracle.
Bibliography
[AB09] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Ap-
proach. First Edition. Cambridge University Press, 2009. isbn: 978-0-521-
42426-4.
[Alb+21] Martin R. Albrecht et al. “Round-Optimal Verifiable Oblivious Pseudorandom
Functions from Ideal Lattices”. In: PKC 2021: 24th International Conference
on Theory and Practice of Public Key Cryptography, Part II. Ed. by Juan Garay.
Vol. 12711. Lecture Notes in Computer Science. Virtual Event: Springer,
Heidelberg, Germany, May 2021, pp. 261289. doi: 10.1007/978-3-030-75248-4_10.
[Alk+16] Erdem Alkim et al. “Post-quantum Key Exchange - A New Hope”. In: USENIX
Security 2016: 25th USENIX Security Symposium. Ed. by Thorsten Holz and
Stefan Savage. Austin, TX, USA: USENIX Association, Aug. 2016, pp. 327
343.
[Amy+16] Matthew Amy et al. “Estimating the Cost of Generic Quantum Pre-image
Attacks on SHA-2 and SHA-3”. In: SAC 2016: 23rd Annual International
Workshop on Selected Areas in Cryptography. Ed. by Roberto Avanzi and
Howard M. Heys. Vol. 10532. Lecture Notes in Computer Science. St. Johns,
NL, Canada: Springer, Heidelberg, Germany, Aug. 2016, pp. 317337. doi:
10.1007/978-3-319-69453-5_18.
[Arc+] David Archer et al. Bristol Fashion MPC Circuits. url: https://homes.esat.
kuleuven.be/~nsmart/MPC/ (visited on 02/04/2022).
[Aru+19] Frank Arute et al. “Quantum Supremacy Using a Programmable Supercon-
ducting Processor”. In: Nature 574.7779 (7779 Oct. 2019), pp. 505510. issn:
1476-4687. doi: 10.1038/s41586-019-1666-5.
[Bas+21] Andrea Basso et al. “Cryptanalysis of an Oblivious PRF from Supersingular
Isogenies”. In: Advances in Cryptology ASIACRYPT 2021. Ed. by Mehdi
Tibouchi and Huaxiong Wang. Lecture Notes in Computer Science. Cham:
Springer International Publishing, 2021, pp. 160184. isbn: 978-3-030-92062-
3. doi: 10.1007/978-3-030-92062-3_6.
[Bau+16] Bela Bauer et al. “Hybrid Quantum-Classical Approach to Correlated Ma-
terials”. In: Physical Review X 6.3 (Sept. 21, 2016), p. 031045. doi: 10.1103/
PhysRevX.6.031045.
[Bel+08] Mira Belenkiy et al. Delegatable Anonymous Credentials. Cryptology ePrint
Archive, Report 2008/428. https://eprint.iacr.org/2008/428. 2008.
[Ben+11] Rikke Bendlin et al. “Semi-homomorphic Encryption and Multiparty Com-
putation”. In: Advances in Cryptology EUROCRYPT 2011. Ed. by Kenneth G.
Paterson. Vol. 6632. Lecture Notes in Computer Science. Tallinn, Estonia:
Springer, Heidelberg, Germany, May 2011, pp. 169188. doi: 10.1007/978-
3-642-20465-4_11.
[BG90] Mihir Bellare and Shafi Goldwasser. “New Paradigms for Digital Signa-
tures and Message Authentication Based on Non-Interactive Zero Knowl-
edge Proofs”. In: Advances in Cryptology CRYPTO89. Ed. by Gilles Bras-
sard. Vol. 435. Lecture Notes in Computer Science. Santa Barbara, CA, USA:
Springer, Heidelberg, Germany, Aug. 1990, pp. 194211. doi: 10.1007/0-
387-34805-0_19.
[BHR12] Mihir Bellare, Viet Tung Hoang, and Phillip Rogaway. “Foundations of gar-
bled circuits”. In: ACM CCS 2012: 19th Conference on Computer and Com-
munications Security. Ed. by Ting Yu, George Danezis, and Virgil D. Gligor.
Raleigh, NC, USA: ACM Press, Oct. 2012, pp. 784796. doi: 10.1145/2382196.
2382279.
[BKW20] Dan Boneh, Dmitry Kogan, and Katharine Woo. “Oblivious Pseudorandom
Functions from Isogenies”. In: Advances in Cryptology ASIACRYPT 2020,
Part II. Ed. by Shiho Moriai and Huaxiong Wang. Vol. 12492. Lecture Notes
in Computer Science. Daejeon, South Korea: Springer, Heidelberg, Germany,
Dec. 2020, pp. 520550. doi: 10.1007/978-3-030-64834-3_18.
[Blu+91] Manuel Blum et al. “Checking the Correctness of Memories”. In: 32nd Annual
Symposium on Foundations of Computer Science. San Juan, Puerto Rico: IEEE
Computer Society Press, Oct. 1991, pp. 9099. doi: 10.1109/SFCS.1991.185352.
[BMR90] Donald Beaver, Silvio Micali, and Phillip Rogaway. “The Round Complexity
of Secure Protocols (Extended Abstract)”. In: 22nd Annual ACM Symposium
on Theory of Computing. Baltimore, MD, USA: ACM Press, May 1990, pp. 503
513. doi: 10.1145/100216.100287.
[BNS19] Xavier Bonnetain, María Naya-Plasencia, and André Schrottenloher. “Quan-
tum Security Analysis of AES”. In: IACR Transactions on Symmetric Cryptol-
ogy 2019.2 (2019), pp. 5593. issn: 2519-173X. doi: 10.13154/tosc.v2019.
i2.55-93.
[Bon+11] Dan Boneh et al. “Random Oracles in a Quantum World”. In: Advances in
Cryptology ASIACRYPT 2011. Ed. by Dong Hoon Lee and Xiaoyun Wang.
Vol. 7073. Lecture Notes in Computer Science. Seoul, South Korea: Springer,
Heidelberg, Germany, Dec. 2011, pp. 4169. doi: 10.1007/978-3-642-25385-
0_3.
[Bri+10] Eric Brier et al. “Efficient Indifferentiable Hashing into Ordinary Elliptic
Curves”. In: Advances in Cryptology CRYPTO 2010. Ed. by Tal Rabin.
Vol. 6223. Lecture Notes in Computer Science. Santa Barbara, CA, USA:
Springer, Heidelberg, Germany, Aug. 2010, pp. 237254. doi: 10.1007/978-
3-642-14623-7_13.
[BS20] Dan Boneh and Victor Shoup. A Graduate Course in Applied Cryptogra-
phy. Jan. 2020. url: https://crypto.stanford.edu/~dabo/cryptobook/
BonehShoup_0_5.pdf.
[Büs+20] Niklas Büscher et al. “Secure Two-Party Computation in a Quantum World”.
In: ACNS 20: 18th International Conference on Applied Cryptography and
Network Security, Part I. Ed. by Mauro Conti et al. Vol. 12146. Lecture Notes
in Computer Science. Rome, Italy: Springer, Heidelberg, Germany, Oct. 2020,
pp. 461480. doi: 10.1007/978-3-030-57808-4_23.
[BV15] Zvika Brakerski and Vinod Vaikuntanathan. “Constrained Key-Homomorphic
PRFs from Standard Lattice Assumptions - Or: How to Secretly Embed a
Circuit in Your PRF”. In: TCC 2015: 12th Theory of Cryptography Conference,
Part II. Ed. by Yevgeniy Dodis and Jesper Buus Nielsen. Vol. 9015. Lecture
Notes in Computer Science. Warsaw, Poland: Springer, Heidelberg, Germany,
Mar. 2015, pp. 130. doi: 10.1007/978-3-662-46497-7_1.
[Can+02] Ran Canetti et al. “Universally composable two-party and multi-party secure
computation”. In: 34th Annual ACM Symposium on Theory of Computing.
Montréal, Québec, Canada: ACM Press, May 2002, pp. 494503. doi: 10.1145/509907.509980.
[Can00] Ran Canetti. Universally Composable Security: A New Paradigm for Cryp-
tographic Protocols. Cryptology ePrint Archive, Report 2000/067. https://eprint.iacr.org/2000/067. 2000.
[Can01] Ran Canetti. “Universally Composable Security: A New Paradigm for Crypto-
graphic Protocols”. In: 42nd Annual Symposium on Foundations of Computer
Science. Las Vegas, NV, USA: IEEE Computer Society Press, Oct. 2001, pp. 136
145. doi: 10.1109/SFCS.2001.959888.
[Can98] Ran Canetti. Security and Composition of Multi-party Cryptographic Protocols.
Cryptology ePrint Archive, Report 1998/018. https://eprint.iacr.org/
1998/018. 1998.
[CF01] Ran Canetti and Marc Fischlin. “Universally Composable Commitments”. In:
Advances in Cryptology CRYPTO 2001. Ed. by Joe Kilian. Vol. 2139. Lecture
Notes in Computer Science. Santa Barbara, CA, USA: Springer, Heidelberg,
Germany, Aug. 2001, pp. 1940. doi: 10.1007/3-540-44647-8_2.
[CFN94] Benny Chor, Amos Fiat, and Moni Naor. “Tracing Traitors”. In: Advances in
Cryptology CRYPTO94. Ed. by Yvo Desmedt. Vol. 839. Lecture Notes in
Computer Science. Santa Barbara, CA, USA: Springer, Heidelberg, Germany,
Aug. 1994, pp. 257270. doi: 10.1007/3-540-48658-5_25.
[CGH98] Ran Canetti, Oded Goldreich, and Shai Halevi. “The Random Oracle Method-
ology, Revisited (Preliminary Version)”. In: 30th Annual ACM Symposium on
Theory of Computing. Dallas, TX, USA: ACM Press, May 1998, pp. 209218.
doi: 10.1145/276698.276741.
[Cha83] David Chaum. “Blind Signature System”. In: Advances in Cryptology CRYPTO83.
Ed. by David Chaum. Santa Barbara, CA, USA: Plenum Press, New York,
USA, 1983, p. 153.
[Cho+13] Seung Geol Choi et al. “Efficient, Adaptively Secure, and Composable Obliv-
ious Transfer with a Single, Global CRS”. In: PKC 2013: 16th International
Conference on Theory and Practice of Public Key Cryptography. Ed. by Kaoru
Kurosawa and Goichiro Hanaoka. Vol. 7778. Lecture Notes in Computer
Science. Nara, Japan: Springer, Heidelberg, Germany, Feb. 2013, pp. 7388.
doi: 10.1007/978-3-642-36362-7_6.
[Dav+18] Alex Davidson et al. “Privacy Pass: Bypassing Internet Challenges Anony-
mously”. In: Proceedings on Privacy Enhancing Technologies 2018.3 (July 2018),
pp. 164180. doi: 10.1515/popets-2018-0026.
[Dav+22] Alex Davidson et al. Oblivious Pseudorandom Functions (OPRFs) Using Prime-
Order Groups. Internet-draft draft-irtf-cfrg-voprf-09. Internet Engineering
Task Force / Internet Engineering Task Force, Feb. 8, 2022. 63 pp. url: https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-voprf-09.
[DR02] Joan Daemen and Vincent Rijmen. The Design of Rijndael: AES - The Advanced
Encryption Standard. Information Security and Cryptography. Berlin Heidel-
berg: Springer-Verlag, 2002. isbn: 978-3-540-42580-9. doi: 10.1007/978-3-
662-04722-4.
[DY05] Yevgeniy Dodis and Aleksandr Yampolskiy. “A Verifiable Random Function
with Short Proofs and Keys”. In: PKC 2005: 8th International Workshop on
Theory and Practice in Public Key Cryptography. Ed. by Serge Vaudenay.
Vol. 3386. Lecture Notes in Computer Science. Les Diablerets, Switzerland:
Springer, Heidelberg, Germany, Jan. 2005, pp. 416431. doi: 10.1007/978-3-
540-30580-4_28.
[Faz+20] Faz-Hernandez et al. Internet Draft: Hashing to Elliptic Curves. Apr. 27, 2020.
url: https://tools.ietf.org/id/draft-irtf-cfrg-hash-to-curve-07.html (visited on 02/04/2022).
[Fre+05] Michael J. Freedman et al. “Keyword Search and Oblivious Pseudorandom
Functions”. In: TCC 2005: 2nd Theory of Cryptography Conference. Ed. by Joe
Kilian. Vol. 3378. Lecture Notes in Computer Science. Cambridge, MA, USA:
Springer, Heidelberg, Germany, Feb. 2005, pp. 303324. doi: 10.1007/978-
3-540-30576-7_17.
[Gam10] Erich Gamma, ed. Design Patterns: Elements of Reusable Object-Oriented Soft-
ware. 38. printing. Addison-Wesley Professional Computing Series. Boston,
Mass.: Addison-Wesley, 2010. XV, 395 S. : Ill., graph. Darst. isbn: 978-0-201-
63361-0.
[GK90] Oded Goldreich and Hugo Krawczyk. “On the Composition of Zero-Knowledge
Proof Systems”. In: Automata, Languages and Programming. Ed. by Michael S.
Paterson. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer,
1990, pp. 268282. isbn: 978-3-540-47159-2. doi: 10.1007/BFb0032038.
[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. “How to Play any Mental
Game or A Completeness Theorem for Protocols with Honest Majority”.
In: 19th Annual ACM Symposium on Theory of Computing. Ed. by Alfred
Aho. New York City, NY, USA: ACM Press, May 1987, pp. 218229. doi:
10.1145/28395.28420.
[Hes20] Julia Hesse. “Separating Symmetric and Asymmetric Password-Authenticated
Key Exchange”. In: SCN 20: 12th International Conference on Security in
Communication Networks. Ed. by Clemente Galdi and Vladimir Kolesnikov.
Vol. 12238. Lecture Notes in Computer Science. Amalfi, Italy: Springer, Hei-
delberg, Germany, Sept. 2020, pp. 579599. doi: 10.1007/978-3-030-57990-
6_29.
[Ish+03] Yuval Ishai et al. “Extending Oblivious Transfers Efficiently”. In: Advances
in Cryptology CRYPTO 2003. Ed. by Dan Boneh. Vol. 2729. Lecture Notes in
Computer Science. Santa Barbara, CA, USA: Springer, Heidelberg, Germany,
Aug. 2003, pp. 145161. doi: 10.1007/978-3-540-45146-4_9.
[Jar+16] Stanislaw Jarecki et al. Highly-Efficient and Composable Password-Protected
Secret Sharing (Or: How to Protect Your Bitcoin Wallet Online). Cryptology
ePrint Archive, Report 2016/144. https://eprint.iacr.org/2016/144.
2016.
[JKK14] Stanislaw Jarecki, Aggelos Kiayias, and Hugo Krawczyk. “Round-Optimal
Password-Protected Secret Sharing and T-PAKE in the Password-Only Model”.
In: Advances in Cryptology ASIACRYPT 2014, Part II. Ed. by Palash Sarkar
and Tetsu Iwata. Vol. 8874. Lecture Notes in Computer Science. Kaoshiung,
Taiwan, R.O.C.: Springer, Heidelberg, Germany, Dec. 2014, pp. 233253. doi:
10.1007/978-3-662-45608-8_13.
[JKX18] Stanislaw Jarecki, Hugo Krawczyk, and Jiayu Xu. “OPAQUE: An Asymmetric
PAKE Protocol Secure Against Pre-computation Attacks”. In: Advances in
Cryptology EUROCRYPT 2018, Part III. Ed. by Jesper Buus Nielsen and
Vincent Rijmen. Vol. 10822. Lecture Notes in Computer Science. Tel Aviv,
Israel: Springer, Heidelberg, Germany, Apr. 2018, pp. 456486. doi: 10.1007/
978-3-319-78372-7_15.
[JL09] Stanislaw Jarecki and Xiaomin Liu. “Efficient Oblivious Pseudorandom Func-
tion with Applications to Adaptive OT and Secure Computation of Set Inter-
section”. In: TCC 2009: 6th Theory of Cryptography Conference. Ed. by Omer
Reingold. Vol. 5444. Lecture Notes in Computer Science. Springer, Heidelberg,
Germany, Mar. 2009, pp. 577594. doi: 10.1007/978-3-642-00457-5_34.
[KBR13] Sriram Keelveedhi, Mihir Bellare, and Thomas Ristenpart. “DupLESS: Server-
Aided Encryption for Deduplicated Storage”. In: USENIX Security 2013: 22nd
USENIX Security Symposium. Ed. by Samuel T. King. Washington, DC, USA:
USENIX Association, Aug. 2013, pp. 179194.
[KK13] Vladimir Kolesnikov and Ranjit Kumaresan. “Improved OT Extension for
Transferring Short Secrets”. In: Advances in Cryptology CRYPTO 2013,
Part II. Ed. by Ran Canetti and Juan A. Garay. Vol. 8043. Lecture Notes in
Computer Science. Santa Barbara, CA, USA: Springer, Heidelberg, Germany,
Aug. 2013, pp. 5470. doi: 10.1007/978-3-642-40084-1_4.
[KL15] Jonathan Katz and Yehuda Lindell. Introduction to Modern Cryptography. Sec-
ond Edition. Vol. Chapman & Hall/CRC cryptography and network security.
Boca Raton: CRC Press, 2015. isbn: 978-1-4665-7027-6.
[Kol+16] Vladimir Kolesnikov et al. “Efficient Batched Oblivious PRF with Applications
to Private Set Intersection”. In: ACM CCS 2016: 23rd Conference on Computer
and Communications Security. Ed. by Edgar R. Weippl et al. Vienna, Austria:
ACM Press, Oct. 2016, pp. 818829. doi: 10.1145/2976749.2978381.
[KS08] Vladimir Kolesnikov and Thomas Schneider. “Improved Garbled Circuit: Free
XOR Gates and Applications”. In: ICALP 2008: 35th International Colloquium
on Automata, Languages and Programming, Part II. Ed. by Luca Aceto et al.
Vol. 5126. Lecture Notes in Computer Science. Reykjavik, Iceland: Springer,
Heidelberg, Germany, July 2008, pp. 486498. doi: 10.1007/978-3-540-70583-3_40.
[KsS12] Benjamin Kreuter, abhi shelat, and Chih-Hao Shen. “Billion-Gate Secure
Computation with Malicious Adversaries”. In: USENIX Security 2012: 21st
USENIX Security Symposium. Ed. by Tadayoshi Kohno. Bellevue, WA, USA:
USENIX Association, Aug. 2012, pp. 285300.
[LP07] Yehuda Lindell and Benny Pinkas. “An Efficient Protocol for Secure Two-
Party Computation in the Presence of Malicious Adversaries”. In: Advances
in Cryptology EUROCRYPT 2007. Ed. by Moni Naor. Vol. 4515. Lecture Notes
in Computer Science. Barcelona, Spain: Springer, Heidelberg, Germany, May
2007, pp. 5278. doi: 10.1007/978-3-540-72540-4_4.
[Mal+04] Dahlia Malkhi et al. “Fairplay - Secure Two-Party Computation System”. In:
USENIX Security 2004: 13th USENIX Security Symposium. Ed. by Matt Blaze.
San Diego, CA, USA: USENIX Association, Aug. 2004, pp. 287302.
[MF06] Payman Mohassel and Matthew Franklin. “Efficiency Tradeoffs for Mali-
cious Two-Party Computation”. In: PKC 2006: 9th International Conference
on Theory and Practice of Public Key Cryptography. Ed. by Moti Yung et al.
Vol. 3958. Lecture Notes in Computer Science. New York, NY, USA: Springer,
Heidelberg, Germany, Apr. 2006, pp. 458473. doi: 10.1007/11745853_30.
[Mos18] Michele Mosca. “Cybersecurity in an Era with Quantum Computers: Will
We Be Ready?” In: IEEE Security Privacy 16.5 (Sept. 2018), pp. 3841. issn:
1558-4046. doi: 10.1109/MSP.2018.3761723.
[MRH04] Ueli M. Maurer, Renato Renner, and Clemens Holenstein. “Indifferentiability,
Impossibility Results on Reductions, and Applications to the Random Oracle
Methodology”. In: TCC 2004: 1st Theory of Cryptography Conference. Ed. by
Moni Naor. Vol. 2951. Lecture Notes in Computer Science. Cambridge, MA,
USA: Springer, Heidelberg, Germany, Feb. 2004, pp. 2139. doi: 10.1007/978-
3-540-24638-1_2.
[Nie+12] Jesper Buus Nielsen et al. “A New Approach to Practical Active-Secure
Two-Party Computation”. In: Advances in Cryptology CRYPTO 2012. Ed. by
Reihaneh Safavi-Naini and Ran Canetti. Vol. 7417. Lecture Notes in Computer
Science. Santa Barbara, CA, USA: Springer, Heidelberg, Germany, Aug. 2012,
pp. 681700. doi: 10.1007/978-3-642-32009-5_40.
[NP01] Moni Naor and Benny Pinkas. “Efficient Oblivious Transfer Protocols”. In:
12th Annual ACM-SIAM Symposium on Discrete Algorithms. Ed. by S. Rao
Kosaraju. Washington, DC, USA: ACM-SIAM, Jan. 2001, pp. 448457.
[NPS99] Moni Naor, Benny Pinkas, and Reuban Sumner. “Privacy Preserving Auc-
tions and Mechanism Design”. In: Proceedings of the 1st ACM Conference
on Electronic Commerce. EC 99. New York, NY, USA: Association for Com-
puting Machinery, Nov. 1, 1999, pp. 129139. isbn: 978-1-58113-176-5. doi:
10.1145/336992.337028.
[NR04] Moni Naor and Omer Reingold. “Number-theoretic constructions of efficient
pseudo-random functions”. In: Journal of the ACM 51.2 (2004), pp. 231262.
[OPENSSL] OPENSSL. Copyright © 1999-2021 The OpenSSL Project Authors. All Rights
Reserved. url: https://www.openssl.org/ (visited on 02/04/2022).
[Pai99] Pascal Paillier. “Public-Key Cryptosystems Based on Composite Degree
Residuosity Classes”. In: Advances in Cryptology EUROCRYPT99. Ed. by
Jacques Stern. Vol. 1592. Lecture Notes in Computer Science. Prague, Czech
Republic: Springer, Heidelberg, Germany, May 1999, pp. 223238. doi: 10.1007/3-540-48910-X_16.
[Pin+09] Benny Pinkas et al. “Secure Two-Party Computation Is Practical”. In: Ad-
vances in Cryptology ASIACRYPT 2009. Ed. by Mitsuru Matsui. Vol. 5912.
Lecture Notes in Computer Science. Tokyo, Japan: Springer, Heidelberg,
Germany, Dec. 2009, pp. 250267. doi: 10.1007/978-3-642-10366-7_15.
[PVW08] Chris Peikert, Vinod Vaikuntanathan, and Brent Waters. “A Framework for
Efficient and Composable Oblivious Transfer”. In: Advances in Cryptology
CRYPTO 2008. Ed. by David Wagner. Vol. 5157. Lecture Notes in Computer
Science. Santa Barbara, CA, USA: Springer, Heidelberg, Germany, Aug. 2008,
pp. 554571. doi: 10.1007/978-3-540-85174-5_31.
[Rab05] Michael O. Rabin. How To Exchange Secrets with Oblivious Transfer. Cryptol-
ogy ePrint Archive, Report 2005/187. https://eprint.iacr.org/2005/187.
2005.
[RR21] Mike Rosulek and Lawrence Roy. “Three Halves Make a Whole? Beating the
Half-Gates Lower Bound for Garbled Circuits”. In: Advances in Cryptology
CRYPTO 2021, Part I. Ed. by Tal Malkin and Chris Peikert. Vol. 12825. Lecture
Notes in Computer Science. Virtual Event: Springer, Heidelberg, Germany,
Aug. 2021, pp. 94124. doi: 10.1007/978-3-030-84242-0_5.
[Sho94] Peter W. Shor. “Algorithms for Quantum Computation: Discrete Logarithms
and Factoring”. In: 35th Annual Symposium on Foundations of Computer
Science. Santa Fe, NM, USA: IEEE Computer Society Press, Nov. 1994, pp. 124
134. doi: 10.1109/SFCS.1994.365700.
[Sta19] National Institute of Standards and Technology. Recommendations for Dis-
crete Logarithm-Based Cryptography: Elliptic Curve Domain Parameters. Draft
Special Publication (SP) 800-186, Comments Due: January 29, 2020 (public
comment period is CLOSED). Washington, D.C.: U.S. Department of Com-
merce, Oct. 2019. url: https://doi.org/10.6028/NIST.SP.800-186-draft.
[Ste05] William Stein. Sage Mathematical Software System. Version 9.5 released 2022-
01-30. 2005. url: https://www.sagemath.org/.
[Ula07] Maciej Ulas. Rational Points on Certain Hyperelliptic Curves over Finite Fields.
June 11, 2007. arXiv: 0706.1448 [math]. url: http://arxiv.org/abs/0706.
1448 (visited on 01/24/2022).
[WB19] Riad S. Wahby and Dan Boneh. “Fast and simple constant-time hashing
to the BLS12-381 elliptic curve”. In: IACR Transactions on Cryptographic
Hardware and Embedded Systems 2019.4 (2019). https://tches.iacr.org/
index.php/TCHES/article/view/8348, pp. 154179. issn: 2569-2925. doi:
10.13154/tches.v2019.i4.154-179.
[WMK16] Xiao Wang, Alex J. Malozemoff, and Jonathan Katz. EMP-Toolkit: Efficient
MultiParty Computation Toolkit. emp-toolkit. 2016. url: https://github.
com/emp-toolkit/emp-tool (visited on 07/20/2021).
[WRK17] Xiao Wang, Samuel Ranellucci, and Jonathan Katz. “Authenticated Garbling
and Efficient Maliciously Secure Two-Party Computation”. In: ACM CCS
2017: 24th Conference on Computer and Communications Security. Ed. by
Bhavani M. Thuraisingham et al. Dallas, TX, USA: ACM Press, Oct. 2017,
pp. 2137. doi: 10.1145/3133956.3134053.
[Yan+19] Rupeng Yang et al. “Efficient Lattice-Based Zero-Knowledge Arguments
with Standard Soundness: Construction and Applications”. In: Advances in
Cryptology CRYPTO 2019, Part I. Ed. by Alexandra Boldyreva and Daniele
Micciancio. Vol. 11692. Lecture Notes in Computer Science. Santa Barbara,
CA, USA: Springer, Heidelberg, Germany, Aug. 2019, pp. 147175. doi: 10.1007/978-3-030-26948-7_6.
[Yao86] Andrew Chi-Chih Yao. “How to Generate and Exchange Secrets (Extended
Abstract)”. In: 27th Annual Symposium on Foundations of Computer Science.
Toronto, Ontario, Canada: IEEE Computer Society Press, Oct. 1986, pp. 162
167. doi: 10.1109/SFCS.1986.25.
[ZRE15] Samee Zahur, Mike Rosulek, and David Evans. “Two Halves Make a Whole -
Reducing Data Transfer in Garbled Circuits Using Half Gates”. In: Advances
in Cryptology EUROCRYPT 2015, Part II. Ed. by Elisabeth Oswald and
Marc Fischlin. Vol. 9057. Lecture Notes in Computer Science. Sofia, Bulgaria:
Springer, Heidelberg, Germany, Apr. 2015, pp. 220250. doi: 10.1007/978-
3-662-46803-6_8.
A. Appendix
In this chapter we present additional material for our thesis.
A.1. Implementing the Hash to Curve Algorithm
The 2HashDH construction described in Section 5.2 assumes the existence of a hash
function 𝐻 1 : {0, 1}∗ → G, where G is the group in which the protocol operates and thus
in which calculation of discrete logarithms is hard. In our case, it will be the group of
points on the standardized elliptic curve NIST P-256 [Sta19]. The naive approach would
be to hash the input string to a bit string of a predefined length 𝑙 ∈ N and then to map
the output bit string to a point on the elliptic curve. The group G has prime order and
thus, the mapping 𝑚 : {0, 1}^𝑙 → G cannot be bijective for 𝑙 > 1. One could interpret the
bit string as an integer and reduce it modulo the group order. Unfortunately, this is not
sufficient. The proof of security for 2HashDH modeled 𝐻 1 as a random oracle. Thus, the
output distribution must be “close to” uniformly random. But the described naive approach
yields a skewed distribution.
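The skew of the naive "reduce modulo the group order" approach is easy to see on toy numbers (the parameters below are purely illustrative): reducing uniform 4-bit strings modulo a group order of 5 makes the residue 0 strictly more likely than the others.

```python
from collections import Counter

l, order = 4, 5  # toy parameters: 4-bit strings, "group order" 5
counts = Counter(x % order for x in range(2 ** l))
print(sorted(counts.items()))  # [(0, 4), (1, 3), (2, 3), (3, 3), (4, 3)]
```

The same bias occurs for any 𝑙 with 2^𝑙 not a multiple of the group order; it merely becomes small (but not zero) when 𝑙 is sufficiently larger than the bit length of the order.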
Instead, a proper hash to curve algorithm must be employed. The Internet Engineering
Task Force (IETF) proposed several algorithms in the internet draft [Faz+20]. We depict
the algorithm recommended for the NIST P-256 curve in Figure A.1.
First, the input message 𝑚𝑠𝑔 is hashed using SHA256 to two field elements 𝑢 [0] and
𝑢 [1] of the underlying field of the elliptic curve. Each of these field elements is mapped to
a point on the curve, using the map_to_curve algorithm. Then the two resulting points
are added, using the addition of G. In general, one must use a clear_cofactor algorithm
to make sure that the resulting point lies in a subgroup of prime order. However, as the
curve NIST P-256 already has prime order, this process can be left out in our case. We will
describe the subroutines below.
hash_to_curve (𝑚𝑠𝑔)
  𝑢 ≔ hash_to_field (𝑚𝑠𝑔, 2)
  𝑄0 ≔ map_to_curve (𝑢 [0])
  𝑄1 ≔ map_to_curve (𝑢 [1])
  𝑅 ≔ 𝑄0 + 𝑄1
  𝑃 ≔ clear_cofactor (𝑅)
  return 𝑃
Figure A.1.: Hash to Curve Algorithm.
SSWU(𝑢, 𝐴, 𝐵, 𝑍)
  𝑡𝑣1 ≔ inv0 (𝑍^2 · 𝑢^4 + 𝑍 · 𝑢^2)
  𝑥1 ≔ (𝐵/𝐴) · (1 + 𝑡𝑣1)
  if 𝑡𝑣1 = 0
    set 𝑥1 ≔ 𝐵/(𝑍 · 𝐴)
  𝑔𝑥1 ≔ 𝑥1^3 + 𝐴 · 𝑥1 + 𝐵
  𝑥2 ≔ 𝑍 · 𝑢^2 · 𝑥1
  𝑔𝑥2 ≔ 𝑥2^3 + 𝐴 · 𝑥2 + 𝐵
  if is_square (𝑔𝑥1)
    set 𝑥 ≔ 𝑥1 and 𝑦 ≔ sqrt (𝑔𝑥1)
  else
    set 𝑥 ≔ 𝑥2 and 𝑦 ≔ sqrt (𝑔𝑥2)
  if sgn0 (𝑢) ≠ sgn0 (𝑦)
    set 𝑦 ≔ 𝑦
  return (𝑥, 𝑦)
Figure A.2.: Simplified Shallue-van de Woestijne-Ulas Mapping.
Hash to Field The requirement on the hash to field algorithm is to be indifferentiable
from a random oracle. This is a different notion than indistinguishability; see [MRH04]. The
message is expanded to a sufficiently long string of bits by using several calls to SHA256.
The resulting bit string is divided into smaller bit strings, one for each required field
element. Next, the bit strings are interpreted as integers and reduced modulo the prime
order of the field.
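A minimal sketch of this idea in Python follows. Note that the actual draft [Faz+20] specifies a more involved expansion (expand_message_xmd with domain separation); the counter-based SHA256 expansion and the 48-byte chunk length below are illustrative assumptions, not the draft's exact construction. The modulus is the field prime of NIST P-256.

```python
import hashlib

P256_PRIME = 2**256 - 2**224 + 2**192 + 2**96 - 1  # field prime of NIST P-256

def hash_to_field(msg: bytes, count: int):
    """Hash msg to `count` field elements by expanding with SHA256 and reducing mod p."""
    # 48 bytes per element (16 bytes more than the 32-byte prime) keeps the mod-p bias negligible.
    n_blocks = (count * 48 + 31) // 32
    expanded = b"".join(
        hashlib.sha256(msg + i.to_bytes(4, "big")).digest() for i in range(n_blocks)
    )
    return [
        int.from_bytes(expanded[j * 48:(j + 1) * 48], "big") % P256_PRIME
        for j in range(count)
    ]

u = hash_to_field(b"example input", 2)
assert len(u) == 2 and all(0 <= e < P256_PRIME for e in u)
```

The extra 16 bytes per element are what distinguishes this from the naive approach criticized above: reducing a value much longer than the modulus makes the residual bias exponentially small.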
Map to Curve The internet draft [Faz+20] recommends using the algorithm shown in
Figure A.2. This algorithm maps a field element 𝑢 ∈ F to a point 𝑃 = (𝑥, 𝑦) ∈ G, where 𝑃
is a point on a Weierstrass curve with equation
𝑌 2 = 𝑋 3 + 𝐴𝑋 + 𝐵,
with 𝐴 ≠ 0, 𝐵 ≠ 0. The algorithm is called Simplified Shallue-van de Woestijne-Ulas
mapping. It was described by Brier et al. [Bri+10] and Ulas [Ula07] and enhanced by
Wahby and Boneh [WB19]. The value 𝑍 ∈ F is a constant that depends on the curve. The
function inv0 (𝑒) calculates the multiplicative inverse of 𝑒 ∈ F or outputs 0 if 𝑒 = 0. The
function is_square (𝑒) checks if 𝑒 is a square in F. If an element 𝑒 ∈ F is square, the square
root is calculated by the function sqrt (𝑒). The function sgn0 (𝑒) returns 1 if 𝑒 is positive
or 𝑒 is 0. Else it returns 0.
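The mapping can be instantiated in Python with the published P-256 parameters (𝐴 = −3, 𝐵 from the curve standard, 𝑍 = −10 as recommended in [Faz+20]). This is an illustrative sketch, not constant-time code; the helper names follow Figure A.2.

```python
# NIST P-256 parameters for the Simplified SWU mapping.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = p - 3
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
Z = p - 10  # a non-square, as required by the mapping

def inv0(e):  # multiplicative inverse, with inv0(0) = 0
    return pow(e, p - 2, p)

def is_square(e):
    return e == 0 or pow(e, (p - 1) // 2, p) == 1

def sqrt(e):  # valid because p = 3 (mod 4)
    return pow(e, (p + 1) // 4, p)

def sgn0(e):
    return e % 2

def sswu(u):
    tv1 = inv0((Z * Z * pow(u, 4, p) + Z * u * u) % p)
    x1 = (-B * inv0(A) * (1 + tv1)) % p
    if tv1 == 0:                      # exceptional case
        x1 = (B * inv0(Z * A % p)) % p
    gx1 = (pow(x1, 3, p) + A * x1 + B) % p
    x2 = (Z * u * u % p) * x1 % p
    gx2 = (pow(x2, 3, p) + A * x2 + B) % p
    if is_square(gx1):
        x, y = x1, sqrt(gx1)
    else:                             # gx2 is a square whenever gx1 is not
        x, y = x2, sqrt(gx2)
    if sgn0(u) != sgn0(y):
        y = (-y) % p
    return x, y
```

The core guarantee of the mapping is visible in the branch structure: because 𝑍 is a non-square, at least one of 𝑔𝑥1, 𝑔𝑥2 is always a square, so every input yields a valid curve point.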
A.2. Advanced Encryption Standard
Advanced Encryption Standard is by far the most widely used block cipher. It was
standardized by the NIST as the successor of the Data Encryption Standard (DES). The algorithm
is also called the Rijndael algorithm and was proposed by Daemen and Rijmen [DR02].
It is a block cipher and works on blocks of size 128 bits. Though Rijndael can work with
different key lengths, we will present the algorithm only for a key length of 256 bits.
The algorithm performs 14 rounds to encrypt one block of data. At a high level, the
algorithm proceeds in the following order. We will explain each step in detail in
the coming sections.
• Key expansion (generate round keys from original key)
• Add round key
• For round 1 to round 13:
 Sub bytes
 Shift rows
 Mix columns
 Add round key
• Sub bytes
• Shift rows
• Add round key
A.2.1. Key expansion
The original key 𝑘 ∈ {0, 1}^256 is used to generate round keys for the 15 Add Round Key
executions: one for each round plus the initial Add Round Key. First, the original key
is organized in words of 32 bits 𝑊0, . . . ,𝑊7. These words form the first two round keys.
The following words are defined recursively. For 𝑖 = 8, . . . , 59, we let:
𝑊𝑖 ≔ 𝑊𝑖−8 ⊕ 𝑆(𝑊𝑖−1 ⋘ 8) ⊕ const(𝑖),  if 𝑖 ≡ 0 mod 8
𝑊𝑖 ≔ 𝑊𝑖−8 ⊕ 𝑆(𝑊𝑖−1),                  if 𝑖 ≡ 4 mod 8
𝑊𝑖 ≔ 𝑊𝑖−8 ⊕ 𝑊𝑖−1,                      else
The value of the constant const(𝑖) depends on 𝑖, by ⋘ 8 we denote rotating a word by
8 bits to the left, and 𝑆 is a so-called S-box. This S-box substitutes the bytes of a word.
We will explain it in Appendix A.2.3.
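As an illustration, the following Python sketch expands a 256-bit key according to the recurrence above. The S-box is computed on the fly from the field inverse and the affine map described in Appendix A.2.3 instead of the usual lookup table, and const(𝑖) is realized by the round-constant list RCON; the example key is the AES-256 key from the key-expansion example in FIPS-197, Appendix A.3.

```python
def gmul(a, b):
    """Multiplication in F_2[X]/(X^8 + X^4 + X^3 + X + 1)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce by the AES polynomial
        b >>= 1
    return r

def sbox(x):
    """AES S-box: field inverse (0 maps to 0), then the affine map."""
    inv = next(y for y in range(256) if gmul(x, y) == 1) if x else 0
    out = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> (i + 4) % 8) ^ (inv >> (i + 5) % 8)
               ^ (inv >> (i + 6) % 8) ^ (inv >> (i + 7) % 8)
               ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40]

def key_expansion_256(key):
    """Expand a 32-byte key into the 60 words W_0, ..., W_59."""
    W = [list(key[4 * i:4 * i + 4]) for i in range(8)]
    for i in range(8, 60):
        t = list(W[i - 1])
        if i % 8 == 0:
            t = t[1:] + t[:1]          # rotate 8 bits to the left
            t = [sbox(b) for b in t]   # S-box on every byte
            t[0] ^= RCON[i // 8 - 1]   # round constant
        elif i % 8 == 4:
            t = [sbox(b) for b in t]   # S-box only
        W.append([a ^ b for a, b in zip(W[i - 8], t)])
    return W

key = bytes.fromhex("603deb1015ca71be2b73aef0857d7781"
                    "1f352c073b6108d72d9810a30914dff4")
W = key_expansion_256(key)
```

The brute-force field inversion inside sbox is deliberately naive; a real implementation would precompute the 256-entry table once.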
A.2.2. Add Round Key
In this step, each byte of the current state is combined via bitwise xor with the correspond-
ing byte of the round key. Note that the round keys are of the same size as the states. This
is the only step, where the key directly influences the result.
A.2.3. Sub Bytes
This step consist of replacing every byte by another byte, using the so-called S-box. This
S-box describes a substitution and ensures that the algorithm is non-linear. First, the
respective byte is interpreted as an element 𝑥 ∈ F28 = F2 [𝑋 ]/(𝑋 8 + 𝑋 4 + 𝑋 3 + 𝑋 + 1). If
𝑥 ≠ 0, replace 𝑥 by 𝑥 0 B 𝑥 1 . Second, an affine transformation is applied to get the output
𝑦 = 𝐴𝑥 + 𝑏, for constants 𝐴 ∈ {0, 1}8×8, 𝑏 ∈ {0, 1}8 , see [DR02] for the exact values of 𝐴
and 𝑏.
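A direct, table-free Python sketch of this substitution makes the two steps explicit. The affine constants are the ones from [DR02], written out here as a bit-level loop; the brute-force inversion is slow but unambiguous.

```python
def gmul(a, b):
    """Multiplication in F_2[X]/(X^8 + X^4 + X^3 + X + 1)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce by the AES polynomial
        b >>= 1
    return r

def sbox(x):
    # Step 1: multiplicative inverse in F_{2^8}; 0 is mapped to 0.
    inv = next(y for y in range(256) if gmul(x, y) == 1) if x else 0
    # Step 2: affine transformation y = A * inv + b with b = 0x63.
    out = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> (i + 4) % 8) ^ (inv >> (i + 5) % 8)
               ^ (inv >> (i + 6) % 8) ^ (inv >> (i + 7) % 8)
               ^ (0x63 >> i)) & 1
        out |= bit << i
    return out
```

Two well-known reference values are 𝑆(0x00) = 0x63 (the inverse step maps 0 to 0, so only the constant 𝑏 remains) and 𝑆(0x53) = 0xED, the example used in [DR02].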
A.2.4. Shift Rows
For this and the next step, the bytes of the current state are arranged in a 4 × 4 matrix.
Then, the each row of the matrix is shifted by a certain offset. The first row is not shifted.
The second row is shifted by one column to the left. The third row is shifted by two
columns to the left and finally, the fourth row is shifted by three columns to the left.
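A minimal Python sketch of this step, representing the state as a list of rows:

```python
def shift_rows(state):
    """state[r][c]: row r is cyclically rotated r columns to the left."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

# Example state, written row by row.
state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
shifted = shift_rows(state)
```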
A.2.5. Mix Columns
This step is performed in all rounds except the final round. Again, we arrange the
bytes of the current state as a 4 × 4 matrix. A column is now interpreted as a vector
(𝑎0, 𝑎1, 𝑎2, 𝑎3)^⊤ ∈ F_{2^8}^4 and multiplied by a constant matrix as follows:
(𝑏0)   (2 3 1 1)   (𝑎0)
(𝑏1) = (1 2 3 1) · (𝑎1)
(𝑏2)   (1 1 2 3)   (𝑎2)
(𝑏3)   (3 1 1 2)   (𝑎3)
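The matrix-vector product takes place in F_{2^8}, so a Python sketch needs a carry-less field multiplication; the helper name gmul is our own.

```python
def gmul(a, b):
    """Multiplication in the AES field F_2[X]/(X^8 + X^4 + X^3 + X + 1)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce by the AES polynomial
        b >>= 1
    return r

# The constant MixColumns matrix.
M = [[2, 3, 1, 1],
     [1, 2, 3, 1],
     [1, 1, 2, 3],
     [3, 1, 1, 2]]

def mix_column(a):
    """Multiply one state column (a_0, a_1, a_2, a_3) by M over F_{2^8}."""
    return [gmul(M[r][0], a[0]) ^ gmul(M[r][1], a[1])
            ^ gmul(M[r][2], a[2]) ^ gmul(M[r][3], a[3]) for r in range(4)]
```

The column (db, 13, 53, 45) with image (8e, 4d, a1, bc) is a widely used MixColumns test vector.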
A.3. Naor-Pinkas-OT
In this section we describe the OT protocol introduced by Naor and Pinkas [NP01]. Let
G = ⟨𝑔⟩ be a group of prime order 𝑞 for which the Computational Diffie-Hellman (CDH)
assumption holds. Let 𝐻 : G → {0, 1}^𝜆 be a random oracle. In the protocol, a sender
𝑆 interacts with a receiver 𝑅. The sender gets as input two messages 𝑀0, 𝑀1 ∈ {0, 1}^𝜆.
The receiver gets as input 𝜎 ∈ {0, 1} and outputs 𝑀𝜎 ∈ {0, 1}^𝜆. The protocol proceeds as
follows:
• Initially, 𝑆 chooses a random value 𝐶 ∈ G (it is important that 𝑅 does not know the
discrete logarithm of 𝐶).
• 𝑅 chooses 𝑘 ← Z𝑞 uniformly at random and sets pk𝜎 ≔ 𝑔^𝑘 and pk_{1−𝜎} ≔ 𝐶 · (pk𝜎)^{−1}.
𝑅 sends pk0 to 𝑆.
• 𝑆 calculates pk1 = 𝐶 · (pk0)^{−1}. 𝑆 chooses 𝑟0, 𝑟1 ← Z𝑞 uniformly at random. 𝑆 sets
𝐸0 = (𝑔^{𝑟0}, 𝐻(pk0^{𝑟0}) ⊕ 𝑀0) and 𝐸1 = (𝑔^{𝑟1}, 𝐻(pk1^{𝑟1}) ⊕ 𝑀1). 𝑆 sends (𝐸0, 𝐸1) to 𝑅.
• 𝑅 computes 𝑀𝜎 = 𝐻((𝐸𝜎[0])^𝑘) ⊕ 𝐸𝜎[1], where 𝐸𝜎[0] denotes the first component of 𝐸𝜎
and 𝐸𝜎[1] denotes the second component.
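The message flow can be condensed into a Python sketch. The toy Schnorr group (𝑝 = 2039, 𝑞 = 1019) is far too small for CDH to hold and is chosen only so the example runs instantly; all function names are ours, and for clarity the rounds are collapsed into one function rather than separate parties.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the
# order-q subgroup.  A real deployment uses a large group.
q, p = 1019, 2039
g = 4

def H(e, n):
    """Random-oracle hash of a group element to n bytes."""
    return hashlib.sha256(e.to_bytes(8, "big")).digest()[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def naor_pinkas(M0, M1, sigma):
    # Sender: random C whose discrete log stays unknown to the receiver.
    C = pow(g, secrets.randbelow(q - 1) + 1, p)
    # Receiver: pk_sigma = g^k and pk_{1-sigma} = C * pk_sigma^{-1}.
    k = secrets.randbelow(q - 1) + 1
    pk = [0, 0]
    pk[sigma] = pow(g, k, p)
    pk[1 - sigma] = C * pow(pk[sigma], -1, p) % p
    # Sender: reconstructs pk_1 from pk_0 and encrypts both messages.
    pk0 = pk[0]
    pk1 = C * pow(pk0, -1, p) % p
    r0, r1 = secrets.randbelow(q), secrets.randbelow(q)
    E0 = (pow(g, r0, p), xor(H(pow(pk0, r0, p), len(M0)), M0))
    E1 = (pow(g, r1, p), xor(H(pow(pk1, r1, p), len(M1)), M1))
    # Receiver: decrypts the chosen ciphertext with k.
    E = (E0, E1)[sigma]
    return xor(H(pow(E[0], k, p), len(E[1])), E[1])
```

Correctness follows from (𝐸𝜎[0])^𝑘 = 𝑔^{𝑟𝜎·𝑘} = pk𝜎^{𝑟𝜎}, so the receiver recovers exactly the pad used by the sender for index 𝜎.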
Security. Intuitively, 𝑅's privacy comes from the fact that pk0 is independent of 𝜎. 𝑆's
privacy comes from the fact that if 𝑅 could calculate the discrete logarithms of both pk0
and pk1, 𝑅 could also calculate the discrete logarithm of 𝐶. As 𝐻 is a random oracle, an
adversary that wanted to decrypt both 𝑀0 and 𝑀1 would need to calculate both pk0^{𝑟0} and
pk1^{𝑟1}. But as CDH holds in the group, an adversary has at most negligible advantage
in calculating both of these values, as (𝑔, 𝑔^{𝑟𝑏}, pk𝑏, pk𝑏^{𝑟𝑏}) is a CDH tuple if the adversary
does not know the discrete logarithm of pk𝑏.
A.4. Actively Secure Garbled Circuits
Yaos garbled circuits as described above are only secure if the garbling party behaves
honest-but-curious, i.e., against passive adversaries. This is evident, as a cheating garbler
could simply garble a different circuit than the evaluator expects. Think for instance of a
circuit that outputs the evaluators input. As the garbling scheme offers privacy, the
evaluating party cannot tell from 𝑌 = Ev(𝐹, 𝑋) that the actually encoded output 𝑌 is its
own input (without having the decoding information 𝑑).
Therefore, the evaluator must ensure that the garbled circuit he receives is indeed the
circuit he expects, e.g., the one specified by the protocol.
A.4.1. Cut-and-Choose
One of the first techniques in the literature that ensured the garbling of the right
circuit was “cut-and-choose”. The technique had been used in other contexts before and was
first applied to garbled circuits by Lindell and Pinkas [LP07]. The core idea is the following:
The garbler does not garble only a single version of the circuit but creates many
garblings of the same circuit, where each garbling is computed with fresh randomness.
When the circuits are sent to the evaluator, the evaluator can demand that the garbler
“open” some of these garblings and thus show that the right circuit was garbled. If the
garbler fails to answer one of the evaluators opening requests, the evaluator aborts. If the
garbler behaves honestly, he can answer all requests. On the other hand, if the garbler
altered the circuit, there is only a negligible probability that he can answer all requests
correctly.
However, the big downside of this approach is its efficiency: to get statistical security
in the security parameter 𝜆, the garbler has to garble O(𝜆) circuits.
A.4.2. Authenticated Garbling
Wang, Ranellucci, and Katz [WRK17] introduced a method called authenticated garbling
to ensure security of Yaos garbled circuits against malicious adversaries. The main idea
is to use the information-theoretic Message Authentication Code (MAC) from [Nie+12].
This MAC allows two parties A and B to authenticate a bit 𝑏 ∈ {0, 1}. A holds a global
key Δ_A ∈ {0, 1}^𝜆, chosen uniformly at random, which will be the same for all MACs
generated by A. To authenticate a bit 𝑏 held by B, A chooses a local key
Functionality FPre
• Upon receiving Δ_A from A and init from B, and assuming no values Δ_A, Δ_B are
currently stored, choose uniform Δ_B ← {0, 1}^𝜆 and store ⟨Δ_A, Δ_B⟩. Send Δ_B to B.
• Upon receiving (random, 𝑟, 𝑀[𝑟], 𝐾[𝑠]) from A and random from B, sample uniform
𝑠 ∈ {0, 1} and set 𝐾[𝑟] ≔ 𝑀[𝑟] ⊕ 𝑟Δ_B and 𝑀[𝑠] ≔ 𝐾[𝑠] ⊕ 𝑠Δ_A. Send
(𝑠, 𝑀[𝑠], 𝐾[𝑟]) to B.
• Upon receiving (AND, (𝑟1, 𝑀[𝑟1], 𝐾[𝑠1]), (𝑟2, 𝑀[𝑟2], 𝐾[𝑠2]), (𝑟3, 𝑀[𝑟3], 𝐾[𝑠3]))
from A and (AND, (𝑠1, 𝑀[𝑠1], 𝐾[𝑟1]), (𝑠2, 𝑀[𝑠2], 𝐾[𝑟2])) from B, verify that
𝑀[𝑟𝑖] = 𝐾[𝑟𝑖] ⊕ 𝑟𝑖Δ_B and that 𝑀[𝑠𝑖] = 𝐾[𝑠𝑖] ⊕ 𝑠𝑖Δ_A, for 𝑖 ∈ {1, 2}. Send cheat to
B if one of the checks fails. Otherwise, set 𝑠3 ≔ 𝑟3 ⊕ ((𝑟1 ⊕ 𝑠1) ∧ (𝑟2 ⊕ 𝑠2)), set
𝐾[𝑟3] ≔ 𝑀[𝑟3] ⊕ 𝑟3Δ_B, and set 𝑀[𝑠3] ≔ 𝐾[𝑠3] ⊕ 𝑠3Δ_A. Send (𝑠3, 𝑀[𝑠3], 𝐾[𝑟3]) to
B.
Figure A.3.: The Ideal Functionality FPre From [WRK17].
𝐾[𝑏] ∈ {0, 1}^𝜆 and we let the MAC be
𝑀[𝑏] ≔ 𝐾[𝑏] ⊕ 𝑏Δ_A.
Let us ignore for a moment how the two parties calculate (or exchange) these values securely.
A holds the local and the global key (𝐾[𝑏], Δ_A) and B holds the bit value 𝑏 and the MAC
𝑀[𝑏]. If B maliciously wanted to claim a different bit 𝑏′ ≠ 𝑏, he would need to guess
the local key 𝐾[𝑏], which is only possible with negligible probability, as 𝐾[𝑏] is chosen
uniformly at random for every new bit 𝑏. We will adhere to the notation of [WRK17] and
write [𝑏]_B to denote the situation where B holds (𝑏, 𝑀[𝑏] = 𝐾[𝑏] ⊕ 𝑏Δ_A) and A holds
𝐾[𝑏] (and Δ_A). Symmetrically, we write [𝑏]_A if A holds (𝑏, 𝑀[𝑏] = 𝐾[𝑏] ⊕ 𝑏Δ_B) and B
holds a local key 𝐾[𝑏] and the global key Δ_B.
Next, we note that the above scheme is XOR-homomorphic. Concretely, if, e.g., A
holds two authenticated bits [𝑏]_A and [𝑐]_A for 𝑏, 𝑐 ∈ {0, 1}, then A can locally compute
(𝑏 ⊕ 𝑐, 𝑀[𝑏 ⊕ 𝑐] = 𝑀[𝑏] ⊕ 𝑀[𝑐]) and B can locally compute 𝐾[𝑏 ⊕ 𝑐] = 𝐾[𝑏] ⊕ 𝐾[𝑐] to
get [𝑏 ⊕ 𝑐]_A. This XOR-homomorphism allows combining the MAC with techniques from
secret sharing.
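The MAC and its XOR-homomorphism can be sketched in a few lines of Python; the class name and interface are our own, and 𝑏Δ_A below simply means "XOR in Δ_A if 𝑏 = 1".

```python
import secrets

LAMBDA = 16  # bytes, i.e. lambda = 128 bits

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class MacIssuer:
    """Party A: holds the global key Delta_A and one local key per bit."""
    def __init__(self):
        self.delta = secrets.token_bytes(LAMBDA)

    def authenticate(self, b):
        K = secrets.token_bytes(LAMBDA)      # fresh local key K[b]
        M = xor(K, self.delta) if b else K   # M[b] = K[b] xor b*Delta_A
        return K, M

    def verify(self, b, K, M):
        return M == (xor(K, self.delta) if b else K)

A = MacIssuer()
K_b, M_b = A.authenticate(1)
K_c, M_c = A.authenticate(0)
# XOR homomorphism: both parties combine their shares locally,
# yielding a valid MAC on b xor c.
K_bc, M_bc = xor(K_b, K_c), xor(M_b, M_c)
```

Note that claiming the flipped bit for an existing MAC fails verification: turning 𝑀[𝑏] into a MAC on 𝑏 ⊕ 1 would require XORing in Δ_A, which B never sees.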
Roughly speaking, these MACs will be used to authenticate the garbling of each gate of
the garbled circuit.
Wang, Ranellucci, and Katz [WRK17] use the ideal functionality FPre, depicted in Figure
A.3, to realize their protocol. This ideal functionality “encapsulates” the preprocessing
phase of their protocol. After exchanging the MACs and the randomness for each gate,
the protocol can be executed. We omit the details of the protocol description here.
A.5. Acronyms
PRG Pseudo-Random Generator
DES Data Encryption Standard
VOPRF Verifiable Oblivious Pseudo-Random Function
OPRF Oblivious Pseudo-Random Function
PRF Pseudo-Random Function
UC Universal Composability
ZK Zero-Knowledge
MPC Multi-Party Computation
SFE Secure Function Evaluation
OT Oblivious Transfer
PAKE Password Authenticated Key Exchange
aPAKE asymmetric Password Authenticated Key Exchange
LWE Learning With Errors
DDH Decisional Diffie-Hellman Assumption
CDH Computational Diffie-Hellman
NIST National Institute of Standards and Technology
RSA Rivest Shamir Adleman
PKI Public-Key Infrastructure
CRS Common Reference String
PPT Probabilistic Polynomial Time
AES Advanced Encryption Standard
ROM Random Oracle Model
QROM Quantum-accessible Random Oracle Model
AKE Authenticated Key Exchange
DOS Denial of Service
PRP Pseudo-Random Permutation
MAC Message Authentication Code
IETF Internet Engineering Task Force
LAN Local Area Network
WAN Wide Area Network
SIS Short Integer Solution
TLS Transport Layer Security
HTTPS Hypertext Transfer Protocol Secure
DLOG Discrete Logarithm
𝑞-DHI Decisional 𝑞-Diffie-Hellman Inversion Problem