## TCC 2016-B: Program

### Foundations I (Chair: John Steinberger)

9:00-9:20
Simulating Auxiliary Inputs, Revisited
Maciej Skorski

For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. If the domain of $Z$ is small, one can make this function computationally efficient
by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}.
This idea of simulating auxiliary information turns out to be a very useful tool, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results:

\begin{enumerate}[(a)]
\item We present a novel boosting algorithm for constructing the simulator. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures by shifting distinguishers in descent algorithms. Our technique essentially fixes the flaw in the TCC'14 paper "How to Fake Auxiliary Inputs".
\item The complexity of our simulator is better than in previous works, including results derived from the uniform min-max theorem due to Vadhan and Zheng. To achieve $(s,\epsilon)$-indistinguishability we need complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which improves previous bounds by a factor of $\epsilon^{-2}$.
In particular, we get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$.
\end{enumerate}

Our boosting technique utilizes a two-step approach. In the first step we shift the current result (as in gradient or sub-gradient descent algorithms), and in a separate step we fix the biggest violation of the non-negativity constraint (if applicable).
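As a toy illustration of this shift-then-fix loop (a minimal sketch over finite domains; the names and the step size `eta` are ours, and this is not the paper's actual algorithm):

```python
# Toy sketch of one shift-then-fix boosting update (illustrative only).
# m[x][z] is the current candidate for Pr[Z = z | X = x]; D[x][z] is the
# current distinguisher's value, assumed to lie in [-1, 1].

def boosting_step(m, D, eta=0.1):
    out = []
    for row_m, row_d in zip(m, D):
        # Step 1 (shift): move toward the distinguisher, as in
        # (sub)gradient descent.
        row = [p + eta * d for p, d in zip(row_m, row_d)]
        # Step 2 (fix): remove any negative mass created by the shift...
        row = [max(p, 0.0) for p in row]
        # ...and renormalize, so each conditional stays a probability measure.
        s = sum(row)
        out.append([p / s for p in row])
    return out
```

After every update each row remains a distribution, which is exactly the constraint that the ``negative mass'' issue threatens.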

9:20-9:40
Fast Pseudorandom Functions Based on Expander Graphs
Benny Applebaum, Pavel Raykov

We present direct constructions of pseudorandom function (PRF) families based on Goldreich's one-way function. Roughly speaking, we assume that non-trivial local mappings $f:\{0,1\}^n\rightarrow \{0,1\}^m$ whose input-output dependency graph forms an expander are hard to invert. We show that this one-wayness assumption yields PRFs with relatively low complexity. This includes weak PRFs which can be computed in linear time $O(n)$ on a RAM machine with $O(\log n)$ word size, or by a depth-3 circuit with unbounded fan-in AND and OR gates (AC0 circuit), and standard PRFs that can be computed by a quasilinear size circuit or by a constant-depth circuit with unbounded fan-in AND, OR and Majority gates (TC0).

Our proofs are based on a new search-to-decision reduction for expander-based functions. This extends a previous reduction of the first author (STOC 2012) which was applicable for the special case of \emph{random} local functions. Additionally, we present a new family of highly efficient hash functions whose output on exponentially many inputs jointly forms (with high probability) a good expander graph. These hash functions are based on the techniques of Miles and Viola (Crypto 2012). Although some of our reductions provide only relatively weak security guarantees, we believe that they yield a novel approach for constructing PRFs, and therefore enrich the study of pseudorandomness.
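As a rough illustration of the underlying object (a toy sketch: the graph is sampled at random here rather than guaranteed to expand, and the predicate is a placeholder, not the paper's concrete choice):

```python
import random

# Toy Goldreich-style local function (illustrative only).  Each of the m
# output bits applies a fixed d-ary predicate to d input positions given
# by a bipartite graph; one-wayness would require the graph to be a good
# expander, which a random graph only is with high probability.

def make_graph(n, m, d, seed=0):
    rng = random.Random(seed)
    return [rng.sample(range(n), d) for _ in range(m)]

def xor_and_predicate(bits):
    # A placeholder 5-ary predicate mixing linear and nonlinear parts.
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def local_fn(x, graph, predicate=xor_and_predicate):
    return [predicate([x[i] for i in neigh]) for neigh in graph]
```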

9:40-10:00
3-Message Zero Knowledge Against Human Ignorance
Nir Bitansky, Zvika Brakerski, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan

The notion of Zero Knowledge has driven the field of cryptography since its conception over thirty years ago.
It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message
zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round
complexity of zero-knowledge has been an outstanding open problem for far too long.

In this work, we present a three-message zero-knowledge argument system with soundness against uniform
polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for
RAM computations (Kalai and Paneth, TCC 2016B and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely
on a three-message variant of their protocol based on a {\em key-less} collision-resistant hash function
secure against uniform adversaries as well as other standard primitives.

More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee
against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT
2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our
protocol to finding collisions in the underlying hash function.

### Unconditional Security I (Chair: Andrej Bogdanov)

10:30-10:50
Pseudoentropy: Lower-Bounds for Chain Rules and Transformations
Krzysztof Pietrzak, Maciej Skorski

Computational notions of entropy have recently found many applications, including
leakage-resilient cryptography, deterministic encryption or memory delegation.
The two main types of results which make computational notions so useful are
(1) Chain rules, which quantify by how much the computational entropy of a variable decreases when conditioned on some other variable, and (2) Transformations, which quantify to what extent one type of entropy implies another.

Such chain rules and transformations typically lose a significant amount in
quality of the entropy, and are the reason why applying these results one gets rather weak
quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing
results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard
to imagine how non-black-box reductions or adaptivity could be useful here).

A variable $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$ if there exists a variable $Y$ with
$k$ bits min-entropy which cannot be distinguished from $X$ with advantage $\epsilon$ by
distinguishing circuits of size $s$. A weaker notion
is Metric entropy, where we switch quantifiers, and only require that
for every distinguisher of size $s$, such a $Y$ exists.
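In symbols, the only difference between the two notions is the order of quantifiers (a standard reformulation, spelled out here for readability):

```latex
\begin{align*}
\text{HILL:}\;  & \exists Y\,\bigl(H_\infty(Y)\ge k\bigr)\;\forall D\;\text{of size }s:\;
                  \bigl|\Pr[D(X)=1]-\Pr[D(Y)=1]\bigr|\le\epsilon,\\
\text{Metric:}\;& \forall D\;\text{of size }s\;\exists Y\,\bigl(H_\infty(Y)\ge k\bigr):\;
                  \bigl|\Pr[D(X)=1]-\Pr[D(Y)=1]\bigr|\le\epsilon.
\end{align*}
```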

We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality.
Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees.
The best known result states that if a variable $X$ has $k$ bits of Metric entropy of quality $(\epsilon,s)$, then
it has $k$ bits of HILL with quality $(2\epsilon,s\cdot\epsilon^2)$.
We show that this loss of a factor $\Omega(\epsilon^{-2})$ in circuit size is necessary. In fact, we show the stronger result that this loss is already necessary when transforming so called deterministic real valued Metric entropy to randomised boolean Metric (both these variants of Metric entropy are implied by HILL without loss in quality).

The chain rule for HILL entropy states that if $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$, then for any
variable $Z$ of length $m$, $X$ conditioned on $Z$ has $k-m$ bits of HILL entropy with quality $(\epsilon,s\cdot \epsilon^2/ 2^{m})$. We show that a loss of $\Omega(2^m/\epsilon)$ in circuit size is necessary here.
Note that this still leaves a gap of $\epsilon$ between the known bound and our lower bound.

10:50-11:10
Oblivious Transfer from Any Non-Trivial Elastic Noisy Channel via Secret Key Agreement
Ignacio Cascudo, Ivan Damgård, Felipe Lacerda, Samuel Ranellucci

A $(\gamma,\delta)$-elastic channel is a binary symmetric channel between a sender and a receiver where the error rate of an honest receiver is $\delta$ while the error rate of a dishonest receiver lies within the interval $[\gamma, \delta]$. In this paper, we show that from \emph{any}
non-trivial elastic channel (i.e., $0<\gamma<\delta<\frac{1}{2}$) we can
implement oblivious transfer with information-theoretic security. This was
previously (Khurana et al., Eurocrypt 2016) only known for a subset of these
parameters. Our technique relies on a new way to exploit protocols for
information-theoretic key agreement from noisy channels. We also show that
information-theoretically secure commitments where the receiver commits follow
from any non-trivial elastic channel.
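An elastic channel is easy to simulate, which may help fix intuition (a toy sketch; the function name and interface are ours):

```python
import random

# Toy simulation of a (gamma, delta)-elastic binary symmetric channel
# (illustrative only).  An honest receiver experiences error rate delta;
# a dishonest receiver may pick any error rate in [gamma, delta].

def elastic_channel(bits, gamma, delta, receiver_rate=None, seed=None):
    rate = delta if receiver_rate is None else receiver_rate
    assert 0 < gamma <= rate <= delta < 0.5, "non-trivial: 0 < gamma <= rate <= delta < 1/2"
    rng = random.Random(seed)
    # Flip each transmitted bit independently with the receiver's rate.
    return [b ^ (rng.random() < rate) for b in bits]
```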

11:10-11:30
Simultaneous Secrecy and Reliability Amplification for a General Channel Model
Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets, Bruce M. Kapron, Valerie King, Stefano Tessaro

We present a general notion of channel for cryptographic purposes, which can model either a (classical) physical channel or the consequences of a cryptographic protocol, or any hybrid. We consider {\em simultaneous secrecy and reliability amplification} for such channels. We show that simultaneous secrecy and reliability amplification is not possible for the most general model of channel, but, at least for some values of the parameters, it is possible for a restricted class of channels that still includes both standard information-theoretic channels and keyless cryptographic protocols.

Even in the restricted model, we require that for the original channel, the failure chance for the attacker must be a factor $c$ more than that for the intended receiver. We show that for any $c > 4$, there is a one-way protocol (where the sender sends information to the receiver only) which achieves simultaneous secrecy and reliability. From results of Holenstein and Renner (\emph{CRYPTO'05}), there are no such one-way protocols for $c < 2$. On the other hand, we also show that for $c > 1.5$, there are two-way protocols that achieve simultaneous secrecy and reliability.

We propose using similar models to address other questions in the theory of cryptography, such as using noisy channels for secret agreement, trade-offs between reliability and secrecy, and the equivalence of various notions of oblivious channels and secure computation.

### Test of Time Award (Chair: Adam Smith)

11:30-12:45
From Indifferentiability to Constructive Cryptography (and Back)
Award paper: "Indifferentiability, Impossibility Results on Reductions, and Applications to the Random Oracle Methodology" by Ueli Maurer, Renato Renner, and Clemens Holenstein (TCC 2004).
Speaker: Ueli Maurer

The concept of indifferentiability of systems, a generalized form of
indistinguishability, was proposed in 2004 to provide a simplified
and generalized explanation of impossibility results like the
non-instantiability of random oracles by hash functions due to
Canetti, Goldreich, and Halevi (STOC 1998). But indifferentiability
is actually a constructive notion, leading to possibility
results. For example, Coron {\em et al.} (Crypto 2005) argued that the
soundness of the construction $C(f)$ of a hash function from a
compression function $f$ can be demonstrated by proving that $C(R)$
is indifferentiable from a random oracle if $R$ is an ideal random
compression function.

The purpose of this short paper is to describe how the
indifferentiability notion was a precursor to the theory of
constructive cryptography and thereby to provide a simplified and
generalized treatment of indifferentiability as a special type of
constructive statement.

### Foundations II (Chair: Alessandro Chiesa)

14:15-14:35
On the (In)security of SNARKs in the Presence of Oracles
Dario Fiore, Anca Nitulescu

In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) {\em auxiliary information}, here we consider the scenario where adversarial provers are given {\em access to an oracle}. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.
First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue while doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that when considering signature schemes with appropriate restrictions on the message length O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.

14:35-14:55
Leakage Resilient One-Way Functions: The Auxiliary-Input Setting
Ilan Komargodski

Most cryptographic schemes are designed in a model where perfect secrecy of the secret key is assumed. In most physical implementations, however, some form of information leakage is inherent and unavoidable. To deal with this, a flurry of works showed how to construct basic cryptographic primitives that are resilient to various forms of leakage.

Dodis et al. (FOCS '10) formalized and constructed leakage resilient one-way functions. These are one-way functions f such that given a random image f(x) and leakage g(x) it is still hard to invert f(x). Based on any one-way function, Dodis et al. constructed such a one-way function that is leakage resilient assuming that an attacker can leak any lossy function g of the input.

In this work we consider the problem of constructing leakage resilient one-way functions that are secure with respect to arbitrary computationally hiding leakage (a.k.a. auxiliary input). We consider both types of leakage --- selective and adaptive --- and prove various possibility and impossibility results.

On the negative side, we show that if the leakage is an adaptively-chosen arbitrary one-way function, then it is impossible to construct leakage resilient one-way functions. The latter is proved both in the random oracle model (without any further assumptions) and in the standard model based on a strong vector-variant of DDH. On the positive side, we observe that when the leakage is chosen ahead of time, there are leakage resilient one-way functions based on a variety of assumptions.

14:55-15:15
The GGM Function Family is Weakly One-Way
Aloni Cohen, Saleet Klein

We give the first demonstration of the cryptographic hardness of the Goldreich-Goldwasser-Micali (GGM) function family when the secret key is exposed. We prove that for any constant $\epsilon>0$, the GGM family is a $1/n^{2+\epsilon}$-weakly one-way family of functions, when the lengths of secret key, inputs, and outputs are equal.
Namely, any efficient algorithm fails to invert GGM with probability at least $1/n^{2+\epsilon}$ -- \emph{even when given the secret key}.
Additionally, we state natural conditions under which the GGM family is strongly one-way.
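For readers unfamiliar with the construction, the GGM tree walk can be sketched in a few lines (SHA-256 stands in for the length-doubling PRG here; this is an illustration, not a security claim):

```python
import hashlib

# Toy GGM function family (illustrative only).  G plays the role of a
# length-doubling PRG, instantiated with SHA-256 for demonstration:
# G(s, 0) and G(s, 1) are the two halves of a tree step.  ggm(key, x)
# walks the binary tree from the root along the bits of x to a leaf.

def G(seed, bit):
    return hashlib.sha256(bytes([bit]) + seed).digest()[:16]

def ggm(key, x_bits):
    s = key
    for b in x_bits:
        s = G(s, b)
    return s
```

Inverting `ggm` even given `key` means finding some input that reaches a target leaf, which is the (weak) one-wayness question studied here.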

### Foundations of Multi-Party Protocols (Chair: Alon Rosen)

15:45-16:05
Almost-Optimally Fair Multiparty Coin-Tossing with Nearly Three-Quarters Malicious
Bar Alon, Eran Omri

An $\alpha$-fair coin-tossing protocol allows a set of mutually distrustful parties to generate a uniform bit, such that no efficient adversary can bias the output bit by more than $\alpha$. Cleve [STOC 1986] has shown that if half of the parties can be corrupted, then no $r$-round coin-tossing protocol is $o(1/r)$-fair. For over two decades the best known $m$-party protocols, tolerating up to $t\geq m/2$ corrupted parties, were only $O(t/\sqrt{r})$-fair.
In a surprising result,
Moran, Naor, and Segev [TCC 2009] constructed an $r$-round two-party $O(1/r)$-fair coin-tossing protocol, i.e., an optimally fair protocol.
Beimel, Omri, and Orlov [Crypto 2010] extended the results of Moran et al.~to the {\em multiparty setting} where strictly fewer than 2/3 of the parties are corrupted. They constructed a $2^{2^k}/r$-fair $r$-round $m$-party protocol, tolerating up to $t=\frac{m+k}{2}$ corrupted parties.

Recently, in a breakthrough result, Haitner and Tsfadia [STOC 2014] constructed an $O(\log^3(r)/r)$-fair (almost optimal) three-party coin-tossing protocol. Their work brings forth a combination of novel techniques for coping with the difficulties of constructing fair coin-tossing protocols. Still, the best coin-tossing protocols for the case where more than 2/3 of the parties may be corrupted (and even when $t=2m/3$, where $m>3$) were $\Theta(1/\sqrt{r})$-fair.
We construct an $O(\log^3(r)/r)$-fair $m$-party coin-tossing protocol, tolerating up to $t$ corrupted parties, whenever $m$ is constant and $t<3m/4$.

16:05-16:25
Binary AMD Circuits from Secure Multiparty Computation
Daniel Genkin, Yuval Ishai, Mor Weiss

An AMD circuit over a finite field F is a randomized arithmetic circuit that offers the ``best possible protection'' against additive attacks. That is, the effect of every additive attack that may blindly add a (possibly different) element of F to every internal wire of the circuit can be simulated by an ideal attack that applies only to the inputs and outputs.

Genkin et al. (STOC 2014, Crypto 2015) introduced AMD circuits as a means for protecting MPC protocols against active attacks, and showed that every arithmetic circuit C over F can be transformed into an equivalent AMD circuit of size O(|C|) with O(1/|F|) simulation error. However, for the case of the binary field F=F_2, their constructions relied on a tamper-proof output decoder and could only realize a weaker notion of security.

We obtain the first constructions of fully secure binary AMD circuits. Given a boolean circuit C and a statistical security parameter s, we construct an equivalent binary AMD circuit C' of size |C|*polylog(|C|,s) (ignoring lower order additive terms) with 2^-s simulation error. That is, the effect of toggling an arbitrary subset of wires can be simulated by toggling only input and output wires.

Our construction combines in a general way two types of ``simple'' honest-majority MPC protocols: protocols that only offer security against passive adversaries, and protocols that only offer correctness against active adversaries. As a corollary, we get a conceptually new technique for constructing active-secure two-party protocols in the OT-hybrid model, and reduce the open question of obtaining such protocols with constant computational overhead to a similar question in these simpler MPC models.

16:25-16:45
Composable Security in the Tamper-Proof Hardware Model under Minimal Complexity
Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam

We put forth a new formulation of tamper-proof hardware in the Global Universal Composable (GUC) framework introduced by Canetti et al. in TCC 2007. Almost all previous works rely on the formulation by Katz from Eurocrypt 2007, and this formulation does not fully capture tokens in a concurrent setting. We address these shortcomings by relying on the GUC framework, where we make the following contributions:
(1) We construct secure Two-Party Computation (2PC) protocols for general functionalities with optimal round complexity and computational assumptions using stateless tokens. More precisely, we show how to realize arbitrary functionalities in the two-party setting with GUC security in two rounds under the minimal assumption of One-Way Functions (OWFs). Moreover, our construction relies on the underlying function in a black-box way. As a corollary, we obtain feasibility of Multi-Party Computation (MPC) with GUC-security under the minimal assumption of OWFs.
As an independent contribution, we identify an issue with a claim in a previous work by Goyal, Ishai, Sahai, Venkatesan and Wadia in TCC 2010 regarding the feasibility of UC-secure computation with stateless tokens assuming collision-resistant hash-functions (and the extension based only on one-way functions).

(2) We then construct a 3-round MPC protocol to securely realize arbitrary functionalities with GUC-security starting from any semi-honest secure MPC protocol. For this construction, we require the so-called one-many commit-and-prove primitive introduced in the original work of Canetti, Lindell, Ostrovsky and Sahai in STOC 2002 that is round-efficient and black-box in the underlying commitment. Using specially designed ``input-delayed'' protocols we realize this primitive (with a 3-round protocol in our framework) using stateless tokens and one-way functions (where the underlying one-way function is used in a black-box way).

16:45-17:05
Composable Adaptive Secure Protocols without Setup under Polytime Assumptions
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam

All previous constructions of general multiparty computation protocols that are secure against adaptive corruptions in the concurrent setting either require some form of setup or non-standard assumptions. In this paper we provide the first general construction of secure multi-party computation protocol without any setup that guarantees composable security in the presence of an adaptive adversary based on standard polynomial-time assumptions. We prove security under the notion of ``UC with super-polynomial helpers'' introduced by Canetti et al. (FOCS 2010), which is closed under universal composition and implies ``super-polynomial-time simulation''. Moreover, our construction relies on the underlying cryptographic primitives in a black-box manner.

Next, we revisit the zero-one law for two-party secure function evaluation initiated by the work of Maji, Prabhakaran and Rosulek (CRYPTO 2010). According to this law, every two-party functionality is either trivial (meaning, such functionalities can be reduced to any other functionality) or complete (meaning, any other functionality can be reduced to these functionalities) in the Universal Composability (UC) framework. As our second contribution, assuming the existence of a simulatable public-key encryption scheme, we establish a zero-one law in the adaptive setting. Our result implies that every two-party non-reactive functionality is either trivial or complete in the UC framework in the presence of adaptive, malicious adversaries.

17:05-17:25
Adaptive Security of Yao's Garbled Circuits
Zahra Jafargholi, Daniel Wichs

A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. Yao's construction from the 80's is known to achieve selective security, where the adversary chooses the circuit C and the input x in one shot. It has remained as an open problem whether the construction also achieves adaptive security, where the adversary can choose the input x after seeing the garbled version of the circuit C.

A recent work of Hemenway et al. (CRYPTO '16) modifies Yao's construction and shows that the resulting scheme is adaptively secure. This is done by encrypting the garbled circuit from Yao's construction with a special type of ``somewhere equivocal encryption'' and giving the key together with the garbled input. The efficiency of the scheme and the security loss of the reduction is captured by a certain pebbling game over the circuit.

In this work we prove that Yao's construction itself is already adaptively secure, where the security loss can be captured by the same pebbling game. For example, we show that for circuits of depth $d$, the security loss of our reduction is $2^{O(d)}$, meaning that Yao's construction is adaptively secure for $\mathsf{NC}^1$ circuits without requiring complexity leveraging.

Our technique is inspired by the ``nested hybrids'' of Fuchsbauer et al. (Asiacrypt '14, CRYPTO '15) and relies on a careful sequence of hybrids where each hybrid involves some limited guessing about the adversary's adaptive choices. Although it does not match the parameters achieved by Hemenway et al. in their full generality, the main advantage of our work is to prove the security of Yao's construction as is, without any additional encryption layer.
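To fix notation, here is a toy single-gate version of Yao's construction (illustrative only: SHA-256 stands in for the double-keyed encryption, and a block of zero bytes replaces point-and-permute for recognizing the correct row):

```python
import hashlib, os, random

# Toy garbled AND gate (illustrative only, not a secure scheme).
# Each wire value b gets a random 16-byte label; the garbled table
# encrypts the output label under a hash of the two input labels,
# padded with zero bytes so the evaluator can recognize the right row.

LBL = 16  # label length in bytes

def H(a, b):
    return hashlib.sha256(a + b).digest()  # 32 bytes = label + tag

def garble_and_gate():
    wA = [os.urandom(LBL) for _ in range(2)]   # labels for input wire A
    wB = [os.urandom(LBL) for _ in range(2)]   # labels for input wire B
    wC = [os.urandom(LBL) for _ in range(2)]   # labels for output wire C
    table = []
    for a in (0, 1):
        for b in (0, 1):
            pad = H(wA[a], wB[b])
            plain = wC[a & b] + b"\x00" * LBL  # output label + redundancy tag
            table.append(bytes(x ^ y for x, y in zip(plain, pad)))
    random.shuffle(table)                      # hide the row order
    return wA, wB, wC, table

def eval_gate(la, lb, table):
    pad = H(la, lb)
    for ct in table:
        plain = bytes(x ^ y for x, y in zip(ct, pad))
        if plain.endswith(b"\x00" * LBL):      # tag matches: correct row
            return plain[:LBL]
    raise ValueError("no row decrypted")
```

The selective vs. adaptive question is then whether the input labels may be requested after the table is published.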

### Delegation and IP (Chair: Muthuramakrishnan Venkitasubramaniam)

9:00-9:25
Delegating RAM Computations with Adaptive Soundness and Privacy
Prabhanjan Ananth, Yu-Chi Chen, Kai-Min Chung, Huijia Lin, Wei-Kai Lin

We consider the problem of delegating RAM computations over persistent
databases. A user wishes to delegate a sequence of computations over a
database to a server, where each computation may read and modify the database
and the modifications persist between computations. Delegating RAM
computations is important as it has the distinct feature that the run-time of
computations may be sub-linear in the size of the database.

We present the first RAM delegation scheme that provides both soundness and
privacy guarantees in the adaptive setting, where the sequence of delegated
RAM programs is chosen adaptively, depending potentially on the encodings of
the database and previously chosen programs. Prior works either achieved only
adaptive soundness without privacy [Kalai and Paneth, ePrint 2015], or only
security in the selective setting where all RAM programs are chosen statically
[Chen et al., ITCS 2016; Canetti and Holmgren, ITCS 2016].

Our scheme assumes the existence of indistinguishability obfuscation (iO) for
circuits and the decisional Diffie-Hellman (DDH) assumption. However, our
techniques are quite general and in particular, might be applicable even in
settings where iO is not used. We provide a ``security lifting technique''
that ``lifts'' any proof of selective security satisfying certain special
properties into a proof of adaptive security, for arbitrary cryptographic
schemes. We then apply this technique to the delegation scheme of Chen et al.
and its selective security proof, obtaining that their scheme is essentially
already adaptively secure. Because of the general approach, we can also easily
extend to delegating parallel RAM (PRAM) computations. We believe that the
security lifting technique can potentially find other applications and is of
independent interest.

JOINT SLOT WITH

Ran Canetti, Yilei Chen, Justin Holmgren, Mariana Raykova

We show how to garble a large persistent database and then garble, one by one, a sequence of adaptively and adversarially chosen RAM programs that query and modify the database in arbitrary ways. The garbled database and programs reveal only the outputs of the programs when run in sequence on the database. Still, the runtime, space requirements and description size of the garbled programs are proportional only to those of the plaintext programs and the security parameter. We assume indistinguishability obfuscation for circuits and somewhat-regular collision-resistant hash functions. In contrast, all previous garbling schemes with persistent data were shown secure only in the static setting where all the programs are known in advance. Our scheme extends the non-adaptive RAM garbling scheme of Canetti and Holmgren [ITCS 2016].

As an immediate application, we give the first scheme for efficiently outsourcing a large database and computations on the database to an untrusted server, then delegating computations on this database, where these computations may update the database.

We also define and use a new primitive of independent interest, called adaptive accumulators. The primitive extends the positional accumulators of Koppula et al [STOC 2015] and somewhere statistical binding hashing of Hubacek and Wichs [ITCS 2015] to an adaptive setting.

9:25-9:45
Interactive Oracle Proofs
Eli Ben-Sasson, Alessandro Chiesa, Nicholas Spooner

We initiate the study of a proof system model that naturally combines interactive proofs (IPs) and probabilistically-checkable proofs (PCPs), and generalizes interactive PCPs (which consist of a PCP followed by an IP). We define *interactive oracle proof* (IOP) to be an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them. IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. IOPs have already found several applications, including unconditional zero knowledge [BCGV16], constant-rate constant-query probabilistic checking [BCGRS16], and doubly-efficient constant-round IPs for polynomial-time bounded-space computations [RRR16].

We offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against *state restoration attacks*, a class of rewinding attacks on the IOP verifier that is reminiscent of, but incomparable to, resetting attacks.

Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal, in expectation, across all state restoration attacks.

Our compiler can be viewed as a generalization of the Fiat--Shamir paradigm for public-coin IPs [FS86], and of the ``CS proof'' constructions of [Mic94] and [Val08] for PCPs. Our analysis of the compiler gives, in particular, a unified understanding of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs.

When applied to known IOP constructions, our compiler implies, e.g., blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on a result of [IMSX15].

9:45-10:05
Delegating RAM Computations
Yael Kalai, Omer Paneth

In the setting of cloud computing a user wishes to delegate its data, as well as computations over this data, to a cloud provider. Each computation may read and modify the data, and these modifications should persist between computations. Minding the computational resources of the cloud, delegated computations are modeled as RAM programs. In particular, the delegated computations' running time may be sub-linear, or even exponentially smaller than the memory size.

We construct a two-message protocol for delegating RAM computations to an untrusted cloud. In our protocol, the user saves a short digest of the delegated data. For every delegated computation, the cloud returns, in addition to the computation's output, the digest of the modified data, and a proof that the output and digest were computed correctly. When delegating a T-time RAM computation M with security parameter k, the cloud runs in time Poly(T,k) and the user in time Poly(|M|, log(T), k).

Our protocol is secure assuming super-polynomial hardness of the Learning with Error (LWE) assumption. Security holds even when the delegated computations are chosen adaptively as a function of the data and output of previous computations.

We note that RAM delegation schemes are an improved variant of memory delegation schemes [Chung et al. CRYPTO 2011]. In memory delegation, computations are modeled as Turing machines, and therefore, the cloud's work always grows with the size of the delegated data.

### Differential Privacy (Chair: Martin Hirt)

10:35-10:55
Separating Computational and Statistical Differential Privacy in the Client-Server Model
Mark Bun, Yi-Hsiu Chen, Salil P. Vadhan

Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al.~(CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz and Yerukhimovich~(TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks.

Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for NP, there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.

10:55-11:15
Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
Mark Bun, Thomas Steinke

"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."
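For reference, the order-$\alpha$ Rényi divergence underlying this reformulation, and the resulting condition (commonly called $\rho$-zCDP; stated here from the standard formulation rather than quoted from the paper, with $M$ an algorithm and $D, D'$ neighboring inputs), are:

```latex
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha-1}\,
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
\qquad
D_\alpha\!\big(M(D) \,\big\|\, M(D')\big) \;\le\; \rho\,\alpha
\quad\text{for all }\alpha \in (1,\infty).
```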

11:15-11:35
Strong Hardness of Privacy from Weak Traitor Tracing
Lucas Kowalczyk, Tal Malkin, Jonathan Ullman, Mark Zhandry

A central problem in differential privacy is to accurately answer a large family Q
of statistical queries over a data universe X. A statistical query
on a dataset D in X^n asks "what fraction of the elements of D satisfy a given
predicate p on X?" Ignoring computational constraints, it is possible to accurately
answer exponentially many queries on an exponential size universe while satisfying
differential privacy (Blum et al., STOC'08). Dwork et al. (STOC'09) and Boneh and
Zhandry (CRYPTO'14) showed that if both Q and X are of polynomial size,
then there is an efficient differentially private algorithm that
accurately answers all the queries. They also proved that if Q and X are both
exponentially large, then under a plausible assumption, no efficient
algorithm exists.
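For concreteness, a single statistical query can be answered privately with the classical Laplace mechanism (a standard textbook construction, not one of the algorithms discussed in this abstract; function names are illustrative):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_query(dataset, predicate, epsilon):
    """Answer "what fraction of D satisfies p?" with epsilon-DP.
    The fraction has sensitivity 1/n on a size-n dataset, so Laplace
    noise of scale 1/(n*epsilon) suffices (the Laplace mechanism)."""
    n = len(dataset)
    true_answer = sum(1 for x in dataset if predicate(x)) / n
    return true_answer + laplace_noise(1.0 / (n * epsilon))
```

With n = 1000 and epsilon = 1 the noise scale is 1/(n*epsilon) = 0.001, so one query is answered very accurately; the difficulty studied here is answering a large family Q of such queries simultaneously.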

We show that, under the same assumption,
if either the number of queries or the data universe is of
exponential size, then there is no efficient differentially private algorithm
that answers all the queries accurately. Specifically, we prove that if one-way functions and
indistinguishability obfuscation exist, then:

---For every n, there is a family Q of O~(n^7) queries on a data universe X of size 2^d such that no poly(n,d) time differentially private algorithm takes a dataset D in X^n and outputs accurate answers to every query in Q.

---For every n, there is a family Q of 2^d queries on a data universe X of size O~(n^7) such that no poly(n,d) time differentially private algorithm takes a dataset D in X^n and outputs accurate answers to every query in Q.

In both cases, the result is nearly quantitatively tight, since there
is an efficient differentially private algorithm that answers
Omega~(n^2) queries on an exponential size data universe,
and one that answers exponentially many queries on a data universe of
size Omega~(n^2).

Our proofs build on the connection between hardness of
differential privacy and traitor-tracing schemes (Dwork et al.,
STOC'09; Ullman, STOC'13). We prove our hardness result for a
polynomial size query set (resp., data universe) by showing that they
follow from the existence of a special type of traitor-tracing scheme
with very short ciphertexts (resp., secret keys), but very weak
security guarantees, and then constructing such a scheme.

### Invited Talk (Chair: Martin Hirt)

11:35-12:35
Through the Looking Glass: What Cryptography Should Do for Alice
Allison Bishop

### Public-Key Encryption I (Chair: Andrej Bogdanov)

14:00-14:20
Towards Non-Black-Box Separations of Public Key Encryption and One Way Function
Dana Dachman-Soled (presented by Tal Malkin)

Separating public key encryption from one way functions is one of the fundamental goals of complexity-based cryptography. Beginning with the seminal work of Impagliazzo and Rudich (STOC, 1989), a sequence of works have ruled out certain classes of reductions from public key encryption (PKE)---or even key agreement---to one way function. Unfortunately, known results---so called black-box separations---do not apply to settings where the construction and/or reduction are allowed to directly access the code, or circuit, of the one way function. In this work, we present a meaningful, non-black-box separation between public key encryption (PKE) and one way function.

Specifically, we introduce the notion of BBN- reductions (similar to the BBNp reductions of Baecher et al. (ASIACRYPT, 2013)), in which the construction E accesses the underlying primitive in a black-box way, but wherein the universal reduction R receives the efficient code/circuit of the underlying primitive as input and is allowed oracle access to the adversary Adv. We additionally require that the functions describing the number of oracle queries made to Adv, and the success probability of R, are independent of the run-time/circuit size of the underlying primitive. We prove that there is no non-adaptive, BBN- reduction from PKE to one way function, under the assumption that certain types of strong one way functions exist. Specifically, we assume that there exists a regular one way function f such that there is no Arthur-Merlin protocol proving that "z not in Range(f)", where soundness holds with high probability over "no instances" y ~ f(U_n), and Arthur may receive polynomial-sized, non-uniform advice. This assumption is related to the average-case analogue of the widely believed assumption coNP \not\subseteq NP/poly.

14:20-14:40
Post-Quantum Security of the Fujisaki-Okamoto and OAEP Transforms
Ehsan Ebrahimi Targhi, Dominique Unruh

In this paper, we present a hybrid encryption scheme that is chosen
ciphertext secure in the quantum random oracle model. Our scheme is a
combination of an asymmetric and a symmetric encryption scheme that are
secure in a weak sense. It is a slight modification of the Fujisaki-Okamoto
transform that is secure against classical adversaries. In addition, we
modify the OAEP-cryptosystem and prove its security in the quantum random
oracle model based on the existence of a partial-domain one-way
injective function secure against quantum adversaries.

### Secret Sharing (Chair: Andrej Bogdanov)

14:40-15:00
Threshold Secret Sharing Requires a Linear Size Alphabet
Andrej Bogdanov, Siyao Guo, Ilan Komargodski

We prove that for every n and 1<t<n any t-out-of-n threshold secret sharing scheme for one-bit secrets requires share size log(t+1). Our bound is tight when t=n-1 and n is a prime power. In 1990 Kilian and Nisan proved the incomparable bound log(n-t+2). Taken together, the two bounds imply that the share size of Shamir's secret sharing scheme (Comm. ACM '79) is optimal up to an additive constant even for one-bit secrets for the whole range of parameters 1<t<n.

More generally, we show that for all 1<s<r<n, any ramp secret sharing scheme with secrecy threshold s and reconstruction threshold r requires share size log((r+1)/(r-s)).

As part of our analysis we formulate a simple game-theoretic relaxation of secret sharing for arbitrary access structures. We prove the optimality of our analysis for threshold secret sharing with respect to this method and point out a general limitation.
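For reference, Shamir's t-out-of-n scheme discussed above can be sketched as follows (a toy implementation over one fixed prime field; real share-size optimality, the subject of the abstract, depends on choosing the field as small as possible):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; secrets and shares live in GF(PRIME)

def share(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at 0 from t points."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        # Fermat inversion: den^(p-2) = den^(-1) mod p
        secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Each share is one field element, so the share size is the bit-length of the field; the abstract's log(t+1) lower bound shows that share size growing with the threshold is unavoidable even for one-bit secrets.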

15:00-15:20
How to Share a Secret, Infinitely
Ilan Komargodski, Moni Naor, Eylon Yogev

Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the k-threshold access structure, where the qualified subsets are those of size at least k. When k=2 and there are n parties, there are schemes where the size of the share each party gets is roughly log(n) bits, and this is tight even for secrets of 1 bit. In these schemes, the number of parties n must be given in advance to the dealer.

In this work we consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the t-th party to arrive as small a share as possible as a function of t. Our main result is such a scheme for the k-threshold access structure where the share size of party t is (k-1)*log t + poly(k)*o(log t). For k=2 we observe an equivalence to prefix codes and present matching upper and lower bounds of the form log t + loglog t + logloglog t + O(1). Finally, we show that for any access structure there exists such a secret sharing scheme with shares of size 2^{t-1}.

### New Models (Chair: Alon Rosen)

15:50-16:10
Designing Proof of Human-work Puzzles for Cryptocurrency and Beyond
Jeremiah Blocki, Hong-Sheng Zhou

We introduce the novel notion of a Proof of Human-work (PoH) and present the first distributed consensus protocol from hard Artificial Intelligence problems. As the name suggests, a PoH is a proof that a {\em human} invested a moderate amount of effort to solve some challenge. A PoH puzzle should be moderately hard for a human to solve. However, a PoH puzzle must be hard for a computer to solve, including the computer that generated the puzzle, without sufficient assistance from a human. By contrast, CAPTCHAs are only difficult for other computers to solve --- not for the computer that generated the puzzle. We also require that a PoH be publicly verifiable by a computer without any human assistance and without ever interacting with the agent who generated the proof of human-work. We show how to construct PoH puzzles from indistinguishability obfuscation and from CAPTCHAs. We motivate our ideas with two applications: HumanCoin and passwords. We use PoH puzzles to construct HumanCoin, the first cryptocurrency system with human miners. Second, we use proofs of human work to develop a password authentication scheme which provably protects users against offline attacks.

16:10-16:30
Access Control Encryption: Enforcing Information Flow with Cryptography
Ivan Damgård, Helene Haagh, Claudio Orlandi

We initiate the study of Access Control Encryption (ACE), a novel cryptographic primitive that allows fine-grained access control, by giving different rights to different users not only in terms of which messages they are allowed to receive, but also which messages they are allowed to send.

Classical examples of security policies for information flow are the well-known Bell-LaPadula [BL73] and Biba [Bib75] models: in a nutshell, the Bell-LaPadula model assigns roles to every user in the system (e.g., public, secret and top-secret). A user's role specifies which messages the user is allowed to receive (i.e., the no read-up rule, meaning that users with public clearance should not be able to read messages marked as secret or top-secret) but also which messages the user is allowed to send (i.e., the no write-down rule, meaning that a malicious user with top-secret clearance should not be able to write messages marked as secret or public).
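The two rules amount to simple predicates on clearance levels; a minimal sketch (level names follow the example in the abstract; ACE's point is to enforce such checks cryptographically rather than via a trusted reference monitor):

```python
# Bell-LaPadula clearance levels, ordered from lowest to highest.
LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

def can_read(subject_level, object_level):
    """No read-up: a subject may only read objects at or below its clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """No write-down: a subject may only write objects at or above its clearance."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```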

To the best of our knowledge, no existing cryptographic primitive allows for even this simple form of access control, since no existing cryptographic primitive enforces any restriction on what kind of messages one should be able to encrypt.

Our contributions are:
- Introducing and formally defining access control encryption (ACE);
- A construction of ACE with complexity linear in the number of roles, based on classic number-theoretic assumptions (DDH, Paillier);
- A construction of ACE with complexity polylogarithmic in the number of roles, based on recent results on cryptographic obfuscation.

### Obfuscation and Multilinear Maps (Chair: Alon Rosen)

16:30-16:50
Secure Obfuscation in a Weak Multilinear Map Model
Sanjam Garg, Eric Miles, Pratyay Mukherjee, Amit Sahai, Akshayaram Srinivasan, Mark Zhandry

All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals whether an encoded element is 0, and nothing more.

However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry [Crypto'16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13.

In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13]. We prove its security in the weak multilinear map model, thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in NC^1.

16:50-17:10
Virtual Grey-Boxes Beyond Obfuscation: A Statistical Security Notion for Cryptographic Agents
Shashank Agrawal, Manoj Prabhakaran, Ching-Hua Yu

We extend the simulation-based definition of Virtual Grey Box (VGB) security -- originally proposed for obfuscation (Bitansky and Canetti, 2010) -- to a broad class of cryptographic primitives. These include functional encryption, graded encoding schemes, bi-linear maps (with uber assumptions), as well as unexplored ones like homomorphic functional encryption.

Our main result is a characterization of VGB security, in all these cases, in terms of an indistinguishability-preserving notion of security, called 'rtestfamily-sINDPRE security', formulated using an extension of the recently proposed Cryptographic Agents framework (Agrawal et al., 2015). We further show that this definition is equivalent to an indistinguishability based security definition that is restricted to 'concentrated' distributions (wherein the outcome of any computation on encrypted data is essentially known ahead of the computation).

A result of Bitansky et al. (2014), who showed that VGB obfuscation is equivalent to strong indistinguishability obfuscation (SIO), is obtained by specializing our result to obfuscation. Our proof, while sharing various elements from the proof of Bitansky et al., is simpler and significantly more general, as it uses rtestfamily-sINDPRE security as an intermediate notion. Our characterization also shows that the semantic security for graded encoding schemes (Pass et al. 2014), is in fact an instance of this same definition.

We also present a composition theorem for rtestfamily-sINDPRE security. We can then recover the result of Bitansky et al. (2014) regarding the existence of VGB obfuscation for all NC1 circuits, simply by instantiating this composition theorem with a reduction from obfuscation of NC1 circuits to graded encoding schemas (Barak et al., 2014) and the assumption that there exists an rtestfamily-sINDPRE secure scheme for the graded encoding schema (Pass et al. 2014).

### Round Complexity & Efficiency of Multi-Party Computation (Chair: Alessandro Chiesa)

9:00-9:20
Efficient Secure Multiparty Computation with Identifiable Abort
Carsten Baum, Emmanuela Orsini, Peter Scholl

We study secure multiparty computation (MPC) in the dishonest majority setting providing security with identifiable abort, where if the protocol aborts, the honest parties can agree upon the identity of a corrupt party. All known constructions that achieve this notion require expensive zero-knowledge techniques to obtain active security, and so are not practical.

In this work, we present the first efficient MPC protocol with identifiable abort. Our protocol has an information-theoretic online phase with message complexity O(n^2) for each secure multiplication, similar to the BDOZ protocol (Bendlin et al., Eurocrypt 2011), and a factor of O(\kappa) lower than the identifiable abort protocol of Ishai et al. (Crypto 2014). A key component of our protocol is a linearly homomorphic information-theoretic signature scheme, for which we provide the first definitions and construction based on a previous non-homomorphic scheme. We then show how to implement the preprocessing for our protocol using somewhat homomorphic encryption, similarly to the SPDZ protocol (Damg{\aa}rd et al., Crypto 2012) and other recent works with applicable efficiency improvements.

9:20-9:45
Secure Multiparty RAM Computation in Constant Rounds
Sanjam Garg, Divya Gupta, Peihan Miao, Omkant Pandey

Securing computation of a random access machine (RAM) program typically entails that it be first converted into a circuit. This conversion is unimaginable in the context of big-data applications where the size of the circuit can be exponential in the running time of the original RAM program. Realizing these constructions, without relinquishing the efficiency of RAM programs, often poses considerable technical hurdles. Our understanding of these techniques in the multi-party setting is largely limited. Specifically, the round complexity of all known protocols grows linearly in the running time of the program being computed.

In this work, we consider the multi-party case and obtain the following results:

1. \emph{Semi-honest model}: We present a constant-round black-box secure computation protocol for RAM programs. This protocol is obtained by building on the new black-box garbled RAM construction by Garg, Lu, and Ostrovsky [FOCS 2015], and the constant-round secure computation protocol for circuits of Beaver, Micali, and Rogaway [STOC 1990]. This construction allows execution of multiple programs on the same persistent database.

2. \emph{Malicious model}: Next, we show how to extend our semi-honest results to the malicious setting, while ensuring that the new protocol is still constant-round and black-box in nature.

JOINT SLOT WITH

Constant-Round Maliciously Secure Two-Party Computation in the RAM Model
Carmit Hazay, Avishay Yanai

The random-access memory (RAM) model of computation allows programs constant-time memory lookups and is more applicable in practice today, covering many important algorithms. This is in contrast to the classic setting of secure two-party computation (2PC), which mostly follows the approach in which the desired functionality must be represented as a boolean circuit. In this work we design the first constant-round maliciously secure two-party protocol in the RAM model. Our starting point is the garbled RAM construction of Gentry et al. from 2014 that readily induces a constant-round semi-honest two-party protocol for any RAM program. We show how to enhance the security of their construction to the malicious setting, while facing several significant challenges that stem from handling the data memory.

9:45-10:05
More Efficient Constant-Round Multi-Party Computation from BMR and SHE
Yehuda Lindell, Nigel P. Smart, Eduardo Soria-Vazquez

We present a multi-party computation protocol for the case of a dishonest majority which has very low round complexity. Our protocol sits philosophically between Gentry's Fully Homomorphic Encryption based protocol and the SPDZ-BMR protocol of Lindell et al. (CRYPTO 2015), and avoids various inefficiencies of both. Compared to Gentry's protocol we only require Somewhat Homomorphic Encryption (SHE). In comparison to the SPDZ-BMR protocol we require only quadratic complexity in the number of players (as opposed to cubic), fewer rounds, and fewer proofs of correctness of ciphertexts. Additionally, we present a variant of our protocol which trades the depth of the garbling circuit (computed using SHE) for some more multiplications in the offline and online phases.

10:05-10:25
Cross&Clean: Amortized Garbled Circuits With Constant Overhead
Jesper Buus Nielsen, Claudio Orlandi

Garbled circuits (GC) are one of the main tools for secure two-party computation. One of the most promising techniques for efficiently achieving active security in the context of GCs is the so-called cut-and-choose approach, which in the last few years has received many refinements in terms of the number of garbled circuits which need to be constructed, exchanged and evaluated.

In this paper we ask a simple question, namely "how many garbled circuits are needed to achieve active security?" and we propose a novel protocol which achieves active security while using only a constant number of garbled circuits per evaluation in the amortized setting.

### Unconditional Security II (Chair: Adam Smith)

10:55-11:15
Proof of Space from Stacked Expanders

Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work.
In PoS, a prover proves to a verifier that it has dedicated some specified amount of space.
A closely related notion is memory-hard functions (MHF), functions that require a lot of memory/space to compute.
While making promising progress, existing PoS and MHF have several problems.
First, there are large gaps between the desired space-hardness and what can be proven.
Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol;
few proposals considered this issue.
Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor overhead to efficiency.
In this paper, we construct PoS from stacked expander graphs.
Our constructions are simpler, more efficient and have tighter provable space-hardness than prior works.
Our results also apply to a recent MHF called Balloon hash.
We show Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
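To illustrate the kind of object under study, here is a toy Balloon-style memory-hard function (purely illustrative: it is not the construction from the paper, the parameters are far too small for real use, and `mhf` is a hypothetical name):

```python
import hashlib

def mhf(passwd, salt, space=64, rounds=3):
    """Toy memory-hard function in the spirit of Balloon hashing:
    sequentially fill a buffer of hash blocks, then repeatedly mix each
    block with pseudorandomly chosen other blocks, so that the whole
    buffer must be kept in memory throughout the computation."""
    H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()
    buf = [H(passwd, salt)]
    for _ in range(1, space):            # expand: sequential fill
        buf.append(H(buf[-1]))
    for r in range(rounds):              # mix
        for i in range(space):
            # dependency indices derived from the salt only
            # (data-independent memory access pattern)
            deps = [buf[int.from_bytes(H(salt, bytes([r, i, j])), "big") % space]
                    for j in range(2)]
            buf[i] = H(buf[i - 1], buf[i], *deps)
    return buf[-1]                       # extract the final block
```

Because the mixing rounds reference pseudorandomly chosen earlier blocks, an evaluator that discards part of the buffer is forced into long recomputations; expander-based analyses such as the paper's lower-bound this cost throughout the computation, not just at one point.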

11:15-11:35
Perfectly Secure Message Transmission in Two Rounds

In the model that has become known as "Perfectly Secure Message Transmission" (PSMT), a sender Alice is connected to a receiver Bob through n parallel two-way channels. A computationally unbounded adversary Eve controls t of these channels, meaning she can acquire and alter any data that is transmitted over these channels. The sender Alice wishes to communicate a secret message to Bob privately and reliably, i.e. in such a way that Eve will not get any information about the message while Bob will be able to recover it completely.

In this paper, we focus on protocols that work in two transmission rounds for n = 2t+1. We break from previous work by following a conceptually simpler blueprint for achieving a PSMT protocol. We reduce the previously best-known communication complexity, i.e. the number of transmitted bits necessary to communicate a 1-bit secret, from O(n^3 log n) to O(n^2 log n). Our protocol also answers a question raised by Kurosawa and Suzuki and hitherto left open: their protocol reaches optimal transmission rate for a secret of size O(n^2 log n) bits, and the authors raised the problem of lowering this threshold. The present solution does this for a secret of O(n log n) bits.

### Invited Talk (Chair: Adam Smith)

11:35-12:35
Secure Hardware and Cryptography: Contrasts, Challenges and Opportunities

### Public-Key Encryption II (Chair: Muthuramakrishnan Venkitasubramaniam)

14:00-14:20
Standard Security Does Not Imply Indistinguishability Under Selective Opening
Dennis Hofheinz, Vanishree Rao, Daniel Wichs (presented by Mor Weiss)

In a selective opening attack (SOA) on an encryption scheme, the adversary is given a collection of ciphertexts and she selectively chooses to see some subset of them "opened", meaning that the messages and the encryption randomness are revealed to her. A scheme is SOA secure if the data contained in the unopened ciphertexts remains hidden. A fundamental question is whether every CPA secure scheme is necessarily also SOA secure. The work of Bellare et al. (EUROCRYPT '12) gives a partial negative answer by showing that some CPA secure schemes do not satisfy a simulation-based definition of SOA security called SIM-SOA. However, until now, it remained possible that every CPA-secure scheme satisfies an indistinguishability-based definition of SOA security called IND-SOA.

In this work, we resolve the above question in the negative and construct a highly contrived encryption scheme which is CPA (and even CCA) secure but is not IND-SOA secure. In fact, it is broken in a very obvious sense by a selective opening attack as follows. A random value is secret-shared via Shamir's scheme so that any $t$ out of $n$ shares reveal no information about the shared value. The $n$ shares are individually encrypted under a common public key and the $n$ resulting ciphertexts are given to the adversary who selectively chooses to see $t$ of the ciphertexts opened. Counter-intuitively, by the specific properties of our encryption scheme, this suffices for the adversary to completely recover the shared value. Our contrived scheme relies on strong assumptions: public-coin differing inputs obfuscation and a certain type of correlation intractable hash functions.

We also extend our negative result to the setting of SOA attacks with key opening (IND-SOA-K), where the adversary is given a collection of ciphertexts under different public keys and selectively chooses to see some subset of the secret keys.

14:20-14:40
Public-Key Encryption with Simulation-Based Selective-Opening Security and Compact Ciphertexts
Dennis Hofheinz, Tibor Jager, Andy Rupp

In a selective-opening (SO) attack on an encryption scheme, an adversary A gets a number of ciphertexts (with possibly related plaintexts), and can then adaptively select a subset of those ciphertexts. The selected ciphertexts are then opened for A (which means that A gets to see the plaintexts and the corresponding encryption random coins), and A tries to break the security of the unopened ciphertexts.

Two main flavors of SO security notions exist: indistinguishability-based (IND-SO) and simulation-based (SIM-SO) ones. Whereas IND-SO security allows for simple and efficient instantiations, its usefulness in larger constructions is somewhat limited, since it is restricted to special types of plaintext distributions. On the other hand, SIM-SO security does not suffer from this restriction, but turns out to be significantly harder to achieve. In fact, all known SIM-SO secure encryption schemes either require O(|m|) group elements in the ciphertext to encrypt |m|-bit plaintexts, or use specific algebraic properties available in the DCR setting. In this work, we present the first SIM-SO secure PKE schemes in the discrete-log setting with compact ciphertexts (whose size is O(1) group elements plus plaintext size). The SIM-SO security of our constructions can be based on, e.g., the k-linear assumption for any k.

Technically, our schemes extend previous IND-SO secure schemes by the property that simulated ciphertexts can be efficiently opened to arbitrary plaintexts. We do so by encrypting the plaintext in a bitwise fashion, but such that each encrypted bit leads only to a single ciphertext bit (plus O(1) group elements that can be shared across many bit encryptions). Our approach leads to rather large public keys (of O(|m|^2) group elements), but we also show how this public key size can be reduced (to O(|m|) group elements) in pairing-friendly groups.

14:40-15:00
Multi-Key FHE from LWE, Revisited
Chris Peikert, Sina Shiehian

Traditional fully homomorphic encryption (FHE) schemes only allow
computation on data encrypted under a \emph{single} key.
López-Alt, Tromer, and Vaikuntanathan (STOC 2012) proposed the
notion of \emph{multi-key} FHE, which allows homomorphic computation
on ciphertexts encrypted under different keys, and also gave a
construction based on a (somewhat nonstandard) assumption related to
NTRU. More recently, Clear and McGoldrick (CRYPTO 2015), followed by
Mukherjee and Wichs (EUROCRYPT 2016), proposed a multi-key FHE that
builds upon the LWE-based FHE of Gentry, Sahai, and Waters (CRYPTO
2013). However, unlike the original construction of López-Alt
et al., these later LWE-based schemes have the somewhat undesirable
property of being "single-hop for keys": all relevant keys must be
known at the start of the homomorphic computation, and the output
cannot be usefully combined with ciphertexts encrypted under other
keys (unless an expensive "bootstrapping" step is performed).

In this work we construct two multi-key FHE schemes, based on LWE
assumptions, which are \emph{multi-hop for keys}: the output of a
homomorphic computation on ciphertexts encrypted under a set of keys
can be used in further homomorphic computation involving
\emph{additional} keys, and so on. Moreover, incorporating
ciphertexts associated with new keys is a relatively efficient
"native" operation akin to homomorphic multiplication, and does not
require bootstrapping (in contrast with all other LWE-based
solutions). Our systems also have smaller ciphertexts than the
previous LWE-based ones; in fact, ciphertexts in our second
construction are simply GSW ciphertexts with no auxiliary data.

### Attribute-Based Encryption (Chair: John Steinberger)

15:30-15:50
Deniable Attribute Based Encryption for Branching Programs from LWE
Daniel Apon, Xiong Fan, Feng-Hao Liu

Deniable encryption (Canetti et al. CRYPTO '97) is an intriguing primitive that provides a security guarantee against not only eavesdropping attacks as required by semantic security, but also stronger coercion attacks performed after the fact. The concept of deniability has since proven useful and powerful in many other contexts, such as leakage resilience, adaptive security of protocols, and security against selective opening attacks. Despite its conceptual usefulness, our understanding of how to construct deniable primitives under standard assumptions is restricted.

In particular, from standard lattice assumptions, i.e. Learning with Errors (LWE), we have only flexibly deniable and non-negligible-advantage deniable public-key encryption schemes, whereas with the much stronger assumption of indistinguishability obfuscation we can obtain at least fully sender-deniable PKE and computation. How to achieve deniability for other, more advanced encryption schemes under standard assumptions remains an interesting open question.

In this work, we construct a flexibly bi-deniable Attribute-Based Encryption (ABE) scheme for all polynomial-size Branching Programs from LWE. Our techniques involve new ways of manipulating Gaussian noise that may be of independent interest, and lead to a significantly sharper analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas give insight into achieving deniability and related properties for further, advanced cryptographic systems from lattice assumptions.

15:50-16:10
Targeted Homomorphic Attribute-Based Encryption
Zvika Brakerski, David Cash, Rotem Tsabary, Hoeteck Wee

In (key-policy) attribute-based encryption (ABE), messages are encrypted with respect to attributes $x$, and keys are generated with respect to policy functions $f$. The ciphertext is decryptable by a key only if $f(x)=0$. Adding homomorphic capabilities to ABE is a long-standing open problem, with current techniques only allowing compact homomorphic evaluation on ciphertexts with respect to the same $x$. Recent advances in the study of multi-key FHE also allow cross-attribute homomorphism, with ciphertext size growing (quadratically) with the number of input ciphertexts.

We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key for a policy $f$ with $f(x_i)=0$ for all attributes involved in the computation. In our scheme, the target policy $f$ needs to be known to the evaluator; we call this \emph{targeted homomorphism}. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with sub-exponential modulus-to-noise ratio.

We present a second scheme where there need not be a single target policy. Instead, the decryptor only needs a set of keys representing policies $f_j$ s.t.\ for every attribute $x_i$ there exists $f_j$ with $f_j(x_i)=0$. In this scheme, the ciphertext size grows (quadratically) with the size of the \emph{set of policies} (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with sub-exponential noise ratio.
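The decryption condition of the second scheme, that every attribute $x_i$ in the computation is covered by some policy $f_j$ with $f_j(x_i)=0$, amounts to a simple coverage check. A toy Python model (the policies and attribute strings here are illustrative stand-ins, not part of the actual scheme):

```python
from typing import Callable, Iterable, List

def can_decrypt(policies: List[Callable[[str], int]],
                attributes: Iterable[str]) -> bool:
    # Every attribute x_i must be covered by some policy f_j with f_j(x_i) == 0
    # (in this ABE convention, f(x) == 0 means the key authorizes attribute x).
    return all(any(f(x) == 0 for f in policies) for x in attributes)

# Toy policies: authorize (return 0) iff the attribute carries a given tag.
f_hr  = lambda x: 0 if "hr"  in x else 1
f_eng = lambda x: 0 if "eng" in x else 1

assert can_decrypt([f_hr, f_eng], ["hr:alice", "eng:bob"])  # each attribute covered
assert not can_decrypt([f_hr], ["hr:alice", "eng:bob"])     # "eng:bob" uncovered
```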

16:10-16:30
Semi-Adaptive Security and Bundling Functionalities Made Generic and Easy
Rishab Goyal, Venkata Koppula, Brent Waters

Semi-adaptive security is a notion of security that lies between selective and adaptive security for Attribute-Based Encryption (ABE) and Functional Encryption (FE) systems. In the semi-adaptive model the attacker is forced to disclose the challenge messages before it makes any key queries, but is allowed to see the public parameters.

We show how to generically transform any selectively secure ABE or FE scheme into one that is semi-adaptively secure, with the only additional assumption being public key encryption, which is already naturally included in almost any scheme of interest. Our technique utilizes a fairly simple application of garbled circuits where instead of encrypting directly, the encryptor creates a garbled circuit that takes as input the public parameters and outputs a ciphertext in the underlying selective scheme. Essentially, the encryption algorithm encrypts without knowing the ``real'' public parameters. This allows one to delay giving out the underlying selective parameters until a private key is issued, which connects semi-adaptive security to selective security. The methods used to achieve this result suggest that the moral gap between selective and semi-adaptive security is in general much smaller than that between semi-adaptive and full security.
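The data flow of this transformation can be sketched as follows. This is a purely structural mock with no cryptographic security: "garbling" is simulated by a closure, and `selective_encrypt` is a stand-in for the underlying selectively secure scheme; a real instantiation would use Yao-style garbled circuits.

```python
def garble(circuit):
    # Toy "garbling": wrap the circuit in a closure and gate evaluation on an
    # opaque token standing in for the wire labels of a real garbled circuit.
    token = object()

    def encode(x):
        # In a real scheme, releasing encode(x) reveals only the labels for x.
        return (token, x)

    def g_eval(encoded):
        t, x = encoded
        assert t is token  # only properly encoded inputs evaluate
        return circuit(x)

    return g_eval, encode

def selective_encrypt(pp, msg):
    # Stand-in for encryption in the underlying selectively secure scheme.
    return ("ct", pp, msg)

def semi_adaptive_encrypt(msg):
    # Encrypt WITHOUT knowing the real public parameters: garble the circuit
    # pp -> Enc_sel(pp, msg) and ship the garbled circuit plus input encoder.
    return garble(lambda pp: selective_encrypt(pp, msg))

# Later, once the real parameters are fixed (e.g. when a key is issued),
# the encoded parameters let the evaluation complete the encryption.
g_eval, encode = semi_adaptive_encrypt("hello")
ct = g_eval(encode("PP-2016"))
assert ct == ("ct", "PP-2016", "hello")
```

The point of the indirection is that the challenge ciphertext can be produced before the selective scheme's parameters are committed to, which is exactly what the semi-adaptive-to-selective reduction needs.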

Finally, we show how to extend the above idea to generically bundle a family of functionalities under one set of public parameters. For example, suppose we had an inner product predicate encryption scheme where the length of the vectors was specified at setup and therefore fixed to the public parameters. Using our transformation one could create a system where for a single set of public parameters the vector length is not a priori bounded, but instead is specified by the encryption algorithm. The resulting ciphertext would be compatible with any private key generated to work on the same input length.

### Functional Encryption (Chair: John Steinberger)

16:30-16:50
From Cryptomania to Obfustopia through Secret-Key Functional Encryption
Nir Bitansky, Ryo Nishimaki, Alain Passelègue, Daniel Wichs

Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (\IO) while other variants have been constructed from standard assumptions such as \LWE.
Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of \emph{secret-key functional encryption} with an unbounded number of keys and ciphertexts. On the one hand, this primitive \mbox{is not known to imply} anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.

In this work, we show that (sub-exponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (sub-exponentially secure) standard public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.

On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get ``exponentially-efficient indistinguishability obfuscation'' (\XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of \IO. Lin et al. show how to use \XIO and the \LWE assumption to build \IO. As our second contribution, we show how to replace, in this result, the \LWE assumption with any (standard) public-key encryption scheme.

Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that sub-exponentially secure secret-key functional encryption indeed leads to public-key encryption. However, the resulting public-key encryption scheme is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.

16:50-17:15
Single-Key to Multi-Key Functional Encryption with Polynomial Loss
Sanjam Garg, Akshayaram Srinivasan

Functional encryption (FE) enables fine-grained access to encrypted data. In an FE scheme, the holder of a secret key $\SK_f$ (associated with a function $f$) and a ciphertext $c$ (encrypting plaintext $x$) can learn $f(x)$ but nothing more.

An important parameter in the security model for FE is the number of secret keys that the adversary has access to. In this work, we give a transformation from a FE scheme for which the adversary gets access to a single secret key (with ciphertext size sub-linear in the circuit for which this secret key is issued) to one that is secure even if the adversary gets access to an {unbounded} number of secret keys. A novel feature of our transformation is that its security proof incurs only a {\em polynomial} loss.
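To pin down the correctness contract that both abstracts in this slot build on, $\mathsf{Dec}(\SK_f, \mathsf{Enc}(x)) = f(x)$, here is a toy, deliberately insecure mock of the FE interface (it models only the API shape, not any hiding):

```python
class ToyFE:
    """Insecure mock of the functional-encryption interface: it captures the
    correctness contract Dec(SK_f, Enc(x)) = f(x), and nothing about security."""

    def __init__(self):
        self._msk = object()  # stands in for the master secret key

    def keygen(self, f):
        # SK_f binds the function f to this scheme instance.
        return (self._msk, f)

    def encrypt(self, x):
        # A real scheme would hide x; this mock stores it in the clear.
        return (self._msk, x)

    def decrypt(self, sk_f, ct):
        msk1, f = sk_f
        msk2, x = ct
        assert msk1 is msk2 is self._msk  # key and ciphertext must match
        return f(x)

fe = ToyFE()
sk = fe.keygen(lambda x: x * x)
ct = fe.encrypt(7)
assert fe.decrypt(sk, ct) == 49
```

The security question the transformation addresses is orthogonal to this contract: how many distinct `keygen` outputs the adversary may see before security must still hold.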

JOINT SLOT WITH

Compactness vs Collusion Resistance in Functional Encryption
Baiyu Li, Daniele Micciancio

We present two general constructions that can be used to combine any two functional encryption (FE) schemes (supporting a bounded number of key queries) into a new functional encryption scheme supporting a larger number of key queries. By using these constructions iteratively, we transform any primitive FE scheme supporting a single functional key query (for a sufficiently general class of functions) and satisfying certain weak compactness properties into a collusion-resistant FE scheme with the same or slightly weaker compactness properties. Together with previously known reductions, this shows that the compact, weakly compact, collusion-resistant, and weakly collusion-resistant versions of FE are all equivalent under polynomial-time reductions. These are all FE variants known to imply the existence of indistinguishability obfuscation, and they were previously thought to offer slightly different avenues toward the realization of obfuscation from general assumptions. Our results show that they are indeed all equivalent, improving our understanding of the minimal assumptions on functional encryption required to instantiate indistinguishability obfuscation.