These pages are a bit out of date. For information about my most recent research, see my Publications page.
I am currently involved in research projects in the following areas:
Here is a brief summary of the projects by area, with pointers to related pages describing each project in more detail. (Click on the project titles to reach the individual project pages.)
Geometry of numbers and coding theory are two areas of mathematics that study point lattices and error-correcting codes, respectively. Both areas are also the source of many interesting computational problems. Point lattices and linear codes are multidimensional objects with a large number of theoretical and practical applications, from message communication over noisy channels, to the solution of low-degree modular equations, to the design of secure cryptographic functions, and more. The wide range of applications of lattices and codes is largely due to their rich multidimensional structure. High dimensionality also makes most of these problems highly nontrivial to solve, even in an approximate sense.
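To make the notion of a point lattice concrete, here is a minimal illustrative sketch (not part of any project described here; the basis is made up for the example): a lattice is the set of all integer linear combinations of some linearly independent basis vectors.

```python
from itertools import product

# Hypothetical 2-dimensional basis, chosen only for illustration.
basis = [(2, 1), (0, 3)]

def lattice_point(coeffs, basis):
    """Integer combination c1*b1 + c2*b2 of the basis vectors."""
    return tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                 for i in range(len(basis[0])))

# Enumerate the lattice points with coefficients in {-1, 0, 1}.
points = sorted(lattice_point(c, basis) for c in product([-1, 0, 1], repeat=2))
print(points)
```

Real applications work with lattices in hundreds of dimensions, where such direct enumeration is of course infeasible.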
Mostly motivated by cryptographic applications, we have been working on many different research projects related to lattices and codes, described below.
Most classic computational problems on point lattices are believed to be computationally intractable. The goal of this project is to determine which problems are computationally hard. In most practical applications, approximate solutions are almost as good as exact ones. So, an important goal of this project is to determine whether the problems are hard to solve not only when one seeks the optimal solution, but also when approximate (nearly as good) solutions are allowed, and for what approximation factors. (more…)
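For intuition, the prototypical hard lattice problem is the Shortest Vector Problem (SVP): given a basis, find a shortest nonzero lattice vector. The toy brute-force search below (a hypothetical sketch with a made-up basis, not a method from this project) only works because the dimension is 2; the hardness discussed above kicks in as the dimension grows.

```python
from itertools import product
from math import hypot

# A "skewed" basis that hides the short vectors: this particular basis
# actually generates all of Z^2, so the shortest nonzero vectors have
# length 1 even though neither basis vector is short.
basis = [(1, 2), (2, 3)]

best = None
for c1, c2 in product(range(-20, 21), repeat=2):
    if (c1, c2) == (0, 0):
        continue  # SVP asks for a shortest NONZERO vector
    v = (c1 * basis[0][0] + c2 * basis[1][0],
         c1 * basis[0][1] + c2 * basis[1][1])
    if best is None or hypot(*v) < hypot(*best):
        best = v

print(best)  # a shortest nonzero lattice vector, of length 1
```

An approximation algorithm would be allowed to return any nonzero lattice vector whose length is within a given factor of the shortest one.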
Point lattices and linear codes have much in common: both are high-dimensional objects with an additive group structure. However, the mathematics of lattices and codes is often very different. (Technically, lattices are modules over the ring of integers, while codes are vector spaces over a finite field.) Still, the similarities between the two classes of objects often extend well beyond the syntactic level. For example, lattices can be constructed from codes, and vice versa. Moreover, ideas from one area often prove useful in the study of the other. The goal of this project is to gain a better understanding of the computational complexity of coding problems, with special attention to problems that bear a strong similarity to the analogous lattice problems. (more…)
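One well-known bridge from codes to lattices is "Construction A": given a linear code C over Z_q, the set of all integer vectors that reduce modulo q to a codeword of C forms a lattice. A minimal sketch, using the tiny length-3 repetition code as a stand-in example (not a code from this project):

```python
q = 2
# The length-3 repetition code over Z_2 has exactly two codewords.
codewords = {(0, 0, 0), (1, 1, 1)}

def in_construction_a_lattice(x):
    """x is a point of the Construction A lattice iff x mod q is a codeword."""
    return tuple(xi % q for xi in x) in codewords

print(in_construction_a_lattice((3, 5, 7)))  # (1, 1, 1) mod 2 -> True
print(in_construction_a_lattice((2, 4, 6)))  # (0, 0, 0) mod 2 -> True
print(in_construction_a_lattice((1, 0, 1)))  # (1, 0, 1) mod 2 -> False
```

The resulting set is closed under addition and contains q·Z^3, which is what makes it a lattice.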
The notion of computational hardness used in the above two projects is NP-hardness. Although NP-hardness is often considered evidence of the intractability of a problem, we remark that NP-hardness is a worst-case complexity notion: if a problem is NP-hard, then (under standard complexity assumptions) no efficient algorithm can correctly solve all instances of the problem. Still, most instances of an NP-hard problem may be easy to solve. In cryptography, where one wants to build cryptographic functions and protocols that are hard to break, the more stringent notion of average-case hardness is required: if the key to a cryptographic function is chosen at random, then the function is hard to break with high probability. The goal of this project is to study the average-case hardness of lattice problems, their cryptographic applications, and methods to provably derive hard-on-average problem instances from lattice problems that are hard to solve in the worst case. (more…)
In the last few decades cryptography has evolved from a specialized topic, mostly concerned with achieving secret communication, into a broad research field addressing security issues in a wide range of applications. Modern cryptography adopts a modular approach to the study of the security issues arising in high-level applications: first, computationally hard problems are used to build secure cryptographic primitives, i.e., cryptographic functions solving relatively simple and application-independent security problems. Then, cryptographic primitives are combined into higher-level protocols, in a way largely independent of the specific implementation of the primitives. My research spans a large portion of the spectrum in cryptographic research, from the design of simple cryptographic primitives (like hash functions or signature schemes) to the construction of higher-level protocols, like zero-knowledge proof systems or secure key distribution schemes. Below is a list of specific areas I have been working on.
Hash functions compress a long string x into a short digest H(x), and are used in many practical applications. Some applications require hash functions with special security or algorithmic properties. This project explores various extensions of basic collision-resistant hash functions, like hash functions H(x) that hide all partial information about x (and still allow for public verifiability, e.g., given x and h, anybody can check that indeed h = H(x)), or hash functions that allow for fast update operations, e.g., given x, H(x), and an edit operation to be applied to x (to obtain a related string x’), one can compute H(x’) much faster than computing it from scratch. (more…)
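As a rough illustration of the fast-update idea (a simplified sketch for intuition only, not one of the constructions studied in this project, and not meant to be secure), one can hash a string block by block and combine the block hashes, so that editing one block requires re-hashing only that block and the combining step:

```python
import hashlib

BLOCK = 4  # toy block size, chosen only for the example

def block_hashes(data: bytes):
    """Hash each fixed-size block of the input separately."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).digest() for b in blocks]

def digest(hashes):
    """Combine the per-block hashes into a single short digest."""
    return hashlib.sha256(b"".join(hashes)).hexdigest()

data = b"abcdefgh"
hashes = block_hashes(data)
h = digest(hashes)

# Edit confined to the second block: re-hash only that block and the
# top-level combination, instead of the whole (possibly huge) string.
new_data = b"abcdEFGH"
hashes[1] = hashlib.sha256(new_data[4:8]).digest()
h_updated = digest(hashes)

assert h_updated == digest(block_hashes(new_data))  # matches from-scratch hash
```

For a string of n blocks, the update touches one block hash plus the combining step, rather than all n blocks.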
A digital signature is the electronic counterpart of a real-life signature: it can be easily verified as authentic by everybody, but only the legitimate signer can efficiently produce it. Digital signatures are probably the most important and widely used cryptographic primitive enabled by public-key technology, and they are the building block of many modern distributed computer applications, like electronic contract signing, certified email, and secure web browsing. More advanced applications often require more than simple digital signatures. This project explores various extensions of the basic concept of digital signatures, providing formal security definitions, solutions, and rigorous security analyses. (more…)
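For intuition, the asymmetry between verifying and signing can be seen in "textbook" RSA signatures; this is a pedagogical toy with tiny made-up parameters (real schemes add hashing and padding and use much larger keys, and this project is not limited to RSA-based signatures):

```python
# Toy "textbook RSA" signature: anyone holding the public key (n, e) can
# verify a signature, but producing one requires the private exponent d.
# The parameters below are insecurely small, for demonstration only.

p, q = 61, 53                       # small secret primes (demo only)
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(m: int) -> int:
    """Only the signer, who knows d, can compute this."""
    return pow(m, d, n)

def verify(m: int, sig: int) -> bool:
    """Anyone can check a signature using only the public key (n, e)."""
    return pow(sig, e, n) == m

sig = sign(42)
print(verify(42, sig))  # True
print(verify(43, sig))  # False: the signature does not match this message
```

The formal security definitions studied in this project make precise what "only the legitimate signer can produce a signature" means against adversaries who may first request signatures on messages of their choice.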