Thursday, October 11, 2012, at 2:30 P.M.
Room 506 (CIT 5th Floor)
In cryptography, our goal is to design protocols that withstand malicious behavior by an adversary. Traditionally, the focus has been on a setting where honest users follow their protocol exactly, without fault. But what if an adversary can induce faults, for example through a physical attack that changes the state of a user's computation, forcing the user to accept when he should reject, or causing him to use a modified secret key? Can any security guarantees still be given when such errors occur? My PhD work studies the implications of various types of errors and develops techniques that protect against them.
I have studied the following topics, covering different error scenarios: (1) cryptography with imperfect hardware, where the adversary can cause a cryptographic device to leak secret information and can tamper with the device's memory; (2) secure delegation protocols, where a user delegates computation to an untrusted server that may introduce errors.
To highlight some of my results:
(1) I gave a generic construction that secures *any* cryptographic functionality against continual memory tampering and leakage in the *split-state model*. My main tool is a non-malleable code that is also leakage resilient in this model, which resolves a central open problem from prior work (Dziembowski et al., ICS '10).
(2) I developed new delegation protocols that allow a user, who stores only a short certificate of his (potentially very large) data, to delegate computation on the data to the cloud and then verify the outcome in time *sub-linear* in the data size.
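To give a concrete picture of the split-state setting in (1): the codeword is stored in two separate memory parts, and the adversary may tamper with each part independently but never sees both together. The sketch below illustrates only this *tampering model*; the XOR sharing used here is deliberately simple and is NOT a non-malleable code (the actual constructions are far more involved).

```python
import os

def encode(msg: bytes) -> tuple:
    """Split msg into two shares, stored in two separate memory parts."""
    left = os.urandom(len(msg))
    right = bytes(a ^ b for a, b in zip(left, msg))
    return left, right

def decode(left: bytes, right: bytes) -> bytes:
    """Recombine the two shares to recover the message."""
    return bytes(a ^ b for a, b in zip(left, right))

def tamper(left, right, f, g):
    """Split-state tampering: the adversary picks two functions f and g
    and modifies each memory part independently: (L, R) -> (f(L), g(R))."""
    return f(left), g(right)

msg = b"secret"
L, R = encode(msg)
assert decode(L, R) == msg

# Flipping one bit of R flips the same bit of the decoded message,
# which is exactly the malleability a non-malleable code must rule out:
_, R_bad = tamper(L, R, lambda x: x, lambda x: bytes([x[0] ^ 1]) + x[1:])
assert decode(L, R_bad) != msg
```

A non-malleable code guarantees that any such independent tampering yields a decoding that is either the original message or something unrelated to it, never a controlled modification like the bit-flip above.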
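To illustrate the flavor of sub-linear verification in (2): the classic Merkle-tree technique lets the user keep only a single hash as a certificate and check any data block against it using a logarithmic-size proof. This is a toy sketch of that general idea, not the specific protocols from the talk.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(blocks):
    """Return the tree as a list of levels: level[0] = leaf hashes,
    last level = [root]. Assumes a power-of-two number of blocks."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Server's authentication path for a block: O(log n) sibling hashes."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(root, block, index, path):
    """User checks a claimed block against the short certificate,
    in time logarithmic in the total data size."""
    digest = h(block)
    for sibling in path:
        if index % 2 == 0:
            digest = h(digest + sibling)
        else:
            digest = h(sibling + digest)
        index //= 2
    return digest == root

blocks = [bytes([i]) * 32 for i in range(8)]  # toy data: 8 blocks
levels = build_tree(blocks)
root = levels[-1][0]          # the user's short certificate
proof = prove(levels, 3)      # computed by the (untrusted) server
assert verify(root, blocks[3], 3, proof)
assert not verify(root, b"tampered" * 4, 3, proof)
```

The design point is that the user's storage (one hash) and verification time (one path) are both independent of, or logarithmic in, the data size, while the untrusted server holds the data itself.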
In the talk, I will elaborate on these two lines of work and discuss potential future directions.
Host: Anna Lysyanskaya