Past Work

I am a Principal Engineer at Intel’s Strategic CAD Lab working on distributed and concurrent algorithms and fault tolerance. I am deeply interested in the fundamental correctness of algorithms, and have worked on models, proof systems, and verification tools for many problems including concurrent data structures, security protocols, database systems, and hardware algorithms (including the checking of cache coherence protocols for HP servers). I am also interested in the design of fault-tolerant distributed algorithms. My most recent work is about recommendation systems in the face of malicious or colluding users, with applications to electronic commerce and web search.

My work has been concentrated in fault-tolerant distributed algorithms, system modeling and verification, and new ways of thinking about distributed computation.

Fault-tolerant distributed algorithms

Electronic commerce and web search

My colleagues and I have written a string of papers on recommendation systems and their application to electronic commerce and web search. Amazon and eBay are popular examples of electronic commerce, and both depend on recommendation systems. At Amazon, when a buyer is checking out (just before actually making a purchase), the system shows the buyer a list of other products that might interest them; Amazon tries to deduce what buyers like from their past purchases, and makes recommendations based on this information. At eBay, after each transaction, the buyer and seller are allowed to post comments about each other, and buyers and sellers routinely study these comments, which are really recommendations, before engaging in transactions with others.

The question is this: recommendation systems work well when users are honest, but what should you do when you know there are dishonest users hoping to entice you into making bad decisions or buying bad products, possibly even lying about their own experience with products or other users? We want to enable users to benefit from the recommendations of others, collaborating to find good products quickly, without falling prey to bad recommendations. We show that in a number of different contexts, remarkably simple randomized algorithms yield surprisingly good results; the sketch below illustrates the flavor of such an algorithm.
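
The sketch is a toy simulation written for illustration only: an undecided user either tries a random product or follows a random recommendation, so honest discoveries spread even though malicious users always give bad advice. The population sizes, the coin-flip rule, and the pessimistic treatment of advice from undecided users are assumptions made for this sketch, not the published algorithms or their analysis.

    import java.util.Arrays;
    import java.util.Random;

    public class RandomizedSearchForGoodProduct {

        public static void main(String[] args) {
            Random rng = new Random(42);
            int products = 1000;
            int goodProducts = 10;            // products 0..9 are the "good" ones
            int honestUsers = 100;
            int maliciousUsers = 100;         // always steer others toward bad products

            // For each honest user, the good product it has found so far (-1 if none).
            int[] found = new int[honestUsers];
            Arrays.fill(found, -1);

            int rounds = 0;
            while (anyUndecided(found)) {
                rounds++;
                for (int u = 0; u < honestUsers; u++) {
                    if (found[u] >= 0) continue;          // already satisfied
                    int candidate;
                    if (rng.nextBoolean()) {
                        // Explore: try a product chosen uniformly at random.
                        candidate = rng.nextInt(products);
                    } else {
                        // Ask a random user (honest or malicious) for a recommendation.
                        int advisor = rng.nextInt(honestUsers + maliciousUsers);
                        if (advisor < honestUsers && found[advisor] >= 0) {
                            candidate = found[advisor];   // honest, useful advice
                        } else {
                            // Malicious advice (or no advice yet): modeled as a bad product.
                            candidate = goodProducts + rng.nextInt(products - goodProducts);
                        }
                    }
                    if (candidate < goodProducts) {
                        found[u] = candidate;             // trying it reveals that it is good
                    }
                }
            }
            System.out.println("All honest users found a good product after "
                    + rounds + " rounds");
        }

        private static boolean anyUndecided(int[] found) {
            for (int f : found) {
                if (f < 0) return true;
            }
            return false;
        }
    }

Even with half the population lying, every honest user finds a good product after a modest number of rounds, because one lucky discovery propagates through honest recommendations.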

At HP, we have applied our ideas to web search, since a search engine is just another instance of a recommendation system, and implemented them on the company’s internal search engine. It is well known that the kinds of algorithms used by popular search engines do not work well on internal corporate webs. We have patented some ideas on how to view a user’s response to the search engine’s results as a recommendation, and how to use these recommendations to improve the search engine’s performance.

Lock-free data structures

In parallel or distributed systems, locks are a well-known source of trouble, especially if processors can crash or block while holding them. When the performance penalty is not too severe, it is much better to avoid locks altogether and use “lock-free” or “wait-free” data structures. My colleagues and I have studied the cost of wait-free data structures and how to design them.
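
As a concrete illustration of the lock-free idea, below is a minimal sketch (in Java) of the classic Treiber stack. It is not one of the data structures from this work, but it shows the essential technique: every update is a single compare-and-swap that is simply retried if another thread interfered, so no thread is ever stuck waiting for a thread that stopped while holding a lock.

    import java.util.concurrent.atomic.AtomicReference;

    // A minimal lock-free (Treiber) stack: push and pop retry a compare-and-swap
    // on the head pointer until it succeeds.
    public class LockFreeStack<T> {

        private static class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>();

        public void push(T value) {
            Node<T> node = new Node<>(value);
            while (true) {
                Node<T> current = head.get();
                node.next = current;
                // Succeeds only if no other thread changed head in the meantime;
                // otherwise loop and try again.
                if (head.compareAndSet(current, node)) {
                    return;
                }
            }
        }

        public T pop() {
            while (true) {
                Node<T> current = head.get();
                if (current == null) {
                    return null;                  // empty stack
                }
                if (head.compareAndSet(current, current.next)) {
                    return current.value;
                }
            }
        }

        public static void main(String[] args) {
            LockFreeStack<Integer> stack = new LockFreeStack<>();
            stack.push(1);
            stack.push(2);
            System.out.println(stack.pop() + ", " + stack.pop());   // prints 2, 1
        }
    }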

Fault-tolerant databases

When a database crashes, possibly as part of a larger system failure, it must run a recovery process to reconstruct the state of the database at the time of the crash. The only information available to the recovery process is what made it to the disk before the crash; everything in memory is lost. It is not hard to demonstrate that the order in which pages are written back from memory to the disk is crucial to the system’s ability to recover. The industry has many techniques for recovery, all finely tuned and engineered to work with different classes of database operations. My colleagues and I have been refining a model of the recovery process with which we can describe known recovery techniques, explain why they work, and extend these techniques to new domains.
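
To make the ordering issue concrete, the sketch below shows the classic write-ahead discipline that many recovery techniques rely on: a page may be written back to disk only after the log records describing its updates have reached the disk log. This is a textbook illustration with invented names (update, writeBack), not the recovery model described above.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class WriteAheadSketch {

        // Log records and pages already on disk survive a crash.
        static List<String> diskLog = new ArrayList<>();
        static Map<Integer, String> diskPages = new HashMap<>();

        // Log records and dirty pages still in memory are lost in a crash.
        static List<String> memoryLog = new ArrayList<>();
        static Map<Integer, String> memoryPages = new HashMap<>();

        // For each page, the position in the overall log of its latest update record.
        static Map<Integer, Integer> lastRecordFor = new HashMap<>();

        static void update(int page, String value) {
            memoryPages.put(page, value);
            memoryLog.add("page " + page + " := " + value);
            lastRecordFor.put(page, diskLog.size() + memoryLog.size() - 1);
        }

        static void writeBack(int page) {
            // Write-ahead rule: flush the log at least through this page's latest
            // record before the page itself reaches disk.
            while (diskLog.size() <= lastRecordFor.getOrDefault(page, -1)) {
                diskLog.add(memoryLog.remove(0));
            }
            diskPages.put(page, memoryPages.get(page));
        }

        public static void main(String[] args) {
            update(1, "A");
            update(2, "B");
            writeBack(2);   // forces both records onto the disk log before page 2
            // If the system crashed here, memory would be lost, but every page on
            // disk is explained by records on the disk log, so recovery can
            // reconstruct a consistent state.
            System.out.println("disk log:   " + diskLog);
            System.out.println("disk pages: " + diskPages);
        }
    }

Writing page 2 back before its log record, by contrast, would leave the disk with a page whose history the recovery process could not reconstruct.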

System modeling and verification

I have spent a lot of time thinking about how to model distributed and parallel systems, and how to reason about the correctness of algorithms running on such systems.

During my graduate work at MIT, Professor Nancy Lynch and I defined the Input/Output Automaton model of computation, a particularly simple and elegant model originally intended for reasoning about asynchronous distributed algorithms, together with a way of using refinement maps to construct hierarchical correctness proofs for such algorithms. Since then, Professor Lynch and her students have continued to elaborate the model and to build languages and tools around it, to the point that it is now used in academic, industrial, and military contexts around the world.
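
Informally, a refinement map sends each state of an implementation automaton A to a state of a specification automaton S so that initial states map to initial states and every step of A corresponds either to a step of S or to no visible change at all. One common way to write the obligations, using Init and Steps as illustrative names rather than the exact I/O Automaton definitions, is:

    \begin{align*}
      s \in \mathit{Init}_A \;&\Rightarrow\; f(s) \in \mathit{Init}_S \\
      (s, a, s') \in \mathit{Steps}_A \;&\Rightarrow\;
        (f(s), a, f(s')) \in \mathit{Steps}_S
        \ \text{ or }\ \bigl(a \text{ is internal and } f(s) = f(s')\bigr)
    \end{align*}

Composing such maps level by level is what gives the correctness proofs their hierarchical structure.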

At DEC, Compaq, and HP, I have been eager to build models of company products and reason about the correctness of their designs.

One significant body of work involved reasoning about the correctness of cache coherence protocols for company servers. My colleagues and I used the TLA+ language to model the memory models of the DEC Alpha and Intel IA-64 processors, and to model the cache coherence protocols of the multiprocessor systems the company was designing for these processors. The vast majority of this work was done by hand, figuring out the system invariants and proving that they were indeed invariant. While we never actually finished a complete proof before the system was shipped, the engineers told us they were grateful for the rigorous analysis, and that their confidence in their algorithms was significantly improved. We were pleased to see how easily the engineers read our models written in TLA+, and to hear engineers using the terminology we had fashioned when discussing their algorithms at the whiteboard.
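
In TLA+ terms, proving that a property is indeed invariant typically means discharging the two standard obligations below, where Init is the model’s initial condition, Next its next-state relation, Inv the candidate invariant, and the prime refers to the next state:

    \begin{align*}
      \mathit{Init} &\Rightarrow \mathit{Inv} \\
      \mathit{Inv} \land \mathit{Next} &\Rightarrow \mathit{Inv}'
    \end{align*}

Together the two implications show that the invariant holds in every reachable state; the hand work lies in finding an Inv strong enough for the second implication to go through.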

Near the end of this effort, one of my colleagues wrote an explicit-state model checker for TLA+, and that was a significant breakthrough. I became interested in building tools for finding bugs in protocols, and started work with colleagues on our own model checker. The idea was to describe systems and invariants in a higher-order logic, to compile the model-checking question into a large Boolean formula, to hand that formula to a satisfiability checker, and to use the resulting satisfying assignment to construct the sequence of states leading the system to a state violating the invariant.
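
The sketch below shows the shape of the bug-finding loop in explicit-state form, for illustration: it explores the reachable states and, on an invariant violation, walks back through predecessor pointers to produce the offending trace. Our own tool performed this search by encoding it as a Boolean satisfiability problem rather than by enumerating states, so the sketch conveys the goal of the tool, not its implementation.

    import java.util.ArrayDeque;
    import java.util.Collection;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.function.Predicate;

    // Breadth-first search over the reachable states of a system given by its
    // initial states and next-state function; returns a shortest trace to a
    // state violating the invariant, or null if the invariant always holds.
    public class InvariantChecker {

        public static <S> List<S> findViolation(Collection<S> initial,
                                                Function<S, Collection<S>> next,
                                                Predicate<S> invariant) {
            Map<S, S> parent = new HashMap<>();      // state -> predecessor on some path
            Deque<S> frontier = new ArrayDeque<>();
            for (S s : initial) {
                if (!parent.containsKey(s)) {
                    parent.put(s, null);             // initial states have no predecessor
                    frontier.add(s);
                }
            }
            while (!frontier.isEmpty()) {
                S s = frontier.remove();
                if (!invariant.test(s)) {
                    // Walk predecessor pointers back to an initial state.
                    LinkedList<S> trace = new LinkedList<>();
                    for (S t = s; t != null; t = parent.get(t)) {
                        trace.addFirst(t);
                    }
                    return trace;
                }
                for (S t : next.apply(s)) {
                    if (!parent.containsKey(t)) {
                        parent.put(t, s);
                        frontier.add(t);
                    }
                }
            }
            return null;                             // invariant holds everywhere reachable
        }

        public static void main(String[] args) {
            // Toy system: a counter mod 8 starting at 0, with the (false) claim
            // that it never reaches 5; the checker prints the trace 0, 1, ..., 5.
            List<Integer> bug = findViolation(
                    List.of(0),
                    n -> List.of((n + 1) % 8),
                    n -> n != 5);
            System.out.println("counterexample trace: " + bug);
        }
    }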

One application used this tool to find bugs in programs intended to run on the IA-64 memory model. The idea was to take a piece of machine code and a property that the code is supposed to satisfy, turn them into a finite-state machine and an invariant, take a model of the IA-64 memory model, and simply ask whether there is a behavior of the program allowed by the memory model that violates the invariant. Ultimately, we hoped that one source of machine code would be the output of compilers, but at this point HP Labs terminated work on verification tools, and the project was never completed.

In other projects, my friends and I have found bugs in proposals submitted to the PCI-X standards body, in multiprocessor networking algorithms, and in database models, and we have routinely used these techniques to find bugs in our own distributed algorithms and data structures.

While I am fascinated with the prospect of building tools in the future, I clearly have much more experience using tools than building them. I find that I can pick up new tools quickly, and my two decades of experience writing proofs and reasoning rigorously about algorithms helps me come up to speed and make progress rapidly.

New ways to think about distributed computing

Topology

My colleagues and I have spent years refining new ways to use simple ideas from topology to model and reason about fault-tolerant distributed computation. Speaking informally, the idea is to think of the global states of the system as forming a simplicial complex. At first, considering only the initial states, the complex might represent a multi-dimensional sphere. As time progresses and some processors take steps while others fail, this sphere evolves into dramatically more complex surfaces: holes appear as processors fail, until at some point the surface rips and becomes disconnected. The marvelous observation is that the amount of time required to solve problems like k-set agreement in distributed systems corresponds directly to the connectivity of this simplicial complex. Our work started out as purely combinatorial arguments, but we are now nearly at the point where we can axiomatize system properties and reason about connectivity directly from these axioms.
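
One concrete instance of this correspondence, stated roughly here for the synchronous model in which at most f processors may crash, is the number of rounds needed to solve k-set agreement:

    \[
      \left\lfloor f/k \right\rfloor + 1 \ \text{rounds}.
    \]

Setting k = 1 recovers the familiar f + 1 rounds for ordinary consensus; tolerating more disagreement (a larger k) buys a proportionally faster algorithm.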

Knowledge

My colleagues and I have also worked to show how simple mathematical models of knowledge and belief can be used to design fault-tolerant distributed algorithms and reason about their correctness. In any given global state of a distributed system, we can think of a processor as considering any number of other global states to be candidates for the current state. Specifically, the processor considers possible all global states in which it has the same local state as it does in the current state. The processor is said to know a fact in a global state if that fact is true in all global states it considers possible. This simple definition of knowledge can be extended and applied in many different contexts. In the case of consensus or agreement algorithms, for example, we can use notions of knowledge to design consensus algorithms that are optimal in a very strong sense, and to prove results about the complexity of solving consensus in various kinds of systems.
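
Written as a formula, this definition says that processor i knows a fact phi in global state s exactly when phi holds in every global state that agrees with s on i's local state (here t_i denotes i's local state in global state t):

    \[
      s \models K_i\,\varphi
      \quad\iff\quad
      t \models \varphi \ \text{ for every global state } t \text{ with } t_i = s_i .
    \]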

Security

Finally, I have spent a little time thinking about security. In one piece of work with Martin Abadi, we looked at the famous Burrows-Abadi-Needham (BAN) logic for authentication protocols, developed a computational model of belief for the logic, and proved the logic’s soundness with respect to this model. In another piece of work, I worked with Joe Halpern and Yoram Moses to take the famous notion of a zero-knowledge protocol and understand this definition in terms of traditional models of knowledge and belief. The well-known cryptographic definition of zero knowledge involves reasoning about Turing machines and the indistinguishability of probability distributions. In contrast, we were able to capture the same notion with a single formula written in a logic of knowledge based on probabilistic computation.