for files over a telephone line. Of course, anyone can dial the phone number of the file server; A's claim is that it is legitimately entitled to access the files.
A is a television owner, and B is a cable company. A's claim is that he is a paying customer and that the broadcast should be descrambled for him.
These are really three examples of the same thing. Player A must be able to authenticate himself (or prove his identity) to B. Any potential impersonator A* must have only a negligible probability of convincing B that he is A.
Goldwasser et al. (1989) and Babai and Moran (1988) initiated a complexity-theoretic study of such problems. They defined the class of sets with interactive proof systems and suggested that this definition correctly captures the notion of a set in which membership of an element can be verified efficiently. A proof system for a set L is an interactive protocol between a prover A (with unlimited computing power) and a verifier B (limited to probabilistic polynomial-time computation). If x ∈ L, then A is able to convince B to accept x with high probability. If x ∉ L, then no player A*, even if he deviates from the legitimate behavior of A, should be able to convince B to accept x with more than negligible probability. The more general notion of a multiprover interactive proof system was put forth by Ben-Or et al. (1988); in these protocols, the role of the prover is played by several machines A_1, ..., A_m, each with unlimited computing power. Two ingredients of proof systems are absolutely vital in making the theory interesting: interaction (i.e., the fact that prover(s) and verifier can exchange polynomially many rounds of messages before the verifier decides whether or not to accept), and randomness (i.e., the fact that the verifier can toss coins at any point in the protocol and that cheating provers do not know the outcomes of these tosses at the start of the protocol).
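The two conditions above (completeness and soundness) are often stated with explicit probability thresholds. The following sketch uses the common 2/3 and 1/3 bounds; this is one standard convention, not necessarily the exact formulation intended here:

```latex
% Completeness: a true membership statement x \in L has a convincing proof.
x \in L \;\Longrightarrow\; \Pr\bigl[(A, B) \text{ accepts } x\bigr] \;\ge\; \tfrac{2}{3}

% Soundness: no prover strategy A^* can sell a non-member x \notin L.
x \notin L \;\Longrightarrow\; \forall A^{*}:\; \Pr\bigl[(A^{*}, B) \text{ accepts } x\bigr] \;\le\; \tfrac{1}{3}
```

Repeating the protocol and having the verifier take a majority vote drives both error probabilities down exponentially, which is what justifies the informal phrases "high probability" and "negligible probability."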
Consider the previous examples, in which A is required to prove his identity to B. Suppose we use a trivial protocol in which B is given a table of users and passwords, A is given only his own password, and, to prove his identity, A simply reveals the password to B. (Note the absence of both interaction and randomness.) In this scheme, A transfers full knowledge of his identity proof to B: an eavesdropper (or even B himself) can turn around and claim to be A by presenting the same proof. Is there a proof system that achieves the same goal but transfers no knowledge to B except