BB84, eavesdropper detection, and the classical protocols quantum computers will break
Every secret communication between two parties — call them Alice and Bob — relies on a shared key. The key could be a physical object (a padlock), but in the digital world it is a random bit-string known only to them. The message is encrypted using the key and decrypted by anyone who holds the same key.
The problem: Alice and Bob are physically separated. They must somehow agree on a key without meeting — and they must do so over a public channel that an eavesdropper (Eve) can observe. This is the key distribution problem, which stood as the central unsolved problem of classical cryptography for two thousand years.
Critically, the goal is not just to keep the key secret — it is to detect Eve's presence. If Alice and Bob can confirm that Eve has not learned the key (even partially), they can use it safely. If they detect too much interference, they discard the key and try again. Quantum key distribution (QKD) is the first method in history that makes eavesdropper detection physically guaranteed — not just computationally hard to avoid, but provably impossible.
The ideal encryption scheme is the one-time pad (OTP), proved by Claude Shannon in 1949 to be the only perfectly secret cipher. If Alice and Bob share a random key string \(k\) of the same length as the message \(m\), encryption is simply XOR:
\[c = m \oplus k, \qquad m = c \oplus k\]
An adversary who sees the ciphertext \(c\) but not \(k\) gains zero information about \(m\): every possible plaintext is equally likely. Shannon proved this rigorously: perfect secrecy requires a key at least as long as the message, and the key must never be reused.
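The XOR encryption and decryption above can be sketched in a few lines of Python (function and variable names here are illustrative):

```python
import os

def otp_apply(data: bytes, key: bytes) -> bytes:
    # One-time pad: bitwise XOR of the data with an equally long random key.
    # Because XOR is its own inverse, the same function encrypts and decrypts.
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))        # uniformly random, never reused
ciphertext = otp_apply(message, key)  # c = m XOR k
recovered = otp_apply(ciphertext, key)  # m = c XOR k
assert recovered == message
```

Note that the security argument depends entirely on `key` being uniformly random, secret, and used once; the code itself is trivial.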
The one-time pad is useless in practice on its own: Alice and Bob would need to securely exchange a key as long as every message they ever send. QKD solves exactly this problem: it generates a fresh shared random key on demand, verified to be unknown to Eve. Combined with the one-time pad, it achieves information-theoretic security: security that holds even against an adversary with unlimited computing power.
Proposed by Charles Bennett and Gilles Brassard in 1984, BB84 exploits two fundamental quantum facts:

- measuring a quantum state in an incompatible basis irreversibly disturbs it, and
- an unknown quantum state cannot be copied (the no-cloning theorem).

Together, these guarantee that any eavesdropper leaves detectable traces.
BB84 uses four states drawn from two conjugate bases. Alice encodes each bit using a randomly chosen basis:
| Bit (aᵢ) | Basis (bᵢ) | State sent | Photon polarisation |
|---|---|---|---|
| 0 | Z (rectilinear) | \|0⟩ | Horizontal \|H⟩ |
| 1 | Z (rectilinear) | \|1⟩ | Vertical \|V⟩ |
| 0 | X (diagonal) | \|+⟩ | +45° diagonal \|45⟩ |
| 1 | X (diagonal) | \|−⟩ | −45° diagonal \|135⟩ |
States within the same basis are orthogonal and perfectly distinguishable. States from different bases — say |0⟩ and |+⟩ — are non-orthogonal: \(\langle 0|+\rangle = 1/\sqrt{2} \neq 0\). This non-orthogonality is the source of BB84's security.
Alice generates two random bit-strings: \(a = a_1 a_2 \ldots a_n\) (her secret bits) and \(b = b_1 b_2 \ldots b_n\) (her random basis choices, each X or Z). She encodes qubit \(i\) according to the table above. In a photonic implementation, each qubit is a single photon whose polarisation carries the state.
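Alice's preparation step can be sketched as follows (a toy model, with illustrative names; states are represented as labels rather than actual quantum objects):

```python
import secrets

n = 16
alice_bits  = [secrets.randbelow(2) for _ in range(n)]   # a_i: secret bits
alice_bases = [secrets.choice("ZX") for _ in range(n)]   # b_i: random bases

# Map (bit, basis) to the state label from the table above.
STATE = {(0, "Z"): "|0>", (1, "Z"): "|1>",
         (0, "X"): "|+>", (1, "X"): "|->"}
sent_states = [STATE[(a, b)] for a, b in zip(alice_bits, alice_bases)]
```

In a real implementation each entry of `sent_states` would correspond to a single polarised photon, not a classical label.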
Alice sends the prepared qubit states to Bob over a quantum channel (e.g. optical fibre). The states are fragile — any interaction with the environment disturbs them, potentially in a detectable way.
Eve intercepts a fraction \(f\) of the transmitted qubits. For each intercepted qubit, she guesses a basis (X or Z) at random, measures the qubit in that basis, then resends the (now post-measurement) state to Bob. Eve has probability 1/2 of guessing the correct basis. When she guesses correctly, the state passes through undisturbed. When she guesses wrong, she collapses the qubit to a random state in the wrong basis, introducing errors at Bob's end.
Bob independently generates his own random basis string \(b' = b'_1 b'_2 \ldots b'_n\) and measures each received qubit in his chosen basis, recording the outcome (0 or 1).
Alice and Bob publicly announce their basis strings \(b\) and \(b'\) (but not the measurement results). They discard every bit position where their bases differ. Only matching-basis positions are retained — approximately half. In matching positions, if Eve was absent, Bob's result perfectly agrees with Alice's bit. This subset is the sifted key.
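The sifting step can be modelled classically for an ideal channel (no Eve): Bob's result equals Alice's bit whenever their bases match, and is random otherwise. A sketch, with illustrative names:

```python
import secrets

n = 20
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.choice("ZX") for _ in range(n)]
bob_bases   = [secrets.choice("ZX") for _ in range(n)]

# Ideal channel: matching bases reproduce Alice's bit, mismatches are random.
bob_results = [a if ab == bb else secrets.randbelow(2)
               for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Publicly compare bases and keep only the matching positions.
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
sifted_alice = [alice_bits[i] for i in keep]
sifted_bob   = [bob_results[i] for i in keep]
assert sifted_alice == sifted_bob    # without Eve, the sifted keys agree
```

On average `keep` retains about half the positions, matching the protocol description.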
Alice and Bob publicly reveal a random sample of their sifted key bits and compare them. If Eve intercepted a fraction \(f\) of qubits, the expected error rate in this sample is:
\[\text{error rate} = \frac{f}{4}\]
Derivation: Eve guesses the wrong basis with probability 1/2. Of those wrong-basis measurements, half produce an incorrect state that causes Bob's result to differ from Alice's. So errors appear in \(f \times \frac{1}{2} \times \frac{1}{2} = f/4\) of all sifted bits. If the observed error rate exceeds a threshold, Alice and Bob abort and start over. If it is acceptably low, they proceed.
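The \(f/4\) error rate can be checked with a Monte Carlo simulation of the intercept-resend attack (a toy model; names and parameters are illustrative):

```python
import random

def bb84_error_rate(n=200_000, f=1.0, seed=0):
    """Simulate intercept-resend on n qubits; return the sifted-key error rate."""
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n):
        bit   = rng.randrange(2)
        basis = rng.choice("ZX")
        state_basis, state_bit = basis, bit
        if rng.random() < f:                 # Eve intercepts this qubit
            eve_basis = rng.choice("ZX")
            if eve_basis != basis:           # wrong basis: state collapses
                state_basis = eve_basis
                state_bit = rng.randrange(2)
        bob_basis = rng.choice("ZX")
        if bob_basis != basis:
            continue                          # discarded during sifting
        sifted += 1
        # Bob gets the forwarded bit if bases agree, a coin flip otherwise.
        bob_bit = state_bit if bob_basis == state_basis else rng.randrange(2)
        errors += (bob_bit != bit)
    return errors / sifted

rate = bb84_error_rate(f=1.0)   # full interception: expect roughly 0.25
assert abs(rate - 0.25) < 0.01
```

With `f=0.5` the same simulation gives an error rate near 0.125, in line with \(f/4\).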
Even with a small \(f\), a fraction \(f/4\) of the remaining key bits may be erroneous. Alice and Bob must identify and eliminate these without leaking useful information to Eve. The standard method is a binary parity search: divide the key string into halves, publicly compare the parity (XOR of all bits) of each half, and recurse into whichever half shows a parity mismatch. One bit of each block whose parity is announced is discarded, to prevent Eve gaining information from the parity announcements. This continues until all errors are localised and removed.
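The parity search can be sketched for the single-error case (real reconciliation protocols such as Cascade iterate this over shuffled blocks; names here are illustrative, and the discarding of announced bits is omitted for brevity):

```python
def parity(bits):
    return sum(bits) % 2

def locate_error(alice, bob):
    """Binary search for the position where alice and bob differ, comparing
    parities of sub-blocks as the parties would over the public channel."""
    lo, hi = 0, len(alice)            # search within [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid                  # parity mismatch: error in left half
        else:
            lo = mid                  # parities agree: error in right half
    return lo

alice = [1, 0, 1, 1, 0, 1, 0, 0]
bob   = [1, 0, 1, 0, 0, 1, 0, 0]     # position 3 flipped by interference
assert locate_error(alice, bob) == 3
```

Each round halves the search interval, so a single error is found in \(O(\log L)\) parity announcements.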
After error reconciliation Alice and Bob have nearly identical strings, but Eve has partial knowledge of some bits (from her correct-basis measurements). Privacy amplification compresses the string to a shorter one from which Eve's information is provably negligible. If the string has length \(L\) and Eve is estimated to know at most \(k\) bits, the compressed key of length \(L - k - s\) (for a security parameter \(s\)) is information-theoretically safe. This is typically done by hashing: publicly agreeing on a random hash function and taking its output as the final key.
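As a toy sketch of privacy amplification, the reconciled string can be compressed with a publicly chosen random binary matrix, a 2-universal hash (illustrative only; production systems typically use Toeplitz hashing for efficiency):

```python
import secrets

def amplify(key, out_len, matrix=None):
    """Hash a bit-list down to out_len bits via a random binary matrix
    (multiplication mod 2), publicly agreed after reconciliation."""
    L = len(key)
    if matrix is None:
        matrix = [[secrets.randbelow(2) for _ in range(L)]
                  for _ in range(out_len)]
    return [sum(r * k for r, k in zip(row, key)) % 2 for row in matrix]

reconciled = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]   # L = 12 bits
final_key = amplify(reconciled, out_len=6)          # L - k - s = 6, say
assert len(final_key) == 6
```

The output length is chosen from the estimate of Eve's knowledge \(k\) and the security parameter \(s\), exactly as described above.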
A small trace through the protocol with 10 bits shows the mechanics clearly. Steps are rows; each column is one transmitted qubit.
| Row | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Alice's bits (aᵢ) | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
| Alice's bases (bᵢ) | Z | X | X | Z | Z | X | Z | X | Z | X |
| State Alice sends | \|1⟩ | \|+⟩ | \|−⟩ | \|0⟩ | \|1⟩ | \|−⟩ | \|0⟩ | \|+⟩ | \|1⟩ | \|+⟩ |
| Bob's bases (b'ᵢ) | Z | Z | X | X | Z | X | X | X | X | Z |
| Bob's results | 1 | rand | 1 | rand | 1 | 1 | rand | 0 | rand | rand |
| Bases match? | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Sifted key | 1 | — | 1 | — | 1 | 1 | — | 0 | — | — |
Alice and Bob retain bits 1, 3, 5, 6, 8 — where their bases matched. The sifted key is 1 1 1 1 0. Without Eve, these agree perfectly. A subset (say bits 1 and 3) is publicly revealed to check for errors. The remainder becomes the secret key after error reconciliation and privacy amplification.
The security of BB84 rests entirely on quantum mechanics, not computational hardness. Let's trace exactly why Eve's intervention is unavoidable.
When Eve intercepts a qubit, she must measure it in some basis, without knowing Alice's basis \(b_i\). Two cases:

- With probability 1/2 she guesses the correct basis: her measurement leaves the state undisturbed, and she learns Alice's bit without leaving a trace.
- With probability 1/2 she guesses wrong: the qubit collapses to a random state in her basis, and when Bob measures in the correct basis (at a position kept after sifting), his result disagrees with Alice's bit with probability 1/2.
Combined: for each qubit Eve intercepts, there is a probability \(\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}\) of introducing an error in Bob's sifted key. If Eve intercepts a fraction \(f\), Alice and Bob observe an error rate of \(f/4\).
Eve cannot do better with any measure-and-resend strategy. The only way to gain information without introducing errors would be to clone the qubit: copy it perfectly and forward the original. But the no-cloning theorem (proved later in the course) shows this is impossible for unknown quantum states. Any attempt to extract information necessarily disturbs the state. Security here is not a computational assumption; it is a law of physics.
In practice, BB84 is implemented with photons. A photon's polarisation is a natural qubit: horizontal \(|H\rangle \equiv |0\rangle\), vertical \(|V\rangle \equiv |1\rangle\), and ±45° diagonals \(|45\rangle \equiv |+\rangle\), \(|135\rangle \equiv |-\rangle\).
Photons are ideal for QKD for two reasons:

- they travel long distances at the speed of light, through optical fibre or free space, and
- they interact only very weakly with their environment, so a polarisation state survives transmission largely intact.
The same property that makes photons ideal for QKD makes them challenging for quantum computation: implementing two-qubit gates requires photon-photon interactions, which need indirect schemes (linear optical quantum computing, measurement-based computation) and are less efficient than, say, superconducting qubits. The right physical platform depends on the application.
Classical Cryptography — Reference Appendix
The protocols below underpin today's internet security. All of the public-key schemes among them are threatened by quantum computers running Shor's algorithm.
The one-time pad (Vernam cipher, 1917) is the only encryption scheme with information-theoretic security. Given message \(m\) and a uniformly random key \(k\) of the same length, never reused:
\[c = m \oplus k \qquad \text{(encrypt)}, \qquad m = c \oplus k \qquad \text{(decrypt)}\]
where \(\oplus\) is bitwise XOR. Shannon's theorem (1949) states that an adversary who sees \(c\) but not \(k\) has zero information about \(m\): the posterior distribution over \(m\) given \(c\) is identical to the prior. Every plaintext is equally likely regardless of the ciphertext. This is perfect secrecy by definition.
QKD solves the distribution problem by generating the OTP key securely. Together, BB84 + OTP is the only end-to-end information-theoretically secure communication system.
RSA (Rivest, Shamir, Adleman, 1977) is the most widely deployed public-key cryptosystem. Its security rests on the practical difficulty of factoring large integers: given \(n = p \times q\) where \(p\) and \(q\) are large primes, recovering \(p\) and \(q\) from \(n\) alone takes super-polynomial time with the best known classical algorithms.
Key generation: pick large primes \(p, q\); set \(n = pq\) and \(\phi(n) = (p-1)(q-1)\); choose a public exponent \(e\) coprime to \(\phi(n)\) and compute the private exponent \(d = e^{-1} \bmod \phi(n)\). Encryption is \(c = m^e \bmod n\); decryption is \(m = c^d \bmod n\). Correctness follows from Euler's theorem: \(m^{\phi(n)} \equiv 1 \pmod{n}\), so \(m^{ed} = m^{1+k\phi(n)} \equiv m \pmod{n}\).
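The full RSA cycle can be demonstrated with toy parameters (tiny textbook primes; real keys use a 2048-bit \(n\)):

```python
from math import gcd

p, q = 61, 53
n = p * q                      # modulus, n = 3233
phi = (p - 1) * (q - 1)        # Euler's totient, phi = 3120
e = 17                         # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

m = 65                         # message, must satisfy m < n
c = pow(m, e, n)               # encrypt: c = m^e mod n
assert pow(c, d, n) == m       # decrypt: c^d = m^(ed) = m (mod n)
```

The public key is \((n, e)\) and the private key is \(d\); anyone who could factor \(n\) into \(p\) and \(q\) could recompute \(d\) the same way.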
Shor's algorithm (1994) factors an \(n\)-bit integer in \(O(n^3)\) quantum gate operations, super-polynomially faster than the best known classical algorithms. A sufficiently large quantum computer could break RSA-2048 in hours. The NSA has already announced plans to transition away from RSA as quantum hardware matures.
Proposed in 1976, Diffie-Hellman (DH) was the first public-key protocol — it allows two parties to agree on a shared secret over a completely public channel, with no prior shared information. Its security relies on the discrete logarithm problem: given \(g\), \(p\), and \(A = g^a \bmod p\), finding \(a\) is computationally hard.
The shared secret \(K = g^{ab} \bmod p\) is identical for Alice and Bob because \((g^a)^b \equiv (g^b)^a \equiv g^{ab} \pmod{p}\). Eve sees \(g\), \(p\), \(A = g^a\), and \(B = g^b\), but computing \(g^{ab}\) from these without knowing \(a\) or \(b\) is the discrete logarithm problem — believed to require sub-exponential but super-polynomial time classically, and broken in polynomial time by Shor's algorithm on a quantum computer.
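The whole exchange fits in a few lines of Python with toy parameters (a textbook-sized prime; real deployments use 2048-bit primes or elliptic curves):

```python
import secrets

p, g = 23, 5                          # public parameters (toy-sized)
a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)                      # Alice announces A = g^a mod p
B = pow(g, b, p)                      # Bob announces B = g^b mod p

# Each side combines its own secret with the other's public value.
K_alice = pow(B, a, p)                # (g^b)^a mod p
K_bob   = pow(A, b, p)                # (g^a)^b mod p
assert K_alice == K_bob               # both hold K = g^(ab) mod p
```

Eve sees `p`, `g`, `A`, and `B` on the wire; recovering `a` or `b` from them is the discrete logarithm problem.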
Elliptic Curve Cryptography (ECC, 1985) achieves the same security as RSA and DH but with much smaller keys — a 256-bit ECC key provides roughly the same security as a 3072-bit RSA key. This makes ECC the preferred standard for mobile devices, TLS certificates, and Bitcoin signatures.
An elliptic curve over a field (in cryptography, a finite field \(\mathbb{F}_p\)) is the set of points \((x, y)\) satisfying:
\[y^2 \equiv x^3 + ax + b \pmod{p}, \qquad 4a^3 + 27b^2 \neq 0\]
together with a special "point at infinity" \(\mathcal{O}\) that acts as the group identity. The condition \(4a^3 + 27b^2 \neq 0\) ensures the curve has no cusps or self-intersections (it is non-singular).
Points on an elliptic curve form an abelian group under a geometric addition rule. Given two points \(P\) and \(Q\):

- If \(P \neq Q\), draw the line through \(P\) and \(Q\); it meets the curve in a third point, and \(P + Q\) is that point reflected across the x-axis.
- If \(P = Q\), use the tangent line at \(P\) instead (point doubling).
- If the line is vertical, the sum is the point at infinity \(\mathcal{O}\), the group identity.
The addition rule defines a group \((E(\mathbb{F}_p), +)\). Scalar multiplication \(kP = P + P + \cdots + P\) (\(k\) times) can be computed efficiently using repeated doubling in \(O(\log k)\) steps. The Elliptic Curve Discrete Logarithm Problem (ECDLP): given \(P\) and \(Q = kP\), find \(k\). This is believed harder than the standard discrete log (no sub-exponential algorithm is known for generic curves).
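Curve addition and double-and-add scalar multiplication can be sketched on a small toy curve (the curve \(y^2 = x^3 + 2x + 2\) over \(\mathbb{F}_{17}\) is a standard classroom example; it offers no security):

```python
# Toy curve parameters; secp256k1 would use a = 0, b = 7 and a 256-bit p.
p, a, b = 17, 2, 2

def ec_add(P, Q):
    """Group law on the curve; None represents the identity O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication kP by repeated doubling, O(log k) steps."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)                        # a point on the curve (1 == 125 + 12 mod 17)
assert ec_mul(2, G) == (6, 3)     # point doubling
assert ec_mul(19, G) is None      # G has order 19: 19G = O
```

Given `G` and `Q = ec_mul(k, G)`, recovering `k` is exactly the ECDLP; on a 256-bit curve the forward direction stays cheap while the inverse is infeasible classically.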
The protocol is identical to DH but scalar multiplication on the curve replaces modular exponentiation: Alice picks a secret scalar \(a\) and announces \(A = aG\) for a public base point \(G\); Bob picks \(b\) and announces \(B = bG\); both compute the shared secret \(abG = aB = bA\). Recovering \(a\) from \(A = aG\) is the ECDLP.
Bitcoin's ECDSA signature scheme uses the curve secp256k1 (\(y^2 = x^3 + 7\) over \(\mathbb{F}_p\) for a specific 256-bit prime \(p\)). The private key is a scalar \(k\); the public key is the curve point \(K = kG\).
Shor's algorithm (1994) runs on a quantum computer and solves both integer factoring and discrete logarithm in polynomial time. This breaks:
| Protocol | Hard Problem | Quantum Attack | Status |
|---|---|---|---|
| RSA | Integer factoring | Shor's algorithm | Broken |
| Diffie-Hellman | Discrete log (mod p) | Shor's algorithm | Broken |
| ECDH / ECDSA | Elliptic curve discrete log | Shor's algorithm | Broken |
| AES-256 (symmetric) | Brute-force key search | Grover's algorithm (√ speedup only) | Weakened, not broken |
| BB84 + OTP | Laws of physics | None — information-theoretic | Secure |
Grover's algorithm searches an unsorted database of \(N\) items in \(O(\sqrt{N})\) steps — a quadratic speedup over classical \(O(N)\). For AES-256 with \(N = 2^{256}\) possible keys, Grover gives \(O(2^{128})\) — still enormous, so doubling key lengths suffices for symmetric crypto. But RSA, DH, and ECC face complete polynomial-time attacks.
Nation-state adversaries are already collecting encrypted internet traffic today, storing it, and waiting for a large enough quantum computer to decrypt it retroactively ("harvest now, decrypt later"). Any data that must remain secret for more than 10–15 years — classified government communications, medical records, long-term financial data — is vulnerable to this strategy now. This is the primary driver behind NIST's post-quantum cryptography standardisation effort, which announced its first four selected algorithms in 2022 (CRYSTALS-Kyber, CRYSTALS-Dilithium, FALCON, SPHINCS+) and published the first finalised standards in 2024, all based on lattice or hash problems believed hard even for quantum computers.