One If by Land, Two If by Sea

The point of having a secret is often to share it. A secret is rarely interesting if you can't tell anyone about it. Paul Revere, for example, certainly wouldn't have helped along the American Revolution if he had neglected to pass on the information about the English Redcoats he received through the lantern encryption scheme.
So let's suppose Alice and Bob have to share a secret. They may or may not trust each other; or Bob may trust Alice, while Alice doesn't trust Bob, or conversely. And whether or not they trust each other, one or both of them may mistrust the communications channel they have to use to share their secret.
For example, when meeting a friend in private, we usually trust both the friend and the channel (the private face-to-face conversation). When telephoning a friend, we probably trust the friend but not, perhaps, the channel, since someone may be eavesdropping. When meeting a stranger, we may distrust the stranger but trust the channel. And when two paranoiacs meet, they trust neither each other nor the channel.
Whenever we share information, we could be sharing secrets. So whether it's cable companies sending encrypted signals to television sets or spies sending encrypted messages to governments through radio waves, smoke signals, or bongo drums, the need for secrecy is potentially all around us.
Even if Alice and Bob have an eavesdrop-proof channel, they can't use it to exchange all their secrets, because it may not be able to carry long messages or may be usable only now and again. For example, their secure channel could be a private face-to-face conversation, but they may meet infrequently, or speech may be ill-suited to passing along large amounts of information. Either way, at some point they will have to rely on a channel they know to be insecure.
Long and painful historical experience has taught us the virtue of assuming that interlopers intercept every single encrypted message and that they know everything about our security system except the actual encryption keys we use. That includes knowing all keys used previously and having translations for all earlier encrypted messages. Even then our security system should still be secure.
We give secrecy breakers so much benefit of the doubt because changing keys is easy, but changing the entire system---formats, channels, and protocols---is hard. So since those parts of the system are usually long-lasting, and people are fallible, it's best to assume they are all compromised.
During the Second World War the Germans failed to learn that lesson, and may have lost the war as a result. They had developed a machine to encrypt information that was broadcast by radio to their military outposts, spies, and ships and submarines at sea. What they didn't know was that a group of British decrypters had secretly built a primitive computer and, with its aid, had broken the entire system early in 1940. Thus Winston Churchill often sat in his office and read the Germans' secret transmissions only hours after they were broadcast.
The Germans, overconfident in their system, rarely changed keys or encryption schemes. They thought their system unbreakable and attributed all their military setbacks to an imaginary network of brilliant British spies. They placed too much faith in the secrecy of their encryption methods.
Today the rule is that in a good encryption scheme we should be able to read and reread the encryption procedure till our eyes bubble, and puzzle over previous translations till our brains hurt, and still be unable to decrypt any part of the current message---unless we also know the particular key used.
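This rule can be illustrated with a toy one-time-pad cipher (my own minimal sketch, not a scheme from the text): the procedure below is completely public, yet without the particular key the ciphertext reveals nothing, and reusing the scheme with fresh keys costs nothing.

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte.

    The procedure is entirely public; all the secrecy lives in the key,
    which must be random, as long as the message, and used only once.
    """
    assert len(key) >= len(message), "a one-time pad needs a key as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"One if by land, two if by sea"
key = secrets.token_bytes(len(message))   # fresh random key, used once
ciphertext = encrypt(message, key)

assert decrypt(ciphertext, key) == message
```

An adversary who knows this code, and even earlier keys and their decrypted messages, learns nothing about a message encrypted with a fresh key.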
Of course, like stage magic, secrecy can rely on physical security, misdirection, and deception, as well as on encryption. For example, when the American president travels by car, the car is bulletproof, but there are also many decoy cars. The British government uses the same method to transport jailed high-profile terrorists.
Sending a secret message down a communications channel is like stashing a secret in a room. To keep it secret we might lock the room (that is, use a secure channel and keep information unencrypted); lock the information (use an insecure channel but encrypt the information); or hide the information (keep information apparently unencrypted but hide a message in it).
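The third option is the oldest form of steganography. As a minimal sketch (the acrostic scheme here is my own illustration, not one from the text), a secret word can ride along in apparently innocent text, spelled out by the first letters of the words:

```python
def hide(secret: str, cover_words: list[str]) -> str:
    """Build an innocent-looking phrase whose words begin with the secret's letters.

    cover_words supplies, for each letter of the secret, a word starting with it.
    """
    for letter, word in zip(secret, cover_words):
        assert word.lower().startswith(letter.lower()), f"{word!r} must start with {letter!r}"
    return " ".join(cover_words)

def reveal(cover_text: str) -> str:
    # Read off the first letter of each word.
    return "".join(word[0] for word in cover_text.split())

stego = hide("sea", ["Send", "extra", "apples"])
assert reveal(stego).lower() == "sea"
```

The cover text is never encrypted; its protection rests entirely on an eavesdropper not suspecting that a message is there at all, which is why the paranoid combine hiding with encryption rather than relying on either alone.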
Of course, the truly paranoid among us will prudently lock both the room and the information, hide the information, and then hide the room. Which scheme we choose depends on how paranoid we are, and on how costly the information's loss would be.
When our level of suspicion gets high enough, we don't even want adversaries to know that we sent a message. During the Cold War, American and British spies often bugged Soviet installations. But each time they did so they could tell that the operation had gone wrong because there was a precipitous drop in message traffic. Through several highly placed double agents in British intelligence, the Soviets always knew when someone was listening.