Disclaimer – I set out to write some simple, non-comprehensive, lightweight course material for network and information security. Posts like this one are categorized under ‘security primer’. It is still a work in progress; comments are welcome.
Communication systems basically consist of two parts: the channel and the messages. At each end of the channel, users send messages or listen for them. Simply put, a communication system is secure if it is available when users want to send messages, and if, when a user sends a message, the user on the receiving end of the channel receives it (in a timely manner). Communications security dates back thousands of years. Its most well-known part is cryptography, the art of keeping messages secret. But obfuscation, the art of hiding messages, is also part of communications security.
Remark: We will not go into the details of cryptography here. If you are interested in how Japanese geishas communicated with their lovers, and Roman emperors with their legions, in secret, I recommend Simon Singh’s book “The Code Book” for a fun explanation of cryptography through the ages: http://simonsingh.net/books/the-code-book/ . Simon Singh specializes in writing about technical matters in a romantic way; he also wrote a bestseller about Fermat’s Last Theorem. You will not get bored.
Remark: Users want things easy. In the Middle Ages, European lords and ladies stubbornly continued to use substitution ciphers for centuries, even though Arab scholars had long since cracked such ciphers (through frequency analysis). They persisted despite their own experts advising against it, and despite well-known examples of successful attacks. But using this cipher, although broken, was easy, and it meant that their illiterate and uneducated staff would not understand the messages. The risk of an advanced attacker intercepting and deciphering their messages was simply not considered big enough. A classic example of a user taking a calculated risk. Bruce Schneier frequently blogs about how humans make risk assessments and about the common mistakes they make. See his blog for funny stories on airport security, anti-terrorism laws, et cetera. His blog and newsletter can be found at https://www.schneier.com/.
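The frequency analysis mentioned above is simple enough to sketch in a few lines of Python. This is a minimal illustration (not from the original text): it assumes a reasonably long English ciphertext and simply pairs the most frequent ciphertext letters with the most frequent letters of English.

```python
from collections import Counter

# Letters of English ranked by typical frequency (approximate ordering).
ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"

def guess_substitution_key(ciphertext: str) -> list[tuple[str, str]]:
    """Pair each ciphertext letter with the English letter of the
    same frequency rank - a first guess at the substitution key."""
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    ranked = [letter for letter, _ in Counter(letters).most_common()]
    return list(zip(ranked, ENGLISH_BY_FREQUENCY))
```

On real ciphertexts this first guess is rarely perfect; analysts refine it using letter-pair frequencies and partially readable words, which is why short messages resist the attack better than long ones.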
Saying that a communication system is ‘secure’ is a bit vague, so let’s look at some relevant (sub)properties. For the sake of simplicity we focus on a single use of the communication system: Alice sends a message to Bob. To explain what we mean by these (sub)properties we introduce an attacker called Eve.
- Authentication – Alice and Bob are certain about each other’s identity. Eve cannot pretend to be Alice or Bob (faking or spoofing their identity).
- Accountability or non-repudiation – When Bob receives the message, Alice cannot later deny that she sent it. Eve cannot send messages that appear to come from Alice (faking or spoofing her messages).
- Confidentiality or secrecy – Only Bob knows the content of the message. Eve does not find out – unless of course Alice or Bob reveal the message.
- Integrity – Bob gets exactly the message(s) Alice sent. Eve cannot change the messages (tampering with messages).
- Availability or continuity – Alice finds the system ready when she wants to send her message, and when she does, Bob gets it. Eve cannot prevent them from communicating (jamming or obstructing).
- Anonymity – Eve does not learn that it is Alice sending the message, or that Bob is receiving it.
- Obfuscation – Eve does not even know Alice is sending a message.
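As a concrete (hypothetical) illustration of how some of these properties are achieved in practice, a message authentication code (MAC) gives Bob both integrity and a form of authentication in one primitive. A minimal sketch using Python’s standard `hmac` module, with an assumed shared key:

```python
import hashlib
import hmac

# Hypothetical shared secret known only to Alice and Bob.
KEY = b"alice-and-bob-shared-secret"

def tag_message(message: bytes) -> bytes:
    """Alice attaches a MAC tag computed over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify_message(message: bytes, tag: bytes) -> bool:
    """Bob recomputes the tag; a mismatch means the message was
    tampered with or did not come from a holder of the key."""
    return hmac.compare_digest(tag_message(message), tag)
```

Note that a MAC does not give non-repudiation: since Bob holds the same key, he could have created the tag himself, so he cannot prove to a third party that Alice sent the message.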
The picture above shows a scytale, a stick around which a strip of parchment was wrapped so that a message could be written across the windings – an ancient cryptographic device, used by the Greeks (most famously the Spartans), to keep messages secret, to obfuscate messages, and to preserve integrity.
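The scytale is a transposition cipher: the sender writes the text in rows across the wrapped strip, and unwinding the strip reads the letters off column by column. A minimal sketch (assuming, for simplicity, that the message length is a multiple of the number of letters per turn of the stick):

```python
def scytale_encrypt(plaintext: str, letters_per_turn: int) -> str:
    """Write the text in rows (across the wrapped strip), then read
    it off column by column (down the unwound strip)."""
    rows = [plaintext[i:i + letters_per_turn]
            for i in range(0, len(plaintext), letters_per_turn)]
    return "".join(row[col] for col in range(letters_per_turn) for row in rows)

def scytale_decrypt(ciphertext: str, letters_per_turn: int) -> str:
    """Decryption is just encryption with the transposed dimensions,
    i.e. wrapping the strip around a stick of the right diameter."""
    return scytale_encrypt(ciphertext, len(ciphertext) // letters_per_turn)
```

The key is the stick’s diameter: anyone with a stick of the same size (the right `letters_per_turn`) can unwind and read the message.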
Note that these (sub)properties are more precise, but they are sometimes intertwined. For example, non-repudiation involves authentication and integrity: if authentication is broken, Alice could argue that someone else pretended to be her and sent the message, and if integrity is broken, Alice could say the message was altered on the way. In both cases Bob cannot prove that Alice sent the message, so non-repudiation is broken. Similarly, confidentiality and integrity are closely related (you cannot meaningfully change the content of a message if you do not know it).
Another way of looking at things comes from the field of protocol analysis. In protocol analysis there are two key properties:
- Liveness – good things eventually happen. Meaning: when Alice sends a message, Bob eventually receives it.
- Safety – bad things never happen. Meaning: Eve does not get the message. Bob does not get an altered message. Alice can send a message when she wants to. Bob does not get messages that Alice did not send. Eve does not find out Alice is communicating with Bob. And so on.
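To make the distinction concrete, here is a toy simulation (my own illustration, not from the original text) of a lossy channel. The sender retransmits until a frame gets through, which provides liveness; the receiver rejects any frame whose checksum does not match, which provides one safety property (never accepting an altered message).

```python
import random
import zlib

def deliver(message: bytes, loss_rate: float = 0.5) -> bytes:
    """Retransmit over a lossy channel until an intact frame arrives."""
    rng = random.Random(7)  # fixed seed keeps this sketch deterministic
    frame = message + zlib.crc32(message).to_bytes(4, "big")
    while True:
        if rng.random() < loss_rate:
            continue  # frame lost in transit; the sender retransmits
        payload, checksum = frame[:-4], frame[-4:]
        # Safety: the receiver never accepts a corrupted frame.
        if zlib.crc32(payload).to_bytes(4, "big") == checksum:
            return payload  # Liveness: the message eventually arrives
```

The liveness guarantee here is probabilistic: each attempt succeeds with probability 1 − loss_rate, so delivery happens eventually with probability 1 – “good things happen, eventually” – while the checksum condition is absolute: “bad things never happen”.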
If you think about it, it is much easier to list the good things that should happen than to list all the bad things that should not happen: the list of bad things is endless. This is one of the fundamental problems in network and information security.