The Security Problem You Forgot Wasn’t Solved Yet
Georg C. F. Greve
Mon Dec 02 2019
Email is still the most valuable communication network and protocol. Yet most cybersecurity professionals agree that its approach to verifying integrity has problems, and that the security measures bolted onto it can add to them. Consider how encryption can help attackers slip past content-scanning countermeasures. Which is why experts and regular folk alike are getting phished out of billions of dollars each year.
And a signed email today only verifies that the content of the email has not been modified in transit, not that what the user sees is actually what was sent. Most regular users don’t know that, so the signature simply makes the email look safe and trustworthy. Then there’s the habit of using email like a web page, which brings tracking pixels, invisible text, links that lead somewhere other than where the user expects… all of this is rampant in email because people use HTML, the markup language of the web, just to have prettier messages.
Once you start questioning the legacy of the many technical decisions we live with today, a certain Alice in Wonderland feeling is often inevitable. We started Vereign with an “if we had to design a solution for this problem today, with a modern understanding of technology, how would we do it?” approach. That led to a couple of design decisions which, judging by the feedback we have received, are proving themselves. It also sent us tumbling further down the rabbit hole on some fairly deep tech questions about Certificate Authorities and Digital Signatures. But what about the way we’re securing email? We kinda aren’t. Not really. Here’s why.
Cybersecurity professionals have long recommended turning off HTML email. But when faced with a choice between beauty and security, most users choose beauty. Should you happen to be a security expert, please don’t mistake this for an irrational or uninformed choice. And believe us, it is a mistake we made ourselves for a very long time. Consider this: for anyone but security experts, security is typically seen as the absence of harm.
Beauty supports the delivery of value. So by preferring HTML, most users prioritize value creation over the prevention of potential harm.
Which is why many email clients, including the most commonly used ones, support this choice and typically default to it. If they didn’t, they probably wouldn’t remain among the most commonly used email clients for long. Billions of dollars in preventable damages across multiple industries have had no meaningful impact on this preference. In the minds of the majority of users, the benefits outweigh the costs. And truthfully, we don’t know. Maybe they do? There have been no studies on whether companies profit more, directly and indirectly, from the beautification of email messages than they lose to fraud and phishing enabled by HTML and code running in emails. All we do know is that the attacks are there and the losses are there. Either way, we don’t really have a choice. It took me some time, and several stages, to ultimately arrive at acceptance myself. But HTML email is here to stay, and the design of any solution must take this into account. So where do the issues of HTML email arise from, and how might we resolve them?
A vast number of issues arise from a discrepancy between what the user believes they are seeing and what is actually there. The HTML may be cleverly crafted to make the email seem authenticated, or it may hide information through look-alike character sets or tricks like white text on a white background. Another common technique is having the HTML load remote resources, which is not only a privacy issue: it can also allow attackers to deliver malicious payloads to specific targets.
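To make that discrepancy concrete, here is a small Python sketch using only the standard library; the HTML, domains, and tracking URL are invented for illustration. Note that even naive text extraction still picks up the white-on-white line, because a parser knows nothing about CSS; pinning down “what the user sees” is genuinely hard.

```python
# Hypothetical phishing-style HTML email showing three classic tricks:
# a link whose text differs from its target, white-on-white hidden text,
# and a 1x1 remote tracking pixel. All domains are made up.
from html.parser import HTMLParser

html_mail = """
<p>Please review your <a href="https://evil.example/login">invoice</a>.</p>
<p style="color:#fff;background:#fff;font-size:1px">hidden text the reader never sees</p>
<img src="https://tracker.example/pixel.gif?id=42" width="1" height="1">
"""

class VisibleText(HTMLParser):
    """Naive extractor for the text a recipient might actually read."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

parser = VisibleText()
parser.feed(html_mail)
print(parser.chunks)
# The real link target never appears in the visible text, while the
# "hidden" white-on-white line does: a plain parser cannot tell that
# CSS makes it invisible.
```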
All of that is possible because HTML is source code. It is a set of instructions that gets interpreted by a piece of software that follows these instructions mostly blindly. Simply put, that piece of software, the so-called browser engine, is responsible for turning the source code into the visual representation on your screen. Like all complex software, such browser engines have security flaws and there is no guarantee any particular user would have the latest, most secure version.
So, when we’re sending HTML emails, we are sending source code. And traditionally, email signing puts a seal on that raw source code, not on what the code does once interpreted by any of the many email programs in use, and not on what the user actually sees.
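A toy illustration of that gap, with a plain SHA-256 hash standing in for the cryptographic seal of a real S/MIME or OpenPGP signature (the messages and domains are made up):

```python
# Two messages that render identically ("Pay to bank.example") but differ
# in the raw source: the second link actually points somewhere else.
# SHA-256 stands in here for the seal a digital signature provides.
import hashlib

legit = b'<p>Pay to <a href="https://bank.example">bank.example</a></p>'
phish = b'<p>Pay to <a href="https://evil.example">bank.example</a></p>'

print(hashlib.sha256(legit).hexdigest()[:16])
print(hashlib.sha256(phish).hexdigest()[:16])
# Different seals over the bytes, yet the same text on screen. A valid
# signature on the second message only proves these exact bytes arrived
# intact, not that they are honest.
```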
In most cases there is no content filtering on this, and a signed email always looks far more trustworthy. So signing a malicious email is a great idea if you’re a criminal.
If this was an article on how to send better Phishing emails, I’d have to say, in conclusion, if you want to get into the Evil League of Evil, always sign your malicious emails.
Wait, Not Yet
But this isn’t an article about making more evil emails, so we’re not done. The signing problem is a known problem, which is why most of the larger email platforms have stopped trusting third parties and their applications. Instead, they restrict users to a limited subset of mostly harmless HTML, sometimes also filtering for potentially malicious tricks, and the email gets composed and signed on the platform itself, not in the email client. Which poses the next challenge: how much trust do you want to put into the platform? Remember, just because you pay for a third-party app doesn’t mean you can fully trust it.
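What such platform-side filtering boils down to can be sketched in a few lines of Python. The allowlist below is an invented policy for illustration, not what any real provider does:

```python
# Minimal allowlist sanitizer sketch: rebuild the message keeping only
# harmless tags, drop everything else (including script/style contents).
# The ALLOWED table is a made-up policy for illustration.
from html.parser import HTMLParser

ALLOWED = {"p": set(), "b": set(), "i": set(), "br": set(), "a": {"href"}}
DROP_CONTENT = {"script", "style"}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0  # depth inside script/style, whose text we discard

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT:
            self.skip += 1
        elif tag in ALLOWED:
            # keep only allowlisted attributes pointing at https:// URLs
            kept = [(k, v) for k, v in attrs
                    if k in ALLOWED[tag] and v and v.startswith("https://")]
            self.out.append("<%s%s>" % (tag, "".join(' %s="%s"' % kv for kv in kept)))

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    return "".join(s.out)

print(sanitize('<p onclick="x()">Hi <img src="https://t.example/p.gif"></p>'))
# → <p>Hi </p>
```

Note the trade-off this sketch shares with the real thing: the event handler, the image, and the script are gone, but so is any information they carried.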
Many of the big breaches of the last ten years trace back to third-party vendors. And email platform makers, well, they can literally change your message and sign it in your name.
The Actual Conclusion
Is there a way to solve this? Can we find a better approach to email signing, one that compares the actual result (the information visible to the recipient) to the intended result (the information sent by the sender)? And can we still compare two messages this way after they have been sanitized on a centralized platform?
Doing this right might be a great way to increase security for signed messages, which is why we have been thinking about it a lot. The signature for such an approach could easily be stored in a vCard without disturbing legacy applications and clients. It would also allow messages to be additionally encapsulated using the S/MIME or OpenPGP standards.
But what is the best algorithm to compare the information of original and received message?
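One candidate, offered as a sketch under our own assumptions rather than a finished answer: reduce both messages to the text a recipient would actually read, normalize whitespace and case, and compare digests.

```python
# Sketch: compare "what the recipient reads" instead of raw bytes.
# Both messages are reduced to visible text (script/style discarded),
# normalized, and hashed; equal digests mean equal visible content.
import hashlib
import re
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)

    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

def visible_digest(html):
    p = TextOnly()
    p.feed(html)
    text = re.sub(r"\s+", " ", " ".join(p.parts)).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

sent = "<p>Meet at  10:00</p>"
received = "<div>Meet at 10:00</div><style>p{}</style>"
print(visible_digest(sent) == visible_digest(received))  # → True
```

This survives tag-level rewrites by a sanitizing platform, but it deliberately ignores images, colors, and layout, which can carry meaning of their own; that trade-off is exactly the open question.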
This article is a co-production with our advisory board member Pete Herzog of ISECOM.