Wednesday, May 1, 2013

Security Testing at Large Scale with PegasusHPC

A few weeks ago I wrote about a new framework for high-throughput web crawling called PegasusHPC. Since then, I've been working on a prototype forked from pegasus for dynamic vulnerability analysis at large scale. In this post, I share a bit of its design and show some benchmarks.

I took advantage of pegasus's current multithreading architecture to place an analyzer-chain component before the crawler component of each worker thread.

The Analyzers Chain component receives an event for each unique targeted page. Each of its active analyzers builds a set of payloads to be executed on the server side. The passive analyzers, on the other hand, report vulnerabilities simply by scanning the response, without performing any additional request.

Examples of active analyzers are those targeting XSS or SQL injection vulnerabilities, while passive analyzers include those detecting Clickjacking. The architecture diagram of the workers is shown in Figure 1.

Figure 1. Workers Architecture Diagram.
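To make the split between active and passive analyzers concrete, here is a minimal sketch of the pattern described above. All class and method names are hypothetical illustrations, not PegasusHPC's actual API; the Clickjacking check is just the simplest passive example (a frameable page missing `X-Frame-Options`).

```python
# Hypothetical sketch of an analyzers-chain: names are illustrative,
# not PegasusHPC's real classes.
from dataclasses import dataclass, field


@dataclass
class Page:
    url: str
    body: str
    headers: dict = field(default_factory=dict)


class PassiveAnalyzer:
    """Inspects the response only; performs no extra requests."""
    def scan(self, page):
        raise NotImplementedError


class ActiveAnalyzer:
    """Builds payloads that trigger additional requests to the target."""
    def build_payloads(self, page):
        raise NotImplementedError


class ClickjackingAnalyzer(PassiveAnalyzer):
    def scan(self, page):
        # A page served without X-Frame-Options can be framed,
        # making it a clickjacking candidate.
        if "X-Frame-Options" not in page.headers:
            return [f"Clickjacking candidate: {page.url}"]
        return []


class AnalyzerChain:
    def __init__(self, analyzers):
        self.analyzers = analyzers

    def on_unique_page(self, page):
        """Called once per unique crawled page, before the crawler
        component queues the page's out-links."""
        alerts, payload_requests = [], []
        for analyzer in self.analyzers:
            if isinstance(analyzer, PassiveAnalyzer):
                alerts.extend(analyzer.scan(page))
            else:
                payload_requests.extend(analyzer.build_payloads(page))
        return alerts, payload_requests
```

The key property this models is that passive analyzers add zero extra traffic per page, while active analyzers only enqueue payload requests for the worker to execute.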

The Analyzers Chain internals are very similar to those of the widely known ZAP, of which I happen to be a committer. However, I tried to optimize object creation and the number of requests carried out, to reduce its overhead as much as possible.

Though I am planning to open-source pegasus and the analyzers, I will only do so once I have a stable and tested version, which might not be soon given my lack of spare time to work on it.

Cross-Site Scripting in Meneame

I performed some of the first experiments with this approach against the meneame social news software, one of the most widely used in Spain.

The following trace was extracted from pegasus's logs after a quick analysis of the site:

DEBUG  - The distance between "670307ec28f8e7c7d9a252a6fe070749315d8b70" and "670307ec28f8e7c7d9a252a6fe070749315d8b70" is 0 and their ratio is: 0.0, BAD randomness

DEBUG  - The distance between "" and "" is 0 and their ratio is: 0.0, BAD randomness

INFO  - CSRF Vulnerability found in url [""] and fields: key ["useripcontrol"] value["670307ec28f8e7c7d9a252a6fe070749315d8b70"], key ["return"] value["/user/IgnatiusJReilly"], key ["processlogin"] value["1"], key ["userip"] value[""], .

INFO  org.pegasushpc.agents.Dispatcher  - [OK]Current unique URIs[112] max urls[20000] idle threads [false] URIs queued [5415] pwnerqueue[41].

INFO  - Found canary and RXSS Vulnerability

INFO  - Reflected XSS Vulnerability in a href tag, url[""/><script>alert("pegasusIsMyDaddy")</script>"].

INFO  org.pegasushpc.agents.Dispatcher  - [OK]Current unique URIs[131] max urls[20000] idle threads [false] URIs queued [5498] pwnerqueue[46].

As we can see in the previous log, a CSRF vulnerability is reported, and there were several similar CSRF alerts. Meneame uses query-string parameters to determine the content of many dynamically generated pages. Though vulnerable to CSRF in theory, these pages do not perform any critical action and therefore cannot be exploited.
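The heuristic suggested by the "BAD randomness" log lines can be sketched as follows: sample the same anti-CSRF field from two responses and compare the values; if a supposedly random token repeats (distance 0) or barely changes, it offers no real CSRF protection. Function names and the threshold are my own illustration, not pegasus's actual code, and my sketch reports a similarity ratio of 1.0 for identical tokens rather than the tool's 0.0.

```python
# Hedged sketch of a token-randomness check: compare two samples of
# the same anti-CSRF field taken from two requests to the same form.
# Names and the threshold are illustrative.
from difflib import SequenceMatcher


def token_randomness(token_a, token_b):
    """Return (distance, ratio) for two token samples.

    distance 0 means the tokens are identical; ratio is
    SequenceMatcher similarity (1.0 = identical)."""
    ratio = SequenceMatcher(None, token_a, token_b).ratio()
    # Crude edit-distance proxy: differing positions plus length gap.
    distance = sum(c1 != c2 for c1, c2 in zip(token_a, token_b)) \
        + abs(len(token_a) - len(token_b))
    return distance, ratio


def looks_predictable(token_a, token_b, min_distance=8):
    # Two samples of a properly random token should differ widely;
    # identical or near-identical samples indicate a static token.
    distance, _ = token_randomness(token_a, token_b)
    return distance < min_distance
```

In the trace above the same `useripcontrol` value appears in both samples, which is exactly the distance-0 case this check flags.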

A version of this CSRF analyzer has been released as a ZAP plugin. However, it ended up not being well received on the dev-list, so it was not integrated into the default plugins, even though its results were quite good.

The RXSS caught my attention. After debugging the payload's output in my web browser, I noticed that meneame properly escaped the script tag, so I could not execute the previous payload. I ended up building the following attack vector to execute arbitrary JavaScript.
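The two-step probe behind the "Found canary" log line can be sketched like this: first inject a harmless unique canary to see whether the parameter is reflected at all, and only then escalate to a script payload to see whether it survives output encoding. The helper names and the `fetch` callable are hypothetical stand-ins for whatever HTTP client the scanner uses.

```python
# Illustrative canary-based reflected-XSS probe; names are
# hypothetical, and `fetch(url, params) -> body` stands in for
# the scanner's real HTTP client.
import html
import uuid


def probe_reflected_xss(fetch, url, param):
    # Step 1: inject a harmless unique canary and check reflection.
    canary = "pegasus" + uuid.uuid4().hex[:8]
    body = fetch(url, {param: canary})
    if canary not in body:
        return "not reflected"
    # Step 2: escalate to a script payload and check whether it
    # survives the application's output encoding.
    payload = f'"/><script>alert("{canary}")</script>'
    body = fetch(url, {param: payload})
    if payload in body:
        return "reflected unescaped (RXSS)"
    if html.escape(payload, quote=True) in body:
        return "reflected but escaped"
    return "reflected, payload filtered"
```

The "escaped" outcome is exactly what I hit here: the canary was reflected, but the script-tag payload came back HTML-encoded, which is why a different attack vector was needed.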

The previous URI was rendered in my web browser as shown in Figure 2.
Figure 2. XSS vulnerability in meneame.

The responsible disclosure of this vulnerability (CVE-2013-3309) was coordinated with Ricardo, and it was fixed the same day. The revision of the patch is publicly available on their codebase at

I want to publicly thank Ricardo for fixing this very fast and not suing me.
