Forced browsing is the class of serious web application vulnerability I see most often. Contrary to conventional wisdom, you can build effective automated tests for it in your own application, tests that don't rely on hard-coded path lists or fuzzing.

This friend of mine, Ohran, maintains a decent-sized web app, mydeathstar.empire. It does various boring administrative things for this space station, and the guy in charge, Darth something, keeps him busy with lots of little tweaks. The latest one was that for some reason he wanted to restrict who was allowed to remotely operate the trash compactors. Apparently there’d been some issue with that. So my friend wrote some code for the web app’s navigation header so that you’d only see a link to the trash compactor dashboard if you had the right set of privileges. It looked something like

in render_nav_bar:

if (current_user.trash_master) {
  // only users with the trash_master privilege see the link
  link_to("/trash_compactor_dashboard")
}

So Trash Masters saw a nav bar with the dashboard link, while regular stormtroopers saw the same page without it.

Now, Ohran is very conscientious about quality (apparently Darth Whatever isn’t very forgiving of bugs) so he added in a couple of automated tests too:

// a regular user shouldn't see the link
login_as(regular_user)
visit_page("/")
assert_not_in_page("/trash_compactor_dashboard")

// a Trash Master should
login_as(trashy_user)
visit_page("/")
assert_in_page("/trash_compactor_dashboard")

And of course he deployed it to a staging server and had someone click around to make sure everything looked right. Eventually, he pushed the changes to the live server, and all seemed well.

That night, though, he woke up in a sudden panic. He pulled his laptop onto his chest and logged in to his account. The link wasn’t there, as expected – trash wasn’t his job. Then, he manually typed a URL into his address bar, hit enter, and…

Ohran had forgotten some very important code. When a request came in to load the dashboard page, or to perform any trash compactor action, he needed to check that the request was actually authorized. Even though no unauthorized user could stumble onto the dashboard by accident, anybody with an account could use the forced browsing technique to load the page, and since it had previously been accessible to everyone, people certainly knew the URL. Ohran had to scramble to fix his mistake before anyone noticed.
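The fix is the standard server-side guard: check authorization on every request that touches the resource, not just when deciding whether to render the link. As a rough sketch of what that check might look like, in Flask-style Python with invented names (this isn't Ohran's actual code):

from flask import Flask, abort
from flask_login import current_user, login_required

app = Flask(__name__)

@app.route("/trash_compactor_dashboard")
@login_required
def trash_compactor_dashboard():
    # Hiding the link is cosmetic; this check is the real security boundary.
    if not current_user.trash_master:
        abort(403)  # forbidden, even if the user types the URL by hand
    return render_dashboard()  # hypothetical rendering helper

The same guard has to cover the POST/PUT actions the dashboard triggers, not just the page itself.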

After saving his neck, Ohran came to me, still fretting. “The authorization logic in this app is all over the place,” he said. “I don’t have the time or the freedom to refactor it, so whenever I add a new authorization rule, I just have to remember to add it in at least two places: hiding the link, and controlling the actual functionality. If I forget the first one, the user sees an error, and if I forget the second one, I’ve made the app vulnerable. For all I know, there’s some other forced browsing vulnerability out there that I’ve missed. Like when stormtrooper TK-422 goes to his personal preferences page, the URL looks like /users/TK-422/preferences. If he changed the number in the URL so it was /users/TK-421/preferences, he shouldn’t see TK-421’s preference page, that’s private!” Ohran rubbed his throat.

“Can’t you write a test for forced browsing?” I asked.

“I Googled it, and apparently you can’t really,” said Ohran. “Like, OWASP says ‘Automated tools are unlikely to find these problems.’ They give a lot of ways to defend against it, but they’re all manual testing and refactoring and doing code analysis. Which, I mean, I’ll do as much as I can. Everywhere else I’ve looked says it’s impossible, aside from just checking a hardcoded list of common ‘sensitive paths’ like /admin.php. There’s no way /trash_compactor_dashboard is gonna be on that list. Or there’s fuzzing, but same deal there–no testing tool is going to randomly guess that URL.”

Ohran and I put our heads together, and eventually we figured out a solution. It was actually pretty obvious in retrospect: just take what a manual white-hat tester does when searching for forced browsing vulnerabilities, and do it programmatically. First, we wrote a helper function in Ohran’s test code that could crawl the site, recursively following all of the links that a given user could see and returning the complete list. It looked something like

define browse_as(user) {
  visited_pages = Set.new()
  crawl(user, "/", visited_pages)
  return(visited_pages)
}

define crawl(user, page, visited_pages) {
  login_as(user)
  visit_page(page)
  visited_pages.add(page)
  current_page.links.for_each(link) {
    if (!visited_pages.include(link)) { crawl(user, link, visited_pages) }
  }
}

Just your standard spider. We made it re-login on every request just because some links log you out, but we could also have blacklisted logout links and other links that leave the site, so that we don’t end up crawling around the actual web.
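In real code, that spider might look something like the following. This is a sketch in Python using requests and BeautifulSoup (both assumptions; a browser-driving test framework would give you login_as and visit_page equivalents instead), with the same-domain and logout restrictions just mentioned:

import requests
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

BASE = "https://staging.mydeathstar.empire"  # hypothetical staging host

def login_as(user):
    # Form field names here are assumptions about the app's login page.
    session = requests.Session()
    session.post(urljoin(BASE, "/login"),
                 data={"username": user["name"], "password": user["password"]})
    return session

def browse_as(user):
    visited_pages = set()
    crawl(user, "/", visited_pages)
    return visited_pages

def crawl(user, page, visited_pages):
    session = login_as(user)  # log back in on every request, as above
    response = session.get(urljoin(BASE, page))
    visited_pages.add(page)
    soup = BeautifulSoup(response.text, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urlparse(urljoin(BASE, anchor["href"]))
        # Stay on the same domain, and skip the logout link.
        if link.netloc != urlparse(BASE).netloc or link.path == "/logout":
            continue
        if link.path not in visited_pages:
            crawl(user, link.path, visited_pages)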

Then we checked in a test. In the test, we create two users. One has all the privileges in the app: she could operate the trash compactors, commence primary ignition, open the thermal exhaust port, whatever. The other just has the minimum privileges necessary to log in. Then, the test crawls as both users, and compares the results.

Crawling as the privileged user is just a simple way of enumerating all the pages in the app. Crawling as the unprivileged user is a way of inferring what that user is expected to be able to see. (We can assume that if a user sees a link to a page they're not supposed to see, that'll be caught in manual testing.) So this automated test now knows that any user can see the home page, their preferences page, and a station-wide list of alerts, say.

Now we take the difference of the lists: all the pages the privileged user can browse to, but the unprivileged user can't. This will include admin-only pages, like the trash compactor dashboard, as well as personal pages for the privileged user, like her preferences page. Finally, we loop over each link in this list, and try forced browsing to it as the unprivileged user. If we get a success response, not a redirect or an error, then this is likely a forced browsing vulnerability, and the test should fail. The test looks like:

privileged_user_visible_pages = browse_as(privileged_user)
regular_user_visible_pages = browse_as(unprivileged_user)

restricted_pages = privileged_user_visible_pages.except(regular_user_visible_pages)

restricted_pages.for_each(link) {
  login_as(unprivileged_user)
  visit_page(link)
  assert_error_page(current_page)
}
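Wired up to the Python sketch from earlier, the whole test fits in a dozen lines. Again a sketch rather than a drop-in: browse_as and login_as are the hypothetical helpers above, the user fixtures are assumed, and "success" is judged purely by status code:

def test_no_forced_browsing():
    privileged_pages = browse_as(PRIVILEGED_USER)    # sees everything
    regular_pages = browse_as(UNPRIVILEGED_USER)     # minimum privileges

    # Pages only the privileged user was offered links to.
    restricted_pages = privileged_pages - regular_pages

    session = login_as(UNPRIVILEGED_USER)
    for page in restricted_pages:
        response = session.get(urljoin(BASE, page), allow_redirects=False)
        # A 2xx here means the unprivileged user loaded a restricted page.
        assert response.status_code >= 300, f"forced browsing possible: {page}"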

Happily, it turns out the conventional wisdom is wrong. As long as you have the ability to log in as both a privileged and unprivileged user, you can write an automated test for your web app that catches forced browsing vulnerabilities. I believe this should be added to integration testing for most web applications. If you use an interactive testing tool like Burp Suite, I’d suggest implementing this as a plugin (I might even write this as a bookmarklet one of these days). If you use, say, Cucumber, it’s probably fastest to just implement this algorithm by hand in your existing test suite.

Gotchas:

  • Make sure you secure your write (e.g. POST, PUT) routes, not just your page reads. My pseudo-code above works for this if “current_page.links” includes form submissions and AJAX requests, and “visit_page()” can handle form submissions, but that’s tricky since you may need mock data.
  • As mentioned, it’s best to restrict the links you follow to relative links and ones to the same domain, and to blacklist the logout link.
  • For a large enough app, this can be a time-consuming test. Ideally, try to run it outside of a browser to save time rendering. You may also want to optimize by adding “redundant” pages to your blacklist, such as pages after the first of a paginated result list.
  • Make sure you’re running against a test database that has at least one of everything, so that every possible link exists.
  • If you try to browse to a page you don’t have access to, some apps will simply serve a page you do have access to instead, without returning a 3xx Redirect response or displaying an error. These can lead to spurious failures in your tests. Ideally you should change that behavior, as it’s not very standards-compliant anyway.
  • A purely single-page web app that doesn’t store state in the URL at all will have to do this a bit differently, possibly by recording the AJAX requests made as each user crawls, then force-requesting those endpoints rather than front-end states.
  • The test as described doesn’t catch similar vulnerabilities involving other kinds of user-controllable input, such as cookies and hidden form fields. It might be worth expanding it to record everything (besides the authentication secret) the privileged user sends, and replaying it as the unprivileged user; see the sketch after this list.
  • It may be non-trivial to ensure your privileged user always has all possible privileges.
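That record-and-replay idea might be shaped like this. Purely a sketch: record_requests_as is a hypothetical helper that would capture every (method, path, body) tuple sent during the privileged user's crawl, which is the genuinely hard part to build:

def test_no_privileged_request_replay():
    # Hypothetical: capture every request the privileged user's crawl sends.
    recorded_requests = record_requests_as(PRIVILEGED_USER)

    session = login_as(UNPRIVILEGED_USER)
    for method, path, body in recorded_requests:
        response = session.request(method, urljoin(BASE, path),
                                   data=body, allow_redirects=False)
        # Success means the unprivileged user replayed a privileged request.
        assert response.status_code >= 300, f"replayable: {method} {path}"

You'd want to filter out requests the unprivileged user can legitimately make, the same way the page test diffs the two crawls.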

And one final, meta-level disclaimer: try not to write projects with authorization rules that keep you up at night, scattered around different files and enforced in different ways. Try to put all of them in one place, so you can reference them both when deciding what links to render and what actions to allow. Make it impossible to process a request if no authorization check has run. And then test anyway.
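That last property, refusing to process a request when no authorization check has run, can be enforced mechanically. One way to sketch it, again in Flask-style Python with invented names: funnel every rule through a single function that sets a flag, and fail closed if the flag was never set:

from flask import Flask, abort, g

app = Flask(__name__)

def authorize(check):
    # Every authorization rule in the app goes through this one function.
    g.authorization_checked = True
    if not check:
        abort(403)

@app.after_request
def require_authorization_check(response):
    # Fail closed: a handler that never called authorize() is a bug.
    if not g.get("authorization_checked", False):
        return app.make_response(("authorization check missing", 500))
    return response

In a real app you'd exempt genuinely public routes, but the default flips from "open unless someone remembered a check" to "closed unless someone wrote one."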
