Crawling: Detecting changes in application state

These settings enable you to tune Burp Scanner's behavior during the crawl phase, to reflect the objectives of the audit and the nature of the target application.

Burp Scanner skips the unauthenticated crawl phase if you have provided one or more application logins for it to use. It uses only your provided logins and does not attempt to self-register users or trigger login failures. If you don't provide any application logins, the crawler automatically performs an unauthenticated crawl instead.

Specify the maximum number of navigational transitions (clicking links and submitting forms) that the crawler can make from the start URL(s). Modern applications tend to build navigation into every response, for example in menus and page footers. As such, it is normally possible to reach the vast majority of an application's content and functionality within a small number of hops from the start URL. Fully covering multi-stage processes (such as viewing an item, adding it to a shopping cart, and checking out) requires more hops.

Some applications contain extremely long navigational sequences that don't lead to interesting functionality. For example, a shopping application might have a huge number of product categories, sub-categories, and view filters. To a crawler, this can appear as a very deep nested tree of links, all returning different content. However, there are clearly diminishing returns to crawling deeply into a navigational structure such as this, so it's sensible to limit the maximum link depth to a smaller number.

Real-world applications differ hugely in the way they organize content and navigation, the volatility of their responses, and the extent and complexity of the application state involved. At one extreme, a largely stateless application may:

- Employ a unique and stable URL for each distinct function.
- Return deterministic content in each response.

On the other hand, a heavily stateful application might use:

- Ephemeral URLs that change each time a function is accessed.
- Overloaded URLs that reach different functions through different navigational paths.
- Volatile content that changes non-deterministically.
- Functions where user actions cause changes in content and subsequent behavior.

The crawler can handle all of these cases. However, this imposes an overhead in the quantity of work involved in the crawl. The crawl strategy setting enables you to tune the approach taken to specific applications. The default crawl strategy represents a trade-off between speed and coverage that is appropriate for typical applications. However, when you crawl an application with more stable URLs and no stateful functionality, you may want to select the Faster or Fastest setting. When you crawl an application with more volatile URLs or more complex stateful functionality, you may want to select the More complete or Most complete setting.
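The effect of a maximum link depth setting can be illustrated with a short sketch. This is not how Burp Scanner is implemented; it uses a hypothetical in-memory site map (all URLs invented) in place of real HTTP fetches, and a breadth-first walk that stops following links after a given number of navigational transitions.

```python
from collections import deque

# Hypothetical site map: each URL maps to the links found in its response.
# A real crawler would fetch pages over HTTP; this graph merely illustrates
# how deeply nested category pages fall outside a small link-depth limit.
SITE = {
    "/": ["/products", "/about"],
    "/products": ["/products/cat1", "/products/cat2"],
    "/products/cat1": ["/products/cat1/sub1"],
    "/products/cat1/sub1": ["/products/cat1/sub1/item1"],
    "/products/cat1/sub1/item1": [],
    "/products/cat2": [],
    "/about": [],
}

def crawl(start, max_link_depth):
    """Breadth-first crawl that stops following links after
    max_link_depth navigational transitions from the start URL."""
    seen = {start}
    queue = deque([(start, 0)])
    visited = []
    while queue:
        url, depth = queue.popleft()
        visited.append(url)
        if depth == max_link_depth:
            continue  # at the depth limit: record the page, follow no links
        for link in SITE.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited

# With a depth limit of 2, the sub-category tree is never entered.
print(crawl("/", 2))
```

With `max_link_depth=2` the crawl reaches `/products/cat1` but never `/products/cat1/sub1`, mirroring the diminishing-returns argument above: raising the limit adds ever-deeper filter pages rather than new functionality.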
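To see why ephemeral URLs and volatile content make stateful applications harder to crawl, here is a minimal sketch of identifying a page by a fingerprint of its stabilized content rather than by its URL. The URLs, session tokens, and the timestamp-stripping rule are all invented for illustration; Burp Scanner's actual state-handling is not public in this detail.

```python
import hashlib
import re

# Two responses for the "same" function, reached via ephemeral URLs that
# embed a (made-up) session token, with a volatile timestamp in the body.
page_a = ("/account/view;jsess=A8F3D2",
          "<h1>Your account</h1><p>Generated at 10:01:22</p>")
page_b = ("/account/view;jsess=99C0E1",
          "<h1>Your account</h1><p>Generated at 10:05:47</p>")

# Assumed volatile pattern: strip hh:mm:ss timestamps before comparing.
VOLATILE = re.compile(r"\d{2}:\d{2}:\d{2}")

def content_key(body):
    """Identify a page by hashing its content with volatile parts removed."""
    stable = VOLATILE.sub("", body)
    return hashlib.sha256(stable.encode()).hexdigest()

# URL-based identity sees two distinct pages; content-based identity
# recognizes them as one function visited twice.
print(page_a[0] == page_b[0], content_key(page_a[1]) == content_key(page_b[1]))
```

A crawler that deduplicated purely by URL would revisit this function forever as the token changes, while one that stabilizes and fingerprints content can recognize it as already crawled; doing that extra normalization for every response is part of the overhead the More complete settings accept.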