01-25-2011 01:09 AM
as you may know from reading my previous posts, a customer is asking for a precise count of total scanned pages versus found vulnerabilities. This is a completely different approach from the one used by the current WebInspect reports.
Leaving that aside, I'd now like to know the difference between:
- crawled URLs
- audited URLs
The manual is quite unclear about this, and we see different numbers in the main results window and in the various reports.
Is it possible to extract the real number of scanned pages and the real number of pages with vulnerabilities? I think these concepts are often treated as synonyms when they are actually quite different!
From my insights:
- pages: real files, let's say "URLs with a file extension at the end"
- sessions: other kinds of URLs, such as directories or default attack targets based on default URLs (default files often present in certain folders, but not found in the actual scan)
Am I right?
If so, what about crawled and audited urls?
Thank you in advance.
01-25-2011 10:42 AM
Here is my understanding. See if it makes sense.
Sessions: the total number of sessions (excluding AJAX requests, script and script frame includes, and WSDL includes). I believe this is the count you mention. You can see the session count on the dashboard.
Pages: as you mention, pages are real files, including files linked through hyperlinks (such as DOC, XLS, PDF), i.e. "URLs with a file extension at the end".
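A tiny sketch of that rule of thumb. The extension test is my own illustration of the "file extension at the end" idea from the thread, not WebInspect's actual classification logic:

```python
import os
from urllib.parse import urlsplit

def looks_like_page(url):
    """Heuristic from the thread: a 'page' is a URL whose path ends
    in a file extension; anything else (directories, probe URLs for
    default files) would count only as a session."""
    path = urlsplit(url).path
    return bool(os.path.splitext(path)[1])

print(looks_like_page("http://example.test/report.pdf"))  # True
print(looks_like_page("http://example.test/admin/"))      # False
```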
Crawled URLs: the crawl completely maps a site's tree structure by sending an HTTP request to the target URL, examining the HTTP response, and then sending additional HTTP requests to every link it detects on that page and on all subsequent pages. Normally, a crawl runs until no more links can be found and followed.
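That crawl loop can be sketched in a few lines. This is only an illustration of the idea, not WebInspect's crawler; the fetch function is stubbed with an in-memory site so no real HTTP requests are sent:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags in an HTML response."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Map the site tree: request a URL, examine the response, then
    request every newly discovered link, until no new links remain."""
    seen = {start_url}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        parser = LinkExtractor()
        parser.feed(fetch(url))        # an HTTP GET in a real crawler
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Stub site: three pages linking to each other.
SITE = {
    "http://example.test/":       '<a href="/a.html">A</a><a href="/b.html">B</a>',
    "http://example.test/a.html": '<a href="/">home</a>',
    "http://example.test/b.html": "",
}
crawled = crawl("http://example.test/", lambda u: SITE.get(u, ""))
print(len(crawled))  # 3 crawled URLs
```

The loop terminates exactly as described above: once every discovered link has already been seen, the queue drains and the crawl stops.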
Audited URLs: the audit takes all the URLs that were crawled and applies the attack methodologies (cross-site scripting, SQL injection, etc.) of the selected policy to determine vulnerability risks. You can see the counts for both audit and crawl on the dashboard.
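To make the crawl/audit distinction concrete, here is a minimal sketch of an audit pass over already-crawled URLs. The check names, payloads, and signatures are simplified examples of the concept, not WebInspect's actual policy definitions, and the server is stubbed:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative checks: a payload to inject and the response signature
# that would suggest a vulnerability (example values only).
CHECKS = {
    "cross-site scripting": ("<script>probe()</script>", "<script>probe()</script>"),
    "sql injection": ("'", "SQL syntax error"),
}

def audit(crawled_urls, fetch):
    """Apply each check of the 'policy' to every crawled URL: inject
    the payload into each query parameter and inspect the response
    for the telltale signature."""
    findings = []
    for url in crawled_urls:
        parts = urlsplit(url)
        params = parse_qsl(parts.query)
        for i, (name, _) in enumerate(params):
            for check, (payload, signature) in CHECKS.items():
                attacked = params.copy()
                attacked[i] = (name, payload)
                attack_url = urlunsplit(parts._replace(query=urlencode(attacked)))
                if signature in fetch(attack_url):
                    findings.append((url, name, check))
    return findings

# Stub server that reflects the 'q' parameter unescaped (XSS-prone).
def fake_fetch(url):
    q = dict(parse_qsl(urlsplit(url).query)).get("q", "")
    return f"<html>results for {q}</html>"

found = audit(["http://example.test/search?q=test"], fake_fetch)
print(found)  # one XSS finding on parameter 'q'
```

This also shows why crawled and audited counts can differ from vulnerability counts: every crawled URL is audited, but only some audits produce findings.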