Re: WebInspect queries on its behavior and methods (723 Views)
Frequent Advisor
sample
Posts: 108
Registered: ‎01-31-2011
Message 1 of 3 (723 Views)
Accepted Solution

WebInspect queries

Hi,

I have a few queries on WebInspect.

 

1) How does the tool know what fields to test and what data types to put into them?

2) How automated is the testing (person clicks versus automated scripting)?

3) What are the goals of the tool: simply testing field inputs (XSS, injection, etc.), or also finding vulnerabilities across links or through a workflow?

4) Once you provide a "root page" to test, how does the tool "crawl" to other pages?

5) If the pages use Axis or Flash or some other non-JSP-based functionality, is the tool still able to "crawl" to other sub-pages and through a workflow?

6) How long does the testing typically take? What if the validations are failing early on?

 

I am in need of an immediate reply, please.

Respected Contributor
HansEnders
Posts: 613
Registered: ‎07-01-2008
Message 2 of 3 (723 Views)

Re: WebInspect queries on its behavior and methods

1)  WebInspect will attack and fuzz all possible inputs by default, whether they are visible inputs, hidden fields, query parameters, POST parameters, URL truncation, headers, or cookies.  To spend even more time on the scan, you can enable the auditing of script event sessions under the Content Analyzers scan settings panel.  You can also limit the attacks used by customizing your scan policy, e.g. disabling the Cookie Injection or Header Injection audit engines.
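
As a rough illustration of what "fuzz every input point" means, here is a minimal, hypothetical Python sketch; the payload, URL, and parameter names are invented for the example and this is not WebInspect's engine. It substitutes one test payload into each query parameter, cookie, and header of a captured request:

    # Hypothetical per-input fuzzing sketch; illustrative only.
    from urllib.parse import urlencode

    def fuzz_request(base_url, params, cookies, headers,
                     payload="'\"><script>alert(1)</script>"):
        """Yield one mutated request per input point (query param, cookie, header)."""
        for name in params:
            yield ("query", name, base_url + "?" + urlencode({**params, name: payload}),
                   cookies, headers)
        for name in cookies:
            yield ("cookie", name, base_url + "?" + urlencode(params),
                   {**cookies, name: payload}, headers)
        for name in headers:
            yield ("header", name, base_url + "?" + urlencode(params),
                   cookies, {**headers, name: payload})

    # Three query parameters, one cookie, and one header yield five attack requests.
    for kind, name, url, ck, hd in fuzz_request(
            "http://site.example/search",
            {"q": "shoes", "page": "1", "sort": "asc"},
            {"SESSIONID": "abc"},
            {"X-Custom": "1"}):
        print(kind, name, url)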

 

 

2)  Fully automated, although you can also use WebInspect as a manual testing engine if you prefer.  There are many ways to modify how WebInspect traverses the site as well as how it handles session management or navigational parameters.  Look up the following topics and features in the Help guide.

  • Simultaneous vs Sequential Crawl
  • Breadth First vs Depth First crawl (see the sketch after this list)
  • Crawl-and-Audit vs Audit-Only
  • standard Requestors settings vs Single Shared (session) Requestor
  • Restrict To Folder feature vs Session Exclusions
  • automated vs Manual Step-Mode Crawl and HTTP Editor
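
To illustrate the Breadth First vs Depth First distinction, here is a generic crawl-order sketch in Python; the toy link map and crawler are made up and are not WebInspect code. Breadth-first visits every link on one level before going deeper, while depth-first follows each branch to its end before backtracking:

    # Generic crawl-order sketch; link_map stands in for discovered pages.
    from collections import deque

    link_map = {
        "/": ["/about", "/shop"],
        "/about": ["/about/team"],
        "/shop": ["/shop/item1", "/shop/item2"],
        "/about/team": [], "/shop/item1": [], "/shop/item2": [],
    }

    def crawl(start, depth_first=False):
        frontier, seen, order = deque([start]), {start}, []
        while frontier:
            page = frontier.pop() if depth_first else frontier.popleft()
            order.append(page)
            for link in link_map.get(page, []):
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
        return order

    print(crawl("/"))                     # breadth first: /, /about, /shop, ...
    print(crawl("/", depth_first=True))   # depth first: /, /shop, /shop/item2, ...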

 

3)  WebInspect's default goal is to fully crawl and spider every link found on the scan target host and to audit (attack) each input or variable found therein.  This attacks both the platform and the application, although you can alter that by customizing or selecting the scan Policy.  WebInspect will not audit linked hosts unless you specifically add them to the Allowed Hosts scan settings.  Off-site script includes will still be fetched to enable the crawling of the specified target host, although even this behavior can be disabled under the Content Analyzers scan settings if desired.  If you require specific workflows through the application, you may need to investigate the Workflow-Driven Assessment, perhaps coupled with the Audit-Only method or the Depth First crawler setting.
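
To make the Allowed Hosts behavior concrete, a simple scope check might look like the hypothetical Python sketch below; the host names are invented, and the ".js suffix" rule is only a simplification of "fetch off-site script includes but do not audit them", not product behavior or an API:

    # Hypothetical scope check: crawl and audit only links whose host is allowed;
    # still fetch off-site script includes so parsing can continue.
    from urllib.parse import urlparse

    allowed_hosts = {"www.site.example", "site.example"}

    def classify(url):
        host = urlparse(url).hostname or ""
        if host in allowed_hosts:
            return "crawl-and-audit"
        if url.endswith(".js"):
            return "fetch-only"        # off-site script include: fetch, do not audit
        return "skip"                  # linked host not in Allowed Hosts

    for u in ("http://www.site.example/login",
              "http://cdn.partner.example/lib.js",
              "http://partner.example/portal"):
        print(u, "->", classify(u))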

 

 

4)  The Crawler starts at the base URL provided and seeks out all links found there, whether they appear in standard links or in executed script code.  If the URI is deep-linked, it will individually request each of the parent folders.  This link-based discovery is further supported by forceful-browsing techniques, such as requesting known standard directories and file names in order to expose unlinked or hidden structures.  Directory Truncation and Directory Traversal augment this forceful browsing.  New links exposed during the Audit phase of discovered pages are then further Crawled and Audited, within the limits of the Recursion scan setting.
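
As a simple illustration of the parent-folder requests generated for a deep-linked URI (the Directory Truncation idea), consider this hypothetical Python sketch; the example URL is made up:

    # Hypothetical directory-truncation sketch: given one deep link, generate
    # a request for each parent folder up to the web root.
    from urllib.parse import urlparse

    def parent_requests(url):
        parsed = urlparse(url)
        base = f"{parsed.scheme}://{parsed.netloc}"
        parts = [p for p in parsed.path.split("/") if p]
        urls = [base + "/"]
        for i in range(1, len(parts)):          # folders only, not the leaf file
            urls.append(base + "/" + "/".join(parts[:i]) + "/")
        return urls

    print(parent_requests("http://site.example/app/reports/2011/q1/summary.jsp"))
    # -> ['http://site.example/', 'http://site.example/app/',
    #     'http://site.example/app/reports/', ...]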

 

 

5)  WebInspect leads the market in development towards the automated parsing of Rich Internet Applications (RIA).  It already parses JavaScript, VBScript, AJAX, Flash, and Silverlight, and it is capable against many up-and-coming scripting methodologies.  These features are constantly updated and provided to our customers via product updates and the SmartUpdate feature.

 

 

6)  Scan time depends on your application and can be positively or negatively affected by the scan settings used.  The number of pages is not representative of the time, because WebInspect identifies unique sessions by the combination of a page and its parameters, so a highly dynamic site with only a few pages can still require several hours' worth of parsing.  Our Customer Support team can assist you with a review of the site's Crawl to determine more efficient settings, but you would be well prepared if you familiarized yourself with the available Scan Settings found under the Edit menu.
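
To make the "page plus its parameters" point concrete, a rough sketch of such a session key might look like this hypothetical Python; it is not WebInspect's actual algorithm, and the URLs are invented:

    # Hypothetical session-key sketch: the same page with a different set of
    # parameter names counts as a distinct session, which is why a small but
    # highly dynamic site can still take hours to parse.
    from urllib.parse import urlparse, parse_qs

    def session_key(url):
        parsed = urlparse(url)
        param_names = tuple(sorted(parse_qs(parsed.query).keys()))
        return (parsed.path, param_names)

    urls = [
        "http://site.example/item.jsp?id=1",
        "http://site.example/item.jsp?id=2",           # same key as the line above
        "http://site.example/item.jsp?id=1&ref=home",  # new parameter set -> new key
    ]
    print(len({session_key(u) for u in urls}))  # 2 unique sessions from 3 URLs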

 

If the validations are failing early on, yet the application is still responding, there may be no indication other than a shortened scan time and limited findings.  The only time the scan will truly halt prematurely ("Pause") is if the first response is a 404 or there is a drastically large number of null responses from the target server.  You would want to review the sessions captured, perhaps using the Sequence view and/or the Traffic Monitor, as well as individual HTTP Request/Response pairs.  If you determine there is something amiss, such as session state, custom state-keeping, or navigational parameters, then you should edit the relevant settings and repeat the scan.  Under such conditions I would suggest running a Crawl-Only scan initially, to better view any parsing issues without the clutter of the audits and fuzzing.
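
If you want a quick, scriptable sanity check for those two symptoms before digging into the Traffic Monitor, something like this hypothetical sketch over exported (status code, response length) pairs would flag them; the data format and threshold are assumptions made for the example:

    # Hypothetical triage sketch: flag a 404 on the very first response, or a
    # long run of empty/null responses from the target.
    def triage(sessions, null_run_threshold=20):
        """sessions: list of (status_code, body_length) tuples in request order."""
        if not sessions:
            return "no traffic captured"
        if sessions[0][0] == 404:
            return "first response was a 404 - check the start URL"
        longest_null_run = run = 0
        for status, length in sessions:
            run = run + 1 if length == 0 else 0
            longest_null_run = max(longest_null_run, run)
        if longest_null_run >= null_run_threshold:
            return f"{longest_null_run} consecutive empty responses - check session state"
        return "no obvious early-failure symptoms"

    print(triage([(200, 5120), (302, 0), (200, 0), (200, 4096)]))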


-- Habeas Data
Frequent Advisor
sample
Posts: 108
Registered: ‎01-31-2011
Message 3 of 3 (723 Views)

Re: WebInspect queries on its behavior and methods

Thank you very much for your quick response.
