10-28-2013 07:41 AM
I have another, hopefully rather interesting question:
How does WebInspect work?
Big topic so far, so please allow me to render this a bit more precisely.
I'm using WebInspect in Step Mode most of the time. At the moment I am just not able to grasp how this works. After I finish clicking through my applications, WebInspect starts scanning. In doing so it scans sites that are definitely password protected. I'm sure the application invalidates sessions pretty fast, but WebInspect seems able to scan them nonetheless. How do I know? WebInspect reports findings on sites that are protected. If WebInspect finished the scan stating there were no findings, I would be suspicious, but that's not the case.
Until now, we had two options for explaining this mystery:
a) We shrugged and said "hmm…Voodoo"
b) We thought maybe WebInspect saves a whole copy of the app while browsing, using some mysterious internal webserver to emulate the whole app
Unfortunately, we're planning to change the way we're using WebInspect. We plan to put it in between our functional tester and the application in Step Mode, so someone who REALLY knows the app does the browsing for us. Afterwards we run the scanning overnight.
Good plan so far, but unfortunately a question has arisen: Does WebInspect manipulate the application? Will it possibly even ruin it for further functional tests, so that we therefore need another test environment instead of letting it loose on our functional testing environment?
In case our guess b) is correct, WebInspect will not ruin the test environment, because it just uses it while browsing. Sad thing is, we don't really believe it works that way.
Could you please shed some light on this topic?
10-30-2013 08:03 AM
By only using Manual Step-Mode scan, you are limiting the test coverage of WebInspect, but that may be your goal. I find the Step-Mode works "ok" when there is a discrete area I wish to test, and I do not want to bother setting up a Workflow or List-Driven scan. Usually when I try to limit a scan to a small area, I accidentally omit key portions of the site and so I fail to scan what I actually needed scanned. ;-)
Overall, the effort it takes to define a safe, custom configuration for a completely automated scan (with session state managed) pays off in the long term. Fortify Support can assist if your site has particular details that require special scan configurations.
WebInspect actions in an automated scan:
1. Handful of probes sent (Bad Options, et al)
2. Login Macro is run, if defined.
3. Start Macros or Workflows are run, if defined.
4. Links are Crawled, unless this is an Audit-Only scan.
5A. If the scan pattern is set to Simultaneous Mode (default), then the links being Crawled are immediately being Audited.
5B. If the scan pattern is set to Sequential Mode, then the links being Crawled will be saved to be Audited later.
5C. If this is a Crawl-Only, the inputs will not be Audited.
6. The scan Recursion setting causes any new areas exposed by Audits to be turned over to the Crawler for further discovery, and anything new found there is then Audited per the above patterns.
7. If the scan is Paused and Resumed, any defined Login Macro re-runs to regain session state. Workflows and Start Macros do not re-run.
8. When no new links are found and all inputs have been audited, the scan is marked as Completed.
* User should then review the post-scan analysis pane, Recommendations. There may be items to Resolve there, and perhaps a Rescan to run with those updated scan settings.
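The crawl/audit interplay in steps 4-6 above can be sketched as a simple work-queue loop. This is an illustrative sketch only (Simultaneous mode with Recursion enabled); the function names are made up, and WebInspect's actual implementation is not public.

```python
# Toy sketch of a Simultaneous-mode scan with recursion:
# crawled links are audited immediately, and anything new an
# audit exposes is fed back to the crawler.
from collections import deque

def scan(seed_urls, crawl, audit, recursive=True):
    """crawl(url) -> list of newly discovered links on that page.
    audit(url)   -> list of findings for that page's inputs."""
    queue = deque(seed_urls)
    seen = set(seed_urls)
    findings = []
    while queue:
        url = queue.popleft()
        new_links = crawl(url)     # step 4: discover links and forms
        findings += audit(url)     # step 5A: audit as soon as it is crawled
        if recursive:
            for link in new_links: # step 6: recursion feeds discoveries back
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
    return findings                # step 8: queue empty, scan completed
```

Sequential mode would simply collect `new_links` first and run all the `audit` calls after the queue is drained; Crawl-Only would skip `audit` entirely.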
WebInspect actions in a Step-Mode scan:
1. IE Browser window opens, with its proxy set to a listener port on the localhost. This is how WebInspect "watches" the traffic, by utilizing our Web Proxy tool's technology as MitM.
2. User browses the site, with the visited sessions appearing in WebInspect.
3. User closes browser, then clicks the Finish button in WebInspect to halt the Step-Mode. It can be resumed if needed from the Step-Mode link found in the lower lefthand corner.
4. If desired, user right-clicks any parent folder in the Site Tree and chooses "Crawl From Here". Wait for automated Crawl-Only to complete.
5. When ready to Audit, user clicks the Audit button in the toolbar, selects the desired scan Policy (even though it was already in the scan settings), and then waits while the site is Audited.
* This action of Crawl, stop, then Audit is similar to running an automated Crawl-Only and then clicking the Audit button in the toolbar afterwards.
* Note that this description above assumes you have the default settings for Step-Mode under the Edit menu > Application Settings > Step-Mode, particularly the Manual Audit option.
* Note that session state may be lost during the Crawl-Only or Audit phases here. For this reason it is optimal to pre-record a Login Macro and have that specified in your scan settings for Authentication. This will ensure session state is maintained throughout.
* If you change the default settings for Step Mode from Manual Audit to Audit As You Browse, the session-state losses will be much more apparent as the Audits will be occurring in the background while you are still manually browsing the site.
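The record-and-replay pattern behind Step-Mode (step 1 above: the browser is proxied through a localhost listener) can be sketched roughly as below. All class and parameter names here are hypothetical, not WebInspect internals. Note that replaying recorded requests includes their captured Cookie headers, which is one reason a session with an idle-based timeout can remain valid long after manual browsing has stopped: each replayed request resets the idle timer.

```python
# Minimal sketch of a recording MitM proxy: every request the browser
# sends is stored, so the later audit phase can replay it.
from dataclasses import dataclass, field

@dataclass
class RecordedRequest:
    method: str
    url: str
    headers: dict = field(default_factory=dict)  # includes captured cookies
    body: bytes = b""

class RecordingProxy:
    def __init__(self, send):
        # send(method, url, headers, body) performs the real HTTP request
        self.send = send
        self.recorded = []

    def forward(self, method, url, headers, body=b""):
        """Called for each browser request: record it, then pass it on."""
        self.recorded.append(RecordedRequest(method, url, dict(headers), body))
        return self.send(method, url, headers, body)

    def replay_all(self):
        """Audit phase: re-send everything captured during browsing."""
        return [self.send(r.method, r.url, r.headers, r.body)
                for r in self.recorded]
```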
-- Habeas Data
10-31-2013 02:36 AM - edited 10-31-2013 02:37 AM
Good Morning Hans,
thanks for your reply.
Please allow me to clarify what exactly concerns me, as I seem to have failed to make this clear. It's a lot of information again, so I coloured the actual question in green. I hope this helps.
While I completely understand how to work with the Step Mode (I'm using it for applications that are too complex in terms of user interaction for automated scans), I just don't really grasp how the Step Mode itself works.
To give an example:
I used WebInspect against an online banking application. This application invalidates sessions pretty fast (luckily). So I used the Step Mode to browse the application, finished browsing by closing the browser (Firefox in that case; IE has some issues, but that's a story for another time), clicked "Finish" and started the audit afterwards by clicking "Start / Resume".
To my surprise, WebInspect still received valid responses from the application after more than two and a half hours. I never recorded a login macro (it wouldn't work, as there is some challenge-response fun in the site that requires mouse-clicking...ugly), yet WebInspect was able to interact with the application although it should have killed the sessions hours ago.
Please don't get me wrong: so far I'm really happy with this. I just don't understand, how this may be possible.
Until some days ago, this was solely a matter of curiosity and I never looked into it any further. But shortly before my last post the question arose whether we can run WebInspect against the same environment we run our functional testing on. Does WebInspect actually modify values in the testing environment and therefore possibly render it useless for functional testing?
The scenario is to have the functional testers do their work during the day, while WebInspect works as a proxy, recording their steps. After end of business, we start the audit based on the browsing done that day by the testers. Is there a risk that WebInspect modifies the test environment in a way that prevents the functional testers from continuing their work the next morning?
11-08-2013 02:30 PM
Yes, it is quite likely that WebInspect can change values in the web application to your dismay. You probably should not scan your Functional environment unless you are prepared to restore it from back-up post-scan. It all depends on the target and on your definition of "modified too much", not on WebInspect.
There is an article that discusses this risk in the Help guide, "Preparing Your System for Audit". Essentially, WebInspect's Crawler is designed to exercise all of the site, to press all buttons and fill out all forms. It submits pages multiple times with a variety of valid values and with fuzzing values. Depending on the design of the web app, this can mangle all sorts of things. In my experience, it is the Crawler engine that is most hazardous, not the Audit Engines, but it all depends on the target. Here are actual situations I have had to deal with.
* 4,000+ e-mails in Support queue
* hundreds of new postings on forum by user "12345" or "Peter Gibbons" or John.Doe@somewhere.com
* test account password reset to "foo"
* reset all the HP Jetdirect cards to factory defaults - scanned with admin credentials
* etc/passwd file over-written with "Created by WebInspect" - multi-part form inputs plus PUT file upload plus Directory Traversal
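The form-submission behavior that causes incidents like the ones above can be illustrated with a toy generator: every field of every discovered form is filled with valid-looking and fuzzing values, and each form is submitted several times. The field names and payloads below are invented for the example; they are not WebInspect's actual attack strings.

```python
# Toy illustration of crawler form fuzzing: one submission per payload,
# with every field set. A password-reset or forum-post form submitted
# this way repeatedly is exactly how test data gets mangled.
def generate_submissions(form_fields, fuzz_values=("foo", "12345", "'\"<xss>")):
    for value in fuzz_values:
        yield {name: value for name in form_fields}

# e.g. a password-reset form would be submitted three times:
submissions = list(generate_submissions(["email", "new_password"]))
```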
If you plan to scan a Production site, there are methods to approach it with a series of scans using ever-increasing levels of intrusiveness. For example: a Crawl with no forms submission, then a Crawl with forms, then scans using the Safe policy, the Quick policy, and eventually the Standard policy, et al.
-- Habeas Data
11-19-2013 04:39 AM
thank you very much for the answer and the information, that's exactly what I needed.
Btw, I don't plan to scan a production environment. I just wanted to clarify whether there will be conflicts if we use the same platform for multiple tests. Obviously "yes", so we will adapt the plan a bit.
Again: Thanks a lot!