12-13-2011 09:45 PM
I'm interested in other people's experiences with scanning Drupal and/or SharePoint sites using WebInspect.
Are there any special considerations, settings that yield the best results, or particular approaches that should be used when scanning web applications/sites built on these technologies?
We are finding that the small number of these applications we scan often take a long time (even with a small crawl count), generate a lot of 500 and 404 errors, and have connection issues that result in paused scans. This doesn't seem to be the case for the larger number of other applications we scan that don't use these technologies.
Would be interested to hear everyone's thoughts.
12-14-2011 11:01 AM
I can't speak much to Drupal. I know from past experience with SharePoint that certain files such as WebResource.axd can cause a real performance hit in WebInspect scans. That is because WebResource.axd is not a static file; it's a handler that takes web requests with parameters used to identify resource files such as images. If the file is not excluded, the problem in the scan is twofold:
1) A bunch of parameters keep getting added to WebResource.axd requests, and WebInspect is compelled to audit each parameter.
2) Each request may return content that would ordinarily be excluded from being requested. Usually the response is discarded once the MIME type is evaluated, but it still had to be downloaded, and in some cases the content gets analyzed, which can be costly and ineffective.
For SharePoint you may want to add WebResource.axd to the excluded URLs.
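To illustrate the kind of pattern I mean (this is just a sketch, not a WebInspect feature; the actual exclusion goes into the scan settings' excluded-URL list, and the helper name below is mine): a case-insensitive regex like this will match WebResource.axd, and the closely related ScriptResource.axd, regardless of query string.

```python
import re

# Hypothetical exclusion pattern: matches WebResource.axd and ScriptResource.axd
# in any casing, with or without a query string. The same regex could be pasted
# into an excluded-URL setting.
EXCLUDE = re.compile(r"/(web|script)resource\.axd(\?|$)", re.IGNORECASE)

def is_excluded(url: str) -> bool:
    """Return True if the URL should be skipped by the scanner."""
    return bool(EXCLUDE.search(url))

print(is_excluded("http://portal/WebResource.axd?d=abc123&t=634"))  # True
print(is_excluded("http://portal/default.aspx"))                    # False
```

Anchoring on the `.axd` extension plus a query-string-or-end boundary keeps the pattern from accidentally excluding pages that merely mention the filename in a parameter value.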
I'll need someone else to comment on Drupal.
12-14-2011 03:06 PM
Hi Sam, thank you very much for your reply :)
All our SharePoint scans have been taking forever to complete, so I will definitely take your suggestion and add an exception for WebResource.axd. Thanks again, Dan
12-15-2011 05:37 AM
I should have mentioned this earlier: a tactic I often employ when performing security assessments on web apps is to do a crawl-only scan first. Usually this goes pretty quickly, and it helps me determine whether there are any conditions that might cause an auditing problem. To make this determination, I quickly analyze the results of the crawl by completely expanding the tree view on the left side (right-click, Expand All). I skim the tree's representation of site discovery to answer the following questions:
1) Did the scanner discover all of the pages it should have?
If not, why not? Was authentication needed? Do I need to create a login macro or more precise web form values?
2) Did the scanner complete the workflow I wanted it to complete?
Again, if not, why not? Were more precise web form values or specific macros required?
3) Are there lots of permutations of the same page, indicating the scanner got stuck in a rut?
If so, what should the requests for the page look like? How many requests is acceptable? What parameters are required, and how many? What is the page for? Maybe I should just exclude the page and assess it in isolation?
Once I have that high-level overview, I can determine the best way to customize the scanner's settings to ensure it discovers the site as completely as possible and doesn't create conditions that will prevent an audit from completing. It may take two, three, or more crawls to fine-tune the scan settings for optimal coverage; then I launch into the audit.
Since you already have audit results, you can do something similar with your existing scans. Expand the tree and look for nodes with a lot of query and POST parameters. Lots of hits in one place typically means one of two things: complex web forms, or something the scanner is misunderstanding.
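That tree-skimming step can also be approximated offline. Assuming you can get the crawled URLs into a plain text list (a hypothetical export; the function name below is mine, not WebInspect's), a few lines of Python will surface the paths with the most request permutations, which are the natural candidates for exclusion or tighter web form values:

```python
from collections import Counter
from urllib.parse import urlsplit

def permutation_counts(urls):
    """Count distinct requests per path, most-hit first.

    A path with dozens or hundreds of hits usually means either a complex
    web form or a crawler stuck in a rut (e.g. WebResource.axd)."""
    counts = Counter(urlsplit(u).path.lower() for u in urls)
    return counts.most_common()

# Toy list standing in for an exported crawl:
crawl = [
    "http://portal/WebResource.axd?d=a1", "http://portal/WebResource.axd?d=b2",
    "http://portal/WebResource.axd?d=c3", "http://portal/default.aspx",
]
for path, n in permutation_counts(crawl):
    print(f"{n:4d}  {path}")
```

Grouping by lowercased path and ignoring the query string is a deliberate simplification: it collapses every parameter permutation onto one node, exactly the "lots of hits in one place" signal described above.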
Ask the designers questions about the web application. If it's something like SharePoint, use web searches to learn about specific problem areas. Once you are armed with knowledge of the site architecture, you can make informed decisions about how best to configure the scan settings. You will know what was included and what wasn't, and that puts you in a better position to truly evaluate the security risks within the web app.
Remember, your competitor, the hacker, won't just do a quick point-and-shoot of your apps. If he or she is really interested in gaining access, he or she will invest the time to get results.
12-15-2011 08:46 AM
Kudos, Sam, for the great points. I like the crawl-first, site-tree 1-2-3 decision logic for tailoring scan settings to the site. It is valuable advice (and a reminder to us all) that will lead to good results from WebInspect. Too often we are tempted to just press the "I feel lucky" Scan button and hope for the best. Thanks for the post.