Not working on large or complex websites?
Rated 4 out of 5 stars

I was excited to see this tool updated, but it still doesn't seem to work on our website at http://agr.wa.gov/

I told it to scan the entire website. Under the section for "wsdaNewStyle3.css", the first item it reports is "#pageLastMod", but that id is actually used on many pages, one of which is http://agr.wa.gov/Inspection/FVinspection/ContactUs.aspx

The problem is that if I find even a single thing a tool gets wrong, then I can't trust anything it produces. :( Is there perhaps something I can change in the tool's configuration to make it work properly on our website, or have I reached a limit in how complex the CSS can be for the tool to handle?

Thanks for any assistance you can provide!
-----------------------------------------------------------------------------------
Thank you for your prompt reply! That is the one thing I needed to know: it doesn't "spider" the entire site, it only scans what it can reach from the page you activate it on, more like a "breadth-first" search than a "depth-first" one, limited to a certain number of levels. (It stands to reason that a fully recursive scan would eventually reach every page linked from the homepage; pages that aren't linked from anywhere would never get any traffic anyway.) If that was explained somewhere on the instructions page, I just missed it. :(
I will see about generating a sitemap and go from there. Thanks again!

One final question since I have your attention: is there any way to tell it not to scan any domain other than the one the scan started in? On our site, for example, it also scans Google because we use their mapping widget.

This review is for a previous version of the add-on (3.01).
-----------------------------------------------------------------------------------

Recursive scanning is not supported, no. I suppose it's just semantics to describe what it does as "spidering the whole site": as you've observed, what it actually does is spider a sitemap, and it won't reach any pages that aren't linked from that sitemap.
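
If you go the sitemap route, a minimal sitemap in the standard sitemaps.org XML format would look something like the sketch below (the two URLs are just examples taken from your site; a plain HTML page of links works on the same principle, in that only pages listed or linked from the sitemap ever get visited):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- every page you want scanned must appear here, or be linked from a page that does -->
      <url><loc>http://agr.wa.gov/</loc></url>
      <url><loc>http://agr.wa.gov/Inspection/FVinspection/ContactUs.aspx</loc></url>
    </urlset>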

You can "program" the Spider by defining exclusions or inclusions based on link REL attribute values, i.e. tell it to exclude all links that have (or don't have) a particular REL value. You can also tell it not to follow off-site links, and only follow links on the same domain. Both of these options are in "Preferences > Spidering".
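
As an illustrative sketch of the REL-based rule (the "nofollow" value here is arbitrary; the rule matches whatever value you configure):

    <!-- with an exclusion rule on rel="nofollow", the Spider skips this link -->
    <a href="http://maps.google.com/" rel="nofollow">view map</a>

    <!-- links without the excluded REL value are still followed -->
    <a href="/Inspection/FVinspection/ContactUs.aspx">Contact us</a>

The "don't follow off-site links" option should also take care of the Google case you mentioned, since Google's mapping pages are on a different domain than agr.wa.gov.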