errors Rated 4 out of 5 stars

a) Installation went fine. In actual use, though... quite a few selectors get deleted even though they're needed... you have to run all the JS of the form/page. It also leaves things that aren't needed; e.g. I gave it bootstrap.css and it cleaned it up quite a bit, but by hand I made it considerably smaller. b) When I re-enabled it, the data in the page popup was lost...

If you use the "Automation" preferences you can get the extension to scan automatically while you browse and interact with pages. Click on elements, hover over things -- whatever interactions you're scripting for -- and this will help it to find selectors that are only used by JavaScript.

Ilya Rated 4 out of 5 stars

Can't find the option to save the cleaned-up CSS file. After scanning an online webpage, I can't use the "view stored data" option. (Firefox 24)

The cleaning controls are in the View Saved Data dialog, but I don't know of any reason why you wouldn't be able to access it. What happens when you try?

Ain't working in FF22 Rated 3 out of 5 stars

All the functions are greyed out; I can't access preferences or scan any page. Maybe it needs to be updated for FF22+?

This review is for a previous version of the add-on (3.01.1-signed). 

Doesn't work Mac 10.7.5 FF 23.0 Rated 1 out of 5 stars

None of the functions work: I can't even get a dialog box, preferences won't save, etc.

This review is for a previous version of the add-on (3.01.1-signed). 

Rated 1 out of 5 stars

Doesn't work on Firefox 22

This review is for a previous version of the add-on (3.01.1-signed). 

Yes, the notes above do say that. Changes in FF22 have broken the extension, and I'll release a fix as soon as I get the chance.

Helpful, but not completely. Rated 3 out of 5 stars

This will get me in the neighborhood of what I want...

The XML sitemap business doesn't work, so if that's what you want, you should wait for the next version. That's what I need -- the aggregate.

The selector bit is nice, but it would be cool to see if actual rules go un-applied, like in cases where an element can adopt rules from multiple selectors. Dust-Me Selectors never claims to be able to do this, so I can't really fault it. But it does mean that if you really need this tool, you probably need more information if you want to really clean up your CSS.

If your site is more than one page or has DOM changes, it's going to take more work to be confident that a selector is really unused.

It's good information, what little there is, and the View Saved Data interface is thoughtful. I look forward to the next version.

This review is for a previous version of the add-on (3.01.1-signed). 

Thanks for the review.

Sitemap indexes are not yet supported, but ordinary sitemap XML files should be fine. Can you tell me what the problem was, or give me a URL of the sitemap you tried to parse?

Where an element adopts rules from multiple selectors, all of the selectors will be checked, including those which only apply through inheritance. But you're right when you say that this alone is not enough to be completely sure that a selector is unused -- you have to take it as the starting point to reduce manual checking. I welcome any thoughts you have on how this could be improved.
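
As a rough illustration of that kind of check -- a sketch using standard DOM APIs, not the extension's actual code -- you could walk every stylesheet rule and test whether anything in the document matches its selector:

    function findUnusedSelectors(doc) {
      var unused = [];
      for (var i = 0; i < doc.styleSheets.length; i++) {
        var rules;
        try {
          rules = doc.styleSheets[i].cssRules;
        } catch (e) {
          continue; // cross-origin stylesheets can't be read
        }
        for (var j = 0; rules && j < rules.length; j++) {
          var rule = rules[j];
          if (rule.type !== CSSRule.STYLE_RULE) continue; // skip @media, @font-face etc.
          // Crudely strip pseudo-classes/elements (:hover, ::before) so the
          // base element can still be matched; a real tool is more careful.
          var sel = rule.selectorText.replace(/::?[a-z-]+(\([^)]*\))?/gi, '');
          if (!sel) continue;
          try {
            if (doc.querySelectorAll(sel).length === 0) unused.push(rule.selectorText);
          } catch (e) {
            // ignore selectors querySelectorAll can't parse
          }
        }
      }
      return unused;
    }

As the reply above says, a result from this kind of matching is a starting point for manual checking, not proof that a selector is dead.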

Rated 1 out of 5 stars

I'm sorry but this addon isn't "that" addon. I was expecting it to be something like Firebug.

This review is for a previous version of the add-on (3.01.1-signed). 

Erm, well, Firebug is like Firebug -- there'd be no point replicating its functionality.

What exactly did you want?

Rated 1 out of 5 stars

Oh finally figured out you have to right click BUT IT DOES NOTHING. You wasted 20 minutes of my time. BAH!

00000000 Stars

This review is for a previous version of the add-on (3.01.1-signed).  This user has a previous review of this add-on.

You LEFT click the icon in the add-on bar and it will scan the current page. Or you can RIGHT click the page and select "Spider this page", which will open the spider dialog and prefill its URL, then you press "Start" to begin spidering.

I will be publishing some proper documentation soon, but in the meantime, I hope that will get you started.

Musts to be useful Rated 1 out of 5 stars

Two things are a must for this to be useful: let us save the files, and use the custom user agent that the browser is sending with requests. I'm using "User Agent Switcher" so I can do dev work on a live site without others seeing it.

Also, it never seems to crawl all the links, so it's not working right or helpful.

Also, it doesn't seem to pick up hidden pages when you're logged in as a user -- so is it not using the browser's sessions as-is? This is important for ecommerce sites with carts and user areas. The stats it fed back were 4000 unused rules and 4 pages crawled, yet 183 pages were ignored, when they would have helped reduce the unused rules.

Not being able to download a CSS file of the used rules, or even pick up the other pages, makes this not useful at all... ATM it's just a thing to do, with no value to it. I hope it's fixed soon, because it's a great idea and I had high hopes for it.

This review is for a previous version of the add-on (3.01.1-signed). 

Firstly, the extension does not crawl sites like a search-engine robot; it only follows links from the first page you crawl, so any pages that are not linked from that will not be tested. To cover a whole site, you need to get it to crawl a sitemap (HTML or XML).
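
For illustration, crawling an XML sitemap amounts to something like the following sketch (standard XHR and DOMParser; this is not the extension's own spider code):

    function getSitemapUrls(sitemapUrl, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', sitemapUrl);
      xhr.onload = function () {
        // A sitemap XML file lists one <loc> element per page URL.
        var xml = new DOMParser().parseFromString(xhr.responseText, 'application/xml');
        var locs = xml.getElementsByTagName('loc');
        var urls = [];
        for (var i = 0; i < locs.length; i++) {
          urls.push(locs[i].textContent);
        }
        callback(urls); // each of these pages would then be tested in turn
      };
      xhr.send();
    }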

Secondly, it does test pages as the same user (i.e. as you), so it will pick up login items. But note that the spider's default behavior is to disable JavaScript on the pages it tests, so it won't pick up dynamic content; you can change that in the "Spidering" tab in Preferences.

I'm not sure what you're referring to by custom user agents -- it won't make any difference to the extension what UA you have. Or is that what you're saying, that it should?

Save a cleaned file Rated 4 out of 5 stars

Would be sensational if it was possible to save a cleaned CSS file directly to disk.

This review is for a previous version of the add-on (3.01.1-signed). 

Not working on large or complex websites? Rated 4 out of 5 stars

I was excited to see this tool updated, but it still doesn't seem to work on our website at http://agr.wa.gov/

Scan the entire website: under the section for "wsdaNewStyle3.css", the first item it shows is "#pageLastMod", but that ID is used on many pages, one of which is http://agr.wa.gov/Inspection/FVinspection/ContactUs.aspx

The problem is that if I find even a single thing that a tool doesn't work properly on, then I can't trust anything produced by that tool. :( Is there perhaps something I can change in the configuration of the tool that will make it work properly on our website, or have I reached a limit in how complex the CSS can be for the tool to work?

Thanks for any assistance you can provide!
-----------------------------------------------------------------------------------
Thank you for your prompt reply! That's the one thing I needed to know: it doesn't "spider" the entire site, it only scans what it can reach from the page you activate it on -- a kind of breadth-first search rather than depth-first, limited to a certain number of levels. (It stands to reason that at some point a recursive scan would hit every page reachable from the homepage, since unlinked pages would never get any traffic anyway.) If that was on the instructions webpage somewhere, then I just missed reading it. :(
I will see about generating a sitemap and go from there, thanks again!

One final question since I have your attention: is there any way to tell it not to scan any domain other than the one I started the scan in? On my system, for example, it also scans Google because I use their mapping widget.

This review is for a previous version of the add-on (3.01.1-signed). 

Recursive scanning is not supported, no. I guess it's just semantics to describe what it does as "spidering the whole site": as you've observed, what it actually does is spider a sitemap, and it won't reach any pages that aren't linked from that sitemap.

You can "program" the Spider by defining exclusions or inclusions based on link REL attribute values, ie. tell it to exclude all links that have or don't have a particular REL value. You can also tell it not to follow off-site links, and only follow links on the same domain. Both of these options are in "Preferences > Spidering"

Excellent work, one feature requested! Rated 5 out of 5 stars

I was about to write some JS code and I decided to see if my idea was done first. It had been, and done very well! Thanks for making this a very usable extension. One request though -- I feel I want to be able to save off a copy of the .CSS file without the unused CSS in it. Thanks for Dust-Me, again!

This review is for a previous version of the add-on (3.01.1-signed). 

That feature is coming in the next update.

Not working with sitemap index Rated 4 out of 5 stars

It seems a great tool; unfortunately it does not work with sitemap indexes, or at least gzipped sitemap indexes.

This review is for a previous version of the add-on (3.01.1-signed). 

Coming Soon!

That feature is coming in the next update.

Great Add-on, few features missing Rated 4 out of 5 stars

Please add:
- "Exceptions" feature - to exclude certain pages /sub-directories from scanning;
- Ability to scan local copy of the web site

This review is for a previous version of the add-on (3.01.1-signed). 

Not sure yet whether local scanning will be possible, since the extension won't be able to use XHR to make requests; I'll have to see whether it can work with files through the directory structure instead.

As for exceptions, that's certainly something I'll look into. In the meantime, have you looked at the Spidering preferences? You can program exceptions there based on the value of "rel" attributes in HTML sitemap links.

Dust-me selector V3 Rated 5 out of 5 stars

New version available here : http://www.brothercake.com/site/portfolio/tools/dustmeselectors/

This review is for a previous version of the add-on (2.2). 

updated to Firefox 13.5 Rated 4 out of 5 stars

I have updated the .rdf file, so the add-on should work as before up to Firefox 13.5. You can download the revised version here: https://docs.google.com/open?id=0B_CE2l4Osa3SdVlmSkxUWWNUMm1QRDJ2cDI5OFMtZw

This review is for a previous version of the add-on (2.2). 

move to opera Rated 4 out of 5 stars

This was the only plugin holding me back from upgrading to the new Firefox 10 (by the end of 2012 we'll probably be on Firefox 300.00!). To cut a long story short, I installed another browser, having found Dust-Me Selectors for Opera. I don't use Dust-Me Selectors that often, so I can switch between the two. So if you're waiting for an update before diving into Firefox 10, you may want to do what I did.

This review is for a previous version of the add-on (2.2). 

Release date latest FF compatible version Rated 5 out of 5 stars

Hi brothercake, Christmas is closing in fast... Any news on the release date of the latest FF-compatible version?

This review is for a previous version of the add-on (2.2). 

Fail Rated 1 out of 5 stars

Umm... the free version of unused-css.com is so limited it's useless.

Dust-Me: Like many here, I've never been able to get it to scan an entire site without hanging. I still don't see why it's recommended so much -- it does NOT work as advertised.

This review is for a previous version of the add-on (2.2). 

Rated 4 out of 5 stars

Thanks for this great extension. If you want to clean multiple pages at once and download the clean CSS files, you should try http://unused-css.com

This review is for a previous version of the add-on (2.2).