I discovered this amazing tool and started using it, but for the last few days I've had the following problem:
At the end of a Spider Sitemap process, the Dust-Me Data window does not pop up, so it's impossible to see the selectors and start the "clean" process. The Dust-Me Data window appears in the Windows taskbar, but it's impossible to bring it into view.
If you go "View stored data..." from the toolbar-button menu or the main tools menu, then what actually happens?
Honestly, it's a great concept and exactly what I was looking for, but I am not sure it is working correctly.
On a trial run, the add-on said a certain stylesheet had 408 unused selectors (and around 200 used), when according to the inspector it has 396 selectors in total.
I don't know if it has something to do with my version of FF (31), or if it's just outdated or broken. I'm not sure I can trust the information.
So you're saying there's a difference between the web inspector (or Firebug?) and what dust-me produces?
There are a number of possibilities, the least likely of which is a fault in dust-me. In fact it's more likely that dust-me is correct and the inspector is wrong; or at least, different -- dust-me understands a number of selectors that Firefox doesn't support, and which therefore won't show up in its inspection.
Or it might be to do with dynamic content that was present when scanned but not when inspected. Or it might be to do with a difference in how the different tools represent the CSS DOM.
I can't really say more without knowing what URL you tested it on. If you'd like to drop a line to the support email, I'd be happy to investigate.
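One concrete source of such discrepancies (an illustration, not necessarily what happened in this case): browsers drop any style rule whose selector list they can't parse, so counting selectors through the CSSOM can undercount relative to the raw file. A minimal sketch you could run in the devtools console on the affected page -- the sheet index and the naive counting regex here are assumptions, not how either tool actually counts:

```typescript
// Compare the browser's surviving selector count with a naive raw-file count.
// Assumes the stylesheet is external and same-origin (cross-origin sheets
// throw on cssRules access and can't be fetched without CORS).
async function compareSelectorCounts(sheetIndex: number): Promise<void> {
  const sheet = document.styleSheets[sheetIndex];

  // Selectors the browser kept: each comma-separated selector in each
  // style rule that survived parsing.
  let cssomCount = 0;
  for (const rule of Array.from(sheet.cssRules)) {
    if (rule instanceof CSSStyleRule) {
      cssomCount += rule.selectorText.split(",").length;
    }
  }

  // Selectors in the raw text (very naive: strip comments, take the text
  // before each "{", split on commas, ignore at-rule preludes).
  const raw = await (await fetch(sheet.href!)).text();
  const noComments = raw.replace(/\/\*[\s\S]*?\*\//g, "");
  const rawCount = (noComments.match(/[^{}]+(?=\{)/g) ?? [])
    .flatMap((s) => s.split(","))
    .filter((s) => s.trim() && !s.trim().startsWith("@")).length;

  console.log(`CSSOM: ${cssomCount}, raw file: ${rawCount}`);
}
```

If the raw count is higher, the difference is rules the browser discarded -- exactly the kind of selectors a text-level parser would still see and count.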
This is not only cool and useful but lean and tight, with an unmistakable UI. A good example of good software! Works flawlessly on my machine. It not only finds the unused selectors from my sitemap, it offers to create a copy of the stylesheet with the unused ones either commented out or deleted. Version info from the FF about page:
Mozilla Firefox for Linux Mint
mint - 1.0
This works very well functionally. It's helped me clean up CSS (though having to go to every page on a site does take a while). My only problem with it is that there is no high-res icon. When I place it in the new dropdown menu, it looks blurry. Add a high-res icon, and I'll change this to 5 stars.
EDIT: I mean the dropdown menu that shipped with Firefox starting with Australis. I'm using a six-year-old Dell, with a screen that is not in any way retina.
You mean for retina screens? I'm gonna do that soon -- just got a 2013 MacBook myself, so it really does stand out :-)
EDIT: In which case, I don't know what you mean. Can you upload a screenshot?
Excellent app, does exactly what it says. Great for cleaning cruft out of huge messy css files. Very impressed with the "spider" function.
Does nothing when I click on "view data".
I have the latest updates of Firefox and Dust-Me installed.
Complete waste of time.
Can you give me some more information to isolate what this problem might be: the precise Firefox version number, which platform you're running it on, and whether there are any errors in the console?
On the one hand, the plugin is just what's needed. It would seem you could find the unused styles on a site, copy only the used ones, and replace the styles with them. But no: for some reason it's built so that the used and unused styles can only be saved as JSON and CSV. Why saving in CSS format isn't possible is unclear.
You have to fiddle with converting the JSON format to CSS. It would, of course, be nice to be able to save straight to CSS.
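For the interim JSON-to-CSS fiddling the reviewer describes, a minimal conversion sketch -- the export shape used here is a hypothetical assumption, not Dust-Me's documented format, so adjust the traversal to match your actual export:

```typescript
// Convert a JSON export of selectors into a CSS skeleton, one empty rule per
// selector, grouped per stylesheet, for diffing against the original file.
import { readFileSync, writeFileSync } from "fs";

// Hypothetical shape: { "stylesheet.css": [".selector", "#id", ...] }
const data: Record<string, string[]> = JSON.parse(
  readFileSync("used-selectors.json", "utf8")
);

const css = Object.entries(data)
  .map(([sheet, selectors]) =>
    [`/* ${sheet} */`, ...selectors.map((s) => `${s} {}`)].join("\n")
  )
  .join("\n\n");

writeFileSync("used-selectors.css", css);
```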
Try the "clean" buttons in the "View stored data" dialog, which produce a version of the stylesheet in which the unused selectors are removed.
Definitely not reliable - I have selectors listed as unused that are actually used on various pages.
If you know that a selector listed as unused is actually used, you can right-click it and select "mark as used".
But generally, the places where Dust-Me will fail to find selector use are where those selectors are only matched by content that is generated by JS after page load, or by user interaction (e.g. a lightbox which only appears when the user clicks something). It's not possible for the spider to detect that kind of thing, so you have to scan such pages individually, using the Automation preferences and mutation events to detect dynamic changes in content.
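"Mutation events" refers to the older DOM API; here is a sketch of the same idea using MutationObserver, its modern successor -- illustrative only, not the add-on's actual code:

```typescript
// Log nodes added after page load (e.g. a lightbox injected on click).
// Any selector that only matches such late-arriving elements would look
// "unused" to a one-shot scan of the page as it was first delivered.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of Array.from(mutation.addedNodes)) {
      if (node instanceof Element) {
        console.log("Injected after load:", node.tagName, node.className);
      }
    }
  }
});

observer.observe(document.body, { childList: true, subtree: true });
```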
Dust-Me Selectors works well for me, but on the web site I'm working on it doesn't work. It says "No Stylesheets" and "Unable to parse selectors data (JSON syntax error)". Can you help me please?
This is the url
What's the web site URL?
Very good, easy to use.
a) Installation: all OK. In use, though... it deletes quite a few selectors that are actually needed... you have to run all the JS of the form/page, or it leaves things in that aren't needed. E.g. I gave it bootstrap.css: it cleaned it up quite a bit, but by hand I got it considerably smaller. b) When I re-enabled it, the Data in the page popup was lost...
Can't find the option to save a cleaned-up CSS file. After scanning an online webpage, I can't use the "view stored data" option. (Firefox 24)
The cleaning controls are in the view data dialog, but I don't know of any reason why you wouldn't be able to access that. What happens when you try?
All the functions are greyed out; I can't access preferences or scan any page. Maybe it needs updating for FF 22+?
None of the functions work; I can't even get a dialog box, preferences won't save, etc.
Doesn't work on Firefox 22.
Yes, the notes above do say that. Changes in Firefox 22 have broken the extension, and I'll release a fix as soon as I get the chance.
This will get me in the neighborhood of what I want...
The XML sitemap business doesn't work, so if that's what you want, you should wait for the next version. That's what I need: the aggregate.
The selector bit is nice, but it would be cool to see if actual rules go unapplied, as in cases where an element can adopt rules from multiple selectors. Dust-Me Selectors never claims to be able to do this, so I can't really fault it. But it does mean that if you really need this tool, you probably need more information if you want to really clean up your CSS.
If your site is more than one page or has DOM changes, it's going to take more work to be confident that a selector is really unused.
It's good information, what little there is, and the View Saved Data interface is thoughtful. I look forward to the next version.
Thanks for the review.
Sitemap indexes are not yet supported, but ordinary sitemap XML files should be fine. Can you tell me what the problem was, or give me a URL of the sitemap you tried to parse?
Where an element adopts rules from multiple selectors, all of the selectors will be checked, including those which only apply through inheritance. But you're right when you say that this alone is not enough to be completely sure that a selector is unused -- you have to take it as the starting point to reduce manual checking. I welcome any thoughts you have on how this could be improved.
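A hypothetical example of the distinction being drawn here: both selectors below count as "used" in the selector-matching sense, yet one of their declarations never actually takes effect:

```css
/* Suppose the page contains <p class="note">. Both selectors match it, so a
   selector-level tool reports both as used; but .note wins the cascade for
   color, so "color: black" below never actually applies to that element. */
p     { color: black; }
.note { color: darkred; }
```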
I'm sorry but this addon isn't "that" addon. I was expecting it to be something like Firebug.
Erm, well, Firebug is like Firebug; there'd be no point replicating its functionality.
What exactly did you want?
Oh, finally figured out you have to right-click, BUT IT DOES NOTHING. You wasted 20 minutes of my time. BAH!
You LEFT-click the icon in the add-on bar and it will scan the current page. Or you can RIGHT-click the page and select "Spider this page", which will open the spider dialog and prefill its URL; then you press "Start" to begin spidering.
I will be publishing some proper documentation soon, but in the meantime, I hope that will get you started.
It's a must to do two things: one, let us save the files; and two, use the custom user agent that is being used in requests. I am using "User Agent Switcher" to let me do dev work on a live site without others seeing it.
Also, it never seems to crawl all the links, so it's not working right or being helpful.
Also, it doesn't seem to work if you log in as a user so that it will pick up hidden pages -- so it's not using the browser's sessions as-is? This is important for e-commerce sites with carts and user areas. The stats I got back were 4000 unused rules and 4 pages crawled, yet 183 pages were ignored when they would have helped reduce the unused rules.
Without being able to download a CSS file of the used rules, or even pick up the other pages, this is not useful at all... ATM this is a nice idea with no value to it yet. I hope it's fixed to be useful soon, because it's a great idea and I had high hopes for it.
Firstly, the extension does not crawl sites like a search-engine robot; it only follows links from the first page you crawl, so any pages that are not linked from it will not be tested. You need to get it to crawl a sitemap (HTML or XML) -- see the minimal example after this reply.
Secondly, it does test pages as the same user (i.e. as you), so it will pick up login items; but note that the spider's default behavior is to disable JS on the pages it tests, so it doesn't pick up dynamic content. You can change that in the "Spidering" tab in Preferences.
I'm not sure what you're referring to by custom user agents -- it won't make any difference to the extension what UA you have. Or is that what you're saying, that it should?
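For reference, a minimal XML sitemap in the standard sitemaps.org format that a spider can crawl (the example.com URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/products/</loc></url>
  <url><loc>http://example.com/contact/</loc></url>
</urlset>
```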
Would be sensational if it was possible to save a cleaned CSS file directly to disk.
I was excited to see this tool updated, but it still doesn't seem to work on our website at http://agr.wa.gov/
Scan the entire website. Under the section for "wsdaNewStyle3.css", the first item it shows is "#pageLastMod", but that id is used on many pages, only one of which is http://agr.wa.gov/Inspection/FVinspection/ContactUs.aspx
The problem is that if I find even a single thing that a tool doesn't work properly on, then I can't trust anything produced by that tool. :( Is there perhaps something I can change in the configuration of the tool that will make it work properly on our website, or have I reached a limit in how complex the CSS can be for the tool to work?
Thanks for any assistance you can provide!
Thank you for your prompt reply! And that is the one thing I needed to know: that it doesn't "spider" the entire site; it only scans what it can reach from the page you activate it on -- kind of a "breadth-first" search instead of a "depth-first" search, limited to a certain number of levels (because it stands to reason that at some point a recursive scan would hit every page from the homepage; otherwise the unlinked pages would never have any traffic). If that was on the instructions webpage somewhere, then I just missed reading it. :(
I will see about generating a sitemap and go from there, thanks again!
One final question, since I have your attention: is there any way to tell it not to scan any domain other than the one I started the scan in? On my system, for example, it also scans Google, because I use their mapping widget.
Recursive scanning is not supported, no. I guess it's just semantics to describe what it does as "spidering the whole site", because as you've observed, what it actually does is spider a sitemap, and it won't reach any pages that aren't linked to from that sitemap.
You can "program" the Spider by defining exclusions or inclusions based on link REL attribute values, i.e. tell it to exclude all links that have (or don't have) a particular REL value; a hypothetical example follows below. You can also tell it not to follow off-site links, and only follow links on the same domain. Both of these options are in "Preferences > Spidering".
I was about to write some JS code and I decided to see if my idea was done first. It had been, and done very well! Thanks for making this a very usable extension. One request though -- I feel I want to be able to save off a copy of the .CSS file without the unused CSS in it. Thanks for Dust-Me, again!
That feature is coming in the next update.
It seems a great tool; unfortunately, it does not work with sitemap indexes, or at least gzipped sitemap indexes.
That feature is coming in the next update.
- "Exceptions" feature - to exclude certain pages /sub-directories from scanning;
- Ability to scan local copy of the web site
I'm not sure yet whether local scanning will be possible, since the extension won't be able to use XHR to make requests; I'll have to see whether it can work with files through the directory structure instead.
For exceptions, that's certainly something I'll look into. In the meantime, have you looked at the Spidering preferences? You can program exceptions there based on the value of "rel" attributes in HTML sitemap links.
New version available here: http://www.brothercake.com/site/portfolio/tools/dustmeselectors/