Speaking as someone who both builds websites, and also has very strong objections to ads, tracking, website bloat, and all the other maladies and afflictions of the modern web, I have to say that… using NoScript is not a good suggestion.
There are several reasons; some are entirely selfish, and some have to do with how your actions affect wider trends. I’ll list what I think are the major reasons.
First, though, let me say that I entirely agree with you when you say “I don’t think arbitrary third party trackers ought to be trusted to run in your browser”. I’ll go further, and say that I have no obligation to view ads, to be tracked, to view messages about how ads are necessary for a website’s revenue and continued survival, to view messages about cookies or other GDPR-related nonsense, to click on popups about those messages, to view or click on popups asking me to register an account with a website, etc., etc., etc. It’s my computer, and I have the absolute right (assuming I am breaking no laws) to view content thereon in whatever way I wish.
And I still don’t think NoScript is a good call. Here’s why.
It breaks websites in ways that may not be obvious.
It is, in some sense, the less bad scenario if you visit a site and it’s just obviously horribly broken; you click that NoScript icon, whitelist the site, and voilà—you’re good to go. But what if a site seems to be fine? It might not occur to you to whitelist it in NoScript… and you’ll be missing potentially quite important site features. If you get used to browsing with NoScript, you might not even think to turn off the extension for a website… and be deprived of the work the site designer has put in to make the website usable and useful.
The perverse irony of this is that it means that using NoScript will most reliably damage your user experience of precisely those websites that use JavaScript responsibly. For a simple example, take the website which I am, right now, using to type this comment: GreaterWrong. If you turn off JavaScript, you can still view posts and comments, and in fact the site will at first glance look just fine (this is by design; we want to support those users who, due to limitations of hardware and software, cannot run the JS we use). But you will not have any of the usability enhancements GreaterWrong offers—changing the text size, adjusting the appearance (themes & theme tweaker) and the content width, keyboard-based navigation features, etc.
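The pattern GreaterWrong follows here, a page that is fully readable as plain HTML, with JavaScript layering optional enhancements on top, is usually called progressive enhancement. A minimal sketch of the idea (hypothetical code, not GreaterWrong’s actual implementation; the text-size control is an invented example):

```javascript
// Hypothetical sketch of progressive enhancement (not GreaterWrong's real code).
// The page is fully readable with no script at all; if this script does run,
// it layers an optional text-size control on top of the working page.

// Pure helper: compute the next font size in pixels, clamped to a sane range.
function nextTextSize(currentPx, stepPx) {
  return Math.min(24, Math.max(12, currentPx + stepPx));
}

// Only touch the DOM when actually running in a browser; with scripts
// blocked (or in any other environment), the page simply stays as-is.
if (typeof document !== "undefined") {
  const button = document.createElement("button");
  button.textContent = "A+";
  button.addEventListener("click", function () {
    const current = parseFloat(getComputedStyle(document.body).fontSize);
    document.body.style.fontSize = nextTextSize(current, 2) + "px";
  });
  document.body.appendChild(button);
}
```

The key property: nothing above is load-bearing. A NoScript user sees no breakage, only the absence of the enhancement, which is exactly why they may never realize what they’re missing.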
This is, of course, directly a problem for you, as an individual, but it gets worse due to general user-population trends. Widespread use of NoScript among those who care about issues related to web bloat (and related issues) would (and, I strongly suspect, already is) weaken or even largely remove the pressure on web designers to minimize at least the most egregious of such anti-patterns. If I know that weighing down my site with a bunch of JavaScript means that many users will simply leave the site and never come back, I will put effort into optimization. But if I know that anyone who cares about performance will in any case have NoScript installed, then why not add framework after framework and tracker after tracker?
Note that I am not saying “and therefore you must suffer the burden of websites filled with malicious and bloated JavaScript… For The Common Good™”! Rather, I am saying that the right way to fight these undesirable trends is not NoScript, but rather something else (see below).
It is a blunt instrument, and that leads to security lapses.
If you find that a website doesn’t work with NoScript enabled, you can, indeed, laboriously whitelist one domain at a time, after manually checking the provenance of each one, until the site begins to work. But, for one thing, most people will not do this; they will be less careful, and will whitelist whatever it takes to make a site work. And even if you do carefully and manually whitelist every script domain one by one, you will inevitably let in some “naughty” scripts by mistake. Either way, security lapses will result.
There’s a better way.
You want to block ads. Good; block ads, not JavaScript.
You want to block trackers. Good; block trackers, not JavaScript.
You want to avoid third-party assets. Good; avoid third-party assets, not JavaScript.
In short, fix the specific problems you have, rather than simply disabling a big chunk of your browser.
Here’s how.
uBlock Origin (blocks ads)
Decentraleyes (protects against tracking via centralized asset distribution)
Facebook Container and Google Container (isolate your activity on said sites from the rest of your browsing) (Firefox only)
Nano Defender (enhancement for uBlock Origin; fights ad-blocker-blockers)
Privacy Badger (blocks trackers of various sorts)
AlwaysKillSticky (gets rid of sticky page elements) [disclosure: I’m the author of this one]
Anti-Paywall (exactly what it sounds like)
Advantages of this approach:
Automatic instead of manual (no “spend 3 minutes manually whitelisting each of 50 domains, hoping you didn’t accidentally enable a bad one”)
No breaking of website functionality
Much more comprehensive protection than NoScript (together, these add-ons will also block tracking pixels, cookies, centrally distributed CSS or images or other assets that could track you, etc.)
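As a concrete illustration of “block the specific problem, not JavaScript”: uBlock Origin accepts static filters in an Adblock-style syntax, so a rule can target exactly the offending resource. A couple of illustrative rules (the domains here are made up):

```
! Block a hypothetical tracking script, but only when embedded as a third party
||tracker.example^$third-party

! Block a hypothetical tracking pixel on one particular site
||analytics.example/pixel.gif$image,domain=news.example
```

In practice you rarely need to write such rules yourself; the maintained filter lists these add-ons ship with do it for you, which is the whole point of the “automatic instead of manual” advantage above.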
To add to what I say in the parent comment, I want to comment a bit more about the security issue. From the OP:

… if a page isn’t working right, you simply click the NoScript icon and whitelist any domains you trust, or temporarily whitelist any domains you trust less.
But how the heck do I know what domains I trust? This security model requires me to know, and think, about what domains are “trustworthy” and what ones are not… in the face of constant, highly lucrative (and therefore highly incentivized) efforts at deception and treachery on the part of the ad-tech companies (and other bad actors)!
Which of the following is a better approach to security:
Personally undertake to decide which external scripts and assets come from “trustworthy domains” (and are therefore safe… presumably?), for every website I visit that implements some potentially desirable JavaScript functionality I would want to enable.
Delegate that vigilance to the Electronic Frontier Foundation.
To me, this does not seem to be a difficult question.
Just wanted to say thanks, this is helpful.
Small addition: uBlock origin also blocks most kinds of trackers, so depending on your block list choices you might not need an additional plugin.
I also suggest Firefox Reader View (or Just Read for Chrome). These will render only the main article on a page, stripping away all the extra junk. Many commercial sites have clickbait articles, autoplay videos, and other nonsense alongside the main article on a page. Ad Blockers and security add-ons will leave all this distracting noise alone. Reader View renders precisely what you want to see (usually).
But like NoScript, Reader View is a blunt instrument and will throw out useful site features. I only enable it in the rare case where there’s a worthwhile article on a crappy site.
This is a great response and I’m glad to have read it. However I think you miss one important disadvantage of your approach: These alternatives are mostly blacklists, and so they become less useful as you get further into the less-trafficked corners of the web, which is also where you’re most likely to hit, e.g., invisible compromised resources.
I’ve also been surprised at how little “whitelist fatigue” I’ve gotten. I would have naively expected to get tired of whitelisting domains, but in practice it’s continued to feel freeing rather than obnoxiously attention-consuming, and site functionality is almost always easy / obvious to enable properly. It’s possible that sometimes I miss intended functionality, but I doubt that this comes close to outweighing the benefits.
Edit: the following paragraph misunderstands Said’s comment and doesn’t address the point that it was meant to; apologies.
Finally, I don’t buy the argument about incentivizing web authors. If trackers work less well, there is obviously less incentive to use them. If the only thing holding back authors from adding trackers willy-nilly is user annoyance at page bloat, then it’s clearly not enough, and so telling people to just go on shouldering that annoyance to ensure that the annoyance is minimized seems like privileging second-order effects that I would expect to be small.
telling people to just go on shouldering that annoyance to ensure that the annoyance is minimized

With respect, please re-read my comment, because not only did I not say anything like this, I specifically pointed out that I am not saying anything like it!
Furthermore, the argument from incentives was not specifically (or even mostly) about trackers; it was about bloat in website design / features. Frankly, it does not seem to me like you have given due consideration to what I wrote in that section of my comment…
However I think you miss one important disadvantage of your approach: These alternatives are mostly blacklists, and so they become less useful as you get further into the less-trafficked corners of the web, which is also where you’re most likely to hit, e.g., invisible compromised resources.

This is an interesting counterpoint, certainly. I am curious to what extent this is true in practice, and whether you make this claim on the basis of experience, or supposition; do you have examples?
You’re right; I’m sorry that I didn’t read your comment sufficiently carefully.
The reasoning there is purely my expectation and isn’t based on data or particular experience.
Hello! I like both the above original post and your comment, both of which describe the general privacy/bloat problem and measures against it.
But, aside from all the ads and tracking… what do you say about blocking JS to avoid shady malicious code running in your browser, delivered from a site you would normally trust, but which was somehow manipulated and may someday find a way out of its sandbox (browser bugs, PDF vulnerabilities, etc.)? OK, one could disable plugins, but you never know what’s next. How would you go about this issue with your approach? Or did I miss something? I’m not an expert.
I hope my point and question is coming across, despite my English. ;)
Well, for one thing, the problem of “code delivered from a site you would normally trust but that is now malicious” is the same as the problem of “being mistaken about what sites to trust (and so accidentally trusting a site that was malicious all along)”.
As I understand your question, and as I understand the web and its technologies, the problem basically is that “if you run code (JavaScript) in your browser—code that is provided by arbitrary people on the internet—this is fundamentally a vulnerability”. And that’s true. There’s no solution to that basic fact other than “don’t run JavaScript”.
The matter really depends on how much you trust your browser vendor (Google, Apple, or Mozilla) to secure the browser against exploits that could harm/steal/pwn your computer or your data. If you trust them to a reasonable degree, then precautions short of “disable JavaScript entirely” suffice. If you really don’t trust them very much at all, then disable JavaScript (and possibly take even stricter measures to limit your exposure, such as running your browser in a VM, or some such thing; Richard Stallman’s browse-by-email workflow would be an extreme example of this).
I disagree with your argument. NoScript is an excellent tool and I use it on my personal browsers in addition to uBlock Origin.
Yes, it disables JavaScript and sometimes can break webpages. In those cases I’ll check my console and begin enabling JavaScript on the host page and any obvious CDNs it may be using. If after a couple of attempts the page still won’t display content, I’ll usually just leave the site as it’s not worth it.
On pages that actually do require JavaScript for display (simulations, visualizations, etc), I’ll let it run.
I’m curious as to why you think disabling JavaScript is something to avoid. It’s executing code, consuming power and occupying my CPU and RAM, often for no other purpose other than reporting my behavior back to some third party host. Why would I want to allow that?
Yes, it disables JavaScript and sometimes can break webpages. In those cases I’ll check my console and begin enabling JavaScript on the host page and any obvious CDNs it may be using. If after a couple of attempts the page still won’t display content, I’ll usually just leave the site as it’s not worth it.

Over half of my comment, by word count, is dedicated to addressing, and deconstructing, specifically this argument, and explaining both of the problems with it. Meaning no offense, but I am having a hard time believing that you read what I wrote; it rather seems like you instead skimmed my comment, pattern-matched to simplistic arguments you’ve read elsewhere, and responded to that straw version. I can’t really say anything in response without rehashing exactly what I wrote, because what I wrote is already a rebuttal of your points!
May I respectfully ask that you re-read my comment? If you still do not think that your arguments are addressed, then I suppose I have nothing further to say.
I’m curious as to why you think disabling JavaScript is something to avoid.

I explained this in my comment. See above.
It’s executing code, consuming power and occupying my CPU and RAM, often for no other purpose other than reporting my behavior back to some third party host. Why would I want to allow that?

Once again, the specific JavaScript that is running “for no other purpose other than reporting my behavior back to some third party host” is, indeed, that which you absolutely should be blocking. I explained that in my comment, as well, and I gave a detailed explanation of how to do precisely that.
Said’s comment specifically addresses this.