Application-monitoring company Sentry recently wrote about their experience removing cookies on their site, which allowed them to drop their cookie banner. I’m glad they wrote this up! But it also illustrates why many sites have cookie banners even when they don’t seem to be doing anything risky.
I was curious whether their site actually did avoid setting any cookies, and visited it in a private browsing window with third-party cookies enabled. As I browsed around the site I noticed several cookies being set:
- Loading changelog.getsentry.com I received first-party cookies _GRECAPTCHA, ph_phc_UlHlA3tIQlE89WRH9NSy0MzlOg1XYiUXnXiYjKBJ4OT_posthog, and _launchnotes_session, plus a third-party cookie _GRECAPTCHA on recaptcha.net.
- Loading try.sentry-demo.com I received first-party cookies sentrysid, sc, and sudo.
- Loading sentry.io/auth/login I received first-party cookies __stripe_mid, __stripe_sid, session, and sentry-sc, plus a third-party cookie m on m.stripe.network.
- Loading docs.sentry.io/product/performance/performance-video I received a third-party cookie __cf_bm from player.vimeo.com.
It’s possible there are others; I didn’t run any sort of exhaustive search.
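If you want to do this kind of check yourself, the attributes that matter (lifetime, path, SameSite) are all visible in the Set-Cookie headers a site returns. A minimal sketch using Python’s standard library; the header value below is invented for illustration, not an actual Sentry response:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header value (sample data, not a real response)
# to inspect the attributes relevant to an e-Privacy analysis.
header = "sentry-sc=abc123; Path=/; Max-Age=31536000; SameSite=Lax; Secure"

cookie = SimpleCookie()
cookie.load(header)
morsel = cookie["sentry-sc"]

print(morsel.value)        # abc123
print(morsel["path"])      # /
print(morsel["max-age"])   # 31536000 -- one year, in seconds
print(morsel["samesite"])  # Lax
```

In a real session you’d feed this the Set-Cookie values from your browser’s network inspector or an HTTP client.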
Now, Sentry does say:
For clarification, Sentry has removed all cookies, other than essential cookies that do not require site visitor consent. Work with your legal team to better understand which of your cookies qualify as essential cookies under the laws that apply to you.
And from a perspective of removing cookie banners this is right: you don’t need to completely stop using cookies, you just need to limit your use of cookies to cases that are “strictly necessary in order to provide an information society service explicitly requested by the subscriber or user”. For example, per the official guidance some things that are ok include setting a cookie when someone logs in, chooses a setting like “dark mode”, or adds an item to their shopping cart: you can’t do what the user asked for without cookies. But even then, you have to be quite careful to stay within the narrow limits of this exception: the guidance clarifies that you should generally use an expiration of a few hours or configure the cookie to be deleted when the user closes their browser.
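For a concrete sense of what staying within the exception looks like, here is a sketch of a strictly-necessary cookie set the way the guidance suggests: a short lifetime and a narrow scope. The cookie name, path, and lifetime are illustrative choices of mine, not anything from Sentry or the guidance itself:

```python
from http.cookies import SimpleCookie

def cart_cookie(cart_id: str) -> str:
    """Build a Set-Cookie header for a strictly-necessary shopping-cart
    cookie: a few hours of lifetime, scoped to where it's needed.
    (Name, path, and lifetime are illustrative, not prescribed.)"""
    c = SimpleCookie()
    c["cart"] = cart_id
    c["cart"]["max-age"] = 4 * 60 * 60  # four hours, not a year
    c["cart"]["path"] = "/shop"         # only sent under /shop
    c["cart"]["secure"] = True
    c["cart"]["httponly"] = True
    c["cart"]["samesite"] = "Lax"
    return c["cart"].OutputString()

print(cart_cookie("abc123"))
```

Omitting Max-Age entirely would instead make it a session cookie, deleted when the user closes their browser, which the guidance also accepts.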
Looking over the cookies they’re currently setting, it doesn’t look to me like they fall within this exception:
- I don’t see any reason why the changelog page would need to set cookies. When I asked they said this was a known issue that they were working on fixing.
- While it’s possible that the Sandbox implementation does fundamentally require cookies, because it’s simulating a complex application you’d normally log in for, when you first load the sandbox none of that is accessible. It has a modal dialog asking for your work email address, and without it none of the functionality works. They should at least postpone setting the cookies until you’ve submitted the form.
- I don’t see why a login page requires setting any cookies: while you do need to set a cookie if the user actually logs in, I just loaded the page. When I asked they brought up CSRF prevention, but that’s a bit of a strange one. CSRF is an attack where an attacker site directs the browser to submit a form to a victim site. If the victim site isn’t taking steps to fight CSRF then it won’t understand that the user didn’t actually initiate this request, and the attacker can use this to take actions as the victim. But a CSRF vulnerability on a login form would imply that the attacker had already successfully phished the user, at which point they can already initiate any action they wish on behalf of the user.
Even if these cookies are needed to prevent CSRF in a way I’m not seeing, I don’t see why you would need one like sentry-sc that has a one-year expiration and SameSite=Lax, opting in to being shared in third-party contexts. And since none of these use Path to scope themselves to just the login form, once you’ve visited that page you’ll be sending cookies on every future pageview anywhere on the site.

The Vimeo third-party cookie __cf_bm is more debatable. It’s documented as an essential cookie “which is part of Cloudflare’s Bot Management service and helps mitigate risk associated with spam and bot traffic.” I think this is a grey area: I can’t find any explicit official guidance on whether using cookies to detect bots fits within the e-Privacy exemptions.
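The Path point above is worth making concrete: browsers decide whether to send a cookie using the path-match rules from RFC 6265 §5.1.4, and the default Path=/ matches every page. A small sketch of those rules (my own simplified implementation, assuming standard path-matching):

```python
def path_match(request_path: str, cookie_path: str) -> bool:
    """RFC 6265 section 5.1.4 path-match: is a cookie scoped to
    cookie_path sent with a request to request_path?"""
    if request_path == cookie_path:
        return True
    if request_path.startswith(cookie_path):
        if cookie_path.endswith("/"):
            return True
        if request_path[len(cookie_path)] == "/":
            return True
    return False

# A cookie scoped with Path=/auth/login stays on the login pages...
print(path_match("/auth/login", "/auth/login"))  # True
print(path_match("/pricing", "/auth/login"))     # False
# ...while the default Path=/ matches every page on the site.
print(path_match("/pricing", "/"))               # True
```

So a login cookie with Path=/auth/login would ride along only on login-related requests, instead of every pageview.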
Overall I have a lot of sympathy for Sentry here. They’re trying to be careful in how they use cookies, and I think the cookies they have are mostly very reasonable and shouldn’t require a cookie banner. On the other hand they do seem to me, as a lay person, to be out of compliance with the e-Privacy directive.
I think this is a good demonstration of why companies generally do choose to stick with cookie banners even though they’re annoying. Technically-inclined users will say that as long as you’re not doing anything nefarious you don’t need to ask for consent, but the exceptions the regulations set out are really quite narrow and it’s easy to go wrong.
(I’d love to see the regulations changed here: there’s no reason to single out storing data on the client for special treatment. The general protections in the GDPR offer a much more consistent approach to data privacy, and I’m not convinced e-Privacy adds anything useful anymore. And then, of course, a regulation that leads to cookie banners and other consent walls everywhere that people mostly click through without reading is clearly not working well.)
Why are they annoying?
Some websites—rare, delicate lotus flowers—bother me with a small, horizontal banner on the bottom of the page. When I click “accept”, it actually goes away forever on that browser for that website.
Many others, instead, slap a bulky framed message in the middle of the page, often 2-4 seconds after most of the loading, just to interrupt my initial interactions with the page in the most annoying way possible.
Is there a reason for that? Is it out-of-control, overconservative legal worry?
They are annoying if you don’t just accept the cookies. I always reject all non-essential. Typically that is a three-click process. It’s annoying when it’s the fifth site in a row.
I’ve found it easiest to just add a uBlock rule (if uBlock missed it) or enable AKS on that site, and never need to see them again.
Raging against the tyrannical bureaucrats telling them what they can and can’t include on their own website by including the banner in the most annoying way possible? Kinda like the 10¢ plastic bag tax at grocery checkouts that tells the customer exactly why they have to pay the tax and makes them count out how many bags they’ve used.
I think this is unlikely, because it is not in a website’s interest to annoy its users, and they don’t otherwise gain anything from bigger banners.
It is if the user feels that annoyance towards the regulator instead of the website developer
I haven’t personally needed to pay super close attention to the e-Privacy regulations but I thought they exclusively focused on cookies as a specific technology? The web has client-side data storage that is not cookies, and cookies are more privacy invasive than simple client-side storage because they’re also automatically transmitted to the server on every matching request without any further interaction from either the user or the website.
It seems to me that it’s much easier to respect user privacy when using other mechanisms for client-side storage and for transmitting data from the client to the server. I’ve also generally found that the cookie-free approaches tend to result in more maintainable and debuggable code, without incurring additional overhead for many use cases. (An exception: document-centric use cases where the documents themselves are access controlled generally do benefit from cookies, and low-JS sites have more legitimate use for a non-JS mechanism for storing and transmitting authentication information; but both of those seem to be somewhat niche use cases relative to the current web as a whole.) Thus, I’m a bit annoyed that there hasn’t been more movement across the industry to migrate from cookies toward other more-targeted technological solutions for many use cases requiring data storage on the client — particularly for those use cases that would be legitimate banner-free uses of cookies according to e-Privacy.
For better or worse, the e-privacy directive is not specific to cookies: it covers any form of client side data storage. For example, “Users should have the opportunity to refuse to have a cookie or similar device stored on their terminal equipment.”
Good to know, thanks!
(And thanks in particular for linking to the original text — while your excerpt is suggestive, the meaning of “similar device” isn’t entirely clear without seeing that the surrounding paragraph is focused on preserving privacy between multiple users who share a single web-browsing device. I feel like that is still a valid concern today and a reasonable reason for regulations to treat client-side storage slightly differently from server-side storage, even though it’s not most people‘s top privacy concern on the web these days and even though this directive doesn’t resolve that concern very effectively at all.)
To the extent that the goal is to give privacy between multiple users, a way to explicitly say “this browser is just for me” and then not see cookie banners would be pretty great.
Once you’re willing to mandate browser features to bolster privacy between multiple users on the same device, I’d get rid of website-implemented cookie banners altogether (at least for this purpose) and make the browser mandate more robust instead. I could see this as a browser preference with three mandated states (and perhaps an option for browsers to introduce additional options alongside these if they identify that a different tradeoff is worthwhile for many of their users):
Single user mode: this browser (or browser profile) is only used by one user, accept local storage without warning under the same legal regime as remote storage of user data.
Shared device mode: this browser (or browser profile) is shared among a constrained set of users, e.g. a role-oriented computer in an organization or a computer shared among members of a household. Apply incognito-inspired policies to ensure that local storage cannot outlive a particular usage session except for allowlisted domains, and require the browser to provide a persistent visual indication of whether the current site is on the allowlist (similar to how browsers provide a persistent indication of SSL status).
Public device mode: this browser (or browser profile) is broadly available for use by many people who do not necessarily trust each other at all, e.g. a machine in a school’s computer lab or in a public library. Apply the same incognito-inspired policies as in shared device mode, but without the ability to allowlist specific sites that can store persistent cookies. The browser must also offer the ability for the computer administrator to securely lock a browser in this mode to prevent untrusted users from changing the local-storage settings.
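The three modes above could be summarized as a small policy table. Everything here is a hypothetical design sketch of my proposal, not any browser’s actual API:

```python
from dataclasses import dataclass
from enum import Enum

class DeviceMode(Enum):
    SINGLE_USER = "single"   # one user per browser profile
    SHARED = "shared"        # constrained set of users, allowlist allowed
    PUBLIC = "public"        # many mutually-untrusting users

@dataclass
class StoragePolicy:
    persistent_storage: bool   # may storage outlive the session by default?
    allowlist_supported: bool  # can specific sites be allowlisted?
    admin_lock_required: bool  # must admins be able to lock this mode?

# Hypothetical mapping of the three proposed modes to their policies.
POLICIES = {
    DeviceMode.SINGLE_USER: StoragePolicy(True, False, False),
    DeviceMode.SHARED: StoragePolicy(False, True, False),
    DeviceMode.PUBLIC: StoragePolicy(False, False, True),
}

def may_persist(mode: DeviceMode, site: str, allowlist: set[str]) -> bool:
    """Would this browser mode let `site` keep storage past the session?"""
    policy = POLICIES[mode]
    if policy.persistent_storage:
        return True
    return policy.allowlist_supported and site in allowlist
```

The point of the table form is that the user makes one choice per device, and every site inherits it, rather than every site asking separately.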
I don’t think you need to mandate browser features: a big reason we don’t have this sort of thing today is that even if the browser offered this setting it wouldn’t be enough to satisfy the regulation. The regulation could say something vaguely like “web browsers may offer their users a choice between three profiles [insert your description] and communicate to websites which setting the user has chosen. If a website receives this information, it may save information to the client device etc”
In theory, yes. Do you have particular knowledge that things would likely play out as such if the regulations permitted, or are you reasoning that this is likely without special knowledge? If the former, then I’d want to update my views accordingly. But if it’s the latter, then I don’t really see a likely path for your regulatory proposal to meaningfully shift the market in any way other than market competition forcing all major browsers to implement the feature, in which case it doesn’t practically matter whether the implementation requirement has legal weight.
I think it does matter? It’s not clear that browsers can be required to do this, and even if it were legal to require them to it’s not a good precedent. On the other hand, browsers working together with regulators and site owners to make a new technical standard (to communicate shared browser status) + rules (so it’s legal to use the technical standard to not prompt about cookies) so users can have a better experience would be clearly legal and a great precedent.
(I have maybe a bit of special knowledge, in that I worked with a browser team and regulatory lawyers 2020-2022 but I’m not claiming to be an expert on how regulations and browsers change!)
Fair enough, although I put a little less weight on the undesirable precedent because I think that precedent is already largely being set today. (Once we have precedents for regulating specific functionality of both operating systems and individual websites, I feel like it’s only technically correct to say that the case for similar regulation in browsers is unresolved.)
Also, the current legal standard just says that websites must give users a choice about the cookies; it doesn’t seem to say what the mechanism for that choice must be. The interpretation that the choice must be expressed via the website’s interface and cannot be facilitated by browser features is an interpretation, and I’d argue against that interpretation of the directive. I don’t see why browsers couldn’t create a ‘Do-Not-Track’-style preference protocol today for conveying a user’s request for necessary cookies vs all cookies vs an explicit prompt for selecting between types of optional cookies, nor any reason why sites couldn’t rely on that hypothetical protocol to avoid showing cookie preference prompts to many of their users (as long as the protocol specified that the browsers must require an explicit user choice before specifying any of the options that can skip cookie prompts; defaulting users to “necessary cookies only” or the all-cookies-without-prompts setting would break the requirement for user choice).
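Server-side, honoring such a preference protocol would be trivial. A sketch, stressing that the header name and values here are entirely hypothetical; no such protocol exists today:

```python
from enum import Enum

class CookiePref(Enum):
    """Hypothetical values for a Do-Not-Track-style preference header
    (call it `Cookie-Consent`) -- this protocol does not actually exist."""
    NECESSARY_ONLY = "necessary"
    ALL = "all"
    PROMPT = "prompt"

def needs_banner(headers: dict[str, str]) -> bool:
    """Show the cookie banner only when the (hypothetical) header is
    absent or explicitly asks for an in-page prompt."""
    value = headers.get("Cookie-Consent")
    if value in (CookiePref.NECESSARY_ONLY.value, CookiePref.ALL.value):
        return False  # the browser already recorded an explicit choice
    return True

print(needs_banner({"Cookie-Consent": "necessary"}))  # False
print(needs_banner({}))                               # True
```

The important constraint, as noted above, is that a browser could only send `necessary` or `all` after an explicit user choice; defaulting users into either would break the consent requirement.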
But we don’t see initiatives like that, presumably in large part because browsers don’t expect to see much adoption if they implement such a feature, especially since it’s the type of feature that requires widespread adoption from all parties (browser makers, site owners, and users) before it creates much value. Instead, lots of sites show cookie banners to you and me while we browse the web from American soil using American IP addresses, seemingly because targeting different users with different website experiences is just too sophisticated for many businesses. They evidently see this as a compliance requirement to be met at minimal cost rather than prioritizing the user experience. I don’t see how the current dynamic changes as long as websites still see this purely as a compliance cost to be minimized and as long as each website still needs to maintain their own consent implementations?