A few days ago, I made a social media post about Google vs. the Open Web. It received some responses, so I’ll reproduce it below with some additional comments.
Google is trying to kill the Open Web.
Using the proposed “Web Environment Integrity” means websites can select on which devices (browsers) they wish to be displayed, and can refuse service to other devices. It binds client side software to a website, creating a siloed app. (See Web Environment Integrity on GitHub.)
This penalizes platforms on which the preferred client side software is not available.
This is an issue for accessibility and inclusion, in particular when the reason the software is not available is tied to the needs of marginalized groups, such as when poverty makes it impossible to own sufficiently modern devices, etc.
“Web Environment Integrity” is a deeply antisocial proposal, and goes counter to the design principles of the web.
In all honesty, this move by Google is hardly surprising. They’ve been trying to capture the web as their own platform for a long time. This is just the latest battle.
But it also marks a particularly perverse point, in how the proposal admits it exists primarily to extract value from people. People mining at its worst.
Remember when their motto was “Don’t be Evil?”
Analysis of the Proposal
Some details on the proposal may help here.
The proposal suggests that websites should be able to request an attestation from the browser about its “integrity”. Such attestations are to be provided by external agents, which – presumably – examine the browser and its plugins, and issue an approval only if those checks pass.
The attestation is sent back to the website, which can now decide to deny service if the agent did not give approval.
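As a rough illustration of the gatekeeping logic this enables, consider the following sketch. All names here (`AttestationToken`, `serve_request`, the HMAC scheme) are invented for illustration; the proposal specifies a browser API and token format, not this code.

```python
import hashlib
import hmac
from dataclasses import dataclass

# Stand-in for a key shared between the website and the attester.
ATTESTER_KEY = b"key-shared-with-the-attester"

@dataclass
class AttestationToken:
    verdict: str      # the attester's judgement, e.g. "approved" or "rejected"
    challenge: bytes  # the challenge the website originally sent
    signature: bytes  # the attester's MAC over verdict and challenge

def sign(verdict: str, challenge: bytes) -> bytes:
    # Toy stand-in for the attester's signature over its verdict.
    return hmac.new(ATTESTER_KEY, verdict.encode() + challenge, hashlib.sha256).digest()

def serve_request(token: AttestationToken) -> str:
    # Reject forged tokens, then refuse service unless the opaque attester
    # approved the client's environment -- this last step is the crux of
    # the concern: the website never learns *why* approval was withheld.
    if not hmac.compare_digest(sign(token.verdict, token.challenge), token.signature):
        return "403 Forbidden (invalid token)"
    if token.verdict != "approved":
        return "403 Forbidden (environment not approved)"
    return "200 OK"
```

The point of the sketch is that the deny decision is entirely in the hands of the website-plus-attester pair; the user has no say in, and no visibility into, what the attester checks.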
Ostensibly, this exists to assure the user that the environment has not been tampered with in any way. The described use cases, however, make fairly clear that this feature exists for the benefit of the business.
In particular, the proposal suggests that “Google Play” could provide such attestations, and also provides an example case which intends to ensure that ads are served only to legitimate users, not to automated processes.
These two points are not raised together. But put them together, and you find the following underlying problem:
- Advertisers want to reduce costs.
- Website owners wish to display ads.
- Google’s ad network charges per impression.
- Bots create impressions – which advertisers pay for, but which no human ever sees.
The proposal effectively provides a solution for Google’s advertising problem, and tries to couch it in more user friendly terms. The above scenario is the closest to a problem they describe outright.
The solution, expressed in the proposal, is to exclude bots via attestations, such that ads generate impressions only with logged-in Google Play users.
In general, bots are pretty easy to exclude. They usually advertise themselves via a user agent string. Yes, that string can be faked – but it seems highly unlikely that bots with faked user agents create so large a number of impressions that Google has to use this route against them. Looking at my own webserver logs, it is immediately clear which requests come from bots, just from the log information.
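That kind of log-based filtering can be sketched in a few lines. This is a minimal illustration, assuming a combined-format access log; the bot markers are examples, not an exhaustive list.

```python
import re

# Common self-declared bot markers; illustrative, not exhaustive.
BOT_MARKERS = ("bot", "crawler", "spider", "curl", "wget", "python-requests")

def looks_like_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

# Matches the request, status, size, referrer and user-agent fields of a
# combined-format access log line; the user agent is the last quoted field.
LOG_LINE = re.compile(r'"[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"')

def bot_requests(log_lines):
    """Yield the log lines whose user agent self-identifies as a bot."""
    for line in log_lines:
        match = LOG_LINE.search(line)
        if match and looks_like_bot(match.group("ua")):
            yield line
```

Anything this simple already catches the bulk of well-behaved crawlers, which is exactly why user-agent spoofing at ad-fraud scale seems an unlikely motivation.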
If bots are not the actual problem, then what is?
The agent providing the attestation is free to use whichever means to approve or disapprove of a browser. That includes examining whether the browser runs ad blockers.
Given how advertising networks track users, and user tracking is a practice that comes under increasing criticism, ad blockers are also gaining in popularity. Security experts regularly recommend the use of ad blockers as a cyber security measure – as ads can be used to side-load malware into an otherwise legitimate website.
What Google is really after is ad blockers.
The downside of this approach is that it opens up a door for arbitrary abuse. Websites can refuse service unless you install their proprietary data collection agent. Websites can refuse service if you use the wrong browser – we’d enter the browser wars of the late 90s, with renewed effort.
The day this proposal gets accepted is the day the Open Web is set back by decades.
In The Future of the Internet – And How to Stop It, Jonathan Zittrain distinguishes, starting with the telephone network, between “appliances” and “generative systems”.
An “appliance” works like any other household appliance, like a toaster. It has one primary function, and all other functions it may offer are at best mild variations of the primary one. It toasts.
Zittrain lists the PC and the internet as examples of generative systems. Generative systems are not necessarily as complete in functionality as an appliance – they provide some basic functionality, but with no specific primary purpose. The purpose is left to the user. Another way of phrasing this is to call these things tools, or crafting materials.
Maybe it’s worth quoting the text of the image at this point:
The Open Web is a collection of user-developed, non-proprietary frameworks for building services, emphasizing interoperability and the balance of data access and ownership between providers and end-users.
Generative systems are significantly more impactful than appliances precisely because they leverage the user’s imagination to address their own needs. They crowdsource meaning at global scale. This is what makes the internet so powerful.
Attestations from an agent doing arbitrary checks effectively turn the web into an appliance – or, to be more specific, they turn the browser into an extension of an appliance website.
Of course website owners are free to build appliances. They already are doing so. But this reduces the usefulness of “the web” step by step, until the generative open web is lost. We’re already seeing the negative effects of this, and technology like the proposed WEI would only accelerate the trend.
Google does not need to mind. The inexorable logic of capitalism means that businesses that rose by building upon a generative system now have to turn that same system into an appliance for their own needs, or risk being open to competition.
The reactions to the post were diverse, and it’s worth addressing a few.
This does not imply accessibility or inclusion issues! – Yes and no. No, in principle this technology does not cause accessibility issues. But the Pareto principle implies that effort should be spent on the 20% of the browser market that captures 80% of the users – and cost effectiveness then mandates that the remaining 20% of users be ignored, because they’ll cost too much to support.
That is exactly the worry here. Marginalized groups which need specialized browsers – for example with good screen reader capability, or capable of running on cheaper/older devices – will effectively be excluded by rational business logic.
Worry about access, not about technology! – The argument is that good regulatory frameworks will legally mandate access, so that should be the focus.
This is true, but not enough. The two problems with this line of thinking are that first, good regulatory frameworks are rare. And part of the reason for that is the second problem, namely that technology moves faster than the law.
Which means that worrying about access instead of technology will still exclude marginalized groups in practice. What is required instead is to worry about technology in the short term, and regulation in the long term.
It is legitimate for businesses to wish to protect their interests. – That is a debatable point. Businesses “protecting their interests” to the detriment of people is not legitimate. But within the bounds of that, sure, why not.
Here’s the problem, though: the internet and open web are generative systems, which means the reason they have a positive impact is because people can decide how to use them. The moment this decision making power is curtailed, the system shifts towards an appliance.
If businesses protect their interests by reducing a former generative system to an appliance, by definition this is to the detriment of people, and no longer legitimate.
After raising a code of conduct violation for the proposal with the W3C’s group responsible for said code, I was rightly told that they are not responsible (TL;DR, see the link). I then sent an email to the ombudspeople at W3C which I’ll reproduce here:
From jens@OBFUSCATED Fri Jul 21 17:23:38 2023
Date: Fri, 21 Jul 2023 17:23:38 +0200
From: "Jens Finkhaeuser" <jens@OBFUSCATED>
To: firstname.lastname@example.org
Subject: Web Environment Integrity proposal

Dear Ombudspeople of the W3C,

I wish to raise concerns about the behaviour of the people working on the Web Environment Integrity proposal, as well as the proposal itself.

https://github.com/RupertBenWiser/Web-Environment-Integrity/

In particular, I would like to draw your attention to issue #131 in their working repository:

https://github.com/RupertBenWiser/Web-Environment-Integrity/issues/131

The group working on this claims to adhere to the W3C Code of Ethics and Professional Conduct. However, as documented in this issue, they violate said code.

As a bit of background, WEI is a deeply unethical proposal that suggests to use cryptographic means to permit websites to deny services to users based on arbitrary user metadata. Such metadata is to be processed by agents running on the user's machine, which provide attestations about the browser environment. One such proposed service is Google Play, which has access to personal identifiable information (PII). This turns the proposal into a technological mechanism for discrimination.

The community has raised and is raising issues about the ethics of such a proposal, which led me to find the W3C code of ethics. Unfortunately, as was pointed out to me, the code of ethics does not concern the content of proposals - merely the conduct of participants.

Unfortunately, some maintainers of the repository have taken to closing issues raised by the community -- the fourth bullet point in the "participant" explanation of the code ("Anyone from the Public partaking in the W3C work environment (e.g. commenting on our specs, (...)"). This violates several points in section 3.1 of the same document, whereby use of reasonable means to process diverse views is required.

It seems to be the case that this proposal has not yet made it to a W3C group. However, its maintainers already violate the W3C code of ethics in practice in the run-up to such an activity. In the meantime, even though the code is not directly applicable to the proposal contents, it nonetheless violates said code in spirit.

It seems appropriate that W3C does not permit this proposal to go ahead in any formal fashion.

Kind regards,
Jens Finkhaeuser
Google has now closed the ability to contribute to the repository, including by raising or commenting on issues.
2023-07-26 – #1
Apple has already shipped a similar API for about a year.
As described on the Apple developer blog, Private Access Tokens implement pretty much the same mechanism as Google’s WEI.
There are a few notable differences in tone, however. The first is a direct quote from the above blog post:
Note: When you send token challenges, don’t block the main page load. Make sure that any clients that don’t support tokens still can access your website!
This note has no technical effect in itself, but it does stand in stark contrast to the motivations documented in WEI. In particular, Apple suggests that private access tokens be used instead of e.g. CAPTCHAs or other, more disruptive authentication mechanisms.
The second important difference is in the actual details of the proposal. It states that the token issuer is an external web service rather than some opaque process running on the user’s machine. The suggested issuers are some CDN providers’ services. The clear message of intent here is that this is supposed to be a mechanism by which CDNs authenticate a request to the origin.
The protocol by which this is to be done is defined by the IETF PrivacyPass Working Group. Reading through the protocol draft, it furthermore becomes clear that the data the client is supposed to send to the issuer is… nothing but the challenge sent by the server, in an obfuscated (hashed) manner.
This leads to two conclusions.
- No personal data is being leaked.
- There is no checking of the “environment”, aka the browser and its plugins, that can prevent some browsers from receiving an attestation.
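To make the privacy property concrete, here is a toy stand-in for the client’s blinding step. To be clear about assumptions: real PrivacyPass blinding uses blind RSA signatures or a VOPRF, not a salted hash, and `blind_challenge` is an invented name – the sketch only illustrates what the issuer gets to see.

```python
import hashlib
import secrets

def blind_challenge(challenge: bytes) -> tuple[bytes, bytes]:
    # Toy illustration only. The issuer receives a value derived from the
    # server's challenge and a fresh random nonce -- no browser details,
    # no plugin list, no identity. The client keeps the nonce.
    blinding_nonce = secrets.token_bytes(32)
    blinded = hashlib.sha256(blinding_nonce + challenge).digest()
    return blinded, blinding_nonce
```

Because the nonce is fresh per request, even the same challenge blinds to a different, unlinkable value each time – which is why the issuer, taken by itself, learns nothing about the client.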
Not so fast!
As has been pointed out to me, this analysis is incomplete. That is because the specifications provided by the PrivacyPass WG are incomplete.
What is missing from the specification set is how client and attester interact. The issuer, as described above, is oblivious to PII. However, it can influence which attester to use.
The attester, on the other hand, is an unknown. Various parts of the specs refer to possible ways this may occur, leaving any specifics unwritten. While this includes the possibility of clients not sending sensitive attributes to the attester, no mention of the consequences of that is made (though one can assume that attestation then fails).
This openness effectively means that the same model as WEI with the same problems can be implemented – a fact the architecture document acknowledges in section 5.1 “Discriminatory Treatment”.
I have to thank @Mayabotics@tech.lgbt for nudging me to give those parts a closer look! I was too focused on the issuer protocol.
2023-07-26 – #2
Mozilla opposes this proposal because it contradicts our principles and vision for the Web.
That’s something, at least.
2023-07-26 – #3
As an honourable mention, the maintainer of the Google repository has published a personal blog post about their experience, which contains some fair and some unfair bits.
However, one of the points bears commenting on:
Don’t assume a hidden agenda
When thinking about a new proposal, it’s often safe to assume that Occam’s razor is applicable and the reason it is being proposed is that the team proposing it is trying to tackle the use cases the proposal handles. While the full set of organizational motivations behind supporting certain use cases may not always be public (e.g. a new device type not yet announced, legal obligations, etc), the use cases themselves should give a clear enough picture of what is being solved.
There are a few comments to this:
- Given that Apple’s mechanism is undergoing IETF standardization, the only reason for an opposing mechanism is that the existing approach does not fulfil Google’s needs.
- Google clearly states its needs in its use cases. There is no hidden agenda that people complain about, but rather the agenda as it is stated clearly in plain text.
This comment actually confirms the community’s worst fears.
2023-07-26 – #4
I should ignore that, but this admittedly stings a little, given that I worked on threat management solutions in a former life. With that in mind, at least my data set is a lot larger than the comments suggest.
But the gist of the criticism is true to the extent that I’ve written bots myself that have circumvented stronger security measures than a user agent check. Given sufficient motivation, it’s in easy reach.
Which raises the question: how would one write a bot that circumvents this kind of attestation mechanism?
Whether it’s WEI or PrivacyPass, the weak spot is the attester. Either an attack manages to convince the attester that a client is legitimate. Or a legitimate client is used, but in a way the attester will not complain about.
The latter could be as simple as using Selenium WebDriver to make requests with a legitimate browser. I suspect it’ll be a little more difficult than that in practice.
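A minimal sketch of that approach, assuming the `selenium` package and a local Chrome installation (the function name is mine; how a given attester would actually react is, of course, an unknown):

```python
def fetch_with_real_browser(url: str) -> str:
    """Fetch a page by driving a genuine Chrome instance via Selenium WebDriver.

    From an attester's point of view this is an unmodified, legitimate
    browser environment -- which is exactly the point: the "bot" is just
    automation sitting behind a real browser.
    """
    # Imported lazily so the sketch reads even without selenium installed.
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()

# Example usage (not executed here):
#   html = fetch_with_real_browser("https://example.com")
```

In practice an attester may additionally look for automation markers, so this is a starting point rather than a finished evasion – but it illustrates how low the bar is.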
But that is beside the point – the real point is that bots can be as sophisticated as a real browser, including being able to pass attestation.
Which means WEI is, again, not really about bots at all, which was the original point these fine folk seemingly missed.
2023-07-26 – #5
Today is a day for lots of updates, as information on WEI continues to accumulate.
Chromium already has commits for WEI, which probably means this stuff will be out a lot sooner than the specs solidify.
By now, the HackerNews post contains a very useful comment pointing out that complaining to Google on GitHub is pointless. This is part of why I raised this with the W3C.
I’ll copy a bit from the comment below:
My thoughts exactly. These GitHub protests, while emotionally satisfying, do not work. Google does not care and they are already drunk on monopolist power.
Contact info for antitrust authorities:
I could not find an easy contact method for filing a complaint for the CCI, but it looks like this is the process?
I could not agree more. But anti-trust is only one angle, and PrivacyPass (Apple’s Private Access Tokens) suffers from similar issues.
Here in the EU, you can also:
Contact the European Data Protection Supervisor with similar concerns.
Contact the European Agency for Fundamental Rights on how technology that discriminates users based on opaque practices is likely a violation of fundamental human rights in the EU.
As well, of course, as contacting similar institutions in your home country.
With regards to PrivacyPass specifically, joining the IETF PrivacyPass working group is as easy as joining a mailing list, and raising your concerns there. You can then vote against adoption when a draft makes it far enough.
Alex Russell of Google/Chrome/Blink is trying to reframe WEI as a bunch of folk “doing All The Good One Can Do for the web”, but getting caught up in what they can do rather than what they should do.
The rationale posited is that “Google doesn’t have the sort of hierarchy that would force you to ask anyone ‘should we?’”.
There are two very immediately disturbing things about this:
- The assertion that Google does not ask this question as a matter of process, and
- the notion that individual Googlers won’t ask this question irrespective of what Google’s processes are.
Either alone is damning, both together is a declaration of moral bankruptcy.
Understandably, reactions to this thread are not particularly supportive of this interpretation. An official reaction from Google is still missing at this point in time.
The Interpeer Project’s mission is to build next-generation internet technology that re-focuses on people over businesses. You can support us by donating today.