Impressions from the OHCHR consultation on human rights and technical standard-setting processes for new and emerging digital technologies

Just over a week ago, the Office of the United Nations High Commissioner for Human Rights hosted a consultation on human rights in technical standard-setting processes, of which I managed to attend the first half. I live-blogged some impressions that deserve a summary here.

Speaking were human rights researchers, security standards contributors, and representatives of standards organizations.

OHCHR consultation on human rights in technical standard-setting

Niels ten Oever (@nllz@mastodon.cloud) is a researcher at the intersection of tech and human rights, and participates in the IRTF together with the current chairs of its human rights research group, Mallory Knodel of the Center for Democracy & Technology and Sofía Celi of Brave Software.

Jana Iyengar of Fastly works with the IETF and W3C; Gurshabad Grover is a cryptographer contributing to the former. Mehwish Ansari represents ARTICLE 19.

Vanja Skoric represents the Civic AI Lab at the University of Amsterdam. Frederik Zuiderveen Borgesius (@Frederik_Borgesius@akademienl.social) brings a legal point of view to the proceedings.

Finally, Adrian Popa represents ISO and Gabriela Ehrlich the IEC, bringing to the table the major organizations contributing to standards in the digital realm.

Contribution to Standards Processes

Speakers generally treat standards as a basis for collaboration, as well as something to collaborate on. As ten Oever puts it, we’ve been creating and using standards since the Bronze Age, but we’re still not clear on how they work. We generally treat openness, consensus and transparency as the pillars of standards work, but there is no unambiguous definition of any of them.

The point was made during the previous EU Open Source Policy Summit that “openness” in particular can mean different things: in open source, it refers to permissionless innovation, while in open standards it generally means that anyone can access the result – participation is not necessarily granted.

Celi makes the point that most often, the people who most need their rights considered in the standards process are not participants. There can be a number of reasons for this, but it usually comes down to access. One comment – that standards are typically made for the people participating in their creation – highlights how important it is to achieve diversity in standards working groups.

ISO and IEC do not grant participation to non-members other than through a public comment period. ISO sees this as adequate, while the IEC representative admits more could be done.

At the same time, ISO requires broad representation in its work, which leaves open the question of how to achieve that when participation is difficult.

Currently, the processes at ISO and IEC are far from the permissionless kind of “open”. There is also no “fourth mode” of representation, in which practitioners are also users of and contributors to the standards.

Ansari points out that one of the issues with membership in standards organizations is that membership on paper does not imply membership in practice. Participation incurs a cost, and it is not always feasible for stakeholders representing the most affected parties to bring the necessary resources or expertise to the table.

In the context of human rights in technology, expertise is a particularly thorny issue. Knodel elaborates that in the example of artificial intelligence, one needs to be both a human rights expert and an AI expert to participate effectively. If the AI solution is applied to a particular field, such as healthcare, expertise in that field is required as well. This is extremely difficult for stakeholders to achieve. (Knodel floats the possibility that a generalized, data-driven approach may serve us better here, but the above describes the current state.)

In particular, civil society struggles to bring resources to bear in order to overcome this issue.

Organizations such as the IETF and IRTF place the lowest burden on standards participation, which is why a dedicated human rights research group exists there. Knodel describes the group as having a bridge role between technical and human rights experts, as too often the two groups do not yet interact in the making of standards.

On the IETF, Iyengar notes that it works because it has to – the industry knows the value of standardization. However, he also makes an inadvertent point about the under-representation of human rights issues by stating that standards need to be adopted, and the industry adopts what it can use. If human rights protections in a standard conflict with those uses, adoption may suffer.

The broader point is that standards should be in the public interest – alignment with human rights considerations is part of that equation.

In practice, some methods for protecting human rights – such as encryption, to provide privacy – also have the effect that middleboxes can no longer provide the services they used to. This can directly affect business interests, such as when the business model is to monetize access, even assuming that such access is entirely benign.

The representative for the Netherlands commented that despite these issues, the multi-stakeholder approach must be strengthened. These things cannot be done in silos.

But it was the representative from Venezuela who made one of the more worrying, somewhat related points: technology that implements standards is also unevenly distributed. Pushing forward new standards may also be a form of weaponization (my words), in that it can exclude parts of the world where the resources to adopt new standards are particularly thin.

This strongly suggests that part of the human rights consideration in standards must be to build standards to last, which means also taking our best guess at future problems into account.

This implies more work for the standards bodies – but additional complexity is also one of the issues best overcome with sufficient funding and expertise. Here, the goal of keeping standards processes lightweight and as permissionless as feasible may conflict with future-proofing the output.

And processes should be kept lightweight, because the majority of standards never get implemented. Making the process to create them harder just wastes everyone’s effort.

Practical Standardization Issues

Grover presents the journey from TLS 1.2 to TLS 1.3 as an example of how standards also evolve in response to human rights concerns. In particular, TLS 1.3 makes forward secrecy mandatory by dropping the non-ephemeral key exchange modes that TLS 1.2 still allowed.

The background is that the digital sphere “compresses the space” in which governments can surveil their citizens. As such, logging encrypted traffic now for offline decryption later – for example once a long-term key is compromised – makes for an effective practice. Forward secrecy prevents this as far as possible: a key compromised in the future does not expose previously recorded sessions.
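To make the forward secrecy property concrete, here is a minimal sketch – not TLS itself – of the ephemeral Diffie-Hellman exchange that underpins it, using the third-party Python cryptography package. Because each session derives its key from freshly generated X25519 keys that are thrown away afterwards, recording ciphertext today and stealing a long-term key later does not reveal past sessions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_session_key(own_private, peer_public):
    # Hash the raw shared secret into a fixed-length session key.
    shared = own_private.exchange(peer_public)
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"session"
    ).derive(shared)


# Both sides generate fresh, single-use key pairs for this session only.
client_ephemeral = X25519PrivateKey.generate()
server_ephemeral = X25519PrivateKey.generate()

client_key = derive_session_key(client_ephemeral, server_ephemeral.public_key())
server_key = derive_session_key(server_ephemeral, client_ephemeral.public_key())
assert client_key == server_key

# The ephemeral private keys are discarded after the handshake; a long-term
# identity key captured later cannot reproduce this session key.
```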

Unfortunately, metadata is often not treated as carefully as the payload. In TLS, the domain names of visited sites can still leak to an observer via the Server Name Indication (SNI) field in the handshake, even if none of the requests and responses can be read. Often, this is enough to put people at risk. The knowledge that these gaps need to be closed exists within the IETF, in this case – but the business interests of companies are not always aligned with that understanding.
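A small illustration of the metadata point: even over TLS, the hostname a client wants to reach travels in the clear in the handshake’s SNI field (unless Encrypted Client Hello is in use), so a passive observer learns which site is being visited even though it cannot read the exchanged data. A minimal sketch with Python’s standard library:

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.org", 443), timeout=5) as raw:
    # server_hostname is sent unencrypted in the handshake's SNI extension,
    # visible to anyone on the network path even though the payload is not.
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        print("TLS version negotiated:", tls.version())
```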

Celi makes the point that even though lower layers such as transport encryption provide some protection, and organizations such as the IETF try to provide such functionality, similar efforts do not always exist at the application layer. For example, given an unlocked device, it is very easy to access a person’s browser history – no scanning of encrypted traffic is required for that.

This effect is also felt in the #ChatControl mass surveillance proposal put forward by the European Commission. Instead of breaking encryption, the proposal is to scan chat messages on the client before they are sent.

A similar concern is reflected in the relatively recent discussion on whether telemetry added to applications is putting the privacy of users unduly at risk.

There is a good side to this, however. As Celi points out, one of the benefits of having more security in lower layers of the application stack is that these issues get discussed where they are a little more visible. This raises the necessary question of whether surveillance efforts by governments were ever legal to begin with.

As the representative of the UK puts it in the context of DNS-over-HTTPS (DoH): “who controls the service?”

DoH is promoted by its proponents as adding security, as requests to resolve domain names to IP addresses are now encrypted. But in doing so, applications supporting DoH may no longer accept locally run DNS filters such as Pi-hole, as local servers rarely offer TLS certificates signed by a trusted authority. That limits the number of DNS servers applications can use, and concentrates knowledge of DNS queries in those service providers that do offer DoH with trusted certificates. This concentration of data, in turn, weakens the privacy of the end user.
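For illustration, a DoH lookup is just an HTTPS request. The sketch below uses the JSON API of a public resolver (Cloudflare’s documented endpoint; other providers differ slightly). Note that nothing here ever consults the locally configured resolver, which is exactly why local filters such as Pi-hole are bypassed:

```python
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.org", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()

# The resolution result arrives inside an ordinary HTTPS response; the
# operating system's DNS settings and any local filtering are never involved.
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```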

In the context of human rights, one of the conversations to be had is about how to discuss bias and harms. In artificial intelligence, for example, biased training data produces biased AI decisions – which can help strengthen stereotypes, as the common view of machines is that they are themselves unbiased.

This discussion often circles back to the political issue of asking whose standards are being imposed when such things are considered. Notions of the importance of privacy vary wildly across the cultures of the world.

Possibly the best answer to this lies in not debating such issues at all. International law on fundamental human rights provides a sufficient answer to the debate. It also clarifies that human rights are both inalienable and indivisible – not taking them into account is not permissible. Unfortunately, not all of the world’s nations have ratified the treaties that give the Universal Declaration of Human Rights legal force.

This point is exacerbated by Ehrlich’s comment that in the context of the internet, international standards matter more than national ones. This is, of course, true for purposes of interoperability – but if human rights are to be taken seriously, they cannot be negotiated down to the lowest common denominator.

A similar concern relates to the standards processes, which typically involve voting for the adoption of standards: human rights cannot be voted upon. There must be a different way of ensuring they’re sufficiently accounted for in standards.

Borgesius argues that lawmakers play a crucial role here in considering human rights in the abstract as they apply to technology; the specifics of adhering to the law can then be delegated to standards bodies.

An additional concern of his is that standards are not the same as the law, but sometimes take on a similar status – such as when governments suggest that a particular standard be adhered to. If the standard in question then costs money to access, that is in itself a human rights issue in the form of a lack of inclusivity.

Skoric makes two additional points on the legal issues surrounding human rights and standards. One salient one is that considering human rights in standards is mostly about asking the key question of how a standard might undermine them. In the absence of anything that undermines human rights, the standard is probably fine.

Finally, and perhaps most importantly: assuming we have sufficient processes in place to consider human rights in standards, and assuming those standards are then implemented – what if we discover that they do, after all, undermine human rights? There is currently no process in place to contest the use of existing standards for such a reason. For now we may have to rely on consumer rights bodies, which face all the expertise and resource issues outlined above, preventing them from participating more fully in the process.

Closing Thoughts

The participants in the consultation largely agreed on the main points, which is refreshing. Less encouraging is that few ideas were put forward for how to address the pervasive issues in recognizing human rights in standards processes, from gaps in expertise and resources to lack of access to the results.

It’s interesting, however, that a number of participants raised the spectre of surveillance capitalism as one of the driving forces behind insufficient consideration of human rights. Whether it is the destruction of middlebox-based business models through end-to-end encryption, or the subtler acknowledgement that businesses will only adopt standards that serve them, the issue remains that corporations can monetize privacy violations very successfully. Neither human rights law as it exists nor standards themselves can sufficiently address this – other legal means need to be leveraged here.

Interpeer Project

In the context of the Interpeer Project, a fair bit of thought has been put into the surveillance capitalism problem. For example, in its information-centric networking (ICN) design, and somewhat unusually for ICN, we treat end-to-end encryption (E2EE) as a first-class citizen.

Other ICN approaches concentrate on transporting arbitrary data. While this can be done with Interpeer’s approach as well, using vessel as a container format provides E2EE out of the box, rather than leaving it as an application concern.
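As a rough sketch of what “E2EE out of the box” means at the container level – this is an illustration of the idea only, not vessel’s actual API – every payload is sealed with authenticated encryption before it enters the container, so intermediaries only ever handle ciphertext:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305


class SealedEntry:
    """Hypothetical container entry that only ever stores ciphertext."""

    def __init__(self, key: bytes):
        self._aead = ChaCha20Poly1305(key)

    def seal(self, payload: bytes, header: bytes) -> bytes:
        # The header stays readable for routing/caching but is authenticated,
        # so tampering with it is detected when the entry is opened.
        nonce = os.urandom(12)
        return nonce + self._aead.encrypt(nonce, payload, header)

    def open(self, blob: bytes, header: bytes) -> bytes:
        return self._aead.decrypt(blob[:12], blob[12:], header)


key = ChaCha20Poly1305.generate_key()
entry = SealedEntry(key)
blob = entry.seal(b"application data", header=b"content-id: 42")
assert entry.open(blob, header=b"content-id: 42") == b"application data"
```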

Even so, it is hard work convincing some folk of the necessity of such constructs. However, a consultation such as this one vindicates our approach – that’s a win we’ll take!


Published on February 27, 2023