Let's Talk About REST

Remember the early 90s and Salt-n-Pepa? No? Well, their hit “Let’s Talk About Sex” was about breaking taboos around talking about important topics. In a vaguely comparable way, we software engineers have a kind of taboo on talking about REST. Let’s break that.

Figure: “Rest here” by oliverkendal is licensed under CC BY 2.0

See, most of you will know what REST means, right? And the majority of my readers will be able to tell me that it stands for REpresentational State Transfer. And yet, my experience is that just about everyone classifies things as RESTful that are not. Worse, when I bring that up, the often-heard response is that “well, modern REST means something else”, as if two decades of misusing a term somehow invalidated the original idea.

So today, I don’t want to write about protocols so much but instead about architecture. In particular, I want to write about the web architecture as laid out in Roy Fielding's dissertation that introduced the term REST. And while I don’t want to go into too much detail on the industry’s misappropriation of the term, I have to go into it a little bit.

Historical Context

To start with, Fielding’s dissertation isn’t really about REST at all. Instead, it is about networked system architecture. In particular, it is about architecture for a networked system so large, with so many parts under the control of different entities, that it becomes impossible to treat it as a single system. The architecture, then, has to provide a framework for delegating control over individual parts in such a way that the whole can still function.

At the same time, he explicitly writes about a hypermedia system. This is, of course, because Fielding was involved in the standardisation of HTTP/1.0 and /1.1 – and therefore primarily interested in hypermedia at the time. But it’s also important to point this out because such systems have an implied use-case that other networked systems may not have.

Lastly, the main thrust of the paper is to arrive at an architecture that can be used to achieve “web scale” (remember when that was a buzzword?) in such a system. Other networked systems may not need the same scale.

Fielding starts out by classifying several existing network-based architectural styles by the constraints they impose. He argues that such constraints can be combined to derive new styles. In fact, he sets out to do just that and arrives at REST. The dissertation goes on to examine how the web – even back in 2000 – did not actually follow REST principles particularly well, even though those principles had guided the development of HTTP/1.1.

REST in Simple Terms

REST can be explained in fairly simple terms, though I do recommend reading the full dissertation. It is absolutely worth the effort you put into it.

First, in distributed systems there is a decision one needs to make. Fellow fedi-native Nate Cull put this very succinctly, and Fielding makes much the same observation in the dissertation:

  1. the app stays on someone else’s computer, you send your data to it.
  2. the app comes to your computer, you run it there.

Nate’s point is that the first option is a privacy risk, and the second a security risk. And while I wholeheartedly agree with that, let’s focus on this first: in order for a distributed application to, well, be distributed, you have to bring data and processing together at some point. And the above are the two options you have for this.

The web, of course, does both, and it does so entirely by design. In REST, processing is considered to be either code run on a server, or code run in the browser. The dissertation considers the latter scenario to be optional, i.e. not fundamental to how REST operates. I’ll get into that later, but for now, consider how much JavaScript there is on the web – RESTful or not, the modern web clearly considers running code in the browser essential to its functioning.

For a variety of reasons around caching and proxying, Fielding concludes that for optimal performance, one has to choose option number 1. Moreover, the server-side code must not be stateful: all state it requires must be transferred in the request sent to the server. The implication is that server code should mostly behave like a pure function, transforming inputs into outputs without any secondary input channels. This is where the State Transfer part of REST comes from, and where just about any REST app deviates from the architectural ideal.
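To make that a little more concrete, here is a minimal sketch of what “stateless” means in practice. This is my illustration, not Fielding’s; the Request, Response, and handle names are made up. The point is simply that the handler is a function of the request and of the resource the URI identifies, with no hidden per-client session in the background.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    method: str
    uri: str
    headers: dict[str, str]
    body: bytes = b""


@dataclass(frozen=True)
class Response:
    status: int
    headers: dict[str, str]
    body: bytes


def handle(request: Request) -> Response:
    """Stateless handling: the response depends only on what the request
    carries (and on the resource the URI identifies), never on which client
    asked, how often, or what it asked before."""
    if request.uri == "/greeting":
        # All required state travels with the request, e.g. as a header,
        # rather than being remembered in a server-side session.
        lang = request.headers.get("Accept-Language", "en")
        text = "Hallo" if lang.startswith("de") else "Hello"
        return Response(200, {"Content-Type": "text/plain"}, text.encode())
    return Response(404, {}, b"not found")
```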

The second crucial part of the definition of REST is that the data one sends to and receives from the server should not be considered exactly equal to the data the server stores.

Oh hang on. How does statelessness not conflict with the server storing data? Well, the distinction is really about what clients can influence. Two clients sending identical requests to the same URI should get identical responses. There should not be some hidden server state that decides whether one gets an image and the other text. The server can still store data associated with the URI, however.

OK, with that sorted: the data one sends and receives in a request/response tuple should not be considered equivalent to the data the server stores. Rather, it should be considered a representation of the server’s data that is appropriate to the URI’s semantics. For example, I might store some document on the server, but the /title URI endpoint just returns the document title. A silly, contrived example perhaps, but it should be obvious that there are few reasons for the server to treat a document title as a first-class data citizen. And this part is where the REpresentational part of the architecture comes from.
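Here is a sketch of that contrived example, with made-up names throughout (DOCUMENTS, get): the server stores a whole document however it pleases, and the URI’s semantics decide which representation of that data a client receives.

```python
# Hypothetical server-side storage; could just as well be flat files or SQL.
DOCUMENTS = {
    "42": {"title": "On Hypermedia", "body": "A much longer text."},
}


def get(uri: str) -> tuple[int, str]:
    """Map a URI to a representation of stored data, not to the data itself."""
    parts = uri.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "documents":
        doc = DOCUMENTS.get(parts[1])
        if doc is None:
            return 404, "not found"
        if len(parts) == 2:
            # /documents/42 returns a full representation of the document ...
            return 200, f"{doc['title']}\n\n{doc['body']}"
        if len(parts) == 3 and parts[2] == "title":
            # ... while /documents/42/title returns only the title: same
            # stored data, different representation, chosen by URI semantics.
            return 200, doc["title"]
    return 404, "not found"


print(get("/documents/42/title"))   # (200, 'On Hypermedia')
```

Note, too, that the URI here carries both a data identifier (42) and semantics (title) at once, which is exactly the dual role I come back to below.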

That’s pretty much REST in a nutshell:

  1. Don’t get hung up on data formats, they’re specific to the semantics of the access URI.
  2. Don’t store state server-side, just data. Therefore treat every URI as something very close to a pure function. If sever-side data is involved, it is also to be identified by the URI.

And yes, that means a URI can carry both semantics and a data identifier, and it can do both at the same time. That is on purpose, and that’s a problem.

The Good Thing about REST

REST achieves a very powerful thing with these two characteristics: it’s up to the server implementor what is going to happen. First, the information hiding permitted by the representational nature of requests and responses allows the server implementor to decide how to store data – whether as flat files, in a relational database, or anything else.

Second, the fact that URIs have semantics that are also server-defined permits implementors to make a wide array of choices in how to interact with clients.

Together, these lead to a principle of delegation. Nothing in the overarching REST architecture prevents someone running a server from doing whatever they want with respect to storing and processing data. Therefore, a large number of decisions are effectively delegated to an arbitrarily large community of contributors.

This is how “web scale” can be possible, by not prescribing very much at all.

But… well, there’s always a “but”, isn’t there?

The Problem with REST

Ignore that everyone who claims to write RESTful applications doesn’t. People use NoSQL databases and give up on the representational nature of request and response bodies. People tie CRUD semantics to HTTP methods and give up on that part as well. Lastly, people explicitly introduce server state by allowing authentication sessions to influence how, not whether, resources are processed, as the sketch below illustrates.
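To make that last anti-pattern concrete, here is a made-up fragment (SESSIONS, login, get_document are all illustrative): the request only carries an opaque cookie, while the decision about how the resource is rendered lives in hidden server-side state accumulated earlier.

```python
# Anti-pattern sketch: hidden server-side session state influences *how*
# the resource is processed, not just *whether* access is allowed.
SESSIONS: dict[str, dict] = {}


def login(cookie: str, prefers_json: bool) -> None:
    # A preference expressed once is silently remembered server-side ...
    SESSIONS[cookie] = {"prefers_json": prefers_json}


def get_document(uri: str, cookie: str) -> str:
    # ... and later requests to the same URI, otherwise identical, yield
    # different representations depending on that hidden state. This is the
    # kind of server state that breaks the pure-function ideal above.
    if SESSIONS.get(cookie, {}).get("prefers_json"):
        return '{"title": "On Hypermedia"}'
    return "<h1>On Hypermedia</h1>"


login("cookie-a", prefers_json=True)
login("cookie-b", prefers_json=False)
print(get_document("/documents/42", "cookie-a"))   # JSON
print(get_document("/documents/42", "cookie-b"))   # HTML
```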

No, all of that is just people being people.

The problem with REST lies precisely in this decision to leave a lot of things undefined. In fact, the HTTP specifications for modifying methods such as PATCH and POST explicitly leave the meaning of request bodies up to the server implementor.

The implication is that it is impossible to produce a generic web client that is capable of modifying arbitrary resources.

When triggered from an HTML form, a browser’s POST typically sends application/x-www-form-urlencoded data, or multipart/form-data for file uploads. But nothing in the specifications requires this. And when there is no HTML form, this interpretation of POST semantics simply cannot be implied.
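Here is a small illustration of what that lack of definition means in practice. Both “servers” below are entirely hypothetical (server_a_post and server_b_post are my own stand-ins); the point is that each is perfectly entitled to attach a different meaning to the same POST body, and nothing in HTTP tells a generic client which meaning applies.

```python
import json
from urllib.parse import parse_qs


def server_a_post(body: bytes) -> str:
    """Hypothetical server A: expects JSON and replaces the resource with it."""
    document = json.loads(body)
    return f"replaced resource with {document!r}"


def server_b_post(body: bytes) -> str:
    """Hypothetical server B: expects form-encoded fields, appends a comment."""
    fields = parse_qs(body.decode())
    comment = fields.get("comment", [""])[0]
    return f"appended comment {comment!r}"


payload = b'{"title": "A new title"}'
print(server_a_post(payload))   # does what the client might have hoped for
print(server_b_post(payload))   # parses without error, but finds none of the
                                # fields it expects; the client cannot know this
```

The handlers are toys, but the ambiguity they illustrate is real: the request body means whatever the receiving resource says it means.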

Which means that this idea that code can optionally be transferred to the client, well, it’s not optional at all. It’s mandatory for extending a generic client to be able to modify remote resources. Only the server on which the resource lives can know how it is to be modified, so only this server can provide the code required for this kind of access.

Now, in case it seems like I’m being overly critical, I have to jump to Fielding’s defence here, and by extension, to the defence of all of the contributors to HTTP and related standards. Remember how in the beginning I stressed that the REST architecture is supposed to apply to hypermedia systems? That’s important to recall.

In the early web, there was a kind of implicit assumption that I don’t recall ever being spelled out. But the 90s were a time when this assumption was so obvious to make that I don’t think anyone ever thought of putting it in writing. And that assumption is that the owner/operator of a server is also the author of the media the server produces.

Of course, this doesn’t have to be a single person, and didn’t have to be even then. But in the dot-com era, we did not yet have a massive number of users on the web. Most website operators published their own things. It was only in the web 2.0 era, after HTTP/1.1, that this shifted: website operators’ primary function became providing platforms for third parties to publish their stuff.

The implication of this assumption is that it is of course entirely reasonable to place the semantics of a URI within the control of the server operator; they were likely the only party who would ever have to access it in a modifying fashion. The operator produces media, and the visitor largely consumes it. That is what hypermedia was at the time.

It is not the web of today. But that assumption still lingers. And the consequences are not very pleasant.

Architecture of Surveillance Capitalism

The result of this unmodified assumption is surveillance capitalism. No, really.

Remember how I wrote in the beginning that the web does both: it sends data to code and it pulls code to data? And that the former is a privacy risk, while the latter is a security risk?

What the REST architecture achieves is effectively a kind of data silo. Something that is entirely defined server-side: the server decides how to store data, how to process it, how to represent it, and by sending code to the client, also how to gather it and send it to the server. The app, the collection of server- and client-side code, becomes the distribution and processing medium.

All that users can do is either feed information into this channel or leave it be.

That means the notion that we own data – about ourselves or other things – and use web applications to work with it is really quite delusional. I mean, of course that also happens. But the moment we feed anything into a web app, we also cede control over this information entirely. We get some processing capabilities in exchange for this.

It is, of course, still possible for ethical providers of web applications to treat your data carefully. And legal frameworks such as the GDPR do make it a tad harder for software manufacturers to abuse your data. But since these providers accumulate so much data from such a diverse set of users, it is at least very tempting to derive some kind of use from this treasure trove. Just how hard it is to resist that temptation is the stuff of other articles elsewhere, however.

Looking Ahead

The question, though, is: how would we achieve a “web scale” architecture without giving up so much? I will explore that in another post. Suffice it to say, for now, that this is what the Interpeer Project aims for.


Published on March 18, 2021