The New Cathedral and Bazaar

Bazaar, Cairo, Oct-2011

Figure: “Bazaar, Cairo, Oct-2011” by maltman23 is licensed under CC BY-SA 2.0

Or how Blockchain gets consensus wrong.

Over twenty years ago, Eric S. Raymond wrote an essay that changed the software development world. Titled The Cathedral and the Bazaar, it outlined his experience in trying to understand and emulate the success of the Linux operating system kernel. The essay set down his observations as rules that other projects could follow for similar success.

Raymond wrote this essay in a world that by and large only knew cathedral style software development. That is not to say that open collaboration on software projects did not yet exist – famously, the Free Software Foundation pre-dates the essay by over another decade, and its founder states his motivation as related to collaboration. But Raymond provided the community at the time with cause for introspection; most crucially, though, his essay prompted enterprising businesses to try and adopt some of the same strategies.

One of the results of this is the Twelve Principles associated with the Agile Manifesto. Where Raymond provides a mixture of technical and collaborative insights, the manifesto tries to distil and adapt rules for collaboration in such a way that they may also fit a corporate environment.

Today, everybody does Agile. And if they don’t, they still claim to, because anything else would be considered foolish. Agile, though, turns out to be something of a Trojan horse: by shifting the focus of its principles, it also re-introduces some distinctly cathedral-like aspects.

The briefest of summaries of cathedral and bazaar style projects’ respective characteristics is that in the cathedral model, contributors have extrinsic motivations, whereas in bazaar models their motivations are largely intrinsic.

Raymond has a strongly liberal political leaning, so his view of bazaar style projects is of an unregulated marketplace of ideas, where success is made on a mixture of soundness of the idea itself, and the severity of the itch the idea scratches. A good project leader should not so much guide the direction of the project through setting interesting problems to solve, as adopt the most hotly debated topics and solutions. He contrasts this to cathedral-like projects, where direction is provided from the top, and extrinsic motivations are supplied for developers to pick up problems to solve.

Agile is bazaar-like in demanding that direction is provided by customer feedback, but doesn’t go as far as Raymond, who suggests making customers part of the development team (a feat that only works when operating entirely in the open). Similarly, Agile recognizes developer motivation, but explicitly asks for external sources of it. With my cynic’s hat on, comparing the two documents reads as if Agile was written by corporate people trying to sell cathedral style project management in the guise of bazaar characteristics.

For the past year or so, I have been increasingly involved in blockchain projects. Some were open source, some closed, some more cathedral and some more bazaar. I could probably write an article on the different project management styles alone, but that’s not really what this article is about.

For context, it’s worth noting that I’m of an interesting generation of software engineers, in that I’m neither quite old guard, nor rising rock star. I spent my early career in between. When I entered the field, the dot-com bubble had already burst, and was venting its last puffs of tepid air with sounds reminiscent of flatulence.

Technologically speaking, we had all the tools of early open source software, with little of their sophistication. We weren’t pioneers, but we were still few enough that ease of use of software wasn’t a concern; questions were more often than not met with RTFM. We needed bazaar style FOSS just to learn our trade, and adopted its nearest cousin Agile with all our hearts. After all, if it worked for the software everyone was using, why not for the software we were building?

Fast forward to blockchain projects. They’re all about redistributing power to the masses, try to open source as much as possible, and generally feel like a true bazaar… except…

Except you’ve got to follow the money. And the money for most of these projects comes in one way or another from HODLers in the chain’s currency, such as Jihan Wu. Besides funding crypto projects on their chain of choice via one foundation or another, they also instil in the developer base some kind of trust that the currency has value merely by virtue of being backed.

I’m not judging the sense of this, or the precision of this characterization; other people can fall over themselves trying to prove or disprove it. All I’m interested in is this:

Contributing to a blockchain project in the hopes of increasing your stock value is the definition of an extrinsic motivation, and as cathedral-like as it can get.

Lichfield Cathedral

Figure: “Lichfield cathedral” by Gary Ullah is licensed under CC BY 2.0

It’s worth pointing out that there is nothing wrong with extrinsic motivation. But it’s equally worth pointing out that if Raymond’s analysis is correct, extrinsic motivations matter far less to the long-term health of projects than contributors’ intrinsic ones. It’s not easy to see the latter when currency value is what’s most visibly discussed.

Towards the end of 2006, I joined a secretive startup then cryptically known as The Venice Project, which had been prototyping away for almost a year at the time. Cast your mind back to 2006. Back then, YouTube was around a year old, and was the chief source of highly pixelated cat videos. So abysmal was the video quality at the time that today’s Imgur GIFs look like high definition in comparison. I know, it’s hard to imagine.

As part of the interview round, I stepped into my future team lead’s office. I’d been talking a little bit already about my background and skills, but so far hadn’t been told what exactly we were going to be working on. After some introduction and light interview questions, my team lead demonstrated the project: after a few seconds of buffering, across his screen flickered an episode of 5th Gear in TV quality, sourced from a nascent peer-to-peer network.

I was stunned. I remember distinctly that my first words were “I’d pay for that”, to which he replied “OK, but it’s going to be free.”

Venice

Figure: “Venice” by luca.sartoni is licensed under CC BY-SA 2.0

I spent the rest of my interview electrified. My mind was buzzing with the possibilities, so much so that the remainder of the day is rather blurry now. But I got hired to work on this networking stack, so it can’t have been a complete disaster.

The point of this episode is that from this experience, I have a particular understanding of peer-to-peer technology. My understanding is tightly coupled to distribution, not decentralization.

The concept of distribution subsumes that of decentralization. Decentralization tries to avoid single points of failure by sharing responsibilities among many nodes. The way that is achieved isn’t really fundamental to the concept – however, in decentralized blockchain projects, every node individually decides on the correct system state, and consensus is reached by aligning with the verifiable results of other nodes’ decisions.

In distributed systems, there is no correct system state. Every node makes individual decisions based on partial system knowledge, and the system state is whatever emerges from the aggregate.
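The contrast can be sketched in a few lines. The following is an illustrative toy, not anything from an actual project: each node holds only a local value and exchanges it with one random peer at a time (classic gossip averaging). No node ever computes or sees the global state, yet the aggregate converges on the mean all by itself.

```python
# Toy sketch of emergent system state in a distributed system: every
# decision is purely local (average with one random neighbour), and the
# global outcome – agreement on the mean – simply emerges.
import random

random.seed(42)
values = [0.0, 10.0, 20.0, 30.0]  # each node's local value

for _ in range(1000):
    a, b = random.sample(range(len(values)), 2)  # one pairwise exchange
    avg = (values[a] + values[b]) / 2            # a purely local decision
    values[a] = values[b] = avg

# All nodes now agree on the mean (15.0), although no node ever held a
# complete picture of the system.
```

The point of the sketch is that agreement here is a by-product of local behaviour, not the result of any node deciding what the correct global state is.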

As a peer-to-peer software engineer, I’m interested in what makes a system stable when no individual peer has complete knowledge of it. As an engineering consultant, I’m interested in how much communication and process is required for a team to perform well, and how much stifles productivity. As an amateur game designer, I’m trying to find the perfect balance between providing enough rules to stimulate play, and not so many as to strip player decisions of meaningful consequences.

The Cathedral and the Bazaar points us towards leveraging intrinsic motivation in managing software projects. As I’ve pointed out earlier, Agile waters this down somewhat by focusing on extrinsically providing an environment that motivates developers. The fine print, written in invisible ink, is that the environment’s success is measured by how much intrinsic motivation it allows to develop. This is why startups hand out T-shirts and free soda: not because their salaries are too low, but because gifts make your efforts feel far more appreciated than contractual agreements do.

In game design, there is a lot of discussion about player motivation as well. The most quoted work, Raph Koster’s A Theory of Fun for Game Design, forgoes the question of what kind of motivation might keep players interested, immediately settles on the intrinsic “fun”, and analyses how that might be manufactured. The curious result is that in Koster’s experience, fun emerges from the learning experience that comes with mastering challenges of the optimal difficulty.

While software project contributors and players in a game are both groups of people, and we might draw comparisons between their respective motivations, the members of a peer-to-peer network are more difficult to compare, because they’re computers.

Well. That depends. Are they?

In the more abstract sense of the term, a peer in a peer-to-peer network is either another human participant, or a machine they control. From the networking code perspective, it is the machine. A peer-to-peer network is one where each participating machine acts as both client and server to its peers.

But when every machine represents exactly one person, and usually each person connects only via one machine, the boundaries blur. And so the terminology shifts. Nowadays, we have peer-to-peer networks built on top of strictly client-server protocols such as HTTP, with the implied restriction that nodes acting as servers must have publicly reachable IP addresses. Technologically, there is nothing P2P about that, but conceptually the term still applies.

The distinction between peer-as-person and peer-as-machine matters, because without it, one cannot disambiguate between the person’s motivation for joining a network, and a machine’s programming.

But because this distinction blurs, human behaviour appears programmable.

The fundamental building block of all blockchain projects trying anything beyond financial transactions is a smart contract. Smart contracts are a pretty amazing thing. By providing cryptographic proof of something, they allow dumb machines to decide whether or not to proceed with an action. If the proof is verifiable, proceed, otherwise abort.
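The verify-then-proceed pattern at the heart of this can be sketched in a few lines. This is a deliberately minimal illustration using an HMAC as a stand-in for whatever cryptographic proof a real contract would check; the secret, names, and the “action” are all invented:

```python
# Minimal sketch of the verify-then-proceed pattern: a dumb machine
# checks a cryptographic proof over a message and only acts when the
# proof verifies. An HMAC stands in for a real contract's proof scheme.
import hashlib
import hmac

SECRET = b"shared-secret"  # stand-in for whatever backs the proof

def proof_for(message: bytes) -> bytes:
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def act_on(message: bytes, proof: bytes) -> str:
    # If the proof is verifiable, proceed; otherwise, abort.
    if hmac.compare_digest(proof, proof_for(message)):
        return "proceed"
    return "abort"

msg = b"transfer 5 tokens"
print(act_on(msg, proof_for(msg)))  # a valid proof lets the action proceed
print(act_on(msg, b"\x00" * 32))    # a forged proof aborts it
```

The machine’s decision rule is trivially simple; as the next paragraph argues, all the difficulty lies in encoding the real world into proofs like these in the first place.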

The hard and utterly fascinating part lies in modelling the real world in terms of cryptographically verifiable proofs. As software engineers, most of us are drawn to absolutes. We would like to find the one rule to rule them all, the perfect spot. Using mathematics to prove things in the real world scratches that itch like no other. I get why engineers are drawn to smart contracts.

Smart contracts make the same kind of assumption that game theory does: they require rational decision makers. But as even the character of Dr. Sheldon Cooper eventually realizes, the world of people is a messy place, and rationality is neither omnipresent nor eternal.

It’s important to point out that despite the ease with which this assumption is challenged, game theory remains useful. Its application lies in generalized situations, and statistically relevant results across populations; it cannot be used to predict the behaviour of any random individual in any chosen situation. Smart contracts try to do exactly that, however.

Because the boundary between peer-as-machine and peer-as-person is blurred, it’s hard to distinguish between programming a machine and programming its owner. Picked apart, the reasoning goes something like this: since I can programmatically determine how a machine acts in the absence or presence of a mathematical proof, and provide extrinsic motivation for machines to honour these proofs, and machines represent strictly rational people, I can effectively program people’s behaviour.

Blockchain engineers tend to be a very smart bunch. They’re very much aware that not every participant must be rational, and that participants may well try to game the system for their own benefit. The 51% and Sybil attacks are good examples of rational but harmful participant behaviours that need to be prevented.

The solution, inevitably, is to pour more effort into smart contract based rules, designing them in just the right way that loopholes become hard or impossible to exploit.

Every design decision becomes a question of which extrinsic motivation – the carrot or the stick – to apply in which situation.

That’s a top-down, cathedral style decision making process if I’ve ever seen one.

There is a trust building exercise where you let yourself fall backwards, and assume someone else – your partner, or whoever you’re trying to build trust with – is standing there to catch you. You can’t look backwards, there are no mirrors, making this a literal example of blind trust.

My sensei once asked me to do this, and as expected caught me. Affirming my trust in this way, he asked me to do it again, but then stepped away – leaving me to discover in a flash that what he really wanted was to see how well I can fall backwards under unexpected conditions. No developers were hurt in the making of this exercise.

The point is, not only did I have an incomplete view of the overall system (I could not see backwards), I also had a false view of the system. My previous experience led me to believe I would be caught again. What allowed me to recover from this mistake was not trust in a consensus state, but trust in only the parameters I could immediately control: my skills in catching my fall effectively.

Or, if you’d like to put it differently, I recovered by focusing on my intrinsic motivation, not getting hurt, as opposed to the extrinsic motivation of trying to do what my sensei asked of me. That the end result satisfied both was not visible to me at the time.

Triangle Fight Night

Figure: “Triangle Fight Night” by Leon Maia is licensed under CC BY 2.0

There is a rough equivalence at play here:

Intrinsic motivation is effectively local decision making, while extrinsic motivation depends on the consensus system state.

This raises the question of whether blockchain’s reliance on extrinsic motivations for machines has the same downside as cathedral style application of extrinsic motivations for people. Should machines not instead be left to decide individually how they want to participate in a network?

When I was contributing to a peer-to-peer video streaming stack, we made the decision not to rely overly on consensus state. Instead, we always asked ourselves what the optimal local decision was in any given locally detected situation. Despite its flaws, the result was exceptional.

That is not to say that system state played no role whatsoever. It’s just that we never relied on having a complete or accurate picture of it. When a remote node did not reply to a query within a given timeout, it did not matter whether the node became unreachable due to network routing issues, the round trip delay was too high, the node was shut down, or it was malicious. The right response was to internally score it as unreliable.

Nodes did share some information on their view of the reliability of other nodes. But this information was considered supplemental. If our own estimation of a node was not available or outdated, supplemental information provided the best guess. There was no global node reliability ranking, however.
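The scoring approach described above can be sketched roughly as follows. To be clear, this is a reconstruction of the idea, not the actual code: the class, the smoothing factor, and the freshness threshold are all invented for illustration.

```python
# Hypothetical sketch of local reliability scoring: score peers purely on
# locally observed behaviour (a timeout scores a peer down regardless of
# the cause), and fall back to supplemental, gossiped scores only when no
# fresh local estimate exists. There is no global ranking anywhere.
import time

class PeerScores:
    MAX_AGE = 300.0  # seconds before a local estimate counts as outdated

    def __init__(self):
        self.local = {}         # peer -> (score in [0, 1], timestamp)
        self.supplemental = {}  # peer -> score shared by other nodes

    def record(self, peer, responded: bool):
        # Exponential smoothing toward the latest observation; why the
        # peer failed to respond does not matter.
        score, _ = self.local.get(peer, (0.5, 0.0))
        target = 1.0 if responded else 0.0
        self.local[peer] = (0.8 * score + 0.2 * target, time.monotonic())

    def reliability(self, peer) -> float:
        entry = self.local.get(peer)
        if entry and time.monotonic() - entry[1] < self.MAX_AGE:
            return entry[0]
        # No fresh local view: supplemental information is the best guess.
        return self.supplemental.get(peer, 0.5)

scores = PeerScores()
scores.supplemental["node-b"] = 0.9       # a gossiped opinion only
scores.record("node-a", responded=False)  # a timeout scores the peer down
```

Note that the supplemental scores never override a fresh local estimate; they only fill the gap when one is missing, which keeps every decision ultimately local.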

If Raymond extracted many rules by comparing cathedral style and bazaar style project management, and Agile reduced this to twelve principles, there really is only one rule of thumb that I’d like to convince people of:

If nodes in your system can make workable local decisions, however imperfect, then make local decisions. Rely on global consensus state only sparingly, if at all.

This is not a rejection of smart contracts or blockchain technologies. They have their place, especially where local decision making is impractical. I could not, for example, imagine a workable system in which all nodes individually verify the identity of persons, if such identity verification is required.

But “put it on the chain” is far from a panacea. The parallels drawn in this article would suggest it may do more harm than good. At the end of the day, it is an attempt at making the chaotic world predictable, and that is rarely a winning strategy.


Published on February 19, 2020