This book is way too fuzzy for me; it uses sketchy logic to connect hazy premises to radical—yet also vague—conclusions. It does touch on a bunch of interesting topics, though; here are a handful that particularly caught my attention.
Bridle says corporations are a kind of AI, because a corporation has:
…clearly defined goals, sensors and effectors for reading and interacting with the world, the ability to recognize … attractors and things to avoid, the resources to carry out its will, and the legal and social standing to see that its needs are catered for, even respected.1
I like this perspective. It makes it obvious that we should expect corporations to take on a life of their own; that the structure of an organization can cause it to behave, through its members, in ways that none of those members—even the people at the top—would individually choose. This also seems like a helpful way of thinking about religions, governments, and probably many other institutions.
Comparing our prevailing attitudes to animals with our emerging attitudes to AI, Bridle says we have
…a three-tiered classification system for the kinds of animals we encounter: pets, livestock and wild beasts… In transferring this analogy to the world of AI, it seems evident that thus far we have mostly created domesticated machines of the first kind, we have begun to corral a feedlot for the second, and we live in fear of unleashing the third.2
Bridle wants us to relate to both animals and technology in a different, non-hierarchical way. Their arguments for this seem hand-wavy and their alternative vision seems vague. I love a phrase they use here, though, about what “autonomous transit” done right could look like:
…it could liberate us from the mundanity of everyday life, and introduce us to a host of chattering new companions, starting with itself.3
I love to imagine a future where AI takes the form not of sterile machines that do our bidding, but of friendly fellow travelers and collaborators. If such a future is possible, it may take deliberate effort to realize; it’s not the path that tech companies will be led down in search of profit.
In 2014, two biologists at the University of Missouri recorded the sound of cabbage white caterpillars feeding on a cress plant. … Having left the caterpillars to munch away for some time, the scientists then removed them and played the sound of their approach back to the plants. Immediately, the plants flooded their leaves with chemical defences intended to ward off predators: they responded to the sound as they would to the actual caterpillars. They heard them coming. Crucially, they didn’t respond in the same way when other sounds - of the wind or of different insects - were played to them. They were able to distinguish between the different sounds, and act appropriately.4
That’s Bridle’s summary of a paper called “Plants respond to leaf vibrations caused by insect herbivore chewing”. Super cool!
In 2018 the … slime mould, Physarum polycephalum, showed that it was able to solve the travelling salesman problem in linear time, meaning that as the problem increased in size, it kept making the most efficient decisions at every juncture.5
That’s in reference to a paper called “Remarkable problem-solving ability of unicellular amoeboid organism and its mechanism”. Bridle’s summary really bothers me because it seems to suggest slime moulds can solve the TSP optimally in linear time. But that would be an Earth-shattering revelation; it’s currently widely believed (though not proven!) that no such efficient exact solution exists. If something in nature could reliably do it, and we could characterize its mechanism as an algorithm, we’d have shown that P = NP, and reverse-engineering how the moulds manage it would instantly become perhaps the single most important research topic in computer science.
I have not dug into the paper in depth, but it appears to claim only an “ability to find a reasonably high-quality solution” rather than an optimal one; this is a less shocking (though still super cool!) claim, since approximation algorithms for TSP already exist. But the paper does say its “results may lead to the development of novel analogue computers enabling approximate solutions of complex optimization problems in linear time” (IIUC, the approximation algorithm corresponding to the moulds’ behavior is only efficient when you have more parallelism than our typical computer architectures do). This still backs up Bridle’s basic point that we have a lot to learn from other creatures; I just think the book is a bit hyperbolic about it.
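For a sense of why fast-but-approximate is the unsurprising part, here’s a minimal sketch of the classic nearest-neighbour heuristic for TSP (emphatically not the amoeba-inspired method from the paper, just a familiar baseline): it builds a tour greedily in polynomial time and usually lands reasonably close to optimal, with no guarantee of optimality.

```python
# Minimal sketch of the classic nearest-neighbour TSP heuristic.
# This is NOT the slime-mould-inspired algorithm from the paper -- just an
# illustration that fast, non-optimal, "reasonably good" tours are routine.
import math
import random


def tour_length(points, tour):
    """Total length of the closed tour that visits points in the given order."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )


def nearest_neighbour_tour(points, start=0):
    """Greedily hop to the closest unvisited city. Runs in O(n^2) time;
    the result is usually decent on random inputs but is not optimal."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(50)]
    tour = nearest_neighbour_tour(cities)
    print(f"greedy tour length: {tour_length(cities, tour):.3f}")
```

Real solvers improve on greedy construction with local-search refinements like 2-opt; none of that requires anything mathematically shocking.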
There’s a chapter on the usefulness of randomness, and one example is sortition (randomly selecting people to fulfill certain government duties, rather than electing them by a vote). I always find this idea very alluring; it seems like it would accomplish the goal of democratic self-rule while avoiding some of the perverse incentives created by voting systems. Bridle emphasizes how it encourages people to really engage with their government, pointing to recent successes of citizens’ assemblies. Bridle also gives historical examples I hadn’t heard of before, including:
…in fifteenth-century Spain, sortition was used in the Castilian regions of Murcia, La Mancha and Extremadura. When Ferdinand II added the Kingdom of Castile to his Kingdom of Aragon, becoming the first de facto King of Spain, he acknowledged that ‘cities and municipalities that work with sortition are more likely to promote the good life, a healthy administration and a sound government than regimes based on elections. They are more harmonious and egalitarian, more peaceful and disengaged with regard to the passions.’6
The book does not give a source for that quote; I was not able to find it via a cursory search, and Claude and ChatGPT both doubt its veracity. (But of course, the fact that this would be a translation of whatever the original was complicates the search.)
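Mechanically, sortition is easy to picture; here’s a toy sketch of a stratified civic-lottery draw. The resident list and the single “region” stratum are invented for illustration, and real citizens’ assemblies typically stratify across several demographics at once.

```python
# Toy sketch of a sortition ("civic lottery") draw.
# The residents and the single "region" stratum below are hypothetical;
# real citizens' assemblies usually stratify on several demographics at once.
import random
from collections import defaultdict


def sortition_draw(residents, stratum_key, seats_per_stratum, seed=None):
    """Randomly select assembly members within each stratum, so the result
    roughly mirrors the population instead of being elected by vote."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in residents:
        by_stratum[person[stratum_key]].append(person)
    selected = []
    for stratum, seats in seats_per_stratum.items():
        pool = by_stratum.get(stratum, [])
        selected.extend(rng.sample(pool, min(seats, len(pool))))
    return selected


if __name__ == "__main__":
    residents = [
        {"name": f"resident-{i}", "region": region}
        for i, region in enumerate(["north", "south", "east", "west"] * 25)
    ]
    seats = {"north": 3, "south": 3, "east": 3, "west": 3}
    assembly = sortition_draw(residents, "region", seats, seed=42)
    print([person["name"] for person in assembly])
```

The random draw is what removes campaigning incentives; the stratification is what keeps the assembly roughly representative.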
The first search engines were hand-curated lists of interesting places, essentially random accumulations of sites and tools ordered only by the passions and peccadilloes of those who assembled them. While Google still searches the web with automated random walks, its results are ordered by deeply partisan algorithms, with the top results sold off to the highest bidder. Google has almost a 90 per cent share of the world’s web searches, yet indexes only a tiny fraction of the visible web. Most searchers never look beyond the first page of results. There is little room for randomness in exploring the vast amount of information actually available to us. … So many of our tools are designed to reduce randomness in a similar fashion: from algorithmic recommendation systems to dating apps, from GPS navigation to weather forecasting.7
This is often useful, of course, but I’m inclined to agree that something important is lost when too much of our interaction with the world is mediated by such mechanisms.
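As a small aside on what “searches the web with automated random walks” gestures at: the intuition behind PageRank-style ranking is a random surfer who mostly follows links and occasionally jumps to a random page, so pages visited more often rank higher. Here’s a toy walk over a made-up link graph; the graph, step count, and damping value are all invented for illustration.

```python
# Toy "random surfer" walk over a hypothetical link graph -- the intuition
# behind PageRank-style ranking. All values here are made up for illustration.
import random
from collections import Counter

LINKS = {
    "home": ["blog", "shop"],
    "blog": ["home", "shop", "about"],
    "shop": ["home"],
    "about": ["blog"],
}


def random_walk_visits(links, steps=100_000, damping=0.85, seed=0):
    """Mostly follow an outgoing link at random; occasionally teleport to a
    random page. Visit counts approximate how a ranker might score pages."""
    rng = random.Random(seed)
    pages = list(links)
    page = rng.choice(pages)
    visits = Counter()
    for _ in range(steps):
        visits[page] += 1
        out = links.get(page)
        if out and rng.random() < damping:
            page = rng.choice(out)    # follow a link
        else:
            page = rng.choice(pages)  # jump somewhere random
    return visits


if __name__ == "__main__":
    visits = random_walk_visits(LINKS)
    total = sum(visits.values())
    for page, count in visits.most_common():
        print(f"{page}: {count / total:.3f}")
```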