Henry Farrell and Cosma Shalizi for The Economist, on an apparent “A.I. as shoggoth” meme with which I’m unfamiliar:
But what such worries fail to acknowledge is that we’ve lived among shoggoths for centuries, tending to them as though they were our masters. We call them “the market system”, “bureaucracy” and even “electoral democracy”. The true Singularity began at least two centuries ago with the industrial revolution, when human society was transformed by vast inhuman forces. Markets and bureaucracies seem familiar, but they are actually enormous, impersonal distributed systems of information-processing that transmute the seething chaos of our collective knowledge into useful simplifications.
Indrajit Samarajiva on how we’ve been living with A.I. the whole time, in the form of the corporation:
You can say ‘no, we don’t recognize corporations as people’ but we do. We have for centuries. The US Supreme Court literally gave them free speech rights in Citizens United. You can say that corporations can’t act on their own, but they do, they just use a vast network of machine and human computers. You can say that corporations aren’t intelligent, but they are. They perceive, process, and act on more of the world’s information than any human in history. Indeed, you cannot say these things because they are not true.
It makes sense, then, that Dan McQuillan calls A.I. a bullshit generator waging class war: that’s as good a description as any of the corporation under late capitalism.
Add into the mix Jonnie Penn for The Economist, on how A.I. thinks like a corporation:
The tradition lives on. Many contemporary AI systems do not so much mimic human thinking as they do the less imaginative minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level originality, ambition or morals.
These streams of AI history—corporate decision-making, state power and the application of statistics to war—have not survived in the public understanding of AI.
David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI, we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves,” he explains.
“Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years,” says Mr Runciman. The worry is, these are systems we “never really learned how to control.”