In 1914, on the eve of World War I, H.G. Wells published The World Set Free, a novel about the possibility of an even greater conflagration. Thirty years before the Manhattan Project, Wells imagined "humankind" able to carry around in a handbag enough latent energy "to destroy half a city". A global war breaks out, precipitating a nuclear apocalypse. To achieve peace, a world government has to be established.
Wells was concerned not just with the dangers of new technology, but also with the dangers of democracy. Wells's world government is not created by democratic will; it is imposed as a benign dictatorship. "The ruled will show their consent by silence," King Ecbert of England says menacingly. For Wells, "common people" were "violent fools in social affairs and public affairs"; only an educated, scientifically minded elite could "save democracy from itself."
A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, tech moguls, and academics exult in the immense benefits of AI, but there are also fears that superintelligent machines will take over the world, that AI may even spell the end of humanity. And, as a century ago, questions of democracy and social control are at the heart of the debate.
In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the technology company that gained public attention in 2022 with the release of ChatGPT, its seemingly human-like chatbot. Fearful of the potential impact of AI, the Silicon Valley moguls had founded the company as a nonprofit charitable trust, with the goal of developing the technology in an ethical manner to benefit "all of humanity."
Levy asked Musk and Altman about the future of AI. "There are two schools of thought," Musk mused. "Do you want many AIs, or a few? I think more is probably better."
"Wouldn't that empower a Dr. Evil?" Levy asked. Altman responded that Dr. Evil was more likely to be empowered if only a few people controlled the technology: "In that case, we'd be in a really bad place."
In reality, that "bad place" is being built by the technology companies themselves. Musk resigned from OpenAI's board six years ago and is developing his own AI project. He now accuses his former company of prioritizing profit over the public interest and of neglecting to develop AI "for the benefit of humanity," and is suing it for breach of contract.
In 2019, OpenAI created a commercial subsidiary to raise money from investors, particularly Microsoft. When it released ChatGPT in 2022, the inner workings of the model were hidden. In response to criticism, Ilya Sutskever, one of OpenAI's founders and the company's chief scientist at the time, argued that the company's openness had to be curtailed to prevent malicious actors from using the technology to "cause great damage." Fear of the technology became a cover for creating a shield from scrutiny.
In response to Musk's lawsuit, OpenAI last week released a series of emails between Musk and other members of the company's board. They make clear that board members agreed from the beginning that OpenAI could never actually be open.
As AI develops, Sutskever wrote to Musk, it will make sense to become less open: the "open" in OpenAI means "that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science." "Yup," Musk replied. Whatever the merits of the lawsuit, Musk, like other tech industry moguls, has been no paragon of openness himself. His legal challenge to OpenAI is less an attempt at accountability than a power struggle within Silicon Valley.
Wells wrote The World Set Free at a time of great political turmoil, when many people were questioning the wisdom of extending the franchise to the working class.
"Was it wise, was it safe," wondered the Fabian Beatrice Webb, to entrust "[the masses]," through the ballot box, with the creation and control of the government of Britain, "with its vast wealth and far-flung territories"? The same question lies at the heart of Wells's novel: to whom can people entrust their future?
A century later, we are once again engaged in heated debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing irrational and uneducated people to make important decisions. "It is unfair to put the responsibility of making very complex and sophisticated historic decisions on unqualified simpletons," as Richard Dawkins said after the Brexit referendum. Wells would have agreed.
For others, it is precisely such contempt for ordinary people that has corroded democracy, leaving large sections of the population feeling deprived of a say in how society is run.
It is a disdain that also shapes discussions about technology. As in The World Set Free, the AI debate is not only about the technology itself, but also about questions of openness and control. However alarmed we may be, we are far from creating "superintelligent" machines. Today's AI models, such as ChatGPT and Claude 3, which another AI company, Anthropic, released last week, are so good at predicting what the next word in a sequence should be that they can deceive us into thinking they are holding human-like conversations. But they are not intelligent in any human sense, have negligible understanding of the real world, and are not about to wipe out humanity.
The problems posed by AI are not existential but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines might someday exercise power over humans, but that they already operate within unequal structures of power, acting in ways that reinforce injustice and providing tools for those in power to entrench their authority.
That is why what we might call "Operation Ecbert," the argument that some technologies are so dangerous they must be controlled by a select few, insulated from democratic pressure, is so threatening. The problem is not just Dr. Evil; it is those who use the fear of Dr. Evil to shield themselves from scrutiny.