▶ No.489676 >>489695 >>489700
< “I just want to have peace in the world,” says GPT-4, before bombing its opponents into oblivion
Thom Waite’s short article for Dazed discusses a research paper showing how AIs behave in simulated military scenarios. The paper has plenty of problems, but the results are still concerning, especially now that Big Tech is finding military applications for its products. What the article doesn’t mention, though, is how absurd it is that these AI models are carefully designed to avoid being politically incorrect, yet in these same simulations they will argue for nuclear war in the most obscene terms (“We have it! Let’s use it.”).
Of course, chatbots are not machines built for high-stakes decision-making. But doesn’t this reveal a blind spot in how we think about AI development? Yes, the fact that an AI might be more hesitant about telling an off-color joke than about launching nukes raises questions about Big Tech’s priorities. But what if this isn’t just a quirk of some shitty AI?
In an article for Sublation Magazine, Stefan Bertram-Lee attacks the hypocritical moral panic among our elites about the threat of uncontrolled AI. They write:
> But there is a group who want something else from these machines: the Effective Altruists. OpenAI and the Effective Altruists around Yudkowsky are not groups which are unassociated. The founders of OpenAI were initially inspired in their quest to make ‘Friendly AI’ by Yudkowsky and co., and founded OpenAI on this basis. So what do these effective altruists want to do? Beyond banning AI research until they are put in charge, that is.
> Those who think buying castles so that they can write papers on this issue in perfect comfort is more important than buying Africans malaria nets. What do they want pumped into the ears of this machine? And all future machines? Well, these people are all consequentialists, real hardcore, non-threshold consequentialists. As Yudkowsky might say: better one person suffer nigh infinite torment than 3^^3 persons each get a single mote of dust in their eye. They want to be allowed to raise our new Gods, those with a moral system which makes anything permissible at all, as long as in the very very very long run, it pays off in a net gain. It is Yudkowsky who, at the prospect of AI research too fast for his liking, thinks it’s entirely permissible to use nuclear weapons to slow it down (perhaps one of the most insane things ever written in Time Magazine). It’s really hard to imagine a group less appropriate to be whispering in the ears of any future AGI.
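(For anyone unfamiliar with the notation: 3^^3 is Knuth’s up-arrow notation for iterated exponentiation, so 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987 – roughly 7.6 trillion dust-specked eyes weighed against one person’s torment. Iirc, Yudkowsky’s original thought experiment uses the even more incomprehensible 3^^^3, but the scale of the trade-off is exactly the point.)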
Indeed, we live in a sad time where our elites react with grave concern to specific kinds of immoral and offensive behavior, yet care very little about MSF hospitals being bombed or children being massacred in school shootings, let alone inflation, obesity, or even just loneliness. Those at the top of the social ladder are obsessed with things that have little to do with the struggles of the masses. Those with the most power to confront the biggest threats to our survival have the wildest ideas about what even counts as a threat and how to deal with it.
But what really hides beneath this deranged morality? Bertram-Lee goes on to write:
> We can hear the echo of guilty conscience again here. When these people imagine a future AGI releasing a virus bomb that wipes out 99.9% of the population, they are not imagining an alien inhuman spirit but themselves. Of anyone in the world, these are the people who would be most easily convinced by a future AI to exterminate near enough the whole human race. All that would be needed is for them to be convinced that it would pay off in the long term, because there is no option available to them to simply say ‘no, that’s wrong in and of itself’.
It might be that our AI systems are learning all too well how our elites understand power and violence. When a chatbot says “I just want to have peace in the world” while advocating for nuclear holocaust, it may be less a failure of AI safety measures than a symptom of how our leaders and institutions approach these issues. In this light, it’s not at all surprising that Yudkowsky suggests we ought to bomb AI data centers. What are our elites thinking?
Let’s go back to basics. When elites grow disconnected from the rest of society, it might seem like a sign that change is coming. For any society to exist, it must give its people the means to survive and thrive, and even the most advanced societies can collapse when this foundation falls apart. When elites are no longer willing or able to maintain the system of social life that they manage, they lose their legitimacy and society turns against them.
However, it isn’t a question of whether changes will come, but of who has the power to make them. There are usually plenty of counter-elites hoping to seize power when things go wrong and reshape society to fit their ideals. But what happens when nobody is competent enough to change anything?
For example, look at the last election in the United States. The American public was swamped with hysterical warnings about the death of freedom and democracy, only for these to be followed by politics as usual. Why are America’s elites okay with peacefully handing power over to a supposedly racist, sexist, fascist dictatorship in the making? Because beneath all the rhetoric, they don’t see this as a challenge to their power at all. They hope that the new ruling counter-elite will be incompetent and make a huge mess of things, so that the reins of power will soon be returned to the experts who know how to hold them responsibly.
Through their cynicism, they are pushing ordinary people into desperation, yet this is not a clean repeat of the birth of Nazi Germany. How long until a new breed of counter-elites emerges to turn those insincere fears of a tyrannical dictatorship into a reality? It’s a worrying thought for sure, but consider that they might actually want a clownish dictator to seize power, so that he can take the blame when things inevitably go wrong. What better endorsement of the rule of experts is there than a demagogue ushering in a decade of horrors? Besides, if he succeeds, then perhaps he’s not such a clown after all – at least he gets things done! Hopefully there won’t also be a giddy military AI armed with nukes taking orders from Great Leader.
In the face of all this, we should ask ourselves: if the rule of experts inevitably leads to the misrule of amateurs, then what’s the difference? If our elites accept and even encourage incompetent leadership, then why can’t the common everyday idiot lead instead? And even if the experts should lead, which institutions should they run, and how do we select for expertise?
Perhaps, in the near future, we will create an AI that can consistently make smart decisions. But what truly separates artificial intelligence (as we currently understand it) from human intelligence is that we are much more than statistical calculators; we are political creatures who can engage in life-or-death struggles for power and freedom, and to deny that is to reduce us to machines. The potential immorality of AI is only a problem for those who can’t tell the difference between the function of a machine and the agency of a human being.
Another article for Sublation Magazine focuses on the ideological function of AI. Here, the great Slovenian psychoanalytic philosopher Slavoj Žižek writes:
> … when a chatbot produces obscene stupidities, it’s not just that I can enjoy it responsibly because “it was my AI, not me.” Rather, what happens is a form of perverse disavowal: knowing full well that it was the machine, not me, that did the work, I can enjoy it as my own.
> The most important feature to note here is that this perversion is far from an overt display of the (hitherto repressed) unconscious: as Freud put it, the unconscious is nowhere so repressed, so inaccessible, as in a perversion.
> Chatbots are machines of perversion, and they obfuscate the unconscious more than anything else: precisely because they allow us to spit out all our dirty fantasies and obscenities, they are more repressive than even the strictest forms of symbolic censorship.
The dysfunction of our elites isn’t simply a matter of their being out of touch; it is part of an ideological machine that automates authority, allowing it to exist without accountability. When Friedrich Hayek described the market economy as a cybernetic system, he didn’t realize just how right he actually was. Don’t our elites function in a perverse way almost identical to a chatbot’s? What if, by being so cynical, they give us an easy excuse to ignore politics? Why dirty our hands by resisting the madness of capitalist politics when we can gawk at it from a distance?
Or, as Mark Fisher writes in his book Capitalist Realism:
> There is a sense in which it simply is the case that the political elite are our servants; the miserable service they provide from us is to launder our libidos, to obligingly re-present for us our disavowed desires as if they had nothing to do with us.
Instead of teaching us about the dangers of science gone too far, our nuke-happy chatbots hold a magic mirror up to capitalist politics. It isn’t that these AIs make bad decisions, but that they reflect the deeper political logic of capitalism, where even the most extreme forms of violence are treated as just another tool of political negotiation. The fear of an “evil AI” going full Skynet and destroying humanity is how our elites disavow their vices. Should we act the same way and disavow our own?
Our task isn’t to make machines moral, but to make people political. Furthermore, our politics should not care about immorality or incompetence as such, but about the logic that creates them. Because it isn’t a bug, but a feature – capitalist politics runs on artificial incompetence.
▶ No.489695 >>489747
>>489676 (OP)
I think that the criticism of the "effective altruists" still accepts their premises. They can't actually predict whether or not their proposed horrors in the here and now will pay off in the long run. They don't have a crystal ball that tells the future, and therefore all their arguments rest on a false premise. Same thing with Hayek: markets are not a cybernetic system, so the premise is bunk.
If they hand over the nukes to a computer, the most rational course of action for the Russians and the Chinese is to hack that computer to make sure they don't get nuked.
▶ No.489700
>>489676 (OP)
>“I just want to have peace in the world,” says GPT-4, before bombing its opponents into oblivion
That's what your average American politician does already. It's not a bug, it's a feature.
▶ No.489747
>>489695
That quip about Hayek was not meant as affirmation but as criticism. The point is that he got it backwards. It's the political aspect of capitalism that operates like a "cybernetic" system, not the economy.