| title | description |
|---|---|
| Counterarguments | A list of reasons why people might disagree with the idea of pausing AI development - and how to respond to them. |
This is a compilation of common objections to the idea that AI is dangerous and to pushing for an AI Pause - and our responses to them.
It could be; we don't disagree with that. But it could also be dangerous, and the risks include existential ones.
But it's not just AI companies saying it’s an existential threat.
- Hundreds of AI scientists signed this statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
- 86% of AI scientists believe that we could lose control over AI.
- The top 3 most cited AI researchers (Prof. Yoshua Bengio, Prof. Geoffrey Hinton, and Ilya Sutskever) all warn about existential risk from AI.
Read more about x-risk
Modern AI is not designed, it's trained. It's quite literally a digital brain, consisting of millions of neurons. A human designs and programs the learning algorithm, but nobody understands the AI that grows after that. We can't predict what abilities these models will develop - that's why they're called "emergent capabilities". It took 12 months after GPT-4's release before scientists found out that it can autonomously hack websites. AI models are already highly unpredictable: even billion-dollar companies can't prevent their models from going off the rails or explaining how to make bioweapons.
Maybe in most cases, but a really smart AI could spread to other machines. It's just bytes, so it's not bound to one location.
GPT-4 can already autonomously hack websites, exploit 87% of tested vulnerabilities, and beat 88% of competitive hackers. How smart do you think GPT-6 will be?
Read more about the cybersecurity risks.
Quite a lot of things are connected to the internet. Cars, planes, drones - we now even have humanoid robots. All of these can be hacked.
And it's not just robots and machines that can be hacked. A finance worker was tricked by an AI-generated conference call into transferring $25m. An AI can use other AIs to generate deepfakes. And GPT-4 is already almost twice as good at persuading people as people are.
Read more about how good the best AI models are.
It doesn’t have to be evil or hate humans to be dangerous to humans. We don’t hate chimpanzees, but we still destroy their forests: we want palm oil, so we take their habitat. We’re smarter, so chimps can’t stop us. Similarly, an AI might want more compute power to be better at achieving some other goal, so it destroys our environment to build a better computer. This is called instrumental convergence; this video explains it very nicely.
Even if it has no goals of its own and just follows orders, someone is going to do something dangerous with it eventually. There was even a bot called ChaosGPT which was explicitly tasked with doing as much harm to humans as possible. It autonomously searched Google for weapons of mass destruction, but it didn’t get much further than that. Right now, the only thing protecting us is that AI isn’t very smart yet.
On Metaculus, the community prediction for (weak) AGI was 2057 just three years ago, and now it's 2026.
In 2022, AI researchers thought it would take 17 years until AI would be able to write a New York Times bestseller. A year later, a Chinese professor won a writing contest with an AI-written book.
We don't know how long we have, but let's err on the side of caution.
Read more about urgency
We're not asking for a ban in just one place. We need an international pause, enacted through a treaty - just as we have treaties banning CFCs and blinding laser weapons.
Read more about our proposal
We can regulate it by regulating chips. Training AI models requires very specialized hardware, which is produced by only one company, TSMC. That company in turn uses machines made by yet another single company, ASML. The supply chain for AI chips is very fragile, which makes it possible to regulate.
Read more about feasibility.
Some ways in which a pause could be bad - and how we could prevent those scenarios - are explained on this page. But if the article doesn't cover your worries, you can tell us about them here.
70% of people already believe that governments should pause AI development. Popular support is already there. The next step is to let our politicians know that this is urgent.
Yes you can! There are many ways to help, and we need all the help we can get.