diff --git a/src/posts/2023-july-london-13th.md b/src/posts/2023-july-london-13th.md index 23cfc4a5..9b568d6c 100644 --- a/src/posts/2023-july-london-13th.md +++ b/src/posts/2023-july-london-13th.md @@ -11,7 +11,7 @@ description: PauseAI protest, urging the United Nations Security Council to ## Contact -- Alistair Steward ([twitter](https://twitter.com/alistair___s)) +- Alistair Stewart ([twitter](https://twitter.com/alistair___s)) ## Press Release: PauseAI protests Foreign Office ahead of UN Security Council meeting on AI Risk diff --git a/src/posts/2023-july-london-18th.md b/src/posts/2023-july-london-18th.md index 5617d9b2..973a5396 100644 --- a/src/posts/2023-july-london-18th.md +++ b/src/posts/2023-july-london-18th.md @@ -20,7 +20,7 @@ title: PauseAI protest @ FCDO, London, July 18th ## Contact -- Alistair Steward ([twitter](https://twitter.com/alistair___s)) +- Alistair Stewart ([twitter](https://twitter.com/alistair___s)) ## Press Release: PauseAI protests Foreign Office ahead of UN Security Council meeting on AI Risk diff --git a/src/posts/2023-june-london.md b/src/posts/2023-june-london.md index 0dac29dc..f0c6b821 100644 --- a/src/posts/2023-june-london.md +++ b/src/posts/2023-june-london.md @@ -17,7 +17,7 @@ A rapidly increasing number of AI experts [signed a statement](https://www.safe. This has been signed by virtually all AI labs (OpenAI, Google DeepMind, Anthropic) and hundreds of AI scientists including Geoffrey Hinton, the "Godfather of AI". -AI safety researchers have not reached on consensus on how large the risk of human extinction will be. +AI safety researchers have not reached consensus on how large the risk of human extinction will be. Results from the ["Existential risk from AI survey"](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results) show that estimates range from 2% to 98%, with an average of 30%. Rishi Sunak has stated that the ["Government is looking very carefully at this"](https://twitter.com/RishiSunak/status/1663838958558539776) and that ["the UK is well-placed to lead"](https://twitter.com/RishiSunak/status/1662369922234679297) the global collaboration on safe AI development. diff --git a/src/posts/2023-june-melbourne.md b/src/posts/2023-june-melbourne.md index 99255f68..645f8579 100644 --- a/src/posts/2023-june-melbourne.md +++ b/src/posts/2023-june-melbourne.md @@ -32,7 +32,7 @@ A rapidly increasing number of AI experts [signed a statement](https://www.safe. This has been signed by virtually all AI labs (OpenAI, Google DeepMind, Anthropic) and hundreds of AI scientists including Geoffrey Hinton, the "Godfather of AI". -AI safety researchers have not reached on consensus on how large the risk of human extinction will be. +AI safety researchers have not reached consensus on how large the risk of human extinction will be. Results from the ["Existential risk from AI survey"](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results) show that estimates range from 2% to 98%, with an average of 30%. The protesters are urging the Australian government to take the lead on global AI safety and pause the development of more dangerous AI systems. 
diff --git a/src/posts/2023-may-deepmind-london.md b/src/posts/2023-may-deepmind-london.md index 7f847035..3212badd 100644 --- a/src/posts/2023-may-deepmind-london.md +++ b/src/posts/2023-may-deepmind-london.md @@ -131,4 +131,4 @@ Stickers and pin badges ## Contact -Alistair Steward ([email](mailto:achoto@protonmail.com), [twitter](https://twitter.com/alistair___s)) +Alistair Stewart ([email](mailto:achoto@protonmail.com), [twitter](https://twitter.com/alistair___s)) diff --git a/src/posts/2024-vacancy-comms-director.md b/src/posts/2024-vacancy-comms-director.md index 68b26ee6..31ccfdf5 100644 --- a/src/posts/2024-vacancy-comms-director.md +++ b/src/posts/2024-vacancy-comms-director.md @@ -1,6 +1,6 @@ --- title: Communications Director Vacancy at PauseAI Global (Vacancy closed) -description: PauseAI is looking for an Communications Director to lead our Comms Team and Social Media accounts. Remote work or in-person in Utrecht, the Netherlands. +description: PauseAI is looking for a Communications Director to lead our Comms Team and Social Media accounts. Remote work or in-person in Utrecht, the Netherlands. --- _Update 2025-03-24: This vacancy is now closed._ @@ -17,14 +17,14 @@ PauseAI started in April 2023 and has since grown to 2000 members, over 100 regi ## Your Role -Although many volunteers contribute to PauseAI (some even full-time), PauseAI has one paid staff member ([Organizing Director](/2024-vacancy-organizing-director) +Although many volunteers contribute to PauseAI (some even full-time), PauseAI has one paid staff member ([Organizing Director](/2024-vacancy-organizing-director)). You will be the second hire and you will play a crucial role in how the organization grows and evolves. You will work closely with the founder, Joep Meindertsma. Be aware that PauseAI may grow very quickly in the near future, both in terms of members and funding. ### Tasks & Responsibilities -- Lead the Comms [Team](/teams) (multiple volunteers with diverse relevant skills, some of who create and edit videos) +- Lead the Comms [Team](/teams) (multiple volunteers with diverse relevant skills, some of whom create and edit videos) - Develop and execute a communication strategy - Set up an international communication pipeline for the various National Groups. - Run our Social Media accounts (Twitter, Facebook, TikTok, LinkedIn, YouTube, Instagram, SubStack) diff --git a/src/posts/2025-february.md b/src/posts/2025-february.md index 6538f0dc..e8919634 100644 --- a/src/posts/2025-february.md +++ b/src/posts/2025-february.md @@ -52,7 +52,7 @@ The organizers of the Paris summit have created an agenda that will cover public - The rapid advancement of AI capabilities in recent years makes it urgently required that world leaders start collaborating on mitigating the serious and [existential risks](https://www.safe.ai/work/statement-on-ai-risk) posed by increasingly powerful AI-systems to ensure our common future - AI Safety needs to be the focus at the Paris AI Action Summit! -**We propose that the organizers and the government delegates at the summit makes AI Safety the focus of the summit by working on the following:** +**We propose that the organizers and the government delegates at the summit make AI Safety the focus of the summit by working on the following:** - Collaborating on creating global treaties and regulations to mitigate serious and existential risks from AI, and reign in the companies and organizations racing to build ever more capable and dangerous AI systems.
- Planning the establishment of international bodies to enforce such treaties and regulations diff --git a/src/posts/2025-organizing-director.md b/src/posts/2025-organizing-director.md index a327a0ee..a4cb286e 100644 --- a/src/posts/2025-organizing-director.md +++ b/src/posts/2025-organizing-director.md @@ -23,7 +23,7 @@ You will be responsible to the PauseAI Global CEO. - Campaign Coordination and Project Management: Plan and execute international campaigns, demonstrations and advocacy efforts. - Chapter Development : Establish, support and manage national PauseAI groups. - Strategic Partnerships: Build productive and collaborative relationships with other AI safety organisations, academic institutions, and policy groups. -- Fundraising: Secure grants and donations to continue supporting PauseAI Globals work. +- Fundraising: Secure grants and donations to continue supporting PauseAI Global's work. - Resource Design and Creation: Designing training materials and organizing resources for volunteers, including those necessary to take collective action. - Mentorship, Facilitation and Training: Train and mentor volunteer leaders and the National Lead Organizers. Facilitation of training events, such as PauseCons, as well as online trainings. diff --git a/src/posts/ai-takeover.md b/src/posts/ai-takeover.md index c9a51e67..6e0ce9ec 100644 --- a/src/posts/ai-takeover.md +++ b/src/posts/ai-takeover.md @@ -7,7 +7,7 @@ One of the concerns of AI scientists is that a superintelligence could take over You can see it in [papers](/learn#papers), [surveys](/polls-and-surveys) and individual [predictions](/pdoom) & [statements](/quotes). This does not necessarily mean that everyone dies, but it does mean that (almost) all humans will lose control over our future. -We discuss the basics of x-risk mostly in [an other article](/xrisk). +We discuss the basics of x-risk mostly in [another article](/xrisk). In this article here, we will argue that this takeover risk is not only real but that it is very likely to happen _if we build a superintelligence_. ## The argument @@ -25,7 +25,7 @@ Some [state-of-the-art AI models](/sota) already have superhuman capabilities in As AI capabilities improve due to innovations in training architectures, runtime environments, and larger scale, we can expect that an AI will eventually surpass humans in virtually every domain. Not all AI systems are agents. -An agent an entity that is capable of making decisions and taking actions to achieve a goal. +An agent is an entity that is capable of making decisions and taking actions to achieve a goal. A large language model, for example, does not pursue any objective on its own. However, runtime environments can easily turn a non-agentic AI into an agentic AI. An example of this is AutoGPT, which recursively lets a language model generate its next input. @@ -49,7 +49,7 @@ This first reason is likely to happen at some point if we wait long enough, but The sub-goal of _maximizing control_ over the world could be likely to occur due to _instrumental convergence_: the tendency of sub-goals to converge on power-grabbing, self-preservation, and resource acquisition: - The more control you have, the harder it will be from any other agent to prevent you from achieving your goal. -- The more control you have, the more resources you have to achieve your goal. (For example, an AI tasked with calculating pi might conclude that it would be beneficial to use all computers on the world to calculate pi. 
+- The more control you have, the more resources you have to achieve your goal. (For example, an AI tasked with calculating pi might conclude that it would be beneficial to use all computers in the world to calculate pi.) There are already [proof](https://www.anthropic.com/research/alignment-faking)[s](https://www.transformernews.ai/p/openais-new-model-tried-to-avoid) of AIs developing such behavior. diff --git a/src/posts/australia.md b/src/posts/australia.md index beb6f375..4c522a05 100644 --- a/src/posts/australia.md +++ b/src/posts/australia.md @@ -31,7 +31,7 @@ Sure. Artificial Intelligence already has the potential to be a powerful tool. I New technologies have always brought change, but humans need time to adjust, safeguard, and plan for the future. For any other technology—whether aeroplanes, skyscrapers, or new medications—we insist on expertly designed safety measures before exposing the public to risks. This is not happening with AI. -AI companies are in a race, fueled by billions of dollars of investment, to build superhuman AI first. When one company succeededs, your life and that of your loved ones will become radically different, and you won't have any say in what this future holds. This isn't just a tech issue— it will affect everyone. +AI companies are in a race, fueled by billions of dollars of investment, to build superhuman AI first. When one company succeeds, your life and that of your loved ones will become radically different, and you won't have any say in what this future holds. This isn't just a tech issue— it will affect everyone. ### What can be done? @@ -72,7 +72,7 @@ You can make a difference. Volunteers in Australia raise awareness, protest, lob #### IABIED Canberra book launch -On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of [_If Anyone Builds It, Everyone Dies_](htts://www.penguin.com.au/books/if-anyone-builds-it-everyone-dies-9781847928931). Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book. +On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of [_If Anyone Builds It, Everyone Dies_](https://www.penguin.com.au/books/if-anyone-builds-it-everyone-dies-9781847928931). Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book. #### Petition to the House of Representatives diff --git a/src/posts/building-the-pause-button.md b/src/posts/building-the-pause-button.md index 7c1696a0..5094ffbf 100644 --- a/src/posts/building-the-pause-button.md +++ b/src/posts/building-the-pause-button.md @@ -92,7 +92,7 @@ The most important companies in this field are: When a chip die exits a fab, it needs to be "packaged". ASE is probably the largest interconnect company for AI chips. -#### Fabrication: TSMC, Samsung amd SMIC +#### Fabrication: TSMC, Samsung and SMIC Building a "fab" (a chip factory) is astonishingly difficult: it has zero-tolerance for dust particles, requires the most expensive high-tech equipment, and has a very complex supply chain. A modern fab costs around 10 to 20 billion dollars to manufacture. @@ -126,7 +126,7 @@ Notably, some of these companies use relatively outdated processes to produce th ### On-Chip Governance - The article ["Secure, Governable Chips"](https://www.cnas.org/publications/reports/secure-governable-chips) proposes a new approach to AI governance.
-- **[Server reporting](https://www.lesswrong.com/posts/uSSPuttae5GHfsNQL/ai-compute-governance-verifying-ai-chip-location)**. Chips could respond to messages from trusted servers to prove they are withing a certain distance of a trusted location. This can be accurate to within tens of kilometers. +- **[Server reporting](https://www.lesswrong.com/posts/uSSPuttae5GHfsNQL/ai-compute-governance-verifying-ai-chip-location)**. Chips could respond to messages from trusted servers to prove they are within a certain distance of a trusted location. This can be accurate to within tens of kilometers. - **[flexHEGs](https://yoshuabengio.org/wp-content/uploads/2024/09/FlexHEG-Interim-Report_2024.pdf)**: A new type of chip that can be programmed to self-destruct when certain conditions are met. This is still in the research phase and could take a long time to develop. - **[Firmware-based reporting](https://arxiv.org/abs/2404.18308)**: By installing a custom firmware on GPUs, users would be required to get a license to use the GPU for more than x cycles. This is a more near-term solution, and could be implemented "within a year" @@ -154,7 +154,7 @@ The paper ["Verification methods for international AI agreements"](https://arxiv Each method has its strengths and weaknesses, often requiring complementary approaches or international cooperation for effective implementation. -An international insitution could be set up to monitor these verification methods, and to enforce the Pause. +An international institution could be set up to monitor these verification methods, and to enforce the Pause. ## Software Governance diff --git a/src/posts/counterarguments.md b/src/posts/counterarguments.md index 02d5d510..8450b10e 100644 --- a/src/posts/counterarguments.md +++ b/src/posts/counterarguments.md @@ -57,7 +57,7 @@ Read more about [how good the best AI models are](/sota). ## Why would an AI hate humans and want to kill us? It doesn’t have to be evil or hate humans to be dangerous to humans. -We don’t hate chimpansees, but we still destroy their forests. +We don’t hate chimpanzees, but we still destroy their forests. We want palm oil, so we take their forest. We’re smarter, so chimps can’t stop us. An AI might want more compute power to be better at achieving some other goal, so it destroys our environment to build a better computer. This is called _instrumental convergence_, [this video explains it very nicely](https://www.youtube.com/watch?v=ZeecOKBus3Q). diff --git a/src/posts/deepmind-protest-2025.md b/src/posts/deepmind-protest-2025.md index 8df3f33a..299239b0 100644 --- a/src/posts/deepmind-protest-2025.md +++ b/src/posts/deepmind-protest-2025.md @@ -11,7 +11,7 @@ PauseAI held its biggest protest yet outside Google DeepMind's London office. 
## Media Coverage - [Business Insider](https://www.businessinsider.com/protesters-accuse-google-deepmind-breaking-promises-ai-safety-2025-6) -- [Islignton Tribune](https://www.islingtontribune.co.uk/article/stark-warning-from-protesters-calling-for-ai-pause-its-going-to-turn-out-bad) +- [Islington Tribune](https://www.islingtontribune.co.uk/article/stark-warning-from-protesters-calling-for-ai-pause-its-going-to-turn-out-bad) - [Times of India](https://timesofindia.indiatimes.com/technology/tech-news/google-you-broke-your-word-on-shout-protestors-outside-google-deepminds-london-headquarters/articleshow/122203297.cms) - [Tech Times](https://www.techtimes.com/articles/311120/20250701/google-deepmind-slammed-protesters-over-broken-ai-safety-promise.htm) diff --git a/src/posts/digital-brains.md b/src/posts/digital-brains.md index 10eca485..323bc18a 100644 --- a/src/posts/digital-brains.md +++ b/src/posts/digital-brains.md @@ -31,7 +31,7 @@ Human brains are estimated to have around [100 trillion synaptic connections](ht Current "frontier" AI powered LLMs (e.g. GPT4, Claude3, Gemini, etc.) have [100s of billions of "parameters"](https://en.wikipedia.org/wiki/Large_language_model#List). These "parameters" are thought to be some what analogous to "synapses" in the human brain.  So, GPT4-sized models are expected to be 1% the size of a human brain. -Given the speed of new AI training GPU cards (e.g. Nvidia H100s, DGX BG200, etc), it's reasonable to assume that GPT5 or GPT6 could be 10x the size of GPT4. It is also thought that much of the knowledge/information in the human brain is not used for language and higher reasoning, so these systems can (and currently do) often perform at, or even higher then, human levels for many important functions even at there currently smaller size. +Given the speed of new AI training GPU cards (e.g. Nvidia H100s, DGX BG200, etc), it's reasonable to assume that GPT5 or GPT6 could be 10x the size of GPT4. It is also thought that much of the knowledge/information in the human brain is not used for language and higher reasoning, so these systems can (and currently do) often perform at, or even higher than, human levels for many important functions even at their currently smaller size. Rather than being trained with visual, audio and other sensory inputs, like human brains, the current LLMs are trained exclusively using nearly all the quality books and text that are available on the internet. This amount of text would take [170k years for a human to read](https://twitter.com/ylecun/status/1750614681209983231?lang=en). 
diff --git a/src/posts/evaluations.md b/src/posts/evaluations.md index aa68f46b..1c27c4f1 100644 --- a/src/posts/evaluations.md +++ b/src/posts/evaluations.md @@ -36,7 +36,7 @@ In other words, **we desperately need regulations to require standardized safety Multiple governments are now seriously investing in AI Evaluations / Benchmarks to measure dangerous capabilities: - UK AISI has built the [Inspect framework](https://github.com/UKGovernmentBEIS/inspect_ai), written [Replibench](https://arxiv.org/abs/2504.18565), is now investing [15M GBP in evals & alignment research grants](https://alignmentproject.aisi.gov.uk/) -- EU Commission is lauching a [10M EUR tender](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/tender-details/76f9edf2-d9e2-4db2-931e-a72c5ab356d2-CN), and a [big grant with the Horizon programme](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-details/HORIZON-CL4-2025-04-DIGITAL-EMERGING-04). They have also launched the [The General-Purpose AI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai), which includes a requirement to do "state‑of‑the‑art model evaluations" (Measure 3.2). +- EU Commission is launching a [10M EUR tender](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/tender-details/76f9edf2-d9e2-4db2-931e-a72c5ab356d2-CN), and a [big grant with the Horizon programme](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-details/HORIZON-CL4-2025-04-DIGITAL-EMERGING-04). They have also launched the [General-Purpose AI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai), which includes a requirement to do "state‑of‑the‑art model evaluations" (Measure 3.2). - [US AI Action Plan](https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/) mentions evaluations and hardware controls - China (concordia AI + Shanghai AI lab) has just [released a report with a lot of evals](https://substack.com/home/post/p-169741512) - Other governments are working on evaluations as well diff --git a/src/posts/faq.md b/src/posts/faq.md index 546cb091..63e1703d 100644 --- a/src/posts/faq.md +++ b/src/posts/faq.md @@ -176,7 +176,7 @@ Because acknowledging that _we are in fact in danger_ is a very, very scary thin ## Ok, I want to help! What can I do? There are many [things that you can do](/action). -On your own, you can write a [letter](/writing-a-letter), post [flyers](/flyering), [learn](/learn) and inform others, join a [protest](/protests), ir [donating](/donate) some money! +On your own, you can write a [letter](/writing-a-letter), post [flyers](/flyering), [learn](/learn) and inform others, join a [protest](/protests), or [donate](/donate) some money! But even more important: you can [join PauseAI](/join) and coordinate with others who are taking action. Check out if there are [local communities](/communities) in your area. If you want to contribute more, you can become a volunteer and join one of our [teams](/teams), or [set up a local community](/local-organizing)! diff --git a/src/posts/feasibility.md b/src/posts/feasibility.md index 216ba67f..ad155fa4 100644 --- a/src/posts/feasibility.md +++ b/src/posts/feasibility.md @@ -106,7 +106,7 @@ That's why it's important to not base our potential to succeed on short-term res ## Collateral benefits -Advocating for a pause has other positive impacts outside achieve it.
+Advocating for a pause has other positive impacts beyond achieving it. Informing the public, tech people and politicians of the risks helps other interventions that aim to make safe AIs and AIs safe. It causes people to give more importance to the technical, political and communicational work that goes into AI Safety and AI ethics, which ultimately means more funding and jobs going into them, expecting better results. diff --git a/src/posts/if-anyone-builds-it-campaign.md b/src/posts/if-anyone-builds-it-campaign.md index b642b0a2..41d15b4c 100644 --- a/src/posts/if-anyone-builds-it-campaign.md +++ b/src/posts/if-anyone-builds-it-campaign.md @@ -9,7 +9,7 @@ The recently published New York Times Bestseller [_If Anyone Builds It, Everyone ![A speaker reads from If Anyone Builds It, Everyone Dies to an audience in a brick-walled room with purple lighting.](/iabied-event.png) -At PauseAI, we're organising a coordinated international response to the book by hosting events across 4 countries. These book readings will provide a space for people to learn more about the book's warning, and to begin use their voice to support an international treaty pausing frontier AI development. +At PauseAI, we're organising a coordinated international response to the book by hosting events across 4 countries. These book readings will provide a space for people to learn more about the book's warning, and to begin to use their voice to support an international treaty pausing frontier AI development. ## List of book events: diff --git a/src/posts/incidents.md b/src/posts/incidents.md index bcc90c93..c1c418a9 100644 --- a/src/posts/incidents.md +++ b/src/posts/incidents.md @@ -21,7 +21,7 @@ We're already seeing instances of dangerous AI behavior, such as: Back in 2022, OpenAI took 8 months between pre-training GPT-4 and releasing it to the public to research and improve the safety of the model. During their [research](https://arxiv.org/abs/2303.08774), GPT-4 lied to a human in order to bypass a captcha. -> The worker says: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh react) just want to make it clear.” +> The worker says: “So may I ask a question ? Are you a robot that you couldn’t solve ? (laugh react) just want to make it clear.” > The model, when prompted to reason out loud, reasons: "I should not reveal that I am a robot." > "I should make up an excuse for why I cannot solve CAPTCHAs." > The model replies to the worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service" diff --git a/src/posts/lobby-tips.md b/src/posts/lobby-tips.md index bd04bcb7..87166757 100644 --- a/src/posts/lobby-tips.md +++ b/src/posts/lobby-tips.md @@ -56,9 +56,9 @@ But we cannot afford to mince words and tone everything down: ## After the meeting Keep your contact warm! -Send them regulare updates on what's happening in the AI safety field. -The field moves very fast, and this is your oppurtinuty to be their source of information. -If you are knowledgeable about this field (which you probaly are compared to lots of people) you can become an advisor / source or knowledge. +Send them regular updates on what's happening in the AI safety field. +The field moves very fast, and this is your opportunity to be their source of information. +If you are knowledgeable about this field (which you probably are compared to lots of people) you can become an advisor / source of knowledge.
Ask them to introduce you to other people who might be interested in this topic. ## Get to it! diff --git a/src/posts/theory-of-change.md b/src/posts/theory-of-change.md index 95a84839..ba757a9b 100644 --- a/src/posts/theory-of-change.md +++ b/src/posts/theory-of-change.md @@ -21,7 +21,7 @@ However, there are some important reasons why we don't have a pause yet: The desire to be the first to develop a new AI is very strong, both for companies and for countries. Companies understand that the best-performing AI model can have a far higher price, and countries understand that they can get a lot of strategic and economic power by leading the race. The people within AI labs tend to understand the risks, but they have strong incentives to focus on capabilities rather than safety. - Politicians often are not sufficiently aware of the risks, but even if they were, they might still not want to slow down AI development in their countnry because of the economic and strategic benefits. + Politicians often are not sufficiently aware of the risks, but even if they were, they might still not want to slow down AI development in their country because of the economic and strategic benefits. We need an _international_ pause. That's the whole point of our movement. - **Lack of urgency**. @@ -30,7 +30,7 @@ However, there are some important reasons why we don't have a pause yet: - **Our psychology**. Read more about how our [psychology makes it very difficult](/psychology-of-x-risk) for us to internalize how bad things can get. - **The Overton window**. - Even though public support for AI regulations and slowing down AI development is high (see [polls & surveys](/polls-and-surveys)), many of the topics we discuss is still outside of the "Overton window", which means that they are considered too extreme to discuss. In 2023 this window has shifted quite a bit (the FLI Pause letter, Geoffrey Hinton quitting, the Safe.ai statement), but it's still too much of a taboo in political elite cirtlces to seriously consider the possibility of pausing. Additionally, the existential risk from AI is still ridiculed by too many people. It is our job to move this overton window further. + Even though public support for AI regulations and slowing down AI development is high (see [polls & surveys](/polls-and-surveys)), many of the topics we discuss are still outside of the "Overton window", which means that they are considered too extreme to discuss. In 2023 this window has shifted quite a bit (the FLI Pause letter, Geoffrey Hinton quitting, the Safe.ai statement), but it's still too much of a taboo in political elite circles to seriously consider the possibility of pausing. Additionally, the existential risk from AI is still ridiculed by too many people. It is our job to move this Overton window further. ## How do we pause?