2 changes: 1 addition & 1 deletion src/posts/2023-july-london-13th.md
@@ -11,7 +11,7 @@ description: PauseAI protest, urging the United Nations Security Council to

## Contact

-- Alistair Steward ([twitter](https://twitter.com/alistair___s))
+- Alistair Stewart ([twitter](https://twitter.com/alistair___s))

## Press Release: PauseAI protests Foreign Office ahead of UN Security Council meeting on AI Risk

2 changes: 1 addition & 1 deletion src/posts/2023-july-london-18th.md
@@ -20,7 +20,7 @@ title: PauseAI protest @ FCDO, London, July 18th

## Contact

-- Alistair Steward ([twitter](https://twitter.com/alistair___s))
+- Alistair Stewart ([twitter](https://twitter.com/alistair___s))

## Press Release: PauseAI protests Foreign Office ahead of UN Security Council meeting on AI Risk

2 changes: 1 addition & 1 deletion src/posts/2023-june-london.md
@@ -17,7 +17,7 @@ A rapidly increasing number of AI experts [signed a statement](https://www.safe.

This has been signed by virtually all AI labs (OpenAI, Google DeepMind, Anthropic) and hundreds of AI scientists including Geoffrey Hinton, the "Godfather of AI".

-AI safety researchers have not reached on consensus on how large the risk of human extinction will be.
+AI safety researchers have not reached consensus on how large the risk of human extinction will be.
Results from the ["Existential risk from AI survey"](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results) show that estimates range from 2% to 98%, with an average of 30%.

Rishi Sunak has stated that the ["Government is looking very carefully at this"](https://twitter.com/RishiSunak/status/1663838958558539776) and that ["the UK is well-placed to lead"](https://twitter.com/RishiSunak/status/1662369922234679297) the global collaboration on safe AI development.
2 changes: 1 addition & 1 deletion src/posts/2023-june-melbourne.md
@@ -32,7 +32,7 @@ A rapidly increasing number of AI experts [signed a statement](https://www.safe.

This has been signed by virtually all AI labs (OpenAI, Google DeepMind, Anthropic) and hundreds of AI scientists including Geoffrey Hinton, the "Godfather of AI".

-AI safety researchers have not reached on consensus on how large the risk of human extinction will be.
+AI safety researchers have not reached consensus on how large the risk of human extinction will be.
Results from the ["Existential risk from AI survey"](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results) show that estimates range from 2% to 98%, with an average of 30%.

The protesters are urging the Australian government to take the lead on global AI safety and pause the development of more dangerous AI systems.
2 changes: 1 addition & 1 deletion src/posts/2023-may-deepmind-london.md
@@ -131,4 +131,4 @@ Stickers and pin badges

## Contact

-Alistair Steward ([email](mailto:[email protected]), [twitter](https://twitter.com/alistair___s))
+Alistair Stewart ([email](mailto:[email protected]), [twitter](https://twitter.com/alistair___s))
6 changes: 3 additions & 3 deletions src/posts/2024-vacancy-comms-director.md
@@ -1,6 +1,6 @@
---
title: Communications Director Vacancy at PauseAI Global (Vacancy closed)
-description: PauseAI is looking for an Communications Director to lead our Comms Team and Social Media accounts. Remote work or in-person in Utrecht, the Netherlands.
+description: PauseAI is looking for a Communications Director to lead our Comms Team and Social Media accounts. Remote work or in-person in Utrecht, the Netherlands.
---

_Update 2025-03-24: This vacancy is now closed._
@@ -17,14 +17,14 @@ PauseAI started in April 2023 and has since grown to 2000 members, over 100 regi

## Your Role

-Although many volunteers contribute to PauseAI (some even full-time), PauseAI has one paid staff member ([Organizing Director](/2024-vacancy-organizing-director)
+Although many volunteers contribute to PauseAI (some even full-time), PauseAI has one paid staff member ([Organizing Director](/2024-vacancy-organizing-director))
You will be the second hire and you will play a crucial role in how the organization grows and evolves.
You will work closely with the founder, Joep Meindertsma.
Be aware that PauseAI may grow very quickly in the near future, both in terms of members and funding.

### Tasks & Responsibilities

-- Lead the Comms [Team](/teams) (multiple volunteers with diverse relevant skills, some of who create and edit videos)
+- Lead the Comms [Team](/teams) (multiple volunteers with diverse relevant skills, some of whom create and edit videos)
- Develop and execute a communication strategy
- Set up an international communication pipeline for the various National Groups.
- Run our Social Media accounts (Twitter, Facebook, TikTok, LinkedIn, YouTube, Instagram, SubStack)
2 changes: 1 addition & 1 deletion src/posts/2025-february.md
@@ -52,7 +52,7 @@ The organizers of the Paris summit have created an agenda that will cover public
- The rapid advancement of AI capabilities in recent years makes it urgently required that world leaders start collaborating on mitigating the serious and [existential risks](https://www.safe.ai/work/statement-on-ai-risk) posed by increasingly powerful AI-systems to ensure our common future
- AI Safety needs to be the focus at the Paris AI Action Summit!

-**We propose that the organizers and the government delegates at the summit makes AI Safety the focus of the summit by working on the following:**
+**We propose that the organizers and the government delegates at the summit make AI Safety the focus of the summit by working on the following:**

- Collaborating on creating global treaties and regulations to mitigate serious and existential risks from AI, and reign in the companies and organizations racing to build ever more capable and dangerous AI systems.
- Planning the establishment of international bodies to enforce such treaties and regulations
2 changes: 1 addition & 1 deletion src/posts/2025-organizing-director.md
@@ -23,7 +23,7 @@ You will be responsible to the PauseAI Global CEO.
- Campaign Coordination and Project Management: Plan and execute international campaigns, demonstrations and advocacy efforts.
- Chapter Development : Establish, support and manage national PauseAI groups.
- Strategic Partnerships: Build productive and collaborative relationships with other AI safety organisations, academic institutions, and policy groups.
-- Fundraising: Secure grants and donations to continue supporting PauseAI Globals work.
+- Fundraising: Secure grants and donations to continue supporting PauseAI Global's work.
- Resource Design and Creation: Designing training materials and organizing resources for volunteers, including those necessary to take collective action.
- Mentorship, Facilitation and Training: Train and mentor volunteer leaders and the National Lead Organizers. Facilitation of training events, such as PauseCons, as well as online trainings.

6 changes: 3 additions & 3 deletions src/posts/ai-takeover.md
@@ -7,7 +7,7 @@ One of the concerns of AI scientists is that a superintelligence could take over
You can see it in [papers](/learn#papers), [surveys](/polls-and-surveys) and individual [predictions](/pdoom) & [statements](/quotes).
This does not necessarily mean that everyone dies, but it does mean that (almost) all humans will lose control over our future.

-We discuss the basics of x-risk mostly in [an other article](/xrisk).
+We discuss the basics of x-risk mostly in [another article](/xrisk).
In this article here, we will argue that this takeover risk is not only real but that it is very likely to happen _if we build a superintelligence_.

## The argument
@@ -25,7 +25,7 @@ Some [state-of-the-art AI models](/sota) already have superhuman capabilities in
As AI capabilities improve due to innovations in training architectures, runtime environments, and larger scale, we can expect that an AI will eventually surpass humans in virtually every domain.

Not all AI systems are agents.
-An agent an entity that is capable of making decisions and taking actions to achieve a goal.
+An agent is an entity that is capable of making decisions and taking actions to achieve a goal.
A large language model, for example, does not pursue any objective on its own.
However, runtime environments can easily turn a non-agentic AI into an agentic AI.
An example of this is AutoGPT, which recursively lets a language model generate its next input.
@@ -49,7 +49,7 @@ This first reason is likely to happen at some point if we wait long enough, but
The sub-goal of _maximizing control_ over the world could be likely to occur due to _instrumental convergence_: the tendency of sub-goals to converge on power-grabbing, self-preservation, and resource acquisition:

- The more control you have, the harder it will be from any other agent to prevent you from achieving your goal.
-- The more control you have, the more resources you have to achieve your goal. (For example, an AI tasked with calculating pi might conclude that it would be beneficial to use all computers on the world to calculate pi.
+- The more control you have, the more resources you have to achieve your goal. (For example, an AI tasked with calculating pi might conclude that it would be beneficial to use all computers on the world to calculate pi.)

There are already [proof](https://www.anthropic.com/research/alignment-faking)[s](https://www.transformernews.ai/p/openais-new-model-tried-to-avoid) of AIs developing such behavior.

4 changes: 2 additions & 2 deletions src/posts/australia.md
@@ -31,7 +31,7 @@ Sure. Artificial Intelligence already has the potential to be a powerful tool. I

New technologies have always brought change, but humans need time to adjust, safeguard, and plan for the future. For any other technology—whether aeroplanes, skyscrapers, or new medications—we insist on expertly designed safety measures before exposing the public to risks. This is not happening with AI.

-AI companies are in a race, fueled by billions of dollars of investment, to build superhuman AI first. When one company succeededs, your life and that of your loved ones will become radically different, and you won't have any say in what this future holds. This isn't just a tech issue— it will affect everyone.
+AI companies are in a race, fueled by billions of dollars of investment, to build superhuman AI first. When one company succeeds, your life and that of your loved ones will become radically different, and you won't have any say in what this future holds. This isn't just a tech issue— it will affect everyone.

### What can be done?

@@ -72,7 +72,7 @@ You can make a difference. Volunteers in Australia raise awareness, protest, lob

#### IABIED Canberra book launch

-On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of [_If Anyone Builds It, Everyone Dies_](htts://www.penguin.com.au/books/if-anyone-builds-it-everyone-dies-9781847928931). Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book.
+On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of [_If Anyone Builds It, Everyone Dies_](https://www.penguin.com.au/books/if-anyone-builds-it-everyone-dies-9781847928931). Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book.

#### Petition to the House of Representatives

6 changes: 3 additions & 3 deletions src/posts/building-the-pause-button.md
@@ -92,7 +92,7 @@ The most important companies in this field are:
When a chip die exits a fab, it needs to be "packaged".
ASE is probably the largest interconnect company for AI chips.

-#### Fabrication: TSMC, Samsung amd SMIC
+#### Fabrication: TSMC, Samsung and SMIC

Building a "fab" (a chip factory) is astonishingly difficult: it has zero-tolerance for dust particles, requires the most expensive high-tech equipment, and has a very complex supply chain.
A modern fab costs around 10 to 20 billion dollars to manufacture.
@@ -126,7 +126,7 @@ Notably, some of these companies use relatively outdated processes to produce th
### On-Chip Governance

- The article ["Secure, Governable Chips"](https://www.cnas.org/publications/reports/secure-governable-chips) proposes a new approach to AI governance.
-- **[Server reporting](https://www.lesswrong.com/posts/uSSPuttae5GHfsNQL/ai-compute-governance-verifying-ai-chip-location)**. Chips could respond to messages from trusted servers to prove they are withing a certain distance of a trusted location. This can be accurate to within tens of kilometers.
+- **[Server reporting](https://www.lesswrong.com/posts/uSSPuttae5GHfsNQL/ai-compute-governance-verifying-ai-chip-location)**. Chips could respond to messages from trusted servers to prove they are within a certain distance of a trusted location. This can be accurate to within tens of kilometers.
- **[flexHEGs](https://yoshuabengio.org/wp-content/uploads/2024/09/FlexHEG-Interim-Report_2024.pdf)**: A new type of chip that can be programmed to self-destruct when certain conditions are met. This is still in the research phase and could take a long time to develop.
- **[Firmware-based reporting](https://arxiv.org/abs/2404.18308)**: By installing a custom firmware on GPUs, users would be required to get a license to use the GPU for more than x cycles. This is a more near-term solution, and could be implemented "within a year"

@@ -154,7 +154,7 @@ The paper ["Verification methods for international AI agreements"](https://arxi

Each method has its strengths and weaknesses, often requiring complementary approaches or international cooperation for effective implementation.

-An international insitution could be set up to monitor these verification methods, and to enforce the Pause.
+An international institution could be set up to monitor these verification methods, and to enforce the Pause.

## Software Governance

2 changes: 1 addition & 1 deletion src/posts/counterarguments.md
@@ -57,7 +57,7 @@ Read more about [how good the best AI models are](/sota).
## Why would an AI hate humans and want to kill us?

It doesn’t have to be evil or hate humans to be dangerous to humans.
-We don’t hate chimpansees, but we still destroy their forests.
+We don’t hate chimpanzees, but we still destroy their forests.
We want palm oil, so we take their forest. We’re smarter, so chimps can’t stop us.
An AI might want more compute power to be better at achieving some other goal, so it destroys our environment to build a better computer.
This is called _instrumental convergence_, [this video explains it very nicely](https://www.youtube.com/watch?v=ZeecOKBus3Q).
2 changes: 1 addition & 1 deletion src/posts/deepmind-protest-2025.md
@@ -11,7 +11,7 @@ PauseAI held its biggest protest yet outside Google DeepMind's London office.
## Media Coverage

- [Business Insider](https://www.businessinsider.com/protesters-accuse-google-deepmind-breaking-promises-ai-safety-2025-6)
-- [Islignton Tribune](https://www.islingtontribune.co.uk/article/stark-warning-from-protesters-calling-for-ai-pause-its-going-to-turn-out-bad)
+- [Islington Tribune](https://www.islingtontribune.co.uk/article/stark-warning-from-protesters-calling-for-ai-pause-its-going-to-turn-out-bad)
- [Times of India](https://timesofindia.indiatimes.com/technology/tech-news/google-you-broke-your-word-on-shout-protestors-outside-google-deepminds-london-headquarters/articleshow/122203297.cms)
- [Tech Times](https://www.techtimes.com/articles/311120/20250701/google-deepmind-slammed-protesters-over-broken-ai-safety-promise.htm)

2 changes: 1 addition & 1 deletion src/posts/digital-brains.md
@@ -31,7 +31,7 @@ Human brains are estimated to have around [100 trillion synaptic connections](ht

Current "frontier" AI powered LLMs (e.g. GPT4, Claude3, Gemini, etc.) have [100s of billions of "parameters"](https://en.wikipedia.org/wiki/Large_language_model#List). These "parameters" are thought to be some what analogous to "synapses" in the human brain.  So, GPT4-sized models are expected to be 1% the size of a human brain.

-Given the speed of new AI training GPU cards (e.g. Nvidia H100s, DGX BG200, etc), it's reasonable to assume that GPT5 or GPT6 could be 10x the size of GPT4. It is also thought that much of the knowledge/information in the human brain is not used for language and higher reasoning, so these systems can (and currently do) often perform at, or even higher then, human levels for many important functions even at there currently smaller size.
+Given the speed of new AI training GPU cards (e.g. Nvidia H100s, DGX BG200, etc), it's reasonable to assume that GPT5 or GPT6 could be 10x the size of GPT4. It is also thought that much of the knowledge/information in the human brain is not used for language and higher reasoning, so these systems can (and currently do) often perform at, or even higher than, human levels for many important functions even at their currently smaller size.

Rather than being trained with visual, audio and other sensory inputs, like human brains, the current LLMs are trained exclusively using nearly all the quality books and text that are available on the internet. This amount of text would take [170k years for a human to read](https://twitter.com/ylecun/status/1750614681209983231?lang=en).

2 changes: 1 addition & 1 deletion src/posts/evaluations.md
@@ -36,7 +36,7 @@ In other words, **we desperately need regulations to require standardized safety
Multiple governments are now seriously investing in AI Evaluations / Benchmarks to measure dangerous capabilities:

- UK AISI has built the [Inspect framework](https://github.com/UKGovernmentBEIS/inspect_ai), written [Replibench](https://arxiv.org/abs/2504.18565), is now investing [15M GBP in evals & alignment research grants](https://alignmentproject.aisi.gov.uk/)
-- EU Commission is lauching a [10M EUR tender](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/tender-details/76f9edf2-d9e2-4db2-931e-a72c5ab356d2-CN), and a [big grant with the Horizon programme](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-details/HORIZON-CL4-2025-04-DIGITAL-EMERGING-04). They have also launched the [The General-Purpose AI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai), which includes a requirement to do "state‑of‑the‑art model evaluations" (Measure 3.2).
+- EU Commission is launching a [10M EUR tender](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/tender-details/76f9edf2-d9e2-4db2-931e-a72c5ab356d2-CN), and a [big grant with the Horizon programme](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-details/HORIZON-CL4-2025-04-DIGITAL-EMERGING-04). They have also launched the [The General-Purpose AI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai), which includes a requirement to do "state‑of‑the‑art model evaluations" (Measure 3.2).
- [US AI Action Plan](https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/) mentions evaluations and hardware controls
- China (concordia AI + Shanghai AI lab) has just [released a report with a lot of evals](https://substack.com/home/post/p-169741512)
- Other governments are working on evaluations as well
2 changes: 1 addition & 1 deletion src/posts/faq.md
@@ -176,7 +176,7 @@ Because acknowledging that _we are in fact in danger_ is a very, very scary thin
## Ok, I want to help! What can I do?

There are many [things that you can do](/action).
-On your own, you can write a [letter](/writing-a-letter), post [flyers](/flyering), [learn](/learn) and inform others, join a [protest](/protests), ir [donating](/donate) some money!
+On your own, you can write a [letter](/writing-a-letter), post [flyers](/flyering), [learn](/learn) and inform others, join a [protest](/protests), or [donating](/donate) some money!
But even more important: you can [join PauseAI](/join) and coordinate with others who are taking action.
Check out if there are [local communities](/communities) in your area.
If you want to contribute more, you can become a volunteer and join one of our [teams](/teams), or [set up a local community](/local-organizing)!
2 changes: 1 addition & 1 deletion src/posts/feasibility.md
@@ -106,7 +106,7 @@ That's why it's important to not base our potential to succeed on short-term res

## Collateral benefits

-Advocating for a pause has other positive impacts outside achieve it.
+Advocating for a pause has other positive impacts outside achieving it.
Informing the public, tech people and politicians of the risks helps other interventions that aim to make safe AIs and AIs safe.
It causes people to give more importance to the technical, political and communicational work that goes into AI Safety and AI ethics, which ultimately means more funding and jobs going into them, expecting better results.
