50 changes: 50 additions & 0 deletions src/lib/components/Home.svelte
@@ -0,0 +1,50 @@
<script lang="ts">
import Block from '$lib/components/Block.svelte'
import QuotesCarousel from '$lib/components/QuotesCarousel.svelte'
import Stats from '$lib/components/Stats.svelte'
import RiskMatrix from '$lib/components/RiskMatrix.svelte'
</script>

<!-- The Hero image is in `+layout.svelte` -->
<QuotesCarousel />
<br />
<Stats />
<br />
<RiskMatrix />
<section>
<Block linkText="Read about the risks" href="/risks">
<span slot="title">We risk <u>losing control</u></span>
AI can have amazing benefits, but it could also erode our democracy, destabilize our economy and
could be used to create powerful cyber weapons.
</Block>
<Block linkText="How and why AI could kill us" href="/xrisk">
<span slot="title">We risk <u>human extinction</u></span>
Many AI labs and experts agree: AI could end humanity.
</Block>
<Block linkText="Read the proposal" href="/proposal">
<span slot="title">We need a <u>pause</u></span>
Stop the development of AI systems more powerful than GPT-4 until we know how to make them safe.
This needs to happen on an international level, and it needs to happen soon.
</Block>
<Block linkText="How long do we have?" href="/urgency">
<span slot="title">WE NEED TO ACT <u>RIGHT NOW</u></span>
In 2020, experts thought we had more than 35 years until AGI. Recent breakthroughs show we might
be almost there. Superintelligence could be one innovation away, so we should tread carefully.</Block
>
<Block linkText="Take action" href="/action">
<span slot="title"><u>YOU</u> CAN HELP</span>
Too few people are well-informed about the potential risks of AI. Inform others, and help stop this
race to the bottom.</Block
>
</section>

<style>
br {
content: '';
display: block;
height: 0.1rem;
margin: 5rem 0;
background: var(--bg-subtle);
width: 100%;
}
</style>
5 changes: 3 additions & 2 deletions src/lib/components/QuotesCarousel.svelte
@@ -28,13 +28,13 @@
image: Turing
},
{
text: "The robot is not going to want to be switched off because you've given it a goal to achieve and being switched off is a way of failing—so it will do its best not to be switched off.",
text: 'If we pursue [our current approach], then we will eventually lose control over the machines',
author: 'Stuart Russell',
title: 'Writer of the AI textbook',
image: Russell
},
{
text: 'It’s very challenging psychologically to realize that what you’ve been working for, with the idea that it would be a great thing—for society, for humanity, for science—may actually be catastrophic.',
text: 'Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.',
author: 'Yoshua Bengio',
title: 'AI Turing Award winner',
image: Bengio
@@ -62,6 +62,7 @@
{/each}
<button on:click={nextSlide} class="nav-button">→</button>
</div>
<a href="/quotes">more quotes</a>
</div>

<style>
74 changes: 74 additions & 0 deletions src/lib/components/RiskMatrix.svelte
@@ -0,0 +1,74 @@
<script lang="ts">
// Define the risks and their positions (x, y)
type Risk = {
name: string
capabilities: number
impact: number
}

let risks: Risk[] = [
{ name: 'Facial Recognition', capabilities: 10, impact: 10 },
{ name: 'Automated Disinformation', capabilities: 30, impact: 15 },
{ name: 'Bias in AI', capabilities: 40, impact: 15 },
{ name: 'Fully autonomous weapons', capabilities: 60, impact: 40 },
{ name: 'AI powered cyberweapons', capabilities: 75, impact: 50 },
{ name: 'AI powered bioweapons', capabilities: 80, impact: 60 },
{ name: 'Loss of control', capabilities: 85, impact: 90 }
]

function handleClick(risk: Risk) {
alert(`You clicked on ${risk.name}`)
}
</script>

<svg width="500" height="500" viewBox="0 0 120 120">
<!-- X and Y Axes -->
<line x1="20" y1="90" x2="100" y2="90" stroke="var(--color-text)" />
<line x1="20" y1="90" x2="20" y2="10" stroke="var(--color-text)" />
<text x="40" y="95" font-size="4" fill="var(--color-text)">Capabilities</text>
<text x="-30" y="5" font-size="4" transform="rotate(-90, 12, 5)" fill="var(--color-text)"
>Potential Impact</text
>

<!-- "We are here" line -->
<line
x1="20"
y1="90"
x2="45"
y2="65"
stroke="var(--color-text)"
stroke-width="0.5"
stroke-dasharray="2,2"
/>
<text x="46" y="63" font-size="3" fill="var(--color-text)" transform="rotate(-90, 46, 63)"
>We are here</text
>

<!-- Risks as clickable circles -->
{#each risks as risk}
<circle
cx={risk.capabilities + 10}
cy={100 - risk.impact}
r="2"
fill="var(--brand)"
on:click={() => handleClick(risk)}
class="cursor-pointer"
/>
<text
x={risk.capabilities + 11}
y={100 - risk.impact - 1}
font-size="3"
fill="var(--color-text)">{risk.name}</text
>
{/each}
</svg>

<style>
circle {
transition: r 0.2s ease-in-out;
}

circle:hover {
r: 3;
}
</style>
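The RiskMatrix template computes each point's position inline (`cx={risk.capabilities + 10}`, `cy={100 - risk.impact}`). As a sketch (not part of this PR), that mapping could be factored into a small pure helper, which also makes it easy to unit-test:

```typescript
type Risk = {
  name: string
  capabilities: number // 0–100 score along the x-axis
  impact: number // 0–100 score along the y-axis
}

// Map a risk's scores into the 120×120 SVG viewBox:
// shift x to the right of the y-axis, and flip y because
// the SVG origin is the top-left corner.
function toSvgPoint(risk: Risk): { cx: number; cy: number } {
  return { cx: risk.capabilities + 10, cy: 100 - risk.impact }
}

// e.g. toSvgPoint({ name: 'Loss of control', capabilities: 85, impact: 90 })
// → { cx: 95, cy: 10 }
```

The component could then use `toSvgPoint(risk).cx` in the template, keeping the coordinate arithmetic in one place if the axis offsets ever change.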
12 changes: 12 additions & 0 deletions src/posts/psychology-of-x-risk.md
@@ -108,6 +108,10 @@ So when people hear about existential risk, they will think it is just another o
Try to have an understanding of this point of view, and don't be too hard on people who think this way.
They probably haven't been shown the same information as you have.

### Present Bias

We tend to weigh the present more heavily than the future, so risks that lie even a few years ahead feel less pressing than they are.

### We like to think that we are special

Both at a _collective_ and at an _individual_ level, we want to believe that we are special.
@@ -176,6 +180,9 @@ In an [interview](https://youtu.be/0RknkWgd6Ck?t%25253D949), he gave the following

It should surprise no one that some of the fiercest AI risk deniers are AI researchers themselves.

Take Yann LeCun, for example, one of the most vocal critics of AI risk.
He works at Meta, a company with a strong commercial interest in advancing AI.

### Easy to dismiss as conspiracy or cult

In the past year, the majority of the population was introduced to the concept of existential risk from AI.
@@ -227,6 +234,11 @@ We instinctively fear heights, big animals with sharp teeth, sudden loud noises,
A superintelligent AI does not hit any of our primal fears.
Additionally, we have a strong fear of social rejection and of losing social status, which means that people tend to be afraid of speaking up about AI risks.

### Diffusion of responsibility

No single person is "responsible" for making sure AI doesn't lead to our extinction.
So everyone assumes that someone else will solve it.

### Scope insensitivity

> "A single death is a tragedy; a million deaths is a statistic." - Joseph Stalin
48 changes: 2 additions & 46 deletions src/routes/+page.svelte
@@ -1,54 +1,10 @@
<script lang="ts">
import Block from '$lib/components/Block.svelte'
import Home from '$lib/components/Home.svelte'
import PostMeta from '$lib/components/PostMeta.svelte'
import QuotesCarousel from '$lib/components/QuotesCarousel.svelte'
import Stats from '$lib/components/Stats.svelte'

const title = 'We need to Pause AI'
const description = 'We are risking human extinction. We need to pause AI development, right now.'
</script>

<PostMeta {title} {description} />
<!-- The Hero image is in `+layout.svelte` -->

<QuotesCarousel />
<br />
<Stats />
<br />
<section>
<Block linkText="Read about the risks" href="/risks">
<span slot="title">We risk <u>losing control</u></span>
AI can have amazing benefits, but it could also erode our democracy, destabilize our economy and
could be used to create powerful cyber weapons.
</Block>
<Block linkText="How and why AI could kill us" href="/xrisk">
<span slot="title">We risk <u>human extinction</u></span>
Many AI labs and experts agree: AI could end humanity.
</Block>
<Block linkText="Read the proposal" href="/proposal">
<span slot="title">We need a <u>pause</u></span>
Stop the development of AI systems more powerful than GPT-4 until we know how to make them safe.
This needs to happen on an international level, and it needs to happen soon.
</Block>
<Block linkText="How long do we have?" href="/urgency">
<span slot="title">WE NEED TO ACT <u>RIGHT NOW</u></span>
In 2020, experts thought we had more than 35 years until AGI. Recent breakthroughs show we might
be almost there. Superintelligence could be one innovation away, so we should tread carefully.</Block
>
<Block linkText="Take action" href="/action">
<span slot="title"><u>YOU</u> CAN HELP</span>
Too few people are well-informed about the potential risks of AI. Inform others, and help stop this
race to the bottom.</Block
>
</section>

<style>
br {
content: '';
display: block;
height: 0.1rem;
margin: 5rem 0;
background: var(--bg-subtle);
width: 100%;
}
</style>
<Home />