diff --git a/src/lib/components/Home.svelte b/src/lib/components/Home.svelte new file mode 100644 index 000000000..07c378cf9 --- /dev/null +++ b/src/lib/components/Home.svelte @@ -0,0 +1,50 @@ + + + + +
+ +
+ +
+ + We risk losing control + AI can have amazing benefits, but it could also erode our democracy, destabilize our economy, and + be used to create powerful cyber weapons. + + + We risk human extinction + Many AI labs and experts agree: AI could end humanity. + + + We need a pause + Stop the development of AI systems more powerful than GPT-4 until we know how to make them safe. + This needs to happen on an international level, and it needs to happen soon. + + + WE NEED TO ACT RIGHT NOW + In 2020, experts thought we had more than 35 years until AGI. Recent breakthroughs show we might + be almost there. Superintelligence could be one innovation away, so we should tread carefully. + + YOU CAN HELP + Too few people are well-informed about the potential risks of AI. Inform others, and help stop this + race to the bottom.
+ + diff --git a/src/lib/components/QuotesCarousel.svelte b/src/lib/components/QuotesCarousel.svelte index 31b735379..f092006a7 100644 --- a/src/lib/components/QuotesCarousel.svelte +++ b/src/lib/components/QuotesCarousel.svelte @@ -28,13 +28,13 @@ image: Turing }, { - text: "The robot is not going to want to be switched off because you've given it a goal to achieve and being switched off is a way of failing—so it will do its best not to be switched off.", + text: 'If we pursue [our current approach], then we will eventually lose control over the machines', author: 'Stuart Russell', title: 'Writer of the AI textbook', image: Russell }, { - text: 'It’s very challenging psychologically to realize that what you’ve been working for, with the idea that it would be a great thing—for society, for humanity, for science—may actually be catastrophic.', + text: 'Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.', author: 'Yoshua Bengio', title: 'AI Turing Award winner', image: Bengio @@ -62,6 +62,7 @@ {/each} + more quotes diff --git a/src/posts/psychology-of-x-risk.md b/src/posts/psychology-of-x-risk.md index d8b4e5050..af4e06058 100644 --- a/src/posts/psychology-of-x-risk.md +++ b/src/posts/psychology-of-x-risk.md @@ -108,6 +108,10 @@ So when people hear about existential risk, they will think it is just another o Try to have an understanding of this point of view, and don't be too hard on people who think this way. They probably haven't been shown the same information as you have. +### Present bias + +We tend to overweight the present and discount the future, so a harm that is decades away feels far less urgent than one arriving today. + ### We like to think that we are special Both at a _collective_ and at an _individual_ level, we want to believe that we are special.
@@ -176,6 +180,9 @@ In an [interview](https://youtu.be/0RknkWgd6Ck?t=949), he gave the followi It should surprise no one that some of the fiercest AI risk deniers are AI researchers themselves. +Take Yann LeCun, for example, one of the most vocal critics of AI risk. +He works at Meta, so his career and income depend on the continued development of AI. + ### Easy to dismiss as conspiracy or cult In the past year, the majority of the population was introduced to the concept of existential risk from AI. @@ -227,6 +234,11 @@ We instinctively fear heights, big animals with sharp teeth, sudden loud noises, A superintelligent AI does not hit any of our primal fears. Additionally, we have a strong fear of social rejection or losing social status, which means that people tend to be afraid of speaking up about AI risks. +### Diffusion of responsibility + +No single person is "responsible" for making sure AI doesn't lead to our extinction. +So we all assume that someone else will solve it. + ### Scope insensitivity > "A single death is a tragedy; a million deaths is a statistic." - Joseph Stalin diff --git a/src/routes/+page.svelte b/src/routes/+page.svelte index 0799c7d3e..7377cbc8f 100644 --- a/src/routes/+page.svelte +++ b/src/routes/+page.svelte @@ -1,54 +1,10 @@ - - - -
- -
-
- - We risk losing control - AI can have amazing benefits, but it could also erode our democracy, destabilize our economy and - could be used to create powerful cyber weapons. - - - We risk human extinction - Many AI labs and experts agree: AI could end humanity. - - - We need a pause - Stop the development of AI systems more powerful than GPT-4 until we know how to make them safe. - This needs to happen on an international level, and it needs to happen soon. - - - WE NEED TO ACT RIGHT NOW - In 2020, experts thought we had more than 35 years until AGI. Recent breakthroughs show we might - be almost there. Superintelligence could be one innovation away, so we should tread carefully. - - YOU CAN HELP - Too few people are well-informed about the potential risks of AI. Inform others, and help stop this - race to the bottom. -
- - +