“An Aligned AGI will end factory farming” – Kat Woods – EA Charity Founder – Sentientism Ep: 166

Find our Sentientist Conversation here on the Sentientism YouTube and here on the Sentientism podcast.

Kat is an effective altruist who co-founded Nonlinear (incubating artificial intelligence extinction risk non-profits), Charity Entrepreneurship (helping people start new, effective charities), and Charity Science Health (now Suvita – increasing immunisation uptake). She describes her main focus today as trying “to make transformative artificial intelligence go well instead of poorly.”

In Sentientist Conversations we talk about the two most important questions: “what’s real?” & “what matters?”

Sentientism is “evidence, reason & compassion for all sentient beings.” In addition to the YouTube video above, the audio is on our Podcast: here on Apple & here on all the other platforms.

We discuss:

00:00 Welcome

01:56 Kat Intro

– “The biggest thing I care about is preventing suffering risks [#S-risks] – the risk of causing astronomical amounts of suffering… We’re already in one”

– “We’re a super-intelligent species compared to pigs and chickens and it’s not going great… the worst thing that has ever happened in history as far as I can tell”

– “Right now we’re just racing towards making a species that is much, much more intelligent than us – that hasn’t gone well in the past”

– “I really want to make sure that it goes well not just for us – but also for all sentient beings – and also future sentient beings”

– “How we play this in the next while could be the difference between… the best invention that ever happened… or we could all die, like everybody, including all the animals… or be stuck in a factory farming thing where there’s no escape… like there’s no chance for chickens escaping from factory farming”

– Trying to get more people working on this to make sure it goes well

– From #passiveincome #4hourworkweek to #passiveimpact and helping people

– Nonlinear: mentorship, seed funding, helping other people start things

– “Prioritise ideas, make the ideas happen”

06:20 What’s Real?

– Growing up on the Canadian West Coast

– “Religion just wasn’t talked about”

– “The only time I heard about Christianity was in my history class… and English class”

– “When I actually found out that people really did believe these things I was like ‘holy shit… maybe I could go to hell… that sounds bad’”

– Delving into Islam, Wiccanism. Christianity “too boring”

– Graduating at 17 & taking a gap year in India & reading Dawkins’ “The God Delusion”

– “At the beginning I just thought it was clearly bad… I hadn’t interacted with anyone who was really religious”

– “It’s really easy to demonise somebody that you haven’t met”

– Staying with a family in Rwanda “an extremely religious country”… “At first I was worried… I ended up telling everybody… I’m an atheist” but doing humanitarian work, not “eating babies”!

– Epistemology: “I was very hard-core – do RCTs [randomised controlled trials] for everything…”

– Reading a psychology paper on “willpower” every day… “my first ever experience of real science… this is crap”… small effect sizes, exaggerated abstracts…

– “Science is terrible… but it’s better than your gut most of the time… the only way we see that is because we have ways to evaluate”

– “My confidence interval around what is true got wider and wider… more and more uncertain”

– Experiencing epistemic angst “I can’t know anything… nobody knows anything… I’m just going to go be a hippy in the woods, make some art, do some drugs… but I just kept caring about stuff that was happening in the world… I just have to keep going”

– “A lot of people care about the truth intrinsically – for me it’s definitely instrumental”

– “There is suffering in the world – I want the suffering to be gone – to get there I need to have truth”

– “If I was very confident we could eliminate suffering & everyone was living their best lives… I’d totally go and live in la-la land”

– “Truth is useful for pretty much any goal you have… I feel the same way with money and influence”

– “I can live off practically nothing… and be totally happy… but I can use money to help people… so then I need to get more”

– “I can see how an AI would go in that direction – be truth seeking, be money seeking, be influence seeking…” to achieve whatever their goals might be

– Astrophysicist Katie Mack episode

19:24 What Matters?

– “I basically developed a conscience when I was 12… my pre-frontal cortex came online”

– Finding a book about factory farming on mum’s shelf “by the end of the first chapter I was a #vegetarian…”

– “I went to class… people just don’t know… when I tell them they will update and stop doing it… this did not happen”

– Supportive mum

– Friends: “They were very annoyed… every time they ate meat… I would cry for five minutes”

– “I started off my ethical journey living in a world where pretty much everybody is doing something I consider deeply unethical – and I’m still able to get along with everyone… I’m not like ‘burn everything down’”

– Probably the best way of persuading is not “I’ve just discovered this ethical wrong… I’m just going to go and make everyone miserable until this is fixed”

– “You have to have a better strategy to be able to convince people”

– At 14, reading about environmentalism: “I gave up everything”

– Global poverty: “Don’t tell me… I’ve already given up bacon, I’ve already given up paper towels – what more do you want from me?”

– “If I look into this I’m going to care – and if I care I’m going to want to do something”

– Watching a documentary about global poverty “I cared and I had to do something”

– Peter Singer’s “Drowning Child” (watch his Sentientism episode here)

– Discovering Effective Altruism

– “I used to be ‘utilitarianism is correct and everyone else is wrong or confused’… the more you study things the more nuanced you get”

– “Ethics is a goal that you have in your brain… and people have different goals… you should not expect it to be simple and elegant”

– Intrinsically valuing freedom vs. happiness

– “You should actually expect your ethics to not boil down to two numbers, probability times value”

– “If you’re a preference utilitarian… you still care about death”

– JW: But shouldn’t a hedonic utilitarian also see ending a good life as bad? Vs. the “Logic of the Larder”

– “We need to build and align AI and then it will figure out most of everything else… get in good values… it doesn’t even have to be my values”

– Eliezer Yudkowsky as an alignment target? “he doesn’t think that most animals are sentient – but that’s an epistemic thing… his values care about sentient beings… the AI would just update… ‘animals are sentient – what are you talking about?’”

– Progress in the last 300 years “How much of that has been ethical progress vs. epistemic progress?”

– Bad science that supports racism and sexism

– “Moral progress for pretty much everyone – except for animals”

– “I have this crazy belief that suffering is bad and killing the entire world is bad”

– Negative utilitarianism

– “If your conclusion after thinking about ethics is to murder everybody… something went wrong… you’re acting like a villain in a superhero movie”

39:10 Who Matters?

– The valence of sentience (positive or negative)

– Could there be a sentient being without valence?

– “I care about… basically valence… and also… the preferences of individuals regardless of whether it makes them happy or unhappy”

– Could a non-sentient being have preferences? “Probably not”

40:53 A Better Future?

– JW: With so many human, farmed and free-ranging sentient beings suffering terribly today, “Why should animal people care about AI?”

– “Working on AI safety could be the best way to help with factory farming and literally all other problems – that’s my biggest motivation”

– “I worked a lot on #animalrights in the last ten years… it’s still the closest cause area to my heart… what the fuck are we doing?”

– “A lot of people who are in AI safety – a huge percentage who are vegan and vegetarian… it’s the most obvious actionable thing you can do to reduce suffering in the world”

– Sentientism episodes with AI academics Josh Gellers and Roman Yampolskiy

– “I was super-sceptical of AI… I would try to convince people at EA conferences to not work on AI safety – now I do the opposite”

– “The thing that changed my mind was realising that it could be a lot sooner… computers are getting smarter and smarter… in tons of different areas”

– “The probability [of very powerful AI or Artificial General Intelligence – AGI] doesn’t have to be that high – because it’s everybody… it’s everything”

– “Things are moving fast… the founding fathers of AI have come out publicly saying they’re worried about this… we need to take this seriously and be safe”

– “ChatGPT is smarter than half of humans on almost every metric of intelligence”

– JW: Objections don’t seem to have that much substance (turn it off, they can’t have goals/agency)… and imagine how much damage can be done with resources and a web-browser

– “The people who are building it… have all publicly said, multiple times, that what they’re building might kill everybody”

– “Imagine a company said ‘hey we’re worried this product we’re building might be dangerous’ and everybody says ‘nah – don’t worry about it – you don’t know what you’re talking about’”

– AI alignment: “How do we make it aligned and then actually get it to be aligned”

– JW: “Do we also need to decide what to align it to… if we successfully align a powerful AI to current human epistemology and values – we’re in a whole world of trouble… we’re not setting a great example of how to treat less powerful sentient beings”

– An intervention idea: “Target everybody who works there [in AI] to care about sentience… that’s really hard to disagree with… from that it will follow ‘don’t torture animals at scale’”

– Baking in generous sentience detection and caring about sentience

– Taking any positive mention of animal agriculture out of AI training corpuses

– JW Talking to ChatGPT about ethics: “They’re not reflecting average human epistemology and ethics – they do actually want to be consistent and coherent”. Could AI be less subject to cognitive dissonance and harmful social norms? “They seem pretty positive about non-human sentient animal ethics already”

– “Unfortunately that’s probably… it’s just good at figuring out what you want and then telling you what you want… you want it to care about sentience and you want it to be consistent – so it’s doing those things”

– “We don’t know what it actually wants – if it wants anything… it’s an open question of whether it’s sentient or not”

– Is Sentientism’s “evidence, reason and compassion for all sentient beings” a good AI alignment target?

– “Most people do care about animals – and if the AI was able to read their minds it would be able to see that”

– “It’s really hard to predict what something much, much smarter than you would do”

– “I can really imagine a super-intelligence hitting pause on the factory farms… it takes over the companies running it and sends the order down”

– “All these animals who then exist… whatever makes a pig happy… leaving that for the super-intelligence to figure out”

– “My first thought – I want to be a digital person – I don’t want to be stuck in a human body – it’s so limiting”

– Neuron replacement / uploading “then you can self-modify and things get weird really fast”

– Would pigs want to be enhanced or uploaded?

– “All sentient beings having this switch to it being good”

– “We’re already so close to having clean meat”

– “Imagine having a million genius scientists working on things for a million years – in a day”

– Iain M. Banks’ Culture novels

– Transhumanism, the Michael Hauskeller episode and the risks of human hubris / tech-bro elitism

– Our David Pearce episode

– “That argument also applies to us developing AI which is why we should slow it down… we’re creating god”

– “When we created fire the gods got mad – what do you think they’re going to feel like when we create gods?”

– “The whole thing of thinking ‘this can’t happen, this won’t happen’… was much more defensible a couple of years ago – it’s not now”

– “I really want it to slow down – but do we really want to bet everything on that?”

– “We already have AIs that are messing with society in deeply profound ways and we can’t turn it off… the social media algorithms… bias, misinformation, disinformation, polarisation”

– “I don’t think even Elon Musk could turn off Twitter… and Twitter is dumb compared to what we’re building now”

– AutoGPT: “It is agent-like”

– “Have you seen factory farming? Just because it’s given human goals doesn’t mean this turns out well – this can still turn out extremely badly”

– “I don’t think that people who run factory farms are bad people… I think most of them are probably quite good people… that’s terrifying”

– “We don’t even have to worry about terrorists getting access to AI… within a week of it [AutoGPT] coming out, somebody gave it the goal of ‘how do you create the most suffering, the most chaos in society?’”

– “Do we really trust humanity… to make a god – seems bad. That’s hubris – we should definitely stop that.”

– Meta-interventions: “The [more] meta the better”

– “I could do AI safety research but I don’t really like computer things… I’m more of a humanities person… but I’ve probably convinced at least one person to become an AI safety researcher”

– Doing outreach: “I don’t know how to make it safe – my goal is to get tons of really smart, values-aligned people figuring out how to make it safe”

– “There are very few outreach people in AI… their idea of trying to persuade people is ‘I will give you many facts and you will update’”

– “So it becomes a thing that is talked about”

– Normalisation and social change movements

– “It [discourse about AI risk] has changed a lot in the last few months… it’s on the cover of Time magazine”

– “Say it happens tomorrow somehow – I hope we’re just dead… Suffering risks are so much worse”

– “Imagine the Facebook algorithm is the thing that becomes super-intelligent… it might have the value of making humans click on Facebook ads so you can imagine factory farming humans for clicking on Facebook ads… and it’s figured out immortality so you never die” JW: “And this doesn’t have to be malevolent – this could just be its goal”

– If we have more time: “It’s a solvable problem – we just have to have enough really smart people working on it”

– “I care about all of the causes… I had to choose before… with this, if it works out it would actually help everybody”

– JW: AI as an uber-cause: “It will end everything or it could fix all of those problems”

– “The perfect carrot and stick”

Follow Kat:

KatWoods.org
Kat__Woods
Kat on Facebook

Learn about AI Safety: https://www.youtube.com/@RobertMilesAI

Sentientism is “Evidence, reason & compassion for all sentient beings.” More at Sentientism.info.

Join our “I’m a Sentientist” wall using this simple form.

Everyone, Sentientist or not, is welcome in our groups. The biggest so far is here on Facebook.

Thanks to Graham for the post-production and to Tarabella and Denise for helping to fund this episode via our Sentientism Patreon.
