Tobias co-founded the Center for Reducing Suffering with Magnus Vinding. CRS is a research center that works to create a future with less suffering, taking all sentient beings into account. More broadly, Tobias is involved in the effective altruism movement which applies evidence and reason to find the most effective ways to help others. In his new book, Avoiding the Worst: How to Prevent a Moral Catastrophe, Tobias lays out the concept of risks of future suffering (s-risks) and outlines ways to steer the world away from s-risks and towards a brighter future. You can get his book for free on Amazon or read the PDF version. Avoiding the Worst is also on the bookshelf of our Sentientism GoodReads group. His many other writings on how to best reduce suffering in the long-term future are collected here.
Previously, Tobias’ research at University College London focused on Cooperative Artificial Intelligence. The aim of this work is to better understand how artificial learners can achieve higher levels of cooperation in social dilemmas (e.g. through mechanism design). Before that, Tobias graduated in 2016 from Ulm University with a Master’s degree in Mathematics and a Bachelor’s degree in Computer Science and Physics. After graduating, he worked as a quantitative trader at Jane Street Capital.
In Sentientist Conversations we talk about the two most important questions: “what’s real?” & “what matters?”
02:09 Tobias’ Intro
– CRS: “How can you best reduce suffering?”
– Tobias’ book “Avoiding the Worst” and s-risks
– A 2 step approach: 1) who and what matters, 2) empirical understanding and action
03:33 What’s Real?
– “I am #atheist since I am old enough to think about these issues”
– “I have a naturalist worldview – my answer to what’s real is what we can scientifically observe and measure…”
– JW: For many religious people the biggest s-risk “is that of being sent to hell by a supposedly benevolent god”
– Cognitive biases: wishful thinking (hoping s-risks aren’t possible), confirmation bias (looking for evidence that supports our existing beliefs – ignoring contradictory evidence), scope neglect (struggling to understand or emotionally respond to very large numbers)
– JW: Animal agriculture as an understandable s-risk vs. #longtermism, large scales, artificial sentients
– “Factory farming is a moral catastrophe of incredible scale”
– JW: The ethical (e.g. “only humans matter”) and epistemological (e.g. “animal farming is humane”) errors that can cause massive harms
– Animal advocates’ neglect of wild / free-ranging animal suffering ethics: “the vast majority of sentient beings on earth are not humans… are not factory-farmed animals… but animals living in the wild, in nature – and they also suffer very serious harms… predation… starvation… diseases”
– @AnimalEthics video course on wild animal ethics
– “This might be the most important source of suffering at this point in time”
– Risks of belief digitisation (jumping to yes/no instead of probabilistic credences) when considering low probability outcomes and/or large scale impacts
– Dealing with uncertainty
– “It’s not necessarily what we are emotionally made for…”
– The expected value approach and “Pascal’s mugging”… “taking speculative scenarios sufficiently seriously but not getting too crazy over it”
– Bayesian thinking
– “I would not think of S-risks as a Pascal’s mugging… The broader idea of a large-scale moral catastrophe in the future doesn’t seem that far-fetched to me at all”
– “We already have a similar dynamic… in terms of factory farming. Why is it so crazy to think that something similar, and even larger in scale, could happen in the future?”
– “The topic is quite neglected. I’m just a random guy on the internet and I’ve managed to write the first book on the topic. It would seem that the possibility of worst case futures with astronomical levels of suffering is worth having one book on.”
– We know terrible things can happen… the rise of #fascism etc.
– “If you envision a future… more advanced technology… other planets… artificial intelligence… ever more powerful tools”
– “If we don’t make moral progress… it’s not so difficult to see how the combination of these two facts can lead to s-risks.”
20:30 What Matters?
– Divine command theory, nihilism, moral relativism vs. caring about the suffering & flourishing of others
– Virtue, feminist care, deontological, utilitarian, consequentialist ethics?
– “The idea that we should try to prevent the suffering of others… that seems relatively uncontroversial…”
– Beyond suffering: interests, preferences, life, dignity, agency? “I would not necessarily say that all these things are nonsensical… but it just seems that, especially when we are talking about severe forms of suffering… those other things seem relatively abstract and perhaps less important in comparison?”
– Suffering-focused ethics: “suffering should be our primary moral priority… a view I would broadly subscribe to”
– “It doesn’t seem to me to be morally urgent to create more happy beings… or to maximise the intensity of happiness of someone who is already well off… compared to alleviating the suffering of people who are worse off”
– “It should be fairly uncontroversial – the thing that everyone should be able to get behind… that those worst case outcomes are worth preventing”
– “I can sort of see where the nihilist faction might be coming from… “nothing really matters”… but you need to do something in life… so why not care about reducing the suffering of others?”
27:40 Who Matters?
– Coming across animal ethics arguments on the internet and going vegetarian at ~17-18 yrs old
– Reading Peter Singer and his argument for anti-speciesism
– Wild-animal suffering: “I was fairly quickly convinced… why would animals living in nature be less important than those in factory farms or those as pets”
– “Those distinctions [species, family, nation] are fairly arbitrary… you can say that… on a philosophical level I don’t find it particularly satisfactory”
– JW: Intellectual and emotional paths to extending our moral scope
– “Ideally it should be a combination of the two… there has to be this emotional core of empathising and compassion for others… if you don’t have this at all you might as well say ‘why care about anything?’”
– “It also requires the intellectual side to understand that this goes far beyond only the beings that you would normally care about… your immediate surroundings… you need the intellectual side to understand that this should apply to all sentient beings”
– Invertebrate and insect sentience? “I don’t want to pretend that I emotionally care about fruit flies… but on this abstract level the argument for taking invertebrates or insects seriously… is that while it is uncertain whether they are sentient there is maybe some chance… and if they are there is a very large number… this could be a quite important issue”
– JW: Anti-speciesism is rejecting speciesism against sentient beings (not, for example, species of plants or fungi)
– “This is why sentience would seem to be a logical and coherent criterion”
– “I’ve never met anyone who says that they care about plants…” JW: “I’ve met a few but that’s what being a vocal vegan on Twitter gets you…”
– Biocentrism, ecocentrism “I don’t really find the arguments for that very convincing”
– Artificial sentients? The Google LaMDA affair. “It does seem likely to me that at least in principle these forms of sentience could also be possible… they could be created in large numbers in the future… possibly a very important factor for s-risks… look at how humanity treats animals… those artificial minds could be completely powerless, without any sort of voice to represent them… there could be a complete lack of moral concern for such beings”
– “We could have artificial minds and they could be mistreated on a large scale”
– “I don’t have a crystal ball… we just need to deal with this uncertainty and take into account many different scenarios… and find ways to act that are robustly good in many different scenarios”
– JW: Animal advocates and vegans want to aggressively push humanity’s moral scope beyond anthropocentrism but are often very nervous about considering wild animals, future sentient beings or artificial sentient beings – they “push out but at the same time put the brakes on very quickly”
– “Even granting that all those sentient beings matter it might not be a smart strategic approach to try to move people all the way to that… it might be better to focus now on the first step of getting people to care about farmed animals… but this is different from saying that only farmed animals matter… this is just a strategic question”
– JW: “Your strategy or your tactics might well be to focus and prioritise – but that’s not an excuse to withdraw your moral consideration from these other sentient beings”
– “All those considerations don’t change anything about the fact that the wild animals exist and that they do matter”
– “I wouldn’t seriously try to pretend that I’m actually indifferent between my own well-being and that of a randomly selected wild animal… but on a theoretical, philosophical level it’s the right aspiration to have”
– Longtermism and future beings “On an abstract level I would fairly strongly defend the view that future beings matter equally… why would the suffering of someone in the year 2500 matter more or less than the suffering of someone in the year 2000?”
– The complications of uncertainty about the far future and our ability to impact it
– JW: Can we discount based on the probability of a future being existing? “Yes, I would say so – if a being doesn’t [or won’t] exist then they don’t need to be taken into account”
– “There are arguably far more future beings than are alive now… even if you only assign a small probability to scenarios where humanity expands into space… or has advanced technology which allows the creation of large numbers of beings… even a small probability of such large futures would be sufficient to make the case that in expectation there are far more beings in the distant future than are alive right now”
– Vegetarian and vegan transitions: “It hasn’t been so difficult socially… I did move on to become vegan a few years later”
48:45 A Better World?
– S-risks and X-risks (extinction risks)
– Antinatalism “then there will be no suffering and no s-risks”… “I don’t necessarily find that a very helpful approach to have… I don’t think that this is going to achieve very much… it’s not true that this is the only way to prevent s-risks… there are many other things that are far more strategically effective and reasonable… than this most nuclear option of all… it would trigger an enormous backlash and that’s actually quite toxic and counterproductive to the goal of reducing suffering and preventing s-risks”
– “At least in some versions it requires a purely suffering-focused view – that only suffering matters… while I do subscribe to suffering focused ethics… I wouldn’t necessarily go so far as to say that categorically only suffering matters”
– “It’s important to maintain a degree of humility and pluralism about values and not fall into some sort of fanaticism or extremism about one specific thing”
– “It can be quite tricky to give a specific example [of s-risks] because any specific example you give might sound quite speculative or even absurd to some people”
– “Maybe the most worrisome risks are those that we don’t even know about – the unknown unknowns”
– Incidental s-risks like factory farming “suffering is created as a byproduct of some economic process”
– Agential s-risks “even more disturbing… malevolent actors that deliberately cause harm out of sheer sadism or spite”
– Natural s-risks: “If life is common in the universe… there is already a large amount of suffering out there”
– “If you want something more concrete maybe you can watch episodes of Black Mirror”
– Elon Musk, transhumanism, elitism…
– “One can wonder why exactly it is so important for him [Elon Musk] to go to Mars… other than it seems kind of cool.”
– “That foundation of what actually matters or who matters should be thought about first before you are spending many billions of dollars on going to Mars”
– “The million-dollar question… what can we do about it?”
– “Perhaps the answer… is that we should do more research to figure out what to do about this… so that ideally future people who want to reduce s-risks will be in a better position”
– Capacity building… community building… research “That probably would be my answer to the question but I do understand that these higher level, abstract, more meta interventions might feel a bit unsatisfactory”
– More concrete interventions: “Expanding the moral circle – spreading concern for the suffering of all sentient beings… trying to improve our political system [see Magnus Vinding’s Reasoned Politics]”
– The Centre on Longterm Risks work on AI related s-risks
– Many of the s-risk avoidance interventions “just make the world broadly better from many perspectives… both from a more near-term perspective and from an s-risk perspective”
– “The idea of trying to improve politics isn’t perhaps entirely novel… we of course need to come up with something that is of interest to more than just a small circle or handful of people who are talking and thinking about s-risks… if you want to gain traction in the broader world it has to have somewhat broader appeal”
– JW: Is promoting the Sentientism worldview a robust intervention in the face of epistemic and moral uncertainty?
– “There is also a risk of perhaps going too much in the direction of people just discussing something on the internet with nothing ever happening in the real world… ultimately the goal of all this is to change something in the real world”
01:10:40 Follow Tobias
– Avoiding the Worst (available for free and as an audiobook)
Sentientism is “Evidence, reason & compassion for all sentient beings.” More at Sentientism.info.
Everyone, Sentientist or not, is welcome in our groups. The biggest so far is here on Facebook.