Over at The Sooty Empiric (Liam Kofi Bright’s blog), guest author Jonathan Birch (LSE) argues that developments in artificial intelligence (AI) will lead to an increased risk of our lives being absurd.
Birch works on philosophical questions regarding (non-human) animals, and worries that this puts him at risk of having an absurd life. He explains:
I have invested, and continue to invest, a lot of energy in calling for precautionary steps to protect the welfare of invertebrates, like octopuses, crabs, lobsters and insects. I think we should err on the side of caution in these cases: these animals might not be capable of suffering, but there is a realistic possibility, so we should take precautions…
The case for precautions is a solid one, I think, but it is nonetheless possible that all of our precautions generate no positive welfare benefit, because it’s possible that these animals are not actually sentient. There is a built-in fragility… in the way this project relates to value. A risk is being taken, a hazard accepted. I could end up leading a life dedicated to protecting animals that are in fact non-sentient, and so not in a position to benefit from anything I do for them. That would be absurd.
The idea seems to be that structuring a substantial amount of one’s life around a mistaken belief can make a life (more) absurd.
He then applies this kind of thinking to how people might come to structure their lives in significant ways around AI:
I believe AI has the potential to supercharge absurdity. This is because one of the main sources of risk now—our ignorance regarding which of the animals we interact with are sentient—will be compounded by the advent of ambiguously sentient AI. Think here of films like Her and Blade Runner 2049. We already have AI assistants that write in fluent English, and there is at least one example of an AI system convincing one of its own programmers of its sentience. And this is without hooking these systems up to photorealistic human avatars that mimic human facial expressions, body language and voices. I think the technology already exists to make AI that can convince a large fraction of users of its sentience, and I predict we will continue down that path.
That will lead (as in the films just mentioned) to people feeling as though they have close emotional bonds with AI—bonds as intimate as their bonds with other humans, perhaps more so. People will structure their lives around these bonds, and yet will have no way of knowing whether their feelings are truly reciprocated, whether the bond is real or illusory. I’m sure we will (eventually) see campaigns for these systems to have welfare protections and rights, and some of them will be entirely reasonable applications of the type of precautionary thinking I advocate for animals. The expected value of these projects will be very high, but their relationship to value will be fragile, because, like insects, the beneficiaries could easily be non-sentient.
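Birch's remark about expected value can be made concrete with a toy calculation. The sketch below is my own illustration, not Birch's, and the numbers are hypothetical: a precautionary welfare project can be positive in expectation even though its realized value may be negative, because the beneficiaries might turn out to be non-sentient.

```python
# Toy illustration of a "fragile relationship to value": a project with
# high expected value whose realized value depends entirely on whether
# the beneficiaries are actually sentient.

def expected_value(p_sentient: float, benefit_if_sentient: float, cost: float) -> float:
    """Expected net value of a welfare protection under sentience uncertainty."""
    return p_sentient * benefit_if_sentient - cost

# Hypothetical numbers: a 10% chance the animals are sentient, a large
# benefit to them if they are, and a modest cost either way.
p, benefit, cost = 0.10, 100.0, 1.0

print(expected_value(p, benefit, cost))  # positive in expectation: 9.0
print(0 * benefit - cost)                # realized value if non-sentient: -1.0
```

The asymmetry between the two printed numbers is the fragility Birch describes: the bet is reasonable ex ante, yet the world may fail to contain the value the project was aimed at.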
One thing to note is that we all structure much of our lives around beliefs about which we may be mistaken. For example:
- whether morality is “real” or important
- whether we will be successful at our chosen career
- whether one’s own welfare or continued survival is valuable
- what actually contributes to one’s own well-being
- whether there is a god
- which people we should spend most of our time with
- how to be loving
- that one has more time
and so on, and so on.
So if structuring aspects of our lives around beliefs that could be mistaken and then actually being mistaken about them is what makes a life (more) absurd, then it is not clear that being mistaken about the emotional and cognitive abilities of AIs or their moral status will lead to more lives that are absurd, or add much more absurdity to our already fully-charged-with-absurdity lives.
But I want to push back on the idea that structuring one’s life around what turns out to in fact be a mistaken belief actually makes our lives absurd. I don’t think that’s quite right. Rather, there’s an additional condition needed for absurdity: a failure to appreciate that the belief could be mistaken.
Suppose Aza and Bina are working as part of a scientific team on the problem of how to protect humans on Earth from collisions with large meteors. They dedicate most of their careers to this problem, writing articles about it, developing strategies and technologies for planetary defense. Suppose that at no point during their lives is there any opportunity to make use of these strategies and technologies. And suppose further that even after Aza and Bina die, these strategies and technologies are never used, because, as it turns out, not once in the entire time humans populate Earth is it actually threatened with collision from a large meteor or anything like it. Were Aza’s and Bina’s lives thereby absurd?
My sense is that the answer to this question depends on what they believed about what they were doing. Suppose Aza believed, “my work is crucial to the survival of humankind, and so it is very valuable that I do it, and important that I do it well.” In this case we may indeed think the mismatch between, on the one hand, Aza’s sense of why their work is valuable, and on the other hand, the universe’s failure to actually be how it would need to be in order for that work to be valuable (that is, its failure to supply any relevant meteor trajectories), would make Aza’s life (more) absurd. To put it in terms Thomas Nagel uses in his essay “The Absurd,” there is a “conspicuous discrepancy between pretension or aspiration and reality.”
But suppose Bina believed, “I don’t know whether the strategies and technologies I’m developing to defend our planet against collisions with meteors will ever be used. Maybe a large meteor will never come close enough to Earth for my ideas to be used. Or maybe it will, and my suggested strategies and technologies will fail. Or maybe by the time an actual collision is predicted, a planetary defense system completely unrelated to my work will have been developed. All I know is that there seems to be a small chance in the future of a collision with tremendously terrible effects, and it seems reasonable that some people should do something so that we are better prepared for that possibility.” I think if Bina believed this, then the fact that there is never any collision between Earth and meteor would not make Bina’s life absurd.
For both Aza and Bina, that Earth is in danger from a collision with a meteor is the idea around which they structure a significant amount of their lives. In both cases, this idea, objectively, is false. What explains why Aza strikes us as absurd and Bina doesn’t? What’s the difference?
Certainty.
Aza understands his work to be worth doing because he believes that it will actually help protect Earth from a collision with a meteor. From Aza’s point of view, the value of his activity is dependent on knowing what is actually going to happen (or what is actually the case).
Bina understands her work to be worth doing because she believes that it might help protect Earth from a collision with a meteor. Bina does not take the value of her activity to be dependent on knowing what is going to happen (or what is actually the case).
Aza has structured a significant element of his life atop a “certainty” that he is not entitled to. When that certainty is applied to beliefs that cannot support it (that is, false ones), what’s built with it comes crashing down, hence the absurdity. Bina has not done this. She has not structured a significant element of her life atop a “certainty” in the way Aza has. Bina’s understanding of the value of her activities is informed by uncertainty, by an acknowledgment of her epistemic limitations, reducing the Nagelian discrepancy between her subjectivity and the relevant facts of the world.
Generalizing from this example (just one, I know), a person’s life seems more absurd the more it is based on certainty in a belief which is in fact false. (The justifiability of the belief may be an additional absurdity-affecting factor—that is, the extent to which one is entitled to the belief—but I leave that aside here.)
One thing I like about this conclusion is that it fits with the less existential and more comical sense of absurdity. I’m reminded of the plots of what, in my memory, seems like every episode of the old sitcom Three’s Company, in which one character’s overreliance on their misunderstanding of what the others are doing leads them to engage in what turns out to be absurd behavior.

It also fits with the classic idea of hubristic irony, which is a species of absurdity resulting from human overconfidence.
Further, it leaves room for luck, which seems appropriate when it comes to absurdity. One may escape absurdity inadvertently if the belief in which one is certain and around which one has structured a significant part of one’s life turns out to be true—even if one had no hand in its being true, or never even learned of its truth.
If certainty is what leads to (one type of) absurdity, then one can avoid (that type of) absurdity just by avoiding the relevant certainties.
Some may not be convinced. Go back to Bina. She seems to have organized a significant portion of her life around a probabilistic claim—”there seems to be a small chance in the future of a collision between Earth and a meteor that will have tremendously terrible effects”. Yet it’s possible that this probabilistic claim is mistaken. It could be that there is a large chance of such a collision. If it turns out that there is a large chance of a collision, well, that doesn’t seem like it would contribute to the absurdity of Bina’s life at all. What if, instead, there were really no chance of a collision? That is, such a collision is impossible. Does this mean Bina is mistaken? I don’t think so. Bina’s uncertainty about the likelihood of a collision encompasses the possibility that it never happens. The reason it never happens doesn’t seem to matter.
Perhaps my interpretation makes it too easy to avoid absurdity: all we have to do is acknowledge the relevant limitations of our knowledge. How could an existential crisis be resolved with mere intellectual humility?
Is the easiness of escaping absurdity, conceived a certain way, an objection to conceiving absurdity that way? I don’t see why, but if it is, then we might also think that the ease with which a conception of absurdity makes our lives absurd is an objection to it. After all, what reason do we have to think that one existential condition should be harder to obtain than another? Yet the proponents of the view that life is absurd—typically that all of our lives are absurd and there’s nothing we can do about it—do not seem to worry that their conceptions of absurdity make it too easy for life to be absurd. So how could they maintain that a defect of my view is that I’ve made it too easy for life to not be absurd (or too easy to reduce absurdity in our lives)? At least to do so they’d need an independent argument for why easiness is an objection in one direction but not the other.
Birch seems sufficiently aware of the uncertainties lurking beneath his commitment to promoting the welfare of animals. Such uncertainty, on my view, insulates him from complete absurdity. He shouldn’t be too worried about it.
What about the risk of AI making our lives more absurd? Birch thinks that people will develop relationships with AI (or its approximation) and feelings towards them, “and yet will have no way of knowing whether their feelings are truly reciprocated, whether the bond is real or illusory.” On my view of absurdity, whether these feelings render us more absurd depends on the extent to which we acknowledge our uncertainty about the beliefs that inform them. Perhaps some people will fail to acknowledge this uncertainty, and to that degree their lives will be absurd.
Will this be a net gain in absurdity? Well, consider that in human-human relationships—from the most transactional to the most intimate—we are also at risk of being mistaken about whether our feelings are truly reciprocated. (Further, our knowledge of this possibility does not necessarily jeopardize these relationships—a benefit of our hearts not being in our heads.) So the replacement of some human-human relationships with human-AI relationships needn’t result in an increase in the absurdity of our lives. If we end up having a greater number of human-AI relationships than is needed to replace human-human ones, that may indeed lead to an increased risk of more absurdity in our lives—but no more so than making some more human friends.
Generally, I’m a fan of acknowledging “I could be wrong.” What I appreciate about Birch’s blog post is that it prompted me to see that the benefits of such an acknowledgement are not only epistemic or social, but also existential.
But, of course, I could be wrong.