This post is a cross-post from Daniel Rubio’s Substack. We wanted to include a couple of additional paragraphs to the Christ and Counterfactuals version, but they didn’t fit into the introductory remarks allowed by Substack’s cross-posting function, so we’re publishing the piece as a new version. You can see the original post here. We previously published a related article by Daniel Rubio on infinity, decision theory, and Christian ethics: Infinity and Christian ethics.
What is decision-theoretic paralysis?
In decision theory, ‘paralysis’ refers to a state of affairs where all options in a decision problem wash out to the same level of choiceworthiness. This paralyzes the agent because, apart from randomness/fiat, she has no basis on which to make a decision. A little paralysis is no problem. Would you prefer your plastic hanger be white or blue? But widespread paralysis is a problem. If your decision theory cannot tell you to prefer a nutritious deluxe salad for lunch over a tray of death caps, it is not fit for the task. Decision-theoretic paralysis is in many ways more of a theoretical problem than a practical one. Most actual agents have enough sense to simply not eat the death caps, no matter what an account of instrumental rationality says. The ones that didn’t got selected away eons ago. But we in the decision theory business would like our theories to predict/rationalize the sensible choices, and so there remains a problem for those who are trying to give a model of rationality.
Basic components of a decision theory: prizes, decision weights and decision rules
Before talking about how paralysis can enter into a decision theory, it will be helpful to give a broad schematic of how typical decision theories are put together. Doubtless there will be bells, whistles, and complications that this schematic ignores. But it will cover the general structure and will allow us to pose the problem at a fairly high level of generality. This in turn can let us look at potential solutions that solve the problem at its roots.
A typical decision theory has three components. The first we’ll call prizes. Prizes are the things the agent wants: goods, status, outcomes more generally. Anything the agent cares about is a prize. This can include simple things like bars of gold, and more complicated things like whether anyone has wronged the agent, or the agent’s relative position in a status hierarchy, or whether the agent has obtained their other prizes in an ethical manner. The second we’ll call decision weights. Decision weights attach to prizes that are the possible outcomes of different choices. The third we’ll call a decision rule. A decision rule will tell the agent how to rank the various choices available to her. Usually it does this by taking the prizes on offer from a choice, modifying them by the decision weights, and then outputting a ranking.
To make this more concrete, let’s take a fairly simple and popular decision theory: Bayesian expected utility maximization. The prizes in standard Bayesian decision theory are represented by the utilities, which are numerical representations of an agent’s pure preference for various goods (or states of affairs, or outcomes, or propositions, depending on which foundational setup we are working in). The decision weights are represented by the probabilities, which are meant to stand for the agent’s credences or subjective assessments of how likely various events are to occur. The decision rule then says: maximize expected utility. That means, for each course of action under consideration, take each good it might produce, multiply it by the probability of that good being produced conditional on taking the course of action, and add up the resulting products. Any option that maximizes expected utility is choice-worthy.
Here is a simple example of how this works. Suppose you are offered a choice between two coin flips and must choose one. Flip 1 pays $10 if heads and $2 if tails. Flip 2 pays $20 if heads and -$4 if tails. Assuming the coins are fair, the expected utility of Flip 1 is 0.5($10)+0.5($2) = $6 and the expected utility of Flip 2 is 0.5($20)+0.5(-$4) = $8. So the decision rule says ‘take Flip 2’ is the only choice-worthy action.
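To make the arithmetic explicit, here is a minimal sketch of that calculation (my own illustration, not code from the original post), treating dollar amounts as utilities just as the example above does:

```python
# A minimal sketch of expected utility maximization for the two coin flips
# described above. Dollar payoffs stand in for utilities.

def expected_utility(lottery):
    """Sum of probability-weighted payoffs for a single option."""
    return sum(p * payoff for p, payoff in lottery)

# Each option is a list of (probability, payoff-in-dollars) pairs.
flip_1 = [(0.5, 10), (0.5, 2)]
flip_2 = [(0.5, 20), (0.5, -4)]

options = {"Flip 1": flip_1, "Flip 2": flip_2}
values = {name: expected_utility(lottery) for name, lottery in options.items()}

print(values)                       # {'Flip 1': 6.0, 'Flip 2': 8.0}
print(max(values, key=values.get))  # 'Flip 2' is the only choice-worthy act
```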
How paralysis enters
Corresponding to the three moving parts of a simple decision theory, there are three ways for paralysis to enter in. We can have prize gaps: cases where the agent has no way of fitting the way she values a good into her network of desires. We can have decision weight gaps: cases where the agent has no way of assigning weights to the goods she might receive to feed into her decision rule. And we can have decision rule trivialization: cases where the agent’s decision rule stops differentiating different courses of action, so that all acts are assessed as the same (this can be fine in small problems, as when an agent does not care whether her car is red or blue, but it is a problem when it becomes widespread). Ultimately, all paralysis is a kind of decision rule trivialization, but the entry point is important. Sometimes it results from gaps in the prizes or decision weights, and sometimes it is a feature of the decision rule directly, or of how the decision rule interacts with the other parts of the puzzle. For simplicity I’ll stick with standard Bayesian decision theory to go through each of these sources of paralysis, partly because despite its many discontents it is still the gold standard, and partly because its simplicity makes it good for illustrative purposes.
Prize gaps
Prize gaps, or utility gaps, are the rarest source of paralysis. There are interesting questions about how to integrate prizes “objectively” into a decision theory when things like incomparability (no fact of the matter about how two goods compare; read more here) or parity (there is a difference in value between them, but the difference does not tell in favor of either; read more here) are on the table. But there is no technical objection to incorporating incomparable goods or goods on a par into an ordering by fiat. Perhaps the best potential example is L.A. Paul’s transformative experiences. A transformative experience is one where the agent does not know what it will be like to undergo the experience until they undergo it, and undergoing the experience changes the agent’s preferences. Whether this creates a utility gap is controversial—fiat is a powerful solvent. But it’s the best example we have. (Slightly more technical aside: if we’re working in a setting with infinite state spaces, we can get utility gaps in compound lotteries from things like options with divergent or conditionally convergent payoff series. These are technical enough that I will simply refer the reader to my explainer on infinity in Christian ethics, which goes over them to some degree.)
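To give a feel for the conditionally convergent worry, here is a toy sketch (my own, and only an informal analogy for the technical problem): the very same countable collection of payoffs can be summed to different totals depending on the order in which the states are enumerated, so such a lottery has no single well-defined value to feed into the decision rule.

```python
# A toy illustration of a conditionally convergent payoff series: summing the
# same payoffs in a different order yields a different total, so "the" value
# of the lottery is not well defined.

import itertools

def partial_sum(terms, n):
    return sum(itertools.islice(terms, n))

# Alternating harmonic payoffs: +1, -1/2, +1/3, -1/4, ...
def alternating():
    k = 1
    while True:
        yield (-1) ** (k + 1) / k
        k += 1

# One rearrangement: take two positive terms for every negative term.
def rearranged():
    pos = (1 / k for k in itertools.count(1, 2))    # 1, 1/3, 1/5, ...
    neg = (-1 / k for k in itertools.count(2, 2))   # -1/2, -1/4, ...
    while True:
        yield next(pos)
        yield next(pos)
        yield next(neg)

print(partial_sum(alternating(), 100_000))  # ~0.693 (ln 2)
print(partial_sum(rearranged(), 100_000))   # ~1.040 (a different total)
```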
Decision weight gaps
Decision weight gaps, or probability gaps, are more common. Some sources are kind of technical, such as non-measurable events or updating on an event of probability 0 (in standard probability theories). More familiarly, perhaps the easiest source of a probability gap is when the agent is simply clueless about how likely something is to occur. In slightly more technical terminology: the agent’s probability for an event is inscrutable. In slightly less technical terminology: the agent’s probability for an event is ¯\_(ツ)_/¯. This kind of totalizing uncertainty shows up more often than you’d think in the literature, from philosophy of physics to philosophy of mind and religion.
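As a tiny illustration of the “updating on 0” source of gaps (my own sketch, not from the post): the ratio definition of conditional probability simply has no value to hand the decision rule when the conditioning event has probability zero.

```python
# The ratio definition of conditional probability leaves a gap when the
# conditioning event has probability zero.

def conditional_probability(p_a_and_b, p_b):
    """P(A | B) = P(A and B) / P(B); undefined when P(B) = 0."""
    if p_b == 0:
        return None  # a gap: the theory assigns no decision weight here
    return p_a_and_b / p_b

print(conditional_probability(0.2, 0.5))  # 0.4
print(conditional_probability(0.0, 0.0))  # None -- nothing to feed the rule
```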
Contagion
Probability and utility gaps lead to paralysis in the same way: via contagion. Here’s a simple description of contagion: the functions that make decision theory run expect their inputs to be numbers. When instead they receive “?” or “¯\_(ツ)_/¯” they don’t know what to do and begin to panic. Panicky functions don’t help with decisions. A little more technically: in decision theory we assume that the acts form what’s called a mixture space, which means (among other things) that the value assigned to any act is a weighted sum (more technically still: a λ/(1−λ) mix) of two other acts, one of which is more valuable than it and one of which is less. We can think of the option as equivalent to a “coin flip” (at arbitrary bias) between these two. Since the only thing equivalent to “?” or “¯\_(ツ)_/¯” is more “?” or “¯\_(ツ)_/¯,” any gaps will propagate throughout the entire structure. This renders all acts equal in their assessment, paralyzing the agent.
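A rough computational analogy for contagion (mine, not the formal mixture-space argument): treat a gap as a floating-point NaN. NaN absorbs under addition and multiplication, so a single gap anywhere swallows every evaluation that touches it, including any mixture built on top of it.

```python
# Represent a probability or utility gap as NaN. One gap anywhere in a
# lottery swallows the whole evaluation -- and any mixture involving that
# lottery, and so on through the structure.

import math

GAP = float("nan")  # stands in for "?" or the shrug

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

clean = [(0.5, 10), (0.5, 2)]
gappy = [(0.5, 20), (0.5, GAP)]            # one inscrutable payoff
mixed = [(0.25, expected_utility(clean)),
         (0.75, expected_utility(gappy))]  # a compound lottery touching the gap

print(expected_utility(clean))              # 6.0
print(expected_utility(gappy))              # nan
print(expected_utility(mixed))              # nan -- the gap has spread
print(math.isnan(expected_utility(gappy)))  # True: no ranking is possible
```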
Infinities
Gaps, however, are not the only source of paralysis. Infinities can have a similar effect. In standard mathematics, infinite numbers have what is known as the absorption property. That means that infinity plus anything that isn’t a larger infinity equals infinity. (Technical aside: in cardinal arithmetic, the sum k + u, for cardinals k and u at least one of which is infinite, equals max(k, u).) Likewise, it means that infinity times anything that isn’t a larger infinity equals infinity. (Technical aside: in cardinal arithmetic, the product ku, for nonzero cardinals k and u at least one of which is infinite, equals max(k, u).) This means that infinity times a very small fraction still gets you infinity.
This creates paralysis because as long as the agent is not certain that an option won’t yield an infinite prize (maybe she’ll find an infinite prize on her way to lunch? Small chances, but you never know), the expected utility of that option is infinite. This means letting infinite values into a standard decision-theoretic ecosystem is perilous. They won’t trivialize everything. But they will trivialize everything that isn’t sure to avoid them. And they will be a decision rule trivializer for any decision rule where we have to add or multiply things (virtually all of them).
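Here is a hedged sketch of that trivialization (my own numbers, not from the post): give the salad and the death caps each a one-in-a-billion chance of yielding an infinite prize, and the expected-utility rule can no longer tell them apart.

```python
# Absorption paralyzes expected utility: a sliver of probability on an
# infinite prize collapses every option's expected utility to infinity.

INF = float("inf")

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

# The salad-vs-death-caps choice, with one-in-a-billion probability mass
# reserved for stumbling on an infinite prize along the way.
salad      = [(1 - 1e-9, 100),  (1e-9, INF)]
death_caps = [(1 - 1e-9, -1e6), (1e-9, INF)]

print(expected_utility(salad))       # inf
print(expected_utility(death_caps))  # inf -- the rule no longer ranks them
```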
Unfortunately, there are some pretty decent arguments for the existence of infinite prizes. Maybe the universe is infinite, and a civilization that expands into it indefinitely would be infinitely valuable. Maybe God exists (maybe God even incarnated). Maybe the right way to think of the value of a single life requires infinity. Maybe there’s a utility monster. These are, at least, prospects about which we should not be certain. And as we just saw, a teensy probability of an infinite prize is enough to cause trouble.
Getting unparalyzed
We’ve seen how to get paralyzed. How can we get unparalyzed? Because there are so many ways to get paralyzed, there isn’t likely to be a single sweeping solution. The problems of infinity can be somewhat (but not fully) tamed with non-standard analysis. Transformative experience might be handleable with higher-order preferences. Changing the rules of probability can somewhat mitigate both the contagion and some of the more technical sources of probability gaps. Backup norms might help when our primary decision rule gets trivialized, although they will have to be different enough to avoid the trivialization themselves. My inclination is to ban inscrutability outright. This is an ongoing area of research, and it’s possible that we just can’t have nice things.
Decision-theoretic paralysis and Christian Effective Altruism
Decision-theoretic paralysis is a major threat to effective altruism. Effective altruists seek to prioritize cause areas, in order to divert scarce resources into those areas that are significant, tractable, and neglected. Decision rules that rank different acts are an essential component of this prioritization. If our decision theory ends up paralyzed, every course of action ends up with the same priority.
Mark Johnston discusses an infinitarian paralysis that emerges if traditional forms of theism are true, and ends up arguing that God at least does not follow a decision rule that seeks to further the good. An ethical system need not be consequentialist to include a significant component based on furthering the good, and so Christian Effective Altruists face the paralysis problem doubly: they have special reason to worry about infinite goods that make the world unimprovable. A complete account of Christian Effective Altruism will need to reckon with these problems.