Epistemic Risk and the Demands of Rationality

William James famously writes:

We must know the truth; and we must avoid error—these are our first and great commandments as would-be knowers; but they are not two ways of stating an identical commandment, they are two separable laws (James, 1896).

The core insight in this passage is that as epistemic agents we face competing costs: we can avoid error by refusing to believe anything, but then we will never learn the truth; and if we try to learn true things, we risk incurring some error too. Every theory of epistemic risk is ultimately an attempt to enrich this insight with a sufficiently general methodological framework for handling the (epistemic) trade-offs associated with competing doxastic attitudes.

In Epistemic Risk and the Demands of Rationality, Richard Pettigrew develops a rich, engaging, and capable decision-theoretic framework of epistemic risk for graded beliefs—i.e., credences. On Pettigrew’s view, as in epistemic utility theory generally, we have three main ingredients: (1) a value function (scoring rule); (2) a credence function (encoding your graded beliefs); and (3) a decision rule (a principle for identifying the rational credence function or functions). Within this space, where should we look for—i.e., where should we situate—epistemic risk? Pettigrew’s answer is that epistemic risk is to be found in the decision rule (the third ingredient in my tripartite framing). The main upshot of his argument is that permissivism about attitudes to risk implies permissivism about which credences it is rational to hold. More precisely, it implies permissivism about which prior credences it is rational to hold. And since updating via Bayes’ Rule on two different prior distributions will ordinarily lead to two different posterior distributions, it also implies permissivism about which posterior credences it is rational to hold.
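
To see why permissivism about priors carries over to posteriors, recall Bayes’ Rule:

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}.
\]

Two agents who share the likelihoods but begin from different priors will, in general, arrive at different posteriors upon learning E; so if rationality permits multiple priors, it thereby permits multiple posteriors.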

The first part of the book develops the core framework within the context of epistemic utility theory. Chapter 2 sets the stage and distinguishes between credal and full belief theories of epistemic risk, as well as between interpersonal and intrapersonal permissivism. Chapter 3 examines extant arguments in favor of situating epistemic risk within the value function for full beliefs, drawing in particular on Kelly (2014), Easwaran (2016), and Dorst (2019).

Chapter 4 plays a particularly important role in the argument of the book. In this chapter, Pettigrew argues that when it comes to credences, we cannot locate epistemic risk in the value function, as the above scholars have done for full beliefs. Instead, we have to locate it in the decision rule. The argument draws on Horowitz (2017) and runs as follows: epistemic value functions ordinarily must have a certain property—namely, scoring rules must be strictly proper. Strict propriety implies that, in expectation, every credence function will see itself as best. In other words, if you have a certain credence function and you ask yourself, “should I adopt some other credence function?”, the answer, computed in expectation relative to your current credence function, is no. For Pettigrew, this implies impermissivism—you will always judge your own credence function as uniquely best—and hence there is no room for epistemic risk in the value function. Strict propriety leaves no space for considering what to believe on the basis of considerations of epistemic risk. I will return to this point later.
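
To make the key property explicit: an inaccuracy measure I is strictly proper just in case every probabilistic credence function expects itself, and only itself, to do best:

\[
\mathbb{E}_p\big[ I(p) \big] < \mathbb{E}_p\big[ I(c) \big] \quad \text{for every credence function } c \neq p,
\]

where the expectation is computed with respect to p itself. The Brier score is the standard example. So a probabilistic agent can never expect a rival credence function to be more accurate than her own, and this is why, on Pettigrew’s argument, the value function leaves no room for risk-based choice among credences.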

Chapter 5 surveys central results in epistemic utility theory, focusing in particular on its two central tenets: probabilism (credences should be probabilities) and conditionalization (probabilities are updated via Bayes’ Rule). Meanwhile, Chapter 6 introduces risk-weighted decision rules, focusing in particular on the Generalized Hurwicz Criterion (GHC). GHC takes a weighted average of outcomes from best to worst—it is a generalization of maximax and maximin, which can be recovered as corner cases of GHC—and on Pettigrew’s account it accommodates epistemic risk by allowing us to tune its parameters (e.g., how much weight to give to the best outcome versus the worst) in proportion to the agent’s risk appetite. This is in several respects the core methodological chapter: it is through GHC that we implement our attitudes to epistemic risk. Chapter 7 applies this GHC-based framework to the selection of priors: risk-averse or risk-neutral GHC weights, for instance, will imply uniform priors, while risk-seeking ones will not. Indeed, for any prior, there is a set of GHC weights on which that prior is permissible; hence, for any prior, there exists some attitude to epistemic risk permitting it. Chapter 8 argues, drawing on Gallow (2019), that posteriors ought to maximize expected epistemic utility as computed by the prior; the update rule that achieves this is Bayes’ Rule.
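
A minimal sketch may help fix ideas. The following toy computation is my own illustration, not Pettigrew’s formalism: it applies a two-weight Hurwicz rule to the choice of a single credence in a proposition X, scored by the Brier rule, and shows how risk-neutral weights select the uniform prior while weights tilted toward the best case select an opinionated one.

```python
import numpy as np

# Toy Hurwicz-style choice of a single credence c in a proposition X,
# scored by the Brier rule. Purely illustrative.

def brier_inaccuracy(c):
    """Brier inaccuracy of credence c in X (and 1 - c in not-X) at each world."""
    at_x_true = (1 - c) ** 2 + (0 - (1 - c)) ** 2   # world where X is true
    at_x_false = (0 - c) ** 2 + (1 - (1 - c)) ** 2  # world where X is false
    return at_x_true, at_x_false

def hurwicz_score(c, w_best, w_worst):
    """Weighted average of best- and worst-case epistemic utility
    (utility = negative inaccuracy), best case weighted by w_best."""
    best, worst = sorted((-u for u in brier_inaccuracy(c)), reverse=True)
    return w_best * best + w_worst * worst

# By symmetry, c and 1 - c score alike, so search credences in [0.5, 1].
candidates = np.linspace(0.5, 1.0, 51)

# Risk-neutral weights (0.5, 0.5) select the uniform prior...
print(max(candidates, key=lambda c: hurwicz_score(c, 0.5, 0.5)))  # 0.5

# ...while weights tilted toward the best case select an opinionated one.
print(max(candidates, key=lambda c: hurwicz_score(c, 0.9, 0.1)))  # ~0.9
```

With weights (w, 1 − w) and w ≥ 0.5, a little calculus shows the selected credence is exactly w, so sweeping the weight sweeps out every prior in [0.5, 1]. This is a small-scale illustration of the claim that, for any prior, some attitude to risk sanctions it.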

The second part of the book is entitled “Putting the Theory to Work.” This title may be misleading, since part two of the book is devoted almost entirely to addressing objections to the theory. Nonetheless, the ensuing discussion is illuminating. Pettigrew focuses in particular on rationality-based objections to permissivism: if permissivism is true, some have argued, then the normative force of epistemic norms seems tenuous. Chapter 9 is devoted to addressing this family of concerns. Permissivism seems to imply that you can rationally jump between different credence functions without any underlying change in your evidence. This is the so-called brute shuffling objection. In Chapter 10, Pettigrew argues, first, that such arbitrary changes in credence amount to updates of belief and, since updates of belief should follow Bayes’ Rule, we can say they are irrational to the extent that they violate Bayes’ Rule. But one might reply: no, I was not updating my belief; I was substituting one prior for another, which I will later faithfully update via Bayes’ Rule. Pettigrew attempts to deflect this objection by drawing a distinction between authoritative and non-authoritative credences. By abandoning your credence function to select a new prior, you are signaling that you no longer see it as authoritative, and restarting your epistemic journey, so to speak. This is not irrational; it is arational. This is fine as far as it goes, but I find it hard to distinguish between a shuffle that counts as an update and a shuffle that counts as a restart. At this point, author and critic might be talking past each other.

Some permissible priors, including the uniform prior, are not ideal for learning. Chapter 11 briefly considers this worry, conceding that permissible priors may be imperfect in certain respects while emphasizing that they are merely permitted, not rationally required (which would be more problematic). Chapter 12 considers a couple of further objections. I will highlight the conspiracy theory objection in particular: given such wide permissivism, is it not the case that if I start with a high enough prior in a wild conspiracy, I will end up with a high posterior in that conspiracy? And if that is the case, as indeed it is, how can that be rational? Pettigrew argues that the actions one takes on the basis of such a belief may be objectionable from a practical standpoint. While this is sensible, I would have liked to see more discussion of the epistemic status of conspiratorial beliefs.

Let me now return to the point arising in Chapter 4—namely, that strict propriety leaves no room for situating epistemic risk in the utility function or scoring rule. Risk is a normative concept, and epistemic risk presumably is too. But there are at least two ways in which epistemic risk can be normative. First, we can use epistemic risk in an action-guiding sense: that is, to provide a recipe for which credences to select. This, I think, is how Pettigrew sees epistemic risk. Unsurprisingly, therefore, if one wants to use epistemic risk in an action-guiding sense, the best place to situate it is in the decision rule. For Pettigrew, you should consult your attitudes to epistemic risk in order to identify which decision rule to adopt. That decision rule then allows you to select a prior, and the prior is then updated via Bayes’ Rule. You get the full recipe.

Second, we can use epistemic risk evaluatively, to deliver, from a third-person perspective, some assessment of the decision maker. For example: Alice’s credences are very risky; Gigi’s credences are not risky at all; Alice’s credences are riskier than Gigi’s; if Kelvin wants to minimize risk, Kelvin should adopt different credences. The evaluative perspective does not aim to offer a decision rule, or a recipe, for what to believe. Rather, it delivers a normative statement about the epistemic quality of the agent’s beliefs.

Indeed, this is how James M. Joyce (1998, 2009) tends to view the accuracy dominance argument for coherence. When we say that an agent is incoherent, we are making an evaluative statement to the effect that there is something wrong with her credence function: we are highlighting that it is accuracy dominated. We are not, however, providing her with a recipe, or a roadmap, for selecting a new credence function. This point arises especially in discussions of the so-called Bronfman Objection to Joyce’s accuracy dominance argument for coherence: namely, that it fails to deliver a clear path for where the agent should move. But once we properly understand the normative structure of Joyce’s claim, the objection ceases to have any force. Coherence, on Joyce’s view, is not intended to be action-guiding in the sense of providing the agent with a recipe for which credences she should choose. Joyce is instead making an evaluative claim about her current state—namely, identifying a defect.
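
A standard toy illustration (my numbers, using the Brier score) makes the evaluative point vivid. Suppose Alice incoherently assigns credence 0.6 both to X and to ¬X. Her total inaccuracy is the same whichever world obtains:

\[
(1 - 0.6)^2 + (0 - 0.6)^2 = 0.52,
\]

whereas the coherent assignment of 0.5 to each proposition scores (1 − 0.5)² + (0 − 0.5)² = 0.5 at both worlds. The coherent credences are more accurate however the world turns out. Pointing this out identifies a defect in Alice’s credences; it does not, by itself, tell her to move to (0.5, 0.5) rather than to some other coherent credence function.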

My own approach to epistemic risk is developed in the Joycean spirit (Babic, 2019). On this approach, epistemic risk is defined in terms of the curvature of the scoring rule. In particular, it is measured in terms of the integral difference between how well you can do and how poorly you can do when you adopt certain credences. This measure is directly related to Leonard Savage’s (1971) generalized information entropy of the credence function: in particular, risk + entropy = k, where k is some constant. For example, a credence of 0.9 that a coin of unknown bias will land on heads is very risky under the Brier score, because if the coin lands on heads, you will do very well (your inaccuracy score will be (1 − 0.9)² = 0.01), but if it lands on tails, you will do very poorly (your inaccuracy score will be (0 − 0.9)² = 0.81). Epistemic risk corresponds to the size of the spread between these two quantities. By contrast, a credence of 0.5 is minimally risky, because your inaccuracy will be the same whether the coin lands on heads or tails: in both cases it will be 0.5² = 0.25, since (1 − 0.5)² = (0 − 0.5)².
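
In this simple binary case the spread admits of a closed form (my simplification; the general measure in Babic (2019) is the integral version). For a credence c ≥ 0.5 in heads, the Brier spread between worst and best case is

\[
(0 - c)^2 - (1 - c)^2 = 2c - 1,
\]

so the risk of c = 0.9 is 0.81 − 0.01 = 0.8, while the risk of the uniform credence c = 0.5 is 0, which is exactly the ordering described above.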

The fact that strict propriety commits us to seeing our credence as best in expectation is orthogonal to this account of epistemic risk. You can line up credence functions on a shelf, so to speak, and compare them in terms of their epistemic risk. While it is true that once you adopt one, it will see itself as best in expectation, that is irrelevant to the degree of epistemic risk associated with each of them.

So which approach is better? Should we encode epistemic risk in the value function and see it as an evaluative concept, or should we encode it in the decision rule and see it as an action-guiding concept? From Pettigrew’s perspective, I suspect, the latter is to be preferred because of its direct action-guiding force. From my perspective, the latter is problematic because it seems to presuppose doxastic voluntarism—the notion that we can choose what to believe. If doxastic voluntarism is false, and we cannot choose what to believe, then there is no action to guide. I have taken it as a given that we cannot choose our beliefs in the way we can choose practical alternatives. Hence, my view is that the evaluative approach is the most we can hope for. In that sense, I would have liked to see some discussion of doxastic voluntarism, as well as more discussion of what kind of normative concept Pettigrew envisions epistemic risk to be. Nonetheless, in this excellent book Pettigrew articulates an overall compelling picture of epistemic risk qua choice of decision rule, and of its relationship to credal permissivism.

REFERENCES

Babic, B. (2019). A Theory of Epistemic Risk. Philosophy of Science 86 (3), 522–550.

Dorst, K. (2019). Lockeans Maximize Expected Accuracy. Mind 128 (509), 175–211.

Easwaran, K. (2016). Dr. Truthlove, Or: How I Learned to Stop Worrying and Love Bayesian Probabilities. Noûs 50 (4), 816–853.

Gallow, D. (2019). Learning and Value Change. Philosophers’ Imprint 19, 1–22.

Horowitz, S. (2017). Epistemic Value and the Jamesian Goals. In J. Dunn and K. Ahlstrom-Vij (Eds.), Epistemic Consequentialism. Oxford: Oxford University Press.

James, W. (1896). The Will to Believe. The New World 5 (June), 327–347.

Joyce, J. M. (1998). A Nonpragmatic Vindication of Probabilism. Philosophy of Science 65 (4), 575–603.

Joyce, J. M. (2009). Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. In F. Huber and C. Schmidt-Petri (Eds.), Degrees of Belief. Dordrecht: Springer.

Kelly, T. (2014). Evidence Can Be Permissive. In M. Steup, J. Turri, and E. Sosa (Eds.), Contemporary Debates in Epistemology (2nd ed.). New York: Wiley-Blackwell.

Savage, L. (1971). Elicitation of Personal Probabilities and Expectations. Journal of the American Statistical Association 66 (336), 783–801.

Reviewed by Boris Babic, University of Hong Kong