Author: Esker
Two of our Core Tenets here at Tabletop Builds are that “math is math” and “anecdotes are not good evidence.” One of the things we do when we look at the value of a class feature or the quality of a build is to try to calculate how it will perform “on average,” “in the long run.” In other words, what can we expect from it, not on a great day where we happen to roll a bunch of 20s in a row and our enemies roll 1s and 2s, and not on the worst day where we can’t seem to roll above a 10 and they never seem to miss, but averaging those days with the ones in between.
An important caveat up front is that the average result isn’t always what’s of interest. There will be some cases where we care more about making sure the worst result isn’t too bad than we do about making the average result as good as possible. We might write more about this in another post, but let’s start with the basics.
In order to figure out what to expect in the long run, we need to be able to do two things:
- Figure out how likely each possible outcome is (that is, what are the probabilities in question?).
- Decide on a quantitative scale that we can use to evaluate how good or bad the result is for each possible outcome.
If we can do both of those things, then we can work out the expected result of a game event by taking a weighted average: list each possible result, attach its “goodness/badness score” to it, and average those together weighted by how likely they are to occur.
A Basic Example: Average Damage Roll
There are lots of things in D&D that are best measured in units other than damage, but damage rolls are among the simplest things to calculate, so that’s where we’ll start.
Let’s say we are casting fire bolt, which does 1d10 fire damage at level 1. How much is that?
On an individual cast, it might be 1, it might be 10, and it might be something in between. But each result from 1 to 10 is equally likely. So, giving each of these a 10% chance of occurring, or a probability of 0.1, we wind up with an expected result (a weighted average of the possible results) of
0.1 x 1 + 0.1 x 2 + 0.1 x 3 + ... + 0.1 x 9 + 0.1 x 10
which works out to 5.5.
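As a quick sanity check, that weighted average can be computed in a few lines of Python (a sketch of the arithmetic, not any particular library):

```python
# Expected value of 1d10: each face from 1 to 10 occurs with probability 0.1.
faces = range(1, 11)
expected = sum(face * 0.1 for face in faces)
print(round(expected, 2))  # 5.5
```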
Useful shortcut: average of a 1dX roll
Any time we want to know the average result of a single die, the following rule is useful.
If the die has values from 1 to X, all equally likely, then the average result is (X+1)/2.
So 1d4 averages 5/2 = 2.5, 1d6 is 3.5, etc.
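In code, the shortcut is a one-liner; `die_average` is just a name we’re using for illustration:

```python
def die_average(sides):
    # Faces 1..X are equally likely, so the average is (X + 1) / 2.
    return (sides + 1) / 2

print(die_average(4))   # 2.5
print(die_average(6))   # 3.5
print(die_average(10))  # 5.5
```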
Constant Modifiers and Multiple Dice
Often we have damage formulas like 2d6+3 (the damage a Fighter with 16 Strength does with one hit of a greatsword, for example).
How do we find the expected result of something like this?
Fortunately, expected values behave really nicely when we’re adding multiple results together, even if some of them are rolled and some are constant.
- The expected value of the sum is the sum of the expected values.
- The expected value of a constant is the constant.
For an expression like 2d6+3, we really have three sources of damage: the first die, the second die, and the constant modifier. In other words, we can rewrite it as 1d6+1d6+3.
The expected value of each of the dice is 3.5 (see the shortcut above). The expected value of the +3 modifier is… 3.
So, all together, the expected value of 2d6+3 is 3.5 + 3.5 + 3 = 10.
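Because expected values add, a formula like 2d6+3 reduces to plain arithmetic. A sketch (the `die_average` helper just implements the (X+1)/2 shortcut from earlier):

```python
def die_average(sides):
    return (sides + 1) / 2

# 2d6+3 = 1d6 + 1d6 + 3; expectation is additive across dice and constants.
expected = die_average(6) + die_average(6) + 3
print(expected)  # 10.0
```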
Factoring in Accuracy
The above calculations assumed that we hit with our fire bolt or our greatsword, but we won’t always hit, and some characters will hit more often than others. So we need a way to compare two options where one may do more damage on a hit but have a lower chance of hitting.
Fortunately, as long as we’ve agreed to use expected value as the measure of interest, we can put both options on a single scale (as long as we have a reasonable way of estimating the chance of hitting, on which more below), by just factoring in a miss as another possible result.
So, let’s say our 1d10 fire bolt has a 60% chance of hitting. That means that the chance of doing 1 damage isn’t actually 10%, but rather 10% of the 60% of the time that we do any damage at all. That is, a 6% chance overall, same for 2, 3, 4, etc. up to 10. Meanwhile, we have a 40% chance of doing no damage.
Putting that all together, the expected damage of our fire bolt is:
0.4 x 0 + 0.06 x 1 + 0.06 x 2 + ... + 0.06 x 9 + 0.06 x 10
which winds up being 3.3.
Now, it would be tedious to list out all the possibilities like this every time, but notice that the 6% that we got was just our to-hit chance (60% here) times the chance of rolling that number (10%). And since all but the first result involve that 60%, we can factor it out and write it as
0.40 x 0 + 0.60 x (0.10 x 1 + 0.10 x 2 + ... + 0.10 x 9 + 0.10 x 10)
so that the bit in parentheses is just the average damage roll. That means that, factoring in accuracy, our fire bolt can be expected to do:
0.40 x 0 + 0.60 x [expected damage roll]
in other words, just 0.60 x 5.5, which, again, is 3.3.
This will work for any attack roll:
To find the expected damage, factoring in accuracy (which you should always factor in), just multiply the chance of hitting by the expected damage roll.
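That rule translates directly into code; a minimal sketch:

```python
def expected_damage(hit_chance, expected_damage_roll):
    # A miss deals 0, so only the hit branch contributes to the weighted average.
    return hit_chance * expected_damage_roll

# Fire bolt: 60% chance to hit, 5.5 average on 1d10.
print(expected_damage(0.60, 5.5))  # 3.3
```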
Application: Whether or not to “Power Attack”
Now we’re equipped to answer questions like “is it worth sacrificing some accuracy to get more damage on a hit?”
Since you’re reading an optimization blog, you may have taken a feat like Sharpshooter on a character at some point, and wondered “when is it worth making use of the −5 accuracy / +10 damage option?”
Let’s say we have a longbow, and if we don’t power attack we have a 75% chance to hit (a quick “go to” assumption we use, by the way, for a character with the Archery Fighting Style, which corresponds to needing a 6 or better on the d20). If we hit we’ll do 1d8+4 damage.
Alternatively, we can take the −5 to hit, to increase our damage to 1d8+14 if we manage to hit anyway. Now we need an 11 or better on the d20, which means we have a 50% chance to hit.
Without the power attack, our expected damage roll if we hit is 4.5 + 4, or 8.5. Factoring in the accuracy of 75%, our actual expected damage is:
0.75 x 8.5 = 6.375
With the power attack, our expected damage roll if we hit becomes 4.5 + 14, or 18.5, but our accuracy goes down to 50%. So our actual expected damage is:
0.5 x 18.5 = 9.25
So we can see that we’ve increased our expected damage by a bit under 3 by using the −5/+10 option. Far from adding +10 damage, but still almost a 50% increase in our damage output.
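The comparison above can be sketched as follows, using the numbers from the example (75% vs. 50% hit chance, 1d8 longbow, +4 modifier, +10 from the power attack):

```python
def expected_damage(hit_chance, avg_damage_roll):
    # Chance to hit times average damage on a hit.
    return hit_chance * avg_damage_roll

avg_d8 = (8 + 1) / 2  # 4.5

no_power_attack = expected_damage(0.75, avg_d8 + 4)   # 6.375
power_attack = expected_damage(0.50, avg_d8 + 14)     # 9.25

print(no_power_attack, power_attack)
```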
How do we come up with the chance that an attack (or anything else) succeeds?
Assuming a straight roll without advantage or disadvantage, which we’ll get into below, or any special bonuses like Bardic Inspiration, etc., which we covered in part 2, we can do this by starting with the DC (or AC if it’s an attack) and working backwards to find the range of d20 rolls that will result in a success.
Here’s the general procedure first, then we’ll go through an example.
- Start with the DC (or AC).
- Subtract the modifier from the DC to find the minimum d20 roll that will produce a success. Call this number M.
- The actual success chance is then 1 − (M−1)/20.
The expression in step 3 comes from the fact that M−1 is the highest roll that still fails. The values 1 through M−1 are all failures, and each occurs with probability 1/20, so collectively there’s an (M−1)/20 chance of failing. And since there’s a total probability of 1 to go around, the rest of the time we succeed.
Or, in a single formula:
Success Chance = 1 − (DC − modifier − 1) / 20
Example: A Basic Attack Roll
Let’s say we’ve got +7 to hit and we’re attacking an enemy with 15 AC.
We’ll need an 8 or better (15 − 7) on the d20 to hit.
That means a 7 or lower will miss, which happens with probability 7/20, or 35% of the time. Therefore the other 65% of the time we’ll hit.
Advantage and Disadvantage
What if we are rolling more than once and taking the better (or worse) result?
We can handle this case in two steps:
- First, find the chance that each individual d20 gives us the needed result.
- Then:
- If we have advantage, find the chance that at least one die gives us what we need.
- If we have disadvantage, find the chance that both dice give us what we need.
Step 1 is the same for both dice and works as if we didn’t have advantage or disadvantage.
For Step 2, we need the following fact about probability:
Given two events, where knowing that one of them happens tells us nothing about whether the other will (in probability jargon, they are independent, which is very much not the same as being mutually exclusive), the probability that both occur is the chance that the first happens times the chance that the second happens.
So, if each die has a 65% chance of coming up the way we want, then the chance that both of them do (which is what needs to happen if we have disadvantage) is simply
0.65 x 0.65 = 0.4225
This is then our chance of success if we need an 8 or better and have disadvantage.
If we have advantage, then we will succeed unless both dice come up as failures. So we can do the same thing we did for disadvantage, but to find the chance of failure instead. So if each die gives us a 35% chance of failure, then the chance that both of them disappoint us is
0.35 x 0.35 = 0.1225
Or 12.25%. Now, since we know the chance of failure, the chance of success is just the rest of the probability, since we either succeed or fail. Since the total probability is 1, our chance to succeed is 1 − our chance to fail. So, in this example, we succeed 87.75% of the time.
For you “give me the bottom line” folks, we have the following formulas:
Success Chance at Disadvantage = (Base Success Chance)^2
Success Chance at Advantage = 1 − (Base Fail Chance)^2
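Both formulas in code (a sketch; `p` is the base success chance on a single straight d20 roll):

```python
def disadvantage(p):
    # Both d20s must succeed.
    return p ** 2

def advantage(p):
    # Succeed unless both d20s fail.
    return 1 - (1 - p) ** 2

p = 0.65  # base chance from the "8 or better" example
print(round(disadvantage(p), 4))  # 0.4225
print(round(advantage(p), 4))     # 0.8775
```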
Summary
We’ve done a few things in this article:
- Discussed the idea of using the average result, or expected value, as a metric.
- Given a general procedure for calculating expected value, using damage rolls as an example.
- Explained how to combine multiple dice, constant modifiers, and accuracy into expected value calculations.
- Explained how to find success chances with straight rolls, advantage rolls, or disadvantage rolls.
Although the examples we’ve used are mostly based on attack rolls, the general principles apply any time you have multiple potential results, each with a certain chance of occurring, where we can put a number on how good/bad each outcome is.
That’s it for now! In part 2, we look at some nuances of these things, like how to account for mechanics where you get to add a die to your roll (as with spells like bless and guidance), as well as how to value mechanics where you get to decide after the roll to use a limited resource (Bardic Inspiration, the Lucky feat, Superiority Dice) to improve the result. Part 2 can be found here!
How do you factor critical hits into the math?
A critical hit is just one of the possible outcomes of the d20 roll, so if you’re finding the expected value (expected damage, say), then you include the probability of a crit times the damage you do on a crit in the weighted sum.
I didn’t factor in crits in the examples here for the sake of simplicity, but if you take the tier 1 fire bolt example where you have a 60% chance to hit, there’s actually a 55% chance of doing 1d10, a 5% chance of doing 2d10, and a 40% chance of doing 0. So you get 0.40 x 0 + 0.55 x 5.5 + 0.05 x 2 x 5.5 = 3.58.
Or you can slice the sum a bit differently and do 0.60 x 5.5 + 0.05 x 5.5, which gets you the same result.
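Sketching that crit-adjusted sum in code (assuming, as in the answer above, that a crit doubles the dice, i.e. 2d10 on the 5% natural-20 chance):

```python
avg_roll = 5.5  # average of 1d10
miss, normal_hit, crit = 0.40, 0.55, 0.05

# Weighted sum over the three outcomes: miss, normal hit, crit.
expected = miss * 0 + normal_hit * avg_roll + crit * (2 * avg_roll)
print(round(expected, 3))  # 3.575
```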