Kallabecca said:
You don't need a CI for known values. In the case of dice, what is on the sides is known, and unless something is wrong with the physical die, the rolls are truly random, so you can just build up a table of combinations. For example, with an Ability + Difficulty pool you get the following results.

Result    Count   Probability
(blank)    12      18.75%
a           6       9.38%
stt         2       3.13%
faa         1       1.56%
tt          1       1.56%
sst         3       4.69%
ss          1       1.56%
sa          1       1.56%
aa          1       1.56%
f           5       7.81%
t           7      10.94%
ft          1       1.56%
ff          1       1.56%
st          8      12.50%
fa          4       6.25%
s           6       9.38%
ffaa        1       1.56%
sstt        1       1.56%
ffa         2       3.13%

(s = success, a = advantage, f = failure, t = threat; "(blank)" = no net symbols)
Each die has 8 sides, so the total number of combinations is 64 (8 * 8). As the number of dice increases, the combinations keep going up. As I noted above, the worst-case dice pool (6 Proficiency, 6 Challenge, 6 Boost, and 6 Setback) results in a fairly large set of possible results. Originally I said it was on the order of 10^22, but I realized that the Boost and Setback dice really offer 3 distinct results, not 6 (since pairs of sides share the same result), which reduces the table to roughly 10^18. Doing an MC simulation of something that large would require a very large number of runs. My original tests in the other thread used 10^5 samples for just 2 dice (a maximum of 64 combinations) to get final numbers that came really close to those listed in the table here. In comparison, that many samples would barely scratch the surface of the possibilities of the largest pool.
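As a sanity check, the two-die table above can be reproduced by brute-force enumeration. A minimal sketch, assuming the standard Genesys Ability and Difficulty face layouts (the face sets below are my assumption, not stated in the post); advantage/threat cancellation happens automatically by summing signed counts:

```python
from itertools import product
from collections import Counter

# Assumed face layouts (standard Genesys dice; my assumption).
# Each face is (net successes, net advantages); failures/threats are negative.
ABILITY = [(0, 0), (1, 0), (1, 0), (2, 0), (0, 1), (0, 1), (1, 1), (0, 2)]
DIFFICULTY = [(0, 0), (-1, 0), (-2, 0), (0, -1), (0, -1), (0, -1), (0, -2), (-1, -1)]

# Enumerate all 8 * 8 = 64 combinations and tally the net results.
counts = Counter(
    (a_s + d_s, a_a + d_a)
    for (a_s, a_a), (d_s, d_a) in product(ABILITY, DIFFICULTY)
)

total = sum(counts.values())  # 64
for (s, a), n in sorted(counts.items()):
    print(f"succ={s:+d} adv={a:+d}: {n:2d} ({n / total:.2%})")
```

With these face sets the tallies match the table above, e.g. 12 all-blank results (18.75%) and 8 success+threat results (12.50%).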
On confidence intervals: no, you don't need them for known exact parameter values (as opposed to estimated parameter values). But probability intervals are NOT confidence intervals. Even if you know the exact parameter value, e.g. you knew exactly what the mean or median of a random process was (in which case you don't need a confidence interval to estimate uncertainty about that parameter), that still doesn't describe how the outcomes (as opposed to the parameter) are distributed. Probability/credible intervals do provide an easily interpreted summary of where the values fall. For example, if a distribution (not a mean or median, the WHOLE distribution) has a 90% probability interval of (1, 3), then 90% of the results will fall between 1 and 3.
For example, let's say we KNOW the median number of successes on some roll is 2. Well, cool, but we could have gotten this from an infinite number of distributions whose median is two, some examples of which have the following 90% probability intervals:
To me, those are very, very different distributions. Notice that the chance of failure for the second one is less than 10% (there is no guarantee of a symmetric distribution), but it appears to be higher for the others, since their intervals include 0 and negative values. A probability interval doesn't give ALL the information about a distribution, but it is a quick way of summarizing it, just like a confidence interval. Also, the PIs from an MC simulation don't rely on or require predefined distributions.
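As an illustration (using a made-up outcome distribution, since the example intervals aren't reproduced here), an empirical 90% probability interval is just the central 90% of the observed simulation outcomes:

```python
import random

random.seed(1)

def probability_interval(xs, level=0.90):
    """Empirical central interval covering `level` of the observed outcomes."""
    s = sorted(xs)
    n = len(s)
    lo = s[int(n * (1 - level) / 2)]
    hi = s[int(n * (1 + level) / 2) - 1]
    return lo, hi

# Hypothetical example: 50k simulated outcomes from some roll process.
sample = [random.gauss(2, 1) for _ in range(50_000)]
lo, hi = probability_interval(sample)

# About 90% of the outcomes fall inside (lo, hi) by construction.
coverage = sum(lo <= x <= hi for x in sample) / len(sample)
```

No distributional assumption is needed; the interval is read straight off the sorted simulation output.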
On exploration of the event space: okay, you're right, it would take a very large MC simulation to get, or even have a chance of getting, every outcome. But I don't need to observe every possible outcome to get good information, for several reasons. First, probability intervals stabilize quickly with sample size. After a certain point, say 100 or so simulations (a number based on personal experience), the PIs don't change much, and I'm mostly using >10k simulations (typically 50k). Empirically, my PIs are very, very stable between simulations. Second, thanks to the weak law of large numbers, I know my calculated means approach the true means.
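The weak-law point can be demonstrated directly: a Monte Carlo estimate of the mean net successes converges on the exact table mean as the number of simulated rolls grows. A sketch, again assuming standard Ability/Difficulty face values (my assumption):

```python
import random

random.seed(42)

# Assumed per-face net-success values (standard Genesys layouts; my assumption).
# Exact mean over the 64-combination table: 5/8 - 1/2 = 0.125.
ABILITY_SUCC = [0, 1, 1, 2, 0, 0, 1, 0]
DIFFICULTY_SUCC = [0, -1, -2, 0, 0, 0, 0, -1]

def mc_mean(n):
    """Monte Carlo estimate of the mean net successes over n simulated rolls."""
    total = sum(
        random.choice(ABILITY_SUCC) + random.choice(DIFFICULTY_SUCC)
        for _ in range(n)
    )
    return total / n

# The estimate tightens around 0.125 as n grows (weak law of large numbers).
for n in (100, 10_000, 50_000):
    print(n, mc_mean(n))
```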
To compare with observational studies: you don't have to observe every possible configuration of predictors and outcomes (in fact, to do so you would probably not have a random sample, which screws up inference). Similarly, I don't need to observe every possible outcome of an MC simulation for it to be valid. See the π-estimation example on Wikipedia's Monte Carlo method page: I don't need every possible observation, only a representative sample. Explicitly, I'm addressing the third goal listed on that page: generating samples from a probability distribution when a deterministic algorithm is infeasible (and I think the combinatorial method is infeasible here).
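The π example referenced above works the same way: sample points uniformly in the unit square and use the fraction that lands inside the quarter circle, with no enumeration of the event space required:

```python
import random

random.seed(0)

def estimate_pi(n):
    """Estimate pi from the fraction of uniform points in the unit quarter-circle."""
    hits = sum(
        random.random() ** 2 + random.random() ** 2 <= 1.0
        for _ in range(n)
    )
    return 4 * hits / n

print(estimate_pi(200_000))
```

The uncountably many possible points are never enumerated; a representative sample is enough for the estimate to converge.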
Now contrast that with your combinatorial method, where you DO have to calculate all possible outcomes (which, as you've pointed out, is on the order of 10^18), and which also takes a very long time. Aggregating that many individual results into a usable form would, I think, be difficult, especially for things like calculating correlations and PIs.
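For what it's worth, one way the combinatorial aggregation might be organized is by convolving per-die distributions one die at a time rather than enumerating raw combinations. A sketch for net successes only (face values are my assumption); tracking all four symbol types jointly is where the state space balloons:

```python
from collections import Counter

# Assumed per-face net-success values (standard Genesys layouts; my assumption).
ABILITY_SUCC = [0, 1, 1, 2, 0, 0, 1, 0]
DIFFICULTY_SUCC = [0, -1, -2, 0, 0, 0, 0, -1]

def convolve(dist, die):
    """Fold one more die into an exact distribution of outcome counts."""
    out = Counter()
    for total, ways in dist.items():
        for face in die:
            out[total + face] += ways
    return out

# Exact distribution for a 2 Ability + 2 Difficulty pool.
dist = Counter({0: 1})
for die in [ABILITY_SUCC] * 2 + [DIFFICULTY_SUCC] * 2:
    dist = convolve(dist, die)

combos = sum(dist.values())  # 8**4 = 4096 raw combinations, counted exactly
```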
So, to sum up: there are some shortcomings of the MC method that your combinatorial method solves. Personally, I don't think the combinatorial method's increased complexity is worth it, but if you do, by all means code it up so we can compare results.
WJL