(EDIT: Fixed a bunch of broken quotes; stray size tags were gumming up the works.)
First: I wasn't attempting to address every point you made because I ran up against the forum software's limit on the number of quotes I'm allowed to use. Yes, you're right (e.g.) that I assumed when you said "7 TIE swarm" you meant a Howlrunner swarm, as well as in a couple of other places.
Now, on to the rest:
The world is analog, not digital. Another entire category of options is: the quantitative methods can be used as a baseline which then requires more detailed analysis to consider the specifics of the ship in question. Ultimately, the non-statistical factors can be mapped into effectively changing the attack and defense values. This is neither post-hoc nor arbitrary, and demonizing it as such is really not an effective debate method.
There's no demonizing, and I wasn't feeling particularly heated (see the note on fraud for why this is no longer true). I came back to these boards because you sound hyperbolic at best, and because people appear to be listening to you.
The central thrust of my posts is that we don't know anything about the Defender except its stat line, base cost, actions, and upgrades; you've concluded, based on not just that extremely limited information but a subset of it, that the Defender is badly overcosted and will be essentially useless from a competitive standpoint.
As I've said multiple times, apparently not as clearly as I'd hoped: your reasoning is based on catastrophically incomplete data and a model that does a relatively poor job of predicting empirical performance. You've admitted that the data is incomplete and that the model's results require adjustment, but somehow without compromising your faith in your basic conclusion. I don't understand that, and it's why I'm still bothering to discuss the issue.
I will reiterate my challenge:
Since you seem so adamant that Lanchester's Square Law is useless, I challenge you (or anyone!) to find even a single example in the above list for wave 1-3 ships where the predicted balanced point cost (after minor adjustments for dial, upgrades, etc) substantially differs from the community's general consensus.
Was my response unclear? The fact that your model requires substantial post-hoc adjustment to every ship except your baseline to bring it into alignment with reality makes it a bad model, and (again) I don't understand why you're maintaining that it's okay.
The option you've chosen is to pretend that the FoM/Lanchester model is still appropriate despite the fact that it gets every single ship wrong. You can find post-hoc rationalizations for every costing "error", but the fact that you have to do so should worry you.
But I'll bite on your challenge (again): please explain why the perceived error in costing the A-wing is the same as the error with the Interceptor, despite the fact that the community as a whole would agree that the Interceptor is an effective ship but the A-wing is not. Please explain why the Y-wing's access to the turret plus access to target lock is worth 2.5 points, while the X-wing's access to target lock alone is worth 2 points. Please explain why FoM ranks the TIE Bomber so highly, when most players see it as a fairly ineffective dogfighter.
And you still haven't really addressed the Lambda and Firespray, which are respectively at the high and low extremes of the list (FoM 107 and 59) in sharp contrast to their levels of acceptance in the competitive community.
A ranked list without numbers isn't particularly informative, and you presumably used the same flawed math to generate the numbers for this ranking, since it disagrees with my above list that is correctly ranked by baseline point efficiency. So let's use the numbers that are derived correctly, and stop propagating incorrect information, OK?
You can find a link to my derivation of Figures of Merit here. The only difference between this and what you've done is that I used fractional numbers for every ship--the FoM is based in each case on the fractional number of ships that will fit in a 120-point list, so I can skip the annoying step of (e.g.) having to compare 126 points of X-wings with 120 points of TIE Fighters.
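To make the fractional-ship step concrete, here's a quick sketch. It uses the published base costs (12 points for a TIE Fighter, 21 points for an X-wing); the helper name is just for illustration, not anything from my actual spreadsheet:

```python
def fractional_squadron(ship_cost, squad_points=120):
    """Fractional number of ships of a given base cost that fit in a
    120-point list, so there's no need to round up to (e.g.) 126
    points of X-wings to get a whole-number comparison."""
    return squad_points / ship_cost

print(fractional_squadron(12))  # TIE Fighter: 10.0 ships exactly
print(fractional_squadron(21))  # X-wing: ~5.71 ships
```

The fractional counts are what feed the FoM calculation, which is why the two lists can be compared at identical total squad points.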
All interesting, and all completely irrelevant to the point here. OK, yes, that's important, but not here on FFG Forums. I don't want to get drawn into a chest-thumping exercise about whose field is more important, and how that translates into having more insightful comments. Just the raw facts, please.
It wasn't chest thumping; it was an attempt to explain, in a collegial fashion, where I'm coming from--why my tolerance for error might be substantially smaller than yours.
I rarely even lurk on the BGG forums, and certainly was not aware that I was being approvingly quoted there. How, then, could I be using that as "authority"? The only authority that I have presented in this thread are the raw facts and how well the historical data lines up with the predicted values from the model. Which, again, I invite you to critique.
When people say smart-sounding, sciencey things, lots of other people often listen uncritically--again, that's the authority I mean. And that's fine, as long as the smart-sounding sciencey person is right. At the risk of verging on the political, this is how anti-vaccination crackpots and "Intelligent Design" propagandists get a foothold: their target audience doesn't know enough to spot the gaps.
But if someone says "If humans evolved from monkeys, why are there still monkeys?" and others are inclined to swallow it whole, you have two choices: you can spend a few hours attempting to explain the modern definition of species as primarily a statistical and behavioral phenomenon, or you can simply say "If you were born from your parents, why do you still have parents?" and let your audience work it out from there. It's not really true that parents and children have the same relationship as related species, but it's illustrative and meaningful even though it's not correct in the most technical sense.
I failed to account for the fact that your Figure of Merit is an exponential scale--not because I didn't understand it, but because I was pitching my conversation toward an audience that for the most part wouldn't understand it.
Up to this point, everything has been a somewhat understandable difference of opinion, but this is a gigantic red flag and I am going to draw a line in the sand here.
When I sat down to apply your methodology in a way that demonstrated its fundamental mismatch with the empirical data--that is, in an attempt to demonstrate exactly how often the "ballpark" estimates you're discussing are off--it honestly didn't occur to me initially to treat it as a nonlinear scale. That is, given just a list of Figures of Merit, the most straightforward approach is to treat a ship with a 66 FoM as being 66% as efficient/effective as a ship with 100 FoM.
It then occurred to me as I was previewing my post that perhaps I should have treated it as an exponential scale, meaning a ship with a 66 FoM is sqrt(66/100) = 81% as efficient. But I really didn't want to get into the distinction between linear scales and exponential ones, I didn't want to get sidetracked explaining why 66% is really 81%, and honestly I had chores to do and didn't want to go back and recalculate everything using the square root of the normalized FoM. So I went with the more straightforward version--while explaining exactly what I'd done, clearly enough that you understood it perfectly.
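For anyone following along, the entire difference between the two readings is one square root. Here's the arithmetic from the paragraph above, spelled out (the 66-FoM ship is just the example value I used earlier):

```python
import math

fom, baseline = 66.0, 100.0

# Linear (naive) reading: a 66 FoM means the ship is 66% as
# efficient/effective as the 100-FoM baseline.
linear_efficiency = fom / baseline

# Square-law reading: under Lanchester's Square Law, fighting strength
# scales with the square of effective numbers, so per-ship efficiency
# is the square root of the normalized FoM.
square_law_efficiency = math.sqrt(fom / baseline)

print(round(linear_efficiency, 2))      # 0.66
print(round(square_law_efficiency, 2))  # 0.81
```

I used the first line; the second is what the exponential scale actually implies.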
And then, instead of actually addressing the fact that the FoM model spits out incorrect values for every single ship except the TIE Fighter, you decided to focus on the fact that the wrong values I got out of the model weren't the same as the wrong values you got out of it.
So, it's a fair cop; mea culpa. I screwed up when (admittedly in the name of convenience and ease of explanation) I approximated FoM as a linear scale rather than an exponential one. I've now admitted--twice!--that I had the wrong wrong values, while you have the right wrong values.
I have 2 very simple parting thoughts:
- You used a method that you knew was wrong, and therefore would yield incorrect results.
- You did this because you didn't think that the community members were intelligent enough to understand the correct method, so you wanted to give them "something" to chew on, even if it was clearly wrong.
- In this case the misinformation conveniently made it look like Lanchester's Square Law was wrong, which you have been trying (unsuccessfully) to disprove this whole time. Unfortunately I called you out on it.
- You never, EVER, falsify data or intentionally provide misleading information. I am appalled that someone who is a PhD student / candidate would even think of doing this. I have zero tolerance for falsifying data and plagiarism.
- I have enough faith in the intelligence and goodwill of the community, so that even if not everyone can follow all the math, they still deserve nothing but the best and most unbiased information possible.
I have two parting thoughts, too.
First, Lanchester's Square Law is wrong--or, more precisely, the application you're making here is inappropriate. You used a well-known (if somewhat controversial) theoretical relationship between force ratios and casualties to derive a "figure of merit" that has nothing whatsoever to do with force ratios or casualties, and used it to model relative effectiveness for these ships despite the fact that it simply doesn't do a good job of actually explaining ship costs.
Second, I've been trying to keep this on the level, but now you've accused me of fraud. I've been completely open about the issue you think makes me a dishonest actor, to the point of explaining my methodology so that anyone could check my work. You can explain why approximating FoM as a linear scale isn't particularly defensible on technical grounds, and I'll even agree with you in hindsight, but it's a drastic and unsupportable exaggeration to claim that I deliberately misled people because I thought they were too stupid to tell.
But now that you've made it personal, let's go back to the first parting thought: you're standing behind a model that fails to model every available data point. Furthermore, you claim that it's really okay because it reflects conclusions you like, after you adjust the results to reflect those conclusions, and then you use it to extrapolate--without performing the same kind of post-hoc adjustments. So whenever you're done spinning the fact that your model requires post-hoc adjustments to match any data point, please explain to me how what you've done here is anything short of malpractice.
Edited by Vorpal Sword, 17 February 2014 - 12:51 AM.