...
Yes. The idea is very cute, but if the wiki contains information that is mostly false, untested, and/or unimplemented, features implemented based on it are of questionable value. Really, unless you are role-playing, there has to be some testable outcome. Role-playing is nice, but if the tool is telling me the wrong guy should be my miner, that's not helpful, and mining is one of the few things it's possible to really test well. I bet there is a direct relation between mining speed and agility, and what else is there to mining except that? Maybe one guy might sleep more than the other guy, but if agility dominates, it dominates. And it's possible to find this out.
I'll do more research.
On this same page here
http://www.bay12forums.com/smf/index.php?topic=66525.msg3124821#msg3124821
it's stated that Toady specified the weights.
Ah, gotcha. I got so caught up in my excitement about the idea that I overlooked that roles just select which skills, traits, and attributes to display the values for. I'm not sure how the weights work, as I've not had a chance to mess with them yet, but they sound like what I'm looking for. Getting 34.06 sorted and going to test them with a new fort now.
I spoke with Splinterz; he's going to put how the formula is calculated into the documentation in the next release. For now... I'll specify it right here:
The formula was originally based on weighted means, which I felt was limited: each weight can contribute at most (weight / (sum of weights)) of the overall score. That felt like a limit because some attributes (at the time I was merely working with attributes) might have a value far above, or below, the mean, and should be able to contribute more to the overall % when they do.
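To make that limit concrete, here's a tiny sketch in Python (with made-up attribute percentages and weights, not actual Dwarf Therapist code): with weights 3, 2, 1, the first value can never account for more than 3/6 of the result, no matter how extreme it is.

[code]
# Minimal sketch of the original weighted-mean approach.
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# With weights 3, 2, 1 the first attribute can never contribute more
# than 3/6 = 50% of the final score, however far it sits from the mean.
strength_pct, agility_pct, toughness_pct = 0.95, 0.40, 0.10
print(weighted_mean([strength_pct, agility_pct, toughness_pct], [3, 2, 1]))
[/code]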
I did some testing in Excel with different possibilities and came up with a new formula, bounced it back and forth with Splinterz, who suggested a sum of distribution curves (an idea I had as well, but he implemented it first, albeit not as well as the current implementation).
The current implementation is:
All lists of attributes, skills, and traits involved with a role are converted to their respective z-scores (for attributes: (attribute value - attribute mean) / standard deviation). This standardizes the data: it's similar in concept to converting the values to percentages, except each value becomes a positive or negative number representing its position above or below the mean. The importance of this step is to ensure every attribute, skill, and trait is on the same scale before "weighting", i.e. factoring, them. Then each list is factored by its weight, so each standard deviation becomes equal to the weight value (z-scores have a natural standard deviation of 1, i.e. the same-scale concept mentioned above). I verified in scalc/Excel that a weighted z-score's new standard deviation is indeed the same as the weight.
Then, within each category (attributes, traits, skills), the respective z-scores are summed together, and the standard deviations for each category are combined in quadrature (i.e. sum the squares of the deviations, then take the square root of that sum; verified via math.reddit.com and other statistics websites).
So now we have three new lists: the summed z-scores of attributes, traits, and skills, each with its new respective standard deviation.
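If it helps, here's a rough sketch of that within-category step in Python, with made-up values for two attributes across four dwarves (the real code is in Dwarf Therapist; this is just the idea):

[code]
import math

def zscores(values):
    mean = sum(values) / len(values)
    sdev = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sdev for v in values]

# One column per attribute: that attribute's value for every dwarf,
# paired with the weight it gets in this role (all numbers made up).
attribute_columns = {
    "strength": ([1250, 900, 1100, 1450], 1.0),
    "agility":  ([800, 1200, 950, 700],   0.5),
}

# Convert each column to z-scores and factor it by its weight.
weighted = [[z * w for z in zscores(values)]
            for values, w in attribute_columns.values()]

# Sum the weighted z-scores per dwarf to get the category's summed list.
category_sums = [sum(col[i] for col in weighted) for i in range(len(weighted[0]))]

# A weighted z-score column has a standard deviation equal to its weight,
# so the category's deviation is the weights combined in quadrature.
category_sdev = math.sqrt(sum(w ** 2 for _, w in attribute_columns.values()))
print(category_sums, category_sdev)
[/code]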
In v8, I failed to re-scale each category back to z-scores (summing z-scores changes the scale from a standard deviation of 1 to whatever it is now, so the sums need to be re-scaled to a standard deviation of 1). We also failed to combine things in quadrature: I was averaging the values and taking a root mean square of the standard deviations, which was not correct. So now we do that conversion again (each category's summed list of attributes, skills, and traits is converted to its respective z-scores), then the categories are weighted, and so are their standard deviations. Then each category's z-scores are added together, and the (weighted/factored) standard deviations are again combined in quadrature, giving a final list of summed z-scores and a new standard deviation.
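A similar sketch of that second pass, again in Python, with illustrative per-dwarf sums and placeholder category weights (not the values the tool actually uses):

[code]
import math

def zscores(values):
    mean = sum(values) / len(values)
    sdev = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sdev for v in values]

# Per-dwarf summed z-scores from the previous step, paired with a
# category weight (both the sums and the weights are illustrative).
categories = {
    "attributes": ([1.8, -0.4, 0.9, -2.3], 0.25),
    "skills":     ([2.1, 0.3, -1.2, -1.2], 1.00),
    "traits":     ([0.5, -0.5, 0.2, -0.2], 0.20),
}

# Re-standardize each category's summed list, then factor it by its weight.
weighted = {name: [z * w for z in zscores(vals)]
            for name, (vals, w) in categories.items()}

# Add the weighted category z-scores together per dwarf...
n_dwarves = len(next(iter(weighted.values())))
raw_scores = [sum(cat[i] for cat in weighted.values()) for i in range(n_dwarves)]

# ...and combine the weighted deviations (each now equal to its category
# weight) in quadrature for the final standard deviation.
final_sdev = math.sqrt(sum(w ** 2 for _, w in categories.values()))
print(raw_scores, final_sdev)
[/code]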
This list is run through a normal cumulative distribution function, then multiplied by 1000 to get the final %.
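And the last step might look something like this in Python (the ×1000 scaling is taken straight from the description above; the raw score and deviation below are illustrative):

[code]
import math

def normal_cdf(x, mean, sdev):
    # CDF of a normal distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mean) / (sdev * math.sqrt(2.0))))

raw_score = 1.35    # one dwarf's summed, weighted z-score (illustrative)
final_sdev = 1.05   # quadrature'd deviation from the previous step (illustrative)

# The summed z-scores are centered on 0, so use mean 0 and the final
# deviation, then scale by 1000 as described above.
rating = normal_cdf(raw_score, 0.0, final_sdev) * 1000
print(round(rating, 2))
[/code]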