I didn't want to leave anyone hanging on some R version of things. My damn elbow got infected and laid me out, AND I lost my ZFS partition (NAS) due to a fire scare when I disconnected everything in a hurry, but I exported some CSVs and imported them into R to try some things...
So here is my analysis.
Stratified MAD might work better for some scenarios (like dice rolls), but it's still a bad choice for skills.
Using a basic median in place of the mean to derive a standard-deviation-style spread is more than enough for attributes. When I did that I got a mean near .5 and a median of exactly .5.
The only reason the initial method does that weird thing of merging a min/max with a mean and median is that it helped achieve a .5 mean, but the mean isn't important at all for what we hope to achieve (a 50/50 split), and a median-based approach is more than adequate.
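To make the median idea concrete, here's a quick sketch (plain Python instead of R, and it's just my reading of the approach, not the exact method): center on the median, scale by a MAD-style spread, and squash through a logistic so the median lands at exactly .5.

```python
import math
import statistics as stats

def median_scale(xs):
    """Center on the median, scale by a MAD-style spread, then squash
    through a logistic so the median maps to exactly 0.5.
    (A sketch of the median-based idea, not the exact method.)"""
    med = stats.median(xs)
    # median absolute deviation, x1.4826 so it's comparable to a std dev
    mad = stats.median(abs(x - med) for x in xs) * 1.4826
    return [1 / (1 + math.exp(-(x - med) / mad)) for x in xs]
```

With roughly symmetric attribute data this gives a median of exactly .5 and a mean close to it, which matches what I saw.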
Having said all that, I don't really have a solution for skills yet, though what I need to do seems pretty straightforward:
1. count only values greater than 0,
2. derive percentages for those, divide by 2, and add .5,
3. convert the 0's to .5,
4. find the average minus .5,
5. adjust the entire set by that amount to bring the skills to a .5 mean.
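The steps above in a quick sketch (plain Python instead of R, with dummy data; treating the "percentages" as percentile ranks among the nonzero values, which is my assumption):

```python
import statistics as stats

def normalize_skills(skills):
    """Nonzero values get a percentile rank mapped into (.5, 1],
    zeros become .5, then the whole set is shifted to a .5 mean."""
    nonzero = sorted(s for s in skills if s > 0)
    n = len(nonzero)

    def pct(s):
        # percentile rank among the nonzero values, in (0, 1]
        return (nonzero.index(s) + 1) / n

    # percentile / 2 + .5 for nonzero entries, .5 for the zeros
    out = [pct(s) / 2 + 0.5 if s > 0 else 0.5 for s in skills]
    # shift the entire set so the mean comes back to .5
    shift = stats.mean(out) - 0.5
    return [v - shift for v in out]
```

One side effect worth noting: after the final shift, the former zeros no longer sit at exactly .5, since the whole set moves together.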
The question is what normalization method to use for skills. I believe right now it's using that min/max-around-mean/median merged with an empirical cumulative distribution function (ECDF).
That's where I'm kind of stuck right now. I looked at kernel density estimates (KDE) but am not done. I still feel like crap atm.
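In case it helps anyone poking at the same thing, here's a bare-bones Gaussian KDE with a Silverman-style rule-of-thumb bandwidth (plain Python; R's `density()` does this and much more, so this is just to show the mechanics):

```python
import math
import statistics as stats

def kde(xs, grid):
    """Gaussian kernel density estimate evaluated over grid,
    with a Silverman-style rule-of-thumb bandwidth."""
    n = len(xs)
    h = 1.06 * stats.stdev(xs) * n ** (-1 / 5)  # rule-of-thumb bandwidth

    def gauss(u):
        # standard normal kernel
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    return [sum(gauss((g - x) / h) for x in xs) / (n * h) for g in grid]
```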
I'm looking at this atm (`dsn`, the skew-normal density from the sn package):
https://rdrr.io/cran/sn/man/dsn.html
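For anyone following along, the skew-normal density itself is simple enough to check by hand: f(x) = (2/ω)·φ(z)·Φ(α·z) with z = (x−ξ)/ω, where φ and Φ are the standard normal pdf and cdf. A plain-Python version (parameter names follow the dsn docs; this is my own transcription of the formula, not the sn code):

```python
import math

def dsn(x, xi=0.0, omega=1.0, alpha=0.0):
    """Skew-normal density: (2/omega) * phi(z) * Phi(alpha * z),
    with z = (x - xi)/omega. Parameter names mirror sn::dsn."""
    z = (x - xi) / omega
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(alpha * z / math.sqrt(2)))    # standard normal cdf
    return 2 / omega * phi * Phi
```

With alpha = 0 this reduces to the ordinary normal density, and positive alpha skews the mass to the right, which is why I'm eyeing it for the skill distributions.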