Sheet:
https://mega.nz/#!AEBXTD7L!xUHzf4Xhp_Xt-kUlh_4oW1YXxRE1kK84BPFXzB6zD90
Video: https://www.youtube.com/watch?v=AIZe7sKGvdY

I made a spreadsheet that finds publications with the same tastes as the user, then averages their review scores into a personalized meta-score. The technical name for this is collaborative filtering. People usually listen to sources of information that have been honest or useful, because those sources usually continue to be useful; this sheet just speeds up that process. If publications rate relatively objectively, using this sheet will punish hype and selling out, and bad publications stop mattering once no one sees them. This approach should also defuse all the Metacritic, rating and belly-flop controversy.
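For the curious, here is a minimal sketch of the underlying idea in Python. The data and names are made up, and the sheet's actual fit formula (described in the usage guide below) is more elaborate; this only shows the core loop: find the publications closest to your scores, then average only their scores per game.

    # Made-up data: my scores and three publications' scores.
    my_scores = {"Game A": 9.0, "Game B": 4.0, "Game C": 7.5}
    pub_scores = {
        "Pub 1": {"Game A": 9.5, "Game B": 5.0, "Game D": 8.0},
        "Pub 2": {"Game A": 6.0, "Game B": 9.0, "Game C": 3.0},
        "Pub 3": {"Game B": 4.5, "Game C": 7.0, "Game D": 6.0},
    }

    def avg_abs_diff(mine, theirs):
        # Average absolute score difference over the games both sides rated.
        shared = [g for g in mine if g in theirs]
        if not shared:
            return None
        return sum(abs(mine[g] - theirs[g]) for g in shared) / len(shared)

    # Rank publications by how closely they match my scores, keep the closest two.
    fits = {p: avg_abs_diff(my_scores, s) for p, s in pub_scores.items()}
    chosen = sorted((p for p in fits if fits[p] is not None), key=fits.get)[:2]

    # Personalized meta-score = average of the chosen publications' scores per game.
    pmc = {}
    for pub in chosen:
        for game, score in pub_scores[pub].items():
            pmc.setdefault(game, []).append(score)
    pmc = {g: sum(v) / len(v) for g, v in pmc.items()}
    print(chosen, pmc)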
In a more general sense, the sheet is meant to teach OpenOffice/LibreOffice Calc, show what can be accomplished with a CPU and an internet connection, and demonstrate how to filter through the flood of misinformation to find productive truth. That last one is critical to democracy in today's society. The PMC sheet can also be used for a bunch of statistics, like which games are most divisive (highest average deviation, column N) or which publications agree most with averaged user reviews (copy the 'user reviews' column into the "by me" input column AD).
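As a hedged illustration of the divisiveness statistic, assuming "average deviation" in column N means the mean absolute deviation of a game's scores from its own average (data made up):

    # Made-up review scores per game across several publications.
    scores = {
        "Game A": [9.0, 9.5, 8.5, 9.0],
        "Game B": [3.0, 9.0, 5.0, 8.5],
    }

    def avg_deviation(values):
        # Mean absolute deviation of the scores from the game's own average.
        mean = sum(values) / len(values)
        return sum(abs(v - mean) for v in values) / len(values)

    # Highest average deviation first = most divisive.
    for game in sorted(scores, key=lambda g: -avg_deviation(scores[g])):
        print(game, round(avg_deviation(scores[game]), 2))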
It works. I found and subscribed to PCGamer because of this sheet. The metrics almost always say the new PMC fits my scores much better than the MC does. You will know everything works when 'Emphasis' marks games you forgot to rate. Unfortunately, the rankings/recommendations will only move games around slightly, because of how much "objective", commonly agreed-upon quality dominates the scores.
Help with continued development: the genres are somewhat incorrect. A real statistician going over it all would be nice (those numbers, what do they REALLY mean?), and a real OOo Calc pro too: some FIND-like function that outputs arrays of cell references and can be used inside an array calculation, or a RANKIF. I need a way to hash(?) new data into the already existing sort and categories. The average user can contribute clearly over- or under-rated games, more data in the same game-name format, or help making it all look nicer. Maybe reduce the difference between platforms. Maybe add more independent publications, or Shows instead of Games. I live off comments and suggestions.
Blue is what you edit and yellow is the more important information. You need the latest LibreOffice to run the actual sheet. When you are done entering or tweaking, press F9 to run the calculations. Use Ctrl+F to find the 1s that mark the rows or columns picked by the choosing algorithm. Some data comes from http://evilasahobby.com/2013/09/27/metacritic-the-data/; the rest was gathered with Portia and import.io from Metacritic.com. Thanks to the OOo Calc forum and Notepad++. Similar sites include opencritic.com and the defunct everyonesacritic; for user-based recommendations try Listal, TasteKid or Criticker.
USAGE GUIDE
First you must input your scores into column AD, using one of three approaches:
- Taste only: enter just the games you like, with high scores for them.
- Objectivity: also include games you found overrated or just hated, and maybe borrow from the common over/under-rated list or from the differences between user-review scores and publication averages.
- Taste: the point of scores is to say which games are better than others, so select a genre you know very well, sort its games into categories first, then rank them, then rate them all.
You can rate games with one of two goals: what the games objectively deserve (objectivity) or what you feel they deserve (taste).
Don't rate a game higher than the maximum or lower than the minimum. If you are using Base Difference, rating more extremely than BD will penalize a publication that gave the game a score close to your own more than publications that avoided the game entirely. Check 'Friction' for games your chosen publications think you misrated.
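One way to read that warning, as a worked example with made-up numbers (a 1-10 scale, and BD charged per game to publications that skipped it, as described below):

    base_difference = 1.0      # penalty charged to publications that skipped the game
    my_rating = 12.0           # rated 2 points beyond the maximum, i.e. more extreme than BD

    pub_a_score = 10.0         # reviewed the game and agrees with me as far as the scale allows
    # Pub B never reviewed the game at all.

    penalty_a = abs(my_rating - pub_a_score)   # 2.0 -- charged to the agreeing publication
    penalty_b = base_difference                # 1.0 -- charged to the absent publication
    print(penalty_a > penalty_b)               # True: the close publication is punished harder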
Set the publication-choosing algorithm: AE5 sets the number of publications to use. Adjust it according to the metrics, and use more publications if your scoring goal is objectivity. Using few publications makes the recommendations a lot more exciting, but Emphasis becomes weird.
Three terms make up the polynomial that decides each publication's final fit score: Average Difference × percentage of intersections + Base Difference × (100 - percentage of intersections) - Correlation × 1/(1 - sqrt(number of intersections - 4)). Correlation (AE8) does not really work without many game scores in common.
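Transcribed directly into code (the names are mine; lower result = better fit):

    from math import sqrt

    def publication_fit(avg_difference, base_difference, correlation,
                        pct_intersections, n_intersections):
        # Lower result = better fit. pct_intersections is the percentage (0-100) of
        # the user's rated games that this publication also reviewed.
        # Note: as written, the correlation term is only defined when
        # n_intersections >= 4 and != 5.
        return (avg_difference * pct_intersections
                + base_difference * (100 - pct_intersections)
                - correlation * 1 / (1 - sqrt(n_intersections - 4)))

    # Example with made-up numbers:
    print(publication_fit(avg_difference=1.2, base_difference=2.0, correlation=0.6,
                          pct_intersections=40, n_intersections=20))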
Base Difference should start around the average-difference calculations in AB11:AE13. AE9 sets the minimal percentage of the games the user scored that a chosen publication must have reviewed, aka Intersections. If both Base Difference and Intersections are set too low, the user's input scores get all the Emphasis and the recommendations will show that this PMC is useless.
Each publication and user uses a unique scoring system, with different distributions and different values such as objectivity or, for example, educational value. The only thing that reliably means the same thing across sources is how each entertainment object ranks against other objects in a similar category, such as movies within the horror genre. There are a bunch of metrics describing how good the new PMC is compared to the old MC. The cell addresses given here are for PMC; the MC equivalent is usually to the left. Most of them (AB11:AE13, AC7-8) show how much closer the new PMC is to the user's ratings. AC9, AC10 and AC3 describe the volume of reviews. Three cells in column S describe how much the chosen publications agree with each other ('cohesion').
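As a rough sketch of what those comparison metrics measure (my own simplified versions, not the exact cell formulas):

    from itertools import combinations

    def closeness_to_user(user_scores, meta_scores):
        # Average absolute difference between the user's scores and a meta-score
        # column (MC or PMC), over the games both contain; lower = closer.
        shared = [g for g in user_scores if g in meta_scores]
        return sum(abs(user_scores[g] - meta_scores[g]) for g in shared) / len(shared)

    def cohesion(chosen_pub_scores):
        # How much the chosen publications agree with each other: average absolute
        # difference between every pair of publications on their shared games.
        diffs = []
        for a, b in combinations(chosen_pub_scores.values(), 2):
            diffs += [abs(a[g] - b[g]) for g in a if g in b]
        return sum(diffs) / len(diffs) if diffs else None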
All the blue cells to the left modify the game-choosing algorithms of columns T & Z. 'Emphasis' = (MC score for the game - PMC score for the game) × number of chosen publications rating the game. It is supposed to represent how much PMC disagrees with MC. 'Friction' is the same thing, but between you and PMC. Input in W10 the number of most-emphasized games to be selected in column T. Both columns can be set to require the other in W9 & Y11. Most requirements are disabled by setting their cells to 0 or leaving them empty. The right column can help you decide which games to review.
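In code, those two column formulas look like this (names are mine, numbers made up):

    def emphasis(mc_score, pmc_score, n_chosen_reviews):
        # How strongly PMC disagrees with MC on one game, weighted by how many
        # chosen publications reviewed it.
        return (mc_score - pmc_score) * n_chosen_reviews

    def friction(user_score, pmc_score, n_chosen_reviews):
        # Same idea, but between the user's own score and PMC.
        return (user_score - pmc_score) * n_chosen_reviews

    # Example: a game Metacritic likes far more than my chosen publications do.
    print(emphasis(mc_score=9.0, pmc_score=6.5, n_chosen_reviews=4))   # prints 10.0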