Most / Least Consistent Publishers?

March 4, 2009 — 2 Comments

(1) OK, before you even begin to read this, don’t shoot the messenger!
(2) You need to know that this is NOT my data, but I find it very interesting, so that’s why I’m blogging it. (I’ve not seen this data presented this way before.)
(3) This is based on Metacritic data, and let’s just say many of my friends are having a VERY heavy discussion (right now) on the validity of the Metacritic data. (So this is incredibly timely and will add fuel to that fire for certain!)
(4) Where did I find this data? (From a Metacritic expert.)
(5) Is the data real? You decide. Here’s how I got access: CLICK HERE – The password is: bluesbrothers
The owner of GameQuarry calculates which companies are best at predicting hits: which are good at it, and which historically are not so good at it.
The results surprised the heck out of me. What about you?
[Image: Consistent Publishers chart]
NOTE: This is not public info, but GameQuarry gave me permission to blog.

2 responses to Most / Least Consistent Publishers?

  1. 

    Hello Mr. Perry,
    I find your post quite interesting. I am not a messenger assassin by any means, but I would hope you only use this as a general indicator at best.
    As a self-proclaimed hardcore gamer and an analyst, I would strongly challenge this data analysis, especially after reviewing the videos presented on the website.
    The ranking of these publishers seems very odd to me. With no disrespect to Game Quarry or Tim, I would say his findings are fundamentally flawed. If you’re looking to validate this type of analysis (especially if your company is contracting someone to do this), I would assume you would have multiple independent parties involved.
    As gamers, we can understand why Blizzard would be near the top of that list, and also as a gamer I am confused why the results wouldn’t be rolled up into Vivendi or Activision. I could spend more time on this, but it’s not an argument so much as an open discussion.
    You could randomly reshuffle the top 10 without any statistical analysis and the general consensus would probably still be agreement. We are speaking in generalities, but with data that I assume is fundamentally flawed, there would be little argument.
    I would like to comment on the reasons behind ranking publishers. The bottom feeders have more data points to rank them with; the top guns have far fewer. I’ll pick out only a couple whom I would group, speaking again in generalities.
    If we review the people who actually make the games for Microsoft, Nintendo, Ubisoft, Activision, and Konami, we can see common trends. Within those very large publishing houses you have an extremely large variance between a “quality” developer for a first party (Nintendo, Microsoft) and a quality developer who has less of a commitment to the platform itself and more of a commitment to shareholders and the business of making money (this assumes a lot, but I wish for you to draw your own conclusions). You have been in the industry for a very long time (I’ve been following you since I was a child) and can confirm these findings.
    When you put something like “Developed by Miyamoto/Blizzard/Bungie” on a box, as a customer this means more to you than “published by Activision”. Talk to your friends, ask them their top 5 games or what they’re currently playing, and you’ll find more discussion of Rock Band/Guitar Hero/Call of Duty 4 and less discussion of Super Mario Galaxy/Halo. Walmart shoppers and publishing houses shouldn’t be the navigators for our industry.
    Lastly, we hold our developers and publishers to very high standards. We rank your products on a daily basis. Metacritic is a critic for the critics, but not in the same way. What makes a review quality, or accurate, has a very high deviation as well. Critics carry a great deal of bias. I’m not speaking about a particular platform, but even about genres themselves. Can we ensure the reviewer who gave Grand Theft Auto a 6/10 was a fan of sandbox-style games? No we can’t, but we work in a world that needs to draw comparisons between apples and oranges.
    Stardock has a great concept with the gamer’s bill of rights. We do not have the same system for our critics. The conclusions of Tim’s analysis were based on the conclusions of non-standardized critics acting on behalf of the customers, not on the customers themselves. As discussed above, the results of the data let us speak in generalities only. They should not be used to make sound business decisions.
    Companies spend a lot of money to ensure customer satisfaction. As a customer of the video game industry, I do not wish to be represented by the critics, and this is why it’s my duty as a customer to do my own research. I would also like to point out that only as a beta tester have I been asked what I think are the correct questions about a video game. The interesting thing is that game developers can do this as well, especially with tools like PSN, XBL, and other digital distribution platforms. At the end of every demo we have “Press (A) to purchase, Press (B) to exit”, but there is another question that needs to be asked: “Press (X) to give feedback on our product”. Gather the data from your customers, not your critics.
    I appreciate and thank Tim for the work he has done, and I will use his results in the future, but my reasons are much different than his 🙂
    Sincerely,
    William Decker

  2. 

    William’s points are valid in my opinion, and he touches on a general area for which I’m an advocate: specifically, context. As I noted in the report, it does not reflect trends, genre, platform success, or a million other factors. Per an article I wrote on GameQuarry.com, there are currently no scoring indexes that can accommodate everyone’s needs:
    http://www.gamequarry.com/dablog/2009/02/15/an-alternative-scoring-index-for-the-videogame-industry/
    If our client had requested a breakdown of Game-Centric sites vs. Consumer, that would have had a twist as well. Context is everything here, and I urge everyone who views the report to keep that in mind.
    As for rolling up Vivendi, Activision, etc.: that was in fact done. The examples I used were just examples, not a comprehensive list of the publisher normalization for the 4,000+ titles. I have no set routine or methodology for normalization, as those calls are entirely up to the client.
    With all that said, I disagree that the method and data are fundamentally flawed considering the scope and objective of the client. If you took a room full of kids and had them place their report cards for the last eight years in a pile, and you wanted to know who was most consistent, you’d use a scorecard much like the one we applied. But the results wouldn’t show you which kids might have been excelling in math, science, or phys ed.
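    To make that report-card analogy concrete, here is a rough sketch of one way a consistency ranking like this could be computed. To be clear, this is an illustration under assumptions, not the exact scoring applied in the report, and the publisher names and scores in it are made up.

```python
# Illustrative sketch only: rank publishers by how tightly their review
# scores cluster. Smaller spread = more consistent. The names, scores, and
# the choice of population standard deviation as the metric are assumptions,
# not the report's actual methodology.
from collections import defaultdict
from statistics import mean, pstdev

scores = [
    ("Publisher A", 92), ("Publisher A", 88), ("Publisher A", 90),
    ("Publisher B", 95), ("Publisher B", 60), ("Publisher B", 78),
]

by_publisher = defaultdict(list)
for publisher, score in scores:
    by_publisher[publisher].append(score)

# Sort by spread (ascending); ties could be broken by mean score.
for pub, s in sorted(by_publisher.items(), key=lambda kv: pstdev(kv[1])):
    print(f"{pub}: mean={mean(s):.1f}, spread={pstdev(s):.1f}")
```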
    Metacritic data is as good as anything we have available. The issue with MC data is largely the influence of the weighting system. It was for that reason that we re-averaged non-weighted scores for over 4,000 games.
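    On the mechanics of that re-averaging: Metacritic’s published score applies per-outlet weighting, while a non-weighted re-average is simply the arithmetic mean of the individual critic scores. The snippet below shows only that idea; the outlet names and scores are placeholders, not real data.

```python
# Unweighted re-average: a plain arithmetic mean of individual critic scores,
# ignoring any per-outlet weighting. Outlet names and scores are placeholders.
def unweighted_average(critic_scores):
    scores = list(critic_scores)
    return sum(scores) / len(scores)

reviews = {"Outlet 1": 90, "Outlet 2": 70, "Outlet 3": 85}
print(round(unweighted_average(reviews.values()), 1))  # -> 81.7
```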
    I thank William as his response hits on key items here that everyone viewing the report should consider.
    Thanks,
    Tim
