Game Review 102
Review Systems
By Chris Picone, 01 December 2022
There are a thousand ways to approach your review. Formal or informal? Chatty or concise? Template, structured, or freestyle? Will you use a scoring or recommendation system? What sort of review will you write – a detailed analysis, just a recommendation, or something in between? There’s no right or wrong here – this is a different strokes for different folks situation.
What kind of word count are you aiming for? Keep your audience in mind; roleplayers normally don’t mind reading but shooter fans will probably want something shorter.
The system I’ve been using for the last few years is a fairly formal style of review, around 1,000-1,500 words and broken into the following sections: introduction, story, graphics, sound, gameplay, fun factor / replayability, and verdict. Each section theoretically gets a paragraph, but in reality I often find myself leaving only a sentence or two in the graphics and sound sections, which frees up room to talk about the others. There’s nothing really wrong with that – it’s due to the type of games I normally review – but it does leave me thinking I probably need to adjust my structure.
I’m not going to spend a lot of time talking about writing the review itself because, funnily enough, that’s really more of a writing issue than a reviewing issue, and there are already plenty of examples and templates available all over the net. Whatever you choose, consistency is key.
No, this article is specifically about review systems. Aside from trying to help out new reviewers, I have another agenda here: I think the popular contemporary method of reviewing is outdated. We’ve been doing “the same thing” for so long, and it “works,” so we just keep doing it. But I’d argue that the way our audiences interact with our reviews has changed, and it’s time we evolved to meet the changing meta. I’m going to delve into that and suggest a few alternatives.
So, let’s check out some of the most common systems out there!
As usual, I’ve created headings so you can easily skim through the article to the bits you’re interested in.
Scoring systems – “Overall Score”
These are popular – the “norm” even – and I certainly understand the appeal, but I personally don’t like them for a number of reasons. First, many reviewers don’t post an explanation for their scores, or the explanations are posted separately from the reviews so no one sees them anyway. As such, the numbers are essentially meaningless. Second, even when explained, the scores are often misunderstood.
By way of explanation, I’m going to refer to Gideon’s Gaming’s (old) score system. To be clear, I’m using Gideon as an example because he actually did a really good job clarifying what his scores meant. See below:
Gideon’s system was actually a really good system and I personally found it very useful. So why don’t I use it? Human psychology. To explain: I recently observed an interaction between two reviewers regarding an eXplorminate review of Vargus. The gist of the review was that it’s a good game if you like lots of reading but definitely not for everyone, and it was accompanied by a score of 6. One of the reviewers took umbrage not with the review itself but with the score: to him, a 6 was “a clear don’t buy,” which, I believe, is a pretty common perception. Is that right, though? That’s not the gist I got from the review in question – in fact, definitely not. Here’s eXplorminate’s review policy:
A score of 6, according to eXplorminate, is a game that has some strengths and that you might actually enjoy if you’re after that particular game style. The context: eXplorminate focuses on 4X games, and Vargus isn’t strictly a 4X game, so people who are open to “adjacent” games will probably enjoy it. You’ll note eXplorminate’s 6 is also fairly similar to Gideon’s 6. But how many of you would have read that “6” and come to that conclusion? Very few, I’d wager.
So where does this perception come from? As Gideon explains, you’re only likely to come across scores of 6-10 on most sites. This is true because, as a rule, reviewing games that the reviewer isn’t likely to enjoy is a waste of everyone’s time. However, this creates an unconscious perception that, since the scores go from 6-10, 8 is middle-of-the-range; therefore, anything above that is good, and anything below that is bad. Gideon – and eXplorminate – already explained that that’s not the case but our reptile brains can be pretty stubborn.
Amusingly, Gideon has updated his review policy since I started writing this article. He’s now removed the overall score system he had been using for years. His reasoning: “Putting a numerical score at the end of the review is a futile attempt to condense around 2000 words of very specific and important context into a vague unrelated number. That number has different meanings between outlets and the people reading or watching a review. My score of 7 might mean different things from other reviewers’ 7s, and three different people looking at our reviews will have their own interpretations of what it means. At best, it makes the score itself useless. At worst, it makes the score misleading.”
That’s my take on it too, and why I don’t use them. Overall scores were an effective tool, but reader habits have changed and maybe we should think about changing too; I think they’re outdated and we can do better. However, they’re still the most popular method out there, so if you are considering going with this method, I urge you to include an explanation of your system. Put it somewhere really easy to find – perhaps even with each review, so those unfamiliar with your site can’t miss it. If you’re ready to look outside the box, here are some other options.
Scoring systems – “Five stars”
Essentially still just an “overall score” system, of course. I’m including this one separately because of the effect recently created by reviews on Amazon, Google Play, etc., whereby five-star reviews have inexplicably become the default score. Here’s what’s wrong with that:
I recently faced the same issue while trying to increase the number of my reviews on itch.io. Keeping in mind my review policy (games I like are reviewed, games that aren’t for me are handled through private feedback), here’s my rating distribution on itch.io.
First, this skews the perception of my rating distribution, making it look like all my reviews are saying games are amazing. Second, when you compare individual games, it makes any game with “only” a four-star rating look like there’s something wrong with it. Thankfully, itch.io also allows room for reviewers to add an explanation for their score.
Scoring systems – Categorised
A typical categorised scoring system would rate games out of ten across a range of criteria such as graphics, sound, story, gameplay, replayability, value for money, accessibility, etc.
Although I like the simplicity of the overall score system, I prefer categorised scoring for a number of reasons. Primarily: different things matter to different people, and some criteria may be more or less important for different kinds of games. Most roleplayers and tactical gamers, for example, are far more interested in the quality of the story than they are in the graphics. But for modern first person shooters, good graphics are crucial. Platformers typically focus on gameplay and replayability, but some people also want them to look good whereas others don’t care at all. Categorised scoring lets each reader weigh up the criteria that matter to them.
Once again, I don’t use this method of scoring either. My main issue with it is that there’s no way for the scores to be transferable over time. A game like Final Fantasy VII, for example, had absolutely incredible graphics when it came out, but they aged awfully. If I were to go back to 1997, I would expect to see FF7 with a score of 10 for graphics but a Sega game with maybe a 3 or 4. Weirdly, however, Sega Megadrive games look exactly the same now as I remember them all those years ago, and are still very playable today. FF7, on the other hand, did not age well. Even for games of the same age, it’s difficult to create a fair comparison. How would you compare the graphics in a top-down isometric game, or a game with deliberately retro or stylised artwork, against an ultra-real first person game?
Scoring systems – Aggregated
An aggregated score is essentially an overall score created by averaging the categorised scores. For example, a game with the following scores – Graphics 6, Audio 5, Gameplay 8, Story 7 – would have an aggregate score of (6+5+8+7) / 4 = 6.5.
This theoretically achieves the best of both worlds; the overall score is there for simplicity but readers can also check individual categories. The only issue is that, again, the aggregated score doesn’t reflect “weighting” (for example, our roleplayer above who is mostly concerned about the story), which means this system only really works if the reviewer is focussing on a single genre. So a potentially very good system, but still not for me.
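To make that weighting point concrete, here’s a minimal sketch (made-up scores and hypothetical weights, purely for illustration) showing how the same category scores can produce different aggregates depending on what a reader cares about:

# Illustrative only: made-up category scores and hypothetical reader weights.
scores = {"graphics": 6, "audio": 5, "gameplay": 8, "story": 7}

# Plain aggregate: every category counts equally.
plain = sum(scores.values()) / len(scores)   # (6 + 5 + 8 + 7) / 4 = 6.5

# A story-focused roleplayer might weight the categories like this instead.
weights = {"graphics": 0.1, "audio": 0.1, "gameplay": 0.3, "story": 0.5}
weighted = sum(scores[c] * weights[c] for c in scores) / sum(weights.values())

print(plain, weighted)   # 6.5 vs 7.0: same game, two different readings

Swap the weights around for a shooter fan and the number moves again, which is exactly why a single published aggregate can’t serve every reader.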
Scoring systems – Multi-user Aggregated
This is the same concept as the above, except that the overall scores come from multiple reviews rather than categories. This won’t be useful to most reviewers unless they’re part of a large organisation with multiple reviewers (probably freelancers), but I thought I would include it anyway purely for interest’s sake. Metacritic has made the interesting (but wise) choice to separate critic reviews from user reviews. The concept of multi-user aggregates is fantastic for organisations like Steam. As great as it is, though, it faces the same issues as the other review methods above – five-star normalisation, confusion, etc. – but it’s also very open to data skewing, particularly for games with a small number of reviews. Here’s an example:
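Imagine – purely hypothetical numbers here – a new indie game sitting on three glowing user ratings when one unhappy buyer comes along:

# Illustrative only: hypothetical user ratings out of 10.
early_reviews = [9, 9, 10]                      # three happy early adopters
print(sum(early_reviews) / len(early_reviews))  # ~9.33, looks like a masterpiece

# One unhappy buyer later...
all_reviews = early_reviews + [2]
print(sum(all_reviews) / len(all_reviews))      # 7.5, nearly two points gone overnight

# The same single rating barely moves a game that already has hundreds of them.
big_sample = [8] * 500 + [2]
print(sum(big_sample) / len(big_sample))        # ~7.99, barely a dent

With a big enough sample the noise washes out, but for a small indie title a single rating can make or break the aggregate.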
Recommendations
Now you’re faced with even more choices. The simplest is “recommended / not recommended.” That might work for some but it doesn’t work for me because of my policy against writing negative reviews. That’s not to say that I give everything a positive review; it’s to say that, if I don’t like a game, I don’t waste my time writing a review for it. Or, if it’s an indie game that shows promise, I’ll send through feedback privately and then check back on it after a few updates. So I need something else.
Steam’s curation system is very similar, except you have a third option: “Recommended, not recommended, informational.” That doesn’t really solve my problem, but I’ve found that because the curation reviews are so short (250 characters), they actually get read alongside the recommendation, so it’s okay.
Other examples of recommendations include: “Buy, wait for an update, buy on sale, don’t buy.”
Verdicts
These are the “paint a picture” version of recommendations and are my go-to. They cut through the actual review and deliver the “buy / don’t buy” statement but with a few extra words so you have a chance to add conditions or reasons.
Having said that, there’s verdicts and there’s verdicts. Here’s PC Gamer’s verdict on Clockwork Bird’s Silicon Dreams:
And to compare, an example of a verdict from my review of Iron Tower Studio’s Age of Decadence:
Both of these verdicts come as part of a larger article. While mine paints a much clearer picture of exactly who should buy the game and under what conditions, PC Gamer’s is much more concise and is also accompanied by a review score.
As a side note, PC Gamer does actually provide an explanation of its review system, detailing exactly what each percentile range means and even providing examples of past games that fit those scores, but to get there you have to click a tiny little bit of red text hidden on the dark background (I wonder how many people have ever clicked it?). Here’s an extract:
At the end of the day, there’s no right or wrong here. Go with whatever works best for you, for the types of games you review, and for your audience.
Links:
Gideon’s review policy:
https://gideonsgaming.com/how-i-review-games/
eXplorminate’s review policy:
https://explorminate.co/review-policy/
PC Gamer’s review policy: