By now, I'm sure you all know of the 'launch day' problems EA's SimCity has been experiencing. I put 'launch day' in quotation marks because I'll consider the real launch date to be the day everyone who's purchased the game can actually play it. I'm not here to complain about all of the problems; there's enough of that happening, and my adding to it wouldn't change anything. I'm in the same boat as most of you: I bought a game, I got excited, and I can't play it right now.
The problem I have is with Polygon's review of SimCity and the thought process behind it. Now, I visit Polygon daily. I dig their writers and enjoy the site, but I take issue with the journey of their review score over the last 24 hours. Polygon had one of the first reviews up for SimCity, giving the game a 9.5 out of a possible 10. While other sites have reviews up as well, most are running reviews in progress with no attached score. The thinking is that it's hard to assign a score when you don't know what the user experience will be at launch — mostly due to lag, server issues, connectivity, etc.
There were some journalists who even said not to trust any SimCity review; Penny Arcade said it about their own. Regardless, the score was up on Polygon and it was a 9.5. Then everyone tried to play the game. And most people couldn't. And everyone freaked out — rightfully so. Problems ranged from product keys not working, failed updates, game crashes, and the lack of pre-loading to players being unable to claim a plot, getting stuck in an infinite tutorial loop, and servers losing cities and rolling them back. Oh, and let's not forget the 30-minute queue times.
Suddenly, the 9.5 rating from Polygon looked a little premature.
And then it happened.
Polygon changed their score from a 9.5 to an 8 in less than 24 hours.
The 'updating' of their score is justified by their review policy, which states that "the era of the game as a static product is essentially over." In more detail, here's what it says:
> At Polygon, we operate on the supposition that reviews are a critical evaluation of a game on the day of its release, which the score we assign reflects. As a primary means of informing our readers, a static review score has a very limited shelf life. But review scores also serve as a foundation for a site's legacy and credibility with its audience. They're one of the ways that readers understand where a site is coming from. This makes changing review scores a taboo topic.
>
> But we can update them.
>
> Polygon's reviews and database have been built based on the idea of updates, or "bumps," as I've called them. If a game changes in a substantive way, we can add an update to our reviews that informs you how and why, and we can modify our scores accordingly. This will appear on the reviews in question as a timeline of that game's evolution and our corresponding recommendation (or lack thereof). The original review score will never vanish or go away, but our readers will be able to better understand where our opinions as a site reside over time for games we review.
No other site I know of is pioneering review scores in this way. Polygon is the first to trundle off into this perilous new territory, and before readers have a chance to decide whether it's a good idea, here are some potential issues.
The first problem that comes to mind: what are they going to do when the server problems are fixed? Assuming all of the server issues are ironed out in the next few days, will Polygon change the score back to a 9.5? And if they do, changing it to an 8 in the first place was an exercise in futility. Who is being catered to when a game is scored a few points lower every time there's an issue? It doesn't serve the reader: if scores can change on a whim, their credibility will be called into question every time.
Naturally, when the change in score was announced on their Twitter feed, readers also had questions. This particular exchange caught my eye:
I had two knee-jerk reactions to this. One was, "Who's winning here? Are you just changing your score to appease all the SimCity players who can't play the game and are complaining?" The other was, "Well, that's stupid." Not only do non-static review scores create a moving target for criticism, they also lead me to question the original score. Here's what I take away from it:
- In a world where there were no server problems and everyone that bought the game got to play it without issue, SimCity is a 9.5 — a near perfect game.
- In reality, where only a third of players can actually play the game as intended, SimCity is worth an 8.0. I repeat: a game that is not working for over half of its player base is still an 8.0.
- By that logic, if SimCity didn't work for anyone, the lowest score it should receive is a 6.0. In other words, a game that doesn't work at all still wouldn't be rated a 0/10.
You can say my logic might be flawed, but you get the idea.
Another slippery slope is deciding which events deserve a score adjustment. Since this review policy wasn't in place for Diablo 3, there's no score history for that game's launch. But if games aren't static and should have updated scores, it's reasonable to expect an updated review with every patch. If the League of Legends servers go down, do Polygon editors hop on and lower their League of Legends score until the issue is resolved? When PSN was hacked and no one could play their PlayStation 3 games' multiplayer, did Polygon change the score of every PlayStation 3 game with multiplayer? In that example, PSN is the equivalent of Origin: if SimCity had its score dropped for server problems, shouldn't every PS3 game have had its score adjusted for PSN problems?
Obviously not, and that's the point. Don't get me wrong — Polygon's bravery is commendable. In trying to quickly and accurately address the fluctuation that now accompanies most online games, they're plunging into new waters and making some big asks of their readers. Can we trust that rating bumps are always at the behest of the reviewer, and not the developer? Probably. But this new, semi-transparent approach to hitting a moving target presents a very slippery slope.