Wine-rating systems

I favour a simple like-dislike button, comparable to Facebook’s thumbs-up icon, as a credible wine-rating system. Failing that, a five-star rating, in half-star increments, or the Australian wine-show system’s bronze, silver and gold medal ratings, based on a 20-point scale, both give a broad quality ranking without splitting hairs.

However, the 100-point scale, popularised by America’s Robert M Parker, increasingly dominates the global scene. Already adopted by many critics, it will inevitably become the standard for Australian wine shows: the Brisbane Show and the Canberra International Riesling Challenge adopted it last year, while Sydney trialled it in 2012 and intends to go the whole hog this year.

Fortunately, Brisbane and Sydney at least settled on the same rating scale: 84–89 points for bronze medals, 90–95 for silvers and 96–100 for golds. And since both shows intend to stick with medals, consumers may not, at first, notice the difference between the 100-point system and the 20-point scale it replaces.

That is, until successful producers begin adding scores to the gold, silver and bronze medals adorning their wines. The temptation may prove irresistible, especially for those with scores in the nineties – and for a wine consumer now well and truly exposed to the 100-point system.

As well, wine show catalogues, now little read outside the industry, may attract wider consumer readership, if only because of greater familiarity with 100-point ratings. The old 20-point system probably meant nothing to the average wine lover.

Indeed, this is one of the points argued by supporters of 100-point rating – that the scores will help make wine shows more relevant to the consumer.

Part and parcel of 100-point ratings is the dubious perception that only wines scoring 90 or above deserve attention.

While producers, traders and critics often slam this attitude, it’s completely understandable given the confusing number of wines available. And it’s little different, in principle, from a phenomenon observed for decades by producers and retailers – that gold medals and trophies sell wine; silver and bronze medals do not.

This says only that an insecure consumer, faced with a bewildering choice, takes the impartial advice of wine shows or critics and plumps for the best. Since they can always find a 95-point wine at any price point, why buy the 89-point one?

This desire to help readers buy well also explains why publishers, including The Canberra Times and the larger Fairfax group, demand ratings from their wine reviewers.

While Fairfax overall embraces the 100-point system, this magazine chooses five-star ratings – my preferred system.

This seems more in tune with the percipient English writer Hugh Johnson, who once commented after judging at the Sydney wine show: “I judge wine by loving it or hating it … and there’s not much in between. I love vitality in a wine, the sort of wine where one bottle is not enough … giving wines points creates a spurious sense of accuracy and if you can believe it means something when someone gives a wine 87 points out of 100 then you would believe anything.”

Like Johnson, judges, critics and consumers all seek exciting wines. And I believe he’s dead right about the spurious sense of accuracy in 100-point ratings – hence, my preference for a broader scale.

I don’t see how wine shows, or anyone who’s judged in wine shows, can adopt the scale with a straight face. Scoring always involves compromises by individual judges and either aggregate or average scores across a panel of three. That’s how committees work and how a truly democratic system should – allowing full expression of individual views, but finally reaching a decision.

Under the 20-point scoring system, wine shows award medals on the aggregate scores of three judges: 46.5–50.5 for bronze medals, 51.0–55.0 for silver and 55.5–60 for gold.

Under the 100-point system, however, shows will award medals based on the average score of three judges – for example, if one judge rated a wine at 83 points (one point below bronze), another gives it 86 and the third awards 89 (the highest bronze score), the aggregate is 258 points for an average of 86.

In the argy-bargy following each judging session, I can already see judges madly adjusting scores to achieve just the right average. Now that will be an exercise in futility.

There’s little difference in principle between the two systems. However, in the past, if consumers saw the results at all, they probably saw the medals, not the aggregate scores that led to them.

Under the new system, if shows and exhibitors publicise the points, then we’re likely to see scores, as in the example above, that no judge actually awarded. And could anyone interpret the relative merits of wines rated, say, 86 and 88 – by a committee of three? Sounds spurious to me.

And while some argue for the merits of a standard 100-point system, ratings among critics may vary considerably, not necessarily reflecting the wine-show bronze, silver and gold categories. Already, ratings by individual critics vary, as you’d expect of individual opinion, underlining the fact that that’s all it is.

Most consumers will continue to feel insecure about wine and, quite sensibly, take advice from wine shows and critics with due scepticism. I, for one, see the supposed precision of the 100-point system as a distraction from wine’s infinitely variable hues and tones.

Surely it’s better for readers if critics attempt to give some sense of a wine’s style, and then a broad view of its quality – whether gold, silver or bronze; somewhere on the five-star scale; or even categorised, as Canberra’s Winewise magazine does, as highly recommended, recommended, agreeable, acceptable or unacceptable.

Copyright © Chris Shanahan 2013
First published 30 January 2013 in The Canberra Times