22 Feb 2017
Best of 2016
by JBHyperion

Hello, and long time no see! As I'm sure you're all aware, the Best of 2016 voting process recently concluded, where community members far and wide came together to vote on their favourite and/or most noteworthy maps of the past year, culminating in the crowning of the maps which deserved to be named the "Best of 2016".

A lot has happened over the course of the past year with regards to mapping which may have influenced these results and the mapping meta as a whole - from changes to the way the Beatmap Nominators are selected and operate, to restructuring of the Quality Assurance Team, and even the addition of the Code of Conduct for Mapping and Modding, which I talked about earlier last year. Additionally, the process came with an all-new weighting algorithm, factoring into each vote not only the player's playcount on the map, but also their perceived "skill" and even the difficulty of the map itself.

As expected, the results and the methods by which they were reached have divided opinion, heralding joy for some and despair for others. Whilst a number of people seemed to be satisfied with the results compared to previous years, statements like "Popularity Contest" and "Most Retried of 2016" have been thrown around a lot by detractors. Many people have had their say on the pros and cons of this system, and I have no doubt that these will be taken into consideration to make next year's "Best of" even better. Before that however, let's switch focus back to 2016. Now that the winners and losers have been announced and people have started formulating their own ideas, I thought I'd throw my own into the mix. Note that these are all based on my personal opinion, so there will be a degree of subjectivity which you are free to agree or disagree with at your leisure. If you're okay with that, by all means feel free to read on!

Firstly, let's take a look at the weighting system and the trends it produced.

[Image: formula for the weighting system]

The switch to a more "experience"-based system rewarded higher-skilled players, but penalised higher-difficulty maps, especially marathons, since these difficulties were clearly less accessible to the majority of the playerbase. Voting for a mapset in which you didn't pass any of the difficulties fixed the weighting for that vote at the minimum of 0.2, regardless of your skill or the number of times you played the map. Additionally, marathons accumulated lower playcounts because of their inherently greater length. By contrast, mapsets consisting of full difficulty spreads offered a wider variety of accessible difficulties for players to attempt and were easier to replay multiple times, contributing to far more raw votes and a greater score in general. I assume the 0.2 floor was introduced to combat the previous years' proliferation of players voting for maps they were unable to pass but may still be able to judge and enjoy, hence the low (but non-zero) weighting.
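The actual formula was never published in this article, so as a purely illustrative sketch, a weighting scheme with the behaviour described above - a fixed 0.2 floor when no difficulty was passed, with weight otherwise growing with playcount and shrinking as the gap between player skill and map difficulty widens - might look something like this. Every name and constant besides the 0.2 minimum is my own assumption:

```python
MIN_WEIGHT = 0.2  # stated floor: applied when the voter passed no difficulty
MAX_WEIGHT = 1.0  # assumed cap (illustrative)

def vote_weight(passed_any: bool, playcount: int,
                player_skill: float, map_difficulty: float) -> float:
    """Return a weight in [MIN_WEIGHT, MAX_WEIGHT] for one vote.

    Hypothetical reconstruction only - the real osu! formula differs.
    """
    if not passed_any:
        # A vote on a set with no passed difficulty is pinned at the minimum.
        return MIN_WEIGHT
    # Diminishing returns on repeated plays (illustrative choice: caps at 10).
    play_factor = min(1.0, playcount / 10)
    # Penalise large gaps between player skill and map difficulty.
    skill_factor = 1.0 / (1.0 + abs(player_skill - map_difficulty))
    weight = MIN_WEIGHT + (MAX_WEIGHT - MIN_WEIGHT) * play_factor * skill_factor
    return min(MAX_WEIGHT, weight)
```

Under a scheme like this, a marathon is doubly disadvantaged: fewer voters pass it (pinning their votes at 0.2), and its length suppresses the playcount term for those who do.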

Correlating playcount to weighting seems reasonable enough; after all, if you're playing a map multiple times, chances are you enjoy playing it. What I don't understand is the dependence on a player's performance points. The weighting formula already takes the difficulty of the beatmap into account, helping to avoid the problem from previous years I mentioned before. This raises the question: are more experienced players truly more qualified to determine the quality of a map? What makes this point more contentious is that the majority of the mapping and modding community - those who support, criticize and discuss mapping quality on a regular basis - span almost the entire range of playing skill. Surely they are the ones best placed to judge what the "best" maps of 2016 were? The Mappers' Choice Awards for osu!standard was an alternative, community-run event centred on this idea, allowing mappers and modders to share their own opinions. Of course this system had its own plethora of advantages and disadvantages, relief and disappointment, and, as someone who has become less involved with the osu!standard mapping and modding community recently, I feel it's a topic best saved for others to debate. The main point, however, is that it demonstrates there are alternatives we can consider. Should a player's contribution to the game afford them more leverage? Could the total time spent playing a mapset replace the playcount weighting, giving longer maps a boost?

The simple reality is that no system will ever please everyone. However, that doesn't mean you should attack maps you don't like, or the staff for the methods used. Constructive criticism is key, so that we can all continue growing and improving. Maybe your favourite map didn't win this year, but that's okay. Does the act of people voting for a map suddenly make it a better map somehow? Of course not. Maybe lots of other people voted for a map you don't like, and that's okay too. You still have the maps that you like to play, and they're no worse off. Remember that osu! is a diverse community, with players of all backgrounds, interests and skill levels, and it takes all of these people to grow and shape the osu! we know and love!

That's all from me for now, though we'll try not to leave it so long without an article in future! In the interests of trying to appeal to everyone and not single out any particular game modes or specific results, I've only really been able to scratch the surface on this topic. Therefore, please feel free to discuss and share your own opinions on what this year's "Best of" did well and poorly in the comments below! Who knows, maybe your idea will help shape the future of community voting!

—JBHyperion
