
What does it take to become an expert?

Graham McAllister investigates the value of experience

Malcolm Gladwell's 2008 book, Outliers, claimed that an individual must practise for at least 10,000 hours, or approximately 10 years (20 hours a week for 500 weeks), in order to reach expert status. The figure has been widely quoted since, but it seems it may not be quite true.

The 10,000-hour rule was first proposed by Herbert Simon and Bill Chase in 1973, when they looked into the histories of experts in different domains. They found that in chess, for example, the average time between someone first learning the rules and becoming a Grandmaster was 10 years. The same is true in musical composition: the time between first studying music and going on to produce a great composition is also around 10 years. Studies in other domains have found similar patterns; for scientists and authors, the time between their first publication and their best publication was also around 10 years. However, Simon and Chase also found that time alone would not automatically lead to expert status, i.e. merely spending 10 years in your chosen discipline had little bearing on becoming an expert. Something else was needed.

Fast forward twenty years to 1993, when Anders Ericsson and fellow researchers, also looking into what makes an expert, concluded that this 'something else' was not just the time spent practising in the discipline, but, more importantly, the characteristics of the practice. They introduced the concept of deliberate practice, distinguishing it from normal practice by four particular characteristics: (1) the motivation of the individual to improve their skill level; (2) the pre-existing knowledge of the learner, i.e. do they understand what they are currently trying to achieve; (3) the learner should receive immediate feedback and understand the results of their performance; and (4) the same, or similar, tasks should be repeated. If these conditions were met, performance could improve dramatically. However, in the absence of adequate feedback they state that: "Efficient learning is impossible and improvement only minimal even for highly motivated subjects". This is why repetition of an activity alone will not automatically improve performance - feedback on performance is a critical component.

So this seems quite clear: to become an expert, you need to engage in deliberate practice (exhibiting the four characteristics) for around 10 years. But once again, it seems that this may not be quite true.

One issue with Ericsson's research was that its results were based on only two experiments. In 2014, a team of researchers decided to expand greatly on those small experiments, conducting a meta-analysis of 88 studies into deliberate practice with the intention of answering a single question: how much does deliberate practice improve your performance?

The results varied by discipline. Although they showed a positive relationship between deliberate practice and performance, the effect was smaller than first thought. The relationship was strongest for games (such as chess), music, and sports, but much weaker for education and the professions (almost no relationship at all). Learners seem more likely to improve in domains where deliberate practice can be clearly defined (games, music and sports), and much less so in domains where the outcomes are harder to specify (education and the professions). Still, as nearly all learners who engaged in deliberate practice saw their performance improve, it certainly seems worthwhile - but how do we implement it in video game development?

Work and Deliberate Practice

There is one more condition of deliberate practice that we haven't discussed yet: when it takes place. Although it is possible to learn and improve while doing your day-to-day job, the conditions are far from optimal. Ericsson states that "The costs of mistakes or failures to meet deadlines are generally great, which discourages learning and acquisition of new and possibly better methods during the time of work". This is why deliberate practice needs to be kept separate from work: it has to happen in a controlled environment where risks can be taken, failures can be made and, most importantly, feedback can be given, so that the learner gains meaningful experience, i.e. improves.

Take programmers as an example. Deliberate practice could perhaps take place during pre-production, a phase when alternative approaches can be evaluated in a less pressured environment. It is also relatively straightforward to assess the effectiveness of competing algorithms, since measures such as CPU cycles, frames per second, PSNR, or Big-O analysis can help determine the most successful strategy. Coupling these measurable outcomes with a mentor gives the learner the feedback they need to understand the results, leading to improvements in both the software and the learner's experience.
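As a concrete illustration, here is a minimal sketch of that kind of measurable comparison - two lookup implementations (both invented for this example, not taken from any real project) timed head to head, the sort of evaluation a learner and mentor could review together:

```python
import bisect
import random
import timeit

def linear_lookup(data, target):
    """O(n) scan - the incumbent, 'known' method."""
    for i, value in enumerate(data):
        if value == target:
            return i
    return -1

def binary_lookup(data, target):
    """O(log n) search - the candidate alternative (requires sorted data)."""
    i = bisect.bisect_left(data, target)
    return i if i < len(data) and data[i] == target else -1

# Hypothetical workload: 10,000 sorted values, 200 lookups per run.
data = sorted(random.sample(range(1_000_000), 10_000))
targets = random.choices(data, k=200)

for name, fn in (("linear", linear_lookup), ("binary", binary_lookup)):
    elapsed = timeit.timeit(lambda: [fn(data, t) for t in targets], number=5)
    print(f"{name:>6}: {elapsed:.4f}s for 5 runs of 200 lookups")
```

The numbers themselves matter less than the loop: a defined task, an objective measure, immediate feedback, and a comparison the learner can repeat after each change.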

But what about areas which are not so easy to define? How do we know if one game design idea is better than another?

Playtesting - Deliberate Practice for Game Designers

The four key characteristics of deliberate practice map quite neatly to playtesting. Firstly, the designer should be motivated to see their current design being tested. Yes, it's usually an unnerving process to watch impartial players experience your game's design, but if there's no motivation for a designer to go through this process and become better, then that's more unnerving. Next is having clear goals: defining the purpose of the playtest and the criteria by which one game design concept will be judged better than another. Thirdly is feedback: the designer receives immediate feedback from watching the players' behaviour and from evaluating the results of a structured questionnaire or interview. They should come away with a solid understanding of where their design succeeded or failed, and clear feedback on corrective action. Lastly is repeatability: learning is iterative - you need to see if your corrections had the desired effect, and if not, why not. The playtest, then, is integral not only as a method of evaluating the player experience of a game, but as a way for designers to improve by learning in a controlled environment.
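To make the feedback and repetition characteristics concrete, here is a minimal sketch of how a designer might check whether a change between two playtest builds actually moved the measures being tracked. Every build name and data point below is hypothetical, invented purely for illustration:

```python
from statistics import mean

# Per participant: (completed the level?, 1-5 questionnaire rating for clarity).
# Hypothetical results from two playtest iterations of the same level.
iterations = {
    "build_1": [(True, 2), (False, 1), (False, 2), (True, 3)],
    "build_2": [(True, 3), (True, 4), (False, 3), (True, 4)],  # after design fix
}

for build, sessions in iterations.items():
    completion = mean(1.0 if done else 0.0 for done, _ in sessions)
    clarity = mean(rating for _, rating in sessions)
    print(f"{build}: completion {completion:.0%}, clarity {clarity:.1f}/5")
```

However crude, recording comparable measures per iteration is what turns a playtest from a one-off verdict into the repeatable, feedback-rich loop that deliberate practice requires.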

Maximising Your Ability to Improve

As we saw earlier, deliberate practice will help almost everyone improve, but the effect varies by discipline. However, even in the disciplines which seem best suited - those where goals are easy to measure - deliberate practice accounted for only about 25 per cent of the factors behind someone reaching expert status. In domains where precise criteria are difficult to specify, which game design likely falls into, deliberate practice accounted for less than 1 per cent of the reasons someone had made it to expert status (essentially no measurable effect).

Also bear in mind that these results are for deliberate practice - for those who engage in structured learning to actively become better. For those who have merely put in 'years on the job', what value is their experience actually bringing to the table? Have they been repeating the same mistakes year after year, falling back on known methods even though those are unlikely to be optimal?

So, if deliberate practice contributes up to 25 per cent of what makes an expert, what makes up the other 75 per cent? So far, science has failed us: there are plenty of theories, but the bottom line is that we don't know yet. The contribution of deliberate practice to expert status may appear relatively small in percentage terms, but learners who engage with it and implement the four key characteristics have successfully reached expert status (world leading) across a variety of disciplines. Science, it seems, has outlined a guide for game designers not only to improve their own skills, but also to deliver great games.


Graham McAllister

Contributor

Graham is the founder of Player Research, a Brighton-based user research and playtesting studio.
