Sunday, 26 June 2016

Developing intuition: when can you trust your gut?

At the talk I gave on intuition at Wellington College’s Education Festival on Thursday, I ended up not using the slides I’d prepared and wandering a bit off topic. Here follows what I’d planned to say as well as the slides.

Certainty and overconfidence can prevent us from thinking; the more certain we are that we’re right, the less we’ll consider other possibilities. This tendency not to think too much about the possibility that we might be mistaken stems in part from a whole suite of well-documented cognitive biases, but also arises from institutional pressures. Schools put pressure on teachers to explain away their mistakes rather than to explore them, and this leads to teachers repeating the same old mistakes, often unaware that anything could be better.

Another potential barrier to developing better intuition is our belief that practice makes perfect. We tend to believe that experience leads inexorably to expertise and so, the longer we’ve been teaching, the better our judgements become. Whilst I don’t want to claim that this is flat-out wrong, I do think there are very clear limits to our ability to make reliable intuitive judgements. Of course, I could be mistaken about this, but I think the benefits of exploring the ways we think we develop professionally far outweigh the costs.

We know quite a lot about how to develop expertise. Anders Ericsson – the expert expert – has been researching expertise across many different fields for decades and his recent book Peak is an invaluable summary of what he’s learned. For teachers I think there are 4 key principles we can take from his findings:

  1. Frequent, low-stakes observations – lesson observation has got a bad rap over the past few years, but that’s mainly because of the stakes involved. If observations are focussed on developing and honing key teaching skills then they could be a very effective means of helping teachers become more expert.
  2. Much better feedback on learning – we get very biased data on how effective we are at teaching because most of the feedback we get is on students’ performance during lessons. We see that they appear to have learned something as a result of our instruction and conclude that whatever we’re doing must be effective. Unless we collect data on whether our teaching is effective over the longer term, we could be improving students’ current performance at the cost of their future learning.
  3. Guided, purposeful practice – once we have automated a skill we stop getting better at it. When you start learning to drive (or teach) there’s a hell of a lot to focus on, and this effort keeps us conscious of what we’re doing. The more we think about our practice, the more we’re likely to improve. We’ve all experienced the phenomenon of driving on auto-pilot with no memory of the last 50 miles, and the same can be true for experienced teachers – we can get to the end of the day without having had to think all that much about what we were doing. Purposeful practice would require us to remain in the conscious stage of skill acquisition so that we would continue to improve. If practice is guided by someone more expert than ourselves, they can help to focus our attention on honing our skill and expertise.
  4. A codified body of knowledge – having an expert guide on hand to direct our practice isn’t always practical. In most fields where practice results in genuine expertise – sports, chess, ballet, classical music – there is a well-defined body of content to master. This isn’t the case in education, because we don’t agree on what effective teaching looks like. If we did, we could start to break down and work on individual components in the knowledge that improving these things would definitely make us better. But because we rely on faulty intuitive judgements about what these things are, our practice is often purposeless and may even degrade our ability to teach effectively.

So, do teachers just improve? What’s the evidence? A large number of studies have produced the very counter-intuitive finding that although teachers seem to improve rapidly – in terms of student outcomes – over the first three years of practice, they subsequently plateau and perhaps even begin to decline. This is not uncontroversial, and there are also many other studies which show teachers continuing to get better with time. Kini & Podolsky (2016) argue that the studies which show teachers ceasing to improve have used poor statistical models and that actually, “Teaching experience is positively associated with student achievement gains throughout a teacher’s career” and that, “For most teachers, experience increases effectiveness”.

The problem for anyone without a statistical background is that these claims and counter-claims revolve around arguing over who has the better maths. As John Ewing says, “Whether naïfs or experts, mathematicians need to confront people who misuse their subject to intimidate others into accepting conclusions simply because they are based on some mathematics.” In short, I have no real idea who’s right, but Kini & Podolsky’s claims are in part based on this assumption:

[The finding that teachers don’t improve with experience] seems counter-intuitive, given the evidence that professionals in a wide range of contexts improve their performance with experience. For example, a surgeon’s improved performance is associated with increased experience gained at a given hospital. An increase in a software developer’s experience working on the same system is associated with increased productivity. What is common sense in the business world—that employees improve in their productivity, innovation, and ability to satisfy their clients as they gain experience in a specific task, organization, and industry—is not the commonly accepted wisdom in public education. [my emphasis]

They’re correct to say that the idea that experience doesn’t lead to expertise is counter-intuitive, but they’re wrong about everything else. Surgery, software development and business are all examples of domains where experience does not automatically confer expertise! This finding has been extensively documented.

Take the example of radiologists. A 2004 analysis of 500,000 mammograms and 124 radiologists was unable to find any evidence that years of experience lead to increased skill in diagnosis, resulting in many thousands of unnecessary biopsies and hundreds of cases where malignant tumours were missed. What typically happens is that a radiologist will be sent a mammogram of a patient she will never meet, make a diagnosis and return it, never to find out whether it was correct. Although her ability to correctly diagnose tumours may not be increasing, her confidence in her own expertise certainly is.

What about clinical psychologists? In his 1994 book House of Cards, Robin Dawes details how clinical psychologists with over 10 years’ experience are no better at diagnosing and treating mental illnesses than those fresh out of training. This pattern has been repeated in many, many different domains – so much so that Robin Hogarth has identified what he calls ‘wicked domains’, in which experience routinely fails to lead to expertise. Yet all experienced clinical psychologists believe they are genuine experts.

A ‘wicked domain’ is one where feedback on performance is absent or biased. This is equivalent to playing golf in the dark – you never find out where the ball went, so you never get better at hitting it. But it’s worse than that. Because feedback in wicked domains is biased, it leads us to believe we’re becoming experts even when we’re not. We become ever more confident and certain that we’re right: a dangerous combination. This is connected to the Dunning-Kruger effect: the finding that those without expertise lack the knowledge to realise their own deficits.

Hogarth also identified so-called ‘kind domains’, which provide accurate and reliable feedback. Gary Klein has led research into these ‘kind domains’ and shown that where we get solid feedback, we develop genuine intuition. His studies of firefighters, neonatal nurses, military commanders and other professions have shown that, in these fields, experienced practitioners ‘just know’ the right course of action within seconds.

Hogarth shows that even where a domain may have some ‘kind’ aspects, it can also have a ‘wicked’ effect on the genuine development of expert intuition:

The physician in the emergency room …must make speedy decisions and will not always receive adequate feedback. Indeed, the typical feedback he receives is short term: how the patient responds to his immediate actions. It is rare that the physician ever really finds out what happened to the patients he treated within a longer, and perhaps more relevant time frame. Some patients simply go home after treatment and never return to the hospital; others are cared for in different departments of the hospital, and so on. [my emphasis]

Although surgeons’ short-term survival rates dramatically improve with years on the job, long-term survival rates and other complications don’t.

Teaching may be similar to surgery. Although we get better at certain aspects of the job, we may not improve in others. For instance, teachers improve rapidly at managing classrooms. We get excellent feedback from students on the effectiveness of our decisions; they either behave or they don’t. We get daily opportunities to learn from our mistakes and we can see our practice improve as we home in on the best way to interact with different classes. But we don’t necessarily get any better at actually teaching. Hamre et al. have shown that ‘quality of instruction’ is the aspect of teaching least likely to improve over time. This is because the feedback we get is biased. We see that students can answer our questions and respond productively to our suggestions, and we think we see learning when in reality all we’re seeing is their current performance, from which we are inferring learning. Current performance may be ‘mere mimicry‘. If we assume our instruction is effective and move on, we will often be mistaken.

This may all sound like a counsel of despair, but there are ways we can train our intuition. These are mainly concerned with trying to make wicked domains kinder. Hogarth identifies seven possibilities which, taken together, could help teachers develop genuine expertise:

  1. Select and/or create our environments by ‘apprenticing’ ourselves to experts
  2. Seek feedback through “intelligent sampling of outcomes”
  3. Impose “circuit breakers”
  4. Acknowledge emotions
  5. Explore connections
  6. Accept conflict in choice
  7. Make scientific method intuitive

Over my next few posts I will unpick how we might apply each of these steps in education.

The post Developing intuition: when can you trust your gut? appeared first on David Didau: The Learning Spy.


