Did you catch that? In episode 66, Katherine Wood from the University of Illinois discusses her research with the scientist behind the famous “Invisible Gorilla” experiments, Daniel Simons, into if and when people notice unexpected objects in inattentional blindness tasks. She discusses her and Simons’ article “Now or never: Noticing occurs early in sustained inattentional blindness” which they published on November 20, 2019 in the open-access journal Royal Society Open Science.
Websites and other resources
- “If you don’t notice something within 1.5 seconds, you may never see it”
- “You have no idea how much your brain is ignoring”
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
🔊 Patrons can access bonus content here.
We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.
Hosts / Producers
Doug Leigh & Ryan Watkins
How to Cite
Leigh, D., Watkins, R., & Wood, K. (2020, January 21). Parsing Science – Hiding in Plain Site. figshare. https://doi.org/10.6084/m9.figshare.11691765
What’s The Angle? by Shane Ivers
Katherine Wood: If we want to enhance the subset of information we’re hoping to attend to we have to turn down the volume on all of the irrelevant information.
Doug Leigh: This is Parsing Science, the unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Doug Leigh.
Ryan Watkins: And I’m Ryan Watkins. Today, in episode 66 of Parsing Science, we’re joined by Katherine Wood from the University of Illinois’ Department of Psychology. She’ll talk with us about her research with Daniel Simons into if – and when – people notice unexpected objects in their visual field when they’re focused on other tasks.
Wood: Hi, I am Katherine Wood. I am in my final year of my doctoral studies at the University of Illinois. I was born and raised in Sacramento, California, and I went to University of California Berkeley for my undergrad degree. I did that degree in psychology, and got my first exposure to research in Dr. David Whitney’s lab. I fell in love with the process and all of its highs and lows, and when I was applying to graduate schools and looking at advisors that I wanted to work with I focused mostly on their areas of research, and vetting whether those areas lined up with my interests. And there was one faculty member at the University of Illinois who just seemed like a complete perfect fit with his areas of interest: what he was interested in, what I was interested in. And I was like, “Perfect! I’m gonna shoot him an email, introduce myself, explain my areas of interest, and see if he is taking students.” And it was only later after I had sent that email that I realized I had sent it to the Daniel Simons, who first did that Invisible Gorilla experiment. And I had a bit of a moment of panic: “Oh my god! I emailed Daniel Simons and asked if he was taking students. What do I think I’m doing?” But we hit it off right away, and we were interested in a lot of the same things. He’s an advisor with a really broad array of interests. And I also love studying attention and vision from a bunch of different angles, and so he ended up being a perfect person to work with to pursue my sometimes-unorthodox approaches to experiments. So I moved to Illinois right after undergrad to start working with Dan, where I have been ever since.
Leigh: For a while now, I’ve thought of “distraction” as just another form of attention, but toward something that we didn’t intend … to attend to. So I was glad to have the chance to ask Katherine – whose research focus is in attention and failures of awareness – if this is more or less right.
Distraction as attention
Wood: I think that’s a fair way to think about it, yeah. Distraction is basically a failure to dedicate attentional resources to the right thing – or the thing we want to be dedicating them to – and generally dedicating those attentional resources to something else instead. But there is no hard-and-fast, commonly agreed-upon definition of attention. William James, I believe, is the one who famously said, “We all know what attention is; it needs no definition.” And I think for most people that’s generally true. We feel like we know when we are concentrating on something, or attending to something. And we can vary in how much attention we’re dedicating to something at any one time. You know, if you’re sitting at the airport and just casually people-watching, the scope of your attention is pretty broad. You can be easily alerted to new or interesting things that pass by. And then there are times when we have extremely focused attention: if we’re working hard on a project and we don’t notice that the Sun has set around us, or if we are frantically trying to figure out how to get the smoke alarm to stop going off when there’s no fire. Then our attention is extremely narrow, and we are much less aware of other things in our environment. And it’s in this latter case – when we have very narrowly focused attention – that we are susceptible to a phenomenon called inattentional blindness. So, when we are inattentionally blind, we fail to notice – as a result of having our attention narrowly focused on something else – events unfolding in our environment that we would otherwise notice immediately.
The Invisible Gorilla
Watkins: If you haven’t already seen Dan Simons’ selective attention experiment, you might want to pause this episode and head to parsingscience.org/e66 to watch it now. It’s just a minute or so long. Doug and I have shown it to several cohorts of graduate classes, and it’s almost always had the same effect: by and large, people miss what’s right in front of their faces. Doug and I asked Katherine to describe her experience with the video.
Wood: In that video you have two groups of people: one in black shirts and one in white shirts. And they’re milling around and passing basketballs between themselves. And this is a busy visual scene, so you kind of have to concentrate. And you ask an observer “Ignore the team in the black shirts and count the passes that the white shirt team is making between themselves.” And it requires a pretty focused attention to accomplish this task accurately. And when people are doing this, about half the time, they will be totally unaware of a man in a gorilla suit that walks through the middle of the action, stops to beat his chest, and then exits through the other side of the game. Even though he’s on screen for about nine seconds total. And when people watch that video without having to monitor the passes that a particular team is making, they notice the gorilla right away. And they cannot believe that they would ever miss that. And I did not notice the gorilla the first time I saw that video, which was in an AP psychology class in high school. And I was so stunned that you could basically erase something from the world just by moving your attention around. That was unbelievable to me.
How inattentional blindness works
Leigh: The notion of “inattentional blindness” was coined in 1992 by Arien Mack from the New School for Social Research and Irvin Rock from the University of California, Berkeley … so it’s been around for quite some time. But, despite being familiar with the concept, Ryan and I both missed the gorilla at first, just like many of our students have. This led us to wonder: how can it be that we’re so confident going into such tasks, even when we know that inattentional blindness exists?
Wood: People have done studies asking people to basically predict whether they would be inattentionally blind. So they say, “If this were to happen in these circumstances, do you think you would notice the gorilla?” And, unsurprisingly perhaps, people overwhelmingly say, “Yes, of course I would not miss the gorilla.” And they report that at much higher rates than they actually notice the gorilla. So not only do we have this phenomenon where we can totally miss these events out in the world, but people also don’t think they’re susceptible to it. Which is an interesting combination of circumstances. People think they will notice more than they do, and it’s kind of a reinforcing cycle, because if you miss something as you’re out and about in the world, you’ll never know that you missed it. It’s only those cases where you do see something at the last second and stop in time, or notice something kind of hidden, that stay with you. And that reinforces your belief that you notice these things when they’re important. But all those cases where you missed it completely get lost, and don’t enter into how you evaluate your own abilities. So when we were digging into the background for these studies, one of the things we looked at was previous work on how quickly we deploy attention and how quickly we segment our visual environments into the information we want to take in and the information that we need to filter. And previous studies of how quickly this can be done indicate that it takes hundreds of milliseconds to set up these filters. So when we know what information is important in a visual environment, and then we are given access to it – so, you know, you tell a subject you’re going to be attending to these colors of dots and ignoring these other colors of dots – as soon as those dot displays come up, within a couple of hundred milliseconds we see evidence of suppression of the unwanted information and enhancement of the desired information.
Noticing unexpected objects
Watkins: You can demo the games Katherine and Dan used in their experiments at parsingscience.org/e66. If you’d like to play them, you might want to do that now because – spoiler alert – we’re going to talk about their artifice moving forward. For everyone else, the “unexpected object” in Katherine and Dan’s experiments is a small plus sign that moves across a screen while participants are busy with a sham task of counting the number of times various moving shapes bounced off the edges of the screen. You’d probably think that – if given enough time – you’d almost never fail to notice the plus sign in inattentional blindness tasks like this. And that was something that Katherine and Dan wanted to find out as well. But they were also interested in learning when, if at all, participants noticed the unexpected object – be it when it first appears, or just before disappearing – as well as whether its being on the screen for a longer amount of time increases people’s noticing, or if we just notice unexpected objects randomly.
Wood: In the paper we introduce a couple of different models for how noticing might occur. And these are in no way exhaustive. But we picked a handful that we thought would be fairly plausible; nothing outrageous. I think the most straightforward to grasp is this idea that noticing is sort of random: you’re just as likely to notice the unexpected object at any point while it’s on screen. Under that assumption, the more time you have with the unexpected object, the more likely you are to notice it, because you have more opportunity to do so. It’s a very intuitive model, and I think a lot of people – and a lot of researchers, probably, us included – assumed that something like that was probably the case. “Of course you are more likely to notice the object if you have more opportunity to do so.” So that was one of the possibilities that we were interested in examining.
And then there is the possibility that more time on screen produces an increasingly strong signal as to the unexpected object’s presence. You know, “If we’re passively accumulating evidence about what’s going on out in the visual world – even when we’re attending to something – then eventually that bright, salient new object is going to generate a strong enough signal that we will notice it. And we’re not noticing it by chance, but rather [are] more likely to notice it later, when we’ve accumulated stronger evidence for its presence, than earlier.” And then, the models where we predict noticing tied to onset or offset: originally we included those as easily dismissed baselines. I did not think that either of those would be the case, but I said, “Well, we can throw them in. It will show how these different generative models would predict these different patterns of noticing.” And then, surprise-surprise, it actually turned out that it did look like the onset model was the best explanation for the data we were seeing, which came as a total shock.
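The competing accounts Katherine describes can be made concrete with a rough simulation. The sketch below is purely illustrative – the parameters `p_onset` and `rate` are invented for the example, and this is not the paper’s analysis code – but it shows how each generative model translates into a different predicted relationship between exposure time and noticing:

```python
import random

def notice_time(model, exposure, p_onset=0.5, rate=0.2):
    """Return the moment (in seconds) an unexpected object is noticed,
    or None if it is never noticed, under a given generative model.
    Parameter values are made up for illustration.

    'uniform'      - noticing is equally likely at any moment on screen,
                     so longer exposures should yield more noticing.
    'accumulation' - evidence builds over time, so noticing becomes
                     increasingly likely the longer the object is visible.
    'onset'        - noticing happens at onset or not at all.
    'offset'       - noticing happens only as the object disappears.
    """
    if model == 'onset':
        return 0.0 if random.random() < p_onset else None
    if model == 'offset':
        return exposure if random.random() < p_onset else None
    if model == 'uniform':
        t = 0.0
        while t < exposure:
            # constant hazard in each 0.1 s slice
            if random.random() < rate * 0.1:
                return t
            t += 0.1
        return None
    if model == 'accumulation':
        t = 0.0
        while t < exposure:
            # hazard grows linearly with time on screen
            if random.random() < rate * t * 0.1:
                return t
            t += 0.1
        return None
    raise ValueError(model)

def notice_rate(model, exposure, n=20_000):
    """Estimate the overall probability of noticing by simulation."""
    return sum(notice_time(model, exposure) is not None for _ in range(n)) / n
```

Running `notice_rate` at the paper’s two exposures shows each model’s signature: under the onset model the noticing rate is flat regardless of how long the object stays on screen, while the uniform and accumulation models both predict substantially higher rates at longer exposures – which is why a flat empirical pattern pointed toward onset-based noticing.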
Does longer exposure lead to greater noticing?
Leigh: Does increasing the time we’re exposed to an unexpected object also increase the likelihood that it is noticed? That’s the question that Katherine and Dan set out to answer in the first of their three experiments. Here’s how Katherine describes the process of designing this experiment.
Wood: When we set out to do this set of experiments, there had been some prior work exploring questions related to how long the unexpected object is visible – giving people more time versus less. But no previous study had really systematically explored this question of exactly when these unexpected objects reach awareness. So we were setting off on our own, and we weren’t sure of the absolute best way to study this question. We weren’t sure if the way we wanted to study it would yield interpretable results. So Experiment 1 was largely a stress test of our methodology. In Experiment 1, what we decided to do was vary the amount of time the unexpected object was on screen, and change absolutely nothing else. All the stimuli were exactly the same; the only thing we were going to change was how far across the display the unexpected object went. And that was how we were going to control exposure time. So we had a long five-second exposure and a shorter two-and-two-thirds-second exposure.
And the other important thing we wanted to evaluate was if we asked people to report where the object was when they noticed it, did those reports seem to be accurate? Because this location data was going to be a huge part of our methodology, and if it didn’t appear to be reliable, then we were gonna have to rethink our approach. So, in Experiment 1, we had these two exposures, and we were really interested in evaluating the location reports: are people reporting the object where it could actually appear? Are there meaningful differences between the location reports for people who did notice the object versus those who didn’t? And, to that end, we added another condition where there was absolutely no unexpected object at all. So, for these subjects, they just saw three identical trials and then, out of the blue, were asked, “Hey, did you see anything new?” And the answer, of course, should have been “no,” because there was nothing new. But what this condition gave us was a truly random baseline that we could compare our results to, to make sure that the data we were seeing appeared to be meaningful.
Onsetting and offsetting of unexpected objects
Watkins: In their second experiment, Katherine and Dan were interested in comparing what difference it might make if an unexpected object were to onset at the far left versus the far right edge of the display. They also measured the degree to which participants misreported the location of the object even more precisely, as Katherine describes next.
Wood: Experiment 1 came back with results that we were not expecting, which is that exposure time had very little impact on noticing rates. It had some, but much smaller than we were expecting. And the location data looked pretty good; it looked like subjects were able to accurately put the object where it could have appeared. But we wanted to be sure, so Experiment 2 was designed to replicate the overall findings of Experiment 1, take care of a couple of potential confounds that existed in Experiment 1, and then provide a more robust test of where subjects were localizing the object. So we made a couple of changes. We added another exposure time condition, even shorter than the 2.67-second one. Just to see if we could find the bottom, basically: if there was a tipping point where noticing would start to drop off. So we shrank it down for another condition. And then we moved the onset and offset points of the objects around. And the idea there was twofold: one, it should be really obvious in Experiment 2 if subjects are not putting the object in the right place, or if they don’t actually know where it showed up and are just kind of guessing. By having these pretty extreme shifts in where the object onsets, if their location reports don’t follow that onset point then we know that something’s wrong. And so Experiment 2 was designed to try to cover a lot of ground with its design. And in Experiment 2 we found something really interesting that I was not expecting.
Does exposure relate to better localization?
Leigh: We’ll hear what that was, after this short break.
Leigh: Since simply detecting the unexpected object doesn’t guarantee that people could describe it accurately, Katherine and Dan were curious to learn if increasing the exposure time might allow noticers to form a more accurate representation of the unexpected object’s location. To find out, their second experiment addressed whether exposure time impacts how well participants were able to estimate the unexpected object’s location when it onset from either edge of the display and offset in the middle, versus when it onset near the middle and offset at either edge. Here again is Katherine Wood, describing what they found in their second experiment.
Wood: So we found, in general, the same pattern as in Experiment 1. Namely, that additional exposure time has a pretty minimal impact on the overall likelihood of noticing the object. But what was interesting is [that] we found two different patterns of those results, depending on whether we looked at the unexpected objects that onset from the edge of the display – where their onset point coincided with the same place that subjects were attending to – versus objects that onset in the middle of the display where, presumably, less attention was allocated. And in the former case, when the object onset at the edge, we found a stronger impact of exposure time on noticing. So each additional block of time added more to the overall probability that the object would be noticed, compared to the cases where the unexpected object onset from the middle of the display.
And in that case it was pretty much flat: it really did not matter very much how much time you added; there was almost no change in the overall likelihood of noticing. And that suggested an interesting interaction between the spatial location of attention and this effect of exposure time, where exposure time only seems to matter if the object onsets at the same place where attention is allocated. But, even in that case, the increase we observed was a lot smaller than you might expect. And then, from the location side of things, it looked like subjects were pretty accurately following the onset point of the object. So even when we moved things around pretty dramatically – shifting it huge amounts into the display – subjects’ location reports were pretty consistent in clustering around that onset point. So Experiment 2 kind of reinforced what we thought was happening in Experiment 1, which is, “Yeah, it sure looks like people are noticing the object as soon as it comes on-screen.”
Does exposure relate to feature detection?
Watkins: Do people who have longer exposure to an unexpected object also have a better mental representation of its features? That is, even though additional exposure time didn’t substantially affect people’s noticing rates, Katherine and Dan were curious whether it affects how much information noticers are able to extract about the unexpected object. These were the questions that guided their third and final experiment, as Katherine describes next.
Wood: Experiment 3 was designed to see if, well, if exposure time doesn’t seem to have much impact on how likely you are to detect the object, is it possible that exposure time nevertheless affects how accurately you can represent this object, and how accurately you can encode its features? In Experiment 3, the noticing results were very much consistent with what we had observed in the previous two experiments. And so we looked to see whether people in the short exposure condition were less accurate at reporting the unexpected object’s color than people who had longer to experience the object. And that did not appear to make a difference there either.
Is inattentional blindness an illusion?
Leigh: Over the years, Ryan and I have had several researchers on the show who study illusions. But it didn’t occur to me until our conversation with Katherine that most illusions operate on our misperceptions such that we perceive the presence of something that’s not there … and that inattentional blindness is perhaps more like the illusion of the absence of something. Curious, we asked Katherine for her thoughts on this analogy.
Wood: People often say that illusions trick the brain, which is not quite right, because, actually, the brain is still working exactly as it’s supposed to be. The visual system is still working exactly as it is supposed to, and as it does in everyday life. It’s just you give it this wacky, really unusual, stimulus, and the resulting perception is different than ground truth. And it reveals – gives us a peek behind the curtain – as to what the visual system is doing. But it’s not a mistake necessarily. And I think of inattentional blindness in much the same way. So we – in general, I think – want to believe that we would notice these objects. And we think, “well, I want to notice if something in my environment changes.” But inattentional blindness is a side-effect of the attentional selection system working as intended. If we want to enhance the subset of information we’re hoping to attend to we have to turn down the volume on all of the irrelevant information. And as outrageous as it might seem, that gorilla is irrelevant information in the context of trying to track these bounces. So, inattentional blindness reveals how the attentional system is prioritizing information in kind of a similar way that these visual illusions reveal how color constancy is computed, or how we figure out linear perspective and relative size, and so on.
To p or not to p
Watkins: Katherine and Dan’s study is somewhat rare in psychology research in that they report their statistical findings using confidence intervals rather than the more ubiquitous – and contentious – p-value. In fact, the paper makes no use of the word “significant” at all. So Doug and I were interested to hear what led to this decision.
Wood: There are a couple of different approaches. One is, if at all possible, can we design experiments so that no matter what the result is, it is interesting and informative, assuming that the methods are sound and the data are interpretable? And so that’s something we’ve tried to be consistent about in our research: “Okay, can we pose questions that whether the result is traditionally significant or not, the answer is nevertheless interesting?” And that’s one of the advantages of doing something like what we did in this paper, where we come up with a couple of different possibilities, make predictions on how the data will look based on those possibilities, and then outline what the implications would be. And that changes things from, “Oh, it was not significant so it’s not important,” or “Oh, it’s not interesting because it wasn’t significant” to “Oh, that’s consistent with this model. What would that mean? What further questions does that generate?”
So there’s a tremendous and vigorous debate about null hypothesis significance testing and p-values. And there are those in favor and those against. But, for me and Dan – when we worked on this and other projects – it was much less about making an ideological statement, and more about asking ourselves, “What do we want out of this data?” So, do we want to just know, yes or no, whether there is an effect here? Do we need to know the magnitude of the effects? And, in general, what we are most interested in are the point estimates, and also the magnitude of the differences between the different conditions and so on. And so, for us, we’re like, “Well, we’ll just estimate it with some confidence intervals. Get a sense of how variable these effects appear to be, and interpret that.” Because we’re less interested in whether a result is significant, and much more interested in, “Well, is this a little effect that’s not likely to have much of an impact, or is this an enormous effect that, you know, you can see from space?” So we – for these projects – go much more with the estimate-and-report-variance approach.
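The estimate-and-report-variance approach Katherine describes can be illustrated with a simple percentile bootstrap. The sketch below is an editorial illustration with made-up data; it is not the analysis code from their paper:

```python
import random

def bootstrap_ci(noticed_a, noticed_b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the difference in
    noticing rates between two conditions, given lists of 0/1 outcomes
    (1 = noticed the unexpected object) per subject. Illustrative only."""
    diffs = []
    for _ in range(n_boot):
        # resample each condition with replacement and record the
        # difference in proportions
        a = [random.choice(noticed_a) for _ in noticed_a]
        b = [random.choice(noticed_b) for _ in noticed_b]
        diffs.append(sum(a) / len(a) - sum(b) / len(b))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2))]
    return lo, hi
```

For example, with hypothetical data of 60 of 100 noticers in a long-exposure condition and 55 of 100 in a short one, the interval for the 5-point difference spans zero and is wide, conveying both the size of the effect and how uncertain it is – information a bare verdict of “significant or not” would not provide.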
Inattentional blindness in the real world
Leigh: Inattentional blindness isn’t just a laboratory curiosity; it can also happen in the real world and under more natural conditions – such as, say, a bike nearly running into us as we walk down the sidewalk engrossed in our mobile phone. Since Ryan and I are routinely overconfident in our ability to multitask, we were curious to learn how Katherine anticipates that her research might be applied.
Wood: One thing that would be important to know, going forward, is whether this finding that we observed is true in all scenarios. So, is this also the case for real-world objects in a complicated environment? Like, if you’re driving. If it is, it’s fairly terrifying, because it means that if you don’t notice something the first time it enters your visual field, there’s a chance that you will not ever notice it. In terms of doing a complicated and dangerous task like driving, that has sobering implications. And it’s possible that in the real world these effects work differently, or they’re attenuated. But we also spend a lot of our time dealing with screens and objects on screens. And, I think a little more benignly, what these results might suggest in terms of designing websites or tutorials is: you know, if you have your important little pop-up in the bottom corner only show up once, there’s a chance people will never detect it. Or, if they’re playing a video game and they’re looking for an item in an environment: if they miss it the first time, they may be very unlikely to ever find it again. And that will lead to frustration. So even if these results don’t necessarily present themselves the same way in the real world, I think examining them in the context of these relatively simple computerized displays nevertheless can be useful.
Links to article, bonus audio and other materials
Watkins: That was Katherine Wood discussing her article “Now or never: Noticing occurs early in sustained inattentional blindness,” which she co-authored with Daniel Simons and published on November 20, 2019 in the open-access journal Royal Society Open Science. You’ll find a link to their paper at parsingscience.org/e66, along with bonus audio and other materials we discussed during the episode.
Leigh: Parsing Science also tweets news about the latest developments in science every day, including many brought to our attention by listeners like you. Follow us @parsingscience, and the next time you spot a science story that fascinates you, let us know and we might just feature the researchers in a future episode.
Preview of next episode
Watkins: Next time, in episode 67 of Parsing Science, we’ll be joined by Temple Grandin from Colorado State University’s Department of Animal Sciences. She’ll talk with us about her work translating academic studies of ethology and animal behavior for practical application in commercial livestock and poultry farms.
Temple Grandin: We’re gonna have to communicate a whole lot better, and not just fall into jargon. And in my work on improving how cattle are handled, a very important part of my career was public speaking. And another important part of my career was writing. And one of the issues we’ve got right now – with some of the graduate students and undergraduates in the last five years – is very bad writing skills.
Watkins: We hope that you’ll join us again.