Why do some of us choose to remain ignorant of information that – though perhaps unpleasant – could help us make better informed decisions in the future? In episode 76, Emily Ho from Northwestern University’s Department of Medical Social Sciences discusses her research into why we keep our heads in the sand about important information for a variety of psychological and economic reasons. Her article, “Measuring information preferences,” was published on March 13, 2020, with David Hagmann and George Loewenstein in the journal Management Science.
Websites and other resources
- Emily’s website and Twitter feed
- Take Emily’s Information Preferences Scale yourself!
Bonus Clips
🔊 Access bonus content here.
Support us for as little as $1 per month at Patreon. Cancel anytime.
We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.
Hosts / Producers
Ryan Watkins & Doug Leigh
How to Cite
Music
What’s The Angle? by Shane Ivers
Transcript
Emily Ho: When real people are faced with a real life decision to obtain really important information, they choose not to get it.
Doug Leigh: This is Parsing Science. The unpublished stories behind the world’s most compelling science as told by the researchers themselves. I’m Doug Leigh.
Ryan Watkins: And I’m Ryan Watkins. Today, in episode 76 of Parsing Science, we’ll talk with Emily Ho from Northwestern University’s Department of Medical Social Sciences. She’ll discuss her research into some people’s choice to remain ignorant of information that, while unpleasant, could help them make better informed decisions in the future. Here’s Emily Ho.
Ho: My name is Emily Ho. I am a PhD student at Fordham, and in the summer I’ll be joining Northwestern University’s School of Medicine at the Department of Medical Social Sciences as a research assistant professor. I did my undergraduate studies at NYU in downtown Manhattan, where I grew up. And sometime in the middle of my undergrad studies I got really interested in the measurement of psychology. All of these psychological phenomena are kind of swirling around us, and how do we sit down and actually quantify them? You know, if I were to ask you how tall you were, it’d be easy to just pull out a ruler and give you a number. But with things that are a bit more abstract – like depression or, you know, how introverted you are – it can be a little harder to do that. And that’s where the field of quantitative psychology comes in: where people develop models and scales to try to measure precisely things that are maybe not so precise. So I decided to pursue my graduate studies at Fordham which, conveniently, is also in New York and also has a program in psychometrics and quantitative psychology. And that’s where I am.
Information avoidance
Leigh: It’s said that there are three types of knowledge: knowing what you know, knowing what you don’t know, and not knowing what you don’t know. But despite it now being easier than ever to obtain information, when it comes to the most important decisions in life people often choose to remain ignorant for a variety of psychological and economic reasons. Ryan and I began our conversation with Emily by asking her why we sometimes choose not to know that which we could.
Ho: The conventional – I’m going to say conventional with a quotation mark here – knowledge in the literature, particularly the economics literature, is: given that information is free. Assuming it is, I should say, because it’s not always free. Assuming that information is free and that it’s easy to obtain, why shouldn’t you get it? Even if it doesn’t change your decision-making in any way, you should always get, you know, what’s free. Kind of guided by this conventional wisdom, the standard argument in the literature is like, “There’s no one that’s information avoidant,” or if anyone is, it’s just noise in the experiment. And so we shouldn’t account for people’s information preferences at all. Well, in the last maybe 10 years, there’s been a lot of empirical evidence – a lot of studies and experiments – showing that when real people are faced with a real life decision to obtain really important information, they choose not to get it.
So an example is the study by Emily Oster, who’s a health economist at Brown, where she samples all these people who have a risk of having Huntington’s disease. And Huntington’s disease is this neurodegenerative disease that, if one of your parents has it, you have a 50% chance of getting it yourself. And if you have it, it’s a very debilitating disease, with really early onset symptoms. And she samples all these people who have a chance of getting it, and she finds that most of them – the overwhelming majority, 93% of them – don’t want to get the information from this test. So that’s a pretty big piece of evidence showing that there is information avoidance. It is prevalent, and it might have really long term consequences. That’s kind of how this idea starts.
The Information Preferences Scale
Watkins: Sometimes researchers create tests and questionnaires out of necessity, so that they are able to answer a broader research question in which their instrument serves as a means of measuring just one of the study’s variables. Other times the development of the instrument is – in and of itself – the whole point of the study. Emily’s work on developing what she calls the Information Preferences Scale falls into this latter category. So we asked her to tell us how she describes the measure and how it works.
Ho: It’s a 13 item scale. It covers three domains: the health domain, the finance domain, and the social domain. And it also has some general purpose information avoidance questions. And in the last study, we tacked on a subscale for organizational behavior. And basically, it’s meant to be something that’s easy to administer, whether you’re doing it online or in the field. And trying to get a sense of people’s propensity to not obtain information that could potentially be useful to them. And in our earlier iterations of the scale we were really focused on like, “Do you want to know this information if it’s bad?” Or, you know, “If it’s good?” And it turned out that people answered really inconsistently, maybe because of bias. And so we ended up making it very neutral. You don’t know if the information is bad or not. That’s kind of between you and your pessimism. And so that yielded much more, sort of, consistent results.
Origin of the scale
Leigh: Ryan and I were curious about how the idea for developing the Information Preferences Scale first emerged, and what got Emily excited about committing to the long and often arduous process involved in creating a new questionnaire with sufficient evidence of validity and reliability for general use by others.
Ho: You know, there’s obviously the literature. But there’s also me and my two co-authors: George Loewenstein and David Hagmann. And we were just kind of talking about how this differed by ourselves. Like, “Oh, if you were an avoider, do you avoid information?” or whatever. And David was like, “Oh, I’m a total information seeker. I would want information even if it hurts my feelings,” or whatever. And George said, “Oh, I’m a total ostrich. Right? I bury my head in the sand when the news is bad. And that’s it.” And I, like, I thought I was somewhere in the middle. And so we thought this was interesting because, you know, here were three of us with really different views. So that’s how this project started.
And arguably, like, the most fun part of the job is creating items. Every psychometrics class you sit in on, you never really learn how to create items. It’s just given to you, you know, in a CSV file. And you’re just supposed to kind of do some latent variable modeling with it, and come up with the numbers you’re supposed to come up [with]. So the first part, you know, the creativity, the generation part was definitely … even the choice of what the item responses should be, right? Should it be yes/no? Should it be on a continuum? Things like that. Those were decisions we had to make along the way. Maybe the second thing is how we were going to validate it. In the paper we have people complete the scale and then make a decision. And that was something that kind of came along later in the process: pinning down exactly how we were going to validate this scale. And, at some point, to pilot the items – like informally, kind of – I started giving it to all my friends. I wouldn’t tell them what it was, because I don’t want to, like, bias them. And I think I literally said, “This is a scale that can kind of tell you what kind of potato chip you are.” You know, like those BuzzFeed quizzes. And I don’t know why they believed me, but they did, and I got some good responses that way, that were probably, you know, less prone to bias than if I had actually told them, “This is a scale to measure whether you avoid information or not.”
How control affects information-seeking
Watkins: While information avoidance may be a trait, sometimes we can act on information and other times we can’t. For example, you might choose to rebalance your 401k in favor of stocks when the market is rising, but – due to limits on the frequency of trades that are permitted within a 30-day period – be unable to adjust it back if the market crashes the next day. Since the ability to control how we act on information seems like it might affect the degree to which we seek or avoid information, we were interested in hearing Emily’s thoughts on the matter.
Ho: Whether you have control – or whether you think you can do anything about the information – that’s a sort of interesting point. Because one could argue that you can always do something with the information. In the example of the Huntington’s test – the disease – you can’t do anything about whether you have a fatal disease or not, obviously, because there’s no cure. But there’s other things you could do: you could plan for retirement better. If you’re, you know, maybe slogging through your PhD studies and, you know, you only have a few years left. Maybe you don’t want to finish your PhD … or other degrees. Things like that. So there’s some interesting things about whether people just look at the information and think it’s, you know, “I can’t do anything about this specific piece of information. So I’m not going to get it at all.” Or people who kind of really think, “Oh, like this will help me make different choices in other areas of my life.”
Developing scenarios for the scale
Leigh: Developing items for the survey was especially fun for Emily because – unlike typical questionnaires in which the questions are posed on an agreement scale with directions like, “Please indicate the extent to which you agree with the following items using the strongly agree to strongly disagree scale below” – or even a frequency scale asking you to respond “always to never” – she and her team developed scenarios about the degree to which a respondent is an information seeker or avoider with response options ranging from “definitely don’t want to know” to “definitely want to know.” And if you’d like to take it for yourself, you can find a link to the scale at parsingscience.org/e76. Here’s what Emily had to say about this approach.
Ho: Well, it was a super-fun process. I will definitely tell you that I think that one of the questions – it’s like, “You decide to go to the theater for your birthday [and] give your close friend or partner your TV” … “your credit card,” sorry. “So they can purchase tickets for the two of you, which they do. And you’re not sure but you suspect that the tickets are expensive. Do you want to know how much the tickets cost?” So I think this question literally came after an evening that George spent at the theater. And he was like, “I don’t know if I want to know this,” and then we tested out the item, and that worked. And so we just came up with a bunch of scenarios that we thought would be possible grounds for avoidance or seeking, either way. We ran a lot of pilots, using a lot of different kinds of wording, to see which ones would give us the most stable psychometric results over time. Definitely, a few of the earlier pilots – where we had a simple, like, “Do you want to know this information? Yes/No” – that didn’t yield really great stability. Like, people would change their minds over time. And so we wanted to get at certain scenarios that were really stable, and didn’t have too many people saying they were going to avoid, and didn’t have too many people just saying they were going to seek the information.
The benefits of scenarios
Watkins: Unlike typical scales, which rely on abstract questions – such as “When it comes to my finances, ignorance is bliss” – concrete scenarios can better predict consequential information acquisition decisions than abstract questions do. So we followed up by asking Emily to tell us more about the benefits of using scenarios in questionnaires like hers. We’ll hear what she had to say after this short break.
ad: Altmetric’s podcast, available on SoundCloud, Apple Podcasts & Google Podcasts
Watkins: Here again is Emily Ho.
Ho: It’s really a different approach to how traditional psychological measures are developed, where they’re much more interested in reporting, like, how you feel and what your tendency to do certain things is. And what’s really cool about these scenarios is that they really force the participant to think about themselves in this situation and then give a response. That’s – we’d like to think – maybe a little bit more immune to things like self-report bias or social desirability bias, where people are kind of in this situation, and then they rate “Do I probably want to know or don’t want to know?” And so, it’s really useful for two reasons. One is that it is – as we show in the paper – more linked to behavioral outcomes. More linked, literally to whether people choose to want to know information or choose to avoid it than other scales that are more abstract in nature. And the second thing is that it’s kind of more tied to like certain domains. Obviously, you know, for health information researchers are really interested in health decisions. Having that tie in, I think, makes the leap to external validity and kind of real-world generalizability much more apparent and … yeah, much more apparent. And then the caveat, I’ll say, is that the scenarios, by definition, are pretty specific. And we try to make it sort of general to a large population, but for maybe different purposes different scenarios might suffice. And that’s kind of an interesting open question about, you know, what kind of decisions that the scale can predict. I mean, we show that it predicts decisions across a lot of different domains. Not only just the ones represented in our scale. But, you know, we’re open to using this as a model for future scale development. And so I think that’s a really open both substantive and sort of methodological question.
Information preferences, risk tolerance & aversion, and need for closure
Leigh: As Emily explains next, one of her hypotheses was that those who are more tolerant of risk would be more interested in obtaining information and less likely to avoid it. Another was that a relationship exists between information preferences and other established constructs theorized to be related to their measure, such as risk aversion and need for closure.
Ho: We wanted to test a few things. One was we wanted to basically test this scenario based approach to writing items, and see whether it could actually predict a real-world outcome. Literally, a person has the option to decide whether they want information or not. And then they can either choose to be forwarded to the site or not. So that was like sort of a test of the novel psychometric framework that we used. And then we also wanted to see whether these items … whether they could only predict health, or finance, or social decisions, [or] whether they could predict other things as well. And because some of the behavioral economists I was working with were interested in economics issues, we chose political outcomes, climate change, and, you know, things of that type. We wanted to provide, like, a really robust test of the methodology, and also of our scale. So that’s one thing. And then the second thing that we did was – around the time we were designing this last study – we chanced upon in the literature a scale that was literally called “Establishing an Information Avoidance Scale,” which had been published super-recently. And we were like, “Oh, no,” you know; we saw the title and said, “Have all our efforts been for nothing?” So we read it, and they were, like, attitudinal measures like, “When it comes to X, I would rather not know this,” or “Ignorance is bliss,” blah, blah, blah. And so we said, “Well, obviously we have to put these items to the test as compared with our scale.” So, sort of, it provided actually a very neat test of the methodology and its predictive validity.
Predictive validity
Watkins: Doug teaches psychometrics: the branch of survey research that enables researchers to determine how good a survey is at doing what it is intended to do. So he was eager to have Emily elaborate on how she thinks about predictive validity, as well as what she and her team did to establish what evidence of it the Information Preferences Scale has.
Ho: Predictability is kind of this … like, you’re trying to say the scale predicts an actual behavior, right? Whether it’s success in college, or whether people will actively avoid information or not. So, there are a lot of what I’ll call “robustness checks” that one could implement in the future. Like, one is looking at, like, time lags: whether taking a scale in January will predict a decision in December. So that’s one way that people test validity. Another way is just to expand the set of possible decisions. So we did a lot with the politics and climate change and things, but you could use it for many other decisions and see when it stops predicting things. So those are two ideas that come to mind.
I will say that it’s really hard to definitively establish predictive validity in basically any measure, just because there are so many ways in which predictive validity can be rejected. Let’s put it this way: the thresholds are pretty high. And I think it’s really important, when you’re developing measures – any measure – to really carefully delineate what the scale does and doesn’t predict. Just because you haven’t done the study doesn’t mean it doesn’t predict whatever outcome. But you have to be really careful to demarcate the spaces that your study has shed light on. And I think in a lot of studies people don’t do that as rigorously as maybe they ought to.
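The kind of predictive check Emily describes – does an earlier scale score predict a later, consequential choice? – can be roughed out with a toy simulation. Everything below is invented for illustration (the scores, the decision, and the effect size are all simulated, not taken from the paper); it just shows the mechanics of testing whether a continuous scale score predicts a held-out binary decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 800

# Hypothetical scale scores (e.g., a standardized information-seeking
# total) and a later binary choice weakly driven by those scores.
scale_score = rng.normal(size=n)
p_seek = 1 / (1 + np.exp(-0.9 * scale_score))
sought_info = rng.binomial(1, p_seek)   # 1 = chose to see the information

# Predictive check: does the earlier score predict the later decision
# out of sample?
X_train, X_test, y_train, y_test = train_test_split(
    scale_score.reshape(-1, 1), sought_info, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"out-of-sample AUC = {auc:.2f}")   # chance level is 0.50
```

Swapping the simulated decision for a real one measured months after the scale was administered is exactly the time-lag robustness check she mentions.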
Convergent validity
Leigh: Another step in validating a measure is providing evidence of convergent validity: whether your scale performs akin to another, similar scale. In an equal but opposite way, newly developed scales should also demonstrate evidence of “discriminant” or divergent validity, which is the mirror image of convergent validity. That is, scores from measures of two constructs that should be unrelated to one another are indeed uncorrelated. We asked Emily to discuss how she sees these ideas, as well as how she and her team examined the convergent and divergent validity of their Information Preferences Scale.
Ho: For example, you expect that, you know, the ACT and the SAT to agree pretty highly because they’re both tests that are designed for the same purpose, which is to assess your college readiness. So you’d expect these two scales to exhibit convergent validity. And an example of divergent validity is, maybe, the SAT with something that’s really unrelated, like depression: you know, an emotional thing with a sort of cognitive construct. So that’s an example of divergent validity.
And so, typically, to show that your scale agrees with concepts that have been established in the literature, you, you know, kind of try to do a comparison with established psychological phenomena. So we thought that people who had a greater need for cognition – people who want to engage in more thinking – were more likely to also be information-seeking. Or that people who were more receptive to opposing views should be also more information-seeking. So that’s an example of convergent validity. But at the same time you don’t want these correlations to be so similar – like above, let’s say, 0.80 – because then the scale is just redundant, and then, you know, what’s the point in creating your own scale? So it’s kind of this funny balance: you need your scale to agree with constructs that have been established, but at the same time, disagree with constructs you don’t think are related at all. Like, we didn’t really expect that agreeableness – which is one of the Big Five constructs – would agree with – would correspond to – whether you were information-seeking or not. But we put it in there because we wanted to say information preference doesn’t really correspond to this personality trait, but it corresponds to others, right? So this sort of reckoning between different constructs is an essential part of basically situating your scale in the literature, in a way.
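Emily’s convergent/divergent logic reduces to comparing correlations: the new scale should correlate moderately with related constructs (but below the ~0.80 she mentions, or it’s redundant) and near zero with unrelated ones like agreeableness. A minimal sketch on simulated data – all scores and effect sizes here are made up, not drawn from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated trait scores. "Need for cognition" is built
# to share variance with the new scale; "agreeableness" is independent.
need_for_cognition = rng.normal(size=n)
agreeableness = rng.normal(size=n)
info_seeking = 0.6 * need_for_cognition + 0.8 * rng.normal(size=n)

r_convergent = np.corrcoef(info_seeking, need_for_cognition)[0, 1]
r_divergent = np.corrcoef(info_seeking, agreeableness)[0, 1]

# Convergent: a moderate positive correlation – but below ~0.80,
# since higher would make the new scale redundant.
print(f"convergent r = {r_convergent:.2f}")
# Divergent: essentially zero correlation with the unrelated trait.
print(f"divergent  r = {r_divergent:.2f}")
```

The "funny balance" she describes is exactly this window: convergent r clearly above zero, clearly below redundancy; divergent r hugging zero.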
Exploratory and confirmatory factor analysis
Watkins: Machine learning – the branch of computer science interested in developing algorithms that can predict future events based solely on past data – often relies first on training a computational model based on a subset of a dataset, then using the remainder of the dataset to evaluate the performance of the model on data that the computational model has not yet seen. It occurred to us during our conversation that the last two validation techniques that Emily and her team used – exploratory factor analysis and confirmatory factor analysis, or EFA and CFA for short – use similar methods for providing evidence about the construct validity of a questionnaire.
Ho: After you’re done with all the fun parts – like creating the items and testing them out, and making sure that you have a sort of proper distribution of responses for each item – you then need to test for two main things. One is reliability, which is how stable the instrument is over time. You don’t want an instrument that keeps hitting different targets and gives you a different response within a short period of time, because that suggests that the instrument isn’t really reliable. So that’s one thing that you have to do. The second thing is to do something called latent variable modeling, where you try to map the items that you have – like the 13 items on the Information Preferences Scale – onto what we’re going to call latent factors. And these are the psychological constructs that your scale is comprised of. So, one of the hypotheses we had going in for this study was, “Is information preference domain-specific, or is it just a general construct? Were we just going to have all the items map onto one construct – information avoidance – or was it kind of going to group into financial information avoidance, health information avoidance, social information avoidance?” And so, through a process of exploratory factor analysis and things like that, we were able to kind of discern that answer. And for us it was kind of both. It was mostly domain-specific, but we found that a general model that included the three different domains also fit really well. That’s kind of the psychometric validation part of that study.
And so typically an exploratory factor model is done first, where you put all the items in, you know, a model and look at the model fit. And so, it’s used often for things like figuring out which items are great or which ones are kind of on the fence. You typically want the exploratory factor models to have a loading – which is kind of like a regression coefficient – above 0.4, 0.5. That’s generally the cutoff. And then, once you have that, you kind of have an idea of the model that the scale has, whether it’s one factor, two factors, whatever. And so once you kind of have an idea of what that is, then on a different sample – like, you kind of go on MTurk again and have the participants take that scale – you run the same model as a CFA. In your EFA you said, “Oh, yeah. It’s a two factor model,” and then on your CFA – with a completely different set of participants – you’re going to say, “Okay. I’m going to impose a CFA that’s a two factor model, and then I’m going to see how well the model fits the data.” So usually it’s a two-part process.
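The two-step, split-sample workflow Emily describes can be sketched in Python. The toy example below simulates items loading on two hypothetical factors, runs an exploratory factor analysis on one half of the sample, applies the ~0.4 loading cutoff she mentions, and then evaluates the fitted structure on the held-out half. Note this is only a stand-in for a true CFA: scikit-learn’s `FactorAnalysis` doesn’t fix loadings or report fit indices the way SEM software (e.g., lavaan or semopy) does, and all the data here are simulated, not from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 1000

# Simulate 6 items driven by two hypothetical latent factors
# (say, 3 "health" items and 3 "finance" items).
f_health = rng.normal(size=(n, 1))
f_finance = rng.normal(size=(n, 1))
items = np.hstack([
    f_health * np.array([[0.8, 0.7, 0.75]]),
    f_finance * np.array([[0.8, 0.7, 0.75]]),
]) + 0.5 * rng.normal(size=(n, 6))

half = n // 2

# Step 1 (exploratory): fit on the first half and inspect loadings.
efa = FactorAnalysis(n_components=2, rotation="varimax").fit(items[:half])
loadings = efa.components_.T          # shape: (items, factors)

# Retain items whose largest absolute loading clears the ~0.4 cutoff.
keep = np.abs(loadings).max(axis=1) > 0.4
print("items retained:", int(keep.sum()), "of", items.shape[1])

# Step 2 (confirmatory stand-in): evaluate the half-1 model on the
# held-out half via average log-likelihood (higher is better).
print("held-out avg log-likelihood:", round(efa.score(items[half:]), 2))
```

The split into a model-building half and a held-out half mirrors the machine-learning train/test analogy from earlier in the conversation.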
Why ignorance can be bliss
Leigh: In the end, after completing four separate pilot studies with the initial versions of their scale, Emily and her team settled on a final instrument that contains five personal characteristic items, three items about health, three about finances, and two general information avoidance items. What they found through these two additional studies was that information avoidance is a stable trait. So Ryan and I wondered why she thinks it is that people tend not to change their minds … about changing their minds.
Ho: I think some people just have very strongly held beliefs that are very difficult to shake, even in the face of a lot of evidence. There are sort of some things in the literature that suggest that even if you’re given information, sometimes that can even backfire, and kind of make you even more convinced of your beliefs, even if you’re given something to the contrary. So that’s a very interesting psychological phenomenon we don’t know that much about yet. And I think it’s a harder question to answer about … well, you know, we did sort of establish that information preference seems to be a distinct trait that’s sort of stable over time. I shouldn’t say sort of stable; it is stable over time. But we also say that it differs by domain. So, you know, the same people who avoid health information aren’t the same people who avoid finance information. And, like, why is that? And is that stable over time? That’s totally an open question. There hasn’t been a lot of work done on the specific contexts in which people will avoid information. So I don’t know if the story is as simple as, “It makes me feel bad, so I’ll avoid it,” because there are definitely issues where – just to give you a really funny example – like, people watch depressing movies all the time, right? And they watch depressing movies sometimes knowing exactly how depressing it’s going to be. So why do they stay and watch until the end? So that’s an interesting kind of ruffle in the whole landscape of how we consume information and things like that.
But there’s definitely a motivated reasoning component to it. And motivated reasoning is just a concept developed by Kunda in 1990, where she talks about people selectively editing which information they attach the most weight to when making a decision. So it’s basically the idea of, you know, you really want to eat the dessert. And you know that if you are going to know what the number of calories is, it’s going to make you not eat the dessert. And so, as a result, you avoid knowing that to begin with. So it gives you a sort of willful ignorance to do the things you really want to indulge in, even though you secretly know it’s not the best decision for you. Sort of like, if you have a strong preference for something, it doesn’t matter: all the news in the world telling you it’s a bad idea, you’re just still gonna do it anyway. Which is kind of interesting; it kind of speaks to people’s strong antecedent preferences for things. Even if, you know, they shouldn’t have those preferences or you know …
Future research
Watkins: We ended our conversation by asking Emily what she believes might be some of the fertile areas of future research into our preferences regarding information, as well as our desire to sometimes avoid it.
Ho: The boundaries of when people avoid information or when they don’t. How this triangulates with whether more information makes us happier or less happy, and when we need to kind of trade off happiness and practicality, and things like that. Of course, I’m not an economist by training, so I don’t know, that sort of thing. But that’s super interesting to me. We used to think that all you have to do is give people information and that’s it. But it’s a bit more complicated than that when, like, you get in the way of yourself, you know? And sometimes, one could argue, right, it’s strategic, too, because …
Cass Sunstein has this great example of popcorn where, “Well, you know, they start putting calorie labels on everything. And then you go to the theater and you really want popcorn. And then you look at the calories. And then you’re watching your movie without popcorn, and you’re just a lot less happy. And was the information a good thing or not, right?” You walk out of the theater and you’re a lot less satisfied, and maybe it would have been better for you not to know the calories, you know, and you would have, you know, had a much better experience at the theater so that, you know, that’s interesting as well to me.
Links to manuscript, bonus audio and other materials
Leigh: That was Emily Ho, discussing her article “Measuring information preferences,” published along with David Hagmann and George Loewenstein in the journal Management Science on March 13, 2020. You’ll find a link to their paper at parsingscience.org/e76, along with transcripts, bonus audio clips, and other materials that we discussed during the episode.
Watkins: If you like what you’ve been hearing, then head over to Apple podcasts, Google podcasts, or wherever else you might get your podcasts, and subscribe to Parsing Science. And if you do already subscribe, consider leaving a review on iTunes. It’s a great way not only for others to learn about Parsing Science, but also a great way to help spread the work of the scientists on this show.
Preview of next episode
Leigh: Next time, in episode 77 of Parsing Science, we’ll talk with Trevon Logan from the Ohio State University’s Department of Economics. He’ll talk with us about his research into the widespread election of Black Southern politicians during the Reconstruction era, which led to increased tax revenues that were put towards public education and land tenancy reforms … until Jim Crow laws reversed that progress just 12 years later, resulting in the systemic disenfranchisement of these legally elected officeholders.
Logan: The really cool thing about Reconstruction is that we had a truly exogenous event that no one – say, circa 1863, or even 1864 – would have predicted, which is the enfranchisement of African American men. And then the immediate move into holding political office. They were remarkably effective at conceptually understanding what it was that they wanted to do, in terms of having tax policy be used to help to redistribute wealth to African Americans. The fact that they were not able to achieve that as a goal is a function of a lot of things that are happening, not only economically, but also socially.
Leigh: We hope that you’ll join us again.