When real-time fMRI neurofeedback improves people’s symptoms long after treatment, might that influence the guidance that’s provided to patients, and also inform the design of future clinical trials? In episode 60, we’re joined by Michelle Hampson from Yale University’s School of Medicine. She discusses her finding that people suffering from neuropsychiatric disorders may benefit from real-time fMRI neurofeedback, not only while inside the brain scanner, but also for weeks after. Her open-access article, “Time course of clinical change following neurofeedback,” was published with multiple co-authors on May 2, 2018, in the journal NeuroImage.
Websites and other resources
- Michelle’s lab
- PubMed entry for article
- The Tourette’s study: “Randomized, sham-controlled trial of real-time fMRI neurofeedback for tics in adolescents with Tourette Syndrome”
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
🔊 Patrons can access bonus content here.
Hosts / Producers
Doug Leigh & Ryan Watkins
How to Cite
Leigh, D., Watkins, R., & Hampson, M. (2019, October 18). Parsing Science – Enduring effects of neurofeedback. figshare. https://doi.org/10.6084/m9.figshare.10002581
What’s The Angle? by Shane Ivers
Hampson: If they’re only sampling immediately afterwards, they’re losing a humongous amount of their power … and they could come up with a null result for an intervention that was actually effective.
Leigh: This is Parsing Science: the unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Doug Leigh.
Watkins: And I’m Ryan Watkins. Today, in episode 60, we’re joined by Michelle Hampson from Yale University. She’ll discuss her research suggesting that people suffering from neuropsychiatric disorders may benefit from real-time fMRI neurofeedback, not only while inside the brain scanners, but also for weeks afterwards.
Hampson: Hi, I’m Michelle Hampson. My interests have evolved, but recently I’m very interested in using neurofeedback to try to train people to better control their brains. So, I did my undergraduate work at University of Alberta, where I studied computer science, and in the process of doing that I took some artificial intelligence courses and was really struck by how hard it is to do the things that the human brain does. So I became fascinated with the human brain and then I went and did my graduate work at Boston University at a department called Cognitive and Neural Systems, which basically was neural network models of brain function. And then I wanted to do something more empirical, so I took a postdoc at Yale working for somebody who did brain imaging. The idea was I would come down and I would model what was happening in the brain based on their brain imaging data. And when I got here I realized that I liked collecting and analyzing brain imaging data a lot more than I liked the modeling, so I kind of went into brain imaging. And that’s how I ended up where I am.
Watkins: Research done through Michelle’s lab at Yale involves imaging clinically relevant human brain functions to study and treat various psychiatric and neurological disorders. For the study we’re discussing with her today, she and her postdoc, Mariela Rance, along with their collaborators focused on patients with obsessive-compulsive disorder or Tourette’s syndrome. So Doug and I asked her to describe the characteristics of each of these illnesses.
Features of obsessive-compulsive disorder and Tourette’s
Hampson: So one of the studies was a study of obsessive-compulsive disorder, or OCD: a disorder characterized by obsessive thoughts that induce a great deal of anxiety, and compulsions, which are actions you perform to minimize the anxiety induced by those thoughts. So, for example, your thought might be “I think I left the stove on,” and your compulsion might be to go back to your house and check the stove … which is not unusual to do once, but if you have to do it seven times before you leave, and you’re still worried about it as you’re driving to work, you know, then you’re getting into the domain of a clinical issue, where it starts to affect your functioning in your life. And the OCD study is a study in an adult population. So there’s a variety of different types of OCD symptoms that OCD patients experience, but meta-analyses and factor analyses have broken these down into four main types. One of these is contamination OCD. So this is kind of fear of human excrement, any sort of bodily fluid, or anything that could make you sick … fear that you’re going to make other people sick, etc. And a common compulsion, for example, would be hand washing, or washing in general. Another type of OCD is checking symptoms. So that’s like, “Did I leave the door unlocked?” The third type of symptom is symmetry. And then hoarding has now been classified as a totally different type of disorder. We decided to focus on two of these dimensions because for each dimension we focused on we had to develop a whole different stimulus set, and that’s a lot of work. So it was a lot of work just to include two different groups, and by including these two groups we get the majority of OCD patients. So that’s the population that came in for our OCD study: adults who had either contamination- or checking-related OCD. The Tourette’s syndrome study is a completely different clinical trial where we were recruiting adolescents – so teenagers for the most part – who have Tourette’s syndrome.
Tourette’s syndrome is a disorder which actually peaks in adolescence and often resolves, though not for everyone: in many, many young people the symptoms do clear up when they become adults, but some people struggle with it for their whole lives. Basically the symptoms of Tourette’s syndrome are tics. These include motor tics; so these are just unwanted repetitive movements. And they also include vocal tics, which can be kind of sounds that they make. They can even be complex vocal tics like words or phrases. The motor tics can also be complex. So, they can have very simple motor tics, like a cheek moving, or they can have complex motor tics, like pretending to lift up a machine gun and fire it. For some of the kids who have this disorder, the tics are completely involuntary, but for others they report that they have some control; they can repress them for a little bit. But they have a sort of unbearable urge to do the movement. So kind of like when you have to cough and, you know, you can stop it for a little while, but you really can’t hold it back indefinitely. And I was very interested in the supplementary motor area as a key region in this disorder – partly because I’d done a neuroimaging study which had highlighted that region – so we thought that the supplementary motor area might be a region which was kind of giving rise to tics. And if we could reduce activity in this area, or allow people to control it, we were hoping that this would allow them to control their tic symptoms.
Origins of the study
Leigh: Michelle’s the Director of Real-Time Human Functional Neuroimaging research at Yale, which means that she routinely collaborates with other researchers investigating the role of brain activity in a variety of psychiatric conditions. Like many researchers, she often participates in multiple clinical trials simultaneously, allowing for the identification of trends among studies which might otherwise go unnoticed. We asked Michelle to describe how this particular article – on how patients’ symptoms change over time following neurofeedback – originated.
Hampson: So this particular paper actually came out of two different clinical trials we were running. We were really working in two separate groups. One group, I was working with the OCD research clinic led by Chris Pittenger and many of the people who work in his clinic. And that was for the OCD neurofeedback study. Then there was a whole other group of people I was working with, including Denis Sukhodolsky and Christopher Walsh, who were running the Tourette’s syndrome neurofeedback study. And originally these were two separate studies; we had no intention of combining data across these studies. But when we saw this unique pattern that kind of surprised us in the OCD study, I just decided to look at other data that I had – the other study I happened to have substantial data on was the Tourette’s syndrome one – to see if we saw a similar pattern. And when we saw what looked like a similar pattern in that dataset, then we decided to, like, pool the data. Actually Dustin Scheinost, who is one of the authors on the paper, suggested to me that we combine the data from these two studies. And at first I was extremely resistant to the idea because I thought, “Are you kidding?” You know, these are two separate clinical trials I’m running; I need to get publications. I can’t publish the interim data from these two clinical trials in one paper – that’s, like, you know, years and years of work being tossed into one little paper, right? But then we came up with this creative idea of not revealing the outcome results from either of the trials individually: that we could pool the data and analyze the data together, and not actually publish the clinical results from either application independently. So, in the end, he won me over and we ended up going that route.
What real-time fMRI neurofeedback is
Leigh: Whether through personal experience or the popular media, most of us are familiar with the cross-sectional images of the brain produced through functional MRI scanners. Often these images are color-enhanced to highlight the areas of the brain which activate in response to some stimulus. But, if you’re like us, you might be less familiar with the application of fMRI to neurofeedback, by which people can be taught to modulate their brain functioning through real-time monitoring of their brain states. Since the use of neurofeedback has historically been carried out by electroencephalography – or EEG – Ryan and I were curious to learn what real-time fMRI neurofeedback involves, as well as how much computational power is required to do it.
Hampson: So real-time fMRI refers to the fact that we analyze the fMRI data as we’re collecting it. So traditionally, fMRI data were analyzed days or weeks or months after they were collected, because analyzing functional MRI data is very computationally intensive. But in recent years, people developed the capability to analyze it quickly, so that you can really extract from the data some aspect of brain function that you’re interested in, and you can monitor how that’s changing over the course of a functional MRI scan. And so this allows you, for example, to give feedback to the subject showing them how some, you know, aspect of their brain function is changing, so they can sit there and try to learn to control that aspect using the feedback that you’re giving them as a direct training signal. I think in the early days we thought, “Well, it’s a long shot, but let’s just try training people a little bit.” And there were some early studies that came out and showed that even just with a few runs in a single session, you can improve people’s ability to control certain aspects of their brain function. And so that early data was promising. That, “Hey, we can actually shift brain function even with a little bit of exposure.” Building on that, you know, we’ve kind of gone to two to three days for many of the clinical applications, and the results have actually been remarkable. You do seem to be able to get effects even with such a small amount of training. And typically when we analyze our neuroimaging data we’re doing statistical analyses which are quite complex on the entire scan of data. But the computation that’s needed for us to give feedback immediately is not really that intensive. I think it just didn’t occur to people that they can actually extract interesting information very quickly and give feedback during the scan, because you’re not actually having to do statistics on the whole run.
So there are now ways available to do statistics volume-by-volume as it’s coming in. So we can do fancier stuff, but actually what we’re giving feedback on is usually very simple and it’s not that computationally intensive.
Designing the study
Leigh: By imaging patients’ brain functioning as they tried various techniques to mitigate their OCD or Tourette’s symptoms, Michelle and her team were able to provide feedback in real-time about how successful they were in regulating that brain activity. Ryan and I were eager to learn how Michelle went about designing her team’s study, and what the patients’ participation entailed. We’ll hear what she had to say after this short break.
Leigh: Here again is Michelle Hampson.
Hampson: Of course this was a randomized double-blind trial, so half the subjects received real neurofeedback and the other half received a sham form of feedback, which we call yoked sham. And the way that works is that the yoked sham subjects are shown exactly the same time courses as their matched real neurofeedback subjects, and they’re cued to increase and decrease activity in their brain area at the same times. So, to the extent that they’re matched, the sham subjects are kind of misled to believe that they’re having the same level of success in controlling their brain areas. So they would come in for a couple of sessions of neurofeedback, meaning two different days they would get neurofeedback training in the scanner. They also had days where they came in just for assessment scans, where we collected a lot of measures of their brain function that we were interested in seeing if the intervention could alter. And those sessions were scheduled before and after the neurofeedback sessions. So in summary, they came in for four sessions: an assessment, two feedback, and a final assessment. And a very big difference between fMRI neurofeedback and EEG neurofeedback is that in EEG neurofeedback they often bring people in for 40 or more sessions. fMRI, of course, is extremely expensive, and if it took 40 sessions, it would not really be feasible even to do research using fMRI neurofeedback. But in the OCD study we were monitoring symptoms, you know, right after neurofeedback, two weeks after, and a month after. We’d included those follow-ups because we wanted to show that what they learned during the neurofeedback didn’t go away; that they didn’t kind of regress back to their baseline state right away.
Because in clinical studies, you know, if you can use neurofeedback to improve symptoms that’s great, but if it only lasts a day it’s not very clinically useful, because they can’t come in once a week for neurofeedback – or every three days for neurofeedback – for the rest of their lives, right? So you need something that actually has some persistence. So that’s why we included the follow-up: we wanted to show that the effects could last and not disappear completely. So it was completely surprising to us that the effects continued to grow. Like, we literally had not … it had not occurred to us that this pattern would be what we would see. And that’s one reason that we really felt this needed to be published. Because if other people who were designing their studies hadn’t really thought about finding this pattern, then their studies wouldn’t be designed for it. And that could result not just in a huge waste of resources but, you know, in discarding interventions that actually do have promise.
Strategies for controlling OCD and Tourette’s symptoms
Watkins: The patients included in the experiment were selected regardless of whether or not they had been receiving other treatments such as cognitive behavioral therapy – or CBT for short – but only if they had begun doing so three months or more prior to enrolling in Michelle’s study. This was to ensure that those treatments had stabilized sufficiently that Michelle’s baseline measurements were dependable. This got Doug and me wondering about how much information participants in her study were provided about strategies for controlling their OCD and Tourette’s symptoms, as well as how she and her team ensured that such information didn’t bias participants’ performance during and after neurofeedback.
Hampson: There’s a big debate in our field about whether you should give people strategies or not. But in the end we felt we should, just for the purpose of control, so that everybody had the same information going into the study … because it would be unfortunate if you happened to randomize into one group more people who’d had exposure to CBT, you know, than into the other group. So, if the real group, for example, had had a lot of CBT training and the sham group didn’t, then you might expect that the real group would get better just because they had more strategies to practice, more strategies to work with. And so we wanted to make sure that everybody at least had a basic idea of some strategies that could be helpful for controlling the region. And also I think it just makes them more comfortable going in there, where, you know, the first day of the scan you tell them, “Okay, try to control these brain areas.” If they have no idea what they’re doing they’re kind of like, “How am I supposed to do that,” right? So we have a strategy session with a clinical psychologist which is scheduled before they start doing the neuroimaging assessments, so that they have some idea of things they can try to use to control their brain areas before they start. And then the hope is that the neurofeedback is going to, you know, help them refine that ability to control their brain areas.
Watkins: This study made use of a crossover design, which is a common technique in randomized controlled trials such as Michelle’s when there are multiple repeated measurements of each participant’s performance longitudinally, over time. We asked Michelle to explain what crossover designs involve, why they’re used, and what challenges and benefits they can present.
Hampson: So a crossover design means that each subject gets both interventions. So instead of recruiting subjects and randomly assigning half of them to get real neurofeedback and the other half to get sham, every subject who comes into the study is going to get both real and sham. And what is randomized is the order in which they get them. Half the subjects will get real first and sham second, and the other half will get sham first and real second. And they’ll be blinded to which order they’re getting these interventions in. So crossover designs are appealing because you get twice as much data per subject. One of the biggest challenges in running these kinds of studies is recruiting subjects. When I applied for the grant to fund it I started with a non-crossover, randomized design, and I was told that there weren’t enough subjects; I needed to recruit more subjects. I went to the clinical people and they said, “I don’t think we can get more subjects than this for this study.” So I had to get more data per subject, and a crossover design is a wonderful tool for that, in that for every subject you’re getting both real and sham: you’re doubling your data per subject. And not only that, but you’re controlling for all sorts of subject variables like age, medication use, comorbidities, etc. But the problem with crossover designs is if the symptom change induced by the intervention is not just static after the intervention is over, you know … A crossover design works great if, when you’re getting the real neurofeedback, say, you drop five points on your clinical scale – you’re five points improved – and you stay at that level of being five points improved indefinitely until another intervention affects you. But in reality that’s not what we saw. What we saw in the OCD study was that, you know, you’d be five points improved maybe right after the intervention, and then you’d continue, and by a month later you’d be maybe 12 points improved.
So if you have a dynamic symptom change that continues to evolve after the intervention is over, that’s problematic for crossover designs. Because symptom changes driven by the first half of the trial – it’s called the first arm, or the first leg, of the intervention – affect what’s happening in the second half. So, for example, if you got real neurofeedback first and then you got sham neurofeedback: well, if the real neurofeedback caused your symptoms to start improving and to continue improving for a month, then during that window of follow-up you’re running the sham. So the symptoms are continuing to improve because of ‘the real,’ but that symptom improvement is being attributed to ‘the sham.’ A good effect of your intervention is working against you statistically, because it’s being attributed to the control intervention. So it’s deadly for power in crossover designs if you have the kind of symptom pattern that we had. And in general, you know, also if you have a symptom pattern where people improve during the intervention and then they regress back to baseline: any kind of dynamic change in symptoms after the intervention is over makes crossover designs undesirable.
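Hampson’s point about crossover power can be sketched with toy arithmetic, reusing the hypothetical 5-point and 12-point improvements from her example (these are illustrative numbers, not results from the actual trials):

```python
# Toy numbers from the example above: real neurofeedback improves
# symptoms by 5 points immediately and by 12 points a month later;
# assume sham produces no change on its own.
immediate_real = 5    # improvement right after real neurofeedback
one_month_real = 12   # total improvement one month after real neurofeedback

# Parallel-group design: compare one-month change after real vs. sham.
parallel_contrast = one_month_real - 0
print(parallel_contrast)  # 12-point separation between groups

# Crossover, real-first arm: the extra improvement that accrues while
# the subject is receiving sham gets attributed to the sham period.
change_in_real_period = immediate_real                   # 5 points
change_in_sham_period = one_month_real - immediate_real  # 7 points
crossover_contrast = change_in_real_period - change_in_sham_period
print(crossover_contrast)  # -2: the real-vs-sham contrast even flips sign
```

With a delayed, growing treatment effect, the crossover contrast shrinks (here it even reverses), which is exactly why Hampson calls this pattern “deadly for power” in crossover designs.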
Normalizing data for statistical analysis
Leigh: The OCD and Tourette studies targeted different brain areas in distinct patient populations, and used different clinical instruments to assess different types of symptoms. Despite the idiom that “you can’t compare apples and oranges,” Michelle explains how she and her team were able to combine these two very different data sets in their analysis.
Hampson: When you think about combining the data from the Tourette’s syndrome study and the OCD study, these are totally different types of data. Like, in the Tourette’s syndrome study we’re using what is called the YGTSS, which is the Yale Global Tic Severity Scale. And it measures, you know, the number of tics kids are having and how much it’s affecting their life. So that’s a scale that has its own range and variability, and is measuring tic symptoms. For the OCD study, the primary clinical outcome measure was the YBOCS, which stands for the Yale-Brown Obsessive Compulsive Scale. And that’s a totally different scale that has a different range, and is measuring different symptoms. So you can’t just treat these data like they’re coming from the same distribution. And we were kind of like, “Well, how will we combine these data?” And we actually had a couple of different approaches in the paper. The z-score analysis approach was one where we decided we have to get each of the studies into z-score space, so that the data basically have a mean of 0 and a standard deviation of 1. So what we did was we took the baseline data from all the subjects: both the subjects that got real neurofeedback and the subjects that got sham. We took just their baseline data because we didn’t want our normalization to be influenced by what the intervention had done to them; we wanted it to be based on baseline normal ranges of the measure. We took that baseline data and we computed the mean and standard deviation, and then for every single measure collected in the study we subtracted that mean and divided by the standard deviation. And that way we put every measure we’d collected into a z-score normalized space. And then these two data sets were essentially in a very similar space, and we could roll them together and do a GLM analysis on them. So that was the z-score normalized approach.
We also did a different form of analysis where for every subject we computed percent signal change from baseline at all of the post-intervention time points. And we did the analysis on percent signal change. So we basically rolled these two datasets together in two different ways, but the results were very similar regardless of which analysis we looked at.
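The two normalization approaches Hampson describes can be sketched in a few lines of Python. This is a minimal illustration with invented severity scores, not the study’s data, and the function names are mine:

```python
from statistics import mean, stdev

def zscore_normalize(scores, pooled_baseline):
    """Put scores into z-score space using the mean and SD of the
    pooled baseline data only (real + sham subjects), so the
    normalization is not influenced by the intervention."""
    mu, sd = mean(pooled_baseline), stdev(pooled_baseline)
    return [(s - mu) / sd for s in scores]

def percent_change(scores, subject_baseline):
    """Percent change from a subject's own baseline score,
    computed at each post-intervention time point."""
    return [100.0 * (s - subject_baseline) / subject_baseline for s in scores]

# Hypothetical severity scores (invented, YBOCS-like numbers):
pooled_baseline = [24, 28, 22, 30, 26, 25]   # all subjects at baseline
subject = [26, 21, 18, 14]  # one subject: baseline, post, 2 weeks, 1 month

z_scores = zscore_normalize(subject, pooled_baseline)
pct = percent_change(subject[1:], subject_baseline=subject[0])
```

Once each study’s measures are mapped into z-score space this way, both outcome scales share a baseline mean of 0 and SD of 1, so the two datasets can be pooled into a single GLM analysis.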
Subconscious feedback learning
Watkins: After analyzing their data, Michelle and her team found that patients improved not just during neurofeedback but also in the weeks following the treatments. As this is contrary to the reasoning that interventions are most effective immediately after treatment and then fade with time, we asked Michelle what specific coping strategies were most effective in controlling the symptoms of OCD and Tourette syndrome.
Hampson: There are some questionnaires where we asked, “Which strategies did you find most effective?” and “What strategies did you use?” We did not really manage to extract from that a clear idea of something that was particularly effective. So, there’s a whole debate in the field about the extent to which neurofeedback learning in our clinical studies is conscious strategy selection versus subconscious feedback learning. I think the basic science researchers have demonstrated that subconscious feedback learning is a powerful effect in neurofeedback. We don’t know the extent to which that might be augmented with conscious strategy selection, but when we debrief subjects and ask them, “What were you doing?” we don’t see any obvious pattern, any obvious correlations, between what they consciously think they were doing and who’s responding well to the intervention. So when we first saw the OCD data where the subjects’ symptoms continued to improve after the intervention, kind of what we thought is that they’re taking what they’ve learned, they’re implementing it in their life, they’re getting positive feedback, and it’s kind of causing this positive feedback cycle. But whether that’s something conscious or something unconscious is the big question. Is it because they’ve consciously realized that they have to do a certain mental strategy? Or is it just that unconsciously they’re better at doing what they were trying – always trying to do? The analogous thing I always try to explain this with is learning to ride a bike, right? If you go out there and you’ve never ridden a bike and you’re trying to ride a bike, you know, people can sit there and tell you, “Oh, you’ve got to tighten your quad muscle when you start to lean that way,” you know? It’s really not gonna help you. You just have to go out there, use the feedback of the tipping bike, and learn to ride it. And if after you’ve learned to ride the bike somebody says, “Wow, that’s great! You’re balancing really well!
What are you doing differently?” You know, are you gonna be able to verbalize that? Probably not. Because that’s the nature of subconscious feedback learning: you don’t necessarily consciously know what you’re doing differently. It’s just that your brain knows how to achieve what you’re trying to achieve.
What we don’t know about neurofeedback
Leigh: Since neuroscientists aren’t yet certain as to the conditions under which one particular behavioral therapy might be more effective than another, Ryan and I followed up by asking Michelle what then is necessary in order to find out the answers to such questions.
Hampson: I don’t know. And I can tell you, you know, what my guesses were, but the reality is that we need more data. So there are actually many, many parameters in neurofeedback that need to be optimized, and we don’t know what’s optimal. We don’t know how many sessions of neurofeedback is optimal. We don’t know the timing of the sessions that’s optimal. We don’t know the type of neurofeedback that’s optimal: should we give visual feedback or auditory feedback? Should we give intermittent feedback or continuous feedback? By that I mean some people give feedback after each volume of data is collected; that’s so-called continuous feedback – although it’s not strictly speaking continuous, it’s essentially ongoing throughout the whole scan. Whereas other people give intermittent feedback, where for a whole block of time, maybe 30-40 seconds, you’re trying to control the brain area, and then at the end of it you get your feedback. Which has the advantage that you’re not distracted by the feedback while you’re trying to control. So, there’s all sorts of parameters like this that we have to play with. Some people are using virtual reality feedback and trying to incorporate social reward into the feedback interface: how much does that help? So how we optimize our protocols to really get the biggest learning is a huge question; there’s this huge parameter space to explore. It’s actually one of the big challenges in our field.
Challenges of clinical trials involving neurofeedback
Watkins: Clinical trials are carried out in a series of four phases, or stages, with each building on the results of the one before it, and with each subsequent stage held to increasingly stringent standards. In Stage 1 trials researchers are trying to figure out if a new intervention is safe, while Stage 2 trials serve to determine its effectiveness. Later, Stage 3 trials compare the effectiveness of the new intervention against standard or similar treatments, while Stage 4 trials seek to determine if there are any other uses or benefits of the intervention. Doug and I were curious to learn what Michelle and her team see as the pros and cons of this approach to carrying out clinical research.
Hampson: Clinical trial standards are really designed for late-stage clinical trials. And so what is considered running a clinical trial well is a great way to do research in late-stage clinical trials. It’s been only a recent thing that NIH has required us to call all the early-stage research we do a clinical trial. In some ways this is wonderful. I think it causes us to register our expected outcome measures on a public site, so that it reduces fishing. It also sort of guarantees that you can publish the results of your research even if there are no results, because there’s now a big emphasis on publishing null results of clinical trials. So it’s good in a lot of ways. But one of the things that I think is really unhealthy for the field is that late-stage clinical trial protocol is kind of being imposed on early-stage research. And early-stage clinical trial research doesn’t benefit from using this kind of approach. So, for example, in late-stage research you’re supposed to have a fixed protocol that you never change. By the time you go into a late-stage clinical trial you should have optimized your intervention, you’re sure that everything works, there are no unforeseen circumstances: this is totally not the case when you’re doing early-stage research on a novel intervention that nobody’s ever used for this application before. You have all sorts of things come up that you didn’t even think about. You require flexibility to deal with those things. You can’t be completely rigid. And then, you know, another aspect is the idea that you’re not supposed to look at your data. If we didn’t look at our data all the time, who knows what kind of garbage we’d be giving as feedback. Because we have these extremely difficult technical scenarios, we’re constantly monitoring to make sure all of the different computers involved are working properly, and the data are being analyzed properly, and fed back properly. We can’t not look at our data.
The reality is that we had registered both of these as clinical trials, and there is a culture that you should never even look at, never mind publish, interim data from a clinical trial, which we kind of had against us in terms of publishing this. On the other hand, it’s very important to our field of neurofeedback research, which is at a very early stage of development, that this information get out there. And I knew people who were designing neurofeedback studies where they weren’t following people over time. So, you’re investing millions of dollars into collecting data, and if they had the same pattern as us – where the subjects who responded to the intervention continued to respond and get better and better for a month – the time point they most need to sample is a month out, because that’s the time point of greatest power. But if they’re only sampling immediately afterwards, they’re losing a humongous amount of their power, and they could come up with a null result for an intervention that was actually effective.
[ Back to topics ]
The future of evidence-based neurofeedback research
Leigh: A similar pattern of changes increasing among patients over time, observed in two distinct data sets, suggests that the effects Michelle and her team identified may well generalize to other neurofeedback applications. So we closed out our conversation by asking her to reflect on what this might suggest about the future of evidence-based research in the field.
Hampson: A hard thing to overstate is the degree to which it’s really a surprise to the field. You know, we all kind of thought, “We’re going to run a study. We’re just gonna see if we can get an effect. We’ll run maybe six or seven subjects, we’ll assess them after, and we will see if we can change their symptoms, or change whatever it is we want to change.” And if you don’t succeed, okay, well, you assume it’s not working. You change your paradigm. You try something else, right? And so when we first found it in the OCD study, we thought it was probably idiosyncratic to OCD. But then we thought, “Well, we’ll just, you know, look at the data from the Tourette’s Syndrome study and see if it has a similar pattern.” And then we looked at it and we were surprised that it did seem to have a similar pattern. So we analyzed them together and they had this shared pattern, and I just thought this was perplexing. And then I went to lunch with another neurofeedback researcher at Yale, Nick Turk-Browne, and was telling him about this finding, and he said, “Wow, we had exactly the same pattern in an open-label study of depression that we ran.” And he gave me a citation of the paper, and I went and looked it up and, sure enough, exactly the same pattern. And that made me think, “My goodness.” And it was clear in his data, in his figure: you could look at it and you could see the pattern. But they didn’t really focus on it in the paper because they thought it was odd, right? And maybe they thought it was idiosyncratic to depression. But then I started to think, “Well, maybe there’s more like this out there,” and I just started poring over all the papers in the literature, looking for anything that had information relevant to this. You know, a lot of studies don’t even have follow-ups, so you can’t get a sense of whether they had this pattern.
But I found multiple examples of data that seemed to show a similar pattern – and I’m not saying all the neurofeedback studies show this pattern – there was definitely at least one neurofeedback study I looked at that had follow-up data and did not seem to show this pattern. So I’m not saying every neurofeedback study is going to have this pattern. But I was surprised at the prevalence. And I’ve since talked to multiple other researchers who are telling me that they’re seeing a similar pattern, where the peak symptom improvement or the peak change in behavior is occurring not immediately after the intervention, but a while after. So I think that this idea – that every time you collect this data you should be following up, or you’re missing the time point of greatest effect – is really important to help us find the effective interventions.
[ Back to topics ]
Links to article, bonus audio and other materials
Watkins: That was Michelle Hampson discussing her open-access article, “Time course of clinical change following neurofeedback,” which she published in the journal NeuroImage on May 2nd, 2018. You’ll find a link to their paper at parsingscience.org/e60, along with bonus audio and other materials we discussed during the episode.
Leigh: Interested in the latest developments in science? Then consider signing up for our weekly roundup of the latest science news from across the disciplines at parsingscience.org/newsletter. Or, if you’d like to check out our first 58 issues before deciding, just head over to parsingscience.org/news.
[ Back to topics ]
Preview of next episode
Watkins: Next time, in episode 61 of Parsing Science, we’ll be joined by Saptarshi Das from Penn State University. He’ll discuss his research into engineering a two-dimensional nanotechnology transistor that determines the location of sounds, modeled on the auditory cortex of a barn owl: a technology which may someday end up on a computer chip in your cell phone or other electronics.
Das: They have evolved over millions of years, and they have evolved their neurobiological architecture in such a way that they can do this high-precision tasking, because that is important for their survival. And can we learn from them, and can we implement those in solid-state devices, to make our sensors – holistic sensor devices – smarter and more high-precision?
Watkins: We hope that you will join us again.
[ Back to topics ]