Might enabling computational aids to “self-correct” when they’re out of sync with people be a path toward their exhibiting recognizably intelligent behavior? In episode 46, Neera Jain from Purdue University discusses her research into monitoring people’s trust in machines’ abilities to drive us autonomously, care for our grandparents, and do work that’s just too dangerous for humans. Her article “Computational Modeling of the Dynamics of Human Trust During Human–Machine Interactions” was published on October 23, 2018, in IEEE Transactions on Human-Machine Systems and was co-authored with Wan-Lin Hu, Kumar Akash, and Tahira Reid.

Trusting Our Machines - Neera Jain

Bonus Clips

Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download MP3s of every individual episode.

Support us for as little as $1 per month on Patreon. Cancel anytime.

🔊 Patrons can access bonus content here.

We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.

Hosts / Producers

Ryan Watkins & Doug Leigh

How to Cite

Watkins, R., Leigh, D., & Jain, N. (2019, April 3). Parsing Science – Trusting Our Machines. figshare. https://doi.org/10.6084/m9.figshare.7955777

Music

What’s The Angle? by Shane Ivers

Transcript

Neera Jain: If humans don’t trust, and in turn are unwilling to use or interact with, different types of automation, then there’s no point in us designing them to begin with.

Doug Leigh: This is Parsing Science: the unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Doug Leigh…

Ryan Watkins: And I’m Ryan Watkins. Developed by Alan Turing in 1950, the Turing test is an assessment of a machine’s ability to exhibit intelligent behaviors that are indistinguishable from those of a human. Turing predicted that machines would be able to reliably pass such a test by the year 2000. While things haven’t quite come that far, might enabling machines to self-correct when they’re out of sync with people be a path toward this goal? Today, in episode 46 of Parsing Science, we talk with Neera Jain from Purdue University’s School of Mechanical Engineering about her research into monitoring people’s trust in machines’ abilities to drive us autonomously, care for our grandparents, and do work that is just too dangerous for humans. Here’s Neera Jain…

Jain: So my name is Neera, and when I took physics in high school as a junior, it was eye-opening, and I absolutely loved it. I loved that it involved a lot of math but also involved getting answers to a lot of questions about how things work, and I thought that was extremely satisfying. So I got really excited, and then of course applied to universities that had strong engineering programs. My choice of mechanical engineering was largely driven by having really loved physics, and specifically, within physics, one part of it called mechanics. So I thought, why don’t I just try my hand at mechanical engineering? That was really my trajectory getting into the field. Graduate school was something I wanted to do, and I subsequently spent several more years earning a master’s and a PhD in a subarea of mechanical engineering called dynamical systems and control. It was during that process that I realized that I actually wanted to pursue a career in academia, and that’s why I’m here now.
