Might enabling computational aids to “self-correct” when they’re out of sync with people be a path toward their exhibition of recognizably intelligent behavior? In episode 46, Neera Jain from Purdue University discusses her experiments on monitoring our trust in AI’s abilities so that machines can drive us more safely, care for our grandparents, and do work that’s just too dangerous for humans. Her article “Computational Modeling of the Dynamics of Human Trust During Human–Machine Interactions” was published on October 23, 2018 in IEEE Transactions on Human-Machine Systems and was co-authored with Wan-Lin Hu, Kumar Akash, and Tahira Reid.

Trusting Our Machines -- Neera Jain

Websites and other resources

Bonus Clips

Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.

Support us for as little as $1 per month at Patreon. Cancel anytime.


Patrons can access bonus content here.

Please note that we aren’t a tax-exempt organization, so unfortunately gifts aren’t tax deductible.

Hosts / Producers

Ryan Watkins & Doug Leigh

How to Cite

Watkins, R., Leigh, D., & Jain, N. (2019, April 3). Parsing Science – Trusting Our Machines. figshare. https://doi.org/10.6084/m9.figshare.7955777


What’s The Angle? by Shane Ivers


Coming soon!