The Faculty Factory is a community of faculty development leaders in academic medicine. We share a passion for serving faculty and helping them exceed expectations in their clinical, research, education, program-building, and leadership work. Learn more at FacultyFactory.org!
Episodes
Friday Oct 11, 2024
This week's Faculty Factory Podcast takes an in-depth look at important news and notes for the academic medicine community about building safe artificial intelligence systems.
We’re excited to be joined for this timely discussion by first-time guest Eric Nalisnick, an Assistant Professor in the Department of Computer Science at Johns Hopkins University.
Alongside thoughts on the current state of incorporating the human element into these systems, one thing will remain abundantly clear after listening to today’s discussion: when left unchecked, these A.I. systems are unreliable for work that allows no margin for error (e.g., medical practice, tax returns).
Large language models like ChatGPT are effective for low-stakes tasks: brainstorming and bouncing around ideas to stimulate creativity or encourage alternative ways of thinking.
With the rapidly growing integration of artificial intelligence in the medical, research, and education fields, maintaining safety and ethical standards, and ensuring that the human touch is not lost, are central themes in today’s interview.
“Integration and efficiency are something I hope we will see from A.I. systems, as opposed to more erosion of the human aspect,” Nalisnick said optimistically in the closing moments of our podcast.
If you enjoyed today’s podcast or found it useful, consider listening to previous Faculty Factory interviews related to the topics Eric discussed with us.