Big Tech is on the precipice of 'counterfeiting' human beings: creating AI that passes itself off convincingly as human in online contexts. Even experts are susceptible to being convinced. Although there is widespread agreement among philosophers and AI experts that AI chatbots are not sentient, their existence and use raise pressing ethical questions. Should we outlaw AI designed to pass itself off as human, for much the same reason that we outlaw counterfeit money?
Video: https://www.youtube.com/watch?v=YLzZ1V4S-tY (published Sep 27, 2022).
How should philosophers and cognitive scientists help educate people about how to relate to AI that seems sentient? How could we tell if an AI system is or isn’t conscious, for that matter? Further, how will the possibility that we might be unknowingly interacting with AI online affect social interactions on the internet?
Daniel Dennett is the Fletcher Professor of Philosophy and Director of the Center for Cognitive Studies at Tufts University. site
Susan Schneider is the Dietrich Professor of Philosophy and Director of the Center for the Future Mind at Florida Atlantic University. site
The problem of counterfeiting paper money has become more serious over the past 30 years, since the advent of high-speed digital copiers. Earlier copiers could produce only crude facsimiles of paper money. Starting perhaps 20 years ago, the Bureau of Engraving and Printing and the FBI conducted research into how to produce paper money that could not be readily duplicated with copier technology. post
With Daniel Dennett's cooperation, Anna Strasser, Matthew Crosby, and Eric Schwitzgebel have "fine-tuned" GPT-3 on millions of words of Dennett's writings, in the hope that this might lead GPT-3 to produce prose somewhat like Dennett's own. post
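For readers curious what such a fine-tuning pipeline roughly involves, here is a minimal sketch using OpenAI's legacy (pre-1.0) Python client, the interface through which GPT-3 fine-tuning was offered around 2022. The file names, the prompt/completion formatting, and the choice of the "davinci" base model are illustrative assumptions, not details of the Strasser, Crosby, and Schwitzgebel project.

```python
# Minimal sketch of fine-tuning a GPT-3 base model on a text corpus,
# using the legacy OpenAI Python client (pre-1.0, circa 2022).
# File names and the prompt/completion split are illustrative only.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI API key is available

# Legacy GPT-3 fine-tuning expects JSONL records with "prompt" and "completion" fields.
# Here we naively treat each paragraph of the corpus as a completion to a fixed prompt.
with open("dennett_corpus.txt") as f:
    passages = [p.strip() for p in f.read().split("\n\n") if p.strip()]

with open("training_data.jsonl", "w") as out:
    for passage in passages:
        record = {
            "prompt": "Continue in the author's voice:\n\n",
            "completion": " " + passage,
        }
        out.write(json.dumps(record) + "\n")

# Upload the training file, then start a fine-tune against the "davinci" base model.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print("Fine-tune job started:", job["id"])
```

Once such a job completes, the resulting model can be sampled with openai.Completion.create(model=<fine-tuned model name>, prompt=...), which is how Dennett-like outputs would then be generated for comparison against his actual prose.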