ai is a misnomer for the current rollouts of artificial intelligence such as chatgpt. a more appropriate designation would be ahi, artificial human intelligence. “human” implies the likelihood of imperfection, carelessness and bias.
those using ai face an initial concern: what data is the program drawing upon to produce its answers? is it limited to information on the internet? if so, is it accessing information behind paywalls? is it relying upon sources not on the internet? in certain matters, is it consulting with human experts?
assuming that it is limited to regular internet access, ai is something of a better version of wikipedia. essentially, it can “clip and paste” an article in response to a particular question. relying on sources such as wikipedia can be problematic. some of the wiki pages have become corrupted with human bias and perhaps false information.
over time, this ai will likely incorporate subscription-based information, and consumers will likely pay extra for it. thus, there will be ai services within the sciences, law and other subjects. to do so, the ai will access paid services, e.g. lexis/nexis or scholarly journals.
for over three decades, my legal research has been done almost entirely with an online subscription service. this service allows for exclusive access to information with the added value of a powerful search engine. the alternative would be either to buy the books or to review them at a law library. legal opinions have standards. having written hundreds of briefs presented to various adjudicating bodies, i know there is an ethical requirement to state the facts and the law accurately. once that is accomplished, one is then free to make one’s argument. within law, there are serious consequences for misstating the facts or the law. you will likely lose your case and subject yourself to possible sanctions.
academia confronts similar problems. establishing an accurate position, however, is even more treacherous. i enjoy listening to countless lectures by different individuals on the same topic. through this, i have become familiar with certain accepted reference points. some scholars, however, omit some of these points. such omissions may be errors, may be intentional, or may reveal the scholar’s lack of knowledge. thus, while academia is challenged to lay down an accurate assessment before jumping off into hypotheses, there are additional pitfalls of unreliable scholarship.
thus, all fields of knowledge face challenges in determining the best or most correct answer to a question. will artificial intelligence be able to navigate correctly through truths, half-truths, and lies in order to provide the best answer? or will artificial intelligence be programmed to embrace bias-based omissions and opinions? in the end, given humanity’s tendencies toward bias, the current ai will likely be a faster, more high-tech version of problematic human intelligence.
thus, at best, an ai inquiry and its answer may simply be a starting point in one’s search for truth and accuracy.
be well!!
if you enjoyed this post, please “like”
if you would like to read more posts, click here
if you find this post meaningful, please share