Is AI something we should fear or embrace?
Artificial intelligence is getting amazing – and a little scary at the same time.
The latest version of the artificial intelligence program ChatGPT, developed by OpenAI, has passed the Uniform Bar Examination by “a significant margin,” according to the ABA Journal, which reported that the AI earned a combined score of 297, surpassing even the high threshold of 273 set by Arizona.
To put that in perspective, you need a score of 260 to practice law in low-threshold jurisdictions like Alabama, Minnesota, Missouri, New Mexico, and North Dakota.
Concerns about AI’s rapid advances have led some people to urge caution.
An open letter signed by Elon Musk, Steve Wozniak, Andrew Yang, and more than 2,500 others recently asked that companies like OpenAI stop releasing new AI models until the risks are better understood and can be managed.
Signers fear that the AI models will create a tsunami of misinformation, eliminate good jobs, and eventually – as the letter explains – “outnumber, outsmart, obsolete, and replace us.”
The letter calls for a moratorium of at least six months on the development of AI systems “more powerful than GPT-4.” If such a pause cannot be enacted quickly, the letter calls on governments to step in and institute a moratorium.
The problem is that, as an article in Fast Company points out, the AI genie is already out of the bottle and it may be too late to slow its progress. Companies struggling with labor shortages and eager to improve customer service and satisfaction are embracing the potential of this technology. GPT-4, for example, already powers Bing Chat. You can experience the technology by downloading the new version of Microsoft Edge.
Advocates of AI argue that the call for a moratorium on development is misguided.
AI pioneer Yann LeCun, chief AI scientist at Meta, tweeted: “The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type. Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.”
But the issue is more complicated than either side allows. A recent federal assessment of AI concludes that the technology has great potential to make workers more productive, make firms more efficient, and spur innovation in new products and services.
But it also determined that AI has the potential to damage society.
“While previous technological advances in automation have tended to affect ‘routine’ tasks, AI has the potential to automate ‘nonroutine’ tasks, exposing large new swaths of the workforce to potential disruption,” the report stated.
As an evidence-based organization, Methuselah Foundation is committed to scientific advancement. But we are also painfully aware that unintended consequences can occur.
That’s why we support a concerted effort to understand how AI will affect humanity – and what we must do to make sure the technology enhances our lives.
This cannot be accomplished through a moratorium on development or through self-reflection by developers alone. It will only occur when government, academia, and AI researchers formally collaborate to develop guideposts and metrics for assessing the implications of this potent technology.
It will be worth the effort.