US May Soon Add Rules To Regulate AI Like ChatGPT

The Biden administration has disclosed its intention to gather public feedback on potential accountability measures for artificial intelligence (AI) systems like ChatGPT, given the questions surrounding their impact on national security and education.

ChatGPT, an AI program that has recently captured public attention due to its capacity to quickly generate responses to a diverse range of queries, is of particular concern to US lawmakers. With over 100 million monthly active users, it has become the fastest-growing consumer application in history.

In light of the “growing regulatory interest” in an AI “accountability mechanism,” the National Telecommunications and Information Administration, a Commerce Department agency responsible for advising the White House on telecommunications and information policy, is seeking input.

The agency is interested in exploring measures that could offer assurance that AI systems are legal, effective, ethical, safe, and trustworthy.

“Responsible AI systems have the potential to provide enormous benefits, but we must address their potential consequences and harms. For these systems to reach their full potential, companies and consumers must trust them,” said Alan Davidson, NTIA Administrator.

President Joe Biden recently stated that it remains to be seen whether AI is dangerous. “In my opinion, technology firms have a responsibility to ensure the safety of their products before releasing them to the public,” he added.

ChatGPT, developed by California-based OpenAI and supported by Microsoft, has impressed some users with its rapid response to queries while causing concern among others due to its inaccuracies.

NTIA intends to prepare a report that examines “efforts to ensure AI systems work as claimed – and without causing harm.” This initiative will inform the Biden administration’s ongoing efforts to “establish a consistent and comprehensive federal government strategy for addressing AI-related opportunities and risks.”

A technology ethics group, the Center for Artificial Intelligence and Digital Policy, has also requested that the U.S. Federal Trade Commission prevent OpenAI from releasing new commercial versions of GPT-4, citing its “bias, misleading nature, and threat to privacy and public safety.”
