OpenAI Exec Admits AI Needs Regulation


OpenAI CTO Mira Murati fueled controversy over government oversight of artificial intelligence on Sunday when she admitted in an interview with Time magazine that the technology needed to be regulated.

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a controlled and responsible way,” Murati told Time. “But we’re a small group of people, and we need a ton more information in this system and a lot more information that goes beyond the technologies – certainly from regulators and governments and all others.”

When asked if government involvement at this stage of AI development could hinder innovation, she replied, “It’s not too early. It is very important that everyone starts to get involved, given the impact that these technologies are going to have.”

Since the market incentivizes abuse, some regulation is likely needed, agreed Greg Sterling, co-founder of Near Media, a news, commentary and analysis website.

“Thoughtful deterrents against unethical behavior can minimize potential AI abuse,” Sterling told TechNewsWorld, “but regulation can also be poorly constructed and stop nothing.”

He acknowledged that too early or too burdensome regulation could hurt innovation and limit the benefits of AI.

“Governments should bring together AI experts and industry leaders to jointly establish a framework for possible future regulation. It should also have an international reach,” Sterling said.

Consider existing laws

Artificial intelligence, like many technologies and tools, can be used for a wide variety of purposes, explained Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, DC think tank.

Many of these uses are positive, and consumers are already seeing beneficial uses of AI, such as real-time translation and better traffic navigation, she continued. “Before calling for new regulations, policymakers should consider how existing laws on issues such as discrimination can already address concerns,” Huddleston told TechNewsWorld.

Artificial intelligence should be regulated, but how it’s already regulated must also be considered, added Mason Kortz, clinical instructor at the Cyberlaw Clinic at Harvard Law School in Cambridge, Mass.

“We have a lot of general regulations that make things legal or illegal, whether done by a human or an AI,” Kortz told TechNewsWorld.

“We need to look at the ways in which existing laws are already sufficient to regulate AI, and the ways in which they fall short and we need to do something new and creative,” he said.

For example, he noted that there is no general liability regulation for autonomous vehicles. However, if an autonomous vehicle causes an accident, there are still many areas of law to fall back on, such as negligence law and product liability law. These are potential ways to regulate this use of AI, he explained.

Light touch needed

Kortz admitted, however, that many of the existing rules come into play after the fact. “So in a way they’re kind of second best,” he said. “But this is an important measure to put in place while we develop regulations.”

“We should try to be proactive in regulating where we can,” he added. “Redress through the court system occurs after harm has occurred. It would be better if the harm never happened.”

However, Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif., argued that strict regulation could suppress the nascent AI industry.

“At this early stage, I’m not a big fan of government regulation of AI,” Vena told TechNewsWorld. “AI can have many benefits, and government intervention could end up stifling them.”

This kind of stifling effect on the internet was avoided in the 1990s, he argued, thanks to “lightweight” regulation like Section 230 of the Communications Decency Act, which granted online platforms immunity from liability for third-party content appearing on their websites.

Kortz believes, however, that the government can reasonably rein in something without shutting down an industry.

“People criticize the FDA, that it’s subject to regulatory capture, that it’s run by pharmaceutical companies, but we’re still in a better world than before the FDA, when anyone could sell anything and put anything on a label,” he said.

“Is there a good solution that captures only the good aspects of AI and stops all the bad ones? Probably not,” Kortz continued, “but some structure is better than no structure.”

“Letting good AI and bad AI go head-to-head won’t be good for anyone,” he added. “We can’t guarantee the good AIs will win this fight, and the collateral damage could be quite significant.”

Regulation without throttling

There are some things policymakers can do to regulate AI without hampering innovation, observed Daniel Castro, vice president of the Information Technology & Innovation Foundation, a research and public policy organization in Washington, D.C.

“One is to focus on specific use cases,” Castro told TechNewsWorld. “For example, regulating self-driving cars should be different from regulating AI used to generate music.”

“Another is to focus on behaviors,” he continued. “For example, it is illegal to discriminate when hiring employees or renting apartments – whether a human or an AI system makes the decision should be irrelevant.”

“But policymakers must be careful not to unfairly impose a different standard on AI or put in place regulations that don’t make sense for AI,” he added. “For example, some of the safety requirements in today’s vehicles, like steering wheels and mirrors, don’t make sense for autonomous vehicles without passengers or drivers.”

Vena would like to see a “transparent” approach to regulation.

“I would prefer regulations requiring AI developers and content producers to be fully transparent about the algorithms they use,” he said. “They could be reviewed by a third-party entity made up of academics and some commercial entities.”

“Being transparent around the algorithms and content sources from which AI tools are derived should encourage balance and mitigate abuse,” he said.

Plan for the worst scenarios

Kortz noted that many people think technology is neutral.

“I don’t think technology is neutral,” he said. “We have to think about bad actors. But we also have to think about the bad decisions of the people who create these things and release them to the world.”

“I encourage anyone developing AI for a particular use case to think about not only its intended use, but also the worst possible use of their technology,” he concluded.
