Sunday, November 10, 2024

Nextgov: Regulating AI: 3 Experts Explain Why it’s Difficult to do and Important to Get Right

From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, for governments to “impose a moratorium.”…
Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right…
HUMAN FOIBLES AND A MOVING TARGET
S. Shyam Sundar, Professor of Media Effects & Director, Center for Socially Responsible AI, Penn State
The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot…