“As the integration of artificial intelligence (AI) into processes increasingly becomes the norm, issues of privacy, bias, and accessibility must all be taken into account when considering the ethical use of AI, a National Institutes of Health (NIH) official said today.”
“Issues of privacy and bias are often, and rightfully, mentioned prominently when talking about ethical AI, but the goal should also be to make sure the technologies have broad benefit, Laura Biven, the data science technical lead in the NIH’s Office of Data Science Strategy, said at a May 10 FedInsider webinar.”
“‘I think [AI ethics] is a really broad space actually and it’s incredibly important to make sure that we think about all aspects of this,’ Biven said on the webinar. ‘Privacy, bias, these are all really important areas of focus within this space of ethics, but I think also thinking about how we can make sure that these technologies really benefit the broadest number of people.’”
“In order for AI to have the broadest impact, there needs to be broad engagement and representation within the datasets, Biven said. Part of that means having a testing framework that addresses issues of bias as they arise, and also making sure there is broad engagement in determining what types of questions people can ask of the datasets and models…” Read the full article here.
Source: Lamar Johnson, “Privacy, Bias, and Accessibility Key in Considering Ethical AI Use,” MeriTalk, May 10, 2021.