
Someone to watch over AI and keep it honest - and it's not the public!

Date:
March 8, 2021
Source:
Lancaster University
Summary:
The public doesn't need to know how Artificial Intelligence works to trust it. They just need to know that someone with the necessary skillset is examining AI and has the authority to mete out sanctions if it causes or is likely to cause harm.
FULL STORY

Dr Bran Knowles, a senior lecturer in data science at Lancaster University, says: "I'm certain that the public are incapable of determining the trustworthiness of individual AIs... but we don't need them to do this. It's not their responsibility to keep AI honest."

Dr Knowles presents the research paper, 'The Sanction of Authority: Promoting Public Trust in AI', on March 8 at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).

The paper is co-authored by John T. Richards, of IBM's T.J. Watson Research Center, Yorktown Heights, New York.

The general public are, the paper notes, often distrustful of AI, a distrust that stems both from the way AI has been portrayed over the years and from a growing awareness that there is little meaningful oversight of it.

The authors argue that greater transparency and more accessible explanations of how AI systems work, widely perceived as a means of increasing trust, do not address the public's concerns.

A 'regulatory ecosystem', they say, is the only way AI can be made meaningfully accountable to the public and thereby earn their trust.

"The public do not routinely concern themselves with the trustworthiness of food, aviation, and pharmaceuticals because they trust there is a system which regulates these things and punishes any breach of safety protocols," says Dr Richards.

And, adds Dr Knowles: "Rather than asking that the public gain skills to make informed decisions about which AIs are worthy of their trust, the public needs the same guarantees that any AI they might encounter is not going to cause them harm."

She stresses the critical role of AI documentation in enabling this trustworthy regulatory ecosystem. As an example, the paper discusses work by IBM on AI Factsheets, documentation designed to capture key facts regarding an AI's development and testing.
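
As a rough illustration of what such documentation might capture, a factsheet entry could be modeled as a small structured record. The schema and values below are hypothetical, sketched for this article; IBM's actual FactSheets format is richer and template-driven.

```python
from dataclasses import dataclass, field

# A hypothetical, minimal factsheet record. The fields are illustrative
# only and do not reproduce IBM's actual AI FactSheets schema.
@dataclass
class ModelFactsheet:
    model_name: str                  # what the system is called
    intended_use: str                # the task it was built and tested for
    training_data: str               # provenance of the training set
    test_metrics: dict = field(default_factory=dict)       # e.g. accuracy, fairness scores
    known_limitations: list = field(default_factory=list)  # documented failure modes

# Example entry a development team might file for auditors to review
# (all values invented for illustration).
factsheet = ModelFactsheet(
    model_name="loan-risk-scorer-v2",
    intended_use="Rank consumer loan applications by default risk",
    training_data="2015-2019 anonymized loan outcomes, region-stratified",
    test_metrics={"auc": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for small-business loans"],
)
print(factsheet.model_name, factsheet.test_metrics)
```

Structured fields of this kind are what would allow a skilled auditor or regulator to query and compare systems, rather than wading through free-form prose, which is the role the paper envisions for such documentation.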

But, while such documentation can provide information needed by internal auditors and external regulators to assess compliance with emerging frameworks for trustworthy AI, Dr Knowles cautions against relying on it to directly foster public trust.

"If we fail to recognise that the burden to oversee trustworthiness of AI must lie with highly skilled regulators, then there's a good chance that the future of AI documentation is yet another terms and conditions-style consent mechanism -- something no one really reads or understands," she says.

The paper calls for AI documentation to be properly understood as a means to empower specialists to assess trustworthiness.

"AI has material consequences in our world which affect real people; and we need genuine accountability to ensure that the AI that pervades our world is helping to make that world better," says Dr Knowles.

ACM FAccT is a computer science conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.


Story Source:

Materials provided by Lancaster University. Note: Content may be edited for style and length.


