AI has a huge trust problem

People often express skepticism about AI, and those concerns extend to business, where the technology is used extensively. A UT researcher looked at why and how people override AI-recommended decisions, and how those overrides can overcorrect for bias.

By Shelly Brisbin | August 29, 2024 2:43 pm

In business, AI is a big part of how decisions get made. And because bias and inaccurate information are known problems for AI, decision-makers often override automated recommendations.

Tech expert Omar Gallaga wrote about the findings of a UT-Austin researcher studying the impact of human involvement in AI-based decision-making.

Highlights from this segment:

– Researchers studied the role of AI in hiring decisions. The system read thousands of applicant biographies and used keywords to classify applicants by their believed gender, among other factors (a sketch of this kind of keyword-based classification appears after this list). When presented with the results, human hiring managers, believing the AI was biased, often sought to override the system. Researchers found these overrides could lead to bad decisions, because humans tend to overcorrect for the AI's bias.

– If an AI system's explanations of its own decisions are incomplete, humans can rely on those explanations and end up making bad override decisions.

– The researchers conclude that better training of AI models is important, along with realistic expectations of what a particular model can do.
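
To make the keyword-based classification described above concrete, here is a minimal sketch in Python. The keyword lists, scoring rule, and function name are illustrative assumptions made for this article, not the actual system the researchers studied, which would typically learn its signals from the biographies themselves rather than from a hand-written list.

    # Hypothetical sketch of keyword-based classification of applicant
    # biographies. The keyword sets and scoring rule are illustrative
    # assumptions, not the system from the UT study.
    import re

    # Assumed signal words; a real system would learn these from data.
    FEMALE_KEYWORDS = {"she", "her", "hers", "ms", "mrs", "woman"}
    MALE_KEYWORDS = {"he", "him", "his", "mr", "man"}

    def classify_believed_gender(biography: str) -> str:
        """Guess a gender label from keyword counts in a biography.

        Returns "female", "male", or "unknown" when signals tie or are absent.
        """
        words = re.findall(r"[a-z]+", biography.lower())
        female_score = sum(word in FEMALE_KEYWORDS for word in words)
        male_score = sum(word in MALE_KEYWORDS for word in words)
        if female_score > male_score:
            return "female"
        if male_score > female_score:
            return "male"
        return "unknown"

    if __name__ == "__main__":
        bio = "Ms. Rivera leads a research team; her work spans ten years."
        print(classify_believed_gender(bio))  # prints "female"

Even this toy version suggests why managers might distrust such a system: the keywords are crude proxies, and a confident-looking label can rest on very little evidence.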
