Explainable AI Picks Up Steam
“How did you do that?” is no longer just a question from parents to their tech-savvy children, but also from human workers to their AI tools. (We wonder if the tone of plaintive bewilderment is similar.)
Scientists in the growing field known as Explainable AI are designing systems that can explain their predictions in terms human operators can understand, according to Reuters.
“Show your work”: AI models can predict business outcomes accurately, but researchers are finding that the predictions become far more useful to human operators when the model also explains how it reached them. (For a concrete sense of what such an explanation can look like, see the code sketch at the end of this piece.)
Explainable AI: “The emerging field of ‘Explainable AI,’ or XAI, has spurred big investment in Silicon Valley as startups and cloud giants compete to make opaque software more understandable and has stoked discussion in Washington and Brussels where regulators want to ensure automated decision-making is done fairly and transparently.”
Regulations in the offing: In the past two years, U.S. consumer protection regulators, including the Federal Trade Commission, have warned that AI systems whose decisions cannot be explained could face investigation. Next year, the European Union could pass the Artificial Intelligence Act, which would require that users be able to interpret AI predictions. Government mandates that AI explain itself could fuel a boom in the Explainable AI field.
Proponents and critics: Supporters say Explainable AI has already improved the effectiveness of AI applications in fields such as health care and sales. Critics counter that the explanations themselves are unreliable because explanation techniques are still immature, and others worry that Explainable AI could lull users into a false sense of security about AI predictions.
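For readers curious what “explaining a prediction” looks like in practice, here is a minimal sketch of one common model-agnostic technique: permutation importance, which scores each input feature by how much the model’s accuracy drops when that feature’s values are shuffled. The dataset, model, and framing below are illustrative assumptions on our part, not the systems built by the companies in this story.

```python
# Minimal sketch of one explanation technique: permutation importance.
# Assumptions: scikit-learn's built-in breast-cancer dataset and a
# random forest stand in for a real "black box" business model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# An opaque model: accurate, but its internals are hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Show your work": score each feature by how much held-out accuracy
# falls when that feature's column is randomly shuffled.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features and their score spread.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Feature-attribution scores like these are the kind of “explanation” many XAI products surface to operators and regulators. As the critics above note, though, such scores describe the model’s behavior; they do not guarantee that the model reasoned soundly.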