In “Has Microsoft 365 Been Clinically Tested?”, James Robertson poses a few hard questions, and rightly so. I do not want to enter the debate about the nature of AI (is it intelligent, or just algorithms?); regardless of the answer to that question, what interests us here is the relevance, accuracy, usefulness, reliability and sustainability of the solutions offered. And of course, not just Microsoft but any provider of AI-based solutions should be able to give us clear answers to those questions.
One of the core problems with AI is bias, and in the words of Julia Powles and Helen Nissenbaum (in “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence”), all AI “bias is social bias”. Even if we ignore the (much larger) problems AI bias can cause in society at large, there is the question of how well an AI solution will work for company X if it was built and trained by a company on another continent, in a different culture, and even with a different company culture.
Will we be able to “build our own AI”? What mechanisms and tools will we have to investigate the workings of an off-the-shelf AI? Should we avoid AI altogether (there is excellent SF literature that makes this point – try Frank Herbert’s “Dune”)? Or do we have to teach all AI the equivalent of Asimov’s Three Laws of Robotics?