AI: Less like Skynet, more like coffee

A by-product of the techie love of sci-fi is that discussions of the dangers of AI can veer off into the realm of killer robots.

That scenario is really about advanced Artificial General Intelligence (AGI), which I simplify as conscious artificial superintelligence. We are very far from AGI; today's AI is more prosaic: algorithmic models fed on large lumps of data.

The real risks of AI stem from the fact that it is dumb but fast: it works only as well as the quality of its programming and the pertinence of its data allow. Feedback on its outputs isn't always good either, so the roots of error can be hard to trace.

Some of the biggest risks of AI lie in bias (e.g. embedding racial discrimination in hiring), inexplicable error (e.g. market ‘flash crashes’) and unethical use (e.g. Cambridge Analytica). These are amenable to practical solutions at societal, organisational and personal levels, and this is where our efforts should be directed.

As for the Skynet menace: Andrew Ng, former AI chief at Google and Baidu, says that while dangerous AGI might be possible, he currently worries about it no more than he worries about overcrowding on our Mars colonies.

The biggest risk of AI is that it provides new ways for humanity to do stupid things quicker.

A bit like coffee.