Not Good As Gold: Today’s AI’s Are Dangerously Lacking in AU (Artificial Understanding)

Knowledge graphs are beautiful, but knowing that x is-related-to y is a far cry from understanding x and y.

We would not be comfortable giving a severely neurologically impaired person — say, someone with no functioning left brain hemisphere — real-time decision-making authority over our family members’ health, our life savings, our cars, or our missile defense systems. Yet we are hurtling in that direction with today’s AI’s, which are impaired in almost exactly the same fashion!

They – those people and those AI programs – have trouble doing multi-step abstract reasoning, and that limitation makes their decision-making and behavior brittle, especially when confronted by unfamiliar, unexpected and unusual situations.

Don’t worry, this is not one of those “Oh, woe is us!” AI fear-mongering articles, like those we have been graced with by such uniquely qualified AI researchers as Henry Kissinger, Stephen Hawking, and Elon Musk. Yes, we are moving toward a nightmarish AI crisis, but no, it is not unavoidable: there is a clear path out of this devil’s bargain, and I’m going to articulate exactly what it is and how and when it’s going to save us.

Before I can explain that though, I need to say a few more words about today’s AI’s.

“I knew, and worked on, machine learning as a Stanford professor in the 1970’s, decades before it was a new thing.”

I knew, and worked on, machine learning as a Stanford professor in the 1970’s, decades before it was a new thing. Machine learning algorithms have scarcely changed at all in the last 40 years.[1: To be sure, a few tweaks have been made, such as increasing the number of hidden neural net layers, convolution, and rectified linear activation, but overall ML has changed much less than, say, the Honda Accord since 1982.] But several big things have happened in that time period that have breathed new life into applying that old AI technology:

(i) Computers are a hundred thousand times faster, and on top of that the video game market has given birth to cheap, fast, parallel GPUs, which turned out to be well-matched to the voracious appetites of these AI’s;

(ii) Data storage costs and transmission speeds have likewise improved by orders of magnitude;

(iii) the internet has grown up (well, at least grown), and

(iv) “big data” has gone from scarce to scarcely avoidable. This means there are lots of patches of fertile ground, now, for successfully applying machine learning; I don’t need to survey them here – just try and avoid hearing about them these days.

“Machine Learning has changed much less than, say, the Honda Accord since 1982.”

Current AI’s can form and recognize patterns, but they don’t really understand anything. That’s what we humans use our left brain hemispheres for – what Daniel Kahneman calls “thinking slow.” That’s the other kind of thinking we do, and that’s also the other kind of AI technology that exists in the world. It involves representing pieces of knowledge explicitly, symbolically, to build a model of (part of) the world, and then doing logical inference step by step to conclude things which can then become the grist for even deeper logical reasoning. Think, e.g., of the Sherlock Holmes character’s dazzling displays of deduction.[2: which are actually something logicians call “abduction,” but let’s not worry about that yet.]
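To make that “step by step” inference concrete, here is a minimal, hypothetical sketch in Python of forward chaining over symbolic triples. The facts, predicates, and rules are illustrative inventions, not any production symbolic-reasoning system; the point is only that chaining lets a program conclude something no single stored fact states, which is exactly what a bare “x is-related-to y” knowledge graph cannot do on its own.

```python
# Facts are (subject, predicate, object) triples; illustrative only.
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "subclass_of", "mammal"),
    ("mammal", "subclass_of", "animal"),
}

def forward_chain(facts):
    """Apply two inference rules until no new facts emerge:
    1. x is_a A, A subclass_of B      =>  x is_a B        (inheritance)
    2. A subclass_of B, B subclass_of C => A subclass_of C (transitivity)
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, p1, a) in derived:
            for (b, p2, c) in derived:
                if a == b and p1 == "is_a" and p2 == "subclass_of":
                    new.add((x, "is_a", c))
                if a == b and p1 == "subclass_of" and p2 == "subclass_of":
                    new.add((x, "subclass_of", c))
        if not new <= derived:       # did we learn anything new this pass?
            derived |= new
            changed = True
    return derived

all_facts = forward_chain(facts)
# A multi-step conclusion: no stored triple links Socrates to "animal".
print(("Socrates", "is_a", "animal") in all_facts)
```

Note that reaching the final conclusion takes two chained steps (Socrates is a human, humans are mammals, mammals are animals), and each intermediate conclusion becomes grist for the next rule application.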

For most of this article, I want to talk about symbolic representation and reasoning (SR&R) – the “other AI” besides machine learning.  So let’s try to contrast those two types of thinking; those two types of AI…

Continue reading this article on Forbes.