Two (AI) Worlds Collide
One of the topics we’re exploring at Origami Labs is neuro-symbolic AI, and we’re really excited about its potential applications. What is it, and why do we think it’s important?
Let’s start by breaking down the term. “Neuro” comes from neural networks, loosely modelled on the way networks of neurons work together in the brain. Neural networks and other statistical learning techniques typically rely on large datasets to train models that are (hopefully!) able to generalise and cope with previously unseen inputs. “Symbolic” refers to the notion of creating abstract representations of facts and rules that we can use to reason.
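To make the symbolic half a little more concrete, here’s a minimal sketch of facts-plus-rules reasoning in Python. The facts and the “grandparent” rule are invented purely for illustration; real symbolic systems use far richer representations and inference engines.

```python
# A minimal, hypothetical sketch of the symbolic idea: explicit facts and
# rules, plus a simple forward-chaining loop that derives new facts.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    """If X is a parent of Y and Y is a parent of Z, infer X is a grandparent of Z."""
    derived = set()
    for (rel1, x, y1) in facts:
        for (rel2, y2, z) in facts:
            if rel1 == "parent" and rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: keep applying the rule until no new facts appear.
while True:
    new_facts = grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("grandparent", "alice", "carol") in facts)  # True
```

The appeal is obvious: every conclusion can be traced back to explicit facts and a human-readable rule.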
A brief look back
That explanation of “symbolic” is a little woolly, but the idea behind it has tremendous power. Symbolic AI, also known as classical (or “good old-fashioned”) AI, was the foundation of much of the field until the deep learning revolution a decade or so ago.
Symbolic AI was an attractive approach as it aligned with philosophical theories about the nature of intelligence. Initially, technical progress was quick, with advances in expert systems and robotics. A highlight came in 2011, when IBM’s Watson computer and its DeepQA software beat expert players of the game show Jeopardy! using a symbolic AI knowledge engine. However, it became clear that the world is inherently uncertain, and symbolic AI methods struggled to cope.
Stay statistical
If symbolic AI has proven to be limited, shouldn’t we just continue with statistical learning techniques like deep learning? As powerful as deep learning has been shown to be, it has its own weaknesses. The trend is towards increasingly complex models requiring vast quantities of data and computing power. That’s not practical for many applications, including some of the problems we’re trying to solve. Deep learning models are also largely black boxes, with puzzling blind spots and vulnerabilities. For critical applications where assurance and trust are paramount, this is a real barrier to adoption.
Back to the future?
Can we combine the ability of statistical learning techniques to deal with uncertainty and unknowns with symbolic AI’s ability to reason and explain? The early results look promising, with notable successes in the difficult visual question answering problem domain.
The challenge will be to understand how best to combine the neural and symbolic methods to solve practical problems. Should we develop neural networks that build symbolic representations during training? Use machine learning to create knowledge graphs? Maintain a separation and have machine learning models feed into symbolic systems? Lots of experimentation and learning lies ahead, and we’re keen to jump in.
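As a rough illustration of that last pattern, where a statistical model feeds a symbolic layer, here’s a tiny Python sketch. The perceive() stub and the rule it feeds are entirely hypothetical; any trained classifier could sit behind the same interface.

```python
# A hypothetical sketch of the "separation" pattern: a statistical model
# perceives, and a symbolic layer reasons over its outputs.

def perceive(image):
    """Stand-in for a neural network: returns detected objects with confidences."""
    # In a real system this would be a trained model's predictions.
    return [("cat", 0.92), ("sofa", 0.88)]

def symbolic_check(detections, threshold=0.8):
    """Apply an explicit, human-readable rule to the model's detections."""
    confident = {label for label, conf in detections if conf >= threshold}
    # Example rule (made up): a cat detected on a sofa implies "pet indoors".
    if "cat" in confident and "sofa" in confident:
        return "pet indoors"
    return "unknown"

print(symbolic_check(perceive(None)))  # "pet indoors"
```

The attraction of this split is that the statistical side absorbs the messiness of raw data, while the symbolic side stays inspectable and easy to explain.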
We’ll finish with a quote from Alexander Gray, VP of IBM AI Science: “Classical AI is not cool anymore; deep learning is cool. So we’re definitely in a minority — or you can look at it as we’re ahead of the game. We think we’re ahead of the game.”