
The Future is Happening Now

on: September 19, 2016, by: Anne Lapkin

About two weeks ago, I saw an article (actually, one of my colleagues posted it on our intranet) from the MIT Technology Review about the limitations of Artificial Intelligence. The article is here for those of you who want to read it in full, but the fundamental concept is this: while AI has made great strides in the last 20 years or so (see the recent win by Google’s AlphaGo over Lee Sedol, who is thought to be one of the best Go players of all time), it is still fundamentally inadequate in one respect – we have not yet built a machine that can carry on a conversation with anything remotely approximating human facility. Quite simply, the computer does not understand the meaning of the words it is using, and is therefore unable to use them intelligently.

The reason for this, according to the article, is that “words often have meaning based on context and the appearance of the letters and words.” It’s not enough to be able to identify a concept represented by a bunch of letters strung together. Many rules need to be put in place that affect the meaning of a word – its placement in the sentence, the grammar, the words around it – all of these things are important. And the number of rules required to create an intelligent, speaking agent of any kind is so vast that it’s just not practical. Technologies like neural networks (on which AlphaGo is based), often referred to as “machine learning,” do show promise, but we’re a long way away from any practical outcomes.

The other penny dropped for me when I saw another article in Scientific American, which pokes some great gaping holes in Chomsky’s theories of language acquisition. Now I have to say that Chomsky’s theories are near and dear to my heart, because when I decided to raise my children multilingual, the only thing that stood between me and a mother-in-law who was convinced that I would so confuse my children that they would never learn to speak was Chomsky’s theory of universal grammar.

Chomsky said that underlying all language is a set of principles embedded in the human psyche that allow children to generate grammatical sentences from an early age. It did not matter what the languages were: the universal grammar capability of the child would allow them to acquire as many languages as you could throw at them. The only critical requirement was that you strictly separate the languages to minimize confusion. Chomsky went on to say that the principles of universal grammar could be reduced to a set of mathematical equations, giving hope to generations of linguists who aspire to teach computers how to understand language. The theory has undergone quite a bit of revision over the years, to accommodate new evidence of how children learn language that contradicted the basic theory. But hey – we lived with String Theory for a pretty long time before it died too, so I suppose we can’t complain.

Chomsky’s theories have taken longer to die than scientific theories normally do, but research coming out of Harvard now basically rebuts the theory with evidence showing that children learn by recognizing patterns in speech, from which they derive the rules of the language(s) they are exposed to. They categorize concepts into different buckets (things, actions, descriptions, etc.) and intuit the relationships between them.

So why is this important, or even interesting? Because Semaphore is a model-driven, rules-based platform for the creation of metadata – and it sits squarely in the gap that AI systems have yet to cross. We can’t derive rules that are abstract enough, or numerous enough, to allow a machine to learn the real meaning of language. But we can model the problem domain. The model defines the different buckets of concepts in the problem domain and the relationships between them – in much the same way a child learning language does, by a process we have yet to fully understand.

We can then auto-generate FROM THE MODEL the rules that are germane to that particular usage of language in that particular problem domain. Semaphore can do that today. And as the usage changes, or more concepts are added to the domain, you can expand the model – thereby expanding the rules, and with them the machine’s ability to understand the meaning of the information assets in that problem domain. Is it universal? No. Is it all-encompassing? No. Will it allow us to create a Siri-like agent that can carry on an intelligent conversation on any topic? Um… probably not. What it can do, however, is bring the power of semantic technology to a problem space and achieve positive business outcomes RIGHT NOW. Not in 20 or 50 years while we wait for neural networks and linguistic theory to catch up. NOW.
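To make the model-to-rules idea concrete: Semaphore’s internals aren’t shown here, and every name below is hypothetical, but the following sketch illustrates the general pattern – declare a small domain model of concept “buckets” and relationships, auto-generate extraction rules from it, and apply them to text. Expanding the model automatically expands the rules.

```python
# Illustrative sketch only; this is NOT Semaphore's actual API.
# A tiny domain model: concepts grouped into buckets, each with surface labels.
import re

model = {
    "Drug":    {"bucket": "thing",  "labels": ["aspirin", "ibuprofen"]},
    "Symptom": {"bucket": "thing",  "labels": ["headache", "fever"]},
    "Treats":  {"bucket": "action", "labels": ["treats", "relieves"]},
}

# Relationships declared in the model: (subject, predicate, object).
relationships = [("Drug", "Treats", "Symptom")]

def generate_rules(model, relationships):
    """Auto-generate one pattern-matching rule per modeled relationship."""
    rules = []
    for subj, pred, obj in relationships:
        pattern = r"\b({})\b\s+({})\s+(?:a\s+|the\s+)?({})\b".format(
            "|".join(model[subj]["labels"]),
            "|".join(model[pred]["labels"]),
            "|".join(model[obj]["labels"]),
        )
        rules.append((subj, pred, obj, re.compile(pattern, re.IGNORECASE)))
    return rules

def extract(text, rules):
    """Apply the generated rules to pull typed facts out of the text."""
    facts = []
    for subj, pred, obj, rx in rules:
        for m in rx.finditer(text):
            facts.append({subj: m.group(1), pred: m.group(2), obj: m.group(3)})
    return facts

rules = generate_rules(model, relationships)
print(extract("Ibuprofen relieves a headache.", rules))
# -> [{'Drug': 'Ibuprofen', 'Treats': 'relieves', 'Symptom': 'headache'}]
```

Adding a new concept or relationship to `model` regenerates the matching rules with no hand-written rule code – the point being made above about expanding the model, not the rule base.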

Clients are using Semaphore every day to enrich information assets with precise, complete and consistent metadata to power information discovery and governance – making their knowledge workers more effective. They are using Semaphore to extract critical facts, entities and relationships from information to power case management and other workflows. Semaphore is being used to harmonize data in dissimilar formats from disparate data sources – creating logical data warehouses with a true semantic layer.

You can do it too.

And oh, by the way, the kids turned out fine. If anything, they talk too much…
