
Software Language Makes First Step Towards AI? Hardly.

Posted by n_alex on June 20, 2005 at 12:02 AM PDT

First of all, I know bad reporting very well, because I've done my share of it. The first step in writing a misleading technology article is to step out of your domain of expertise. I daresay the person who wrote the article in question doesn't know up from down in cognitive linguistics or AI. That said, I'm no Ph.D. in cognitive linguistics or AI either. But I just so happen to have been recently studying the problem of polysemous prepositions in natural language semantics, and I do think the editors over at news.softpedia.com are taking the public for a ride down fantasy la-la lane.

Maybe they're not doing it on purpose, but that's the first thing that popped into my mind. The softpedia article proposes that ISO 18629 "Makes the first step towards AI." The title alone is ridiculous. When I read the article (you should too, if you want to understand what follows), I came to my la-la lane conclusion.

The article begins "Mankind is making the first steps towards artificial intelligence, or AI if you like," and then proceeds to explain why, apparently oblivious to the steps folks have been making in so-called "AI" for quite a while.

The proof cited in the article?

A person who hears the commands "paint it, before shipping it" and "turn on the coolant, before milling" understands that the word "before" has slightly different meanings in these two contexts. In the first command, it is understood that painting and drying must be completed prior to the next action, shipping. In the second command, however, the first action, turning on the coolant, continues after the milling starts. ISO 18629 supports computer systems with this type of rudimentary understanding of context-specific language.

I don't think this has anything to do with AI. It certainly isn't something that "enables" AI, and I'll paint my face blue if it "makes the first step towards AI", as the title of the article claims. On the other hand, it does strike upon a subject I find VERY interesting, because of my recent research in language design: Polysemous Prepositions.
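To make that distinction concrete, here's a quick sketch in Python--my own illustration, NOT actual ISO 18629 notation (the standard itself uses a formal logical specification, not anything like this)--treating the two readings of "before" as two different relations between time intervals:

```python
# A toy model of the two readings of "before" from the article's example.
# Illustrative Python only, not actual ISO 18629 notation.

from dataclasses import dataclass

@dataclass
class Interval:
    """An activity's time span."""
    start: float
    end: float

def before_completed(a: Interval, b: Interval) -> bool:
    """'Paint it before shipping it': a must FINISH before b starts."""
    return a.end <= b.start

def before_started(a: Interval, b: Interval) -> bool:
    """'Turn on the coolant before milling': a need only START first,
    and may keep running while b is underway."""
    return a.start <= b.start

painting = Interval(start=0, end=4)    # includes drying time
shipping = Interval(start=5, end=6)
coolant  = Interval(start=0, end=10)   # stays on through the whole job
milling  = Interval(start=2, end=9)

print(before_completed(painting, shipping))  # True: paint and dry, then ship
print(before_started(coolant, milling))      # True: coolant overlaps milling
print(before_completed(coolant, milling))    # False: coolant never stops first
```

The point is just that the two "befores" are two distinct formal relations. Pinning down which relation a given context calls for is exactly the kind of bookkeeping a standard like this does, and it has nothing to do with a machine understanding anything.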

Here's how it breaks down. In natural languages, there is a common occurrence called "polysemy". Polysemy happens when one word can mean a number of things, depending on context. One of the areas of English most notably affected by polysemy is prepositions.

The Metaphor Research Group at Georgetown University has been conducting discussions and research into the question of whether polysemy in English prepositions follows a logical pattern, or whether it's arbitrary and idiomatic. This work has recently culminated in a fascinating publication on the subject, titled "The Semantics of English Prepositions: Spatial Scenes, Embodied Meaning and Cognition" by Andrea Tyler and Vyvyan Evans (ISBN 0-521-81430-8, hardback).

The problem with polysemous prepositions, familiar to any language geek who has ever tried to approximate natural language in a logical semantic system, can be illustrated by a simple quote at the beginning of this book:

"We won't come back 'til it's over, over there."

The word "over" has two radically different meanings in this sentence. Same word, but how do you know which version of "over" is meant? "Over" as in "finished", or "over" as in "on the other side of a certain amount of space"? The book goes into other meanings for "over": in motion above, as in "flying over the ocean"; resting above, as in "over an open fire"; more than, as in "over 90 percent"; the list goes on.

To resolve this question of polysemy, the human listener has a number of potential options. One is to construct the meaning "on the fly", at "runtime", so to speak; this implies that the listener generates hypotheses about the meaning of the language as it comes in. Another option is idiomatic or "rote" memorization, where you learn to map spatial and temporal concepts to certain instances of usage in your mind. This is really just pre-processed language.
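For the programmers in the audience, here's a toy sketch of that distinction--my own invention, not anything from Tyler and Evans--with a "precompiled" idiom table backed by a crude on-the-fly guess from context:

```python
# Toy disambiguation of "over": a "precompiled" idiom table (rote
# memorization) backed by a crude "runtime" hypothesis from context.
# Entirely illustrative; real disambiguation is far harder than this.

IDIOM_TABLE = {
    "over there":        "on the other side of some space",
    "it's over":         "finished",
    "over the ocean":    "in motion above",
    "over an open fire": "resting above",
}

def sense_of_over(phrase: str) -> str:
    # 1. Pre-processed route: rote lookup of a memorized idiom.
    for idiom, sense in IDIOM_TABLE.items():
        if idiom in phrase:
            return sense
    # 2. Runtime route: hypothesize a sense from crude context cues.
    if any(tok.isdigit() or tok == "percent" for tok in phrase.split()):
        return "more than"  # e.g. "over 90 percent"
    return "no idiom matched; construct and test a new hypothesis"

print(sense_of_over("we won't come back till it's over"))  # finished
print(sense_of_over("we'll meet over there"))              # other side of space
print(sense_of_over("over 90 percent agreed"))             # more than
```

Of course, the interesting question is what happens when neither route fires cleanly, and that's where the book's argument comes in.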

Now, it is undeniable that, over time, people develop idiomatic concepts to help them process polysemous expressions. That part of their language usage is already precompiled and in memory. But when learning a foreign language, must people simply memorize the proper prepositions for certain expressions, or is there a hidden, underlying system that actually makes rational sense?

Tyler and Evans argue the latter point, and do so with a candor that is charming and unexpected in such a dry-sounding topic as polysemous prepositions.

The questions their book raised in my mind ran along the following lines: if prepositions deal primarily with mapping conceptions of space and time to language, what changes need to be made to language and its study when Gaussian and Riemannian concepts of space and time are taken into consideration? Specifically, what's the plan for overhauling our notions of location and time? We know with absolute certainty that the old Cartesian and Euclidean concepts of space, time, and other manifolds are simplistic and rife with paradox, and that Gauss's cyclonic functions and complex domain, taken alongside Riemann's notions of multiply-connected manifolds, lend radically new and elegant ways of resolving colocation (a type of polysemy) and other spatial paradoxes into simple and rational forms.

But because Gaussian and Riemannian spatial concepts ("space" is such a limiting concept, really) have not penetrated our education systems (yet), neither our usage nor our understanding of language and the ideas that underlie it has changed noticeably since these discoveries. Structuralism and post-structuralism actually took us further from these discoveries: first toward the realm of discrete mechanical breakdown of language, a throwback to the scholastics (which we see here in ISO 18629), and later toward the post-formal absurdities of post-structuralism, another throwback, only this time to nihilism. Neither showed the foggiest recognition of these 19th-century advances in mathematics and geometry, or of their immense philosophical implications. Nowadays the works of Gauss are becoming available in English, but Riemann's are still hard to find.

Don't be fooled. ISO 18629 is a language for mapping "process information related to discrete manufacturing." It uses AI; it doesn't enable AI. It's no breakthrough in AI, and it won't lead to rational machines. For my dime, it's just another domain-specific language. (I'm sure it's excellent at what it does, and that its designers should be proud--my complaint was with the slipshod reporting at news.softpedia.com.)

To keep people thinking, I'd like to put forward the following assertions, which I'll be happy to elaborate on at a later time:
1. Logic is not Reason.
2. Now, I'm biased, but I think if people seriously want to make real breakthroughs in AI (break-outs, that is--I hold AI and cybernetics to be totally absurd, but that's for another day), they should be studying geometry (and its many cousin languages), not drooling over a specification for a discrete manufacturing syntax.
