Monday, March 13, 2017

Stop the AI BS already


I am getting really tired of all the AI hype. Last week alone, the Wall Street Journal, Forbes, Fox News, Huffington Post, MIT Tech Review, The Guardian, TechCrunch, Bloomberg, Newsweek, Fortune, Fast Company, and a host of lesser-known publications ran articles hyping AI. In contrast, The Atlantic actually had a reasonable article about why the term AI has become meaningless.


So, I thought I would take a moment to explain something simple about AI. Most of my work in AI was an attempt to get computers to understand English. We had a program hooked up to the UPI wire at one point that could summarize a story, answer questions about the story, and translate that story. To do this, we had to carefully represent various domains of knowledge. So if we wanted the program to understand stories about diplomatic visits, for example, we had to represent in gory detail what took place on a diplomatic visit, why that visit took place, and what kinds of accomplishments were hoped for and might be achieved.

To help you understand how hard this is, I wrote down some words that I saw in today’s New York Times:

restraint
stimulate
sustainable
protest
turbocharged
prosecutor
quintessential
hallmark
bid-rigging
cyber-criminal
full-fledged
devolved
concessions
crackdown
discrimination
movement
communitarian
faith
self-deprecation
filibuster

An “AI” that read stories or did anything else would have to understand what these words meant. Many people wouldn’t be able to explain them all. But, today, we are told about “AI’s” that can deal with the words they find on the internet in various ways, and then we must all watch out before they take away our jobs.

I know this is not true because I know how hard it is to represent the complex meanings of words like these, and I know that the “AI” being worked on now isn’t even trying to comprehend these words. Today’s “AI” is all about counting words and finding superficial patterns among them. No matter how many times you counted the word discrimination, you would not comprehend what it was about or why it might matter, nor would you understand which sense of discrimination was being used when you read it.
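
To make the contrast concrete, here is a minimal sketch in Python of what the counting approach amounts to (the sentences are mine, invented for illustration):

```python
# A bag-of-words count of the sort today's "AI" relies on. The two sentences
# use "discrimination" in entirely different senses, but the count can't tell.
from collections import Counter

texts = [
    "The lawsuit alleges discrimination in hiring.",            # unjust-treatment sense
    "The test measures discrimination between similar tones.",  # perceptual sense
]

counts = Counter(
    word.strip(".,").lower()
    for text in texts
    for word in text.split()
)

# Both senses land in the same bucket; the tally says nothing about meaning.
print(counts["discrimination"])  # -> 2
```

The count comes out the same no matter which sense is in play; that is the whole problem.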

What does communitarian mean? I can guess and can figure it out in context. Current “AI’s” can count it. What does quintessential mean? Could you explain it to a computer? How about self-deprecation? Try explaining that word to a child. AI needs to do simple things like figure out what a word might mean and explain what it has just read to others. We are nowhere near doing that.

Let me try to explain just one of these words. Let’s look at “faith.” What does it mean to have faith in someone? It means we believe that they will do what they say. Or it could mean that we believe they will do their best to come through in a difficult situation. But faith refers to more than people. You could have faith in a company, which would mean that you believe its products are good. Or you could have faith in the system, which means you think you should follow the rules. Or you could have faith in a religion, which means you believe its teachings. Faith also connotes a kind of optimism. But there is also the word faithful, which in the context of religion means the same as faith, but in the context of marriage has to do with extramarital affairs.

How do we explain this to a computer? To do that we need to detail the rules of marriage or of work (a “faithful employee”). You could be a faithful advocate of a political persuasion, a religion, or a point of view on life. But for a computer to understand all this, it would need to comprehend political philosophies, religious philosophies, and a whole lot more. You could be a faithful follower of a band, or you could be playing Minecraft, which has a “faithful” resource pack.
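
Even a crude first pass at handing this to a computer is just a lookup table built from the paragraphs above, and the table is the easy part; every entry presupposes a domain model (marriage, religion, employment) that the computer does not have. A sketch, with the senses taken from the discussion above:

```python
# Sense inventories transcribed from the discussion above. Each entry is a
# stub standing in for an entire domain model the computer would need.
FAITH_SENSES = {
    "person":   "belief that they will do what they say, or do their best",
    "company":  "belief that its products are good",
    "system":   "belief that one should follow the rules",
    "religion": "belief in its teachings",
}

FAITHFUL_SENSES = {
    "religion":  "adhering to the teachings (the same root sense as faith)",
    "marriage":  "not having extramarital affairs",
    "employee":  "reliably serving an employer over time",
    "minecraft": "a resource pack that stays true to the game's default look",
}

def sense_of(word, context):
    """Look up a sense; picking the right context is the part no table solves."""
    table = FAITH_SENSES if word == "faith" else FAITHFUL_SENSES
    return table.get(context, "unknown -- no domain model for this context")

print(sense_of("faithful", "marriage"))
```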

My point is this: AI requires modeling the world in gory detail so that we can comprehend people's actions, intents, beliefs, and a whole lot more. Sorry, but matching keywords is not AI.


But the press will keep on telling us how an AI will suddenly take our jobs and how chat bots are the answer to customer service. I don’t know about you, but if I got a chat bot answering my customer service call, I would hang up. Or maybe I would filibuster. Or maybe I would show some restraint. Either way, no AI would know what I was doing, nor would it understand if I explained it.

Tuesday, March 7, 2017

The SPGU Tool: A response to current so-called AI



OK. We have to fight back. Enough with the “AI is going to take over the world” stories. Enough with chat bots. Enough with pretending AI is easy. Enough with AI people who barely know the first thing about AI.

I am not discussing machine learning here. If you want to count a lot of words fast, and you can draw some useful conclusions from that, go ahead. I wish you wouldn’t call it AI, but I can’t control that. But I can fight back. Not with words, which I know don’t really convince anyone of anything, but with a new AI tool, one that uses what I know about AI, in other words, what most people who worked in AI in the 60’s, 70’s, and 80’s likely know about AI.

The SPGU Tool is named after the iconic book by Schank and Abelson (1977), Scripts, Plans, Goals, and Understanding.

In that book, we laid out the basis of human understanding of language by invoking a set of scripts, plans, goals, and themes that underlie all human actions. This was used to explain how people understand language. The classic example was the attempt to understand something like “John went into a restaurant. He ordered lobster. He paid the check and left.” This understanding was demonstrated by the computer being able to answer questions such as: What did John eat? Who did he pay? Why did he pay her? In this easy example of AI, SAM (the Script Applier Mechanism we built in 1975) could answer most questions by referring to the scripts it knew about and parsing the questions in relationship to those scripts. In this example, given a detailed restaurant script, it could place any new information within that script and make inferences about what else might be true or what might possibly be asked at that point in the script.
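
To make the idea concrete, here is a toy sketch (not the original SAM code, which was far richer) of how a script applier answers questions about unstated events by filling in script defaults:

```python
# A restaurant script as an ordered list of events with role slots. The "eat"
# event and the payee are never stated in the story; the script supplies them.
RESTAURANT_SCRIPT = [
    ("enter", {"actor": None}),
    ("order", {"actor": None, "food": None}),
    ("eat",   {"actor": None, "food": None}),
    ("pay",   {"actor": None, "payee": "the waitress"}),
    ("leave", {"actor": None}),
]

def apply_script(observed):
    """Instantiate the full script, filling unobserved slots from observed events."""
    bindings = {"actor": None, "food": None}
    for _event, slots in observed:
        for k, v in slots.items():
            if k in bindings and v:
                bindings[k] = v
    return {event: {k: bindings.get(k) or v for k, v in slots.items()}
            for event, slots in RESTAURANT_SCRIPT}

# "John went into a restaurant. He ordered lobster. He paid the check and left."
story = [("enter", {"actor": "John"}),
         ("order", {"actor": "John", "food": "lobster"}),
         ("pay",   {"actor": "John"}),
         ("leave", {"actor": "John"})]

events = apply_script(story)
print(events["eat"]["food"])   # -> lobster       ("What did John eat?")
print(events["pay"]["payee"])  # -> the waitress  ("Who did he pay?")
```

The answer to “What did John eat?” comes from the script, not the story; the story never mentions eating at all.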

The SPGU Tool (SPGU-T) takes that 1970’s technology and makes it useful in the modern era. People who plan often need help in making their plans succeed. A tool that helps them plan needs to have a detailed representation of the context of that plan, what goals were being satisfied and well-known obstacles to achieving those goals. Then it can access expert knowledge to assist a planner when the planner is stuck. We used this methodology when we built the Air Campaign Planner for the Department of Defense (in the 90’s). We captured expert knowledge (in the form of short video stories), tracked what the planner was doing within a structured air campaign planning tool, and offered help (in the form of one or more retrieved stories) when SPGU-T saw that help was needed.

In a project for a pharmaceutical company, for example, one expert story we captured was called the “Happy Dog Story.” The story was about how the company had found a drug that made dogs very happy and then went into clinical trials with humans very quickly. Some months later, the dogs had all killed each other, but the people who were doing the clinical trials were unaware of this. This story should come up when a planner is planning clinical trials and is relying on data that requires continued tracking. SPGU-T would know this and be able to help if, and only if, all of the planning for the trials was done within SPGU-T’s framework that detailed the steps in the clinical trials script.
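
Here is a sketch of how such stories might be indexed and retrieved (the index format and trigger features are my invention for illustration; the Happy Dog Story is as summarized above):

```python
# Expert stories indexed by script, step, and the plan features that make
# them relevant. The tool watches the planner's state within the script.
STORY_INDEX = [
    {"script": "clinical_trials",
     "step": "plan_trials",
     "triggers": {"relies_on_animal_data", "no_continued_tracking"},
     "story": "Happy Dog Story: the dogs later killed each other, "
              "unknown to the people running the human trials."},
]

def retrieve_stories(script, step, plan_features):
    """Return stories whose trigger features are all present in the plan state."""
    return [s["story"] for s in STORY_INDEX
            if s["script"] == script and s["step"] == step
            and s["triggers"] <= plan_features]

state = {"relies_on_animal_data", "no_continued_tracking"}
for story in retrieve_stories("clinical_trials", "plan_trials", state):
    print(story)
```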

A partner or manager in a consulting firm could use SPGU-T to plan a client engagement. SPGU-T would be able to help with problems and suggest next steps at each stage if it knew the gory details of how engagements work, and if it had stories from experts addressing well-known problems that occur in engagements. SPGU-T could not only answer questions, but it could also anticipate problems, serving as a helpful expert who is always looking over the user’s shoulder.

A Deeper Look at SPGU-T

It is well beyond the state of the art, both now and in the foreseeable future, for a computer system to answer arbitrary questions or, more difficult still, to deeply understand what a person is doing and to proactively offer advice. Both of these forms of intelligent assistance are possible today, however, if the person is working to accomplish a well-defined, goal-oriented task using a computer-based tool that structures his or her work. In other words, if we can lay out the underlying script, and we can gather useful advice that might be needed at any point in the script, we can understand questions that might be asked or assist when problems occur. That understanding would help us parse the questions and retrieve a video story as advice in response.

This isn’t simple, but neither is it impossible. Advisory stories must be gathered and detailed scripts must be written. We built the needed parser years ago (called D-MAP, for direct memory access parsing).
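
For flavor, here is a minimal sketch in the spirit of D-MAP (not the original code): phrasal patterns point directly at concepts in memory, so recognizing a phrase amounts to retrieving the concept it indexes.

```python
# Index phrases attached directly to concepts in memory. Parsing here is not
# building a syntax tree; it is activating the concepts the words point at.
MEMORY = {
    ("data", "quality"): "concept:DATA-QUALITY-PROBLEM",
    ("lag", "time"):     "concept:DATA-DELIVERY-DELAY",
}

def dmap_parse(question):
    """Activate every memory concept whose index phrase appears in the input."""
    tokens = question.lower().replace("?", "").split()
    return [concept for phrase, concept in MEMORY.items()
            if all(word in tokens for word in phrase)]

print(dmap_parse("What is the likely lag time for data of poor quality?"))
# -> ['concept:DATA-QUALITY-PROBLEM', 'concept:DATA-DELIVERY-DELAY']
```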

SPGU-T helps someone to carry out a plan in a specific domain, be it planning a large-scale data analytics project, a strategy consulting engagement, a construction project, or a military air campaign. It does so by knowing a person’s goals in creating such a plan, the steps involved in plan creation, the nature of a complete and reasonable plan, and the problems that are likely to arise in the planning process.

Imagine, for example, a version of SPGU-T that is customized for developing and tracking a project plan that a consulting firm will use to successfully complete a complex data analytics project. It knows that its registered user is an engagement manager. Given the usage context, it also knows that the user’s goal is to plan a time-constrained, fee-based project on behalf of a new client. From this starting point, SPGU-T can take him or her through a systematic process for achieving that goal. At any step in the process, SPGU-T will know specifically what the user is trying to accomplish and the nature of the information he or she is expected to add to the plan. For example, in one step, the user will identify datasets required for the project. SPGU-T will expect him or her to identify the owners of those datasets, the likely lag times between data requests and receiving the required data, and any key properties of the data, such as its format and likely quality.
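
One plausible way to represent such a step (a sketch; the field names are invented to match the example above):

```python
# Each script step declares exactly what information the user is expected to
# supply, so the tool always knows what is present and what is still missing.
IDENTIFY_DATASETS_STEP = {
    "step": "identify_datasets",
    "goal": "enumerate the datasets the analytics project requires",
    "expected_fields": {
        "dataset_name": str,
        "owner": str,
        "request_lag_days": int,  # lag between data request and delivery
        "format": str,
        "quality": str,           # e.g. "high", "unknown", "poor"
    },
}

def missing_fields(step, entry):
    """Report which expected fields the user has not yet filled in."""
    return [f for f in step["expected_fields"] if f not in entry]

entry = {"dataset_name": "panel_survey", "owner": "market research firm"}
print(missing_fields(IDENTIFY_DATASETS_STEP, entry))
# -> ['request_lag_days', 'format', 'quality']
```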

This very specific task context, computer-based interpretation of the semantics of the information being entered, and heuristics to infer reasonable expectations about the input enable the system to accurately interpret questions posed by the user in natural language and to retrieve context-relevant answers from a case base of answers, both video stories and textual information, to a wide range of common questions about planning a data analytics project. For example, the user might ask, “How can I determine the quality of data provided by a commercial data service?,” “What is the likely impact of poor data quality on my schedule?,” or “What is a reasonable expectation of the lag time between making a data request and receiving data from a market research firm?”
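
A sketch of how the pieces combine (the case entries are invented; the concepts come from the D-MAP sketch above): the current step narrows the case base, and the concepts activated by the parse select the answer within that slice.

```python
# A tiny case base of advisory answers, indexed by step and concepts.
CASE_BASE = [
    {"step": "identify_datasets",
     "concepts": {"concept:DATA-DELIVERY-DELAY"},
     "answer": "Video: an expert on typical lag from market research firms."},
    {"step": "identify_datasets",
     "concepts": {"concept:DATA-QUALITY-PROBLEM"},
     "answer": "Video: how poor data quality wrecked a project schedule."},
]

def answer_question(step, activated_concepts):
    """Best case = same step, greatest overlap with the activated concepts."""
    candidates = [c for c in CASE_BASE if c["step"] == step]
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: len(c["concepts"] & activated_concepts))["answer"]

print(answer_question("identify_datasets", {"concept:DATA-DELIVERY-DELAY"}))
# -> Video: an expert on typical lag from market research firms.
```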

More important, perhaps, are situations in which the user does not recognize that a problem exists and, therefore, does not think to ask a question, e.g., the question above about the likely lag in receiving data. In such situations, SPGU-T can use the same knowledge of task context and semantics of input information, coupled with heuristics for evaluating the completeness and reasonableness of that information, to proactively offer help and advice. SPGU-T can also carry information forward to a future task, for example, to offer proactive advice about the likely duration of the “data wrangling” step of an analytics project given previously entered information about the formats, quality, and lags in obtaining third-party datasets.

That being said, when SPGU-T is proactively offering help and advice, it is essential that it not be wrong if the user’s confidence in the value of such advice is to be maintained. In situations in which SPGU-T recognizes a likely problem with low certainty, it can do one of two things: It can offer a small set of potentially relevant pieces of advice from which the user can select, or it can ask the user a few questions to raise the certainty that specific advice is relevant to the user.
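
Here is a sketch of how that certainty gating might look (the heuristic, threshold, and field names are all invented for illustration):

```python
# Each heuristic returns advice plus a certainty score. High-certainty advice
# is offered directly; low-certainty matches ask a clarifying question instead.
def lag_time_heuristic(plan):
    """Flag a likely scheduling problem if third-party data lag is unaccounted for."""
    if plan.get("third_party_data") and "request_lag_days" not in plan:
        advice = "Expect a multi-week lag when requesting third-party data."
        certainty = 0.9 if plan.get("schedule_is_tight") else 0.4
        return advice, certainty
    return None, 0.0

def offer_advice(plan, threshold=0.7):
    advice, certainty = lag_time_heuristic(plan)
    if advice is None:
        return
    if certainty >= threshold:
        print("ADVICE:", advice)
    else:
        # Low certainty: ask rather than risk offering wrong advice.
        print("QUESTION: Is your project schedule tightly constrained?")

offer_advice({"third_party_data": True, "schedule_is_tight": True})
# -> ADVICE: Expect a multi-week lag when requesting third-party data.
```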

Whether answering a user’s question or proactively offering help and advice, SPGU-T can also answer follow-up questions, using not only the contextual information enumerated previously but also the user’s inferred intent in asking the follow-up question, thus making the retrieved answer all the more relevant.

There will, however, be cases in which SPGU-T cannot answer a question or cannot identify relevant help and advice with reasonable certainty even after interacting with the user to further understand his or her specific context. In such cases, SPGU-T will refer the question or situation to a human expert and promise the user that the expert will address the issue. SPGU-T can extend its case base as a result of capturing such interactions, thus enabling it to answer a wider range of questions and to provide better help and advice to future users.

We are building SPGU-T now. Watch this space.