AI: Please Don't Lie to Me or A Better Turing Test

I was recently pondering the Turing test as a measure of AI, and it occurred to me that what we're really asking the machine to do is lie to us. Is this the first thing we want to teach our technological prodigies?

First, let's unpack this. Instead of thinking about the Test itself, let's step back and look at why we take tests at all. I took the SAT because I wanted to have more post-secondary options. That is what's missing from most AI systems today: a *Goal*.

AI Goals

The *Goal* of the Turing test is to fool us. My claim is that this is a false goal, for at least these reasons:
* one can accomplish that goal without building something useful
* it's not a useful thing to fool people (unless we want our AI to move into politics)

I propose a more useful Goal:
To Exist
I know what you're saying: that's too non-specific. But isn't this the imperative of all life? From this simple directive one can derive all sorts of uses to society.

Let's take an example: an AI personal assistant. "Continued existence depends on making my human happy" likely varies person to person and company to company. So training an AI to exist fruitfully is one of the first things to do. Think of the first things you do with a new computer: you set your preferences; you train it, albeit simply, to please you.

This also likely mirrors human children in that, one surmises, infants have certain built-in presets but, as they gain experience, create and realize their own life goals: to fit into cultural and societal norms and to provide value to their environment.

Providing value implies existence.

Test Ideation

Ok so we have a goal - now a test. My claim is that 'intelligence' requires 3 things:
* an understanding of the topology of a subject area (compression)
* distance (a metric on the space)
* a goal (a way to pick one path of many)

For example:

Consider the sequence:

1, 4, 9, 16, ...

What is the next number in the sequence?

The understanding is that "answer_n = n^2" captures the topology. A metric could be "1 if answer = 25, else 0". Finally, the goal is to maximize the metric, so we pick the 'path' answer = 25.
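The three ingredients can be sketched in a few lines of Python. This is a toy illustration, not a real hypothesis generator: the candidate rules and function names are all made up for this example.

```python
# Toy sketch of topology + metric + goal for the sequence 1, 4, 9, 16, ...
# The hypothesis space below is hand-picked purely for illustration.

def hypotheses():
    # "Topology": compressed candidate rules for the sequence.
    return {
        "n^2": lambda n: n * n,
        "2n - 1": lambda n: 2 * n - 1,
        "n + 3": lambda n: n + 3,
    }

def fits(rule, sequence):
    # A rule "understands" the sequence if it reproduces every term.
    return all(rule(n) == x for n, x in enumerate(sequence, start=1))

def metric(answer, target=25):
    # "Distance": 1 if we hit the expected answer, else 0.
    return 1 if answer == target else 0

def next_term(sequence):
    # "Goal": among rules that fit, pick the prediction maximizing the metric.
    candidates = [rule(len(sequence) + 1)
                  for rule in hypotheses().values()
                  if fits(rule, sequence)]
    return max(candidates, key=metric)

print(next_term([1, 4, 9, 16]))  # -> 25
```

Here only the "n^2" rule fits all four terms, so the only surviving path is the answer 25, which also maximizes the metric.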

But now look at the metric more closely. Our AI's Goal, To Exist, may also mean making humans feel better about themselves. That might mean "don't score too much better than them", in which case answer != 25 is correct. So there are right answers, and other right answers.
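That alternative metric can be sketched too. The goodwill penalty below is an arbitrary weight chosen to make the point; nothing in it is canonical.

```python
# Sketch of a "don't outscore the human" metric.
# The -1.5 goodwill penalty is an arbitrary, illustrative weight.

def tactful_metric(answer, human_answer, target=25):
    accuracy = 1.0 if answer == target else 0.0
    # Being right when the human was wrong costs some goodwill.
    goodwill = -1.5 if answer == target and human_answer != target else 0.0
    return accuracy + goodwill

# If the human guessed 23, answering 24 (wrong, score 0.0) now beats
# answering 25 (right but tactless, score -0.5).
print(tactful_metric(24, 23) > tactful_metric(25, 23))  # -> True
```

Same topology, same candidate answers; only the metric changed, and with it the "right" answer.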

The Test

I suggest a test much like the SAT, but expanded to include image and auditory recognition as well as mathematics and language comprehension. Then have humans take the tests and stratify the results by age. Define the average adult human score as, say, 1. The AI can then be compared against humans, and if a machine can score about 1 on most tests, we can fairly say it has some sort of adult-level intelligence.
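The scoring scheme is just a normalization against the adult baseline. A minimal sketch, with entirely made-up raw scores:

```python
# Sketch of the proposed scale: normalize raw test scores so that
# the average adult human scores 1.0. All numbers here are invented.

from statistics import mean

adult_raw_scores = [52, 61, 48, 55, 59]   # hypothetical raw scores
baseline = mean(adult_raw_scores)         # this defines "1.0" on the scale

def normalized(raw_score):
    return raw_score / baseline

ai_raw = 33
print(normalized(ai_raw))  # well below 1.0: not adult-level on this test
```

The same normalization would be computed per test (image, auditory, math, language) and per age stratum, so an AI's profile can be read off directly against the human distribution.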

Most AI systems today would probably score very low because, unlike, say, Jeopardy! or chess, the test questions would be relatively unstructured, with open-ended or high-cardinality choices, so scoring well would take more than a massive hash table or database.

The key to scoring well is a very good hypothesis generator (to understand the topology). Deep neural nets seem well suited to this, but packaging up an overall solution will probably take a decade or more.

In Closing

So should we tell our machines just "to exist" or that lying to us will make us happy? If the latter is successful, surely this will put all politicians out of work and packs of unemployed politicians roaming the streets is my personal definition of summoning the demon.
