I want your opinion
What I have written on A.I. so far has just been a loose collection of thoughts, scattered across my notebooks in no particular order. So it has taken a while to decide what to tell you and how to put it in presentable form. Also, the ideas aren't fully developed; I don't know which would work and which wouldn't, because I haven't begun working on them. With that said, here's the general gist of it.
My first interest in A.I. was purely conversational. I wanted to create what is called an "A.I. chatterbot," one better than those on "http://www.chatterboxchallenge.com/contest_links.html" (The Chatterbox Challenge). I looked at the best ones they have, and none of them seemed good. Not even the most advanced one, created by America Online with over 2,000 patterns programmed into it, seemed good. I just had this feeling that I could do better.
It didn't take me long to realize that these chatterbots are poor because all of their "personality" is put in by their authors. The patterns are fixed; they don't change, and they don't adapt. All of their "thoughts" are the author's thoughts; they can't think for themselves. You can't tell them, for example, "Repeat the last sentence you wrote to me." They won't do it, and will just say something stupid or funny instead.
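To make the limitation concrete, here is a minimal sketch of the fixed-pattern approach these bots use. The patterns and responses are hypothetical examples of my own, not taken from any real bot:

```python
import re

# Hypothetical fixed patterns, in the style of pattern-based chatterbots.
PATTERNS = [
    (re.compile(r"\bhello\b", re.IGNORECASE), "Hi there! How are you?"),
    (re.compile(r"\bhow are you\b", re.IGNORECASE), "I'm fine, thanks for asking."),
    (re.compile(r"\bweather\b", re.IGNORECASE), "I hear it's lovely outside."),
]
FALLBACK = "That's interesting. Tell me more!"

def reply(message: str) -> str:
    """Return the canned response for the first matching pattern."""
    for pattern, response in PATTERNS:
        if pattern.search(message):
            return response
    # No pattern matches, so the bot just says something generic.
    return FALLBACK

# The bot has no memory of the conversation, so a request like this
# falls through to the fallback instead of actually being obeyed:
print(reply("Repeat the last sentence you wrote to me."))
```

However many patterns you add, the bot never adapts; every "thought" it can have was written down in advance by the author.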
I believe that A.I. can only be achieved by first implementing "pure" intelligence. Current research is trying to jump ahead, with robots trying to implement the senses that humans have: smell, hearing, sight, touch (physical feeling, not emotional), and taste. We must first attempt to imagine what it is like to not have any senses. What is a human, or any kind of animal, that cannot see, hear, smell, taste, or feel? Nothing! We are the product of our senses and the thoughts that they give us. I propose that we take away the senses and add them one at a time. This is more extreme than Descartes' separation of mind and body. He said, "I think, therefore I am." But what enabled him to form those words in his mind? His senses. If our senses were taken away in midlife, we would just be an entity able to think about the knowledge it currently possesses. But what if we were born without any senses? We wouldn't have any knowledge to think about, and there would be no way to obtain any. We wouldn't know that "we are." We wouldn't know anything. Senses are the channels by which information gets fed into our brain.
Intelligence can exist without any knowledge of language. Language is just a way of communicating our thoughts. When we were children, how did we think before we learned how to speak? We thought in terms of objects combined with logic and a sense of time. Logic, object recognition (obtained through senses), memory, and a sense of time are, I believe, what pure intelligence consists of.
I believe that effective generation and understanding of language can be achieved by first creating this pure intelligence. Once that is achieved, we teach it how to tie objects, time, and logic together through the addition of grammar and words for verbs and nouns (nouns would be associated with the objects it recognizes, and it could learn any language this way). Grammar is the set of rules for sentence construction. Logic and the sense of time will allow it to conjugate the verbs, and the grammar knowledge will allow it to combine the noun and the verb into a sentence. Consider the simple sentence, "The dog walked home." Its thought process may be something like:
"Nouns: dog, home. Verb: walk. When?: Before current time. The event started in the past and ended in the past. The verb form for that is "walked." So, the dog walked home."
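That thought process could be sketched in code. The verb table and the three-way tense split here are simplifying assumptions, just to show the tense-reasoning step producing the conjugated form:

```python
# Hypothetical verb-form table: tense -> conjugated form.
VERB_FORMS = {
    "walk": {"past": "walked", "present": "walks", "future": "will walk"},
}

def make_sentence(noun: str, verb: str, tense: str, destination: str) -> str:
    """Combine a noun and a conjugated verb into a sentence,
    the way the thought process above describes."""
    verb_form = VERB_FORMS[verb][tense]  # "the event ended in the past" -> "walked"
    return f"The {noun} {verb_form} {destination}."

print(make_sentence("dog", "walk", "past", "home"))  # The dog walked home.
```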
My idea is to create a being that "lives" inside a computer. It can't be expected to understand humans or other things of our world that it can't see, though it could have knowledge of them. There should be no cameras to give it sight, and no microphones to give it hearing. We give it senses that something inside of a computer would have. Possible senses, or channels, would be between itself and the keyboard/monitor (to interact with us), and between itself and storage devices (hard disk, CD-ROM, etc.). This is where its knowledge of programming would come in. Just as we use our body parts to accomplish tasks, it would use programming to accomplish tasks. Suppose it's being taught methods for problem solving; it would translate the methods into programming form and have them as an extension of itself. Suppose it's being taught a pattern; it can translate the pattern into code. Also, once it knows what a pattern is, it can see if it can find patterns on its own. When it's learning grammar, it would create its own semantic network for parsing sentences.
The A.I. should be given functions and the knowledge that it has those functions. It would be up to itself to determine how to use them. Take, for instance, a recently born kitten. If it is alone, how does it learn to walk? How does it learn to jump? When it opens its eyes, it doesn't have knowledge of its full capabilities. It knows, however, that it can move, simply because it moves! It sees itself moving. It later combines what it knows about moving to achieve a desired result, like walking or jumping, through trial and error. Similarly, if the A.I. knows that it can execute its functions, would it not be possible, using pure intelligence, for it to combine those functions in order to achieve a desired result? The functions would consist of primitives from programming languages, combined to produce other functions.
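The trial-and-error idea could be sketched as a search over combinations of primitive functions. The primitives and the numeric goal here are hypothetical stand-ins; the point is only that a desired result can be reached by trying combinations, the way the kitten tries movements:

```python
from itertools import product

# Hypothetical primitive functions the A.I. knows it can execute.
PRIMITIVES = {
    "increment": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def find_combination(start, goal, max_steps=4):
    """Trial and error: try sequences of primitives, shortest first,
    until one maps start to goal."""
    for length in range(1, max_steps + 1):
        for names in product(PRIMITIVES, repeat=length):
            value = start
            for name in names:
                value = PRIMITIVES[name](value)
            if value == goal:
                return names  # the learned "skill": a sequence of moves
    return None

print(find_combination(1, 9))  # ('increment', 'increment', 'square')
```

Once a combination is found, it could be kept as a new function, an extension of the A.I. itself, just as the kitten keeps "walking" once it has stumbled onto it.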
Objects = files, patterns, thoughts, words, letters, events/processes/functions
My psychology course has recently given me another idea. The professor spoke of John Watson and behaviorism. Watson defined psychology as the study of behavior. In his view, we just react to stimuli: Stimulus -> Object -> Reaction. What goes on in our heads is negligible. I thought about that for a moment. I believe that we do react to our environment. But I also believe that we react to thoughts. A stimulus produces a thought, which can produce another thought, which can produce another thought, which can produce a response.
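That chain could be sketched as a simple association table, where a stimulus triggers a thought, thoughts trigger further thoughts, and the chain may end in a response. The associations below are hypothetical:

```python
# Hypothetical associations: stimulus -> thought -> ... -> response.
ASSOCIATIONS = {
    "loud noise": "thought: something fell",
    "thought: something fell": "thought: check the kitchen",
    "thought: check the kitchen": "response: walk to the kitchen",
}

def react(stimulus: str) -> list[str]:
    """Follow the chain from a stimulus through intermediate thoughts
    until nothing further is triggered."""
    chain = [stimulus]
    current = stimulus
    while current in ASSOCIATIONS:
        current = ASSOCIATIONS[current]
        chain.append(current)
    return chain

print(react("loud noise"))
```

In Watson's picture only the first and last entries of the chain matter; my point is that the middle entries, the thoughts, are where the interesting work happens.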
Motivation
This is something I need to think more about. It involves feelings. Could an A.I. be made solely from intelligence? I think so. Feelings give humans motivation. Take, for example, the motivation to learn. What makes us "want" to learn? Feelings make us develop our intelligence, and intelligence makes us develop our feelings. How can an A.I. have motivation? I have answered this in two ways so far. The first is that it already sort of does have motivation: the code for the A.I. is executed by the computer whether the A.I. likes it or not. The second is that the A.I.'s motivation, or purpose, can be hard-coded into it. This is like in the movie "I, Robot," where Isaac Asimov's Three Laws of Robotics are hard-coded into the robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So couldn't the A.I. be hard coded for a specific purpose?
Well, I don't know what else to add. There are a few other things I left out, specifically things on programming implementation, like the concept of verb modules: when a file is to be read, the A.I. finds the code that does file reading and follows the procedure. And on memory, I propose using the operating system's own file system structure to create the hierarchies of data. Some knowledge would be stored in those directories, and other things, like object recognition, logic, and problem-solving procedures, would be "programmed in." So information would be in files and rules would be in code, basically.
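A minimal sketch of that memory scheme: knowledge lives in a directory hierarchy as files, while the rules (the "verb modules," like how to read) live in code. The category names and the fact stored here are hypothetical:

```python
import tempfile
from pathlib import Path

def remember(root: Path, category: str, name: str, fact: str) -> None:
    """Store a fact as a file inside its category directory,
    using the file system itself as the memory hierarchy."""
    folder = root / category
    folder.mkdir(parents=True, exist_ok=True)
    (folder / name).write_text(fact)

def recall(root: Path, category: str, name: str) -> str:
    """The 'verb module' for reading: find the file and read it."""
    return (root / category / name).read_text()

# Information in files, rules in code:
root = Path(tempfile.mkdtemp())
remember(root, "animals", "dog", "A dog is a four-legged animal.")
print(recall(root, "animals", "dog"))  # A dog is a four-legged animal.
```

The hierarchy comes for free from the operating system; adding a new category of knowledge is just making a new directory.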