Warehouse
"Practical wisdom is only learnt in the school of experience." -Samuel Smiles
I want your opinion

What I have written on A.I. so far is just a loose collection of thoughts, jotted down in my notebooks in no particular order. So it has taken a while to decide what to tell you and how to put it in presentable form. The ideas also aren't fully developed: I don't know which ones would work and which ones wouldn't, because I haven't begun working on them. With that said, here's the general gist of it.

My first interest in A.I. was purely conversational. I wanted to create what is called an "A.I. chatterbot," and one better than those listed at http://www.chatterboxchallenge.com/contest_links.html (The Chatterbox Challenge). I looked at the best ones there and they didn't seem good. Not even the most advanced one, created by America Online with over 2,000 patterns programmed into it, seemed good. I just had this feeling that I could do better.

It didn't take me long to realize that these chatterbots are poor precisely because of all the personality the authors put into them. The patterns are fixed; they don't change, they don't adapt. All of their "thoughts" are the author's thoughts. They can't think for themselves. You can't tell them, for example, "Repeat the last sentence you wrote to me." They won't do it; they'll just say something stupid or funny.

I believe that A.I. can only be achieved by first implementing "pure" intelligence. Current research is trying to jump ahead, with robots implementing the senses that humans have: smell, hearing, sight, touch (physical feeling, not emotional), and taste. We must first attempt to imagine what it is like to not have any senses. What is a human, or any kind of animal, that cannot see, hear, smell, taste, or feel? Nothing! We are the product of our senses and the thoughts that they give us. I propose that we take away the senses and add them back one at a time. This is more extreme than Descartes' separation of mind and body. He said, "I think, therefore I am." But what enabled him to form those words in his mind? His senses. If our senses were taken away in midlife, we would just be an entity able to think about the knowledge it currently possesses. But what if we were born without any senses? We wouldn't have any knowledge to think about, and there would be no way to obtain any. We wouldn't know that "we are." We wouldn't know anything. Senses are the channels by which information is fed into our brain.

Intelligence can exist without any knowledge of language. Language is just a way of communicating our thoughts. When we were children, how did we think before we learned how to speak? We thought in terms of objects combined with logic and a sense of time. Logic, object recognition (obtained through senses), memory, and a sense of time are, I believe, what pure intelligence consists of.

I believe that effective generation and understanding of language can be achieved by first creating this pure intelligence. Once that is achieved, we teach it how to tie objects, time, and logic together through the addition of grammar and words for verbs and nouns (nouns would be associated with the objects it recognizes, and it could learn any language this way). Grammar is the set of rules for sentence construction. Logic and the sense of time would allow it to conjugate verbs, and the grammar knowledge would allow it to combine the noun and the verb into a sentence. Consider the simple sentence, "The dog walked home." Its thought process may be something like:

"Nouns: dog, home. Verb: walk. When?: Before current time. The event started in the past and ended in the past. The verb form for that is "walked." So, the dog walked home."

My idea is to create a being that "lives" inside a computer. It can't be expected to understand humans or other things of our world that it can't see, though it could have knowledge of them. There should be no cameras to give it sight and no microphones to give it hearing. We give it the senses that something inside a computer would have. Possible senses, or channels, would run between itself and the keyboard/monitor (to interact with us) and between itself and storage devices (hard disk, CD-ROM, etc.). This is where its knowledge of programming would come in. Just as we use our body parts to accomplish tasks, it would use programming to accomplish tasks. Suppose it's being taught methods for problem solving. It would translate the methods into programming form and have them as an extension of itself. Suppose it's being taught a pattern. It can translate the pattern into code. Also, once it knows what a pattern is, it can see if it can find patterns on its own. When it's learning grammar, it would create its own semantic network for parsing sentences.
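A minimal sketch of the "senses as channels" idea, assuming the agent's only senses are streams inside the machine. The names `Channel` and `FileChannel` are my own illustrative assumptions, not anything from the post:

```python
# Sketch: the agent perceives the disk through a channel, not a camera.
import os
import tempfile

class Channel:
    """A sense: a stream of information into (and out of) the agent."""
    def read(self):
        raise NotImplementedError
    def write(self, data):
        raise NotImplementedError

class FileChannel(Channel):
    """Sense for storage devices: the agent 'perceives' file contents."""
    def __init__(self, path):
        self.path = path
    def read(self):
        with open(self.path) as f:
            return f.read()
    def write(self, data):
        with open(self.path, "w") as f:
            f.write(data)

path = os.path.join(tempfile.mkdtemp(), "memory.txt")
disk = FileChannel(path)
disk.write("patterns live here")
print(disk.read())  # -> patterns live here
```

A keyboard/monitor channel would implement the same interface over standard input and output, so the agent's code never needs to know which sense a perception came from.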

The A.I. should be given functions and the knowledge that it has those functions. It would be up to it to determine how to use them. Take, for instance, a recently born kitten. If it is alone, how does it learn to walk? How does it learn to jump? When it opens its eyes, it doesn't have knowledge of its full capabilities. It knows, however, that it can move, simply because it moves! It sees itself moving. It later combines what it knows about moving to achieve its desired result, like walking or jumping. It does this through trial and error. Similarly, if the A.I. knows that it can perform its functions, would it not be possible, using pure intelligence, for it to combine those functions to achieve a desired result? The functions would consist of functions in programming languages and their combinations into other functions.
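The trial-and-error combination described above can be sketched as a brute-force search over sequences of known primitives. The primitives `inc` and `dbl` and the helper `find_plan` are invented here purely for illustration:

```python
# Sketch of trial-and-error composition: given primitive functions the agent
# knows it has, try their combinations until a goal is reached (like the
# kitten combining moves until it walks). Brute force, illustrative only.
from itertools import product

def inc(x): return x + 1
def dbl(x): return x * 2

PRIMITIVES = [inc, dbl]

def find_plan(start, goal, max_steps=4):
    # Try every sequence of primitives up to max_steps long.
    for n in range(1, max_steps + 1):
        for plan in product(PRIMITIVES, repeat=n):
            value = start
            for step in plan:
                value = step(value)
            if value == goal:
                return [f.__name__ for f in plan]
    return None  # no combination of known functions reaches the goal

print(find_plan(1, 6))  # -> ['inc', 'inc', 'dbl']  (1 -> 2 -> 3 -> 6)
```

Real pattern detection or behavior learning would of course need something far better than exhaustive search; this only shows the "combine what you know until it works" shape of the idea.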

Objects = files, patterns, thoughts, words, letters, events/processes/functions

My psychology course has recently given me another idea. The professor spoke of John Watson and behaviorism. Watson defined psychology as the study of behavior. In his view, we just react to stimuli: Stimulus->Object->Reaction. What goes on in our heads is negligible. I thought about that for a moment. I believe that we do react to our environment. But I also believe that we react to thoughts. A stimulus produces a thought, which can produce another thought, which can produce another thought, which can produce a response.

Motivation
This is something I need to think more about. It involves feelings. Could an A.I. be made solely from intelligence? I think so. Feelings give humans motivation. Take, for example, the motivation to learn. What makes us "want" to learn? Feelings make us develop our intelligence, and intelligence makes us develop our feelings. How can an A.I. have motivation? I have answered this in two ways so far. The first is that it already, in a sense, has motivation: the code for the A.I. is executed by the computer whether the A.I. likes it or not. The second is that the A.I.'s motivation, or purpose, can be hard-coded into it. This is like in the movie "I, Robot," where Isaac Asimov's Three Laws of Robotics are hard-coded into the robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

So couldn't the A.I. be hard-coded for a specific purpose?
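One way to picture "hard-coded motivation" is an ordered list of immutable rules that every candidate action must pass before execution, in the spirit of the three laws. The rule predicates and the action dictionaries below are made-up placeholders (a real system would have to actually predict whether an action harms a human, which is itself an unsolved problem):

```python
# Hedged sketch: candidate actions filtered through ordered, hard-coded rules.
LAWS = [
    lambda action: not action.get("harms_human", False),       # First Law
    lambda action: not action.get("disobeys_order", False),    # Second Law
    lambda action: not action.get("self_destructive", False),  # Third Law
]

def permitted(action):
    # Any law may veto the action; all must pass for it to execute.
    return all(law(action) for law in LAWS)

print(permitted({"name": "fetch file"}))                        # -> True
print(permitted({"name": "push human", "harms_human": True}))   # -> False
```

Note this sketch doesn't model the precedence clauses ("except where such orders would conflict with the First Law"); resolving conflicts between laws would need extra logic.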

Well, I don't know what else to add. There are a few other things I left out, specifically things on programming implementation, like the concept of verb modules. When a file is to be read, the A.I. finds the code that does file reading and follows the procedure. And on memory, I propose using the operating system's own file system to create the hierarchies of data. Some knowledge would be stored in those directories, while other knowledge, like object recognition, logic, and problem-solving procedures, would be "programmed in." So information would be in files and rules would be in code, basically.
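A toy sketch of the two ideas in that paragraph, combined: verbs map to code ("rules in code"), and learned facts live as files in a directory tree ("information in files"). Every name here (`remember`, `recall`, `VERB_MODULES`, the category/file layout) is my own invention for illustration:

```python
# Sketch: verb modules dispatch to code; memory is the file system itself.
import os
import tempfile

MEMORY_ROOT = tempfile.mkdtemp()

def remember(category, name, fact):
    # Information lives in files; the directory tree is the hierarchy.
    folder = os.path.join(MEMORY_ROOT, category)
    os.makedirs(folder, exist_ok=True)
    with open(os.path.join(folder, name), "w") as f:
        f.write(fact)

def recall(category, name):
    with open(os.path.join(MEMORY_ROOT, category, name)) as f:
        return f.read()

# Verb modules: each verb the agent knows is a piece of code it can run.
VERB_MODULES = {
    "write": remember,
    "read": recall,
}

VERB_MODULES["write"]("animals", "dog", "a dog can walk home")
print(VERB_MODULES["read"]("animals", "dog"))  # -> a dog can walk home
```

The design choice matches the post's split: rules (how to read or write) are code, while the things known (facts about dogs) are data in directories.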

xam
9 posts.
Saturday 02 October, 23:24
I believe you are right...

I believe you are right in most of what you say here, and you make a number of interesting points.

I personally am not advanced enough in the field of AI to point out if you are wrong, but I think you might want to share your ideas with Ed Ameulen. He is a mathematician and also the head of the NNW project, which is devoted to creating a new form of neural network. A lot of their time is spent e-mailing each other about AI in general and its theory; they have produced lots of documents and lots of spin-offs.

You can contact Ed at ameulen at unet dot nl.

1 post.
Sunday 03 October, 09:09
Re:

Thank you. I will contact Ed.

xam
9 posts.
Sunday 03 October, 14:47
Ed

I contacted Ed and he didn't think much of what I wrote.

At the least, I would like to know what you think is right and wrong with it. I'm new to A.I.; I haven't read a single book yet. I'm just a 19-year-old physics and computer engineering sophomore.

xam
9 posts.
Sunday 03 October, 15:50
addition: neurons

I was thinking about neurons too. I don't know a lot about A.I., so right now, to me, neurons are just the medium by which intelligence is achieved. When we go to computers, we're dealing with a different medium. The computer is the medium.

I'll probably get laughed at for this, but that's okay. I want to be challenged.

xam
9 posts.
Sunday 03 October, 19:39
Opinion

Maybe it's just me, but it seems like you haven't really thought about or solved the hard parts yet. For example, how does the AI detect patterns in its senses and actions and translate them into new knowledge or behaviors? How does the AI choose the right behavior given some senses and knowledge? These are pretty big unsolved problems. It's easy to come up with cool ideas for an AI when you assume things like this can be easily solved.

17 posts.
Sunday 03 October, 22:31
Re: opinion

At the end of the post, I said that I left out things on programming implementation. Pattern detection is one of those things (I know I didn't specifically say pattern detection). Don't assume that I think they're easy problems to solve. I never said that.

As I've said, I'm new to A.I. I haven't read a single book or paper yet. I'm fully aware that my ideas aren't developed; I wrote that at the beginning of the post. I just threw some ideas out to see what everyone thinks about them. Don't criticize me, criticize the ideas.

Read thoroughly before you comment like that. Thanks, Thomas.

xam
9 posts.
Sunday 03 October, 23:03
Opinion

I was not intending to criticise you. It's just that your ideas don't seem to address the difficult problems and instead seem to rely on solutions to them. Don't take it the wrong way. I did read that you left out the programming implementation; it's just that I think that's the most interesting and difficult part.

I understand you're new to AI. I was just trying to point you to some difficult problems that are easy for a beginner to overlook as mere implementation details (when in fact they are the huge unsolved problems that make AI interesting). I wouldn't expect you to have solved these problems (nobody has).

17 posts.
Monday 04 October, 00:24
xam,

You do need to read some more before you theorize further. Try touching on many areas, like cognitive neuroscience, linguistics, connectionist networks, etc.

As for motivation, I have a couple of things to add. When you withdraw your hand from a hot surface or run away from something scary, these are biologically hardwired "motivations". Certain parts of the brain deal with these stimuli very fast, so that you don't go through the normal "thinking process" to decide what to do.

Usually when I think of motivation, I think about why I decide to eat at a certain place, why I decide to study something, or why I play sports. There are several kinds of motivation here: 1) some activity is associated with "fun" or "goodness" in memory, so I decide to participate in that activity; 2) some activity is associated with eventually leading to "fun" or "goodness" in memory. For example, I study in college because I think I will eventually be able to use the knowledge I learn here to have fun doing research or something. I don't think there is much of a fine distinction between type 1) and type 2). In terms of how these associations are coded in the brain, I think they are the same, just different in degree.

OK, now let's assume that motivation has something to do with an association with a general idea of "goodness" in long-term memory, and that this is why I am motivated to do something. This makes sense because I'm sure some people are motivated to do things that are associated with "badness," because of their inner guilt or whatever. So I'm pretty sure there must be a conceptual bridge between the memory or concept of an action (soccer) and the actual decision to do that action (go play soccer), and this is the concept of "goodness" (soccer is fun). And to recapitulate: because one is capable of deciding to do things that are "bad" and "unfun," the mechanism that decides to perform an action associated with "goodness" must not be mechanical but "conscious," in the sense that other processes can influence this behavior.

Is a single general concept of "goodness" good enough? Or do people develop multiple concepts of "goodness," for example "goodness for leisure" vs. "goodness for work," or "secondary goodness I don't need" vs. "goodness I need"? I don't know. Is there a concept of "badness" in the brain, or does it matter? Maybe the concept of "badness" isn't necessary, because everything is just a degree of "goodness".

Also, what about actions that may be associated with both goodness and badness? Take stealing, for example. I could enjoy the action and results of stealing, but something also tells me it's wrong, and that if I get caught I can get in trouble. What would motivate me to steal things? What are the computations involved in this kind of decision? I don't know, but I have a hint: the only thing preventing me from stealing something right now is an image of getting caught, and I think "bad bad bad...". Either 1) my motivation to steal something is suppressed by the negative associations, or 2) I find that my present state, sitting in front of this computer, is "good" compared to the image of myself after getting caught stealing, so I choose the better action, which in this case is inaction. I think the second case (2) is more correct and reflects the processing that happens in my head.
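Case (2) can be sketched as picking the action (including inaction) whose remembered or imagined outcome carries the most "goodness". The numeric values and the `GOODNESS`/`decide` names are invented purely to make the comparison concrete:

```python
# Toy sketch of case (2): compare the "goodness" associated with each
# candidate action, where staying put is itself a candidate action.
GOODNESS = {
    "sit at computer": 0.6,   # present state: mildly good
    "steal": -0.9,            # imagined outcome: getting caught, "bad bad bad"
    "play soccer": 0.8,       # associated with fun in memory
}

def decide(candidate_actions):
    # Choose whichever outcome looks best; inaction wins whenever every
    # alternative looks worse than the present state.
    return max(candidate_actions, key=lambda a: GOODNESS[a])

print(decide(["sit at computer", "steal"]))        # -> sit at computer
print(decide(["sit at computer", "play soccer"]))  # -> play soccer
```

With a single signed "goodness" scale like this, a separate concept of "badness" is indeed unnecessary, which matches the question raised two paragraphs up.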

One more thing I want to note: I don't remember actually thinking, "Hey, it's better that I don't steal; it's better to just sit here than go out and steal something." Maybe this is because this sort of comparison happens so often that 1) it no longer registers in my memory because I don't need to remember this process (it is fairly automatic, and conscious recall is not necessary), or 2) this explicit comparison actually does not occur; rather, I have developed a shortcut that is more like (1) from the previous paragraph.

I just want to conclude by saying that I think, in terms of how the brain works, there is no fine distinction between these possible explanations, which is usually the case with neural "wetware." And in terms of AI, I don't think such fuzziness is necessary, but I think it would make for a faster and more adaptable system if AI were encoded more like the brain.

3 posts.
Saturday 27 November, 19:34
About the ornamental and the elemental:

You are absolutely right about many of your points: intelligence is not about crying when granny dies (single-level stimulus and response, much like chatterbots); it's about estimating the distance that her death puts between you and your primary objective and responding to the change accordingly.

Given that the largest chasm separating man and animal is that of rational thought, it's absolutely silly to model the animalness of a man. Why not, indeed, go to the core of intelligence? Why not forget about maintaining proper humanoid interaction and simply emulate the basic operations of knowledge gathering and processing?

I have been more or less obsessed with the idea of such emulations for the past six years, and have been fine-tuning a design which encompasses these ideals. And believe me, the kind of raw logic that you are exhibiting will be largely ignored by the general public, who dream and read more than they think (just because someone printed it doesn't mean it's right; it means the author made a buck). These people are, by and large, the biggest problem for creative AI designers today.

If you do want to create an intelligent machine, then do not believe the academic elitists. You do not need to read obtuse books on cognitive neuroscience or understand any element of the human brain, as they will not put you any closer to developing an intelligent machine, but only, at best, a very poor model of a human mind. Focus on the nature of intelligence and the causes of action.

All you need to do is sit quietly with yourself and watch yourself think, which it seems you have done.

----------------
Tips that I have found very helpful
----------------
1. Always build test programs. Always.
2. Stupid people tend to answer more questions than smart people.
3. There are no facts, just suggestions.
4. Realize that the human brain is the product of a genetic algorithm, and that the algorithm's criterion was survival, not intelligence.
5. Disputes over religion, souls, and reincarnation are never logical arguments, due to lack of proof.
6. Since actions such as dreaming, loving, crying, hating, and being sad can be quantified, they can be manifested in a computer that has been given the ability to produce such an action, either by its programmers or by itself.
7. Given the ability and the motivation, AI can pose a serious risk to humanity. But would it really be all that bad to replace humanity with something better?

1 post.
Tuesday 28 December, 12:47