Scott T. Jensen said:
Currently, while I do my reading, I'm mentally operating under the
assumption that my AI idea has been looked into, and thus I am just trying
to find out when it was done and why it failed, or why the basic concept
was shown to be faulty.
A very common problem in AI is that people think they have come up with a
new idea, but after enough study, it becomes obvious it is just a different
way of looking at something which has already been well studied. You
don't recognize it as being old, because you are just using different words
to talk about it.
However, most old ideas have never been proven faulty. They have just not
yielded the type of fruit which was hoped for. That could be because the
idea is just a very weak one to begin with, or it could be that the idea
was good, but the old approach was just lacking some important twist, or
new understanding.
So, no matter where your idea may lie in the range of everything AI, if you
have a new way to look at an old problem, you could still discover
something significant and new.
The important thing for you to do is exactly what you are doing: studying
all the old work on AI so you can understand for yourself how your ideas
are similar to, and different from, what has been done in the past.
My idea seems obvious to me and thus I assume it
would be to others ... or at least to one AI researcher somewhere at
some time in the past sixty years. However...
Everybody has a different perspective and if you just happen to have the
right one, you will uncover something new.
I have three close friends who are experienced professional programmers
and who say they've kept up on the AI field.
What I find is that there is a huge difference between people that have
been "keeping up" with AI and people trying to build AI. You can't really
appreciate AI until you have had the ideas, like you are currently having,
built them, found out they didn't work, tried for years to make them work,
thrown them away, come up with new ideas, tried for years to make those
work, thrown those away, and repeated until you have AI.
Very few people are doing this in the world. Many programmers like your
friends have been exposed to past work, but I would be willing to bet that
none of them ever once came up with their own idea about how to solve AI
and tried to build it.
A huge percentage of people working in AI have long since given up on (or
never even tried) solving the general AI problem, and are instead working
on mastering some interesting sub-domain problem. Many who have made AI
their work are even forced to do this, because if you want to continue to
get paid, you had better be producing results. And if the only problem you
are working on is the "big one", and you haven't produced AI, it's going to
be hard to get funding. So they identify sub-domain problems that seem
important to the big problem, and make good progress studying the
sub-domain.
I've presented my AI idea
to each. After a rather thorough grilling of me by them, they've stated
that they haven't heard of anyone taking my approach and find it
interesting. We've even had a few evening sessions where they've started
to chart out what would be needed for the AI program. Software, not
hardware. My take on how to bring about my AI is that we just need to
tweak several already-existing computer programs and get them to work
together to do what I want them to do.
The one thing that makes me doubt your approach, without even knowing what
it is, is the fact that you do not seem to be a strong programmer (or did
you say you don't program at all?). Solving AI is an engineering problem.
It's a very hard engineering problem. You need very talented and sharp
engineers to find solutions which nobody else has ever been able to find
after 50 years of looking at the problem.
Also, people who are not engineers, or scientists familiar with behavior
research, tend to be easily fooled by human behavior. They just have no
clue what intelligence really is. This is the problem we all face by being
"too close" to the problem. We think we know what intelligence is just
because we think "we" are intelligent. The truth is, we aren't
intelligent. Our brains are. And even though we think we know who "we"
are, we don't. It's all a very confusing thing to get your arms around.
AI projects fail not because the solution was mis-founded, but because the
problem was mis-defined. They didn't understand what intelligence is.
And that continues to be the real problem of AI - finding the right
definition of what intelligence is.
So the real question you need to ask yourself is not whether you have the
correct solution, but whether you have the correct definition of the problem.
I am quite
comfortable at this stage with any inefficiencies this might allow.
However, my friends feel at least one of the components for my AI will
need to be built from scratch ... if not all the components, as one of
the three maintains.
Programmers like to program. They don't like to waste time dealing with
somebody else's code. They will always push you to let them create new
code. That's just what we do.
These programmer friends are the programmers I've
been referring to in this thread and who are willing to help me do this
project. That and they want me to just tell them what each component is
to do and let them figure out how they'll get that component to do that
for me. They keep telling me not to be concerned about what goes on
inside the "black box" as my contribution is setting up the rules for how
the AI is to think ...
The black box issue is key. Because that's the definition of the problem.
You have to describe the problem of AI in terms of what the black box is
going to do. You have to take the extensional stance when you define the
problem. And, as I said above, getting the correct definition of the
problem is the AI problem.
If you get the black box definition wrong, then it makes no difference
whatsoever what you choose to put in the box. If you have framed the AI
problem incorrectly, you have no hope of solving it.
And that takes us to the next point. You wrote "how the AI is to think".
By saying those magic words, you have already framed your problem. You
have already made a decision that the black box must think. Or your
programmers have made that decision. And thinking about AI as a "thinking
machine" has long been shown to be the totaly wrong way to frame the AI
problem. This gets back to what I said about how we, as humans, are too
close to the subject of the problem (humans) to see it for what it really
is. We think we have free will. We think we "think". We think we use
logic to make decisions. We think we have goals. We think too much and
understand too little at times.
What you have to answer before you waste any time or money talking about
what you are going to put inside the black box, is what intelligence really
is. For any black box approach, you have to describe how the box
interfaces with the world, and you have to describe what the purpose of the
stuff inside the black box is. Not in terms of what you put inside the
box, but in pure extensional terms of what the black box is going to do.
And, ignoring what will need to be inside the box, you have to explain to
yourself why the black box, as you have defined it, is "intelligent", and
not just some typical computer doing typical computer things.
Once you are sure you have the right definition of intelligence in the form
of a black box description, then you can go about trying to figure out how
to build the inside of the box.
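Just to make the point concrete, here is one rough sketch of what an
extensional black box specification could look like in code. This is only a
toy illustration of mine - the interface, the names, and the feedback-based
test are all assumptions, not a claim about what the right definition of
intelligence actually is:

# A hypothetical sketch: the box is specified only by its interface with the
# world (sensory input in, action out, plus a feedback signal), with no
# commitment at all to what goes on inside it.

from abc import ABC, abstractmethod

class BlackBox(ABC):
    """Described purely by what it does, never by how it does it."""

    @abstractmethod
    def act(self, observation: bytes) -> bytes:
        """Given the current sensory input, produce the next output action."""

    @abstractmethod
    def feedback(self, value: float) -> None:
        """Signal from the environment about how well recent actions worked."""

# The "definition of intelligence" then lives entirely outside the box.
# One (again hypothetical) extensional test: the box counts as intelligent
# to the degree that its total feedback improves over time in environments
# it has never seen before.
def total_feedback(box: BlackBox, environment, steps: int) -> float:
    total = 0.0
    observation = environment.reset()   # environment is assumed to expose
    for _ in range(steps):              # reset() and step() for this sketch
        action = box.act(observation)
        observation, value = environment.step(action)
        box.feedback(value)
        total += value
    return total

Notice that nothing in that sketch says a word about what goes on inside the
box - no "thinking", no logic, no knowledge representation. That is the
whole point of taking the extensional stance first.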
once all the components are set to go. Personally, I
feel like the captain of a ship with a loyal crew that wants to go
wherever I want to go, but none of whom want me to help out in the
boiler room to get the ship underway ... and who tend to get upset every time
I poke my head into the room to see how they're doing. *laugh* However,
before I ask them to volunteer their time any further, I've decided to
try again to find if my AI idea has already been attempted and proven
wrong.
Just recently, I've emailed a few AI professors at MIT, Stanford, and
Carnegie-Mellon University. I selected these universities as they have
well-known AI labs. I read each college's AI department/lab's faculty
homepages and tried to find the faculty members whose expertise and/or
area of current research is closest to what I feel my AI concept would
call home. I then emailed them asking for reading recommendations that
would help with my search for my AI idea. The professors at MIT and
Stanford never replied back. Fortunately, the two professors I contacted
at CMU did. One suggested that I read Russell's book and the other
suggested I dig through the AAAI archives as outlined above. Both
recommended that if, after following their suggestions, I still don't find
my idea, I present it to an AI professor or researcher (that I trust)
to see if they've ever heard of it being attempted. One of the two CMU
AI professors then said that if that AI expert says it hasn't been
attempted that I should go ahead and "just implement the thing and show
us all." The other professor said that if the AI expert said my idea
hasn't already been attempted and holds promise that I should pursue a
doctorate in AI.
Assuming my search gets me this far, I'm not sure I'd pursue a doctorate in
AI. From my readings of AI graduate programs, they require a lot of math
and computer languages, neither of which interests me and both of which
I know I'd struggle through due to that lack of interest ... if not -- in
the attempt -- die of numbness of the mind. I'm a psychologist by nature
(possessing a humble little BA in psychology with a minor in marketing
Ah, your psych background might help you understand us for what we are - if
you understand what people like Skinner were saying. That might be even
better than good engineering skills. If you can turn your knowledge into a
good black box description of what it is to be intelligent, then you can
let the programmers try to figure out what to put inside the box. If you
don't have the skills to create a good extensional black box specification,
then you will have problems getting your programmers to do anything
productive - unless they have the interest and skills to turn your ideas
into a good black box specification.
... and having worked for many years now as a marketing consultant and am
now in the process of launching my own talk show,
www.scottjensenshow.com) and barely know how to turn on my computer.
That and my programmer friends understand my limitations and are willing
to provide the technical knowledge and expertise to see what develops by
attempting to bring about my AI idea. My friends want us to do it along
the lines of how the Wright Brothers brought about the age of aviation.
However...
Working with my friends to bring about my AI will mean the construction
of the AI will be a long and sporadic process since they'll be working on
it in their free time ... when they have free time. Whereas working in
an AI lab with dedicated researchers helping me would speed up the
timetable significantly. However, I'd only do that if I could get a full
scholarship (a.k.a. "full ride") to pursue a doctorate in AI. Then
there's the issue that I'd be a "bit" old for a graduate student at the
age of 40. However, still being single I think I could handle the social
ramifications of that ... especially if I could just come in and work on
my idea with the help of an AI professor and a couple of her/his lab's
talented computer science graduate students. That and not have to take
any college courses in computer science and math ... though I'd be up for
attending some graduate-level psychology courses. If I was able to get
all that, I would then not need to buy a computer for this project since
I would then be allowed to use one of that university's supercomputers.
Yes, yes, I know this paragraph is just wishful daydreaming. Anyway...
Maybe people have become addicted to the dream of AI. I'm one of them.
There are many others. Most are considered "kooks" because the AI problem
has a bad habit of looking easy. But the mistake most of the "kooks" make
is not understanding the importance of getting the definition of
"intelligence" correct. They all assume they "know" what intelligence is
just because they are human, and start to "solve" the problem based on
the fact that they obviously know what it is to be intelligent, so they
understand the problem.
AI is hard because no one understands the problem. That's why it's so damn
hard to find the solution. We don't know what the problem is - past the
far too vague idea of "being smart".
I am looking at this search for my AI concept as merely an intellectual
exercise and a nice way to motivate myself to educate myself further on
this topic.
Trying to solve AI is the best way to educate yourself on the problem.
If I learn that my AI idea has been attempted and proven
wrong, I will be a bit down but not terribly so. First, I am still
expecting to find out that it has already been done and tossed aside.
Second, I would very much want to know why it is a bad idea. And, third,
I feel the mere pursuit of knowledge has a value in and of itself.
And while I do the above research, I've decided to address the hardware
aspect of this project and thus the reason for this thread.
I appreciate the offer. If you would email me your credentials, I will
look forward to receiving them.
I don't have any "credentials".
I'm just an old fart like you addicted
to the dream of solving AI. I work on it as a hobby on the side. I've been
doing it on and off for close to 25 years.
I'm a professional programmer by trade. I founded and currently run a
small dot com business for a living.
I waste far too much time on AI, but like I said, it's an addiction.
Also, are you within reasonable driving
distance of Madison, WI, USA?
I'm in the Washington DC area. If you are in the area and want to get
together to chat, feel free to look me up. You can find contact
information on my web page:
http://curtwelch.com/
I'd prefer to meet face to face. That and
I'd like you to sign a confidentiality agreement before I tell you my
idea.
I have no problem with that.
I think I know how to solve AI. All good kooks do. And we tend to believe
that all other kooks' ideas are crazy because they are not the same as
ours. If we get together and talk, what is most likely to happen is I will
spend an hour trying to explain to you why your framing of the definition
of intelligence is wrong (unless you happened to frame it the way I did).
But I can also give you a lot to think about in terms of why your approach
may or may not work.
I am not well read on the AI literature, so I cannot give you good
pointers to other work that might be similar to yours. The contacts you
are making in the AI field should be good for that.
I talk about my ideas in comp.ai.philosophy which is where I'm posting
from. So you can read some of my posts there and get a good flavor of my
approach (learning machines). I don't have any good summary of my work on
the web that I can point you to. Sorry.
I'm attempting to do the first one.
That's the fun one to work on. But it's also the one that makes people
know you are a "kook" without even knowing what you are doing.