Let's talk about AI


Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA

Let's talk about AI

Postby Earlz » Fri Nov 20, 2009 2:13 am UTC

Well, I have always been interested in AI, but I lack someone to talk about it with (and brainstorm with, because I'm always trying to solve it in my mind). So why not here!

What are your takes on how AI should be done?

I've recently read http://www.technewsworld.com/story/68678.html and it's very interesting: a new approach toward AI (er, machines simulating thought).

I can see a few of the common problems with modeling thought. One is implementing a data structure that can hold a single "block" of what the mind may think about. The block can be anything: a color, a word, an image, even a thought such as an action... Each block of thought is so complex that it seems impossible to store such a thing in a finite-size data structure...

One possible approach is "lazy" data structures. Here, no structure contains "real" data; rather, each contains "code" that links together other blocks. The problem with this is the chicken-and-egg problem: if every block is defined only in terms of other blocks, where does the first one come from?
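
A minimal sketch of what such a lazy block might look like, in Python. All the names here (ThoughtBlock, resolve, the example blocks) are invented for illustration, and grounding the recursion in a bare primitive is just one guess at cutting off the chicken-and-egg regress:

Code:

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ThoughtBlock:
    name: str
    # Instead of holding "real" data, a block holds thunks: code that,
    # when called, yields the related blocks.
    links: Dict[str, Callable[[], "ThoughtBlock"]] = field(default_factory=dict)

    def resolve(self, relation: str) -> "ThoughtBlock":
        return self.links[relation]()  # evaluated only on demand

# The chicken-and-egg problem shows up immediately: every block is
# defined in terms of other blocks, so something has to ground the
# recursion -- here, a bare primitive with no further links.
red = ThoughtBlock("red")
apple = ThoughtBlock("apple", links={"color": lambda: red})
fruit = ThoughtBlock("fruit", links={"example": lambda: apple})

print(fruit.resolve("example").resolve("color").name)  # -> red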

Also, the brain doesn't seem to be a state machine. It's constantly processing things; it never really stops. In an AI implementation, what does the machine do when it isn't receiving any new data?
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!
This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Fri Nov 20, 2009 5:34 am UTC

I think that this is a very relevant article. Focus especially on the 'thinking algorithms' part, though the whole thing is very interesting.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

keeperofdakeys
Posts: 658
Joined: Wed Oct 01, 2008 6:04 am UTC

Re: Let's talk about AI

Postby keeperofdakeys » Mon Nov 23, 2009 11:43 am UTC

required article link: http://web.media.mit.edu/~minsky/papers/ComputersCantThink.txt

So Xanthir, the article you linked to is essentially saying that although an AI could theoretically self-improve indefinitely, if it is based upon a program written by a human and does not include any guessing, it could only self-improve as far as the knowledge in the program allows. Therefore a 'natural selection'-like algorithm would need to be developed to enable open-ended self-improvement?

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Mon Nov 23, 2009 3:55 pm UTC

Well, I was actually trying to point to the bits where EY talks about the "levels" of causality, specifically the "Cognitive" level:
Eliezer Yudkowsky wrote:"Cognitive", in humans, is the labor performed by your neural circuitry, algorithms that consume large amounts of computing power but are mostly opaque to you. You know what you're seeing, but you don't know how the visual cortex works. The Root of All Failure in AI is to underestimate those algorithms because you can't see them... In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.

I was attempting to suggest that we won't get anywhere in AI until we decipher more of how cognitive circuitry works - dicking around with data storage and such is pretty much irrelevant to the core problem, and is the reason why AI has consistently failed for three decades. I get the impression that things are finally changing in this regard, and that there's more interest in deciphering the *ways* something can think, and how to code this.

You seem to have focused on the Metacognitive level. Natural selection is *our* source of metacognitive changes, but it's a very bad source. We'll provide the metacog source for the AI at first, and hopefully the AI will be able to take that up itself once it gets smart enough.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Berengal
Superabacus Mystic of the First Rank
Posts: 2707
Joined: Thu May 24, 2007 5:51 am UTC
Location: Bergen, Norway

Re: Let's talk about AI

Postby Berengal » Mon Nov 23, 2009 4:36 pm UTC

Xanthir wrote:I get the impression that things are finally changing [...]

Yeah, people haven't been saying that for the last four decades...

(I hope you're right though)
It is practically impossible to teach good programming to students who are motivated by money: As potential programmers they are mentally mutilated beyond hope of regeneration.

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Mon Nov 23, 2009 4:51 pm UTC

Well, when we first started screwing around with AI, we really had *no clue* how the human brain worked at all. The cognitive sciences have put out some amazing advances since then, and really illuminated a lot. We at least have the *potential* to start doing good work now; the only way we would have succeeded in AI back in the 80s was with a stroke of amazing luck.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Let's talk about AI

Postby Yakk » Mon Nov 23, 2009 6:16 pm UTC

Re: the cat experiment.

I asked someone more in the know than I am. They said they basically hooked up N neuron simulations where the connections and weights were randomly determined (using various heuristics, naturally), where N is about the number of neurons in a cat cortex, and said "hey look, we are simulating a cat cortex".
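
For flavour, that setup amounts to something like the following toy rate model in Python (emphatically not their code; N, the connectivity, and the dynamics are all placeholder assumptions):

Code:

import numpy as np

rng = np.random.default_rng(0)
N = 1000           # stand-in; a real cat cortex has hundreds of millions of neurons
p_connect = 0.01   # placeholder sparse-connectivity heuristic

# Random weights on random connections -- the part being criticized.
W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < p_connect)
state = rng.random(N)

for step in range(100):
    # Each neuron's next rate is a squashed, weighted sum of its inputs.
    state = np.tanh(W @ state)

print("mean activity after 100 steps:", float(state.mean()))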

If they are extremely lucky, the actual weighting of connections and the connection structure of the human brain really don't matter that much, and when they build a "human brain simulation" with random interconnects and weights it will start crying like a baby.

I doubt it.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Mon Nov 23, 2009 7:13 pm UTC

Yeah, no. If it were valid to make that claim, then it would be equally valid to claim that *I* am simulating a cat cortex right now. In fact, I am simultaneously simulating the entire brain of every animal with a brain size less than or equal to mine.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: Let's talk about AI

Postby makc » Mon Nov 23, 2009 7:54 pm UTC

judging from the movies, thinking robots will absolutely kill us all. so don't make them.

userxp
Posts: 436
Joined: Thu Jul 09, 2009 12:40 pm UTC

Re: Let's talk about AI

Postby userxp » Tue Nov 24, 2009 5:59 pm UTC

Sorry, but that cat thing is false:
http://spectrum.ieee.org/blog/semicondu ... -cat-brain

Still, if we had enough power to simulate that number of neurons, we would only need to get a map of our brain to have AI.
I think we should start by mapping the simplest brain we can find (an ant, a fly, or some other insect) and simulating it.

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Tue Nov 24, 2009 8:54 pm UTC

AI is a tricky subject when you get into it seriously. There are various interpretations these days as to what constitutes AI. Academically speaking, AI started off as logic programming and path-finding algorithms and all that (GOFAI: good old-fashioned AI). This is AI in the sense that we call the computer-controlled players in a game AI (admittedly, they're not THAT intelligent). I guess it all boils down to the definitions of cognition, consciousness and intelligence, which are far from settled.

"Modern AI", or computational intelligence as it's often called, is the new flavour in AI research. It's all about fuzzy systems and replicating nature's way of doing things in a heuristic manner, rather than a truly intelligent one. This is where meta-heuristic algorithms like evolutionary computation and swarm intelligence come in, as well as other approaches like neural networks. A main ingredient in these methods, one might say, is randomness (noise) and intractability: a bunch of simple behavioural rules, with a bit of added randomness, end up resembling the high-level behaviour of a natural equivalent. In the sense that intelligence and AI have learning as a prerequisite, we have achieved much in the field of learning.
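
To make "simple rules plus randomness" concrete, here is a minimal evolutionary-computation sketch in Python; the objective, mutation rate, and selection scheme are arbitrary toy choices, not anything from a particular system:

Code:

import random

def fitness(bits):
    return sum(bits)          # toy objective: maximize the number of 1s

def mutate(bits, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

population = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(50)]

print("best fitness:", fitness(max(population, key=fitness)))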

When talking seriously about AI I always end up bringing up the Chinese room[wikipedia.org]. The thought experiment can be used to touch upon many subjects, but I like to use it to consider what may constitute consciousness. Simply put, if a machine does symbol matching and acts the exact same way as a conscious being (e.g., when you present it with a question or argument, it reacts in the same way a person would), can you say that we have achieved artificial consciousness? Even if it is perfect in every regard, way beyond a machine that could pass the Turing test, would you call it conscious or intelligent? If not, how is it different from us? What makes the activity in our brain any more "magical" than what may be happening in a computer? Is there a metaphysical entity called the "mind" that we cannot infuse into a machine?

As for brain simulations and models, there's been a lot going on, but it's a grind-fest. We can simulate large portions of various animal brains, and we know how some things work in detail even for human brains, but even if we had a model that could replicate the activity of a rat's brain in detail, it still might not mean as much as we'd hope.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Tue Nov 24, 2009 10:00 pm UTC

One problem I have with the Chinese room is that it's not obvious from the description of the problem that actually simulating consciousness in that way would require truly mind-boggling amounts of storage and computational resources. We're talking planet-sized brains just to simulate a single human brain. It's silly.
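
As a back-of-envelope illustration (every number here is an invented assumption; only the order of magnitude matters):

Code:

# A lookup-table Chinese Room needs one entry per possible dialogue.
chars = 3000        # assumed usable Chinese character inventory
reply_len = 20      # assumed characters per exchange
exchanges = 10      # assumed dialogue depth the table must cover

table_entries = chars ** (reply_len * exchanges)
print("table entries have", len(str(table_entries)), "digits")
# ~696 digits -- versus roughly 10^80 atoms in the observable universe.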

But, assuming you have that, then yes of course it's conscious. Otherwise you're admitting that it's a P-Zombie.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Tue Nov 24, 2009 10:12 pm UTC

Xanthir wrote:One problem I have with the Chinese room is that it's not obvious from the description of the problem that actually simulating consciousness in that way would require truly mind-boggling amounts of storage and computational resources. We're talking planet-sized brains just to simulate a single human brain. It's silly.

But, assuming you have that, then yes of course it's conscious. Otherwise you're admitting that it's a P-Zombie.

That's the whole point of a thought experiment though. You assume you have all the physical requirements and it allows you to think of the consequences and meaning. What's wrong with admitting it's a P-Zombie? There's no right answer for thought experiments. They're meant to set a premise where arguments can hold for both (or multiple) sides, whether the experiment can be physically performed or not.
In this case, saying that the machine is conscious implies that consciousness is a result of, or can be verified from, behaviour alone. I don't think I can agree to that entirely, but I suspect this discussion wasn't the original intent of the thread. Let's drop this part, at least for now, and get back to the original topic.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Tue Nov 24, 2009 10:44 pm UTC

What I mean is that the way the problem is presented, people intuitively imagine something relatively small, a small library of rulebooks that are being consulted. Their mind balks, rightly, at the notion that such a collection could possibly contain enough information to simulate consciousness. Of course, you're really intending that the library is of arbitrary size, whatever's required, but that's just not communicated well in the thought experiment.

What's wrong with admitting it's a P-Zombie?

P-zombies don't exist, that's what's wrong.

But heh, yeah, back to the OT.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Let's talk about AI

Postby Yakk » Tue Nov 24, 2009 11:58 pm UTC

Xanthir wrote:P-zombies don't exist, that's what's wrong.

Hey, I exist!
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


MadRocketSci2
Posts: 69
Joined: Sat Nov 21, 2009 5:31 pm UTC

Re: Let's talk about AI

Postby MadRocketSci2 » Wed Nov 25, 2009 1:46 am UTC

If you can have mechanisms that do everything that organisms do, but for whatever reason don't have qualia (even though you can point to them going through the exact same internal evaluations and representations that you yourself do with a brain-scanomatic 5000), then why can't you have objects with no mechanism whatsoever that do have qualia? What exactly determines what ends up with it, and what doesn't, and how in the world do you propose to know this anyway? What do you use as a test? Where is your qualiometer?

If two identically operating organisms can exist, one with qualia, one without, then why not the same organism that sometimes has qualia, and sometimes only remembers and acts like it has qualia? Where does the madness end?

Xanthir wrote:I was attempting to suggest that we won't get anywhere in AI until we decipher more of how cognitive circuitry works - dicking around with data storage and such is pretty much irrelevant to the core problem, and is the reason why AI has consistently failed for three decades. I get the impression that things are finally changing in this regard, and that there's more interest in deciphering the *ways* something can think, and how to code this.


Agreed. It seems to me that, with a few exceptions, people have been playing with ever more complicated language parsers as AI, going straight for the Turing test, and missing the point of AI entirely: you're trying to do what minds do, not fool people into thinking you're doing it. Perhaps there are other radically different ways to arrive at awareness, but I'm in the NN people's camp right now; they're the working examples we have. Besides which, most static programs don't have nearly enough degrees of freedom to change so as to internally represent information about the world with the fidelity the human brain does. It seems to me that to have an internal representation of the world, an internal "mind's eye", you would need not just clever data structures, but extremely fine-grained units that variably interconnect, and that control the rules by which they interconnect with their state, to represent the "pixellation" of thought.
Last edited by MadRocketSci2 on Wed Nov 25, 2009 2:02 am UTC, edited 1 time in total.

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Wed Nov 25, 2009 2:00 am UTC

MadRocketSci2 wrote:If you can have mechanisms that do everything that organisms do, but for whatever reason don't have qualia (even though you can point to them going through the exact same internal evaluations and representations that you yourself do with a brain-scanomatic 5000), then why can't you have objects with no mechanism whatsoever that do have qualia? What exactly determines what ends up with it, and what doesn't, and how in the world do you propose to know this anyway? What do you use as a test? Where is your qualiometer?

I'm not sure if these questions are directed at me or are simply food for thought.
If they are directed at me (since I brought up the Chinese room), I just want to make it clear that I neither explicitly support the existence of qualia nor oppose it. The same goes for the existence of an intangible mind and its relation to a hypothetical mind in an artificial being. I mention it because it often troubles me (as does the possibility of P-Zombies, which is a related matter) and I have yet to make up my mind on the subject. I usually present the arguments to someone and just oppose whichever side he/she takes, not to annoy or troll (though some often think so), but to play devil's advocate, or simply because I have no strong belief on either side and I'm really just trying to make up my mind.

MadRocketSci2 wrote:It seems to me that to have an internal representation of the world, an internal "mind's eye", you would need not just clever data structures, but extremely fine-grained units that variably interconnect to represent the "pixellation" of thought.

This is one of the most interesting concepts for me. There's strong evidence suggesting this is actually what happens in the brain, at varying levels of detail, from simply the view you have of how you fit in a room to how complex objects in your environment interact. They all exist to some degree in your "mind's eye".
We've known for a long time that there are direction-specific neurons in the brain that go crazy when you're moving in a specific direction. Plot the firing frequencies of all the directional neurons while a person is moving and you can reconstruct his perception of the path he followed. This happens when you move through a room in real life, or even if you're just thinking about moving through it (or moving through a similar room in a simulation). The existence of this activity in all three cases suggests that our brain perceives physical movement in the same way it perceives mental movement, which is compatible with the "internal mind's eye" you mentioned.
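
That reconstruction is essentially population-vector decoding. A toy version in Python, with invented cosine tuning curves standing in for real recordings:

Code:

import numpy as np

rng = np.random.default_rng(1)
preferred = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # one per neuron

true_heading = np.deg2rad(40.0)
# Cosine tuning plus noise: each neuron fires most when movement
# matches its preferred direction.
rates = np.maximum(0, np.cos(preferred - true_heading)) + 0.05 * rng.random(64)

# The population vector is the rate-weighted sum of preferred directions.
x = np.sum(rates * np.cos(preferred))
y = np.sum(rates * np.sin(preferred))
print("decoded heading:", round(float(np.rad2deg(np.arctan2(y, x))), 1), "degrees")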

2nd edit:
I just remembered seeing this recently. I love TEDtalks :)
http://www.youtube.com/user/TEDtalksDir ... S3wMC2BpxU
Last edited by achilleas.k on Wed Nov 25, 2009 2:14 am UTC, edited 2 times in total.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

MadRocketSci2
Posts: 69
Joined: Sat Nov 21, 2009 5:31 pm UTC

Re: Let's talk about AI

Postby MadRocketSci2 » Wed Nov 25, 2009 2:03 am UTC

I don't mean to come off as confrontational. (Well, maybe I do, but not in an unfriendly way).

But if you do have P-zombies, then how do you know you weren't a P-zombie 5 minutes ago?

You can say "I clearly remember", but that just means P-zombie-you was processing the same way not-P-zombie-you is, and laying down memories for you to reference.

Suppose intelligence B has the same layout as the human brain, and you could mind-meld with it, locking your brain's state to its (say, by growing an extra corpus callosum that maps its cortex onto yours, or other gedanken-sci-fi-ish methods) and getting information. Would the fact that its internal representations are now imposing themselves on your brain, and being recorded in your memory for reference, be enough to rule it out as a P-zombie? Or is it just a P-zombie, faking awareness of the universe the moment you disconnect?

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Wed Nov 25, 2009 2:23 am UTC

MadRocketSci2 wrote:I don't mean to come off as confrontational. (Well, maybe I do, but not in an unfriendly way).

But if you do have P-zombies, then how do you know you weren't a P-zombie 5 minutes ago?

You can say "I clearly remember", but that just means P-zombie-you was processing the same way not-P-zombie-you is, and laying down memories for you to reference.


Agreed, there's really no way to tell, which is probably why I don't push myself to reach a decision. You can't really live as a hardcore sceptic, doubting the existence of consciousness every time you have a conversation. These aren't real-world problems to me, I guess; just interesting food for thought when subjects relating to artificial minds pop up.
It really doesn't mean much one way or another. We attach ourselves emotionally to our cars and our PCs and other mere objects, so you probably wouldn't care whether a hypothetical robot or AI entity was conscious or not; you'd treat it like you would a person if it acted in at least a semi-convincing manner. Even if I were to accept the existence of a mind or soul or consciousness that only born humans can have, it wouldn't change much for practical matters, so what's the point? I guess the point is I like to babble on, so I'll just shut up now :|
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

MadRocketSci2
Posts: 69
Joined: Sat Nov 21, 2009 5:31 pm UTC

Re: Let's talk about AI

Postby MadRocketSci2 » Wed Nov 25, 2009 2:32 am UTC

Somewhere out there in abstract math-space is a board operating according to Conway's Game of Life, on which is inscribed an unimaginably long Turing string, which is simulating our universe. Within said universe, things are going on more or less exactly as they do here on Earth. Unfortunately, it's about to run out of rope. :-P Bwahahaha.

(Of course, all that assumes that the universe can be expressed with cardinality equal to that of the integers.)

(P-zombies - much creepier than the normal kind)

stephentyrone
Posts: 778
Joined: Mon Aug 11, 2008 10:58 pm UTC
Location: Palo Alto, CA

Re: Let's talk about AI

Postby stephentyrone » Wed Nov 25, 2009 3:57 am UTC

achilleas.k wrote:When talking seriously about AI I always end up bringing up the Chinese room[wikipedia.org]. The thought experiment can be used to touch upon many subjects, but I like to use it to consider what may constitute consciousness. Simply put, if a machine does symbol matching and acts the exact same way as a conscious being (e.g., when you present it with a question or argument, it reacts in the same way a person would), can you say that we have achieved artificial consciousness? Even if it is perfect in every regard, way beyond a machine that could pass the Turing test, would you call it conscious or intelligent? If not, how is it different from us? What makes the activity in our brain any more "magical" than what may be happening in a computer? Is there a metaphysical entity called the "mind" that we cannot infuse into a machine?


I'll never understand why anyone thinks that the Chinese Room is a meaningful or useful argument. No, the human operator does not know Chinese, but that's entirely beside the point. The system composed of the room and the operator does. It's only a remotely interesting argument if you're dedicated to some weird human need to feel "special" or "different" from other animals / a mechanical system / whatever. As soon as you step back from the assumption that there's something "unique" about human consciousness (which is a ridiculously strong assumption to make without compelling evidence -- which I certainly have never seen anyone produce), the whole damn thing ceases to be interesting at all.

If it looks like a duck, and it quacks like a duck, it's a duck. You can argue as much as you want that ducks are privileged by their creator with some unique energy that this thing lacks, but until you can come up with an experiment to measure the effect of that energy, it's a duck.
GENERATION -16 + 31i: The first time you see this, copy it into your sig on any forum. Square it, and then add i to the generation.

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Wed Nov 25, 2009 4:19 am UTC

The reason I find it interesting and meaningful (not as an argument against machine understanding, but as a scenario that raises questions) is that if we accept that a machine that can act human-like is intelligent and understands its actions, then it can be argued that an AI agent of today's capabilities (an ANN trained to perform a few meaningful tasks in its environment) does indeed understand what it is doing and is intelligent. The agent's intelligence would be limited to its choices and environment, but it would be intelligence nonetheless.
Maybe I'm just taking it to extremes, though.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Let's talk about AI

Postby Yakk » Wed Nov 25, 2009 4:20 pm UTC

So there are reasons why the "AI faking" stuff might turn out to be useful.

See, it seems (from our naive experience) that humans are hard-wired to learn some things. This may not just be a case of "large neural networks learn" (try teaching an elephant how to read and write); we may have a bias towards certain kinds of algorithms, or particular algorithms, built into our brains somehow.

And you can write (well, a friend of mine is writing) compilers that turn high-level descriptions of algorithms into a neural network (or parts of a neural network). That kind of abstraction might be needed to avoid the 3 billion years it took evolution to go from basic DNA replicators to intelligence, running on a physics system far more advanced than the kind we can run computers on.
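
The flavour of such a compiler, in miniature: translate a known boolean formula (XOR) directly into fixed weights and thresholds instead of learning them. This is an illustrative toy, not the compiler mentioned above:

Code:

import numpy as np

def step(x):
    return (x >= 0).astype(float)   # hard threshold unit

# Hidden layer computes OR and AND; output computes XOR = OR minus AND.
W1 = np.array([[1.0, 1.0],     # OR unit
               [1.0, 1.0]])    # AND unit
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(a, b):
    h = step(W1 @ np.array([a, b]) + b1)
    return int(step(np.array([W2 @ h + b2]))[0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))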

The hope is that we take the psychologists' theories about how our brain works when solving a given problem (be it vision processing or audio processing or memory or what have you), theories validated by blood-flow and timing studies, and compile a neural net that approximates the theory, possibly as a seed for later automated learning mechanisms that fine-tune it.

Of course, this probably will not work. But I'm pretty sure that just shoving together a bunch of neurons in an approximation of brain topology, then feeding it nearly random bits, won't work either.

Think about it: people seem to process vision similarly. If it was just a pile of neural networks that learned how to process vision, wouldn't you expect a bunch of radically different solutions to develop based on the different experiences small children have?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Wed Nov 25, 2009 5:38 pm UTC

I just found this thread which seems to be still active.

Back to AI talk though: a lot of current AI research has been turning towards attention (visual attention mostly, but attention in general as well) and memory. Although the memory part is much older, it's still a massive subject. Thoughts?
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

RockoTDF
Posts: 582
Joined: Thu Oct 18, 2007 6:08 am UTC
Location: Tucson, AZ, US

Re: Let's talk about AI

Postby RockoTDF » Sat Nov 28, 2009 12:34 am UTC

stephentyrone wrote:
achilleas.k wrote:When talking seriously about AI I always end up bringing up the Chinese room[wikipedia.org]. The thought experiment can be used to touch upon many subjects, but I like to use it to consider what may constitute consciousness. Simply put, if a machine does symbol matching and acts the exact same way as a conscious being (e.g., when you present it with a question or argument, it reacts in the same way a person would), can you say that we have achieved artificial consciousness? Even if it is perfect in every regard, way beyond a machine that could pass the Turing test, would you call it conscious or intelligent? If not, how is it different from us? What makes the activity in our brain any more "magical" than what may be happening in a computer? Is there a metaphysical entity called the "mind" that we cannot infuse into a machine?


I'll never understand why anyone thinks that the Chinese Room is a meaningful or useful argument. No, the human operator does not know Chinese, but that's entirely beside the point. The system composed of the room and the operator does. It's only a remotely interesting argument if you're dedicated to some weird human need to feel "special" or "different" from other animals / a mechanical system / whatever. As soon as you step back from the assumption that there's something "unique" about human consciousness (which is a ridiculously strong assumption to make without compelling evidence -- which I certainly have never seen anyone produce), the whole damn thing ceases to be interesting at all.

If it looks like a duck, and it quacks like a duck, it's a duck. You can argue as much as you want that ducks are privileged by their creator with some unique energy that this thing lacks, but until you can come up with an experiment to measure the effect of that energy, it's a duck.


Thank you for putting into words something I have been trying to say for years!
Just because it is not physics doesn't mean it is not science.
http://www.iomalfunction.blogspot.com <---- A collection of humorous one liners and science jokes.

guyy
Posts: 610
Joined: Tue May 06, 2008 3:02 am UTC

Re: Let's talk about AI

Postby guyy » Mon Nov 30, 2009 6:58 am UTC

stephentyrone wrote:I'll never understand why anyone thinks that the Chinese Room is a meaningful or useful argument. No, the human operator does not know Chinese, but that's entirely beside the point. The system composed of the room and the operator does. It's only a remotely interesting argument if you're dedicated to some weird human need to feel "special" or "different" from other animals / a mechanical system / whatever. As soon as you step back from the assumption that there's something "unique" about human consciousness (which is a ridiculously strong assumption to make without compelling evidence -- which I certainly have never seen anyone produce), the whole damn thing ceases to be interesting at all.


Well, whether you agree with the argument mainly depends on what you think "understand" means. The point of the argument isn't that the person, or the system, doesn't "understand" Chinese in the sense of being able to answer questions in Chinese. Obviously the system as a whole does do that, and so you might claim that means it understands Chinese.

But remember: the whole thing, person plus room plus books plus whatever else, is just manipulating symbols. In a certain sense, it doesn't understand Chinese at all, because it can't relate the symbols to anything except more symbols. Write "What color is this text?" in Chinese with a red pen, and put it in the machine; presumably the person in there knows what red looks like, but he can't answer the question, because he can't read it or figure out what it means by any method available in the system. (And even if he could, he couldn't write a response in Chinese because he doesn't know how to write "red" in Chinese, and the books can't tell him that.)

It's an important point (if you agree with it, that is), since computers have the same barrier of understanding. Your computer has no way of knowing what a one or zero is, or what you're using its endless processing of those ones and zeros for; it just scans through the symbols, does logical rearrangements of them, and spits out a result. It only seems to know what it's doing because some other human (or you) programmed your computer with an arrangement of symbols and rules which, like the pile of books in the Chinese Room, makes it produce outputs that make sense based on the inputs.
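
To see how little "understanding" that requires, here is a toy Turing-style machine (invented for illustration) that flips every bit on its tape; it produces the right output with nothing we would call knowledge of what a bit is:

Code:

def run(tape):
    tape = list(tape)
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        if state == "scan" and symbol == "0":
            tape[head] = "1"; head += 1          # rewrite and move right
        elif state == "scan" and symbol == "1":
            tape[head] = "0"; head += 1
        else:                                    # blank: nothing left to do
            state = "halt"
    return "".join(tape)

print(run("010110"))  # -> 101001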

Also, the Chinese Room thing only applies to Turing machines and similar devices, not all machines (the brain is a machine, after all); the point of it is just that you can't make a brain-equivalent machine by taking a pile of microprocessors and giving them the right software. Consciousness may not be "unique" or "special" or whatever, but unless you're prepared to say it doesn't exist at all, I don't see how you could make a conscious Turing machine. (And if it doesn't exist, the whole question of conscious AI is kind of moot.)

Forgive my pedantry; I took a philosophy class from the inventor of the Chinese Room a while back, so I've heard more about it than is probably healthy.

phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

Re: Let's talk about AI

Postby phlip » Mon Nov 30, 2009 1:33 pm UTC

guyy wrote:But remember: the whole thing, person plus room plus books plus whatever else, is just manipulating symbols. In a certain sense, it doesn't understand Chinese at all, because it can't relate the symbols to anything except more symbols. Write "What color is this text?" in Chinese with a red pen, and put it in the machine; presumably the person in there knows what red looks like, but he can't answer the question, because he can't read it or figure out what it means by any method available in the system. (And even if he could, he couldn't write a response in Chinese because he doesn't know how to write "red" in Chinese, and the books can't tell him that.)
Why would you assume the books don't have whole sections on what to respond with if the writing is in different colours? It's probably nestled between the sections on dealing with spelling errors (or whatever the Chinese equivalent would be... bad handwriting?), and made-up words.

You also seem, later on, to be assuming the universe isn't a Turing machine. From what I understand, that's still an open question. And even if the universe isn't a Turing machine, that doesn't mean you couldn't design a stronger theory of computation that's Universe-complete, and then just work with that instead of Turing machines. Such an idea is a bit intractable at the moment, but should the universe be proven not to be a Turing machine, hopefully that proof would give some clues as to where to start building the new theory.

Also: your mentioning of how computers don't "understand" 0s and 1s is a distraction... I'm sure you don't understand all the neuron interactions in your own brain either. If I showed you a complete snapshot of which neurons are firing in your brain at a certain time, and a map of how the neurons are connected... would you "understand" it? Clearly not. And yet that doesn't seem to be a barrier to you claiming to understand things that, ultimately, are only represented in those firing neurons.

Code:

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Mon Nov 30, 2009 3:15 pm UTC

guyy wrote:But remember: the whole thing, person plus room plus books plus whatever else, is just manipulating symbols. In a certain sense, it doesn't understand Chinese at all, because it can't relate the symbols to anything except more symbols. Write "What color is this text?" in Chinese with a red pen, and put it in the machine; presumably the person in there knows what red looks like, but he can't answer the question, because he can't read it or figure out what it means by any method available in the system. (And even if he could, he couldn't write a response in Chinese because he doesn't know how to write "red" in Chinese, and the books can't tell him that.)

This is nonsense. The problem isn't that the Room doesn't understand "red", it's that the Room can't sense the color of text (at least in this formalization). This is exactly like saying a blind person isn't fully conscious because they can't answer the question "What color are the braille dots you're reading now?". If you feed it information using a sense that it has, or extend it so that it has more senses, then it could answer the question just fine.

The Room can certainly answer the question, it'll just be with something like "I can't see the color of text, dumbass. I don't have eyes like you do." (Translated from Chinese)

Also, the Chinese Room thing only applies to Turing machines and similar devices, not all machines (the brain is a machine, after all); the point of it is just that you can't make a brain-equivalent machine by taking a pile of microprocessors and giving them the right software. Consciousness may not be "unique" or "special" or whatever, but unless you're prepared to say it doesn't exist at all, I don't see how you could make a conscious Turing machine. (And if it doesn't exist, the whole question of conscious AI is kind of moot.)

All machines are either Turing Machines, or are strictly weaker than Turing Machines. Unless you're willing to postulate that the brain possesses magic physics that have never been observed, are not predicted or even hinted at by any credible theory of physics, and would invalidate massive swathes of current scientific knowledge. You certainly can make a brain-equivalent machine by taking a pile of microprocessors and giving them the right software. What you can't do is then magically expect it to have the exact same sort of eyes, ears, nose, tongue, and skin as humans do, unless you build them and properly attach them to the computer-brain.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA

Re: Let's talk about AI

Postby Earlz » Mon Nov 30, 2009 5:59 pm UTC

Wow, Xanthir. You just triggered something in my brain.

I was looking at AI wrongly. I was looking at it like "can it recognize this picture" and such.

No! Seeing pixels is not the same as seeing with our adjustable-focus, adjustable-white-balance, extremely amazing eyes.

I mean, AI is intelligence! Not what senses it has. Sure, an AI should be able to recognize a pixellated apple.
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!
This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Mon Nov 30, 2009 6:12 pm UTC

Glad I could be of service. ^_^
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Mon Nov 30, 2009 7:19 pm UTC

Earlz wrote:I mean, AI is intelligence!


Well, yes. An intelligent agent's connection to the outside world may only be a keyboard. Whether it's intelligent or not has nothing to do with how well it can see or hear or read lines. It's about perception and behaviour, even if that behaviour is limited to a small environment we chose for it.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

RockoTDF
Posts: 582
Joined: Thu Oct 18, 2007 6:08 am UTC
Location: Tucson, AZ, US

Re: Let's talk about AI

Postby RockoTDF » Mon Dec 07, 2009 8:28 am UTC

guyy wrote:Also, the Chinese Room thing only applies to Turing machines and similar devices, not all machines (the brain is a machine, after all); the point of it is just that you can't make a brain-equivalent machine by taking a pile of microprocessors and giving them the right software. Consciousness may not be "unique" or "special" or whatever, but unless you're prepared to say it doesn't exist at all, I don't see how you could make a conscious Turing machine. (And if it doesn't exist, the whole question of conscious AI is kind of moot.)


I don't understand why you think that a perfect simulation of the brain wouldn't be conscious. There is no reason to think that the carbon-based chemicals in our brain are why we are conscious.
Just because it is not physics doesn't mean it is not science.
http://www.iomalfunction.blogspot.com <---- A collection of humorous one liners and science jokes.

stephentyrone
Posts: 778
Joined: Mon Aug 11, 2008 10:58 pm UTC
Location: Palo Alto, CA

Re: Let's talk about AI

Postby stephentyrone » Mon Dec 07, 2009 5:40 pm UTC

In particular, assuming you want to do science and not be a mystic: if you can't come up with an experiment that can tell the difference between a Turing machine and a brain, the reasonable belief is that either both are conscious, or both are not conscious. Hand-wavey arguments like "it's a machine, it can't be conscious" certainly aren't science, and probably aren't serious philosophy.

I never understood why some people at Cal were so awed by John Searle; the Chinese Room argument seems pretty weak, and his early work on speech acts has aged poorly (viewed from my computational and cognitive-sciencey linguistics background). I assume that he did something else to achieve his relative prominence, but every time I asked my philosophy grad student friends at Cal why I should care about Searle, their answer was "he invented the Chinese Room". His primary claim to status in my mind is that he was an early FSM supporter, but that's hardly an academic credential.
GENERATION -16 + 31i: The first time you see this, copy it into your sig on any forum. Square it, and then add i to the generation.

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Tue Dec 08, 2009 12:26 pm UTC

The problem with consciousness is that it's easy to dismiss as something mystical and metaphysical whenever someone brings it up. Sure, when someone says "consciousness", you might think he's talking about the abstract concept of the mind, something related to a soul, or maybe just pure awareness. The thing is that all three of these can be linked to or interpreted as consciousness, yet all three are completely different things. For one, the soul is a metaphysical and even a religious concept. The mind, to some people, is another word for the software of the brain, though there's no evidence that two identical brains could be running different minds.
Awareness, on the other hand, is another concept linked to consciousness, and is probably the most everyday meaning we attach to the word. "He's unconscious," the doctor says, which can easily be replaced by "He's lost consciousness". "I got hit in the head and lost consciousness for 10 minutes". I don't think anyone can argue that consciousness, by this meaning, is something unmeasurable or disputable. Sleep is an interesting case here. We're relatively unconscious when sleeping (i.e., not fully aware of our environment), yet we have dreams we can remember, and we can move limbs around (at least sometimes). We have fully conscious thoughts while dreaming (thoughts we would have while awake in a similar situation). And we have all experienced the states between awake and asleep, where you're aware of your room yet also feel there's another presence there.
So in this sense, we can measure consciousness, and there's a concrete, generally acceptable meaning to the word.
I can imagine the next question, though: "What does this have to do with anything we've said so far? I'm pretty sure the Chinese room isn't an argument about whether the room is asleep or awake."
Agreed. My main point is that consciousness is a misunderstood term in most contexts. We've been talking about it, dismissing it and defending it, without defining it, at least not in this thread.


A final point to all of this, which is a direct reply to the last post by stephentyrone.
I don't find the Chinese room argument weak. I think it's an important idea that, at the very least, helps you define how you perceive consciousness. Even if, on first hearing it, someone's reply is "But what if there's no such thing as consciousness? Can we just assume its existence for the sake of the argument?", then at least it has made that person consider the existence of the concept. I know the argument was originally meant to argue against machine consciousness, but I believe some ideas are worth their time even if their greatest contribution is that they've been shot down, in the same way that disproving something is just as important as proving something (and by that I don't mean that consciousness is a valid concept just because we haven't disproved it yet).

Also, I have a certain flaw, personally: I find it hard to dismiss an idea as rubbish when many people who are experts in the subject take it seriously. If a physicist claims something too outlandish for me to believe, and half the physicists in his field support it while the other half dismiss it (or even if all of them take the time to disprove it), then to me that means the idea wasn't as outlandish as I thought. It may still be proven invalid; I'm not talking about the validity of the idea here, only about whether it can be dismissed easily by someone like me, who knows nothing about the field. If the experts in the field took the time to disprove it, it's not complete rubbish; people don't spend time disproving ideas that are obviously rubbish to begin with. So if world-renowned experts in the fields relating to the argument have taken the time to argue for and against it, then I may have my own view on the matter, my own opinion, but I feel out of place calling it rubbish or weak.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

userxp
Posts: 436
Joined: Thu Jul 09, 2009 12:40 pm UTC

Re: Let's talk about AI

Postby userxp » Wed Dec 09, 2009 6:53 pm UTC

Independently of whether it has consciousness or not, our brain is a machine that works similarly to the Chinese room. It has inputs, an internal state, and outputs. Given the internal state and the input, you could predict the next state and the output.* So if consciousness exists, it is fully determined by the brain. And if consciousness had some physical existence, there would be no special evolutionary reason for it to appear.

* Assuming that the brain follows the laws of physics and does not use any magic. Some argue that quantum indeterminacy gives us free will, but we have no evidence that our brain uses any quantum effects, and even if it did, that would be randomness, not free will.
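
That inputs/state/outputs picture is just a finite state machine. In miniature (the states and stimuli below are arbitrary placeholders):

Code:

# Deterministic: the next state and the output follow entirely
# from the pair (current state, input).
transitions = {
    ("calm", "loud noise"):     ("startled", "flinch"),
    ("calm", "silence"):        ("calm", "nothing"),
    ("startled", "silence"):    ("calm", "relax"),
    ("startled", "loud noise"): ("startled", "flinch"),
}

state = "calm"
for stimulus in ["silence", "loud noise", "silence"]:
    state, output = transitions[(state, stimulus)]
    print(stimulus, "->", output, "| now", state)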

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Let's talk about AI

Postby graatz » Mon Dec 14, 2009 10:16 pm UTC

achilleas.k wrote:I don't think anyone can argue that consciousness, by this meaning, is something unmeasurable or disputable.


As I understand it, we actually have a pretty hard time determining whether humans are aware of their surroundings in some cases. So as far as I can tell, this isn't as clear-cut as you seem to want it to be. There's the additional problem that it doesn't extend properly to machine intelligence. Is a computer "aware" that someone is typing on its keyboard? Is that the same kind of awareness as a human being able to see written words and understand them as a method of communication?

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Let's talk about AI

Postby achilleas.k » Mon Dec 14, 2009 10:30 pm UTC

While I acknowledge all the problems you pointed out with my definition of consciousness, I wasn't suggesting we should go on and just assume that consciousness = awareness. I did present awareness as something directly measurable when, as you pointed out, it's not as easy as I may have led a reader to believe (guilty as charged). Your next point, that this doesn't extend properly to machine intelligence, is something I already addressed, however. You could say this definition isn't relevant, or is disputable in the current discussion, and you're probably right. My main point with that post was that we have been talking about consciousness without defining it in the context of the thread topic. The problem with consciousness is that there is no generally accepted definition of the word, and certain definitions have a more solid grounding than others. I used the awareness/"awakedness" definition as a meaning that is understood by everyone and can, even roughly, be measured.
So basically it was a proposal that we define what each of us is talking about before carrying on the discussion. I didn't intend to propose the definition, merely to suggest that we should define it somehow so we would all be talking about the same thing.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

nhamann
Posts: 2
Joined: Wed Dec 16, 2009 11:13 am UTC

Re: Let's talk about AI

Postby nhamann » Wed Dec 16, 2009 11:56 am UTC

The problem with the Chinese room argument, for me at least, is that its formulation seems to be incoherent. Searle assumes that the computer instructions of presumably some sort of natural language understanding module of an AI system could actually be printed out and enclosed in a book. This is a completely naive view of how natural language understanding actually occurs. Language is a social phenomenon, with all sorts of dependencies on cultural knowledge, and furthermore linguistic meaning is in part derived from knowledge of the real-world context in which utterances occur. Given this, Searle's argument breaks down, and has basically no relevance to the feasibility of AI.

There are some reasons to be excited about AI research these days: MIT just launched the Mind Machine Project, which from what I see is trying to return to a broader formulation of the AI problem, one that's been missing in the field for the past two decades. I'm not too convinced that this will be successful, but it's a step in the right direction.

One of the more exciting prospects I see for AI is the nascent field of Artificial General Intelligence, which essentially has some researchers trying to formulate a mathematical theory of general intelligence rather than trying to make programs that can solve specific problems very well (i.e. what constitutes reasoning strategies that are general enough to be applied across a wide variety of problem domains rather than just being able to solve one type of problem well, like Deep Blue and various expert systems). This field isn't very big right now, but I'm convinced that a refusal to tackle the broad problem of general intelligence has held back AI in recent decades.

I do want to mention another approach to AI, which is definitely fringe at this point and probably scares most people off as being too "out there," but it's Eliezer Yudkowsky's Friendly AI approach. I've seen a lot of people get turned off by all the Singularity talk, but if you look past all the seemingly zany futurism stuff, there's actually a valid point here: assuming that the rest of the AI field actually manages to solve the AGI problem (i.e. manages to come up with an abstract model for an intelligent entity that could reason generally about a wide variety of problem domains), is there any guarantee that such an AGI would form goals that are safe, let alone interesting to us meat machines?

Before you respond to that question, you would do well to avoid anthropomorphizing AIs, which, not being products of evolution like we are, would think in ways that are completely alien to us. So there's a valid research problem here: how do you formalize a notion of value or preference (which would almost certainly be required for building a human-level AI that could actually do things that other people found interesting)?
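
One standard way to formalize a preference is a real-valued utility function over outcomes, with the agent picking the action of highest expected utility. A toy sketch (the numbers are invented, and this is not a claim about how Friendly AI would actually specify values):

Code:

actions = {
    # action: list of (probability, utility-of-outcome) pairs
    "cautious": [(0.9, 1.0), (0.1, 0.0)],
    "reckless": [(0.5, 3.0), (0.5, -5.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print({a: expected_utility(o) for a, o in actions.items()}, "->", best)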

I'm not confident that we'll make AI within the near future (say, 30-50 years), but neither do I think the field has stagnated. There is some cause for optimism, just not the kind of unrestrained optimism that's plagued AI in the past.

MadRocketSci2
Posts: 69
Joined: Sat Nov 21, 2009 5:31 pm UTC

Re: Let's talk about AI

Postby MadRocketSci2 » Thu Dec 17, 2009 10:36 am UTC

I think the singularity guys are putting the cart way before the horse. It's going to be a hard enough problem getting an AI to have goals of any complexity in the first place. Let the guys who figure that out figure out how to make it friendly - they will be in the best position to do so, as they will be the ones with the particulars of how it works in the first place.

I don't see how FAI qualifies as an approach without an AI to work with.

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Let's talk about AI

Postby Xanthir » Thu Dec 17, 2009 1:56 pm UTC

MadRocketSci2 wrote:I think the singularity guys are putting the cart way before the horse. It's going to be a hard enough problem getting an AI to have goals of any complexity in the first place. Let the guys who figure that out figure out how to make it friendly - they will be in the best position to do so, as they will be the ones with the particulars of how it works in the first place.

I don't see how FAI qualifies as an approach without an AI to work with.

The scary part is the possibility of a 'hard takeoff'; the idea that, once you have an intelligent machine, it can self-alter to make itself more intelligent, which allows it to make itself more intelligent, which allows it to make itself more intelligent, and so on, rendering it very quickly into a weakly godlike intelligence. (Take this as an example of why this is plausible.) If this is an actual possibility (it's not certain one way or another yet, because we don't know enough about artificial intelligence to tell), then the people who create the first AI better hope they create a friendly AI as well. Note that an AI doesn't have to be evil in order to be non-friendly, it just has to have goal systems sufficiently different from us humans that it does things *we* think are bad but it thinks are good.

That's why EY stresses that we should proceed cautiously and not take the final step into full AI until we've figured out ethics and goal systems sufficiently to coherently tell the AI how to act.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

