Can a machine be conscious?

Please compose all posts in Emacs.

Moderators: phlip, Moderators General, Prelates

User avatar
insom
Posts: 40
Joined: Mon Feb 25, 2008 11:29 am UTC

Re: Can a machine be conscious?

Postby insom » Sun Sep 27, 2009 11:43 pm UTC

Goplat: What is the difference between simulation and reality?
In the case of the black hole simulation, it would of course only affect the objects in the simulation, just as the simulated human would only know the simulated world. Extending functionality beyond that depends on what I/O hardware you've got lying around.
Normal cynics think they are realists. Hardcore cynics know they are optimists.
Woo I draw stuff - how incredibly awesome

User avatar
nyeguy
Posts: 580
Joined: Sat Aug 04, 2007 5:59 pm UTC

Re: Can a machine be conscious?

Postby nyeguy » Mon Sep 28, 2009 2:55 am UTC

Goplat wrote:A simulation of a human wouldn't be actually conscious, any more than a simulation of a black hole would make the computer swallow up the Earth. Please, let's dispense with the delusion that a mere representation of something automatically gets some real-world properties of that thing - it's called magical thinking, and has been pretty well discredited by science.

But the primary function of a black hole is to suck in physical matter, something which, as you said, obviously would not happen in a simulation. But conscious thought doesn't need anything physical besides the medium it runs on. At this point, what is the difference between the calculations on a CPU and the electric signals shot through neurons in the brain? If the computer outputs to a speech synthesizer, or, more primitively, can output what it thinks to a text terminal, and communicate the same ideas in the same form a human does, where do you draw the distinction?

We don't make the mistake of assigning "magical thinking" to the computer... You're making the mistake of assigning a mystical property to the brain.

User avatar
Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Can a machine be conscious?

Postby Yakk » Mon Sep 28, 2009 3:02 am UTC

Goplat wrote:A simulation of a human wouldn't be actually conscious, any more than a simulation of a black hole would make the computer swallow up the Earth. Please, let's dispense with the delusion that a mere representation of something automatically gets some real-world properties of that thing - it's called magical thinking, and has been pretty well discredited by science.

How good is the simulation? Does the simulation cause real things to be sucked into it? I see you are implying it doesn't -- so the black hole simulation is highly limited in what domain it can interact with things. Quite reasonable -- simulations can be less than perfect.

The simulated human -- can it interact and speak with physical humans? Along that axis, it is pretty good at simulating a human! Along other axes (using its mass to attract other mass) the simulation is incomplete. Maybe it can attract some other simulated mass in the simulation.

I.e., imagine you had a simulation such that the humans in it (whom you could not speak to, nor hear) would move around the world in ways similar to the patterns of human movement. This simulation is more limited than the earlier one I mentioned -- so it might be simpler (possibly much simpler).

The fact that simulations can often be simplified along some axes, and that along those axes the simulation isn't all that similar to the thing being simulated, doesn't mean that along the axes that aren't being simplified the properties aren't real.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
PhantomPhanatic
Posts: 84
Joined: Thu Nov 13, 2008 5:32 am UTC

Re: Can a machine be conscious?

Postby PhantomPhanatic » Fri Nov 13, 2009 3:16 am UTC

I have some thoughts on what consciousness is that I'm not sure anyone here has put together yet. I've seen a few already stated but not combined into a singular definition.

I would say that consciousness exhibits the following five qualities:

  1. Receives stimulus or inputs
  2. Reacts to stimulus or inputs
  3. Is at least partially circular
  4. Is altered by receiving or reacting to stimulus or inputs
  5. Has the ability to generalize

When I say stimulus or inputs I mean an outside influence or an internal state feedback. A consciousness is something that can receive these and create some output. The requirements of circularity and alteration are linked but not always found together in other systems, so I thought it necessary to differentiate. Circularity requires some form of feedback. This feedback can be provided internal to the consciousness or external to it. The consciousness must be able to be altered by receiving or reacting to either external or internal inputs. Finally, the consciousness must have the ability to generalize inputs and/or outputs. This is very important, as I believe consciousness to be impossible without it. Generalizations allow the consciousness to react to stimuli that are not always clear cut. They spare the consciousness from having to analyze each new stimulus as completely different from the others. Without this, a coherent response to inputs outside the consciousness's current scope would not be possible.

This seems very lenient when it comes to defining what is conscious; however, I think it makes a very good distinction between things we believe to be unconscious and things we believe to be conscious.

In the case of the thermostat, it can be seen that it has three of the important qualities of consciousness (since it is a controller). What it lacks, and what most likely is perceived as setting it apart from consciousness, is the ability to alter itself, to learn or adapt. Even when we have a very complex static controller we do not perceive consciousness, because it is not modified by its inputs; however, even an adaptive controller might not fit the definition of conscious if it does not have the ability to generalize.
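
To make the distinction concrete, here is a minimal sketch in Python (the class names and the gain-update rule are invented for illustration, not taken from any real controller): the thermostat receives and reacts but is never altered, while the adaptive controller's own parameters are changed by the very inputs it reacts to. Neither one generalizes.

class Thermostat:
    # Static controller: receives stimulus (quality 1) and reacts (quality 2),
    # but nothing about it ever changes.
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def react(self, temperature):
        return "heat on" if temperature < self.setpoint else "heat off"

class AdaptiveController:
    # Adds circularity (quality 3) and alteration (quality 4): the error
    # signal feeds back and modifies the controller's own gain.
    def __init__(self, setpoint, gain=1.0, rate=0.05):
        self.setpoint = setpoint
        self.gain = gain
        self.rate = rate

    def react(self, temperature):
        error = self.setpoint - temperature
        self.gain += self.rate * error  # the input alters the controller itself
        return self.gain * error        # control output, fed back to the room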

It is my opinion that varying levels of possible feedback loops, alterability, and ability to generalize create varying levels of consciousness. A fly's brain has few feedback loops, very little that can be altered, and makes very broad generalizations about its inputs. As we progress through more and more complex species we find that the amount of feedback and alterability increases, while the generalizations become more complex.

Neural networks seem to exhibit all aspects of the criteria for consciousness. A hidden layer translates inputs (analogous to our spinal cord and the lower parts of our brain) and sends the translated inputs into the system (the reptilian part of the brain); this can then react by action or send outputs through a control loop (analogous to the cortex, albeit highly simplified). The neural network can be altered by some algorithm based on inputs and reactions (this is most likely achieved in the brain simply by growth due to repetition). Finally, the alteration of the neural network can store reactions and create generalizations, which can be understood easily by analyzing perceptron and Hopfield networks.
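
As a toy illustration, here is a single perceptron in Python (the data points and learning rate are made up for the example): every training input alters its weights (quality 4 above), and afterwards it classifies a point it never saw, which is a crude form of generalization (quality 5).

def train_perceptron(samples, epochs=25, lr=0.1):
    # Weights start at zero and are altered by every (input, target) pair.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # feedback: the error drives the alteration
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Four training points of a linearly separable rule...
w, b = train_perceptron([([0.1, 0.2], 0), ([0.9, 0.8], 1),
                         ([0.2, 0.1], 0), ([0.8, 0.9], 1)])
# ...and a point the network has never seen before:
print(classify(w, b, [0.7, 0.6]))  # prints 1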

If a neural network model can explain all aspects of consciousness and a neural network can be simulated on a von Neumann computer or constructed physically then I see no reason why a machine could not be conscious. We may actually be fairly close to creating consciousness in adaptive control systems right now, but we may not see much more advanced consciousness for quite some time.
We can lick gravity, but sometimes the paperwork is overwhelming.
-- Wernher Von Braun

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

Re: Can a machine be conscious?

Postby makc » Tue Nov 24, 2009 12:04 am UTC

Newbreed wrote:Is a machine conscious of its surroundings? Some cars turn on their windshield wipers when it begins to rain, so it is reacting to external stimuli. Does it know why it turns on its wipers, or is it simply a controlled response?
It's almost as if you're asking: could a car make a free choice and not turn its windshield wipers on... like the driver does in other cars that don't do it on their own.

Derezo
Posts: 1
Joined: Sun Sep 06, 2009 8:45 pm UTC

Re: Can a machine be conscious?

Postby Derezo » Tue Nov 24, 2009 12:29 am UTC

The problem with most people is the third point. Almost everyone will agree that we could do the first two eventually. However, the majority of the population believe in currently unmeasurable forces like souls or free will, and that would be a deal breaker, because for these people those forces are inextricably linked to conscious thought.
The existence of a soul isn't really a deal breaker. It's just another barrier in the process. These souls need to have a cause, and a source, and some sort of substance that can be tapped into.

I believe that the sun is the largest source of souls on our planet, but that the soul is the consciousness of matter having influence on its environment, creating life and everything we see around us. We are star stuff, after all.

Even the most improbable of things happens eventually. It's just a matter of where and when, because time is just a construct which prevents everything from happening all at once. If it all happened at once, we'd never get to see it and experience it! :)
Machines will probably someday, somewhere, take souls.. but I'd rather it didn't happen on Earth. I'd rather the sun wipe all this crap out and start over as higher-level beings, having already experienced this stage of our development. After we've gained insight about the infection of soulless creatures that run our countries and economies, we'll develop an immunity to it as a planetary system, ushering in the 'golden age' of heaven on earth that seems to be all the rage these days.. we won't be humans anymore, we'll be something more. I think the recent increase in earthquakes, hurricanes, global warming and more is related to an impending doom for this civilization... but if we do have souls, and they are the consciousness of the stars, we'll be back. We're not going anywhere.

User avatar
headprogrammingczar
Posts: 3072
Joined: Mon Oct 22, 2007 5:28 pm UTC
Location: Beaming you up

Re: Can a machine be conscious?

Postby headprogrammingczar » Tue Nov 24, 2009 2:55 am UTC

Someone just won a Time Cube award.
<quintopia> You're not crazy. you're the goddamn headprogrammingspock!
<Weeks> You're the goddamn headprogrammingspock!
<Cheese> I love you

Derezo
Posts: 1
Joined: Sun Sep 06, 2009 8:45 pm UTC

Re: Can a machine be conscious?

Postby Derezo » Tue Nov 24, 2009 7:27 am UTC

An interesting site, but how does that relate to my post? :?

[edit] It took a while, but I found it.

The Sun has ruled
over the Light on Earth for eons and
might just fight back with another
"Big Bang Catastrope"if Dark rules
over the light from the highest office
on Earth.


This guy's a bit extreme though :P

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Nov 24, 2009 3:02 pm UTC

Derezo wrote:Machines will probably someday, somewhere, take souls.. but I'd rather it didn't happen on Earth. I'd rather the sun wipe all this crap out and start over as higher level beings, having already experienced this stage of our development. After we've gained insight about the infection of soulless creatures that run our countries and economies, we'll develop an immunity to it as a planetary system, ushering in the 'golden age' of heaven on earth that seems to be all the rage these days.. we wont be humans anymore, we'll be something more. I think recent increase in earthquakes, hurricanes, global warming and more are related to an impending doom for this civilization... but if we do have souls, and they are the consciousness of the stars, we'll be back. We're not going anywhere.

This is the TimeCube-award-winning part. It's like a train-wreck occurring right in front of you. You keep wanting to look away, but something draws your eye, and it's always worse than what came before.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Derezo
Posts: 1
Joined: Sun Sep 06, 2009 8:45 pm UTC

Re: Can a machine be conscious?

Postby Derezo » Tue Nov 24, 2009 9:23 pm UTC

I must admit, the words were just sort of.. appearing there on my screen without planning. lol

I think that TimeCube guy inspired something in the TV show LOST. He's basically saying we're all spread out over the world and we're not supposed to be; we belong on specific islands.. and when we hop off our islands, bad things happen to the people who are still on the island. If we don't go back to the island, imminent doom awaits.. heh

WE'VE GOTTA GO BACK KATE!

monroetransfer
Posts: 26
Joined: Wed Aug 27, 2008 1:03 am UTC

Re: Can a machine be conscious?

Postby monroetransfer » Wed Nov 25, 2009 6:07 am UTC

Goplat wrote:A simulation of a human wouldn't be actually conscious, any more than a simulation of a black hole would make the computer swallow up the Earth. Please, let's dispense with the delusion that a mere representation of something automatically gets some real-world properties of that thing - it's called magical thinking, and has been pretty well discredited by science.


The reason a simulation[1] of a black hole does not swallow up the Earth is that part of what's necessary to define a black hole is the universe it's embedded in. An accurate simulation of a black hole would certainly swallow up anything it came sufficiently close to in the simulated world. I think your confusion over this matter results from a sort of category error. We should no more expect a simulation of a black hole to swallow up our world than we should expect a simulation of an atom (which is much easier, and practically feasible in our time) to start interacting with the atoms of our world.

The function of the human mind, however, does not really derive its definition from what space it is embedded in. All that is really necessary to define the human mind is to accurately model the functions of and relationships between its parts. How could it be otherwise? You are no longer made of any of the same stuff as you were at birth, and yet your mind is as real now as it was then[2].

But this is intuitive and lacking rigour. A stronger argument runs: if I were to incrementally replace parts of your brain with machine parts which fulfill the same function, that is, blend in perfectly with the natives, eventually we would get to the point where you were all machine. We cannot speak meaningfully of a point at which your mind stopped being "real", because such a point does not exist. The medium of computation is irrelevant. This notion of "parts" extends to any part of the nervous system, including the nerves in the ends of the fingers, etc.

As well, the human mind has certain expectations about the hardware it has to work with -- that is, it expects a normal human body through which to interact with its world. This is all to say: you are you whether I put your body (brain and all) on Venus, on Earth, or anywhere. To your mind, the difference is only in the phenomena it observes through its senses. The world you experience, in fact, is and has always been a sort of simulation, filtered from the raw data gathered by your sense organs and represented in your mind. It's more sophisticated than simply being projected on a screen, but to be able to perform computations on the data you receive -- such as storing it in memory, categorizing it by colour, sorting it by shape, figuring out perspective and distance from certain visual cues, or discerning different voices or recognizing language in a piece of music or sound -- you need to be able to adapt the raw data and store it in some sense in your brain.

So if you embed a simulation of a human body, with its brain, in a virtual world capable of sustaining that simulation, the mind of the simulation and the mind of the meat-brain watching it would both exist in the same sense. Again, the medium of computation is irrelevant -- whether it be a meat-brain or a simulated one. The electric pulses and charges which take place in the computer's memory, CPU, etc. are just as real as the electric pulses and charges which take place in the meat-brain. So not only does it not matter to the human brain whether it (and its body) is on Venus or on Earth, it also does not matter whether it is in our universe or an accurately simulated copy, or even an altered copy, or even a totally different simulated universe.

The function of the human mind does not depend on the universe in which it is embedded, whether accurate simulation or our physical universe, but the content of the mind does. The content of your mind changes whether you're on top of Mount Everest, in your backyard, on the moon, or in a simulated reality of hovering cubes, jumping from one to the next, wondering how to escape, or whether that's all there is (it would be very boring after a while). In the same way that the simulated black hole could only suck in matter in its simulated universe, the simulated human mind could only know its simulated universe.

I hope this has clarified your thoughts.



[1]: ...and by simulation we mean an accurate one. Keep this in mind as you read what I've written. I think perhaps you take "simulation" to be equivalent to "simplification" or "abstraction". My clue is that you also refer to a "representation", which connotes a certain degree of simplification, or of hiding the real thing behind merely a reference or pointer. This is an error; the concepts are not identical. The simulations I describe are very complicated systems. To simplify them would break them. Don't be so vain as to presume that the human mind is so complicated a computer that it cannot be modeled in any other computer[3], even if we let technology advance and evolve until the end of time. That such a faithful simulation is not available to us now does not mean it may never be.

[2]: ...well, I suppose we should bump the lower bound up to about the age of 10, since at birth you really did have a totally different thought process, brain chemistry, etc.

[3]: A deep and powerful property of the computer you and I are typing on (and of the type of computer it is presumed we are using for our simulations) is that they are universal computing machines. They are capable of simulating inside themselves any other computer, and can carry out any computation that is possible[4] (a minimal interpreter sketch follows these footnotes). When we consider that a computation is simply the evolution of a formal system according to laws of production which carry us from one state to the next, and if we concede that the evolution of the state of the universe from one state to the next is governed by finite rules of production (God doesn't play dice, and there are a finite number of laws in nature), we can conclude that the universe is a running calculation, and that given a sufficiently powerful computer (hint: such a computer could not exist in our universe by definition, unless it were our universe, which isn't the point of the exercise) we could simulate it perfectly. See this relevant xkcd comic: http://xkcd.com/505/.

[4]: It may surprise you to hear that some computations are not possible. See: http://en.wikipedia.org/wiki/Computability_theory.
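
To ground footnote [3] a little, here is a minimal Turing machine interpreter in Python (the rule table is a toy machine I made up, which flips the bits of its input and halts). The point is only that a general-purpose computer can simulate any other machine, given nothing but that machine's rule table.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    # rules maps (state, symbol) -> (new_state, symbol_to_write, head_move)
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells) if cells[i] != blank]

# A toy machine: flip every bit, halt at the first blank cell.
flipper = {
    ("start", 0): ("start", 1, "R"),
    ("start", 1): ("start", 0, "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flipper, [1, 0, 1, 1]))  # prints [0, 1, 0, 0]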

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Thu Nov 26, 2009 4:24 am UTC

monroetransfer: I'm pretty sure there's a law against nested footnotes and if there's not there should be :)

Apologies for quoting a 3-month-old post, but I got here late :(

joejoefine wrote:I think an important element in helping to set apart "true" human consciousness as well as free will from a very complicated set of programs that could mimic human behaviour closely, is the ability of the machine to modify its own code ("beliefs") in accordance with changes in the environment. I like to use this definition of consciousness as given by 0xBADFEED.

So far the "learning" process is limited to applying received sensory inputs to a pre-programmed code, which means the machine can only exhibit conscious behaviour up to a certain extent. "Learning" (requirement 3) should also include the ability to adapt to situations and grow beyond the original programming parameters (i.e. remember Data from Star Trek trying to discover emotions - trying to grow beyond his original programming).

The difference between a machine with pre-programmed code and a human being is that, while both can be placed into a new situation and experience shock or panic (ignoring the whole argument of whether a machine is conscious of the feeling in the same way that a human is), the human will gradually acclimatize to its new environment and create new rules with which to make progress. The machine, not being programmed with a set of code that deals with the new experience, would never be able to learn how to deal with it, or even to know what a preferred resolution should be.

Just thinking of some possibilities: I imagine creating code that works in the same way as evolution - it keeps pumping out new instructions until it finds something that "works", or that is aligned with enhancing or maintaining the original programming code. I suppose that's how early cavemen did it - learning that fire isn't a nice thing to touch, trying to put a piece of wood in it, observing that it could be used as a torch (etc.). But the problem is you need to have some pre-programmed way to analyze the new situation in order to make any sense out of it - what "works"? What will determine whether the new experience helps or hinders the machine in its pre-programmed directives? To be able to analyze the billions of new situations we deal with every day, a machine would need a pre-programmed "analysis code" for each of these situations. This means it has to store a TON of data and instructional code for each new situation before it encounters it, which doesn't make sense and would probably require far too much storage space. If these analysis codes aren't preprogrammed, then it circles back to the original problem - how can a machine create its own code, such that it is meaningful in terms of advancing or maintaining its original programming (i.e. personality)?

The program changing its own code isn't as tricky as it seems. Pre-programmed directives, or ways to analyse the appropriateness (or goodness) of behaviour, aren't really necessary. If you're simulating evolution then you're also simulating natural selection. Make the programs reproduce and mix with other programs. Add some randomness that introduces new bits of code and removes some others. "Force" programs to reproduce regularly throughout their lifetime and, if done properly, natural selection will populate the environment with the programs that are a) the best at reproduction and b) the best survivors.
This forcing of reproduction, and the other things we may add in the first versions of the code, is no different from a person's or animal's instinctive urge to reproduce and survive. Granted, those urges may be results of evolution rather than prerequisites, but the same goes for the programs or AI agents. Provided they can discover the code that causes reproduction urges and survival instincts, the ones that stumble upon it will produce a lot more offspring, thus allowing the best genes to dominate.
These concepts aren't entirely new to AI. Genetic programming is a very active research topic that does just this (at a simpler and more controlled level, of course). The point I'm making here wrt the thread topic is that it's wrong to say that machines can't be conscious because they can't change their code and we have to tell them what's good and what's bad for them. The solution to the problem of learning and adaptive agents isn't really that advanced.
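
For illustration, a bare-bones version of that loop in Python (the bit-string genomes and the fitness function are stand-ins I invented; real genetic programming evolves actual program structures): nothing tells any individual what is "good" -- selection simply favors whatever survives and reproduces.

import random

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02):
    # Genomes are bit strings; fitness stands in for "survives and reproduces".
    fitness = sum
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half gets to reproduce.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # crossover point
            child = a[:cut] + b[cut:]              # mix two "programs"
            # Mutation: randomly flip some "bits of code".
            children.append([bit ^ 1 if random.random() < mutation_rate else bit
                             for bit in child])
        population = children
    return max(population, key=fitness)

print(evolve())  # after enough generations, mostly 1s
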
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

Tyr_oathkeeper
Posts: 30
Joined: Tue Mar 18, 2008 4:03 am UTC

Re: Can a machine be conscious?

Postby Tyr_oathkeeper » Thu Nov 26, 2009 5:32 am UTC

monroetransfer wrote: A stronger argument runs: if I were to incrementally replace parts of your brain with machine parts which fulfill the same function, that is, blend in perfectly with the natives, eventually we would get to the point where you were all machine.


Get out of my head monroetransfer.
I was formulating a post on just that in my head right before I read your post.

How about this thought experiment:
I put my brain in a jar and hook up all the nerves (including the optic nerve) to a virtual reality simulation. Say the whole thing was somewhat self-contained. In the virtual world, I believe I am moving my arms when I send the signal to where my arms used to be. The sim even tracks where they would be and draws them into my field of view. In my little world I interact with entities that can pass the Turing test.
If you look at my brain-in-a-jar, you wouldn't be able to tell I was reacting to external stimuli. I do not react to my physical environment.
Let's take it a step further and say I've suffered a little brain damage and can't convert short-term memory to long-term memory anymore (but still retain the old memories). This would hamper my ability to learn from my experiences in the long term, but I could still do so in the short term.
I would think I still have consciousness, as my long-term memory holds memories of me being conscious, but would I still? I fail most of the rules set out so far. What if I could still form long-term memories? Or what if, instead of being a brain in a jar, I was merely playing Team Fortress 2 and was so engrossed in it that I lost track of my surroundings?

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Thu Nov 26, 2009 5:51 am UTC

Good old patient HM... or, as I like to call it, "the Memento effect".

Interesting tidbit: did you know that, although HM couldn't create new long-term memories after his lobectomy, he did manage to learn to draw certain shapes and got better at it over time? This was true for a couple of other motor learning tasks as well. Now, all this was of course fascinating in terms of our understanding of the brain and its regions, but what does it mean in a consciousness discussion?
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

monroetransfer
Posts: 26
Joined: Wed Aug 27, 2008 1:03 am UTC

Re: Can a machine be conscious?

Postby monroetransfer » Sat Nov 28, 2009 12:44 pm UTC

achilleas.k wrote:monroetransfer: I'm pretty sure there's a law against nested footnotes and if there's not there should be :)


Oh god you actually read that far? Thanks for taking the time!

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Sat Nov 28, 2009 6:04 pm UTC

monroetransfer wrote:Oh god you actually read that far? Thanks for taking the time!

I get carried away. It's a curse. Sometimes I browse a thread reading the longer posts while ignoring the 1-2 liners.
HELP ME! I HAVE PROBLEMS!!!
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Mon Dec 14, 2009 6:45 pm UTC

hammerkrieg wrote:I'm interested in hearing arguments that machines cannot be conscious, if only to rebut them. :D


For one, we've already stumbled upon at least one definitional problem: no one in this thread has adequately said what it means to be conscious. I'd also challenge you to come up with a definition of "machine". If you are already calling the human body a "biological machine," you're just winning by definition. I also think it's slightly unfair to claim that we could just make a machine that perfectly simulates the biological functions of a human. The claim that a "machine" could, for all intents and purposes, be indistinguishable from a human but still be a machine reduces to just calling humans "biological machines," in which case we already build loads of them -- the process is called procreation.

So the question I ask you... could we humans build something distinguishably a machine (say, cameras for eyes, a hard drive for memory, etc.) that appears to have thoughts and feelings indistinguishable from the so-called human experience?

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Mon Dec 14, 2009 9:49 pm UTC

graatz wrote:For one, we've already stumbled upon at least one definitions problem. No one in this thread has adequately said what it means to be conscious.

That's sort of what I said in another topic, which slowly evolved to have major points in common with this one.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Dec 15, 2009 5:18 pm UTC

graatz wrote:So the question I ask you... could we humans build something distinguishably a machine (say, cameras for eyes, a hard drive for memory, etc.) that appears to have thoughts and feelings indistinguishable from the so-called human experience?

Yes.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Tue Dec 15, 2009 7:48 pm UTC

Xanthir wrote:Yes.


What makes you so sure? Any sort of supporting arguments?

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Wed Dec 16, 2009 8:54 pm UTC

Sure. As far as we've been able to tell, the brain doesn't contain magic pixie dust. Thus, we can simulate it. It also doesn't appear that our sensory inputs are magically tied to our notion of consciousness; a human full-brain emulation will of course need *some* sort of sense data to avoid going crazy, but whether this data comes from organic eyes or mechanical cameras doesn't appear to matter.

Finally, once we have the computing power necessary to do a full-brain emulation, we'll have the power to actually explore consciousness modification in silicon (or whatever computing substrate we're using at the time). Thus we should be able to begin optimizing minds and separating them from the biological roots that we're emulating.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Wed Dec 16, 2009 10:23 pm UTC

Xanthir wrote:As far as we've been able to tell, the brain doesn't contain magic pixie dust. Thus, we can simulate it.


It seems to me that you are already operating under the notion that a human is, for all intents and purposes, a machine. How much of it are we allowed to "simulate" until it turns into "replicate"? Do you mean simulating the neural net? With no regard to the chemical/biological aspects of brain function? Are you only concerned with the flow of electricity? What if that creates an "intelligence" that is fundamentally inhuman? Emotionless, unable to explain the warm, fuzzy feeling of watching the sunset or reading a good novel? In reality we know so little about the mind. How do we remember things, by the way, and is it important that we forget some things? Of course, we still have no working definition of what it means to be conscious.

It also doesn't appear that our sensory inputs are magically tied to our notion of consciousness; a human full-brain emulation will of course need *some* sort of sense data to avoid going crazy, but whether this data comes from organic eyes or mechanical cameras doesn't appear to matter.


It probably matters if we were to try to, say, map a human brain into a computer simulation. Do you have any idea how much of our brain is devoted to the actual mechanics of the human body? Quite a lot. I should think that, for example, if you tried to download a human brain into a robot with gears, motors, optical sensors, etc., it would be like trying to install a Mac program on a PC: incompatible. That being the case, I suppose what you're proposing is to replace the brain functions that normally control a human body so that they control the robot parts. And at that point, you're mutating the neural net so much that it's no longer certain that whatever makes consciousness happen will still happen.

Finally, once we have the computing power necessary to do a full-brain emulation, we'll have the power to actually explore consciousness modification in silicon (or whatever computing substrate we're using at the time). Thus we should be able to begin optimizing minds and separating them from the biological roots that we're emulating.


We really don't need to download a human brain in order to study it. fMRI is a wonderful tool, which has unfortunately not gotten us much closer to explaining human consciousness.

I'm not saying that I necessarily disagree with your position, but it sounds like what you are arguing is this:

1) Humans are nothing but organic machines.
2) We can create robotics that behave just like the machinery of humans.
3) Given (1) and (2), it's only a matter of time before we create a conscious machine.

What I'm saying is that this doesn't get to the heart of the matter. You are taking it as an assumption that humans are nothing but organic machines, in which case we are already machines with consciousness, so what's the point of further debate? Additionally, your solution to the problem is to create a robot that is nearly indistinguishable from a human and then upload a map of a human brain. This seriously misses the point. Classically, the question is whether an AI we programmed to learn could actually become conscious. You aren't proposing to program an AI; you're proposing to repurpose an existing intelligence for an artificial human, akin to Frankenstein's monster.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Thu Dec 17, 2009 2:29 am UTC

graatz wrote:
Xanthir wrote:As far as we've been able to tell, the brain doesn't contain magic pixie dust. Thus, we can simulate it.

It seems to me that you are already operating under the notion that a human is, for all intents and purposes, a machine.

No, not really. These are my assumptions:
1) The brain operates according to physics, not magic.
2) Physics is computable.
3) Much of the physical detail of neurons is actually irrelevant to their operation and can be abstracted away (this isn't actually necessary to my hypothesis, but it means that we'll be able to do full-brain emulation much sooner; having to simulate the quantum interactions within a neuron would require orders of magnitude more hardware - luckily it's almost certainly not necessary).

Of course, you may consider "operates according to physics, not magic" as meaning "basically a machine", in which case we either agree completely or disagree with no hope of reconciliation.

How much of it are we allowed to "simulate" until it turns into "replicate"?

As much as necessary, hopefully not all the way down to the quantum structure of neurons. Still, as long as we're not building the computers out of biological machinery, we fit your definition of being composed of computing machinery.

Do you mean simulating the neural net? With no regard to the chemical/biological aspects of brain function? Are you only concerned with the flow of electricity? What if that creates an "intelligence" that is fundamentally inhuman? Emotionless, unable to explain the warm, fuzzy feeling of watching the sunset or reading a good novel? In reality we know so little about the mind.

You're positing magic. We have never observed magic, and don't expect to. The brain almost certainly operates on purely physical principles, and consciousness very likely originates at a level far above the actual physical substrate. Most likely we can do full-brain emulation either by black-box simulating neurons, or possibly going one level down and simulating neurotransmitters and calcium channels. Actually dropping to atomic or subatomic levels of details is almost certainly unnecessary. It's also very likely that we can do brain *construction* at a level above that, once we get the hang of it. It's probable that once we're more certain of how brains work, we can produce artificial minds at the level of thought modules, above the level of individual neurons.
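
For a sense of what "black-box simulating neurons" can look like, here is a leaky integrate-and-fire neuron in Python (a standard textbook abstraction; the parameter values are arbitrary). It reproduces spiking behaviour with a few lines of arithmetic and no atomic or quantum detail at all.

def lif_neuron(input_current, tau=10.0, threshold=1.0, dt=1.0):
    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates its input, and fires (then resets) at threshold.
    v, spikes = 0.0, []
    for t, current in enumerate(input_current):
        v += dt * (-v / tau + current)
        if v >= threshold:
            spikes.append(t)  # emit a spike at time step t
            v = 0.0           # reset the membrane potential
    return spikes

print(lif_neuron([0.2] * 50))  # constant drive yields a regular spike train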

How do we remember things, by the way, and is it important that we forget some things? Of course, we still have no working definition of what it means to be conscious.

These are technical details that aren't expected to be fundamentally difficult to unravel; it just requires a pretty heavy level of detail that we haven't achieved yet.

Do you have any idea how much of our brain is devoted to the actual mechanics of the human body? Quite a lot. I should think that, for example, if you tried to download a human brain into a robot with gears, motors, optical sensors, etc., it would be like trying to install a Mac program on a PC: incompatible. That being the case, I suppose what you're proposing is to replace the brain functions that normally control a human body so that they control the robot parts. And at that point, you're mutating the neural net so much that it's no longer certain that whatever makes consciousness happen will still happen.

You don't have to change the brain. If you're doing human-brain emulation, you can just provide sensory inputs that are roughly analogous to that which the brain expects to receive. We're already doing this to a small degree - paralysis victims can be given cybernetic extensions that they learn to control using parts of their brain that they no longer have use for.

We really don't need to download a human brain in order to study it. fMRI is a wonderful tool, which has unfortunately not gotten us much closer to explaining human consciousness.

You are vastly underestimating the power of a faithful fully electronic simulation. fMRI is severely limited in what it can do. We can imagine future enhancements that are smaller, more powerful, and less intrusive, but still, there's a huge difference between hooking a scanner up to a living brain and telling it to think of blue triangles, and monitoring a sufficiently faithful simulation of the same. The latter can be run enormously faster and in great quantities. You can also modify the latter without violating human-based ethics standards (it's likely that we'll develop standards of ethics for the treatment of sentient machines and human-brain emulations as well, but such rules do not exist yet).

Additionally, your solution to the problem is to create a robot that is nearly indistinguishable from a human and then upload a map of a human brain. This seriously misses the point. Classically, the question is whether an AI we programmed to learn could actually become conscious. You aren't proposing to program an AI; you're proposing to repurpose an existing intelligence for an artificial human, akin to Frankenstein's monster.
That's not really my 'solution'; it's simply *a* solution to the problem, one that is basically inevitable. Even if we never come up with a novel form of intelligence, as long as Moore's Law holds we'll eventually reach the point where full-brain emulation is possible, and then we'll have an intelligent machine.

I do, however, personally believe that we'll find a way to create novel forms of intelligence in digital media, and fear that it will be both too alien and too powerful for us to properly control. I'm not expecting a Skynet-type situation (that's honestly ridiculous), but a machine intelligence that has "get smarter" as a goal and cannibalizes the Earth for computronium is just as deadly, if not more so.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Thu Dec 17, 2009 2:27 pm UTC

Xanthir wrote:Still, as long as we're not building the computers out of biological machinery, we fit your definition of being composed of computing machinery.


I'm not entirely convinced that we could produce something inorganic that is conscious, and yes I realize that we still don't have a working definition. My guess is that by simulating organic processes, we'll end up with simulated intelligence, simulated emotions, simulated self-awareness. And I think that after five minutes of conversation with this simulated life form, you'd be able to tell that something was fundamentally "off." My basis for this guess is that those things (intelligence, emotions, and self-awareness) could derive not from the electronic processes, but rather from the organic nature of brain function. Perhaps if nanobots and quantum machines could replicate the organic process, there would be no difference, but at that point we've just found a novel, expensive way to procreate.

The brain almost certainly operates on purely physical principles, and consciousness very likely originates at a level far above the actual physical substrate.


"Almost certainly" and "very likely"? It almost sounds like your argument is stemming from some kind of confirmation bias. You know what you suspect is true. Building an inorganic machine with some kind of consciousness would confirm what you're already assuming is the case.

Let me try this another way. Let's say you're in some kind of observation room where you can see two people in two separate rooms. You are told that one of them is hooked up to a "sending" machine and the other is hooked up to a "receiving" machine, which work as follows: the sender just does whatever he wants. Paces, flails his arms, sings to himself, whatever he wants to while under observation. His brain function is mapped to the receiver and causes him to perform the same actions. You'd probably be tempted to say that one is moving "of his own free will" and the other is moving "against his will." Let's go so far as to assume that the sensation of free will is just another brain function that would get sent to the receiver. The receiver might end up thinking that he's choosing his actions of his own accord. But as an outside observer with knowledge of what's going on, you're still (likely) going to say that the receiver's not the one in control. Same observable behavior, same thought processes, but we still want to say that there's a difference.

Similarly, a robot that is given the same brain processes as a human would be fundamentally different. Not actually conscious, not actually "at the helm." Not behaving of its own accord.

User avatar
BlackSails
Posts: 5315
Joined: Thu Dec 20, 2007 5:48 am UTC

Re: Can a machine be conscious?

Postby BlackSails » Thu Dec 17, 2009 2:47 pm UTC

graatz wrote:
I'm not entirely convinced that we could produce something inorganic that is conscious.


Wöhler's urea synthesis shows there is no fundamental difference between organic and inorganic systems.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Thu Dec 17, 2009 3:17 pm UTC

graatz wrote:My guess is that by simulating organic processes, we'll end up with simulated intelligence, simulated emotions, simulated self-awareness. And I think that after five minutes of conversation with this simulated life form, you'd be able to tell that something was fundamentally "off." My basis for this guess is that those things (intelligence, emotions, and self-awareness) could derive not from the electronic processes, but rather from the organic nature of brain function. Perhaps if nanobots and quantum machines could replicate the organic process, there would be no difference, but at that point we've just found a novel, expensive way to procreate.

You're basically just saying:
(1) If you build it with inorganic materials it won't work.
(2) If you build it out of organic materials, you're cheating because it's really just another human.
(3) QED

Sorry, point (1) is a rather large claim and I think it needs more justification than merely your gut-feeling. I've never seen anything to indicate this is even remotely true.

"Almost certainly" and "very likely"? It almost sounds like your argument is stemming from some kind of confirmation bias. You know what you suspect is true. Building an inorganic machine with some kind of consciousness would confirm what you're already assuming is the case.
Let me try this another way. Let's say you're in some kind of observation room where you can see two people in two separate rooms. You are told that one of them is hooked up to a "sending" machine and the other is hooked up to a "receiving" machine, which work as follows: the sender just does whatever he wants. Paces, flails his arms, sings to himself, whatever he wants to while under observation. His brain function is mapped to the receiver and causes him to perform the same actions. You'd probably be tempted to say that one is moving "of his own free will" and the other is moving "against his will." Let's go so far as to assume that the sensation of free will is just another brain function that would get sent to the receiver. The receiver might end up thinking that he's choosing his actions of his own accord. But as an outside observer with knowledge of what's going on, you're still (likely) going to say that the receiver's not the one in control. Same observable behavior, same thought processes, but we still want to say that there's a difference.

Similarly, a robot that is given the same brain processes as a human would be fundamentally different. Not actually conscious, not actually "at the helm." Not behaving of its own accord.


We only say that the "receiver" is moving "against his will" because we are aware that there is one consciousness overriding another consciousness. This is not the case with a conscious machine. It is one consciousness acting of its own accord (in so far as anyone is able to act of their own accord). Also, now you're conflating "free-will" and consciousness and muddying everything up. You're saying the machine is not conscious because it doesn't have "free-will". To make that claim you'd first have to successfully argue that humans have free-will and that there is some identifiable difference that robs a machine's consciousness of its free-will. But free-will != consciousness; it's a completely separate discussion.

You're basically starting with the assumption that human-like consciousness can't be replicated except in a human and then arguing backwards. It sounds like you essentially believe in dualism. Is this so?

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Thu Dec 17, 2009 4:44 pm UTC

graatz wrote:I'm not entirely convinced that we could produce something inorganic that is conscious, and yes I realize that we still don't have a working definition. My guess is that by simulating organic processes, we'll end up with simulated intelligence, simulated emotions, simulated self-awareness. And I think that after five minutes of conversation with this simulated life form, you'd be able to tell that something was fundamentally "off." My basis for this guess is that those things (intelligence, emotions, and self-awareness) could derive not from the electronic processes, but rather from the organic nature of brain function. Perhaps if nanobots and quantum machines could replicate the organic process, there would be no difference, but at that point we've just found a novel, expensive way to procreate.

You are now *very explicitly* assuming that biology works by magic. It doesn't. Stop assuming this without evidence (hint: there isn't any, and there's no indication that we'll ever find any).

The brain almost certainly operates on purely physical principles, and consciousness very likely originates at a level far above the actual physical substrate.


"Almost certainly" and "very likely"? It almost sounds like your argument is stemming from some kind of confirmation bias. You know what you suspect is true. Building an inorganic machine with some kind of consciousness would confirm what you're already assuming is the case.

No, I'm just hedging my bets. It's certainly *possible* that the brain contains magic pixie dust, it's just tremendously unlikely. It's also possible that consciousness originates from the quantum makeup of neurons, and thus to do a brain emulation we have to run a quantum physics simulator. I think that this is also very unlikely. If I were speaking more casually, I'd certainly omit the hedges, but this is the internet and I have to be more precise.

Let me try this another way. Let's say you're in some kind of observation room where you can see two people in two separate rooms. You are told that one of them is hooked up to a "sending" machine and the other is hooked up to a "receiving" machine, which work as follows: the sender just does whatever he wants. Paces, flails his arms, sings to himself, whatever he wants to while under observation. His brain function is mapped to the receiver and causes him to perform the same actions. You'd probably be tempted to say that one is moving "of his own free will" and the other is moving "against his will." Let's go so far as to assume that the sensation of free will is just another brain function that would get sent to the receiver. The receiver might end up thinking that he's choosing his actions of his own accord. But as an outside observer with knowledge of what's going on, you're still (likely) going to say that the receiver's not the one in control. Same observable behavior, same thought processes, but we still want to say that there's a difference.

Wait, you're vague on a very important detail here. Does the receiving machine just cut the receiver's brain out of the loop, so that it's still there and thinking and such, but none of its signals down to the body get through (the sender's signals get sent instead)? Or does the receiving machine actually modify the receiver's brain to have the same thoughts as the sender?

In the former case, of course the receiver is acting against his will. He's got a separate mind that we can watch and see that it's not corresponding to what's happening in his body. If we hooked him up to his own sending machine and let it transmit to a third meat puppet, we could see that he's clearly doing things differently from what his original body is doing.

In the latter case, of course the receiver isn't acting against his will. By definition, his will originates from his mind, and the mind is being copied from the sender's. You essentially have just one person who happens to inhabit two bodies at once. You might be tempted to object with "but that's not the receiver's *original* mind!". No, it's not. But the receiver's original mind *isn't in that body anymore*. Maybe the receiving machine records it and stores it or something before overwriting the brain-program with the sender's. What's in the body is a single will that knows what it's doing, as much as the first dude's mind and body do.

Similarly, a robot that is given the same brain processes as a human would be fundamentally different. Not actually conscious, not actually "at the helm." Not behaving of its own accord.

I don't see how this has any connection to the previous paragraph. You appear to still be assuming magic biology and/or magic humanity that somehow renders machine intelligence not really real.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Thu Dec 17, 2009 6:07 pm UTC

0xBADFEED wrote:Sorry, point (1) is a rather large claim and I think it needs more justification than merely your gut-feeling. I've never seen anything to indicate this is even remotely true.


All I'm hearing from the other side is gut feelings, too. I'm not sure of point (1) to any degree, other than that I don't have substantial reason to think otherwise. All I'm saying is that we don't really seem to have a definition of consciousness that can extend to inorganic material. We don't really know why humans have consciousness, or whether other living creatures can be said to have it. I don't think it's laughably absurd to hold it plausible that consciousness stems from the organic mechanics of the brain, as opposed to what we might be able to emulate through inorganic substitutions. Admittedly, my understanding of chemistry and biology is nowhere near complete. A poster suggested I look into the work of Wöhler, which could be useful.

What I mean by point (2) is that if we are machines, and if we are conscious, then it's a trivial question to ask whether a machine can be conscious, don't you agree? I'm not trying to be overly metaphysical about this (I'm an atheist, and I think the concept of a soul that exists on some ethereal plane of existence as the "breath of life" is laughable). I'm just trying to make sense of this debate in some non-trivial way.

We only say that the "receiver" is moving "against his will" because we are aware that there is one consciousness overriding another consciousness. This is not the case with a conscious machine.


You're missing the point of this. My point is that we can't necessarily rely on identically human brain functions to know whether something is conscious. If we concede that the receiver isn't really moving at will, and if we concede that the receiver can believe he is moving of his own free will, then there's something other than brain function going on when we try to apply these kinds of definitions.

Also, now you're conflating "free-will" and consciousness and muddying everything up.


In the absence of a proposed definition of consciousness, I can only assume that self-awareness, emotions, the sensation of free will, or intelligence can substitute, as similarly hard-to-define properties of the human mind stemming from brain function.

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Thu Dec 17, 2009 7:05 pm UTC

Xanthir wrote:You are now *very explicitly* assuming that biology works by magic. It doesn't.


No, although I'm openly admitting that my knowledge of biology is lacking. :( What is the difference between an organic process and an inorganic process that achieve the same end by operating in essentially the same way? I don't know.

If you believe that a machine can be conscious, you seem to be merely stringing a couple logical assumptions together: 1) That the human mind is just an organic machine, 2) That even the most sophisticated organic processes can be duplicated through inorganic means, leading to 3) A robot entirely composed of inorganic processes that perfectly mimic our human organic processes would obviously have all of our properties of mind, including consciousness.

I don't at all disagree with point (1). I'm not entirely sold on point (2), not owing to my belief in magic pixie dust but possibly due to my lack of a PhD in biochemistry. Philosophically, I'm not altogether convinced that (1) + (2) = (3), due to the points below.

Also, I'm trying to make some kind of sense of this debate separate from "Well let's just build a robot that is indistinguishable from a human in all matters of brain function so that it picks up all the qualities of mind," which seems to trivialize the whole thing.

In the latter case, of course the receiver isn't acting against his will. By definition, his will originates from his mind, and the mind is being copied from the sender's. You essentially have just one person who happens to inhabit two bodies at once.


You won't like this (you as in probably everyone who reads these forums, because you are "real" science guys, as opposed to psychology, the black sheep of all sciences). You're forcing yourself to believe in a definition of free will that makes it a function of the brain, or of the human mind. I'm not convinced that's how we normally define it. I mean, a layperson will define free will as something like: "I wanted to do something, so I did it." Based on this definition, a more learned refinement might be something like: "Free will is the sensation a person gets when the brain acts out what it's programmed to do in response to stimuli such as hunger, lust, boredom, and the accessibility of food, sex, and an Xbox." Yet we can envision a person doing things they "wouldn't normally," possibly even thinking that they wanted to do them, while actually being deprived of any "say" in the matter. And I think the reason we can do that is because the Self isn't just some emergent property of a person's own brain function, but rather a collection of thoughts and perceptions held by other people. I have a Self because you perceive me as having one, and my concept of myself is defined by my ability to perceive the Self of me that exists in your mind.

:( I know this sounds freaky, outlandish, and just plain laughably weird if you haven't been exposed to social psychology, ego-states, etc. It actually isn't entirely outlandish; it's just hard to describe without sounding like the last episode of NGE. Think of it this way: for the first few months of our lives, we are nothing more than little critters that just do what we are wired to do. But in this process we perceive everything and everybody, and the way that we are treated eventually causes us to have a personality, an identity. We would never get this if we existed alone. Thoughts, feelings, emotions, free will: these are all things that we perceive of ourselves, yes, and they are caused by neural activity. But they don't take on meaning without being perceived and accepted by other people. A robot, even one perfectly mimicking human brain behavior, will probably be perceived by others as just a robot: not something actually thinking things or having its own wants, but just some sophisticated technology running an advanced computer program, doing what it's programmed to do. Unfairly, that's all we are, too (at least, as far as everyone in this thread seems to agree). But it's the perceptions of others that give meaning to free will, consciousness, etc. A "robot" can't do something it's not programmed to do. A human "can do anything." As long as that's what people believe, that's the only good way to define free will.

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Fri Dec 18, 2009 4:58 am UTC

graatz wrote:If you believe that a machine can be conscious, you seem to be merely stringing a couple of logical assumptions together: 1) that the human mind is just an organic machine, and 2) that even the most sophisticated organic processes can be duplicated through inorganic means, leading to 3) the conclusion that a robot entirely composed of inorganic processes that perfectly mimic our human organic processes would obviously have all of our properties of mind, including consciousness.

Again, no I'm not. We just had this conversation, and I laid out precisely what my assumptions were a few posts back. Which one of them do you disagree with?

graatz wrote:I don't at all disagree with point (1). I'm not entirely sold on point (2), not owing to my belief in magic pixie dust but possibly due to my lack of a PhD in biochemistry. Philosophically, I'm not altogether convinced that (1) + (2) = (3), due to the points below.

(2) is a given if you don't assume magic pixie dust is involved. Anything that relies on physics can be simulated, as physics appears to be computable.

(1)+(2) is guaranteed to equal (3), again unless you think magic is involved.
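
To make "computable" concrete, here's a minimal sketch in Python (my own toy example, nothing brain-specific): Newtonian gravity stepped forward in time with Euler integration. The masses, starting radius, and step size are arbitrary values picked for illustration.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(pos, vel, central_mass, dt):
    """Advance one body one Euler step under gravity from a mass at the origin."""
    x, y = pos
    r2 = x * x + y * y
    r = r2 ** 0.5
    a = -G * central_mass / r2               # inward acceleration magnitude
    ax, ay = a * x / r, a * y / r            # components along the radial direction
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Roughly circular orbit around an Earth-mass body:
pos, vel = (7.0e6, 0.0), (0.0, 7546.0)
for _ in range(1000):
    pos, vel = step(pos, vel, 5.97e24, dt=1.0)

Crude as it is, this simulates physics in exactly the sense above: shrink the time step and enrich the model, and fidelity becomes an engineering question rather than a conceptual one.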

graatz wrote:Also, I'm trying to make some kind of sense of this debate separate from "Well let's just build a robot that is indistinguishable from a human in all matters of brain function so that it picks up all the qualities of mind," which seems to trivialize the whole thing.

Well, yeah, it does trivialize it. That's the point. The whole question of 'can a machine ever be conscious' is uninteresting, because so far there's no evidence that full-brain emulation is anything more than an engineering problem. The interesting question is whether we can design novel machine intelligences without resorting to brute-force hacks like evolution.

graatz wrote:You won't like this (you as in probably everyone who reads these forums, because you are "real" science guys as opposed to psychology, the black sheep of all sciences).

There's nothing wrong with psychology. It's a 'soft science' because it's much more difficult to perform repeatable controlled trials, which limits its ability to filter out falsehoods, but that doesn't render it intrinsically bad. It just means you have to be more careful.

graatz wrote:You're forcing yourself to believe in a definition of free will that makes it a function of the brain, or of the human mind. I'm not convinced that's how we normally define it.

You're correct. Most people define free will in a magical, inconsistent way. They don't want their will to be deterministic, and thus predictable, but they don't want it to be random either. Unfortunately, those really are the only two possibilities. Either something can be predicted, or it incorporates randomness, in which case you can predict everything but the randomness. The lay definition of free will, well, typically isn't a definition at all.

graatz wrote:I mean, a layperson will define free will as something like: "I wanted to do something, so I did it." [...] I have a Self because you perceive me as having one, and my concept of myself is defined by my ability to perceive the Self of me that exists in your mind.

graatz wrote: :( I know this sounds freaky, outlandish, and just plain laughably weird if you haven't been exposed to social psychology, ego-states, etc. [...] But it's the perceptions of others that give meaning to free will, consciousness, etc. A "robot" can't do something it's not programmed to do. A human "can do anything." As long as that's what people believe, that's the only good way to define free will.

There's nothing wrong with saying that part of consciousness is socially determined. It's quite true; there are humans who are by many objective measures less conscious than some animals, but we still rate them higher than the animals.

It is, however, a cop-out to then say that no machine can ever be intelligent because we'll refuse to define it as intelligent. That's question-begging, and it makes the entire exercise pointless.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Fri Dec 18, 2009 2:41 pm UTC

Xanthir wrote:The interesting question is whether we can design novel machine intelligences without resorting to brute-force hacks like evolution.


Yeah, that. Do you think that we will? Why? Feel free to propose a definition for "machine intelligence."

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Fri Dec 18, 2009 3:37 pm UTC

While I do think it's the more interesting question, it's probably not worth discussing while you still disagree on the less interesting one of whether full-brain emulations are conscious. Anything that's stopping us from agreeing on the latter will also stop us from agreeing on the former, except that with the latter it will be clearer exactly where we disagree, because there's less room for personal opinion.

So, which of my assumptions did you disagree with, that makes you reject the idea that full-brain emulations are conscious in the same way that living humans are?
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Fri Dec 18, 2009 6:49 pm UTC

graatz wrote: . . .The fact that you think that a "machine" could, for all intents and purposes, be indistinguishable from a human but still be a machine reduces to just calling humans "biological machines," in which case we already build loads of them--the process is called procreation.

So the question I ask you... could we humans build something distinguishably a machine (say, cameras for eyes, a hard drive for memory, etc.) that appears to have thoughts and feelings indistinguishable from the so-called human experience?


This is interesting, but graatz has failed to articulate why humans are not just biological machines. One possibility: intelligent life can't evolve without struggle. If a brain simulation never has to fight for food, never has to strengthen its muscles (motors), and never has to fight with its peers for space, it will be less than human. It is well known that people need something to do, or else they cause trouble or essentially die of boredom.
Did you get the number on that truck?

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Fri Dec 18, 2009 7:00 pm UTC

phillipsjk wrote:This is interesting, but graatz has failed to articulate why humans are not just biological machines. One possibility: intelligent life can't evolve without struggle. If a brain simulation never has to fight for food, never has to strengthen its muscles (motors), and never has to fight with its peers for space, it will be less than human. It is well known that people need something to do, or else they cause trouble or essentially die of boredom.

What does any of that have to do with whether or not you can legitimately call living bodies 'biological machines'?
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Fri Dec 18, 2009 10:04 pm UTC

Xanthir wrote:So, which of my assumptions did you disagree with, that makes you reject the idea that full-brain emulations are conscious in the same way that living humans are?


Probably a lack of definition:
"Conscious" means what?
"in the same way that living humans are." What do you take that to mean?

I 100% agree that if a human is perfectly emulated, in and of itself the emulation would have all of the normal properties of being human. I can't say how society might interact with it, or how an emulated human mind might react to this interaction. I suppose by and large that depends on whether people know that it's a robot. But I thought we both agreed that this trivializes the debate.

graatz
Posts: 87
Joined: Thu Oct 29, 2009 7:24 pm UTC

Re: Can a machine be conscious?

Postby graatz » Fri Dec 18, 2009 10:11 pm UTC

phillipsjk wrote:One possibility: intelligent life can't evolve without struggle. If a brain simulation never has to fight for food, never has to strengthen its muscles (motors), and never has to fight with it's peers for space: it will be less than human. It is well-known that people need something to do (or will cause trouble) or else they essentially die of boredom.


Yes, but assuming that we create a computer simulation that maps the brain of a person who has already gone through those struggles, we'd pick up whatever contributions those might add to the process of making us human. Now, if this simulation realized that it was a robot who no longer needed to do those things, who knows what kind of ramifications that would have :twisted:

If consciousness was born from evolution as a survival mechanism, we probably have our biological needs to thank for our having it in the first place. But I don't see why this would mean that we couldn't, trivially speaking, create a perfect inorganic robotic simulation of a fully functional, evolved human, and thereby make it indistinguishably human.

User avatar
eijkaibjck
Posts: 42
Joined: Sun May 20, 2007 2:45 pm UTC
Location: Spain

Re: Can a machine be conscious?

Postby eijkaibjck » Tue Jan 26, 2010 11:32 am UTC

stephentyrone wrote:First, define consciousness. Next, under your definition, is it even clear that humans are conscious?

At first the debate took off in the direction of arguing over consciousness, which I find an interesting topic.

But I cannot go directly there and discuss artificial machines being conscious, etc.

To me it seems more or less accepted that any definition of consciousness should include human consciousness. That is what our anthropocentric views teach us.

Then, if we accept that, the question seems pretty obvious, because I cannot imagine human beings being different from machines in any meaningful way.

Of course we are neural machines built using, um, wetware, but that is not different enough from any other kind of machine. The fact that we still cannot build such a machine except by using the traditional ways has little or no importance.

Maybe that is just the less interesting side of the argument. We should talk only about "artificial" or man-made machines. But then the only answer that I can come up with is "it depends on the ability". The question just doesn't take me in the direction of asking fundamental questions about the nature of consciousness. Consciousness to me is, evidently, something that can be achieved by matter. I am 100% made out of ordinary matter, as far as anybody can tell.
Ok, I take it back.

Apteryx
Posts: 174
Joined: Tue Feb 09, 2010 5:41 am UTC

Re: Can a machine be conscious?

Postby Apteryx » Mon Feb 15, 2010 4:02 am UTC

Xanthir wrote: The lay definition of free will, well, typically isn't a definition at all.




What a smug prejudice you wave there. All 5.5 billion lay people, eh? Of course, you spoke to each of us, right?
:roll:
Abuse of words has been the great instrument of sophistry and chicanery, of party, faction, and division of society.
John Adams

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Mon Feb 15, 2010 9:26 am UTC

Xanthir wrote:What does any of that have to do with whether or not you can legitimately call living bodies 'biological machines'?

Biology evolves, but machines are designed. :twisted:

If you accept that premise, calling "living bodies" "biological machines" is a gross oversimplification. I guess the question then becomes: can machines (be designed to) evolve?
Did you get the number on that truck?

User avatar
Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Mon Feb 15, 2010 4:34 pm UTC

Apteryx wrote:
Xanthir wrote: The lay definition of free will, well, typically isn't a definition at all.

What a smug prejudice you wave there. All 5.5 billion lay people, eh? Of course, you spoke to each of us, right?
:roll:

Um? "Lay definition" doesn't mean "the set of all definitions that all lay people in the world could provide". As well, I said "typically".

Also, 5.5 billion? Welcome to 1990, I suppose.

phillipsjk wrote:
Xanthir wrote:What does any of that have to do with whether or not you can legitimately call living bodies 'biological machines'?

Biology evolves, but machines are designed. :twisted:

If you accept that premise, calling "living bodies" "biological machines" is a gross over-simplification. I guess the question then becomes: can machines (be designed to) evolve?

Yes, they do all the time. Forms of evolution are popular methods for training neural networks, for example.
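
As a minimal sketch of the idea (toy Python of mine, not any particular library's API): a (1+1) evolution strategy that evolves the nine weights of a tiny neural net until it computes XOR. Mutate the genome, keep the mutant whenever it scores no worse, repeat.

import math, random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2 inputs -> 2 tanh hidden units -> 1 linear output; w is a flat genome of 9 weights
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h0 + w[7] * h1 + w[8]

def loss(w):
    # squared error over the four XOR cases; zero means the net has it
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

w = [random.uniform(-1, 1) for _ in range(9)]
best = loss(w)
for _ in range(20000):
    mutant = [wi + random.gauss(0, 0.1) for wi in w]  # mutation
    m = loss(mutant)
    if m <= best:                                     # selection
        w, best = mutant, m

print(best)  # typically ends up near zero

Real neuroevolution systems (NEAT and its descendants) mutate network topology as well as weights, and evolve whole populations rather than a single genome, but the loop is the same: vary, select, repeat.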
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

