Can Robots be Human? (or The Bicentennial Man's Question)

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

johnie104
Posts: 248
Joined: Tue Jan 08, 2008 6:44 pm UTC
Contact:

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby johnie104 » Sun Dec 15, 2013 11:35 am UTC

Is a sociopath a human? He can't feel emotions.
Is a baby a human? It isn't self-aware and can't talk.
Is someone who lost a leg human? What if he was born without the leg?
Is someone with an IQ of 70 human? He doesn't think the way we do.
Is someone with Down syndrome human? He has an extra chromosome.
Is a dolphin a human? It shares 99% of its DNA with us.
Is someone with dementia human? Their brain looks very different.
What if someone was born in an artificial womb and was inseminated artificially, is that still a human?
Is someone who is brain-dead still human? He doesn't have higher-brain functions.

(Used 'he' as a general pronoun; replace it with 'she' or 'it' where appropriate)


Honestly, you can't give a definition of what a human is without excluding or including things that 'obviously' are or aren't human. I think the original question of the thread should be ignored and replaced with 'Can a robot be a person?', which is most definitely a yes. I can imagine multiple kinds of machine that I would care about, that would respond to me in an intelligent way, and that I would miss if it were to 'die'.

To the people that say that a simulation of a brain isn't real: What do you base this on? What makes neurons so special that they are more real than any simulation can be? What would convince you otherwise? If there were a virus that changed the structure of your neurons but not your thought patterns, would you be less real?

PS: I'm very happy that the word 'soul' hasn't been used yet. This is a sign of the high quality of the discussions on this board. Cheers!
Signature removed because of its blinding awesomeness.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby elasto » Sun Dec 15, 2013 12:53 pm UTC

johnie104 wrote:'Can a robot be a person?' which is most definitely a yes.

Yes, I'd agree with this. (At the very least, if corporations can be considered people with rights, then sentient robots and ooze certainly should be too ^^)

Personally I see no need to dilute the definition of 'human' in the way some are suggesting - given that some terms ('sentient being') are already suitable and others ('person') have already been sufficiently expanded. Likewise, I'd be happier with 'homicide', 'robicide' and 'xenicide' merely being subcategories of the crime 'senticide' than to attempt to lump the destruction of all sentient beings into 'homicide'.

(Although if some in society are incapable of seeing sentient robots and aliens as our equals and superiors then, yes, I'd thoroughly advocate robots and aliens being viewed in the eyes of the law as fully human if it helps people overcome their prejudices...)

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Sun Dec 15, 2013 1:08 pm UTC

Izawwlgood wrote:I think that's kind of the point of the Bicentennial Man story; a 'human' entity struggles to define what it is to be human, and in the process, it is heavily implied that it already was 'human', just needed to add mortality.
This is one of our defining characteristics; it shapes us and is involved in almost every decision we make, this fact that we are born to die. Imagine for a moment the improbable: that a man's mind was transferred to some device, some machine. Once that step is taken, does what is created still have enough commonality to be called human, given that it would be immortal?
johnie104 wrote:What makes neurons so special that they are more real than any simulation can be?
Humans aren't just neurons. The physical reality defines us. If a simulation of a brain is exact, then it represents our physical reality: positive and negative emotions, flaws, and those things we don't like to talk about in discussions like these. If the simulation can simulate an angel, it could just as well simulate a sociopath. Run enough instances and it should, and if it doesn't, then the simulation isn't a good simulation.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby elasto » Sun Dec 15, 2013 1:30 pm UTC

morriswalters wrote:This is one of our defining characteristics; it shapes us and is involved in almost every decision we make, this fact that we are born to die. Imagine for a moment the improbable: that a man's mind was transferred to some device, some machine. Once that step is taken, does what is created still have enough commonality to be called human, given that it would be immortal?

Personally I would class that being as 'sentient AI' not as human. But if someone wanted to argue that it was one of the many flavors of 'post-human' I wouldn't strongly disagree.

But, no, personally I wouldn't class that being as a human - merely as a sentient AI primed with the experiences and knowledge of a particular human.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Sun Dec 15, 2013 11:30 pm UTC

I certainly agree the definition of human will determine the nature of the discussion, but the narrower you make your definition (Has chromosomes of the Homo sapiens lineage! Has a heart! Is physically 100% organic!), the less interesting the conversation can be. Yes, someone with a pacemaker is no longer 100% human. Yes, someone with a gene job consolidating those pesky 23 chromosomes into a more convenient 10 isn't human. Yes, an immortal isn't a human. Yes, a hive mind of lobsters that demonstrates empathy, fear, sorrow, love, humor, creativity, and altruism is not human.

That's why I feel it's prudent, and more interesting, to ask the question in the context of the subject's intent, namely The Bicentennial Man. The piecemeal addition of physical human parts doesn't matter and doesn't render him human; mortality did. He was 'human' to begin with, so the exercise was somewhat futile.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Sun Dec 15, 2013 11:54 pm UTC

Izawwlgood wrote:This isn't a 'greater than less than' comparison. I'm suggesting that 'human' should mean 'sapient being', not 'has 23 chromosomes worth of genetic material from Homo sapiens'.
Quoting from earlier, do you mean sapience or do you mean commonality, alike despite differences? If so, then I would agree that anything that shares that commonality is human no matter the package. If it wants what we want in terms of living, then we should be able to find some common ground. However, I don't know that I would consider an AI human, despite its sapience, since if it is not driven the way that we are driven, I don't know that we could ever truly understand it, any more than a chimp can understand us.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Mon Dec 16, 2013 12:06 am UTC

I meant sapience, as sapience would be the shared ground.

The more sapient an entity, presumably, the less it'd be subject to the whims of biological (computational...?) imperatives, but maybe not. Many of the things I feel make one 'human' include the biological imperatives we're subject to.

Ever read A Fire Upon the Deep? The understanding we humans would have of a more transcendent mind is akin to the understanding, say, a dog would have of us.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Mon Dec 16, 2013 1:04 am UTC

I haven't read the book, although it looks interesting. I think that we share a common thought: that to truly understand another mind we have to share enough in common with it to be able to see past the points where we don't. In terms of sapience being the shared ground, I am not so sure. I would tend to believe that wisdom is a function of what you are. This take on sapience from Wikipedia,
Sapience is often defined as wisdom, or the ability of an organism or entity to act with appropriate judgement, a mental faculty which is a component of intelligence or alternatively may be considered an additional faculty, apart from intelligence, with its own properties.
is why the concept of sapience is worrying to me. Acting with appropriate judgement could be very dependent on the needs of the entities. It isn't that we would be in competition; rather, I wouldn't recognize the motivations of a creature too different from myself. There are many points of commonality between me and a dog, so I understand in some fashion how a dog is motivated. So a robot who shared the frailties that I experience would be human to me, whereas an undying robot wouldn't, since I am incapable of understanding what wisdom is to it, irrespective of its intelligence or sentience. Does that make any sense?

User avatar
ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby ucim » Mon Dec 16, 2013 1:40 am UTC

This is ultimately a question of "is X a Y" where X is undefined, and Y is undefined. The question is meant to stimulate discussion on "what it means to be a 'Y'", but it also involves "what it means to be an 'X'". Meaningful answers will not be forthcoming without agreement on what X and Y are, and while meaningful discussion is possible, it needs to not gloss over "X" while discussing "Y" (or v.v.).

In the case of Bicentennial Man, where mortality was found to be key, the issue is why... and it involves the fact that mortality significantly impacts one's perception of self and the world around, and influences decisions that would be made by the entity in question. To be "fully Y" involves having sufficiently similar thought processes in certain key areas, and (by the story) immortality prevents this from happening. It is still an open question as to whether this was the only key difference, but it is the one the story focused on. If you agree with the conclusion of the story, you'd conclude that mortality is a necessary condition to "being human", and that makes sense to me.

Think of what you mean when you refer to a fellow Homo sapiens and say he or she is "inhuman"... I suspect it means that this person engages in thought processes and (resultant) actions that are detached from certain emotional connections that most people share, often leading to grossly inhumane actions. While the person in question is literally "human", the literal meaning is not the one invoked, and I don't think it is the one being invoked by the OP.

So, it seems to me that "human" involves "governed at least in part by emotion" and also involves "governed by certain bounds" (such as mortality). If you agree so far, the question becomes just which bounds, and which emotions.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

johnie104
Posts: 248
Joined: Tue Jan 08, 2008 6:44 pm UTC
Contact:

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby johnie104 » Fri Dec 20, 2013 10:59 am UTC

ucim wrote:In the case of Bicentennial Man, where mortality was found to be key, the issue is why... and it involves the fact that mortality significantly impacts one's perception of self and the world around, and influences decisions that would be made by the entity in question. To be "fully Y" involves having sufficiently similar thought processes in certain key areas, and (by the story) immortality prevents this from happening. It is still an open question as to whether this was the only key difference, but it is the one the story focused on. If you agree with the conclusion of the story, you'd conclude that mortality is a necessary condition to "being human", and that makes sense to me.

A lot of children are already self-aware, but don't have a good grasp on mortality or the fact that they will die. Does this make them less human?
I myself, as a 21-year-old, know in principle that I'm going to die, but I don't really believe it. It's not something that has truly manifested itself in me. I think this is something that happens when you get older and notice your body starting to fail, or if something happens to you that could have killed you if things had gone only a little bit differently.
So mortality might influence my decisions, but not nearly as much as it would affect a 70-year-old. Again, does this make me less human?
The problem with saying 'this is what makes us human' is that it is all too easy to find a counter-example.

ucim wrote:So, it seems to me that "human" involves "governed at least in part by emotion" and also involves "governed by certain bounds" (such as mortality). If you agree so far, the question becomes just which bounds, and which emotions.

But this leads to the question of what emotions are, and whether a robot can have them. Making decisions 'governed by emotion' would seem to me to mean that the sentient being has an internal state, formed by its previous experiences, which makes it take an action even though, objectively, without that internal state the optimal decision would have been something else. I don't think it's very hard to implement something like this in a robot that isn't even sapient.
So then you have to change the definition of 'governed by emotions' to mean that the emotions have to have a certain degree of complexity, or be non-deterministic, or something like that. But that is of course completely arbitrary.
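
As a very rough sketch of that idea (toy Python only; the Robot class, the single 'fear' value and the numbers are invented for illustration, not any real architecture), an internal state shaped by past experience that shifts choices away from the objectively best-scoring action could be as little as this:

class Robot:
    """Toy agent: a made-up 'fear' state, built from past experience, biases choices."""

    def __init__(self):
        self.fear = 0.0  # internal state formed by previous experiences

    def experience(self, outcome):
        # Bad outcomes raise fear; good outcomes let it decay slowly.
        if outcome < 0:
            self.fear = min(1.0, self.fear + 0.3)
        else:
            self.fear = max(0.0, self.fear - 0.05)

    def choose(self, actions):
        # Each action is (name, expected_value, risk). Without fear the robot
        # simply maximizes expected value; with fear, risky actions are penalized,
        # so the objectively 'optimal' action can lose.
        return max(actions, key=lambda a: a[1] - self.fear * a[2])

robot = Robot()
actions = [("cross the busy road", 10.0, 8.0),   # best expected value, but risky
           ("take the long way",    7.0, 1.0)]

print(robot.choose(actions)[0])   # no bad experiences yet -> "cross the busy road"
robot.experience(-1)              # two frightening outcomes later...
robot.experience(-1)
print(robot.choose(actions)[0])   # fear now tips the choice -> "take the long way"

Nothing about that is sapient, which is the point: the bare description of 'governed by emotion' is cheap to satisfy.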
Signature removed because of its blinding awesomeness.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Fri Dec 20, 2013 2:55 pm UTC

johnie104 wrote:A lot of children are already self-aware, but don't have a good grasp on mortality or the fact that they will die. Does this make them less human?
Don't mistake children not having a grasp of death with not being bound by mortality. Like most things, they haven't experienced it yet. And you have a very good grasp of death; you just don't think about it. If you didn't understand death, then you would have no fear of stepping in front of a moving car. And emotions keep you from doing that, fear in that particular case. You could certainly give a robot emotion, but if that robot can be built, then that robot need never fear death, assuming that it can move the sum total of its state of being to a newer body.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Tyndmyr » Fri Dec 20, 2013 5:11 pm UTC

Everything is bound by mortality, because one day, the universe will end.

If remaining lifespan is relevant to how human you are, the inescapable conclusion is that some people are more human than others.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Fri Dec 20, 2013 5:13 pm UTC

Perhaps 'how human you are' also depends on having perception of human timescales. A tree is obviously a living organism, but probably doesn't care much about a solar eclipse.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby ucim » Fri Dec 20, 2013 5:37 pm UTC

johnie104 wrote:A lot of children are already self-aware, but don't have a good grasp on mortality or the fact that they will die. Does this make them less human?
A valid point, but one that can be sidestepped (to some degree) by considering "kind of being" rather than "specific instance of being". That is, the kind of being that is bound by mortality and is capable of knowing it (people), as opposed to a specific being that, while bound to mortality, does not realize it (a child).

Yeah, a kludge, because
johnie104 wrote:The problem with saying 'this is what makes us human' is that it is all too easy to find a counter-example.


johnie104 wrote:But this leads to the question what emotions are
... and that is a very good question. Make it easier by considering a special case. What is happiness? What is pain and why does it hurt? (The latter is likely the topic of active research).

I do not see the benefit of making our tools "emotional", but emotions act as a quick way of deciding what to do, so they may provide an evolutionary benefit for life forms by essentially applying (lossy) compression to thought processes.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Fri Dec 20, 2013 5:53 pm UTC

A tree that lived ten thousand years would share the same level of humanity as a man who can live to seventy. If it will die (not be killed), its life is bounded by that fact. How he experiences humanity would change, but not the fact of his humanity. If a robot can be created, then that which makes it sentient or intelligent or alive should be able to be copied. Once that is possible, then as long as the Universe exists that being can exist. And to complicate the matter, it can exist as multiple beings and be able to access all of those shared experiences. For us, the trip to another star is bound by individual time spans. A robot could be stored as an image of itself, potential but not alive until it needed to be. Send a thousand bodies, and as long as one can return you gain whatever was learned. Is that human? And would it think in the same way as we do? Joe Haldeman posits this for cloned beings in "The Forever War".

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Fri Dec 20, 2013 6:36 pm UTC

morriswalters wrote:Joe Haldeman posits this for cloned beings in "The Forever War".
I'm not sure that was his point at all, seeing as he basically waved the issue away, saying 'You can't understand it because you're not a clone'. I think he even introduced some 'psychic awareness' handwave into it.

morriswalters wrote:A tree that lived ten thousand years would share the same level of humanity as a man who can live to seventy.
I think you missed the point of my comment. My point about human-scale perception was that 10,000 years of life means you DON'T have a human-scale perception. We process information on time scales associated with a 24-hour day and a ~30-40 year reproductive cycle. A tree does not.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Fri Dec 20, 2013 8:15 pm UTC

Izawwlgood wrote:I'm not sure that was his point at all, seeing as he basically waved the issue away, saying 'You can't understand it because you're not a clone'. I think he even introduced some 'psychic awareness' handwave into it.
It is fiction after all; much of it is hand-waving. But the basic premise is outlined: an interconnected mind would essentially function in ways that we wouldn't understand, simply because it is immortal.

Izawwlgood wrote:My point about human-scale perception was that 10,000 years of life means you DON'T have a human-scale perception.

Differences in how time is perceived might make it difficult to recognize you as "human". But would it make you any less so? My thesis would be that mortality doesn't make you "human" but that immortality would make you unhuman.

johnie104 wrote:The problem with saying 'this is what makes us human' is that it is all too easy to find a counter-example.
This is a problem of oversimplification. It may take a multitude of things to define us. But lacking any one of those things may well eliminate the possibility of being "human".

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26823
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby gmalivuk » Fri Dec 20, 2013 10:08 pm UTC

morriswalters wrote:A tree that lived ten thousand years would share the same level of humanity as a man who can live to seventy
So... all living things are human?
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Fri Dec 20, 2013 11:07 pm UTC

No.
morriswalters wrote:It may take a multitude of things to define us. But lacking any one of those things may well eliminate the possibility of being "human".
An interesting sub-question is, are all people human?

User avatar
sardia
Posts: 6813
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby sardia » Fri Dec 20, 2013 11:30 pm UTC

morriswalters wrote:No.
morriswalters wrote:It may take a multitude of things to define us. But lacking any one of those things may well eliminate the possibility of being "human".
An interesting sub-question is, are all people human?

Do you mean, do we treat them as human and accord them the same human rights? Of course not. At least not yet: gay people, women, children.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Sat Dec 21, 2013 12:50 am UTC

No. If you accept that DNA isn't required to be "human", one implicit possibility is that it is possible to have the DNA and not be "human" since DNA isn't required. Thus this series of questions.
johnie104 wrote:Is a sociopath a human? He can't feel emotions.
Is a baby a human? It isn't self-aware and can't talk.
Is someone who lost a leg human? What if he was born without the leg?
Is someone with an IQ of 70 human? He doesn't think the way we do.
Is someone with Down syndrome human? He has an extra chromosome.
Is a dolphin a human? It shares 99% of its DNA with us.
Is someone with dementia human? Their brain looks very different.
What if someone was born in an artificial womb and was inseminated artificially, is that still a human?
Is someone who is brain-dead still human? He doesn't have higher-brain functions.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Sat Dec 21, 2013 1:06 am UTC

I'm watching Fringe, and a cool episode just occurred wherein a mentally handicapped man was given a drug that 'increased his intelligence exponentially'. He gained the ability to see all futures and manipulate them. At the end of the shenanigans, he 'transcended'. The final description was 'he doesn't communicate in any way you or I can understand'.

The depth and breadth of complexity of thought of a being we would call inhuman (presuming it's 'more intelligent', naturally) would, again, be akin to the difference between our intelligence and an ant's.

But we can interact with ants, study them, understand them. Ants cannot say the same of us.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Fri Dec 27, 2013 1:24 am UTC

As pure amusing speculation consider this.

Ask what we use intelligence for. The trite answer is to find different ways to amuse ourselves. The less trite answer is to predict the future (among at least a few other things). Not what your father-in-law will have for breakfast tomorrow, but what might kill me next and how do I avoid it. That is the essence of mortality. The difference between a long-lived entity and a shorter-lived entity is not in how it perceives time, but in how far out it needs to predict the future. If you live to be a thousand years old, then you want to be able to see further than an entity that lives to be a hundred years old. As a concrete example, an entity that is long-lived might think differently about global warming than we do, since it will experience the changes within its expected lifetime. The longer-lived might have an outlook to match its longevity, but what it would be attempting to do would be no different from what we attempt: don't die. What it has is more time.
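
To put rough numbers on the horizon point (a toy sketch only; the yearly cost figure and the two horizons are invented for illustration), weight a slowly growing future cost only over the years the entity expects to be around to experience it:

def perceived_cost(annual_growth, horizon_years):
    """Sum a cost that grows by 'annual_growth' each year, counted only over
    the years that fall within the entity's planning horizon."""
    return sum(annual_growth * year for year in range(1, horizon_years + 1))

# Hypothetical slow-building problem (global-warming-like): cost grows 1 unit per year.
print(perceived_cost(1.0, 70))     # seventy-year horizon:     2485.0
print(perceived_cost(1.0, 1000))   # thousand-year horizon:  500500.0

Same goal either way, don't die; the longer-lived entity just has to carry the prediction much further out, so the same slow problem comes to dominate its decisions.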

The question, for an AI or for an entity with a single conscious locus, is whether being more intelligent is a meaningful statement. If the entity acts in this universe, then it has to parse time and act within it, and it is limited by having to act within it. Your Fringe character is limited by its body. No matter how fast it thinks, it can't be much more mechanically efficient than an unevolved man. Therefore he can't move any quicker or act any quicker. He can't see quicker, hear quicker or smell quicker. So he is time-limited. He couldn't get the data any faster to make the predictions, because there is a limit on his ability to acquire it no matter how fast he can "think". Add to that, the further in front of the action the entity gets, the more information it has to store before the information is useful.
Izawwlgood wrote:The depth and breadth of complexity of thought of a being we would call inhuman (presuming it's 'more intelligent', naturally) would, again, be akin to the difference between our intelligence and an ant's.
If you haven't become bored with this, exactly what do complexity and depth have to do with it? Assuming again that it acts in this universe as one mind.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Fri Dec 27, 2013 5:37 pm UTC

morriswalters wrote:Your Fringe character is limited by its body. No matter how fast it thinks, it can't be much more mechanically efficient than an unevolved man.
The 'processing time' quandary is certainly true of biologicals, but there is a vast amount of neurobiology we don't yet understand, and it is well within the realm of reality to imagine processing speed significantly improving in a brain.

Which leads to;
morriswalters wrote:If you haven't become bored with this, exactly what do complexity and depth have to do with it? Assuming again that it acts in this universe as one mind.
I would suggest that complexity and depth have to do with such things as 'processing time', among other things. We are concerned with matters vastly beyond those of an ant. A superbeing by our standards would likely be concerned with matters vastly beyond our own comprehension.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Sun Dec 29, 2013 3:04 am UTC

Izawwlgood wrote:I would suggest that complexity and depth have to do with such things as 'processing time', among other things. We are concerned with matters vastly beyond those of an ant. A superbeing by our standards would likely be concerned with matters vastly beyond our own comprehension.
There seems to be a hidden assumption: that, for an entity that exists within the universe, there is something it can do other than see the future better so as to ensure its survival. You can assume some godlike greater purpose, but if there is no greater purpose, then once survival is guaranteed, what else is there to do? Most tasks in modern society fall into two broad categories: things we need to do to survive, and everything else. The latter covers these posts, the books we read, the movies we watch, and all the ways we spend time away from our primary purpose of surviving. It's arguable that the Bicentennial Man reflects that basic fact: that what makes us human is the social relationships we have and our shared mortality.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Sun Dec 29, 2013 4:44 pm UTC

I think I'm starting to miss your point; can you clarify? You simultaneously claim 'once survival is ensured, what else is there?' but then go on to point out that humans do a variety of other things ancillary to survival.

I suppose you can make the argument that argumentation and discussion and such are byproducts of our need to be social, but I don't think it's quite that simple. I think being freed from the primary burden of survival concerns is one of the requirements for higher mind function. Or maybe higher mind function is a product of that freeing. The point is, I think a 'Godlike mind' (just as a descriptor) would have plenty to think about and do, and that we cannot fathom the depth and breadth of those thoughts is a nice demonstration of the disparity between such a mind and our own. Which is a bit circular, surely.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Sun Dec 29, 2013 10:58 pm UTC

Izawwlgood wrote:I think I'm starting to miss your point; can you clarify? You simultaneously claim 'once survival is ensured, what else is there?' but then go on to point out that humans do a variety of other things ancillary to survival.
I'll try, though what I'm thinking is still fuzzy. Consider art. I find beauty in some of it and try to appreciate all of it. But what is the point? What higher meaning exists for it? It serves an emotional purpose. It draws me in because what I like is of interest to me. But it has no overarching meaning beyond that; I simply get a positive emotional response from it. It effectively fills time, time that I have because my immediate needs are met. The same could be said for any function that exists which doesn't actively support our need to survive. One possible interpretation is that the sensory equipment we have needs to be exercised to fulfill its primary function. That is, we enjoy art because art stimulates the sensors that we use to appreciate it. So art could be considered an analogue of exercising a generator that needs to be available at a moment's notice. If that were true, then most human activity could be classified as maintenance, done to keep the machine well oiled, serving no higher purpose.
Izawwlgood wrote:I suppose you can make the argument that argumentation and discussion and such are byproducts of our need to be social, but I don't think it's quite that simple. I think being freed from the primary burden of survival concerns is one of the requirements for higher mind function. Or maybe higher mind function is a product of that freeing. The point is, I think a 'Godlike mind' (just as a descriptor) would have plenty to think about and do, and that we cannot fathom the depth and breadth of those thoughts is a nice demonstration of the disparity between such a mind and our own. Which is a bit circular, surely.
Assuming the universe is the same to everyone who can experience it, then what can a "godlike mind" think about that we can't? An ant can't think great thoughts because it solved the evolutionary problem of survival in a different way, inasmuch as evolution can be seen as a solution to a problem. It doesn't need to think in the way that we do. As for argumentation and discussion, dismissing for a moment the social aspects of it, what do discussion and argumentation do? They primarily connect the nodes of a massively parallel computing machine, allowing us to work on very large numbers of tasks at one time. Give some thought to the nature of the primary forms of entertainment. Each song, each movie and each book is essentially a new idea, expressed. Stories are practice for problem-solving and creation, exercising the associations that our sensors create as we advance in time. Argument and discussion serve to enhance our associations by increasing them and by giving the best ideas the widest possible exposure to the computer. I may have sprained something in my brain; I'm not sure I am making any sense.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Mon Dec 30, 2013 12:55 am UTC

To start, I don't want you to think I'm being snarky at you.

morriswalters wrote:The same could be said for any function that exists which doesn't actively support our need to survive. One possible interpretation is that the sensory equipment we have needs to be exercised to fulfill its primary function. That is, we enjoy art because art stimulates the sensors that we use to appreciate it. So art could be considered an analogue of exercising a generator that needs to be available at a moment's notice. If that were true, then most human activity could be classified as maintenance, done to keep the machine well oiled, serving no higher purpose.
morriswalters wrote:Assuming the universe is the same to everyone who can experience it, then what can a "godlike mind" think about that we can't?

Link these two thoughts, and expand, to understand why I keep talking about the difference between Godlike entities, man, and ants. We detect 3 colors, have x square feet of olfactory sensory tissue, fairly sensitive touch receptors on our fingertips, social empathy and networking up to about x connections, etc., etc. I'm describing a handful of sensory and processing capacities we have. An ant has a different set: an impressive chemo/olfactory detector, the ability to communicate in a complex but limited chemical language, and the hardwired behavior to act in cooperation with many other siblings.

Imagine what a Godlike entity would be capable of. I'm not being glib; I really think you should read A Fire Upon the Deep if this is a topic that interests you. What I'm getting at is that the same universal constants and the same environment impose some restrictions on what we can extrapolate to, but you seem to be saying something akin to 'universal constants exist, and we know stars are hot, but we don't know how hot, so maybe they're hot enough to boil water?'.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Mon Dec 30, 2013 2:43 am UTC

Izawwlgood wrote:Imagine what a Godlike entity would be capable of. I'm not being glib; I really think you should read A Fire Upon the Deep if this is a topic that interests you.
It's been on my reading list since your first mention, and I take no offense. I am practicing my own philosophy and gathering associations where I find them. To accept a "godlike" mind, I must first accept that there is something that I can't know, with "I" being an amalgam of the human race. Given that the universe is consistent outside of us, the fact that we are limited in the type and quality of our sensors is beside the point. We are capable of creating sensors to experience what we can't experience directly.

I suppose what you want me to see is that the ant can't possibly realize what we see when we look out at our world, which means that there may be a level of existence that we can't see, where other eyes see what we can't. As a metaphor, it is interesting. As a practical matter it suggests that more than one level of experience exists, and that it exists in a way that is not available to us. Given no evidence, I treat it as I would any other claim that I can't test. And the fact that the ant can't see what we see is a mirage: if you accept a point source as the beginning of life, the ant is merely an idiot relative that may yet inherit everything we are, and maybe have better manners.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Mon Dec 30, 2013 3:39 am UTC

morriswalters wrote: We are capable of creating sensors to experience what we can't experience directly.
Well, kind of; we're capable of building sensors that are able to do so, but then we experience their output by converting it to our sensorium. I agree that it marks us as potentially more potent than, say, a mantis shrimp, because while those things have four-fold as many pigment sensors and septishminocular vision and reverso-fluctuated-polarized detection, and can falcon punch with the force of an atom bomb, we can build machines that do all that and more. So certainly this is more than simply a matter of perception. It may have been a bad tangent for me to have brought up; I was pointing more to the notion of an utter difference in how we perceive the world compared to an ant. Consider, as touched on in the previous post, what we do to entertain ourselves.

morriswalters wrote:As a practical matter it suggests that more than one level of experience exists, and that it exists in a way that is not available to us.
Right, but let's think about this on a deeper level than just perception. We perceive all these complex things and process them and utilize that information to formulate thoughts. Obviously there's more to it than just 'depth or breadth' of perception. A bloodhound has sensory sensitivity on par with our more advanced chemical detectors, but I wouldn't consider a bloodhound a higher being.

By 'experience', I think there's more here than just perception, but perception is totally going to be part of it. We compose and create, I feel, for reasons beyond the fact that we see colors, hear sound, and form large social groups. And I think that's pretty neat; imagine that, however, expanded and increased, the same way an ant may crawl onto the launchpad of a shuttle launch.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby ucim » Mon Dec 30, 2013 4:16 am UTC

morriswalters wrote:Ask what we use intelligence for. The trite answer is to find different ways to amuse ourselves. The less trite answer is to predict the future (among at least a few other things)...
Not directed at you, but the question is a good starting point: I think it's an error to think that intelligence is "for" anything. It presumes as a given that things have a (higher) purpose. I see things differently.

Intelligence allowed us to outcompete our rivals. Thus the genes for intelligence became more predominant (in the niche wherein this competition occurs). We can say that the purpose of intelligence is to outcompete, but that would be a hindsight presumption that something granted that purpose. There's no evidence of that; only of its success to date.

In any case it does so by allowing us to plan ahead and avoid becoming meals, but also to plan ahead and become somebody's friend (and perhaps mating partner), even if solely by dint of being a more interesting person. This is not to be minimized. Interest in the arts (for example) isn't merely a by-product; it is the end product.

Ants do well in their milieu, but I don't think they can appreciate a symphony the way we can, and that (presumed) lack of appreciation for the subtleties of music makes them less interesting companions than some of my other friends. But what is "appreciation of a symphony" if not the reaction produced by the complex interaction of many many neurons that take in raw audio and create a myriad of associations in the mind? Without that, it's just vibrations. Raw data.

An entity with many many more neural connections can create a much richer pattern of associations with the raw data it absorbs. An entity which has evolved to do so probably did so because of the competitive advantage it conferred. This could be social (or some higher analogue of social), or it could be some other relationship I can't imagine. But absent a Designer, natural selection would lead the way, and as such, pretty much everything that has a cost (maintaining a neural net is costly!) also has an evolutionary benefit.

We can build many sensors, instrumentation that can give us much more information about the world than our bodies can by themselves. But although we can think about it, we can't assimilate it directly, and it isn't really part of our consciousness. It, for the most part, remains raw data to us. It doesn't make a symphony in our minds. But for a being for whom these sensors were directly connected to its brain, and whose brain had sufficiently more (appropriately arrayed) complexity, it could create much more than a symphony in that entity's mind.

I would not call that human.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Mon Dec 30, 2013 12:55 pm UTC

ucim wrote:An entity with many many more neural connections can create a much richer pattern of associations with the raw data it absorbs. An entity which has evolved to do so probably did so because of the competitive advantage it conferred. This could be social (or some higher analogue of social), or it could be some other relationship I can't imagine. But absent a Designer, natural selection would lead the way, and as such, pretty much everything that has a cost (maintaining a neural net is costly!) also has an evolutionary benefit.
This is the crux of my argument. Certainly better sensors give you more associations. But how many associations are enough? And how many are more than you need to get the job done? Assume for the moment that we develop a true other intelligence, one that is self-aware and "superior" to us. He sees the total spectrum and hears in all the ranges we can't. What does he have to pay for what he gets? So the equation that you need to solve is the maximum number of sensors producing the optimum number of associations to allow you to totally describe the universe.

By increasing the number and complexity of sensors you increase the complexity of the "brain" which encodes them. But what does that buy? We have sensors for a reason. Plop a fish in a cave with no light and leave it there; over the generations its descendants will end up with no eyes, because there is nothing to see. Now raise a child born blind to adulthood. Is he any less intelligent because of that blindness? What he lacks is the ability to move in the world the way the sighted can. The conclusion I draw from that is that sight is important for moving in the world, but less so for intelligence. What he lacks in a neural sense is the additional associations that sight gives you, not the ability to understand them.

ucim wrote:Ants do well in their milieu, but I don't think they can appreciate a symphony the way we can, and that (presumed) lack of appreciation for the subtleties of music makes them less interesting companions than some of my other friends.
I've asked this before, but what is it useful for? I love "The Flight Of The Bumblebee". But in the grand scheme of things it has no inherent meaning. I suggest that it is a byproduct of the process of building associations. The process of creating a symphony is the point, in my estimation. But not for the beauty of the symphony, however important it might have been to the composer or his audience. Rather, it is important as a result of the mind using the associations created by sensory data to create something novel and unique. The utility comes from the other use: what we call intelligence. Intelligence is simply moving things around until something novel occurs, much as you might try hundreds of pieces to place one more piece of a blank jigsaw puzzle. I've jumped the shark at this point, but thanks for the discussion.

User avatar
ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby ucim » Mon Dec 30, 2013 3:24 pm UTC

morriswalters wrote:But how many associations are enough? And how many are more than you need to get the job done?
Enough for what? To get what "job" done?

There isn't a job to be done.

There just is "is". You are more likely to pass on your genes (for whatever talents you have) if you mate, have children, and successfully rear them to adulthood. In a society that already has clever, interesting beings, you are more likely to mate if you yourself are clever and interesting, and this involves intelligence. It makes you interesting to a potential spouse in the same way that colored feathers do for a bird. (Colored "look at me" feathers are otherwise not very useful, and in fact a detriment, making a bird more likely to be prey).

More and better sensors do not give you more associations. They just give you more things to associate. The actual associations come from your neural connections - your brain; those "sensor readings" have to make it into your brain in order for that to happen (and a great advantage comes when they are directly wired in, like an eyeball, than when they are merely described to the brain, such as variations in microwave density or polarization). The ability to make these connections is called "intelligence". And the ability to "totally describe the universe" is not, in itself, useful. Rather, it allows you (better) to escape being prey, and to better become a predator. Indirectly it helps you to be interesting to fellow critters that might mate with you, but not by being able to describe the universe to yourself, but rather, to describe a part of it to somebody else in a more clever and interesting way, and also to appreciate somebody else's description of a different part of it (you are interesting to them in part by being actually interested in their output).

morriswalters wrote:Now raise a child born blind to adulthood. Is he any less intelligent because of that blindness?
No, because intelligence coming from the requirement to process sensory input is not an individual thing, but an evolutionary thing. An individual who happens to be blind still has all the neural connections to interpret sight, he (or she) just doesn't have sight. However, over many (thousands of) generations it may be that the part of the brain that deals with sight becomes selected against (as being useless) and in that case, yes, something is lost for the resulting generation. It would be speculation as to whether development of another part of the brain becomes selected for, so that "total intelligence" (whatever that is) remains the same, but if sight processing disappears over generations, then that species has lost an interesting way of processing the world, and is, thus, less intelligent.

morriswalters wrote:I've asked this before, but what is it useful for? I love "The Flight Of The Bumblebee". But in the grand scheme of things it has no inherent meaning
It doesn't need inherent meaning. But let me ask you this: Do you enjoy sharing your love of "The Flight Of The Bumblebee"? Do you find it more rewarding to share it with somebody else who also loves music? If you want to reduce it to evolutionary traits, these little things add up to a happier life (and thus, a greater chance of survival), and to being more interesting (and thus a greater chance of finding a mate). Further, being more interesting leads to more friends, and being in a social group confers many advantages.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Mon Dec 30, 2013 4:38 pm UTC

morriswalters wrote: Now raise a child born blind to adulthood. Is he any less intelligent because of that blindness?
No, but they are probably going to be bad at the task of describing a rainbow's beauty. But as I said previously, think about this in broader terms than simply 'perception'. Yes, perception is necessary to appreciate creations which stimulate that sense, but a blind person is still capable of understanding beauty. And more to the point, a blind person is still likely possessed of the desire to create.

MW, you seem really stuck on this concept of 'evolved to a purpose'. As Jose just pointed out, intelligence and minds don't need to serve a purpose, and indeed, being freed from a purpose is perhaps one of the means by which minds are able to best demonstrate their 'mindness'. I'm impressed with a human's ability to solve differential equations, but I'm also impressed with a goshawk's ability to avoid trees as it flies through the woods at blazing speed. The thing is, these represent two significantly different applications and 'purposes' of complex neurological activity, and arguably, a human's is more impressive, though not, mind you, because of the 'CPU required' (if one is even greater than the other).
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby morriswalters » Mon Dec 30, 2013 5:10 pm UTC

ucim, we may be at cross purposes, describing essentially the same thing differently.
ucim wrote:Enough for what? To get what "job" done?
For whatever we need to do. You can choose to look at it in two ways. One is that we are a random outcome of a universe biased to produce life under certain conditions. The second is that we are part of some divine scheme. I'll buy 1; you can do whatever you want with 2. Assuming 1, intelligence is random. If intelligence exists merely to help us breed better, then we have met the challenge. We breed well enough that we are in danger of overwhelming the biological systems which allow us to live in the first place. We survive even better, given that the first thing we seem to do is to let the weaker members like me survive rather than die. However, my thrust isn't intended to show the essential uselessness of living. The thrust is to ask the question: does superior intelligence have a meaningful context?
ucim wrote:More and better sensors do not give you more associations. They just give you more things to associate. The actual associations come from your neural connections - your brain; those "sensor readings" have to make it into your brain in order for that to happen (and a great advantage comes when they are directly wired in, like an eyeball, than when they are merely described to the brain, such as variations in microwave density or polarization). The ability to make these connections is called "intelligence". And the ability to "totally describe the universe" is not, in itself, useful. Rather, it allows you (better) to escape being prey, and to better become a predator. Indirectly it helps you to be interesting to fellow critters that might mate with you, but not by being able to describe the universe to yourself, but rather, to describe a part of it to somebody else in a more clever and interesting way, and also to appreciate somebody else's description of a different part of it (you are interesting to them in part by being actually interested in their output).
I am not a scientist and my language is sloppy; call it whatever you think it should be, but those networks in our brains allow us to solve puzzles. The question is, are there puzzles that require additional complexity? The ant is as complex as it needs to be to solve the problem of producing the next ant generation. We simply solved the same problem differently. If that is true, then the only ones worried about what an ant lacks are us. The comparison is meaningless. The ant couldn't care less about us (unless we are on an ant-killing frenzy). The pursuit of those things is incidental once we have passed on our genetic legacy. We have the time to look, so we do.
ucim wrote:No, because intelligence coming from the requirement to process sensory input is not an individual thing, but an evolutionary thing. An individual who happens to be blind still has all the neural connections to interpret sight, he (or she) just doesn't have sight.
Perhaps I'm missing something. Given that the equipment is broken, certain associations never get made. The individual could never make the leap to prisms, for instance, on his own. So is he quantitatively less intelligent than a sighted person as an individual, given that reduction of capacity? In the abstract, what does the lack of the ability to make those visual associations do to his ability to create novelty? What can't he do? This is important in the context of the discussion. Does there come a point where the information gathered and supplied to the brain is sufficient to solve any problem the brain wishes? What of importance do we lose by not seeing into the far ultraviolet?
ucim wrote:It doesn't need inherent meaning. But let me ask you this: Do you enjoy sharing your love of "The Flight Of The Bumblebee"? Do you find it more rewarding to share it with somebody else who also loves music? If you want to reduce it to evolutionary traits, these little things add up to a happier life (and thus, a greater chance of survival), and to being more interesting (and thus a greater chance of finding a mate). Further, being more interesting leads to more friends, and being in a social group confers many advantages.
I enjoy a night out with my friends and consider the social component of humans to be the essential thing that makes us what we are. I'll ask a slightly different version of the question. Putting aside the meaning of it all: could any singular intelligence alone be so intelligent that, by itself, it could acquire all the knowledge that we have acquired since we developed language, in the same amount of time, starting from zero? All while surviving the world in real time?
Izawwlgood wrote:MW, you seem really stuck on this concept of 'evolved to a purpose'. As Jose just pointed out, intelligence and minds don't need to serve a purpose, and indeed, being freed from a purpose is perhaps one of the means by which minds are able to best demonstrate their 'mindness'. I'm impressed with a human's ability to solve differential equations, but I'm also impressed with a goshawk's ability to avoid trees as it flies through the woods at blazing speed. The thing is, these represent two significantly different applications and 'purposes' of complex neurological activity, and arguably, a human's is more impressive, though not, mind you, because of the 'CPU required' (if one is even greater than the other).
I probably am, but intelligence evolved for a purpose. You don't see to cogitate; you see to move, avoid, and so on. Minds exist because of the purpose, not despite it. We create purpose, but we do so because we have the excess capacity. How much more capacity do we need, and why do you think that it could lead to anything more than what it has already? See my question on a super-intelligent entity.

edit
I surrender, I'm probably an idiot. :D

User avatar
ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby ucim » Mon Dec 30, 2013 6:09 pm UTC

morriswalters wrote:I am not a scientist and my language is sloppy; call it whatever you think it should be, but those networks in our brains allow us to solve puzzles. The question is, are there puzzles that require additional complexity?
Well, if we had lots more sensory data, and much much higher intelligence, we might be able to solve the puzzle of global warming much like we presently solve the crossword.
morriswalters wrote:Given that the [visual] equipment is broken [in a blind person], certain associations never get made.
But the processing ability is still there. It's the data that's lacking. You can define intelligence in a way that answers the question either way you want, but the resulting definition will differ in a meaningful way. Be careful with that, as it's tempting to use a word defined one way in a context which requires the other one. Hilarity ensues.
morriswalters wrote:I probably am, but intelligence evolved for a purpose.
No, intelligence evolved as a result. Subtle but important difference.
morriswalters wrote:edit
I surrender, I'm probably an idiot. :D
There's no battle here, just an opportunity for each of us to see the world in a different way, and test how it fits.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Can Robots be Human? (or The Bicentennial Man's Question

Postby Izawwlgood » Mon Dec 30, 2013 6:23 pm UTC

morriswalters wrote:I surrender, I'm probably an idiot. :D
Yeah, dude, it's just a conversation :)

morriswalters wrote:Minds exist because of the purpose, not despite it.
I agree with Jose here, that minds are the product of a solution to evolutionary pressure, but are now significantly more. I absolutely concur that our minds are still... influenced? Limited? by certain evolutionary pressures, but I would say we are vastly freer from, than subject to, those pressures. Dangle a vertical line in front of a frog's field of view and it will pounce. Humans aren't as hardwired, in many respects.

And it's that freedom of mind that I think is a 'human' trait.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

