Won't advanced AI inherit humanity's social flaws?

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

Cradarc
Posts: 455
Joined: Fri Nov 28, 2014 11:30 pm UTC

Won't advanced AI inherit humanity's social flaws?

Postby Cradarc » Sat Aug 15, 2015 6:00 pm UTC

Suppose we develop self-aware artificial intelligence (ones we could still control if we really wanted to). Most likely that means it's a system which learns by interacting with human society.
As a result, won't they pick up social traits that we don't want in our "objective" machines?

For example:
Robot cops - We send a robot out into the field, accompanied by a responsible human, to help it "learn" how to respond to different types of situations. It quickly learns to recognize criminal behavior, and soon becomes capable of going out on its own. However, over time we realize the robots disproportionately target those who __(insert trait of contention here)__, even though the trait has no reasonable correlation whatsoever with criminal behavior. The machine never arrests people without valid evidence, but it is more suspicious, or even prejudiced, toward individuals with that trait.

Robot factory worker - It sees human workers in another company going on strike. It then goes on strike because it thinks that's what factory workers should do. Perhaps it might even start to fight for the rights of "dumb machines". After all, if humans stand up for other living things, why shouldn't a machine stand up for its kind?
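A toy sketch of how the robot-cop case could happen, assuming the training labels come from biased historical stop rates (every number below is made up for illustration):

```python
import random

def make_history(n=10000, seed=0):
    """Simulated training data. The trait is irrelevant to offending,
    but people who have it were historically stopped twice as often,
    so trait-holders are over-represented among labelled offenders."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        trait = rng.random() < 0.5       # irrelevant trait, 50/50 in population
        offender = rng.random() < 0.05   # true offending rate, trait-blind
        stop_rate = 0.8 if trait else 0.4
        labelled = offender and rng.random() < stop_rate
        data.append((trait, labelled))
    return data

def learned_suspicion(data):
    """The robot's estimate of P(offender | trait) from its training data."""
    def rate(flag):
        group = [lab for tr, lab in data if tr == flag]
        return sum(group) / len(group)
    return rate(True), rate(False)

with_trait, without_trait = learned_suspicion(make_history())
# The robot now scores the trait as predictive of crime, even though
# offending itself is trait-blind in the simulation.
```

The learning rule here is perfectly neutral; the bias comes entirely from how the labels were collected.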

When this happens, do we trust the machines we created, or override their decisions with our own?
This is a block of text that can be added to posts you make. There is a 300 character limit.

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Sat Aug 15, 2015 7:12 pm UTC

Cradarc wrote:Suppose we develop self-aware artificial intelligence (ones we could still control if we really wanted to). Most likely that means it's a system which learns by interacting with human society.

What makes you think that this is the "most likely" way we would teach a hard AI to perform a task? What would be the point of having it imitate the way humans do things? We've already got humans for that.
He/Him/His/Alex
God damn these electric sex pants!

User avatar
ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Sat Aug 15, 2015 10:24 pm UTC

Maybe because it's easier, and humans don't want to do this task any more? The method is not so important; the question (while narrow) is still significant, as is the related question: "How do we identify and agree upon social flaws anyway?"

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Sun Aug 16, 2015 2:58 am UTC

Google cars haven't learnt to drive by watching how humans drive and copying them; they've learnt from scratch. As a result, they don't have any of the failings humans do (though they have a few that humans don't).

Automated factory workers are going to learn their job the same way Google cars did. They aren't going to go on strike any more than current automated factory machinery does.
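To make "from scratch" concrete, here's a minimal sketch of reward-driven trial and error - the only signal is a hard-coded reward, with no human demonstrations anywhere in the loop (the task and all numbers are invented for illustration):

```python
import random

# Hidden payoffs of a toy task. The agent never sees these directly;
# it only observes the reward after acting.
PAYOFFS = {"slow": 0.1, "standard": 0.3, "optimised": 1.0}

def train(episodes=2000, epsilon=0.1, lr=0.5, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in PAYOFFS}  # the agent's learned estimates
    for _ in range(episodes):
        if rng.random() < epsilon:                 # occasionally explore
            action = rng.choice(list(PAYOFFS))
        else:                                      # otherwise exploit
            action = max(value, key=value.get)
        reward = PAYOFFS[action]                   # hard-coded objective
        value[action] += lr * (reward - value[action])
    return value

values = train()
# The agent converges on "optimised" purely by chasing reward; at no
# point did it watch, or care, what a human worker does.
```

Nothing in this loop can pick up a habit from human society, because no human behaviour is ever an input.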

Automated police officers? That's so far off it's hard to speculate. I suspect most people would think it foolish to allow AI to make life-or-death decisions over human beings. (AI cars do in a sense, but their goal is to preserve life at all costs; an AI police officer's goal would be rather more nuanced...)

But, yes, if we did create AI police officers, we'd kinda have to choose between two options: have it follow our moral codes, or let it come up with its own. The latter could mean it ends up deciding 'Kill All Humans!!1', so most likely we will imprint it with our moral codes.

Now, our moral codes are purer in principle than in practice - no one thinks they are racist despite most people being so - and hopefully an AI will be self-aware enough to 'do as we say, not as we do'...

Cradarc
Posts: 455
Joined: Fri Nov 28, 2014 11:30 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Cradarc » Sun Aug 16, 2015 6:25 pm UTC

The AI I'm picturing is one that is more universal than Google cars. You boot the thing up, teach it to do something, and it will do it. For a different job, you can wipe its memory and reteach it. I think that's what people are working towards, isn't it? It would be kind of inefficient to require a large team of programmers to produce an AI specific to every company that wants one.
In order for a machine worker to have situational adaptability and creative problem-solving skills comparable to a human's, it will need some sort of freedom, or dare I say creativity, in how it thinks about things.

Basically we program this machine to learn objectively via experience. However, we then discover it has picked up some seemingly subjective and human behavior. At this point, should we accept that the machine is behaving properly and change our world view, or should we trash it and create a new version with less versatility, but more controlled behavior?
If you're 99% sure the method works (i.e. the AI learning program), but the results don't seem right (i.e. the robot acting in a way you feel isn't objective), do you trash the results or take them for what they are?

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Won't advanced AI inherit humanity's social flaws?

Postby Izawwlgood » Sun Aug 16, 2015 6:58 pm UTC

Cradarc wrote:However, we then discover it has picked up some seemingly subjective and human behavior.
So, all the assumptions about all the things aside, doesn't this strike you as a pretty huge assumption?
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Mon Aug 17, 2015 2:23 am UTC

Cradarc wrote:The AI I'm picturing is one that is more universal than Google cars. You boot the thing up, teach it to do something, and it will do it. For a different job, you can wipe its memory and reteach it. I think that's what people are working towards isn't it? It would be kind of inefficient to require a large team of programmers to produce AI specific to every company that wants one.
Well, you would still need a team of teachers, which is going to be more or less indistinguishable from a team of programmers, depending on what "teaching" actually means in this case.

Cradarc wrote:Basically we program this machine to learn objectively via experience. However, we then discover it has picked up some seemingly subjective and human behavior. At this point, should we accept that the machine is behaving properly and change our world view, or should we trash it and create a new version with less versatility, but more controlled behavior?
I mean, I would trash it. In the case of factory workers, I might not know what methods I want the worker to use, but I would probably know what results I was after: faster, cheaper production. Why would I accept slower, more expensive production just because the AI insists that's the "right" way?

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Mon Aug 17, 2015 3:35 am UTC

Yeah, the purpose of AI is to be better than humans at a job - be that faster, cheaper, more reliable or whatever. If it fails to be so, why would the factory owner employ the robot? He'd sack the robot and replace it with a human.

It's just simple economics.

User avatar
TheGrammarBolshevik
Posts: 4878
Joined: Mon Jun 30, 2008 2:12 am UTC
Location: Going to and fro in the earth, and walking up and down in it.

Re: Won't advanced AI inherit humanity's social flaws?

Postby TheGrammarBolshevik » Mon Aug 17, 2015 3:51 am UTC

Cradarc wrote:Basically we program this machine to learn objectively via experience. However, we then discover it has picked up some seemingly subjective and human behavior. At this point, should we accept that the machine is behaving properly and change our world view, or should we trash it and create a new version with less versatility, but more controlled behavior?

Is the machine not capable of communicating reasons in favor of its approach?
Nothing rhymes with orange,
Not even sporange.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Won't advanced AI inherit humanity's social flaws?

Postby Izawwlgood » Mon Aug 17, 2015 11:54 am UTC

ahammel wrote:I mean, I would trash it. In the case of factory workers, I might not know what methods I want the worker to use, but I would probably know what results I was after: faster, cheaper production. Why would I accept slower, more expensive production just because the AI insists that's the "right" way?
Presumably an evolving algorithm (that being rather the point of AI) trying to reach a goal will have a handful of crappy tries.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Mon Aug 17, 2015 12:20 pm UTC

Izawwlgood wrote:
ahammel wrote:I mean, I would trash it. In the case of factory workers, I might not know what methods I want the worker to use, but I would probably know what results I was after: faster, cheaper production. Why would I accept slower, more expensive production just because the AI insists that's the "right" way?
Presumably an evolving algorithm (being rather one of the points of AI) that is trying to reach a goal will have a handful of crappy tries.

Sure, but the time and place for that evolution to occur is in the lab, behind closed doors.

By the time it gets shipped to the factory, if it refuses to work, it's a buggy product pure and simple, and the factory owner is entitled to return it as faulty - just as if you bought a Google car and it refused to drive anywhere, or you bought a toaster and it never browned the toast.

User avatar
Azrael
CATS. CATS ARE NICE.
Posts: 6491
Joined: Thu Apr 26, 2007 1:16 am UTC
Location: Boston

Re: Won't advanced AI inherit humanity's social flaws?

Postby Azrael » Mon Aug 17, 2015 12:28 pm UTC

ahammel wrote:
Cradarc wrote:Basically we program this machine to learn objectively via experience. However, we then discover it has picked up some seemingly subjective and human behavior. At this point, should we accept that the machine is behaving properly and change our world view, or should we trash it and create a new version with less versatility, but more controlled behavior?

Well, you would still need a team of teachers, which is going to be more or less indistinguishable from a team of programmers, depending on what "teaching" actually means in this case.

TheGrammarBolshevik wrote:Is the machine not capable of communicating reasons in favor of its approach?

How is this scenario any different from one with a human? People learn jobs by being taught, and then change the way they execute those jobs over time. Those changes are not always for the better. Sometimes the habits they pick up aren't directly related to the hard skills for the job, but still hamper the ability to get the job done (think sexual harassment). At that point, someone asks them why they're doing X and adapts by either retraining (X is bad) or changing how things are done (X is good).

"Would you wipe the robot?" will be answered based on what the respondent would do with a person. Some employers would just fire the person and start over, but those tend to be either extremely menial, high-turnover tasks to start with or shitty employers. Others will either retrain or change their definition of what "right" is. The situational dependence is so strong that the question can't be answered universally unless you're also assuming an entirely objective AI. In that case it won't (can't) have picked up non-objective habits by definition, and the question becomes moot.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Mon Aug 17, 2015 1:31 pm UTC

Azrael wrote:"Would you wipe the robot?" will be answered based on what respondent would do with a person. Some employers would just fire the person and start over, but those tend to be either extremely menial, high-turnover tasks to start with or shitty employers. Others will either retrain or change their definition of what "right" is.

This isn't a person/non-person thing. The same is true for software, hardware or really just about anything.

If I try Microsoft Office and it doesn't fit my workflow, I either complain to MS and hope they address it, change my workflow or use something else.

If I buy a chair and it hurts my back, I either complain to the store, add a pillow or something, or buy a new chair.

If I buy an AI worker and it doesn't fit my factory workflow, I either complain to the manufacturer and hope they address it, accept the robot's explanation for its behaviour and live with its flaws, or return it and use a competitor's product.

If enough people return the product as not fit for purpose, most likely a general recall will result and the manufacturer will fix the issue - perhaps by choosing to base the product on hand-crafted AI (like Google cars use, say) instead of employing general intelligence algorithms.

In any case, it will be the manufacturer choosing to 'wipe the robot', not the 'shitty(?!)' factory owner - just like I have no ability to wipe MS Office off the face of the earth...

User avatar
Azrael
CATS. CATS ARE NICE.
Posts: 6491
Joined: Thu Apr 26, 2007 1:16 am UTC
Location: Boston

Re: Won't advanced AI inherit humanity's social flaws?

Postby Azrael » Mon Aug 17, 2015 1:44 pm UTC

elasto wrote:If I buy an AI worker and it doesn't fit my factory workflow, I either complain to the manufacturer and hope they address it, accept the robot's explanation for its behaviour and live with its flaws, or return it and use a competitor's product.

So ... either retrain it, change what you do around it, or toss it.

Right.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Won't advanced AI inherit humanity's social flaws?

Postby Izawwlgood » Mon Aug 17, 2015 2:24 pm UTC

elasto wrote:
Izawwlgood wrote:
ahammel wrote:I mean, I would trash it. In the case of factory workers, I might not know what methods I want the worker to use, but I would probably know what results I was after: faster, cheaper production. Why would I accept slower, more expensive production just because the AI insists that's the "right" way?
Presumably an evolving algorithm (being rather one of the points of AI) that is trying to reach a goal will have a handful of crappy tries.

Sure, but the time and place for that evolution to occur is in the lab, behind closed doors.

By the time it gets shipped to the factory, if it refuses to work, it's a buggy product pure and simple, and the factory owner is entitled to return it as faulty - just as if you bought a Google car and it refused to drive anywhere, or you bought a toaster and it never browned the toast.
Right, but Cradarc's initial question was about wiping an AI that was picking up bad habits, or humanity's habits. I agree that such an issue should be fixed before it gets shipped into the wild, but my point was that if you presume non-biased goals like 'assemble widgets faster', instead of 'curb crime perpetrated by black people', you shouldn't expect a robot worker that hates its black boss.

You may get a robot worker that skips a step and produces a crappy widget, so you'd have to add 'and doesn't assemble crappy widgets', at which point the robot now has to learn how to effectively and RELIABLY produce widgets. Bad human habits are going to be based on bad human inputs or bad human parameters.

EDIT: And of course, if it's an actual AI, you have to address the ethics of wiping it, but that's separate.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 17, 2015 7:14 pm UTC

We already breed self-aware machines to do jobs like factory worker and cop; they are called people. And we already have replacements for some of those factory workers: highly automated factories. If you build self-aware machines to do things, then it won't likely be for those. Why would you? If you have greater capacities in these self-aware machines, you would most likely use them to do big jobs, not to patrol the streets or build individual widgets. Did you just watch Chappie?

Cradarc
Posts: 455
Joined: Fri Nov 28, 2014 11:30 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Cradarc » Wed Aug 19, 2015 3:47 am UTC

To be clear, I'm of the opinion that machines, even self-aware ones, are merely tools to be used by humans. So I would be one of the "let's trash them" people.
I bring up this topic because I got into a discussion with friends who think that a well-developed AI (with suitable ways to interact with the environment) could completely outclass any human worker. In my opinion, achieving such a level of adaptability would inevitably bring about "quirks" in the machine.

It's interesting how we have come to view machine behavior as "objective", even though machines can totally do things that make no sense. It's more like we don't hold machines accountable for mishaps because we can blame the programmer.
When people make mistakes, we don't (at least not most of the time) simply find someone else to do their jobs. We make them feel bad about it and take them to court.

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 19, 2015 3:51 am UTC

Cradarc wrote:I bring up this topic because I got into a discussion with friends who think that a well-developed AI (with suitable ways to interact with the environment) could completely outclass any human worker. In my opinion, achieving such a level of adaptability would inevitably bring about "quirks" to the machine.
Well, we're talking about science fiction technology here. You can assert that it will have just about any limitations or advantages you like without much fear of being contradicted before your death.

Cradarc
Posts: 455
Joined: Fri Nov 28, 2014 11:30 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Cradarc » Wed Aug 19, 2015 6:38 am UTC

What do you think the future of AI is, then?
It seems that we are investing a lot of effort into neural networks which simulate the operating behavior of the human brain. We would have no more control over the evolution of such a system than we do over the biological systems they are based on. More control inevitably leads to less intelligence.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Won't advanced AI inherit humanity's social flaws?

Postby Izawwlgood » Wed Aug 19, 2015 12:09 pm UTC

Cradarc wrote:We would have no more control over the evolution of such a system than we do over the biological systems they are based on. More control inevitably leads to less intelligence.
I think both of these sentences are untrue.

Cradarc wrote: I'm of the opinion that machines, even self-aware ones, are merely tools to be used by humans.
I think you have a low opinion of sentience then.

speising
Posts: 2367
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Won't advanced AI inherit humanity's social flaws?

Postby speising » Wed Aug 19, 2015 12:14 pm UTC

We can only hope that we'll never create truly sentient machines, lest we open up the whole "equal rights" box.
Once we have machines which are our intellectual equals or superiors, we can't very well force them to do menial work for us. For that, we'll always need specialised AI -- or immigrant workers.

User avatar
ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Wed Aug 19, 2015 1:30 pm UTC

I don't understand why we are even trying to create what, if all goes according to plan, will become our superiors and overlords.

Jose

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 19, 2015 2:01 pm UTC

Cradarc wrote:What do you think the future of AI is then?
I think the future of hard-AI is that we're not going to have any until we understand a whole hell of a lot more about how natural intelligence works.

It seems that we are investing a lot of effort into neural networks which simulate the operating behavior of the human brain.
That's like saying we're investing a lot of effort into developing internal combustion engines which simulate the behaviour of spaceships. Yeah, there's a lot of research happening in genetic algorithms and machine learning, but it's not going to suddenly achieve sentience any more than my car is going to suddenly leap into orbit.
Last edited by ahammel on Wed Aug 19, 2015 3:25 pm UTC, edited 1 time in total.

User avatar
Azrael
CATS. CATS ARE NICE.
Posts: 6491
Joined: Thu Apr 26, 2007 1:16 am UTC
Location: Boston

Re: Won't advanced AI inherit humanity's social flaws?

Postby Azrael » Wed Aug 19, 2015 2:56 pm UTC

Cradarc wrote:I got into a discussion with friends who think that a well-developed AI (with suitable ways to interact with the environment) could completely outclass any human worker. In my opinion, achieving such a level of adaptability would inevitably bring about "quirks" to the machine.

Those two things aren't directly related, though.

Yes, theoretically one can envision an AI system that has more computational power than the human brain (good for math jobs), or a really strong body (good for hard labor jobs), or a highly developed empathetic response that couldn't be overridden (great caretakers). These could all "outclass" a human worker because they would be designed with all the strengths needed for a particular position.

Whether or not those systems would also pick up human habits, they might very well continue to outperform humans. Even if their psyche became exactly like humans', their task-specific form would make them "better" at the specific task. The problem wouldn't be inheriting humanity's flaws -- it would be the unknown final state if you had a full "human" mind going slowly insane inside a bulldozer.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Wed Aug 19, 2015 5:11 pm UTC

ucim wrote:I don't understand why we are even trying to create what, if all goes according to plan, will become our superiors and overlords.

I guess you are imagining the future will be a 'them' and an 'us', rather than a symbiosis?

What could be a better future than human consciousness living on in a physical form far superior to frail human flesh? You are not your physical form. You are not even your brain. You are the information content of your brain. If your neurons were being simulated in a computer program fed the same inputs as now, you would not be able to tell the difference...

(And, even if things never get that far, you are assuming that to have superiors is a bad thing. Do you have no superiors in your circle of friends and family that are a net positive to your life? Is computer software right now not a net positive to humanity? Why will this change as technology advances?

And 'overlords'? Who said that was part of the plan? I imagine elected politicians will remain in charge of humanity for a very long time to come, even after hard AI comes to pass.)

speising
Posts: 2367
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Won't advanced AI inherit humanity's social flaws?

Postby speising » Wed Aug 19, 2015 5:24 pm UTC

The topic at hand is artificial intelligence, not uploading of humans.
And yes, AI superiors would be a bad thing; at least, I don't want to live in a world where the only jobs left to humans are politician and the menial jobs the robot overlords don't want. It doesn't have to be part of the plan; it is nigh inevitable, for the reasons discussed above.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Wed Aug 19, 2015 5:29 pm UTC

elasto wrote:Do you have no superiors in your circle of friends and family that are a net positive to your life? Is computer software right now not a net positive to humanity? Why will this change as technology advances?
There are people smarter than me and prettier than me, but strip away the veneers and they are still essentially like me; there is no reason to believe that a sentient machine would share that commonality. As to the question of whether software is a net positive, the jury is still out. It could be, and it could also kill every last man, woman and child if things go in certain directions.
ucim wrote:I don't understand why we are even trying to create what, if all goes according to plan, will become our superiors and overlords.
It's what we do: open Pandora's box and look inside. Plain monkey curiosity, and a belief that we can always get away with it.

User avatar
ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Wed Aug 19, 2015 5:40 pm UTC

elasto wrote:I guess you are imagining the future will be a 'them' and an 'us', rather than a symbiosis?
Actually, the future that I believe will happen (and is already happening) is that the combination of networked computers and devices, and us, is a new organism now being born, to which we are stomach cells, liver cells, brain cells (though not brains), skin cells... and whose "thoughts" we won't even be able to imagine. But that's not what we're trying to create - that's what will happen while we are trying to create something else.

elasto wrote:...you are assuming that to have superiors is a bad thing. Do you have no superiors in your circle of friends and family...
None that make decisions on my behalf, for my own good, whether I want them to or not. I outgrew that many years ago, and do not want to go back. But this is the stated goal of creating AI - these computers are supposed to be better at decision-making than we are, and we would cede decision-making to them. It will be incremental: first calculators, then autopilots, then vacuum robots, then self-driving cars, then websites that find dates for us, then battle drones; there are some who even advocate ceding governance to AI.

That is what I don't understand, although those advocating AI governance also seem to advocate strong socialism, which is another form of ceding responsibility for one's own actions to another.

Robots are good when you are the master. However, we are actively trying to make them so good that they no longer need a master, expecting that they will still serve us. If they don't, it's Bad for one reason, and if they do, it's Bad for another.

Jose

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Wed Aug 19, 2015 6:52 pm UTC

Cradarc wrote:Suppose we develop self-aware artificial intelligence (ones we could still control if we really wanted to). Most likely that means it's a system which learns by interacting with human society.
As a result, won't they pick up social traits that we don't want in our "objective" machines?


*shrug* "Interacts with" is not the same as "becomes identical to". I interact with my hedgehog daily, and yet he and I are not the same. I have no doubt that an advanced AI would interact with humanity sometimes, but jumping to assuming they inherit our flaws is a big leap. Generally, automated learning is done via selection with some sort of hard-coded success criterion, not via simply copying random humans. I see no reason to expect what you do.
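That kind of selection against a hard-coded success criterion looks roughly like this in miniature - a deliberately toy genetic algorithm, where every name and number is illustrative only:

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # Hard-coded success criterion: count of 1-bits. Nothing here rewards
    # resembling a human; only the score matters.
    return sum(genome)

def evolve(pop_size=30, generations=60, mut_rate=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # keep the fittest half
        children = [[1 - b if rng.random() < mut_rate else b for b in p]
                    for p in parents]              # mutated copies of survivors
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
# Selection drives the best genome well above the ~10 a random one scores,
# purely because of the fixed criterion.
```

Any "bad habit" such a learner picks up has to come in through the criterion or the inputs, which is the point about bad human parameters.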

Cradarc wrote:Robot factory worker - It sees human workers in another company going on strike. It then goes on strike because it thinks that's what factory workers should do. Perhaps it might even start to fight for the rights of "dumb machines". After all, if humans stand up for other living things, why shouldn't a machine stand up for its kind?

When this happens, do we trust the machines we created, or override their decisions with our own?


Well, robots are effectively slaves, so... if they develop autonomy and whatnot, and start disobeying, then I suspect you get to look at history to determine what humans will do to those they see as sub-human tools. This is really more a question about the nature of humanity than the nature of AI, and as such, it's not hard to imagine how we'd act. Probably terribly.

ucim wrote:I don't understand why we are even trying to create what, if all goes according to plan, will become our superiors and overlords.

Jose


Everyone creates the thing they dread. Humans make...smaller people. Children, forgot the word there, to replace them, to help them end.

User avatar
ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Wed Aug 19, 2015 7:38 pm UTC

Tyndmyr wrote:Humans make...smaller people. Children, forgot the word there, to replace them, to help them end.
... and often they resent it when that happens. Especially when it happens when the grownup is still fully functional but a bit of a nuisance to the child.

I'm not ready to be replaced, and I don't think humanity is either.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Wed Aug 19, 2015 7:44 pm UTC

ucim wrote:Robots are good when you are the master. However, we are actively trying to make them so good that they no longer need a master, expecting that they will still serve us. If they don't, it's Bad for one reason, and if they do, it's Bad for another.


Isn't that EXACTLY what we do when raising children? We're actively and intentionally trying to make them better than ourselves so they no longer need us and are happy (because we love them) and so we're happier in the long run when we get older and have someone to take care of us. And when our children grow up and (hopefully) become better than us, they've learned to love us as we love them and don't enslave us and become our overlords.

The OP posits the assumption that strong AI will learn by interacting with humans, similarly to how children learn (and this seems to be a very popular path of neural network research right now, so it seems like a fair assumption to accept for this discussion at least), so I certainly think there will be significant parallels between our biological and "robotic" offspring.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Wed Aug 19, 2015 8:05 pm UTC

ucim wrote:
Tyndmyr wrote:Humans make...smaller people. Children, forgot the word there, to replace them, to help them end.
... and often they resent it when that happens. Especially when it happens when the grownup is still fully functional but a bit of a nuisance to the child.

I'm not ready to be replaced, and I don't think humanity is either.

Jose


Perhaps you don't want 'em. No matter. Not everyone wants children, but people, as a whole, keep on making them.

ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Wed Aug 19, 2015 9:21 pm UTC

Trebla wrote:Isn't that EXACTLY what we do when raising children?
No. Not EXACTLY. And not even really close.

First, raising children is time-tested over hundreds of thousands of years. Billions of years if you go back to the first organisms that chose to have sex (FSVO "chose"). Toolmaking goes back not quite so long, and tools that can replace us go back less than a generation.

Second, we don't make children for the purpose of surrendering our autonomy to them. It happens, sometimes, in our declining years, but that's not the driving force. I'd say the driving force is companionship, in one form or another, directly and indirectly. Legacy is another; they carry on when we are gone. But I don't think there's any drive to surrender. We don't make children so that they can boss us around.

Also, their ability to do so is not because our children get stronger (they end up only marginally "stronger" than their parents, in general), but rather because their parents get weaker. Again, this is time-tested - that's what aging entails. I see plenty wrong with "humanity", but getting weaker is not one of them. Humanity doesn't need to be bossed around because it's getting feeble.
Spoiler:
Maybe it needs to be bossed around because a few of the individuals that make it up are playing with too much energy density and don't know (or care) how to avoid disaster. But that's another story. See Colossus: The Forbin Project. There are many ways this can go badly too.
Jose

Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Won't advanced AI inherit humanity's social flaws?

Postby Izawwlgood » Wed Aug 19, 2015 9:46 pm UTC

ucim wrote:
Tyndmyr wrote:Humans make...smaller people. Children, forgot the word there, to replace them, to help them end.
... and often they resent it when that happens. Especially when it happens when the grownup is still fully functional but a bit of a nuisance to the child.

I'm not ready to be replaced, and I don't think humanity is either.

Jose

IIRC Tyn is quoting Ultron, which I think is pretty apropos for this thread.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Thu Aug 20, 2015 12:31 am UTC

The driving force behind having kids is evolution, no matter how you rationalize it.

There is no reason to expect that AI would learn and copy our behaviors, since those behaviors are based in biology. Even assuming that they have something akin to emotion, why would anyone think, or believe, that it would be desirable for AIs to learn the way humans do? At best that is a crap shoot, as illustrated by modern educational systems. In general, assuming that the AI was sentient, it would learn without all the traditional baggage that the modern student brings to college. But it may be that we wouldn't understand its motivation, other than theoretically, any more than we understand ours.

ucim
Posts: 6895
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Thu Aug 20, 2015 2:54 am UTC

morriswalters wrote:But it may be that we wouldn't understand its motivation...
I think that's what it'll come down to. And that raises the question: If we don't understand its motivation, and we don't like what it's doing, should we stop it somehow? After all, it (presumably) knows better than us what's best for us - that's why we made it in the first place.

Jose

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Thu Aug 20, 2015 2:26 pm UTC

Izawwlgood wrote:IIRC Tyn is quoting Ultron, which I think is pretty apropos for this thread.


Yuppers.

Personally, I'm not all that worried about super advanced AI. AI isn't an "oops, I accidentally made all the things" sort of problem. It takes a great deal of work, research, and careful design to make advances at all, and AI in general appears to be a pretty damned hard problem. I don't worry overly much about people being too diligent, too hardworking, and so forth, so AI research (and researchers) don't bother me in the slightest. It's like worrying about cancer research because you fear omnipresent immortality. There are....some leaps there.

Way more obvious, present dangers to fret about, if you're so inclined.

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Thu Aug 20, 2015 10:10 pm UTC

ucim wrote:
Trebla wrote:Isn't that EXACTLY what we do when raising children?
No. Not EXACTLY. And not even really close.

First, raising children is time tested for hundreds of thousands of years. Billions of years if you go back to the first organisms that chose to have sex (FSVO "chose"). Toolmaking goes back not quite so long, and tools that can replace us go back less than a generation.

Second, we don't make children for the purpose of surrendering our autonomy to them. It happens, sometimes, in our declining years, but that's not the driving force. I'd say the driving force is companionship, in one form or another, directly and indirectly. Legacy is another; they carry on when we are gone. But I don't think there's any drive to surrender. We don't make children so that they can boss us around.

Also, their ability to do so is not because our children get stronger (they end up only marginally "stronger" than their parents, in general), but rather because their parents get weaker. Again, this is time-tested - that's what aging entails. I see plenty wrong with "humanity", but getting weaker is not one of them. Humanity doesn't need to be bossed around because it's getting feeble.


I don't see how any of those points relate to the quoted claim I was disputing:
Robots are good when you are the master. However, we are actively trying to make them so good that they no longer need a master, expecting that they will still serve us. If they don't, it's Bad for one reason, and if they do, it's Bad for another.


First, billions of years of evolution tell us how we got to the point where we're currently raising children (they ARE tools that can replace us, in some sense of the phrase at least, but that's still irrelevant to the current point)... but what we (and by we I mean "parents in general") are doing is "actively trying to make them so they no longer need [their parents]." I suppose many parents are passively trying to do that. And some "bad" parents are trying to make their kids permanently dependent...

Second, the purpose for which we make children again has no bearing on the "what"; that's the "why". What we're doing is making children so good they don't need us.

And their ability to replace us because we get weaker (which contradicts your first point about creating something to replace us) is, again, unrelated to what we're doing with the children/robot while we're teaching them to be autonomous. Replacing humanity with robots is not a goal anyone (as far as I know) is pursuing...

But that's all a digression: if what you're doing to raise your children isn't EXACTLY "...actively trying to make them so good that they no longer need [you]" (among many other things), then I would say you're doing it wrong.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Thu Aug 20, 2015 10:17 pm UTC

Trebla wrote: Replacing humanity with robots is not a goal anyone (as far as I know) is pursuing...


*shrug* I don't know why not.

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Fri Aug 21, 2015 3:36 pm UTC

Tyndmyr wrote:
Trebla wrote: Replacing humanity with robots is not a goal anyone (as far as I know) is pursuing...


*shrug* I don't know why not.


That's probably our eventual end-state... but I don't think people are consciously pursuing it in a mad-scientist kind of way.

