Won't advanced AI inherit humanity's social flaws?


morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Fri Aug 28, 2015 10:28 pm UTC

Tyndmyr wrote: It is ridiculous to think that those who know least about a technology are best equipped to judge it.
Knowing about a technology has nothing to do with understanding the ramifications. If anybody had known what was going to happen, they wouldn't have done some of the things they did. Would you like examples? We find the future, we don't predict it.
Tyndmyr wrote:It is possible that, at some point, we figure out how to do everything that it is possible to do.
There hasn't been anything "new" to do for some years. We do some things differently, as compared to how we used to do them, but we are doing the same things over and over again. We do things to make humans able to live in comfort, such as that concept is. Food, shelter and the things that enable those two. Everything else is fluff. Angry Birds?

I'm not against or for technology. It doesn't matter. If we can do a thing, we will, even if it kills us. Bio weapons, chemical weapons, nuclear weapons, directed energy weapons and, last but not least, kinetic energy weapons: which of those do you think we could have lived without? That doesn't even begin to take into account the various industrial and other processes that we would have been better off without: PCBs, asbestos, lead additives in gas and paint, and so on. And you don't need machines to be much smarter than they are now to cause problems.

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Sat Aug 29, 2015 7:37 am UTC

ahammel wrote:
Empathy will reliably lead to worse decisions than math on any moderately important problem.
What about the set of "moderately important problems" which involve building consensus and cooperation among human beings without pissing them off?
Psychopaths are better at manipulating people than the most empathetic person you can possibly imagine. Building consensus among people who don't have it is basically a euphemism for manipulating. Unless you actually manage to convince them logically, which a) doesn't require empathy either and b) good luck with that.
I concede that I prefer the people I interact with to have some degree of empathy (which makes them more pleasant to have around), but not so much that it interferes with actually doing good (which makes them quite dangerous to have around).

But being the curious monkeys we are, we'll try to teach empathy to a computer nonetheless. If only to see if we can, and what it takes.

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Sat Aug 29, 2015 3:57 pm UTC

Autolykos wrote:Psychopaths are better at manipulating people than the most empathetic person you can possibly imagine.
My understanding is that, while certain personality disorders may make one inclined to be manipulative, they don't necessarily make one any good at it. It's not a superpower.

Building consensus among people who don't have it is basically a euphemism for manipulating.
No, it isn't.

Unless you actually manage to convince them logically, which a) doesn't require empathy either and b) good luck with that.
"Building consensus" doesn't mean "getting people to do what you, personally, think is best", either. And convincing people that your idea is the best one does, in fact, require empathy, even if you're right.
He/Him/His/Alex
God damn these electric sex pants!

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Sat Aug 29, 2015 6:15 pm UTC

Empathy doesn't just make people "pleasant" to be around, it makes them safer to be around.

Logic may be necessary in order to effectively achieve the goals you want to achieve, but empathy is necessary to make sure those goals aren't those of a sociopath.

A combination of both is absolutely necessary in any sufficiently powerful autonomous entity, as a lack of either could make that entity extremely dangerous to be around.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 7368
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Won't advanced AI inherit humanity's social flaws?

Postby The Great Hippo » Sat Aug 29, 2015 8:05 pm UTC

Autolykos wrote:Psychopaths are better at manipulating people than the most empathetic person you can possibly imagine.
What makes you think this is true? Manipulation is often about building trust -- and one of the ways you build trust is by expressing empathy.

Presuming by "psychopath" you mean "person without empathy", I'd expect they'd have a harder time with manipulation. They have to fake empathy -- but an empathetic person doesn't. On top of that, empathizing with someone means understanding some aspect of their experience, and connecting it to your own -- and when you understand someone's experiences, it makes it easier to manipulate them.

It's true that an empathetic person might not want to manipulate someone, but that doesn't mean they're bad at it; that just means they're less likely to do it.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Sun Aug 30, 2015 1:17 am UTC

Being manipulative is something that distinguishes psychopaths. That's one of the few things Autolykos got correct in that mess of a post.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
Cleverbeans
Posts: 1378
Joined: Wed Mar 26, 2008 1:16 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Cleverbeans » Sun Aug 30, 2015 5:59 am UTC

The Great Hippo wrote:Presuming by "psychopath" you mean "person without empathy", I'd expect they'd have a harder time with manipulation.

Maybe he just meant psychopath?
"Labor is prior to, and independent of, capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration." - Abraham Lincoln

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Sun Aug 30, 2015 6:05 am UTC

I think the point where we miss each other is that all of you seem to talk about person-to-person interactions. And I already agreed that empathy was nice to have in that case. Both as a lubricant and (I accept gmalivuk's point on this) as a failsafe. Logic alone won't prevent you from taking options with potentially horrible consequences, and while empathy might lead you to suboptimal choices, your intuition is better and the damage is more limited when few people are involved.
However, the game changes completely with large-scale decisions in management or politics. Suddenly, worrying more about persons just because you know their name and face can have enormous costs. And "slightly suboptimal" at this scale can easily mean killing thousands - of whom you might never become aware, so empathy can't save you there. Human intuitions aren't built for this, so there's no reason to expect them to be any good at it - and in fact they aren't.

I don't want to get sidetracked into a discussion of psychopathy, but I still find it noteworthy how many of the related traits you can observe in people at the higher levels of management and politics. I can basically go down the whole laundry list: pathological lying, lack of remorse, failure to accept responsibility, superficial charm, promiscuous sexual behavior, ...
And since management and politics is all about "building consensus" on a large scale, as you might put it, psychopathic traits don't seem to be that much of a disadvantage. On the other hand: Try naming just one living, powerful politician with high levels of empathy...

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Sun Aug 30, 2015 6:20 am UTC

Why do you think empathy is only a failsafe one on one? Where do large policy decisions come from in someone who lacks empathy?
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 7368
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Won't advanced AI inherit humanity's social flaws?

Postby The Great Hippo » Sun Aug 30, 2015 8:44 am UTC

gmalivuk wrote:Being manipulative is something that distinguishes psychopaths. That's one of the few things Autolykos got correct in that mess of a post.
I think there's a subtle difference between just being manipulative and actually being good at manipulation. I've met plenty of manipulative people; I've only met a few who were really any good at it -- and none of them appeared to lack empathy.

(Then again, you get better at doing something by practicing it -- and if you have no empathy, you may be more prone to practicing manipulation! So I don't think it's impossible to have zero empathy and still be an excellent manipulator -- I just don't believe that being psychopathic naturally lends itself to being an excellent manipulator. Maybe I'm wrong? I'm not a psychiatrist, and I've never met someone that I know was diagnosed as a psychopath.)
Cleverbeans wrote:Maybe he just meant psychopath?
Even psychiatrists and psychologists can't seem to agree on what 'psychopath' means (the article opens by describing the history of this problem!), so I think there's utility in being clear about what we mean by the term when we use it.
Autolykos wrote:However, the game changes completely with large-scale decisions in management or politics. Suddenly, worrying more about persons just because you know their name and face can have enormous costs. And "slightly suboptimal" at this scale can easily mean killing thousands - of whom you might never become aware, so empathy can't save you there. Human intuitions aren't built for this, so there's no reason to expect them to be any good at it - and in fact they aren't.
I'm perfectly capable of feeling empathy with someone even as I throw the switch and mow them down with a trolley. You don't need to be a psychopath to be good at this sort of problem.

You seem to be claiming that 1) Psychopaths are better at solving trolley problems, and 2) Powerful politicians exhibit psychopathic behavior. But when we look at policy decisions made by the US, we don't see a history of well-managed trolley problems. If the powerful lack empathy, it doesn't seem to be helping: During the Cold War, we almost literally annihilated our species over political squabbling. If anything, having empathy probably steers you away from nonsense like that.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 12:26 am UTC

gmalivuk wrote:Why do you think empathy is only a failsafe one on one? Where do large policy decisions come from in someone who lacks empathy?
If it weren't limited, you would be emotionally crippled. Just thinking about each child that will go to bed hungry tonight is enough to make you lose sleep if you attach too much emotion to it. Think about the burnout of social workers. Utilitarians have the right of it, if they can define the greatest good for the greatest number sufficiently. Empathy gets in the way.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Mon Aug 31, 2015 12:55 am UTC

Great, but do you have an answer to my question? (You haven't yet, because I never said anything about unlimited empathy, which isn't actually a possible thing, nor did I say anything about being ruled by emotion, or about needing to have an equal amount of empathy for every living person.)
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
Quercus
Posts: 1810
Joined: Thu Sep 19, 2013 12:22 pm UTC
Location: London, UK
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby Quercus » Mon Aug 31, 2015 6:36 am UTC

morriswalters wrote:Utilitarians have the right of it, if they can define the greatest good for the greatest number sufficiently. Empathy gets in the way.

Why would you wish for the greatest good for the greatest number without empathy? Presumably a total lack of empathy would mean not caring about the suffering of others at all, and therefore "good for others" wouldn't be a particularly desirable goal for such a person.

User avatar
kanesimicart
Posts: 1
Joined: Mon Aug 31, 2015 7:35 am UTC
Location: Ha Noi
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby kanesimicart » Mon Aug 31, 2015 8:01 am UTC

Possibly because it's easier, and people would prefer not to do this task any more?

The mechanism is not all that important; the question (while narrow) is still significant, like the related question: "How would we identify and agree upon social flaws in the first place?"

We already breed aware machines to do jobs like assembly-line work, and the same for policing; they are called people. And we already have substitutes for many of those assembly-line workers: highly automated factories. If you build aware machines, it likely won't be for those tasks. Why would it be?

If these aware machines have greater capacities, you would most likely use them for hard problems, not to patrol the streets or assemble individual gadgets. Did you just watch Chappie?

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Mon Aug 31, 2015 9:18 am UTC

The Great Hippo wrote:You seem to be claiming that 1) Psychopaths are better at solving trolley problems, and 2) Powerful politicians exhibit psychopathic behavior.

Ah, I see. This is not what I'm claiming at all. I made two statements side by side to illustrate the shortcomings of empathy, not the "superiority" of psychopaths (which I do not believe in myself):
1) Empathy does not help when solving trolley problems.
2) You do not need empathy to unite people behind a common cause.

I brought the psychopaths in as an edge case, because edge cases are how I learned to test theories (sketched below):
- Let one parameter go to zero or infinity.
- Check whether the results are plausible.
- Take the next parameter and repeat.
The only claim I make about this edge case is that my theory (empathy has little use in management or politics) holds up better under these circumstances than your theory (you can't effectively coordinate people without empathy). I do NOT claim that psychopaths would make better managers or politicians, and I definitely don't want them in positions of power. I only observe that they are in power far more often than should be expected if empathy were necessary, or even just highly useful.
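(For concreteness, here is that edge-case probe as a minimal code sketch, applied to a toy model. The model, parameter names and numbers are all my own invented illustration, not anything proposed in this thread.)

```python
# Toy sketch of the edge-case procedure above: drive one parameter at a
# time to an extreme, hold the rest fixed, and eyeball the result.
# The model and all numbers are invented for illustration only.

def coordination_success(empathy, logic):
    """Toy model: how well a leader coordinates a group."""
    return min(1.0, 0.2 + 0.7 * logic + 0.1 * empathy)

def probe_edges(model, baseline, extremes=(0.0, 1e6)):
    """Vary each parameter to an extreme while holding the rest fixed."""
    for name in baseline:
        for value in extremes:
            args = dict(baseline, **{name: value})
            print(f"{name}={value:g} -> {model(**args):.3f}")

probe_edges(coordination_success, {"empathy": 0.5, "logic": 0.5})
```

If the results at the extremes look absurd (here, zero empathy barely dents the outcome), either the theory survives the edge case or the toy model is missing something.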
What I do want instead of empathy is compassion. A desire to do good that is based on rationality and carefully thought out ethics, not on random instincts that were well-adapted to win tribal popularity contests in the stone age.

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Mon Aug 31, 2015 9:31 am UTC

Quercus wrote:Why would you wish for the greatest good for the greatest number without empathy? Presumably a total lack of empathy would mean not caring about the suffering of others at all, and therefore "good for others" wouldn't be a particularly desirable goal for such a person.
Self-interest suffices. You'll notice that it is very hard to campaign for different rules to apply to different people, especially if you would somehow end up better off under those rules. So you should find a set of rules and values that makes you reasonably well-off if universally followed, but is also symmetrical. Utilitarianism is one of these.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 9:37 am UTC

Quercus wrote:Why would you wish for the greatest good for the greatest number without empathy? Presumably a total lack of empathy would mean not caring about the suffering of others at all, and therefore "good for others" wouldn't be a particularly desirable goal for such a person.
Because empathy gets in the way. When we triage in disasters where large numbers of people are hurt and there are limits to our ability to treat them all quickly, we take a utilitarian view, help those who can be helped, and let the rest die. The temptation to put yourself in the shoes of the dying makes you want to spend too much time trying to save the one at the expense of all. It isn't that the emotion doesn't exist, it's that it is rerouted cognitively, from the individual to the group. Families flip this on its head when dealing with end of life issues for their older members. Willing to spend any amount to change an endpoint that can't be changed. One would assume that an AI would need to see the greater good, much as policy makers try to do when they exercise concepts that influence large numbers of people.
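(To make the head-over-heart arithmetic concrete, here is a toy sketch of triage as greedy utilitarian allocation. The scores, costs and capacity are entirely invented; real triage protocols are far more involved.)

```python
# Toy sketch of utilitarian triage: treat patients in order of expected
# benefit per unit of care until capacity runs out. All numbers invented.

def triage(patients, capacity):
    ranked = sorted(patients, key=lambda p: p["benefit"] / p["cost"],
                    reverse=True)
    treated = []
    for p in ranked:
        if p["cost"] <= capacity:
            capacity -= p["cost"]
            treated.append(p["name"])
    return treated

patients = [
    {"name": "A", "benefit": 0.9, "cost": 1},  # likely saved, cheap to treat
    {"name": "B", "benefit": 0.2, "cost": 8},  # unlikely saved, very costly
    {"name": "C", "benefit": 0.7, "cost": 2},
]
print(triage(patients, capacity=3))  # ['A', 'C'] -- B is left untreated
```

The empathic pull described above is exactly the pull toward spending the whole capacity on B.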

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Mon Aug 31, 2015 12:07 pm UTC

Empathy gets in the way of achieving a goal you only care about... because of empathy?
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 12:41 pm UTC

Yes. I think. Your question is ambiguous.

User avatar
Quercus
Posts: 1810
Joined: Thu Sep 19, 2013 12:22 pm UTC
Location: London, UK
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby Quercus » Mon Aug 31, 2015 12:50 pm UTC

That seems like an inherently contradictory viewpoint to hold (which I believe is what gmalivuk was trying to point out). I don't think anyone is arguing that excessive, or poorly handled, empathy can't get in the way, but arguing that empathy is inherently obstructive to the goals it itself promotes strikes me as illogical.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 1:27 pm UTC

Quercus wrote:That seems like an inherently contradictory viewpoint to hold (which I believe is what gmalivuk was trying to point out). I don't think anyone is arguing that excessive, or poorly handled, empathy can't get in the way, but arguing that empathy is inherently obstructive to the goals it itself promotes strikes me as illogical.
Maybe. But triage is contradictory. Empathy places me in your shoes. If you are dying and might be saved if I devote enough resources, empathy drives me to do something for you, because I would want you to save me were I in your shoes. But at what cost? This is the primary dilemma of triage, head over heart. Empathy is a built-in response; it takes cognitive effort to override it. In terms of AI this is exactly the point. You don't want an AI to feel empathetic to the individual.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Mon Aug 31, 2015 2:50 pm UTC

Empathy means I feel bad because you're suffering. In no way does it imply I should devote unlimited resources to ending that suffering.

All the arguments you people are putting forth against empathy seem based on the problems that arise if decisions are based *only* on empathy, without any logic or reason. But "empathy without reason is bad" is not a valid counterargument to the position, "logic without empathy is bad".
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Mon Aug 31, 2015 5:14 pm UTC

morriswalters wrote:
Tyndmyr wrote: It is ridiculous to think that those who know least about a technology are best equipped to judge it.
Knowing about a technology has nothing to do with understanding the ramifications. If anybody had known what was going to happen, they wouldn't have done some of the things they did. Would you like examples? We find the future, we don't predict it.


Understanding how a thing works is essential to understanding the ramifications of things. This is why some politicians have looked...kind of idiotic when talking about ramifications of the internet or what not.

Knowledge doesn't prevent all errors, but it IS greatly helpful, and is far superior to....not having knowledge.

Tyndmyr wrote:It is possible that, at some point, we figure out how to do everything that it is possible to do.
There hasn't been anything "new" to do for some years. We do some things differently, as compared to how we used to do them, but we are doing the same things over and over again. We do things to make humans able to live in comfort, such as that concept is. Food, shelter and the things that enable those two. Everything else is fluff. Angry Birds?


Perhaps to you, but labeling it as "fluff" does not change the fact that humans want it, and will pursue it. We pretty much have food, shelter, etc mostly figured out to decent levels, but people have many, many more desires. You can call them needs or not, but from an economic perspective, it doesn't really matter.

Autolykos wrote:And since management and politics is all about "building consensus" on a large scale, as you might put it, psychopathic traits don't seem to be that much of a disadvantage. On the other hand: Try naming just one living, powerful politician with high levels of empathy...


*shrug* Sniping at bosses and politicians is shooting fish in a barrel here. Sure, there have been plenty of those without empathy.

But there have also been some with it. A whole bunch of millionaires have signed a pledge to give away half their fortune to charity before they die, and are actively working on that, for instance. That doesn't SOUND like a lack of empathy.

Quercus wrote:
morriswalters wrote:Utilitarians have the right of it, if they can define the greatest good for the greatest number sufficiently. Empathy gets in the way.

Why would you wish for the greatest good for the greatest number without empathy? Presumably a total lack of empathy would mean not caring about the suffering of others at all, and therefore "good for others" wouldn't be a particularly desirable goal for such a person.


Because it's practical. I cannot foresee the future. My station in life might change. Therefore, there is a positive value associated with society being better in general, even if it isn't something that helps me.

Plus, there's that whole cooperation thing. Not screwing over my neighbor for no good reason leaves him better disposed toward me. So, while being nice to the neighbors may not be a primary goal...it usually costs me little to do so, and is superior to alternatives. That makes it pretty logical.

Imagining oneself in place of another, and feeling as they do is valuable in terms of determining what they want. Sure, that isn't the only trait you need, but it's helpful information to have. Empathy wouldn't have evolved otherwise.

morriswalters wrote:Maybe. But triage is contradictory. Empathy places me in your shoes. If you are dying and might be saved if I devote enough resources, empathy drives me to do something for you, because I would want you to save me were I in your shoes. But at what cost? This is the primary dilemma of triage, head over heart. Empathy is a built-in response; it takes cognitive effort to override it. In terms of AI this is exactly the point. You don't want an AI to feel empathetic to the individual.


Sure I do. Why would I want a less capable AI?

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 6:55 pm UTC

gmalivuk wrote:Empathy means I feel bad because you're suffering. In no way does it imply I should devote unlimited resources to ending that suffering.

All the arguments you people are putting forth against empathy seem based on the problems that arise if decisions are based *only* on empathy, without any logic or reason. But "empathy without reason is bad" is not a valid counterargument to the position, "logic without empathy is bad".
Maybe you're right. I'm unsure of my ground.
Tyndmyr wrote:Sure I do. Why would I want a less capable AI?
I'm no longer certain. Emotionally I don't see it as producing a less capable AI. I see it as removing the barrier of feeling guilt because I have to favor one bad situation over another.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Mon Aug 31, 2015 7:26 pm UTC

Is guilt an inherent part of empathy? Is guilt inherently crippling to subsequent decisionmaking?

I myself have empathy, but that doesn't mean I'm incapable of carrying out everyday tasks thanks to the overwhelming guilt of being unable to prevent every tragedy ever.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Mon Aug 31, 2015 7:33 pm UTC

morriswalters wrote:I'm no longer certain. Emotionally I don't see it as producing a less capable AI. I see it as removing the barrier of feeling guilt because I have to favor one bad situation over another.


Guilt is useful. Feeling guilt, or empathy, or other emotions does not mean you are more or less logical, any more than being blind would make you more or less logical. All other things being equal, the machine that can also see is superior to the one that cannot. Same, same, guilt, empathy, etc. They are simply capabilities.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Mon Aug 31, 2015 8:08 pm UTC

gmalivuk wrote:Is guilt an inherent part of empathy? Is guilt inherently crippling to subsequent decisionmaking?

I myself have empathy, but that doesn't mean I'm incapable of carrying out everyday tasks thanks to the overwhelming guilt of being unable to prevent every tragedy ever.
Perhaps you're right. What I was thinking about was social workers. They must have empathy, but having empathy may be what causes burnout.
Tyndmyr wrote:Guilt is useful. Feeling guilt, or empathy, or other emotions does not mean you are more or less logical, any more than being blind would make you more or less logical. All other things being equal, the machine that can also see is superior to the one that cannot. Same, same, guilt, empathy, etc. They are simply capabilities.

Maybe, again I'm uncertain. We got all those things as part of our evolutionary path. My question, I suppose, is whether they serve their purpose in a society of 7 billion. Could an AI use a better paradigm?

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Tue Sep 01, 2015 8:45 am UTC

Tyndmyr wrote:
Autolykos wrote:And since management and politics is all about "building consensus" on a large scale, as you might put it, psychopathic traits don't seem to be that much of a disadvantage. On the other hand: Try naming just one living, powerful politician with high levels of empathy...


*shrug* Sniping at bosses and politicians is shooting fish in a barrel here. Sure, there have been plenty of those without empathy.

But there have also been some with it. A whole bunch of millionaires have signed a pledge to give away half their fortune to charity before they die, and are actively working on that, for instance. That doesn't SOUND like a lack of empathy.
Okay, it looks like a cheap shot at first. But consider that they are still doing an adequate job most of the time (evidence: most countries are places you can happily live in, and most corporations are profitable). Which again shows empathy to be non-essential, and provides no evidence that it is even helpful.
Tyndmyr wrote:
morriswalters wrote:I'm no longer certain. Emotionally I don't see it as producing a less capable AI. I see it as removing the barrier of feeling guilt because I have to favor one bad situation over another.


Guilt is useful. Feeling guilt, or empathy, or other emotions does not mean you are more or less logical, any more than being blind would make you more or less logical. All other things being equal, the machine that can also see is superior to the one that cannot. Same, same, guilt, empathy, etc. They are simply capabilities.
This is actually a very good point. Empathy in a machine does not have to work the same way as in humans, by making me mirror the other guy's emotions, and thus biasing my decisions. If the machine can just make a model of the others' internal state and file that under "information" without feeling compelled to act on it, that would be an advantage, I guess. Not sure how the tradeoff looks between new information that can only be gained this way and required complexity, memory and CPU cycles, but that "empathy as a sense" concept may be worth investigating.
(EDIT: To get back to topic: Observations like this lead me to believe that AI is not doomed to inherit our flaws together with our skills. Having the rational part uncontested in control, and not in constant revolt against the rule of an intuitive, hardwired System 1 is quite a significant advantage that AI can easily have over us.)
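(As a minimal sketch of what that "empathy as a sense" concept could look like: the machine infers the other agent's state and uses it strictly as prediction input, never as a term in its own objective. Every name and number below is my own invented illustration.)

```python
# Toy sketch of "empathy as a sense": the inferred state of the other
# agent informs the *prediction* of their behavior; the objective
# itself is untouched. Invented names and numbers throughout.

def infer_other_state(observations):
    """Crude stand-in for a model of the other agent's internal state."""
    return {"distressed": observations.get("crying", False)}

def choose(other_state):
    """Pick an action by expected payoff given the predicted response."""
    p_cooperate = 0.2 if other_state["distressed"] else 0.9
    payoff = {
        "negotiate": 10 * p_cooperate,  # pays off only with cooperation
        "wait": 4,                      # fixed fallback payoff
    }
    return max(payoff, key=payoff.get)

print(choose(infer_other_state({"crying": True})))   # wait
print(choose(infer_other_state({"crying": False})))  # negotiate
```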

I still maintain that humans can't do this, at least not without impractical amounts of training (like meditating in a cave for half a century). We're running on hostile hardware, and anything that induces biases is likely to screw up our decision making more than the potential new information would improve it.

(On a side note, I do see guilt as purely dysfunctional. There is no new information to be gained from feeling guilty, and updating on mistakes can be done more quickly and productively without.)

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Tue Sep 01, 2015 3:23 pm UTC

Autolykos wrote:This is actually a very good point. Empathy in a machine does not have to work the same way as in humans, by making me mirror the other guy's emotions, and thus biasing my decisions. If the machine can just make a model of the others' internal state and file that under "information" without feeling compelled to act on it, that would be an advantage, I guess.


That's how it works in humans, too. Just because I empathize with someone doesn't mean I have a compulsion.

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 7368
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: Won't advanced AI inherit humanity's social flaws?

Postby The Great Hippo » Tue Sep 01, 2015 8:59 pm UTC

Tyndmyr wrote:
Autolykos wrote:This is actually a very good point. Empathy in a machine does not have to work the same way as in humans, by making me mirror the other guy's emotions, and thus biasing my decisions. If the machine can just make a model of the others' internal state and file that under "information" without feeling compelled to act on it, that would be an advantage, I guess.


That's how it works in humans, too. Just because I empathize with someone doesn't mean I have a compulsion.
Right, this was something that was confusing me a lot about the way empathy was being discussed in this thread -- because empathy is different than sympathy; I model people's perspectives all the time, and try to see things from their viewpoint -- but this doesn't make me necessarily care about them. I might be more prone to caring about people I empathize with, but empathizing with you doesn't necessarily mean I give a crap about you. It just means I understand where you're coming from.
Autolykos wrote:(On a side note, I do see guilt as purely dysfunctional. There is no new information to be gained from feeling guilty, and updating on mistakes can be done more quickly and productively without.)
Guilt is dysfunctional if you're really good at remembering and fixing your errors. Otherwise, it's basically an error-reminder system ("Look, you fucked this up once, don't fuck this up the same way again, okay?", or "Look, you fucked this up, don't forget to go back and fix it"). AI could potentially feel 'guilt' in the sense of identifying past solutions it tried that worked on paper -- but didn't work in reality -- and filing those solutions (and solutions that look similar) under 'bad ideas'.

(Or even prioritizing problems based on whether or not those problems are the AI's 'fault'; IE, they occurred because of solutions the AI attempted to enact. In a way, this is the AI 'cleaning up' after itself, and attempting to reduce its 'footprint' in a given system -- the AI feels 'guilt' over the damage its solutions cause, so it works to repair that damage before it declares a problem sufficiently 'solved')
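(Here is that error-reminder idea as a toy sketch, with invented names and a deliberately crude similarity test; a real system would need a far better notion of "similar solution".)

```python
# Toy sketch of "guilt" as an error memory: solutions that failed in
# reality are filed away, and candidates resembling them are penalized
# when ranking new options. Everything here is invented for illustration.

failed = set()  # solutions that worked on paper but failed in practice

def record_failure(solution):
    failed.add(solution)

def looks_like_failure(candidate):
    """Crude similarity test: shares a leading word with a past failure."""
    return any(candidate.split("-")[0] == f.split("-")[0] for f in failed)

def rank(candidates, predicted_value, penalty=5.0):
    """Order candidates by predicted value, discounting 'guilty' ones."""
    return sorted(candidates,
                  key=lambda c: predicted_value[c]
                  - (penalty if looks_like_failure(c) else 0.0),
                  reverse=True)

record_failure("dam-project-1")
values = {"dam-project-2": 8.0, "wetland-restore": 6.0}
print(rank(list(values), values))  # wetland-restore outranks dam-project-2
```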

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Tue Sep 01, 2015 9:39 pm UTC

I think normal use of 'empathy' implies a certain degree of sympathy, but definitely far short of a compulsion, and definitely not logical agreement. I can empathize with someone while disagreeing with their actions, of course. Do agree with regard to guilt. It's useful within certain contexts, but it can definitely be a problem if it plays too large a role.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Wed Sep 02, 2015 12:21 am UTC

From the Wikipedia.
The empathy-altruism relationship also has broad and practical implications. Given the power of empathic feelings to evoke altruistic motivation, people may unfortunately learn to suppress or avoid these feelings. This numbing or even loss of the capacity to feel empathy for clients may be a factor in the experience of burnout among case workers in helping professions. Awareness of this impending futile effort—such as nurses caring for terminal patients or pedestrians walking by the homeless—may make individuals try to avoid feelings of empathy in order to avoid the resulting altruistic motivation. Promoting an understanding about the mechanisms by which altruistic behavior is driven, whether it’s from minimizing sadness or the arousal of mirror neurons, allows people to better cognitively control their actions. However, empathy-induced altruism may not always produce prosocial effects. It could lead one to increase the welfare of those for whom empathy is felt at the expense of other potential prosocial goals, thus inducing a type of bias. Researchers suggest that individuals are willing to act against the greater collective good or to violate their own moral principles of fairness and justice if doing so will benefit a person for whom empathy is felt.[28]

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Wed Sep 02, 2015 9:49 am UTC

Tyndmyr wrote:That's how it works in humans, too. Just because I empathize with someone doesn't mean I have a compulsion.
A cognitive bias is not the same thing as a compulsion. You won't feel like you have to do something that looks illogical - instead, what seems right to you will systematically be off from the optimal course in a predictable way. And without some abstract way to reason about ethics and compare it to your intuitions, you're bound to have a damn hard time even discovering it. The Wikipedia quote morriswalters brought up is basically making the same point with more ten-dollar words.

@The Great Hippo: We seem to have a different view of where to draw the lines between the words "empathy", "sympathy" and "compassion". I suspect that once we stop using these words and describe what we mean, we might agree about a lot more.
On guilt, YMMV. I never found a use for it - it tends to keep me from fixing mistakes way more often than it keeps me from making mistakes. But some people tend to make the same mistakes over and over and over, so they might need a more hardwired reminder (still, I suspect they also feel guilt, and it obviously doesn't do them much good either). My ad-hoc explanation is that it is an adaptive trait, not a fitness increasing one. By feeling down when you screwed up, you can somewhat credibly convince others that you have no intention of repeating the mistake. Whether you actually will repeat it in the future is strictly secondary compared to whether you will be kicked out of the tribe right now.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26836
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby gmalivuk » Wed Sep 02, 2015 12:32 pm UTC

Why "without some abstract way to reason about ethics"? No one is talking about making a robot that empathizes but is incapable of reason.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Wed Sep 02, 2015 2:56 pm UTC

Now you're just quote-mining. That point was talking about the way it works in humans, not robots. For the robot part I agree with Tyndmyr's point that "empathy" in the sense of "having a model of the other guy's internal state" (let's call it Empathy-1) may be useful. For the human part, I'm inclined to agree it would be useful, if it was achievable. But as far as I can tell, we only have "mirroring the internal state of the other guy, affecting our own internal state" (let's call it Empathy-2a for a slight bias and Empathy-2b for a compulsion). Which is not the same as becoming incapable of reason, as you put it, but is making reason less effective and more erratic.
If we build a machine, there is no reason to assume Empathy-1 would be harder to implement than 2a or 2b (probably even easier). Version 1 would definitely be the most desirable*, and I'd be hard-pressed to name a disadvantage (except for very specific scenarios; from the perspective of game theory, it can sometimes be an advantage not to have some information, or even to be irrational).
Depending on your values, your goals and your environment, 2a (or 2b) may be a net benefit or a net loss. But don't pretend it cannot do harm under any circumstances, or that everyone would profit from having it.

*And at this point, I have answered the part of this question that is actually relevant for this thread: No, I do not believe a machine would inherit the human "quirks" related to empathy when given a skill/sense that serves the same purpose.
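(For concreteness, a toy contrast between Empathy-1 and Empathy-2a, with invented options and numbers: the same inferred state is either kept on file as data, or mirrored into the agent's own evaluation, where it skews the estimate in a predictable way.)

```python
# Toy contrast: "Empathy-1" files the other's state as data; "Empathy-2a"
# mirrors it into the agent's own mood, biasing the evaluation.
# All options and numbers are invented for illustration.

OPTIONS = ["fund_program", "cut_program"]

def utility(option, mirrored_mood=0.0):
    """Estimated value of an option; a mirrored mood skews the estimate."""
    base = {"fund_program": 7.0, "cut_program": 7.5}
    bias = {"fund_program": -2.0 * mirrored_mood,  # vivid option inflated
            "cut_program": 2.0 * mirrored_mood}    # abstract option deflated
    return base[option] + bias[option]

other_distress = -1.0  # inferred internal state of the other agent

# Empathy-1: the inferred state is on file, but the evaluation is clean.
e1 = max(OPTIONS, key=lambda o: utility(o))

# Empathy-2a: the inferred state leaks into the agent's own mood.
e2a = max(OPTIONS, key=lambda o: utility(o, mirrored_mood=other_distress))

print(e1, e2a)  # cut_program fund_program: the mirrored state flips the choice
```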

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Wed Sep 02, 2015 3:36 pm UTC

Autolykos wrote: My ad-hoc explanation is that it is an adaptive trait, not a fitness increasing one.


Those...are not exclusive terms. Adaptive traits generally are fitness increasing ones.

Autolykos wrote:
Tyndmyr wrote:That's how it works in humans, too. Just because I empathize with someone doesn't mean I have a compulsion.
A cognitive bias is not the same thing as a compulsion. You won't feel like you have to do something that looks illogical - instead, what seems right to you will systematically be off from the optimal course in a predictable way.


The word "compulsion" was yours, not mine.

As for cognitive biases...look, emotions are not inherently good or bad. They just are. They're a fast but less precise decision-making system. Logic is slower, but more detailed. Having different layers of precision in decision making is perfectly fine. This isn't even unusual now. We have LoD systems in every 3D engine.
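(In case the analogy is unfamiliar: a toy sketch of distance-based LoD selection, with invented thresholds and budgets. The engine spends precision only where it matters, which is the point being made about emotion versus logic.)

```python
# Toy level-of-detail (LoD) selection, as in a 3D engine: distant
# objects get a cheaper, rougher mesh. Thresholds and budgets invented.

def select_lod(distance):
    """Return (mesh_level, triangle_budget) for an object at `distance`."""
    if distance < 10.0:
        return ("high", 50_000)   # slow and precise, like deliberate logic
    elif distance < 50.0:
        return ("medium", 5_000)
    return ("low", 500)           # fast and rough, like a gut reaction

for d in (5.0, 25.0, 200.0):
    print(d, select_lod(d))
```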

Autolykos wrote:That point was talking about the way it works in humans, not robots. For the robot part I agree with Tyndmyr's point that "empathy" in the sense of "having a model of the other guy's internal state" (let's call it Empathy-1) may be useful. For the human part, I'm inclined to agree it would be useful, if it was achievable. But as far as I can tell, we only have "mirroring the internal state of the other guy, affecting our own internal state" (let's call it Empathy-2a for a slight bias and Empathy-2b for a compulsion). Which is not the same as becoming incapable of reason, as you put it, but is making reason less effective and more erratic.


What would be the point of it if it didn't affect actions?

Less effective and more erratic is all YOUR assumption, because you keep assuming emotion is negative somehow.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Wed Sep 02, 2015 4:49 pm UTC

Emotions are good until they aren't. You can list any number of examples where "bad" is an understatement. The problem is that they are unreliable. They work great when you're in the woods trying not to be prey, less so when the herd response is triggered at the wrong moment and people get crushed by a crowd. Or when your hormones take over and you take off cross-country to do something evil to your boyfriend's other girlfriend. The theory-of-mind response that empathy partly represents makes people treat their dogs like people and name their cars. Not the type of confusion I want in an AI. In a fraught situation where there are ethical or moral issues, I want it to spit out numbers and ideas, not get involved with doe-eyed starving children or protecting its progeny at the expense of mine.

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Thu Sep 03, 2015 1:05 pm UTC

Tyndmyr wrote:Those...are not exclusive terms. Adaptive traits generally are fitness increasing ones.
You're right. Insert a "but" if it makes you feel better.
Tyndmyr wrote:The word "compulsion" was yours, not mine.

As for cognative biases...look, emotions are not inherently good or bad. They just are. They're a fast, but less precise decision making system. Logic is slower, but more detailed. Having different layers of precision in decision making is perfectly fine. This isn't even unusual now. We have LoD systems in every 3d engine.
I can't remember where I used it except when referring to someone else's allegations. It may have entered the discussion because I misunderstood someone - in that case, I'm sorry for the confusion that may have caused. And while I agree that emotions are neither good nor bad, I see them as part of your "internal state". Intuition is what actually makes the System 1 decisions, based on your current emotional state and trained habits. And yes, for most situations we had a million years to adapt to, System 1 does a decent job making quick, effortless decisions that keep us from dying before we can pass on our genes. For anything else (of which there's quite a lot in the modern world), it's a crapshoot.

As to the 3D Engine analogy, a system that will quickly output rough estimates certainly has its place. But if your engine will systematically draw faraway objects too bright and a few degrees off their actual position, you'd call it broken, right? And if you had no way of replacing or fixing it (say, because you're a user, not a programmer), you might want to limit its use as far as possible (e.g. by cranking the detail all the way up, even if it hurts the framerate). I'm not claiming LoD systems are generally a bad idea, I'm claiming that ours is broken. And there is no reason to assume that any other one we build will be broken in the same way.
Tyndmyr wrote:What would be the point of it if it didn't affect actions?
Of course information needs to affect your decisions, and ultimately your actions, to be useful. But it should do so in a transparent, well-calibrated and controllable way. Otherwise it may well be worse than useless. We should settle for no less in a machine, but our wetware doesn't quite cut it in this regard (at least "out of the box" - you might get there with extensive training and a lot of effort).

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Thu Sep 03, 2015 3:32 pm UTC

Autolykos wrote:As to the 3D Engine analogy, a system that will quickly output rough estimates certainly has its place. But if your engine will systematically draw faraway objects too bright and a few degrees off their actual position, you'd call it broken, right? And if you had no way of replacing or fixing it (say, because you're a user, not a programmer), you might want to limit its use as far as possible (e.g. by cranking the detail all the way up, even if it hurts the framerate). I'm not claiming LoD systems are generally a bad idea, I'm claiming that ours is broken. And there is no reason to assume that any other one we build will be broken in the same way.


Probably not. It depends on the degree of inaccuracy. It's actually really routine to have small visible adjustments as levels of detail are shifted. This is usually not a big deal. Now, sure, there are accuracy-for-performance tradeoffs that are undesirable, but the principle is fine, regardless of whether those inaccuracies are systematic or random.

Our system may not be optimal, but it's good enough for us to have evolved to intelligence and being in charge of the planet, so....it's kind of the best thing out there. AIs may well be flawed in slightly different ways. So?

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Tue Sep 08, 2015 10:08 am UTC

Depends on what you use it for. Those problems might be perfectly fine in a flight sim, but look like total arse in a strategy game. Which is exactly the situation humans find themselves in: We don't live in the environment we evolved in, and some of our quirks that were not that harmful or even helpful back then can become a problem. Do not trust your intuitions unless you have good reason to assume they're accurate.
For example, taking a hit to your reputation isn't that big of a deal any more. You can always switch your circle of friends, or even move to a different city. In the stone age, it was all but a death sentence. This "bug" reliably leads to people being overly obsessed with honour, sometimes to the point of getting violent over it and *actually* risking their life.

Animals that find themselves in a vastly different environment have to adapt pretty darn quick or go extinct. We can instead use reason to survive, but this removes a lot of the adaptation pressure. So we will have to continue using reason indefinitely. From the perspective of evolution, reason is the ultimate crutch: it solves most of the problems of not being adapted, for the small price of never becoming adapted.

