Won't advanced AI inherit humanity's social flaws?

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Fri Aug 21, 2015 4:16 pm UTC

Trebla wrote:
Tyndmyr wrote:
Trebla wrote: Replacing humanity with robots is not a goal anyone (as far as I know) is pursuing...


*shrug* I don't know why not.


That's probably our eventual end-state... but I don't think people are consciously pursuing in a mad-scientist kind of way.


It's on my list of things to do, it's just not very high, on account of AI being a bear of a problem. But, if big progress starts getting made in that field, I don't see any particular reason why there shouldn't be robots running things.

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Fri Aug 21, 2015 5:18 pm UTC

Tyndmyr wrote:But, if big progress starts getting made in that field, I don't see any particular reason why there shouldn't be robots running things.
It's the same reasons we wouldn't want to be ruled by a "benevolent dictator". At least for me.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Fri Aug 21, 2015 5:54 pm UTC

ucim wrote:It's the same reasons we wouldn't want to be ruled by a "benevolent dictator". At least for me.


I'm not sure why "robots running things" is being equated to "unquestioned authority"... I wouldn't want a human running things like a benevolent dictator either... but I'm fine with humans running things in general (because it's currently the best option, I don't want no stinkin' dolphins in charge). It doesn't take much more of a stretch to step over to the self-driving cars thread from here... if robots can do our jobs better (safer) than us, then that's a "good thing."

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Fri Aug 21, 2015 6:02 pm UTC

Trebla wrote:I'm not sure why "robots running things" is being equated to "unquestioned authority"...
Because they would ostensibly be put in charge because they make better decisions - "replacing us". If we humans want a different decision (and therefore, to oust the computer), the computer should overrule us for our own good, because ours is (by definition) the worse decision.

Self-driving cars provide a small microcosm - the idea of removing the manual override. This is "unquestioned authority" in the context of driving.

There are various futurist threads that advocate for computers running the world (using an ideal socialist model, natch) because it will be better for us and remove the fallible humans from the equation.

Fallible or not, I don't want to be removed from the equation, and don't understand those that do.

Jose

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Fri Aug 21, 2015 6:21 pm UTC

ucim wrote:Fallible or not, I don't want to be removed from the equation, and don't understand those that do.


I would dispute this statement... this is an argument of scope. There are all sorts of "menial" tasks that you want to be removed from... keeping your food cold in the fridge, injecting fuel into your car's engine. I would be extremely surprised if you didn't want to be entirely removed from those processes.

Maybe you want to be involved in determining which gas to use, or which fridge to get... just as humanity would want to be involved in which leaders we choose (whether they're human or machine).

And again... putting a machine "in charge of society" does not necessarily equate to "benevolent dictator." It could be as simple as electing a robot as president with all the checks and balances (LOL) that entails. There's little reason to jump to the conclusion that an A.I. leader would be able to overrule us any more than current humans can (and again: LOL).

Quercus
Posts: 1810
Joined: Thu Sep 19, 2013 12:22 pm UTC
Location: London, UK

Re: Won't advanced AI inherit humanity's social flaws?

Postby Quercus » Fri Aug 21, 2015 7:16 pm UTC

ucim wrote:Fallible or not, I don't want to be removed from the equation, and don't understand those that do.

I'm not sure I would mind humanity being replaced, even to the point of extinction, by a sufficiently human-like AI. It would have to be something that could experience all of, or more than, the richness of human experience - love, joy, beauty, all that jazz. In that case, I don't really see any substantive difference between replacement by AI and replacement by the next generation of humans, which happens already. I would consider it evolution (in the broad sense, not the biological sense), rather than revolution.

I would have more difficulty with being replaced by an utterly "alien" AI whose experiences are not comparable to human experiences.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Fri Aug 21, 2015 7:23 pm UTC

ucim wrote:
Tyndmyr wrote:But, if big progress starts getting made in that field, I don't see any particular reason why there shouldn't be robots running things.
It's the same reasons we wouldn't want to be ruled by a "benevolent dictator". At least for me.

Jose


A robot isn't inherently a dictator any more than a person is.

And there are always rulers, more or less. Democracy or not, you or I do not wield the power that some others do. If it's possible to build better people or better robots (a fairly large sci-fi premise, granted), then yeah, why not use that?

ucim wrote:
Trebla wrote:I'm not sure why "robots running things" is being equated to "unquestioned authority"...
Because they would be ostensibly put in charge because they make better decisions - "replacing us". If us humans want a different decision (and therefore, to oust the computer), the computer should overrule us for our own good because it's (by definition) a worse decision.


*shrug* People overrule other folks' decisions now. And it's not as if human authority structures would simply allow me to oust them. Hah.

In any case, I acknowledge that in many respects, other humans are smarter than me at given fields, and they should generally make the decisions regarding them, not I. That's kind of implicit in specialization. Robots, humans, whatever.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Fri Aug 21, 2015 11:12 pm UTC

Quercus wrote:I would have more difficulty with being replaced by an utterly "alien" AI whose experiences are not comparable to human experiences.
I wouldn't lose any sleep over either possibility. We are more likely to do ourselves in than to have the overseers do it to us. But I suspect that if it happens, it will be the type that you aren't comfortable with. I say that because we were formed by our environment. AI, if it comes, will come from another space entirely. But good, bad or indifferent, if it is possible, some monkey human will do it, all the while pooh-poohing the danger.

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Mon Aug 24, 2015 1:35 am UTC

Trebla wrote:
ucim wrote:Fallible or not, I don't want to be removed from the equation, and don't understand those that do.
I would dispute this statement... this is an argument of scope. There are all sorts of "menial" tasks that you want to be removed from... keeping your food cold in the fridge, injecting fuel into your car's engine.
...but these are not decisions on how I want my life to be run.

On a smaller scale, I still want to be able to choose the route I take to go from here to there. On a larger scale, I want governing decisions to be made by those people who have the most at stake, and that will not be the AI.

Tyndmyr wrote:A robut isn't inherently a dictator any more than a person is.

And there are always rulers...
A robot isn't a dictator in and of itself. However, if humanity decides that computers are smarter than people (it's already happening), then it's not that long a step towards surrendering larger and larger decisions to the computer. Right now it's elevators and autopilots, but it's silly to think it will stay that way.

You see, an AI will have no empathy for us if it's just a machine. In order for it to have empathy for humanity it will need to love us, and love is profoundly illogical. We don't understand it, certainly not well enough to program it in (or to know that somebody else has programmed it in correctly). I doubt it can be "programmed in". Asimov's laws of robotics were largely a farce, created so that he could write stories highlighting their ridiculousness. Rather, if anything "emotional" were to arise in an AI, it would do so as an unexpected result of the interactions of the various programs and agents it embodies. This will evolve (or devolve) as we select the ones that go into version 2.0, but we are not going to pick the best ones, because of the "best for whom" question. The ones doing the picking (Google? Apple? Advertising.com?) will pick what works for them, and we'll just use it.

So, in a sense, it will inherit our flaws and amplify them. But it's hard to say which ones; that could be a matter of chance - a matter of which company comes out first and which ones go bankrupt pursuing what would have been "better".

But in any case, we won't be the master. We'll be the product.

Jose

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Mon Aug 24, 2015 2:08 pm UTC

ucim wrote:You see, an AI will have no empathy for us if it's just a machine. In order for it to have empathy for humanity it will need to love us, and love is profoundly illogical.
Neither of those things are true. I can empathise with people I don't love, and the idea that love is inherently irrational is pure Hollywood screenwriting laziness.

We don't understand it, certainly not well enough to program it in (or to know that somebody else has programmed it in correctly). I doubt it can be "programmed in".
Empathy is the least of the things we don't understand that we would need to understand in order to build a hard AI. Maybe by the time we figure all of those things out, we'll have a pretty good handle on empathy as well.
He/Him/His/Alex
God damn these electric sex pants!

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Mon Aug 24, 2015 3:35 pm UTC

ucim wrote:We don't understand it, certainly not well enough to program it in (or to know that somebody else has programmed it in correctly). I doubt it can be "programmed in".


While you are absolutely correct about our lack of present understanding, it can obviously be programmed in. Humans aren't made of magic, and a Turing-complete machine, regardless of whether it runs on meat or silicon, can simulate any other. So, once you have sufficient data, you can of course simulate humans in a computer. The "can" is trivial; it's the how that's a bitch to figure out.
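The simulation claim can be illustrated in miniature: any Turing-complete system can host an interpreter for another machine. A toy sketch in Python (the machine and its rule table are invented for this example, not anything from the thread):

```python
# A tiny Turing-machine interpreter: one program (the interpreter)
# simulating another machine (here, a made-up unary successor machine).

def run_tm(rules, tape, state="start", head=0, blank="_", halt="halt"):
    """Run a one-tape Turing machine until it reaches the halt state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for a machine that appends a '1' to a unary string,
# i.e. it computes n + 1 in unary notation.
succ = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}

print(run_tm(succ, "111"))  # -> 1111
```

The interpreter is "silicon" here only incidentally; the same rule table could be executed by a person with pencil and paper, which is the point of the universality argument.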

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Mon Aug 24, 2015 9:20 pm UTC

ahammel wrote: Neither of those things are true. I can empathize with people I don't love, and the idea that love is inherently irrational is pure Hollywood screenwriting laziness.
I was using "empathy" and "love" as synonyms here; while they are (of course) not really synonymous, it was a useful shortcut. Perhaps "devotion" would work better instead of "empathy"; my point is that an AI is no longer "just a machine". Certainly when it gets to the point where questions like the OP's are relevant, it needs to be considered on a completely different level.

Tyndmyr wrote:While you are absolutely correct on our lack of present understanding, it can obviously be programmed in.
Not so fast. It has yet to be demonstrated that humans are Turing complete. I do not doubt that we will, one day, be able to create machines that react in unexpected ways, and have their own desires that are complex enough to be worthy of the shortcut word "emotion". What I think will never happen is that we will have anything like an "emotion chip" (a la Star Trek), because emotions are not like that.

And emotions are certainly not the output of the logical part of the brain, which is what I mean by being "inherently irrational". There is a (social) calculus involved in all relationships, but the kind of devotion credited to "love" is not a rational decision. There may be people who stay together as a devoted couple (or more) based on their version of utility theory, but that's not what I call love. And love can be overruled by the rational part too; one can recognize that one is falling in love with "the wrong person" and do something about it. But that's not the same thing either.

Jose

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Mon Aug 24, 2015 9:54 pm UTC

ucim wrote:And emotions are certainly not the output of the logical part of the brain, which is what I mean by being "inherently irrational".
Nonsense.

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Mon Aug 24, 2015 11:03 pm UTC

ahammel wrote:
ucim wrote:And emotions are certainly not the output of the logical part of the brain, which is what I mean by being "inherently irrational".
Nonsense.
Well, that was convincing!

Jose

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Tue Aug 25, 2015 12:49 am UTC

Sorry, that was unbecoming of SB.


What I mean is that the idea that there is a part of the brain that deals with logic, which is completely distinct from the part that deals with emotion and which is somehow more computer-like than the rest, is false. There is no reason to expect that a human-like artificial intelligence would be incapable of experiencing empathy, any more than we should expect it to be bad at understanding utilitarianism or writing poetry or recognising pictures of birds.

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Tue Aug 25, 2015 2:32 am UTC

ahammel wrote:What I mean is that the idea that there is a part of the brain that deals with logic, which is completely distinct from the part that deals with emotion and which is somehow more computer-like than the rest, is false.
Sure, if your bar is "completely distinct"; it all comes from the same wetware. However, it does seem to me (and supported by the fact that animals don't seem to do propositional calculus but do form emotional bonds) that what we call "higher functions" (the ability to do propositional calculus, for example), does occur in a different part of the brain - one that has developed much more in humans than in other creatures. In animals (and people), emotions came first. Logical reasoning came later. In computers it's the other way around, it seems.

However, we may not be looking at the right thing. I suspect that AI will not be a machine programmed to do awesome things (like Deep Blue, for example), but rather will emerge from the interactions of many computers each doing different awesome things that depend on each other. Certain heuristics will "work out better" (whatever those in charge decide "better" is), and the way they interact will not be simple. But we might be able to discern certain kinds of behaviors, and that is the beginning of a shortcut which will soon get a name - the shortcut which in humans we call "emotion".

Yes, it's ultimately a bunch of NAND gates, but to see it that way misses the point.
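The NAND remark is literally true, for what it's worth: every Boolean connective can be wired out of NAND gates alone, which is exactly why "a bunch of NAND gates" is a complete but unilluminating description. A minimal illustration (purely didactic; the function names are made up here):

```python
# NAND is functionally complete: NOT, AND, OR and XOR
# can all be expressed using NAND alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)            # NAND of a signal with itself inverts it

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))      # invert NAND to recover AND

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

def xor(a: bool, b: bool) -> bool:
    c = nand(a, b)               # the classic four-NAND XOR construction
    return nand(nand(a, c), nand(b, c))

# Exhaustive check against Python's built-in operators
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
    assert not_(a) == (not a)
```

That the check passes tells you nothing about what a circuit built from these gates is *for* - which is ucim's point about description at the wrong level.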

Jose

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Tue Aug 25, 2015 4:39 am UTC

ucim wrote:Sure, if your bar is "completely distinct"; it all comes from the same wetware. However, it does seem to me (and supported by the fact that animals don't seem to do propositional calculus but do form emotional bonds) that what we call "higher functions" (the ability to do propositional calculus, for example), does occur in a different part of the brain - one that has developed much more in humans than in other creatures. In animals (and people), emotions came first. Logical reasoning came later.
Not sure I buy that reasoning, but it's beside the point anyway, since we're talking about human-like intelligence. Maybe one "came first", but I still don't think they're as distinct as you're making them out to be, and it doesn't follow that you can get "logical" thinking without "emotional" thinking.

In computers it's the other way around, it seems.
Computers have neither. No computer program extant reasons in a human-like way at all. Sure, computers can solve some of the same problems that humans do, but they're demonstrably going about it in completely different ways.

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Tue Aug 25, 2015 1:32 pm UTC

ahammel wrote:No computer program extant reasons in a human-like way at all.
Not yet, but the OP presumes advanced AI, so I was going with it. In any case, most computer programs "reason" through logic. The tasks they are programmed for are elementary enough that the underlying code is fairly direct in its use of logic to do whatever it is they do.

However, you are right - computers have not developed a logical thought process as an abstraction, which is what humans have done.

I'm not sure how this would play out in the future; much depends on the uses to which AI would be put (and thus, the selection pressure).

Jose

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Tue Aug 25, 2015 1:54 pm UTC

ucim wrote:
ahammel wrote:No computer program extant reasons in a human-like way at all.
Not yet, but the OP presumes advanced AI, so I was going with it.
And, in going with it, you assumed that they would develop human-like logical thinking before they developed human-like emotional thinking? Even granting that those are different things (which I don't), what makes you think that it would happen in that order?

ucim
Posts: 6888
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Won't advanced AI inherit humanity's social flaws?

Postby ucim » Tue Aug 25, 2015 3:28 pm UTC

ahammel wrote:And, in going with it, you assumed that tha they would develop human-like logical thinking before they developed human-like emotional thinking? Even granting that those are different things (which I don't), what makes you think that it would happen in that order?
No, though in retrospect that wasn't all that clear. Since people were not designed but computers were, and were designed for tasks that were logically driven, computers already have "logical thinking" as a basis and don't need to develop it as an abstraction. Organisms don't have this basis, thus any logical thinking does have to come through an abstraction. For computers, I think something will emerge as an abstraction that could be akin to emotion.

Although empathetic responses may be programmed into a device right now, that's not quite the same (it's more like faking an orgasm), and not what I'm talking about.

Jose

Trebla
Posts: 387
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Trebla » Tue Aug 25, 2015 3:32 pm UTC

ucim wrote:
Trebla wrote:
ucim wrote:Fallible or not, I don't want to be removed from the equation, and don't understand those that do.
I would dispute this statement... this is an argument of scope. There are all sorts of "menial" tasks that you want to be removed from... keeping your food cold in the fridge, injecting fuel into your car's engine.

...but these are not decisions on how I want my life to be run.

On a smaller scale, I still want to be able to choose the route I take to go from here to there. On a larger scale, I want governing decisions to be made by those people who have the most at stake, and that will not be the AI.


I still think this is a questionable claim. There is a point below which you do want to be completely removed from the equation. Deciding how much fuel is injected into your engine each rotation IS PART OF the decision about the route you take "to go from here to there"... and it's a decision you want to be completely removed from. You're setting an arbitrary boundary for the point at which you say that's unacceptable. How would you answer someone who didn't understand why you wouldn't want to be involved in the aforementioned aspects of life? It's just a disagreement about degree.

I'll try another one... "how you run your life" is intimately tied to the rotation of the earth (day/night cycle). I assume that's a mechanism that you have no problem with your absolute lack of control over?

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Tue Aug 25, 2015 3:35 pm UTC

ucim wrote:
Tyndmyr wrote:While you are absolutely correct on our lack of present understanding, it can obviously be programmed in.
Not so fast. It has yet to be demonstrated that humans are Turing complete.


How could we not be? Granted, we have limited memory, so that's a handicap in an absolute respect, but this is not actually different from designed computers, so from the practical standpoint of applicability to this question, who cares? Sure, fully simulating a human will require a certain degree of storage space (and it's not trivial), but it's not impossible.

I do not doubt that we will, one day, be able to create machines that react in unexpected ways, and have their own desires that are complex enough to be worthy of the shortcut word "emotion". What I think will never happen is that we will have anything like an "emotion chip" (a la Star Trek), because emotions are not like that.


Meh. It's easier to make a chatbot behave as a person does when angry and insulting than otherwise, because angry speech is usually less complex. There's nothing magic about emotions.

And emotions are certainly not the output of the logical part of the brain, which is what I mean by being "inherently irrational". There is a (social) calculus involved in all relationships, but the kind of devotion credited to "love" is not a rational decision. There may be people who stay together as a devoted couple (or more) based on their version of utility theory, but that's not what I call love. And love can be overruled by the rational part too; one can recognize that one is falling in love with "the wrong person" and do something about it. But that's not the same thing either.

Jose


There isn't one half of the brain labeled logic, and one half labeled emotion. Any such labeling systems are ludicrously over-simplified garbage. At best, we're hooking electrodes to people's heads and observing that, hey, in most people, this area lights up more in these circumstances. Imagine trying to describe a computer by looking at disk activity on the hard drive. It's...informative, perhaps, but incredibly limited, and missing a great deal.

ucim wrote:
ahammel wrote:What I mean is that the idea that there is a part of the brain that deals with logic, which is completely distinct from the part that deals with emotion and which is somehow more computer-like than the rest, is false.
Sure, if your bar is "completely distinct"; it all comes from the same wetware. However, it does seem to me (and supported by the fact that animals don't seem to do propositional calculus but do form emotional bonds) that what we call "higher functions" (the ability to do propositional calculus, for example), does occur in a different part of the brain - one that has developed much more in humans than in other creatures. In animals (and people), emotions came first. Logical reasoning came later. In computers it's the other way around, it seems.


Clearly, since some people don't do propositional calculus, for instance, they are obviously missing this part of the brain. Which MUST be distinct, because reasons. :roll:

This is indeed nonsense. Animals can and do solve fairly difficult physics problems sometimes, and have a degree of intelligence. How much varies wildly depending on the species and even the individual, but it's really, really difficult to communicate with them on abstract topics. The idea that the order in which things are developed matters for computers is ludicrous, and the idea that developing something at a different time means it must be discrete is also clearly wrong.

There isn't even a single skill called "logic". It's a term used to describe a whole bunch of different things. Intelligence is a wild collection of things that is terribly complicated, it can't really be accurately dealt with by simply tossing things into logic and emotion.

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 26, 2015 12:12 am UTC

ucim wrote:Although empathetic responses may be programmed into a device right now, that's not quite the same (it's more like faking an orgasm), and not what I'm talking about.
It's not what I'm talking about either. I'm saying that there's no reason that a machine which could do human-like reasoning could not also experience human-like empathy.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Wed Aug 26, 2015 12:28 am UTC

There is no reason to believe that a machine could reason like us at all. Which is not to say that a machine couldn't reason in some fashion. But why should biology and electronics or something like electronics lead to similar strategies for achieving those goals?

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Wed Aug 26, 2015 2:07 am UTC

ucim wrote:Although empathetic responses may be programmed into a device right now, that's not quite the same (it's more like faking an orgasm), and not what I'm talking about.

ahammel wrote:It's not what I'm talking about either. I'm saying that there's no reason that a machine which could do human-like reasoning could not also experience human-like empathy.

morriswalters wrote:There is no reason to believe that a machine could reason like us at all. Which is not to say that a machine couldn't reason in some fashion. But why should biology and electronics or something like electronics lead to similar strategies for achieving those goals?


You all make useful points.

I agree with Ahammel: Unless you think there's something literally magical going on in the human brain, there's no reason an artificial lifeform couldn't subjectively experience exactly the range of emotions we do.

But ucim and morris also make an important point: There's no reason we'd program AI to have subjective experiences (let alone emotions) unless it actually served a purpose; Indeed, it could lead to the kind of issues outlined in the OP. What we ideally want from advanced AI is a completely subservient and selfless slave - just like the calculator sitting on your desk, or the browser sitting on your desktop. Emotions experienced consciously probably complicate more than they assist.

If it turns out that AI cannot advance beyond a certain barrier without becoming self-aware, then we will have a genuine dilemma. But personally I don't expect that to turn out to be the case.

(Right now we don't know how to tell if a being is conscious or self-aware, but I assume that we will know eventually; And if we only discover after the event that the AI we created is fully conscious, that too will create a genuine dilemma...)

ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 26, 2015 2:17 am UTC

elasto wrote:But ucim and morris also make an important point: There's no reason we'd program AI to have subjective experiences (let alone emotions) unless it actually served a purpose; Indeed, it could lead to the kind of issues outlined in the OP.
I, personally, would do it just for fun if I could. I'm sure I'm not alone in that.

If it turns out that AI cannot advance beyond a certain barrier without becoming self-aware, then we will have a genuine dilemma. But personally I don't expect that to turn out to be the case.
I think it probably will turn out to be the case that in order to get as clever as a human you probably need self-awareness and all the rest of that jazz. I don't think you need a full-blown AI to solve the problems that the OP is describing, though. (Robot factory workers already exist, for one thing.)
He/Him/His/Alex
God damn these electric sex pants!

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Wed Aug 26, 2015 10:43 am UTC

ahammel wrote:I think it probably will turn out to be the case that in order to get as clever as a human you probably need self-awareness and all the rest of that jazz.

I remain unconvinced.

What does 'as clever as a human' mean in any case? Google cars will likely drive better than any human within a decade or so, but are they self-aware to achieve that? With the proviso that no one knows how to detect self-awareness, my guess is they are not.

Watson can probably already diagnose cancers better than any human, but, again, I don't think there's any self-awareness going on in there.

Being 'as clever as a human' doesn't seem to be particularly necessary to solve real-world problems; Perhaps it's even a detriment. As you say, robot factory workers already exist and have no need for any hard/general AI.

I don't think we will create self-aware AI until we are ready to partner/fuse/upload into it.

Autolykos
Posts: 97
Joined: Wed Jun 25, 2014 8:32 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Autolykos » Wed Aug 26, 2015 12:44 pm UTC

ahammel wrote:Empathy is the least of the things we don't understand that we would need to in order to build a hard AI. Maybe by the time we figure all of those things out, we'll have a pretty good handle on empathy as well.

But why would we even want to, except perhaps for very specialized tasks?
Empathy will reliably lead to worse decisions than math on any moderately important problem. What we tend to call empathy is a jumble of human quirks and skills that are beneficial in person-to-person interaction (or winning popularity contests), but tend to be enormously detrimental when used in large scale decisions. One example is Scope Insensitivity, or the tendency to massively overvalue the fate of characters with a name and a face (one death is a tragedy, a million deaths is a statistic...). A machine would not be prone to such mistakes, unless we explicitly program it to. But why should we? If we want sentimental decisions, we can always ask a human. For getting sh*t done, I prefer a computer.
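Autolykos's scope-insensitivity point can be made concrete with a toy sketch. The contrast below uses a logarithmic curve as a stand-in for "empathic" valuation (the empirical finding is that concern saturates as numbers grow) against a linear expected-value count; all the numbers are invented purely for illustration.

```python
import math

def empathic_value(lives):
    # Perceived importance saturates quickly as the numbers grow,
    # mimicking scope insensitivity.
    return math.log10(lives + 1)

def linear_value(lives):
    # Each life counts the same, no matter how many are at stake.
    return lives

for lives in (1, 1_000, 1_000_000):
    print(lives, round(empathic_value(lives), 2), linear_value(lives))
```

Under this (made-up) curve, a million lives "feels" only about twenty times as important as one life, while the linear count scales a millionfold; that gap is the kind of mistake being attributed to empathy here.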

speising
Posts: 2363
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Won't advanced AI inherit humanity's social flaws?

Postby speising » Wed Aug 26, 2015 12:57 pm UTC

Without empathy, you get this: http://www.smbc-comics.com/?id=2569

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 26, 2015 1:59 pm UTC

elasto wrote:I remain unconvinced.
I'm probably not going to convince you: it's just my intuition. This is very much an unsolved problem in psychology.

What does 'as clever as a human' mean in any case?
I'd say it means something like being able to solve a large subset of the problems that humans can solve, including natural language fluency and teaching oneself things by example and trial and error, rather than having them programmed directly. Things like self-driving cars and cancer classifiers might be better than humans at particular tasks, but they fail the "large subset" criterion.

Being 'as clever as a human' doesn't seem to be particularly necessary to solve real-world problems; Perhaps it's even a detriment.
Natural language fluency is AI hard for all we know, and that's required for plenty of jobs.

Autolykos wrote:But why would we even want to, maybe except for very specialized tasks?
As I say: we'll do it out of curiosity if for no other reason.

Empathy will reliably lead to worse decisions than math on any moderately important problem.
What about the set of "moderately important problems" which involve building consensus and cooperation among human beings without pissing them off?
He/Him/His/Alex
God damn these electric sex pants!

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Wed Aug 26, 2015 2:36 pm UTC

ahammel wrote:I'd say it means something like being able to solve a large subset of the problems that humans can solve, including natural language fluency and teaching oneself things by example and trial and error, rather than having them programmed directly. Things like self-driving cars and cancer classifiers might be better than humans at particular tasks, but they fail the "large subset" criterion.

To be clear, when I say I remain unconvinced, it's that I remain unconvinced that we would ever need a single AI to be good at a large subset of tasks all by itself.

Would I want a single household device that can keep my milk cold, boil water for coffee, play films, and clean my floor? Is it not better to have a separate fridge, kettle, tv and cleaner?

The right tool for the right job. A tool that can do everything usually ends up not doing anything particularly well - humans would be a case in point: We can't multiply as well as a calculator, or run as fast as a car, or a million other things that a specialized tool can do.

Natural language fluency is AI hard for all we know, and that's required for plenty of jobs.
Watson has already smashed through that barrier, without any particularly novel AI techniques. Watson is currently reading millions of research papers all written in natural English to learn everything it can about cancer. Most likely it will end up in your average call centre also: When you phone up to complain about your net connection being slow, it will probably be Watson who answers.

As I say: we'll do it out of curiosity if for no other reason.

Oh, for sure. We will do it - not because we need to, but because we want to.

I am predicting three great advances for this coming century: Human genetic engineering, bio-nanotechnology, and self-aware AI whose consciousness we fuse with. Each would be a revolution by themselves. Together, well, they will change the world beyond recognition. Were we to jump forwards in time 100 years, we might barely recognise our descendants as human at all...

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Wed Aug 26, 2015 3:22 pm UTC

elasto wrote:
Natural language fluency is AI hard for all we know, and that's required for plenty of jobs.
Watson has already smashed through that barrier, without any particularly novel AI techniques. Watson is currently reading millions of research papers all written in natural English to learn everything it can about cancer. Most likely it will end up in your average call centre also: When you phone up to complain about your net connection being slow, it will probably be Watson who answers.
There's a big difference between scraping research papers for the kind of analyses that Watson is up to and the ability to pass an English language fluency test. (And by the way, if you think that cancer papers are written in "natural English", you've clearly never read one.) Watson's analysis is just generating executive summaries of the form "there's evidence that this drug can treat this tumour". It doesn't have to read and understand the whole paper, and it doesn't have to speak.

Even call centre work probably doesn't require the ability to carry out a sensible conversation on an arbitrary subject.

I am predicting three great advances for this coming century: Human genetic engineering, bio-nanotechnology, and self-aware AI whose consciousness we fuse with.
This century? You're more optimistic than I am.
He/Him/His/Alex
God damn these electric sex pants!

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Wed Aug 26, 2015 3:51 pm UTC

ahammel wrote:
elasto wrote:But ucim and morris also make an important point: There's no reason we'd program AI to have subjective experiences (let alone emotions) unless it actually served a purpose; Indeed, it could lead to the kind of issues outlined in the OP.
I, personally, would do it just for fun if I could. I'm sure I'm not alone in that.


I assure you that you're not. Of course, the "how" is always the difficult bit.

It's likely that barrier will keep decreasing, though. Sure, we're still appreciating just how big of a problem AI is, but progress is being made. Eventually it'll make it into the domain of at least an interesting research project.

elasto wrote:
ahammel wrote:I'd say it means something like being able to solve a large subset of the problems that humans can solve, including natural language fluency and teaching oneself things by example and trial and error, rather than having them programmed directly. Things like self-driving cars and cancer classifiers might be better than humans at particular tasks, but they fail the "large subset" criterion.

To be clear, when I say I remain unconvinced, it's that I remain unconvinced that we would ever need a single AI to be good at a large subset of tasks all by itself.

Would I want a single household device that can keep my milk cold, boil water for coffee, play films, and clean my floor? Is it not better to have a separate fridge, kettle, tv and cleaner?


Why on earth would I want a single device that is camera, telephone, pocket game device, etc?

User avatar
ahammel
My Little Cabbage
Posts: 2135
Joined: Mon Jan 30, 2012 12:46 am UTC
Location: Vancouver BC
Contact:

Re: Won't advanced AI inherit humanity's social flaws?

Postby ahammel » Thu Aug 27, 2015 1:50 pm UTC

elasto wrote:To be clear, when I say I remain unconvinced, it's that I remain unconvinced that we would ever need a single AI to be good at a large subset of tasks all by itself.

Would I want a single household device that can keep my milk cold, boil water for coffee, play films, and clean my floor? Is it not better to have a separate fridge, kettle, tv and cleaner?

Perhaps for a service like this?
He/Him/His/Alex
God damn these electric sex pants!

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby elasto » Fri Aug 28, 2015 3:12 am UTC

Tyndmyr wrote:Why on earth would I want a single device that is camera, telephone, pocket game device, etc?

Is that facetious? You need a single device to do all that because it has to fit in your pocket. The same requirement is not true for a fridge/tv/microwave/cleaner combo device.

ahammel wrote:Perhaps for a service like this?

There's still no need for that to be a single AI as opposed to a collection of them, each of which is specialized in its own field. And perhaps a Watson type AI (which is still as dumb as a bag of rocks, sentience-wise) at the top to divvy up the tasks.

The only use for a sentient AI, as I see it, is in a leadership or creative position that requires actual empathy - but even then empathy could probably be emulated rather than experienced - and, as ucim says, there is going to be a good deal of resistance to AI taking over leadership roles.

Besides, once AI takes over both the grunt work and the managerial/creative work, our economy is going to collapse - either into a Star Trek style socialist paradise, or into some kind of Eloi-Morlock dystopia.

My guess is that progress will be the other way around: We are going to create sentient AI because we want to, and then it is slowly going to take over more and more roles as people warm to it.

Hell, I see no reason why one day an AI couldn't run for president. If it can get the votes, why shouldn't it have the chance to lead? Not like we don't have a ton of checks and balances already in place due to how crappy and temperamental human leaders can be.

We might finally elect a politician that sticks to their pre-election pledges...

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Fri Aug 28, 2015 1:52 pm UTC

elasto wrote:
Tyndmyr wrote:Why on earth would I want a single device that is camera, telephone, pocket game device, etc?

Is that facetious? You need a single device to do all that because it has to fit in your pocket. The same requirement is not true for a fridge/tv/microwave/cleaner combo device.


It is slightly sarcastic, yes.

Pocket space is limited. So is home space. Sure, it's larger than a pocket, but when you're talking about large, chunky appliances, how much space is available becomes a factor. Where practical, people DO get combo appliances. Fridge/freezer. Stove/Microwave. People often describe hyper-specialized kitchen appliances as not worth storing.

Something like a roomba is...well, okay, fairly small. So not a great burden. But it's also pretty inflexible. The idea that someone would get an equally specialized robot for every household task is a bit much. You'd need a room full of nothing but robots...which would probably start getting in each other's way.


elasto wrote:but even then empathy could probably be emulated rather than experienced

What's the difference?

Besides, once AI takes over both the grunt work and the managerial/creative work, our economy is going to collapse - either into a Star Trek style socialist paradise, or into some kind of Eloi-Morlock dystopia.


Everyone keeps saying this. Who cares? There will still be an economy, it'll just be different. The economy isn't a human entity we owe anything to, it's just a description of our economic activity. Getting more stuff with less work is pretty much the point of most of civilization.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Fri Aug 28, 2015 5:23 pm UTC

elasto wrote:We might finally elect a politician that sticks to their pre-election pledges...
Politicians lie because people want to hear the truth they think is real. No politician could be elected if they kept their promises. There are more opinions than there are policies that might work.

The household AI is what I am holding out for, one who replaces me, at least for the tedious things. One brain to rule them all, vacuums and fridges, household manager and receptionist. One who can find the lowest prices, not be controlled by the Googlebusybodies or the Microborg, Steve Jobs' ghost in the shell, or the great Bezos. And when she answers my phone, I want her to act just like me. I make people crazy.
Tyndmyr wrote:Everyone keeps saying this. Who cares? There will still be an economy, it'll just be different.
Yes, think of Rio's slums, writ large.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Fri Aug 28, 2015 7:14 pm UTC

morriswalters wrote:
Tyndmyr wrote:Everyone keeps saying this. Who cares? There will still be an economy, it'll just be different.
Yes, think of Rio's slums, writ large.


Why?

Tech hasn't led us there yet. Well, not inexorably, anyway. There's a goodly diversity in economies even among technically advanced nations, but technical accomplishment doesn't usually result in pervasive slums.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby morriswalters » Fri Aug 28, 2015 8:48 pm UTC

It may not. But if the methods of production end up in the control of a few, and the factories are smart enough to do what factories do with little or no labor, then you are going to put a lot of people out of work. And I would think permanently. And you would end up with a shadow economy such as what exists in the slums of Rio, and everything that goes along with it. And most so-called tech work is grunt work. Routine. And there isn't any reason that it couldn't or wouldn't be automated as well. The wave of intelligent machines could move across all sectors at once. It isn't like buggy whips, and I'm not sure why we seem to believe it would work the same way as the transition into the industrial age did. The things that become rare in a society that doesn't value labor as it has in the past are things like materials and energy. Intelligence can make them cheaper to acquire, but it can't change how much of them there is.

Tyndmyr
Posts: 11443
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Won't advanced AI inherit humanity's social flaws?

Postby Tyndmyr » Fri Aug 28, 2015 9:38 pm UTC

Most of ALL work throughout history is routine.

So we automate the routines. Then, we develop new ones. That's what we've always done.

It is possible that, at some point, we figure out how to do everything that it is possible to do. An end to invention. Leaving off how ridiculously far off that is (even AI alone is wildly difficult), why should we worry about that now? How can we, so very far from such a place, accurately predict those times? Why should we assume that all changes brought about by this must be for the worst, in the absence of actual knowledge? Isn't this simply blind fearmongering of the sort that gets trotted out whenever a new technology is considered?

The ol' "your scientists were so busy wondering if they could, that they never stopped to ask if they should" idea needs a bullet in the head. It is ridiculous to think that those who know least about a technology are best equipped to judge it.

