Many points. Sorry, this will be a long one.
infernovia wrote:As if their pleasures were "grand." And of course it is filled with the contempt that he holds for the last man. This evaluation differentiates him from this type of man, it shows his taste. And it is excellently written, but only for those who see this type of future as stifling, that is the refusal to join the religion of happiness.
The point I want to get across is that his depiction of the Last Men is not a proper representation of all the possible ways a life of hedonic maximization could be lived. He depicts them as inevitably lazy, enjoying no challenges, and then he adds purely propagandistic details like the blinking. He essentially portrays a shallow, empty life. But this is a deliberate misrepresentation of the value of happiness. I have spent brilliant subjective time in virtual story-telling worlds, thrilled with excitement, while sitting alone in my room with my computer, causing no suffering to anyone. From an outside perspective, I might have been a shallow "Last Man" who blinks a lot. From an inside perspective, it was pure brilliance. Now take this principle to the next level and amplify it artificially, while systematically ending all non-consensual suffering.
That man has changed from being cruel to others to cruel to himself (not by everyone, there are still those who enjoy warfare), this is the value that Nietzsche realizes is within the birth of this type of morality (the soul). The soulful man is the initiator and the target, he realizes that this is another way to gain power. This is the realization of the Brahmins, who forced themselves away from society to nurture their spirit of cruelty against themselves. That is, the sacrifice of not being satiated.
But infernovia, there is no soul! You are in danger of muddling real-world concepts with vague and ambiguous spiritual concepts that sound a certain way and then divert your reasoning from what really exists. What you're basically describing with relation to the Brahmins is not non-consensual severe suffering, it is the thrill of an ascetic challenge! For me, this thrill falls into the same category as happiness - maybe a semantic misunderstanding? Should one now write a propaganda piece that says, "'We have discovered ascetic happiness', say the Brahmins, and they blink"? Such self-inflicted challenges have a legitimate place in my vision of the future, as long as they are voluntary. Chances are, people will find that there are artificially enhanced short-cuts to the thrill that they really value in this.
In fact, acceptance is the only way out of the misery of thinking of it as a misfortune (as something unfair, as something that should not have happened).
No, it is not. To rationalize acceptance of something that you can't control in the first place (such as non-consensual, severe suffering) is nothing but a cheap and very ineffective placebo. What we need instead is the creation of a system in which such misfortune is systematically prevented in terms of real-world causality.
. If one steps out of the bias of false identification with the winning side, such power dynamics are zero-sum games - or net-negative games.
Quite hilarious how zero goes to negative.
0 = -1? Hardly.
That was rather cheap, since you could easily have understood why I wrote it: Depending on the balance of sadistic thrill and suffering, some games may be zero-sum (if the harm is not severe), while in other cases, the suffering may be far more severe than the thrill is worth (net-negative). It is also somewhat subjective whether you think some thrills are worth some types of suffering, and which ones, and under which circumstances. Everyone has a threshold where non-consensual agony becomes a black hole of negativity. This is true for you too, and it has to be taken seriously.
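To make the zero-sum vs. net-negative distinction concrete, here is a toy model of my own (the numbers are purely illustrative, not taken from anything above): score the interaction as the aggressor's thrill minus the victim's suffering, summed from an impartial perspective.

```python
def net_payoff(thrill: float, suffering: float) -> float:
    """Impartial sum of the game: aggressor gains `thrill`,
    victim loses `suffering` (both non-negative magnitudes)."""
    return thrill - suffering

# If the harm roughly matches the thrill, the game is zero-sum:
print(net_payoff(5.0, 5.0))   # 0.0

# If the suffering far outweighs the thrill, it is net-negative:
print(net_payoff(5.0, 50.0))  # -45.0
```

The point of the toy numbers is only that the sign of the total depends on the balance of the two magnitudes - which is exactly why the same kind of "game" can be zero-sum in mild cases and deeply net-negative in severe ones.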
Ah, how delectable, a super AI system. Not a human of course, that would not be too troublesome.
No, and this distinction is really important: it would be irresponsible to give such power to a darwinian primate. I would not trust myself if I had such power. Humans are neither intelligent enough nor morally trustworthy enough to be endowed with it. We are hard-wired with darwinian primal drives, even where they are subconscious. This is why, throughout all of history, tyranny has ended up causing much more harm than good, no matter how much utopian ideology was backing it up. And this is exactly why I'm deeply wary of alpha-male idealism. It is a symptom of the darwinian paradigm, not a path into any worthwhile future.
Why the heck would you want to replace all natural ecosystem? Are you afraid that something will come out of there and wreck your fantasy? Are you saying that primates, gorillas, rats, cats do not suffer? Or are you talking about a quick annihilation that causes no suffering?
No, what you mean is that those who do not want to have this kind of lifestyle, they will be the ones that the AI will annihilate. It is very obvious that rats and cats find pleasure, and are in fact much easier to please than humans. This is the first mistake you have done, imagined intelligence as someone that falls into this drugging.
Intelligence is a requirement for informed consent. In natural ecosystems, there may exist some pleasure, but there also exists severe, non-consensual suffering (starvation, predation, sickness, parasites, hardship from temperature and climate, etc.). None of these animals has ever given informed consent to its own existence, let alone to its severe suffering when it happens. And it happens all the time. This has been true for millions of years, and we assume it's ok because it's natural (naturalistic fallacy), or because there is no other way. I think that in the future, there is another way. Maybe the system could contain animals if it were guaranteed that no involuntary suffering would be inflicted on them. But then again, why keep animals in a completely artificial system when you could instead keep intelligent humans, who are able to consent and to make choices about their modes of experience?
I have a quick analysis for you, this AI will not only murder all the non intelligent life form, but also the intelligent ones. Why? Because if one's goal is to cause no harm or suffering, then it is better for the subject to not exist in the first place!
This would conflict with its supergoal of hedonic maximization. If no sentient life exists, there can be no positive thrills and peak experiences anymore either. (We had a discussion a while ago about the ethics of negative utilitarianism vs. average-maximizing utilitarianism, in relation to metaphysical interpretations of the nature of individuality, the nature of time, and their consequences for decision-making -> this question is more complicated than you think, and counter-intuitively so.)
Who wants to be intelligent and to think anymore? We will have the AI do it for us, let us be mentally handicapped to rely on our AI!
Who wants to be challenged anymore? Let us annihilate those that disagree into non-existence!
Who wants to act anymore? Let us force all of our actions to come from the supreme master!
Who wants violence anymore? Let us crush all of our instincts that say so!
The top-down power approach is not implemented because paternalism is the ideal; it is implemented because it is necessary for the system's stability, and the system is necessary for the ideal of preventing severe, non-consensual suffering. I already pointed it out: violence and sadism and cruelty are fun - but not for the victim. This isn't worth it, and preventing it is a valuable ethical supergoal, more so than the whimsical freedoms of the sadists. We already acknowledge this in our current legal systems, and my argument is that in the future, the thrill of real violence and sadism can be replaced with other types of thrill, without losing anything valuable. For all I care, people could spend all their time raping, torturing and slaughtering fictional characters in virtual worlds - no real sentient entity would suffer from it. And as for the challenges? There are all kinds of challenges that would still be possible. The only taboos would be 1) threats to the system's stability, and 2) violations of the real-world well-being and volition of other sentient beings.
After that is done, the AI is free to control all replication to shape the evolution of sentient life systematically, breaking the Darwinian paradigm, and ending all game-theoretic conflicts between individuals and tribal groups once and for all.
Who wants to live anymore? Let us not exist anymore!
With all due respect, this is a misrepresentation of what I wrote in that paragraph. Sentient life would still exist, thrills and challenges could still exist, but the conflicts that have shaped the natural world and human history would be bound by the system. It's easy to lament the lost freedoms of the warlords while harboring zero identification with the victims of warfare. I presume you've never been one yourself. This may be a crucial piece of information missing from the architecture of your ideals.
It then uses its power base and technology to create an interstellar - or even intergalactic - non-mutating replication and distribution system that efficiently transforms all locally available cosmic resources into hedonic value - with a well-defined state-space of deeply significant, colorful, and superhumanly pleasurable experience modes of unprecedented quality and quantity,
I am deeply confused why you think this would be the case. Hedonic value? All you are doing is instilling some chemical into some type of process which will illict some kind of response. Why an AI would do this, I have no clue. Why not create an AI to CRUSH the men who live into death by pleasure, and then, never make anything anymore?
Because the AI is not human. It is not a darwinian primate. It has no sadistic lust for cruelty. It does not want power just for power's sake! Power is a means to an end for such a system, not an end in itself. If it is programmed with a utilitarian supergoal, the end is hedonic maximization, perhaps with respect for consent-based personal freedoms.
As for hedonic value vs. "instilling some chemical into some kind of process to elicit some kind of response" - you are aware that this is a proper description of everything that has ever happened in your brain? Everything you have ever experienced? You can't degrade the value of hedonic experience by expressing it as a material process. The thrill of your Brahmin ascetics, the poetic elegance of Nietzsche's expressions - all of those things come down to nothing but "instilling some chemical into some kind of process to elicit some kind of response" in your brain -> neurotransmitters into your synapses, input and output - this is what we are. This is the core of our nature. Shaping it with sophisticated means is a perfectly fine thing to do.
Religion of happiness? Yeah, maybe, but what else is there, really?
What else is there is a desire of violence, cruelty, and a refusal of the easy solutions, a play of power. That you can be drugged into this is of no issue.
Again: when you're finally ready to step back for a moment and step out of identification-with-the-powerful-winner into total identification with all the sentient beings involved - as soon as you have the phenomenological knowledge of how severely bad non-consensual suffering can feel - you will realize that this is not a game worth playing.
Edit: And how well does Hedonic Treader's fantasy match my statement:
infernovia wrote:That happiness is something people enjoy, that also I will not argue. But the happiness that people speak of is also a way of self-denial, a way of repression, a way of mediocrity, a religion that murders the soul and psyche of the humans, that is something people need to understand.
In a certain way, I agree: I do engage in self-denial - in the sense that I deny the existence of the self as a consistent, metaphysical entity. Otherwise, I would just act selfishly in my own life instead of debating this. I do not agree that all concepts of happiness have to be mediocre - they certainly don't have to feel mediocre! - and as for murdering the soul and psyche... I don't know about that. I think that the freedom to wage war, the freedom to be cruel, etc. murder much, much more of the human "soul" - something you could understand from the victims' perspective.
I'm curious: Do you really think that concepts such as these are helpful in our understanding of what our real-world future should be like?