I don’t mean that humans are machines that happen to feel emotions. I mean that humans are designed to be machines whose output is the feeling of emotions—“emotion-feeling” is the thing of value that we produce.
Humanity has wondered what the purpose of life is for so long that it’s one of history’s oldest running jokes. And while everyone is fairly concerned with the question, transhumanist singularitarians are particularly worried about it, because an incorrect answer could lead to a universe forever devoid of value, when a superhuman AI tries to make things better by maximizing that (less than perfect) answer. I’m not here to do anything as lofty as proposing a definition of the purpose of life that would be safe to give to a superhuman AI. I expect any such attempt by me would end in tears, and screaming, and oh god there’s so much blood, why is there so much blood? But up until very recently I couldn’t even figure out why I should be alive.
“To be happy” is obviously right out, because then wireheading is the ultimate good, rather than the go-to utopia-horror example. Everything else one can do seems like no more than a means to an end. Producing things, propagating life, even thinking. They all seem like endeavors that are useful, but a life of maximizing those things would suck. And the implication is that if we can create a machine that can do those things better than we can, it would be good to replace ourselves with that machine and set it to reproduce itself infinitely. Imagining such a future, I disagree.
I recently saw a statement to the effect of “Art exists to produce feelings in us that we want, but do not get enough of in the course of normal life.” That’s what makes art valuable – supplementing emotional malnutrition. Such a thing exists because “to feel emotions” is the core function of humanity, and not fulfilling that function hurts like not eating does.
The point is not to feel one stupid emotion intensely, forever. It is to feel a large variety of emotions, changing over time, in a wide variety of intensities. This is why wireheading is bad. This is why (for many people) the optimal level of psychosis is non-zero. This is why intelligence is important – a greater level of intelligence allows a species to experience far more complex and nuanced emotional states. And the ability to experience more varieties of emotions is why it’s better to become more complex rather than simply dialing up happiness. It’s why disorders that prevent us from experiencing certain emotions are so awful (with the worst obviously being the ones that prevent us from feeling the “best” emotions).
It’s why we like funny things, and tragic things, and scary things. Who wants to feel the way they feel after watching all of Evangelion? Turns out – everyone, at some point, for at least a little bit of time!
It is why all human life has value. You do not matter based on what you can produce, or how smart you are, or how useful you are to others. You matter because you are a human who feels things.
My utility function is to feel a certain elastic web of emotions, and it varies from other utility functions by which emotions are desired in which amounts. My personality determines what actions produce what emotions.
And a machine that could feel things even better than humans can could be a wonderful thing. Greg Egan’s Diaspora features an entire society of uploaded humans, living rich, complex lives of substance. Loving, striving, crying, etc. The society can support far more humans than is physically possible in meat-bodies, running far faster than is possible in realspace. Since all these humans are running on computer chips, one could argue that one way of looking at this thing is not “A society of uploaded humans” but “A machine that feels human emotions better than meat-humans do.” And it’s a glorious thing. I would be happy to live in such a society.
I think there’s an easy enough analogy to wireheading for this, where you figure out more points to stimulate than just the one for happiness, and the AI stimulates different centres at different times following a complex algorithm, and then other than that change it’s exactly the same as the wireheading future. Does this actually sound any better than the go-to utopia-horror example?
I was also going to ask about this: “The point is not to feel one stupid emotion intensely, forever. It is to feel a large variety of emotions, changing over time, in a wide variety of intensities. This is why wireheading is bad.”
I think any solution which allows you to artificially experience an unlimited amount of happiness would be able to allow you to artificially experience an unlimited amount of all emotions in varying intensity, duration, and combination. This is why wireheading is good, yes? :D
I mean that humans are designed to be machines whose output is the feeling of emotions—“emotion-feeling” is the thing of value that we produce.
This is clearly untrue. We know the design process by which emotions came into existence – evolution by natural selection. And we know what this process optimizes – inclusive genetic fitness. We know what the brain is for – generating useful motor output from sensory input.
So we already know human emotions were designed to generate motor output from sensory input to maximize inclusive genetic fitness in the ancestral environment.
Emotions aren’t the output, movement is the output, emotions are just high level functions to calculate which movements are fitness-maximizing.
Eh. Yes, that is literally true. But I wasn’t asking “What is the human body designed for?”, I was asking “What is the thing of value that humans produce?” Why exist at all? All biological replicators replicate. That is not special; it can theoretically be done better by non-sentient agents in a future we would not recognize as having any value. If we are replaced by something that can do Thing X better, but we view the future where this is the case as drastically worse than the present (or downright nightmarish), then Thing X is not the answer to “Why exist at all?” I believe we’re answering different questions.
I apologize that my use of the phrase “designed to be X” was confusing; I was being more poetic than literal at that moment.
Yes, “Why do we exist?” and “Why should we want to exist?” are two different questions.
The only – entirely subjective – answer I have for the second question is because it is a positive experience, or it has good consequences (which, at some point, would have to be grounded in positive experiences or the prevention of negative ones).
I guess that leaves me open to wireheading scenarios, in principle. In practice, I think they would screw it up and we need the person as input for practical reasons.
I see you’ve linked to Ozy, above, and this article reminds me of a number of times they’ve said that they value their Borderline Personality Disorder, for precisely the reason that it allows them to feel emotions more.
It seemed relevant.