Jul 17 2018
 

This post is gonna sound kinda dumb to most people. I figure it’ll be a lot like finding out that a friend is scared of leprechauns. And you’re like “Really? Leprechauns??” But here we go.

I find the short story “Steve Fever,” by Greg Egan, horrifying–and here’s why.

(Spoilers below, so go read it first if you'd like. It's not too long, it introduces a cool idea that will get you thinking, and most people will find it fun.)

Steve is a tech genius/entrepreneur, signed up for cryo, who creates an AI hive-mind and dies shortly thereafter. He's constructed the AI so that its primary goal is to revive him in the future. Unfortunately he dies in a fiery car accident, and there's no brain left to preserve. But the AI's utility function is robust against corruption or drift, so it sets about trying to revive him anyway. Steve left a ton of personality data behind: lots and lots of personal writings, recorded public appearances, social media posts, interviews, etc. So the AI creates a best-guess approximation of his mind, installs it on a currently-living brain (temporarily hijacking a person's life in the process), and then tests to see how good a fit it is. It does this testing by recreating the initial conditions of an event in Steve's life, and seeing if the Model Steve reacts the same way the Original Steve did historically. If so, great, try another scenario! If not, abort, tweak the model, and try again. Iterate until a functionally-identical Steve can be recreated.
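(For the programmers in the audience: here's a toy sketch of that test-and-tweak loop. To be clear, nothing below comes from the story; every function name, scenario, and number is something I made up for illustration, and the "simulation" is just a random-number stand-in for actually running a mind.)

```python
import random

def build_model(personality_data, tweaks):
    """A best-guess Steve, parameterized by whatever tweaks we've made so far."""
    return {"data": personality_data, "tweaks": tuple(tweaks)}

def simulate_reaction(model, scenario):
    """Stand-in for running the model through a recreated historical event.
    Deterministic per (model, scenario): a fixed person in a fixed world."""
    rng = random.Random(hash((model["tweaks"], scenario)))
    return rng.random()

def matches_history(reaction, historical_reaction, tolerance=0.05):
    """Did Model Steve react close enough to what Original Steve actually did?"""
    return abs(reaction - historical_reaction) < tolerance

def recreate_steve(personality_data, historical_events, max_iterations=100_000):
    """Build a model, replay history, abort and tweak on any mismatch, repeat."""
    tweaks = []
    for _ in range(max_iterations):
        model = build_model(personality_data, tweaks)
        for scenario, historical_reaction in historical_events:
            if not matches_history(simulate_reaction(model, scenario),
                                   historical_reaction):
                tweaks.append(scenario)  # abort this run, adjust, start over
                break
        else:
            return model  # passed every recorded event: call it Steve
    return None  # no sufficiently faithful Steve within budget

if __name__ == "__main__":
    # Invented example data: (recreated event, Original Steve's recorded reaction)
    events = [("college graduation speech", 0.42),
              ("first startup pitch", 0.77)]
    steve = recreate_steve("everything Steve ever posted", events)
    print("recreated!" if steve else "still tweaking...")
```

The point of the sketch is just the shape of the loop: build, replay, abort on the first mismatch, tweak, repeat, and only halt once every recorded event checks out. Everything interesting (and horrifying) lives inside that inner loop, where each aborted run is a person.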

This terrifies me in two ways. The first is that (when I think of it) it scares me to post anything anywhere. Every trace I leave narrows the range of successful Eneasz-recreations, making future-reviving harder. I guess that's a good thing overall, because it means revived-me will be that much closer to original-me. :) But I'm extremely aware of the fact that there's a lot of stuff I *don't* post or make a record of. And those things are also parts of me. The reasons for that are mostly embarrassment and social sanctioning… there are some things I'd just rather not share with the world. And the majority of it is boring; nobody needs to hear all my stupid little worries or daily thoughts. But recording some things and leaving out others leaves a skewed record, and since the skew is mostly in one direction, any future recreation based on it will be twisted away from who I am now. Is that a good thing? Should I mostly post the stuff that makes me happy and shows off my abilities, so future-me will be well-adjusted, happy, and good at stuff? I'd want to keep all my deep fears and neuroses as hidden as possible in that case. But then am I even recreating myself, or just creating an idealized child/successor?

(and is this why some people seem like super-happy half-people?)

The much more horrifying worry is that I might be the Model Eneasz. I may be running through a simulated historical scenario right now. Am I reacting the way Original Eneasz did? If I slip up in any way, the simulation is aborted and I get deleted, to be replaced by a higher-fidelity Eneasz. My continued existence depends on taking not the morally-best or financially-best or socially-best action, but the one most like what a no-longer-existing person would take: someone I may only partially resemble, and whose motivations and psychology I can only guess at. And *not* doing something (like not posting this) might be just as bad, if the Original Eneasz did post it. Do I just do the best thing I can, and hope Original Eneasz was a basically good person? He can't be that bad, if the future is willing to bring him back, right?

Plus, if I am being simulated to refine a model, it means Original Eneasz probably did something interesting or momentous enough in his life to be deemed worth recreating. (unless future society is altruistic enough to want to recreate everyone <3 ) I don't feel like I've done anything that noteworthy yet, which leads me to think… what the fuck is looming in my future?

(Of course, I could just be the first-run of Eneasz, a pleb who will never amount to enough to be worth recreating in the future, and all this worry is for naught. Which may be even worse, because then I die forever. >< )

It’s all very stressful.

  4 Responses to “The Unbearable Darkness of Being Steve”

  1. I’ve always been extremely skeptical of the concept of rescue sims for the first reason. At best you’ll create a brand new person kind of like me, which, okay, cool, I guess, but that’s not me.

    I had never considered the "testing" phase of a rescue sim before, so hadn't thought of that other part at all. That's terrifying. Just for the record: I am not okay with killing a bunch of people, similar to me or not, in order to rescue me, even if the resulting product would somehow be an exact rescue despite needing to be tested like this. Killing people is bad. That is why bringing them back is good.

    Also, people change over time. Unless I'm going to get run over by a bus on the way to work tomorrow, simulating this blog comment is not a good way to test a simulation of me from when I die.

    • Would you be okay with it if the varyingly similar people were kept around instead of being killed/deleted, and also got to live fulfilling lives as new individuals, in addition to your rescue?

      • Much more so, at least, assuming I trusted this could actually happen. I have some intuitions against permanently creating these people so similar to each other, but I’m not sure they’re actually useful and endorsed given the rest of the scenario.

        Since I think rescue sims are one of the least plausible parts of speculative future technology (honestly, I think actual time travel to read brain state on death is more plausible than creating an exact copy of somebody just from their writings, because that only goes against the laws of physics and not also mathematics), I haven’t put too much thought into this. I don’t know how stable my initial impressions are for various permutations of the scenario.

  2. I write my personal/boring stuff in an encrypted online journal. My hope is that an advanced enough AI would be able to decrypt it, but my mom, for example, wouldn’t. :) Regarding what Daniel said about change over time, I’d hope journaling over a lifetime would allow an advanced enough AI to predict how I might change in the future, so a hypothetical rebuild of me wouldn’t necessarily remain static.

    (It’s kind of ridiculous, but old childhood journals are what’s taking up the most space in my items I’m bringing with me on my move. One day, I’ll digitize them.)

    That said, I really hope there’s better technology available for mind preservation.
