Death by Installments?
2002-04-14 12:18 pm
Robotic eye implant developed by Dr. Albert Cook at the University of Alberta. Photo by John Clark, WIRED Magazine, December 2000 issue.
(Title ripped off of Peter Cochrane's paper of the same name.)
In the follow-up discussion to a recent post, amanda42 made the comment:
I don't know about the downloading thing... once I downloaded my consciousness, the copy would be a separate, independent entity from me. I (this brain sitting in this hunk of meat) would still not want to die regardless of whether or not I had copied my consciousness or my body.
I'm holding out for nanotech...
Amanda's post reminded me of a post I made on nanodot.org in response to a skeptic of uploading. In brief, I suggested that the capability to upload will arrive gradually, and that as it does our definition of "self" will expand to include entities more widely distributed than our current bodies allow.
One of the bits about the common "gradual replacement" uploading hypothesis that both amuses and irritates me is how it is obviously an attempt to sidestep some of the more disturbing possibilities such a course of action might make possible.
Imagine that you were in the shoes of the person arguing for uploading. Assume that you were addressing a skeptical, possibly hostile audience. Would you immediately jump to the most radical implications of your position? Or would you begin from what you assume to be common ground?
It has not been my experience that extropians sidestep the more unusual implications of uploading--I encourage anyone who may have questions to read/subscribe to the extropians mailing list. There you will find a number of people quite willing to explore even the most far-reaching ramifications of uploading.
It also directs attention away from one of the other implications of the "Memorex" system of uploading: that of making multiple copies. Some people might find the idea of numerous examples of their personality and memory existing simultaneously an attractive one. Such people are probably very scary, and should not be allowed to do so.
Well, the lack of redundancy in the human brain has always bothered me. It takes at least a decade or so to acquire competency in most fields. Yet a single careless driver, a fall in the bathtub, a stray bit of fat in the wrong artery, and...poof!...in an instant, all of that accumulated knowledge is gone. This seems quite wasteful to me.
I've always loved to read. And what bibliophile would not feel a tremendous sense of loss upon learning of the burning of the Library of Alexandria? Or the burning of any major library, for that matter.
Yet what is a book? Are not most books simply a story that somebody has written down? Most people write down only a few, if any, of the stories that they know. When they die, all of their stories are lost. As a result, every day, the equivalent of thousands of Libraries of Alexandria is lost. If saving those people's lives (and their stories) means expanding my concept of self to include duplicates, then I'm quite happy to do so.
The whole "backup" idea also gets to me. "Okay, I backed up my brain yesterday, so it doesn't matter if I die today." See if you still feel that way as the plane you are in goes down in flames.
Right now, most people's sense of self is strongly associated with a single, physically localized body. However, I expect that our sense of self will gradually expand to include a network of selves, possibly spread over a large geographical area.
We already store a part of ourselves outside our bodies. For example, couldn't the written word be considered an externalized form of long-term memory? Computers and calculators also assist with arithmetic, visualization, long-term memory, and other mental functions that prior to the computer's invention were performed primarily by our biological apparatus alone. As these "external brain assistants" increase in capability, and as the interface between our biological brains and these "brain assistants" improves, I expect our sense of self to expand to encompass such a network of brains.
Would a duplicate trapped on a burning plane feel fear? Probably, if it were an exact duplicate of me as I am now. But as I imply above, I would expect that only part of "myself" would be duplicated in any given node of my "brain network". I also would expect that those partial duplicates would not necessarily experience fear in the same way we do now. We feel fear when endangered because those ancient relatives who felt no fear presumably did not survive to reproduce. From the standpoint of the "brain network" as a whole, the loss of a single node would not be the catastrophic loss that the death of my body would be now. Fear appears to be controlled primarily by the amygdala. Therefore, I expect that by the time we have advanced enough to create the "brain networks" I describe above, we will also be able to alter the functioning of the amygdala (or its equivalent) in each node of the network, so that each node feels whatever level of fear I deem appropriate.
Therefore, the "node" going down in flames would likely feel some fear (I would leave the capability to feel some fear in place, because I wouldn't want my nodes to endanger themselves willy-nilly), but I expect the "node" would spend most of its attention attempting to radio as much of its unique memories, skills, and personality as possible to another part of my network, rather than stewing about its own impending demise. In any case, assuming appropriate backups had been made, only a small part of "me" (the entire "brain network") would be lost--I imagine the loss "I" would feel would be analogous to the loss I would now feel at losing a few days of unsaved work on my computer.
Many people, if they think about the possibility of "brain networks" now, will find them strange and frightening. But such "brain networks" will evolve gradually over many years (decades, probably). At each step, the advances will seem obviously helpful. A prosthetic eye for blind people? Of course. Brain/computer interfaces for the paralyzed? Seems very helpful. Nanometer-scale medical imaging? Marvelous. By itself, no single innovation will suffice, but each advance will build upon the others.
Eventually, I expect we will become "brain networks" without ever quite realizing it was happening. Our future selves (or our descendants) will look back on our comparatively crude present state and feel the same way we do about our ape-like ancestors:
"They only had single node brain networks? How sad."

no subject
But, like her, I was thinking about it in terms of copying consciousness, not in terms of expanding it.
This is one of the more interesting ideas I've run into recently. You should post original thoughts more often! :)
no subject
Date: 2002-04-16 12:30 am (UTC)
Vinge's novel portrays a race of super-intelligent dogs, who have developed a "pack mind" in which each member of the pack has specialized in some skill--verbal, mathematical, spatial, etc. Over short ranges, the pack members can communicate telepathically, so the "pack" operates as a single unit/individual.
If separated, however, although each pack member retains its specialty, each becomes stunted in the areas specialized in by the other members of the pack.
One group of dogs develops technology to amplify the telepathic signal, allowing them to operate as a single "mind" over a much wider range and giving them a great advantage over their rivals.