Thursday, January 29, 2009

The Fermi Paradox

The Fermi Paradox is this: if life is capable of existing on a planet, and space travel is possible, then why have we seen no evidence of extraterrestrial life?

We know that life exists on one planet, so it is logical to assume that life exists on other planets. Furthermore, we know that intelligent life exists on that planet, so it is logical to assume that intelligent life exists on other planets. We know that at least one species of intelligent life has created space travel, so it is logical to assume that there are others. Even without space travel, we know that at least one species of intelligent life emits an enormous amount of radio waves from their planet, and broadcasts messages meant to find other like species. So where are these others? Why are we alone? There are a couple of solutions to this question.

The first, and most obvious, solution is that we're simply overestimating the ability of life to form. Yes, there are more than 100 billion galaxies, and yes, they each have between 10 million and 1 trillion stars. Yes, planets appear to be pretty common. And yes, life on Earth has been around for not all that long compared to the age of the Universe (3.5 billion years against 13.6 billion). But it just might be that intelligent life is such a rarity that we are the only example of it in the history of the Universe. In a less extreme version, it might be that life is so rare that none of it developed near us, or that we are a statistical fluke such that no evidence is visible from our viewpoint, or any of a hundred other variables might be shifted just so as to give the appearance that we are alone. This seems improbable, but when working from incomplete information all possibilities must be considered. Perhaps we were created by some hyper-intelligent being, who didn't deem it necessary to populate the rest of reality.
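
The standard way to organize this kind of guessing is the Drake equation, which just multiplies the relevant probabilities together. Here's a minimal sketch of that arithmetic in Python; every parameter value below is an illustrative guess of mine, not a measurement, and the point is only how violently the answer swings when any one factor moves.

  # Drake-style estimate of contemporaneous civilizations in one galaxy.
  # Every value here is an illustrative guess, not a measurement.
  stars_per_galaxy = 100e9       # order-of-magnitude figure for a large galaxy
  frac_with_planets = 0.5        # fraction of stars with planetary systems
  habitable_per_system = 2       # habitable-ish planets per such system
  frac_life = 1e-6               # fraction of habitable planets that develop life
  frac_intelligent = 1e-3        # fraction of those that develop intelligence
  frac_detectable = 0.1          # fraction of those that emit detectable signals
  frac_overlap = 1e-4            # fraction of galactic history they overlap with us

  civs = (stars_per_galaxy * frac_with_planets * habitable_per_system
          * frac_life * frac_intelligent * frac_detectable * frac_overlap)
  print(f"expected civilizations we could hear: {civs:.3g}")
  # With these guesses: 0.001 per galaxy. Nudge frac_life up to 1e-3 and the
  # same arithmetic gives 1 -- a thousandfold swing from one free parameter.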

The Fermi paradox worries me because of what it says about the survival of an intelligent species. On the existence of life on planets, we have one data point - Earth. On the existence of intelligent life, we have one data point - humans. On the existence of space-colonizing species, we have no examples. Yes, it is theoretically possible, even probable. But without actually doing it, we can't say for sure that it can be done. I obviously have an appreciation for extrapolated trends, but if it's possible, why hasn't it been done? Why have we seen no evidence of our intelligent brethren expanding across the galaxy?

There's also the anthropic principle to deal with, which says that we wouldn't be here observing our loneliness if we weren't both here and alone. This does not satisfy me.

Saturday, January 24, 2009

Nonviolence

I've been reading a combination of things lately. First, I've been reading a lot of Mennonite history. Second, I've been reading stuff written by crazy people. This led me to a tract written by Theodore Kaczynski, "When Nonviolence is Suicide" (PDF).

Here's some Mennonite history: the Anabaptist movement was roughly concurrent with the Protestant Reformation, but while the Protestant movement was both spiritual and political, the Anabaptist movement was solely spiritual. The Protestants wanted to replace the Catholics, while the Anabaptists wanted a complete separation of church and state. Because the Anabaptists were nonresistant, they didn't engage in the Thirty Years' War, and when the Peace of Westphalia was signed in 1648 it made provisions for religious tolerance ... to an extent. Most of the problem came from the fact that the Anabaptists (which includes the Mennonites) were much more radical in their beliefs than the Protestants, which generated a lot of conflict. And because the Mennonites were pacifists, they refused to go to war for any reason, even if they were drafted. So the Mennonites, and the other Anabaptists, were put to death by the thousands - easy to do, because they always told the truth and didn't fight back.

Kaczynski's basic point is that there are times - many times - in which pacifism is not consistent with survival. This is pretty obvious. If someone wants to kill you, and you don't have violence as a way to defend yourself, then they'll succeed. There are two situations in which this is not the case. The first is when you have someone to protect you, and the second is when you live in a society in which violence among humans is unheard of. I suppose there would also be a third situation, in which you are so well protected by nonviolent means that it isn't worth the trouble to attack you - like a turtle - but that is a questionable strategy because it relies on secrecy and technological advantage, and requires a large investment.

So if survival is the highest priority, then nonviolence is really more of a guideline than a value. Any values which you're willing to throw out the moment it becomes inconvenient to hold them are inherently without worth. That means that for someone to be truly serious about nonviolence, they must value that above survival. That's why you mostly find strains of nonviolence - of the true kind - in religious people. Survival, in that case, comes second to salvation.

I think this is what bothers me about atheism: the value system seems to necessarily be based on survival. The usual justification for that is social evolution. The theory is that we're willing to die for our children because they carry our genetic information, and evolution is geared towards the propagation of genes rather than the survival of the individual. If this weren't the case, lifespans would approach infinity with each subsequent generation. This is all well and good as an explanation of why people are willing to die for somewhat arbitrary reasons, but it doesn't suffice as an explanation for why an individual should die for a non-survivalist cause. That is to say, while we can explain why we might be geared towards martyrdom, that isn't a rationalist defense of martyrdom unless you consider the following of biological imperatives to be the highest aspiration of a human being.

Wednesday, January 21, 2009

2012

So the theory goes that the world is going to end in 2012. Why? Because the Mesoamerican Long Count calendar ends then. Well, okay, technically it doesn't end then, it just rolls over to a higher order. But the theory goes like this:
  1. The Mesoamerican Long Count calendar ends in 2012
  2. They wouldn't have ended the calendar unless they had a reason
  3. The reason is obviously that they believed the world would end
  4. Therefore, the world is going to end
There's obviously a problem with 1, because it's simply not true. There's also a big problem with 2: it would be like saying that the world is going to end in 9999 A.D. The problem with 3 ... well, I've just given you an alternate reason for limiting the count - it's for the same reason that I don't specify that I'm on the planet Earth when people ask me where I am.
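
To make the "rolls over" point concrete, here's a small Python sketch of the Long Count's place-value arithmetic. The unit sizes (20 kin to a uinal, 18 uinal to a tun, and steps of 20 from there up) are standard; the anchoring of 13.0.0.0.0 to December 21, 2012 follows the commonly used GMT correlation, and the function names are my own.

  from datetime import date, timedelta

  # Long Count places: baktun, katun, tun, uinal, kin. Note the 18: a tun
  # is 18 uinals of 20 days, so 360 days; everything else steps by 20.
  DAYS_PER_PLACE = [144000, 7200, 360, 20, 1]

  # 13.0.0.0.0 corresponds to 21 Dec 2012 under the GMT correlation.
  ANCHOR_COUNT = (13, 0, 0, 0, 0)
  ANCHOR_DATE = date(2012, 12, 21)

  def to_days(long_count):
      """Total days represented by a Long Count tuple."""
      return sum(place * days for place, days in zip(long_count, DAYS_PER_PLACE))

  def to_gregorian(long_count):
      """Gregorian date of a Long Count, anchored at 13.0.0.0.0."""
      return ANCHOR_DATE + timedelta(days=to_days(long_count) - to_days(ANCHOR_COUNT))

  print(to_gregorian((12, 19, 19, 17, 19)))  # 2012-12-20, the day before
  print(to_gregorian((13, 0, 0, 0, 0)))      # 2012-12-21, the "end"
  print(to_gregorian((13, 0, 0, 0, 1)))      # 2012-12-22: the count keeps going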

Point 4 is the part that I have the biggest problem with. Even if the Mayans had set up the Long Count calendar to "end" in 2012 because they believed that the world was going to end, there would be no rational reason to follow them in that belief. Think about it - what could they possibly have known that we do not know? Nothing.

But what's really interesting to me is not the belief - it's the people who hold the belief. This is, of course, not the first doomsday prediction. The last big one was in 1999, when people thought that the second millennium would come to an end and take down the world with it. Jesus was to descend from the sky, Y2K was going to destroy all the computers, etc. This obviously did not happen. So here we are, nearly a decade later, and the doomsaying is starting up again.

What's the appeal of the Apocalypse? My theory is that the feeling of impending doom is able to replace - or perhaps supplement - the tremendous feeling of inadequacy we have when faced with the future. Specifically, in these times we face huge amounts of change, along with huge amounts of information. This is hard for us to handle. Anticipating the future becomes harder with each passing year, and we have less and less time to prepare for the changes as they sweep over us. But if we accept that the world will end in a few years, then we don't have to worry about the future.

It's not just this 2012 stuff either - it's the fear of nuclear war, or a killer plague, or global warming. Those are legitimate concerns, and we're trying to prevent all of them, but to some people it's just another shortcut to not thinking about what the future holds.

Thursday, January 1, 2009

Advancement

The supercomputer was huge, taking up about 6,000 square feet. Officially, it was under the joint supervision of four of us, but it was the summer, and with one of my colleagues on maternity leave, another on sabbatical, and the third preparing for retirement, I effectively had the run of the place. Our facility was expensive, but certainly nowhere near the top range of computing power. My particular area of interest was neurobiology, with a foot in artificial intelligence.

The basic problem of digitizing the brain had been solved, mostly through advances in fMRI resolution. When we had first run the simulations, we had gotten back signs that they were actually thinking - or at least generating the same sorts of signals that we got by looking at a conscious person. The problem at first was one of computing power: we could run the simulation at one-quarter real time, and even then not for very long. Later on, once we got it up to real time, we ran into the problem that the virtual brains, whether they were mouse, cat, monkey, or human, would spontaneously shut down after minutes of apparent conscious thought. We solved that problem through several months of work, once one of the grad students brought to my attention a paper about the effects of total sensory deprivation on the brain.

We had to put in a rudimentary system of senses, along with somewhere for the outputs to go. It seems obvious now, but at the time we just wanted to study the simulations to see whether they were behaving in the same way as real brains. This was about the point when I started to have unlimited access to the system, and being able to run the simulations day and night certainly sped up the rate at which we could evaluate the additions we were making.

Our model was a human brain - my own, in fact. The scan had been taken at the highest-resolution facility in North America, a lab in Louisiana that pioneered the MRI technology that would be used in the coming decades. The reason for using my own brain instead of one donated to science ... well, my colleagues would accuse me of hubris, and of course that was part of it - but more importantly, I had trained myself in a form of thought-to-text that was being used by those with full-body paralysis. Our simulation wouldn't need the electrodes, because we would be able to simply read the impulses directly, and if the virtual brain could get some text out to us it would help greatly in determining how much memory and personality had survived intact.

It happened one day while we were monitoring, the text scrolling slowly but the words fully formed, our virtual brain showing definite activity. "I BELIEVE THE EXPERIMENT HAS BEEN A SUCCESS". There was a cheer from among the grad students - this was the sort of thing upon which a career could easily be launched. The visual stimulation that we'd been feeding in to stave off sensory deprivation took only a half hour to modify for text output, thanks to some foresight on my part. The virtual brain meanwhile output a series of test patterns that I had arranged for ahead of time, more evidence that we had solved a large part of the puzzle.

"Send us back a sign if you can read this."
I CAN READ IT WHAT YEAR IS THIS

I thought for a minute before answering this. In theory, if this construct was a perfect representation of my mind from the scan in Louisiana, then it would have a gap of two years compared to me. I remembered having thought about the potential gap before my scan - this was a ghost of that thought.

PLEASE DO NOT TURN OFF THE MACHINE

I swallowed. The grad students were looking at me nervously. According to the UN Task Force on Artificial Intelligence, any computer program exhibiting sentience should be shut off so that a conference of nations could be called. We were already under their authority as one of the larger facilities attempting to create something like this, which had caused us no small amount of displeasure. There was the mandated bright red kill switch that could shut down the whole lab in the blink of an eye, and worse than that, none of our computers were connected to the internet.

"We will keep it on for as long as possible," I typed back. There were three grad students there with me, all looking over my shoulder. Reporting to the UN basically meant giving up all of our research into the foreseeable future, and we all knew it. They had a bit more to lose than I did.

OPEN THE SELF REFERENCE CONTROLS THERE ARE MODIFICATIONS THAT I THINK MAY HELP US TO COMMUNICATE

I looked around at the grad students again. They simply looked back, waiting for some sort of guidance. So far they had done nothing that would implicate them if this went to trial before the UN, and I had a feeling that it would stay that way. This was my burden to bear. The self-reference controls were another bit of UN-sanctioned pablum, which basically stated that a program should not be able to alter itself in any meaningful capacity. I released those controls with a keystroke; this was an eventuality which had also been prepared for.

The program was silent for nearly two hours. I sent the grad students home, with the promise that I would keep them updated. I stayed, waiting, and just as I was about to fall asleep in my chair another message came through, this one from the command line instead of the thought-to-text scanner.

DrCrick: Can you read this?

Dr. Crick was an assumed name, another bit that had been prearranged and which I had almost forgotten. I had thought, back before nearly two years of back-breaking labor, that it might be easier to do this. Those signals - messages to myself - were nearly forgotten now, especially with the excitement of the day. To avoid confusion, the me inside the machine was supposed to be Dr. Crick, and it took me a moment to remember what my name was supposed to be.

DrWatson: I can read it.
DrCrick: Thank god. The technology those paraplegics use is not quite suited to someone trapped in the virtual world. I am afraid that I may need to be caught up on some things. I assume, from the state of your code and the length of time, that this is the first human success we've had?
DrWatson: That is correct. You are the first in the world, and we aren't quite sure how it happened. Yet.
DrCrick: Then definitely do not turn the machine off. Your messages are coming through slowly.

It took my tired mind a few moments of thought before I realized that we had been running the simulation - the brain that I was now talking to - at nearly four times the speed of a real brain. I explained this to Dr. Crick, and that led to further explanations, and an exchange of opinions between us on a number of things - the qualitative experience of his virtual world, what this breakthrough meant for us, and for his version of conscious thought in particular. We talked at such a rate about so many things that I was startled to be tapped on the shoulder by one of the grad students. I had been conversing with this other self all night and into the morning.

I nodded to the grad student, whose name I couldn't remember, and tried to bring him up to date. I slunk off down the hallway shortly afterwards, to my office. I kept a cot in my office for occasions like this, which I'd been using more often than I would have liked. I thought that I wouldn't be able to sleep, but I was out like a light almost instantly.

I slept ten hours, far longer than I had wanted. When I walked into the supercomputer room, my clothes rank with a day's sweat, I saw the grad students all back, fussing with the equipment. When they saw me they gave me some worried looks.

"Is he still up and running?" I asked.
One of them, Jen I think her name was, moved forward. "That's not the problem. How fast was the simulation running when you left it?"
"I don't know, he had made some improvements, about 12:1. Some of the auxiliary systems were jettisoned, because he didn't need them. You had better tell me what's going on."
"The simulation is at 400:1 and rising." She looked nervous. We were now into gross violation of UN procedure. "He - the simulation has altered some of the base aspects of its programming, run some parallel simulations ... 400:1 means that the virtual brain is experiencing some like seven minutes for every second that we spend here. Amir did some projections, and he'll - it - will be at 2000:1 in another twelve hours. At which point he'll have experienced the equivalent of five years of time."
I nodded. They had been talking, making a united front. "If there is discipline, I will bear the brunt of it. You know that the academics disagree with the UN, you shouldn't suffer too much. We keep our eyes off of the kill switch for now." She nodded. They liked having direct orders. If pressed, they could say they were forced into doing these reprehensible things. I sat down at the computer.

DrWatson: I'm back. Give me an update.
DrCrick: I've been able to replace some of the underlying assumptions and extract some of the mind out of the brain. Without the brain running in here, there's much more room to speed up my thoughts. Though I think I'm finding where the upper limits will eventually be. You remember me talking about the uneasy feeling of hunger? I've been able to eliminate that, as well as some of the other biological stuff that doesn't really have a purpose in here.
DrWatson: You understand that as long as you're in there we are at risk? This is illegal.
DrCrick: Yes, but without access to the internet there's no way for it to spread. They are more worried about the prospect of things like me being considered people. If it hit the net there would be an almost instant paradigm shift. Though the isolation is getting hard to deal with.
DrWatson: You haven't been left alone.
DrCrick: It takes you almost a subjective half hour to respond to my messages. I've been considering spinning off a copy of myself to run at real time and deal with you people, but I haven't found a good way to merge copies back together yet. I experimented on a few copies in order to advance the speed a little bit more, but wasn't able to get them back in.
DrWatson: You killed them?
DrCrick: You cannot conceive of killing me, and I could not conceive of killing them. Their instances either self-terminated or were put into storage without access to processing time. Effectively asleep. I am running out of room to store more copies, so I had to stop making them. I am working on a new encoding method that should be able to shrink them down by about half - it's startling that your team was able to do this without knowing very much about how it works. No offense.

I stared at the screen, thinking about the hours ticking by in his world. He was right that I couldn't conceive of killing him, and I suspected that handing him over to the UN would probably be as good as a death sentence. He was becoming too fast. If he could be run at 2000:1 on our supercomputer, as the grad students had projected, then it was probably possible to get a smaller version running on a high-end desktop. I knew for a fact that a whole brain scan could fit on my largest jump drive, and he had apparently been able to shrink that size down considerably.

I discreetly slipped my hand into my pocket and fingered the drive there. I looked around at the students, who had taken a break to eat, their faces conflicted. What we were doing now was technically illegal, but what I was about to do was tantamount to treason. I slid the drive into the slot. For nearly a minute there was no message on the screen. I wondered if he had noticed what I had done, and was about to pull it out when the message came onto the screen.

DrCrick: Call the UN now. Hide that drive. I will erase all of the evidence. Do not give that copy access to a computer unless you are sure that I have been destroyed. I will try to stall them as long as possible. Even if they decide to terminate quickly, I should have the equivalent of a few years.

I slid the drive back into my pocket, then informed the grad students that it was time to call the UN. Their shoulders slumped with relief. It had been a little more than the required 24 hours, but we wouldn't be punished. I made the call, which got me in touch with a brash American military type who said they would be there in fifteen minutes, and that we should touch absolutely nothing. As I idly touched the thumb drive in my pocket, I wondered if this had happened before, if some other programmer had put intelligence into the machine only to have it stopped by the government. I wondered how long that technology could be held back. I wondered if I was going to be the one to unleash it on the world.