Friday, May 7, 2010

The Total Disclosure Project

So I've said a couple of times on this blog that I'm not real big on privacy. Most of the time, the information that we generate every day is completely worthless, not only to the people who know us, but to corporations. The only real concern is whether or not this information could be used against me - as with phishing attacks and things like that.

Anyway, here's what I want to do: live with most of my privacy stripped away. I've been thinking hard about how to actually accomplish this. The big problem with total surveillance (which is what this would amount to) is that I would have to involve everyone that I see in a social setting, and everyone that I work with. I don't really want to do that.

A less extreme version would go something like this:
  • Total location tracking provided by a custom app for Android which would automatically load information onto the web.
  • An online repository of receipts to track everything that I purchase.
  • Setting all privacy settings on social media to the lowest possible setting.
  • Logging media consumption and daily activities.
  • Logging biometrics.
  • Logging web-browsing history.
  • Photographing or filming parts of my life, preferably automatically.
  • Blogging more.
And here is a list of pros and cons.

Pros:
  • We tend to behave better when we know people are watching.
  • It would generate some useful discourse.
  • It would match up with my stated ideals.
  • It would tell me things about myself.
Cons:
  • It would mean at least some infringing on the privacy of people around me.
  • It's a little narcissistic.
  • It would be technically complicated.
  • It would open me up to identity theft if I wasn't careful.
So far, I like the idea, but I think it's sunk unless I get permission from Alyssa.

Edit: Okay, I have permission from Alyssa. Enabling location tracking was easy - it's now to the right of this blog. I'll be working on figuring out how difficult it is to do the rest of the stuff.
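
For the curious, the plumbing for the location piece isn't complicated. Here's a minimal sketch of the idea, not the actual app: the endpoint URL and payload format below are invented, and a real Android client would pull coordinates from the phone's location API instead of taking them as arguments.

```python
# Hypothetical sketch of the location-tracking piece; the endpoint URL and
# payload format are made up, and the real Android app would read
# coordinates from the phone's location API rather than taking arguments.
import json
import time
import urllib.request

TRACKER_URL = "https://example.com/api/location"  # placeholder endpoint

def post_location(lat, lon):
    """Send one timestamped GPS fix to the tracking endpoint."""
    payload = json.dumps({
        "timestamp": int(time.time()),  # seconds since the epoch
        "lat": lat,
        "lon": lon,
    }).encode("utf-8")
    req = urllib.request.Request(
        TRACKER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 means the server logged the fix

# Example: report a fix for downtown Duluth every five minutes.
# while True:
#     post_location(46.7867, -92.1005)
#     time.sleep(300)
```

Most of the other items on the list would follow the same pattern: a logger on my end, a dumb endpoint that accepts timestamped records, and a public page that displays them.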

Tuesday, April 27, 2010

Arizona Nazis

So there's a new bill out of Arizona which (basically) requires people to carry their papers on them at all times in order to aid the police officers of that state in cracking down on illegal immigration, along with a number of other measures.

Cue comparisons to the Nazis.

There's a famous quote by Ben Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." This has always struck me as a horribly vindictive statement. There are always trade-offs between liberty and safety; the real issue is managing the various exchanges so that you don't come out with a net loss, and the rate of exchange depends on the individual person. It might be that Franklin was using careful wording when he said "essential liberty", but then again, I'm not of the opinion that the phrase "essential liberty" is especially meaningful. As the risk of getting stabbed in the back approaches zero, the number of liberties I would give up approaches infinity.

I'm also usually in favor of the government having more information. A lot of our infrastructure and services would run better if there were, say, ubiquitous fingerprinting. It would allow the identification of runaways and dead bodies, it would help solve crimes, and it would make identity verification much faster (though a system of that sort would always have problems). Divulging medical information would greatly increase the speed of medical research (though this would require that no one could turn you down for insurance, or fire you from your job, because of that information). Full demographic information would better allow government distribution of funding, as well as sociological research that would further our understanding of which government programs are working and which aren't.

So I don't really see the problem with requiring people to carry around their ID. That's not such a big loss of liberty, especially since it doesn't even affect the majority of people who are carrying wallets and driver's licenses in the first place. You might argue about the rights of the minority who don't want to have to carry that stuff, but I'm willing to make something as basic as identification a requirement for living in this country (just as I'm willing to require the payment of taxes).

The big problem with the bill is that it's being perceived as racist. This is more a problem with the general perception of the government than anything else. People assume that a law passed like this is just going to be used to give the police a bullshit reason to stop and detain brown people. It might - I'm not from Arizona, I don't know how deep the police and government prejudice runs.

All I'm saying is that requiring identification is actually a pretty good idea.

Monday, April 19, 2010

Should the First Amendment Include Corporations?

Alright, so the grievously stupid invocation of the First Amendment when talking about corporations is quickly becoming a pet peeve of mine, as I seem to be finding it everywhere I look. Also, Google recently made a post about their approach to free expression. (Made just a few days after my blog post. Coincidence? Yes.)

So a lot of people think that these major corporations are beholden to an ideal form of the First Amendment instead of the actual law. In part, this is because speech-through-intermediaries is a somewhat new concept. In the past, this took the form of letters to the editor, or call-ins to radio shows, or actual employment with the press. In some cases, this included hand-cranking presses in a small basement somewhere - and once a government starts cracking down on do-it-yourself presses, it's a pretty good sign that totalitarianism is nigh.

So why do we even have such a concept as free speech in the first place? I suppose it's because of the belief that free speech is good for our society. Majorities are often mistaken, which is why we need minority voices in order to have a reasoned discussion about what should be done. In a sense, the founding fathers left free speech to market forces. Good ideas would float to the top, while bad ideas would sink to the bottom, and in the end, society would be the better for it. Obviously, as with the free market, they saw the need to place restrictions on speech, hence "clear and present danger" etc.

As time went on, and America expanded, facilitated communication grew. In the era of letters, this mostly took the form of the USPS, which, as a government agency, falls under the First Amendment, and is thus fairly uninteresting. Given more time, new technologies and new ways of communicating came along. In 1910, telephones, telegraphs, and radio became "common carriers" which meant that they had to provide their services to the public without discrimination. This is one of the ways in which the United States is somewhat unique, and demonstrates one of the reasons that common law is sort of stupid. The idea of a "common carrier" originally belonged to transportation of people and goods, and was taken from that context to apply to information. In this sense, telecommunications are covered under the First Amendment, as they are services licensed by the United States government.

The original internet was built on the backbone of telephone infrastructure, which meant that it could be regulated by the FCC in the same way that telephone, television, telegraph, and radio services were. As time went on, and technology changed, DSL and cable got reclassified as an "information service", which is legally distinct from a "telecommunications service" and is the whole reason that anyone is arguing about net neutrality. But that's not what this post is about.

Even if the Obama administration comes down on the side of net neutrality and reclassifies the internet as a telecommunications service, there will remain the larger question of how to regulate the huge companies that control the flow of content. I am speaking specifically of Apple, Google, and Amazon. All three of those companies exert enormous power in the market not just for things, but for ideas. Small companies have been known to collapse when Google tweaks its search algorithm and sends their website to the second page of results.

So here's the question - is Google a common carrier? It's obviously not in the legal sense, as it doesn't fall under the authority of the government, but it is in the sense that people depend on it. Yet Google's whole job is to separate the useful from the worthless: in other words, discrimination. If Google were a common carrier, how would it function, when by definition it needs to value some speech over other speech? In some senses it would be easier for Apple, as their app store would simply have to accept all submissions, and no song or podcast would be denied access to iTunes.

Right now, we depend on these companies to not cross any lines. A free market optimist might say that we have nothing to fear, as these huge companies have no real choice but to follow the will of the masses. I would respond that this is exactly why we should be afraid. On the other hand, if these companies trend liberal (and I believe they do) then it may mean that the undesirable parts of free speech, such as hate speech and conspiracy theories, will suffer from erosion as they become less and less accessible.

(Note: this title has two potential meanings, but I'm pretty obviously talking about restrictions rather than protections, the latter having been decided in Citizens United v. Federal Election Commission.)

Thursday, April 15, 2010

Corporations and Free Speech

So there's recently been some murmuring about Apple and how awful it is that they block apps for political reasons. Phrases similar to "I guess Apple doesn't care about the 1st Amendment" keep cropping up. This also gets tossed around when any online service starts to censor people for any reason.

Let's get this straight right now: that is not what free speech means. The very first words of the First Amendment are "Congress shall make no law ..." You will note that those words explicitly mean that this does not apply to people or corporations (because corporations are people too). So a corporation doesn't have to give people a voice, even if it's in the business of providing voices, and it can censor those voices however it wants. The First Amendment argument only really has a place when it's the government which is restricting speech - such as the FCC slapping down fines on people. Why do people think this is the case? It's a misunderstanding, sure, but there has to be a reason that so many people seem to misinterpret the law.

It might be a sense of entitlement. In this country, we have it hammered into our heads that we can say anything we want, short of a "clear and present danger" or defamation. But because of various technological advances, a huge majority of our speech is mediated by corporations. For me to make this blog post requires the use of a computer (made by HP), an OS (made by Microsoft), blogging software (made by Google), and internet access (provided by Charter). At any point in that chain, there is an opportunity for censorship, because none of those corporations has any legal requirement to allow me to do what I want - in fact, I have "signed" contracts with all of them, even if most of those were click-through EULAs that I didn't actually read.

The other reason might be that it doesn't happen all that often. People then assume that because it's not happening, it must be illegal. There is then the question of why it's not happening, and I have a few theories about that:

1) Restricting speech is bad for business.
If Company A restricts speech and Company B doesn't, people will be more likely to take their content creation to Company B. While speech restriction might raise the overall quality of Company A's offerings, overall quality is pretty unimportant in the era of search. There are also a few (weak) legal grounds on which to sue Company A - misrepresentation and discrimination being the big two.

2) Restricting speech is damned expensive.
This is one of the things that always bothered me about 1984. Who is watching all of these people? It would take a huge amount of workers to police even a small slice of the content output of the internet, and it would be brutally inefficient to boot. In the future, this job will probably be handled by artificial intelligence - already there are algorithms that can pick out "content of note", but those are mostly used by our intelligence and advertising agencies.

3) Restricting speech happens, but it happens to people who aren't sympathetic.
There are certainly examples of this happening. This is why there aren't nude pictures on Facebook or graphic videos on YouTube. In those circumstances though, the company in question looks good, because they're removing something that most people don't want to see when they go to those places. In other words, it doesn't look self-serving. More problematic is the issue of copyrighted videos, which YouTube will remove even if they fall under fair use (though this is still its prerogative).

Of those three, I think that the second one makes the biggest impact. As time goes on, bots will get better at tagging objectionable posts for human review. They'll need to do this for all the stuff you're not allowed to put on any kind of commercial site, both of the "clear and present danger" type and the porno/graphic type, especially now that the specter of cyberbullying has been raised. As the technology gets better, the temptation will grow to turn it on whatever kind of speech hurts the corporate bottom line. This is especially true if, like Apple, you have the infrastructure set up to go through content by hand, one item at a time.
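
To give a sense of how cheap that first pass can be, here's a toy sketch of the kind of bot I mean: it removes nothing, it just scores posts against a placeholder list of red-flag terms and queues the high scorers for a human. The terms, weights, and threshold are invented for illustration; a real moderation system would be far more sophisticated.

```python
# Toy first-pass moderation bot: it removes nothing, it only scores posts
# against a placeholder term list and queues the high scorers for human
# review. Terms, weights, and the threshold are invented for illustration.
FLAGGED_TERMS = {
    "bomb threat": 5,   # "clear and present danger" type
    "nude": 2,          # porno/graphic type
    "full movie": 2,    # likely copyright problems
}

def review_score(post_text):
    """Crude score: sum the weights of every flagged term that appears."""
    text = post_text.lower()
    return sum(w for term, w in FLAGGED_TERMS.items() if term in text)

def triage(posts, threshold=2):
    """Split posts into (needs_human_review, auto_published)."""
    flagged, published = [], []
    for post in posts:
        (flagged if review_score(post) >= threshold else published).append(post)
    return flagged, published

# Example:
# flagged, published = triage(["nice photo!", "watch the full movie here"])
```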

Tuesday, April 13, 2010

Labor Implications of Content-Aware Fill

So I've been checking out a lot of the videos and commentary on Adobe's new Content-Aware Fill. I find several things about the online discussion to be fairly amusing.

Firstly, there are the people who claim that this is fake. I can sort of understand this, as it was originally posted around April Fool's. However, there's nothing all that funny about this particular technology, and nothing all that unbelievable. Of course, to some people, this is unbelievable - because Content-Aware Fill makes a lot of the menial labor parts of digital manipulation disappear.

Second, there are those people who see what it can do and get the wrong impression. They say "Finally! I'm going to have so much more time!" This betrays a basic misunderstanding of market economics. If it takes less time to do something, you have fewer billable hours. While it's possible to reduce the labor required and keep your prices the same, you'll quickly be undercut by your competition. This applies doubly to a profession that's less likely to have permanent contracts.

Third, there are those people who think that this will cost people their jobs. This is the other side of the "labor saving" coin. I've often heard the argument that the only result of new technology is a shifting of labor. Basically it goes like this: I invent the cotton gin, which decreases the work required to separate cotton fibers from cotton seeds by a factor of fifty. This makes cotton cheaper, which means more people will buy cotton, which means that I need to hire more people. Additionally, cheap cotton boosts a number of other industries, such as clothing manufacture.

I often question whether this is actually true. History has shown that increases in technology mean that labor will shift to less and less "essential" tasks, as seen by the movement over time from agriculture to industry to services. It's somewhat difficult to find the data to compare occupations over time adjusted for population increases, so I have no idea whether there are (for example) fewer farmers today than there were a hundred years ago. It actually seems likely that while the number of farmers has decreased over time, the number of people employed in secondary agricultural occupations (fertilizer, herbicide, and pesticide production, genetic engineering, tractor manufacture) has increased. Again, this is just a guess - if I were in grad school, and for something other than computer science, this is probably what I would study.

What would happen first though is that prices would fall, which means more people would be able and willing to pay for graphical work. This gives a bit of a cushion. Additionally, since retraining takes both time and money, a new technology will reduce wages before it cuts any actual jobs.

Finally, there are those people who say "GIMP has been able to do this for years with the ReSynth plug-in". This is (mostly) true. But for whatever reason (I've heard it's mostly the UI) most people who do image manipulation for a living use Photoshop, and for them, if a feature isn't in Photoshop, it doesn't exist. Personally, I get most excited about technologies when they're being researched at universities. Content-Aware Fill owes a lot to PatchMatch, which was developed at Princeton by people who also work for Adobe. The problem, of course, is that it takes seemingly forever for any interesting technology to get from "cool idea" to "workable reality".
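
For a sense of what's under the hood, here's a toy version of patch-based filling: for each missing pixel, brute-force search the rest of the image for the patch that best matches the surrounding known pixels and copy from it. This is not Adobe's code or the PatchMatch algorithm itself - PatchMatch's whole contribution is replacing this brute-force search with a fast randomized one - but it shows the core idea, under the assumption of a grayscale image and a boolean hole mask.

```python
# Toy exemplar-based fill, for illustration only. NOT Adobe's implementation
# or PatchMatch itself; PatchMatch makes the brute-force search below fast.
# Assumes a 2-D NumPy float image and a boolean mask that is True over holes.
import numpy as np

def naive_fill(image, mask, patch=7):
    half = patch // 2
    h, w = image.shape
    filled = image.copy()
    # Candidate source patch centers: patches that contain no missing pixels.
    sources = [
        (y, x)
        for y in range(half, h - half)
        for x in range(half, w - half)
        if not mask[y - half:y + half + 1, x - half:x + half + 1].any()
    ]
    for y, x in np.argwhere(mask):
        # Center a target window on the hole pixel (clamped to the image edge).
        cy = int(np.clip(y, half, h - half - 1))
        cx = int(np.clip(x, half, w - half - 1))
        target = filled[cy - half:cy + half + 1, cx - half:cx + half + 1]
        known = ~mask[cy - half:cy + half + 1, cx - half:cx + half + 1]
        best, best_cost = None, np.inf
        for sy, sx in sources:
            cand = image[sy - half:sy + half + 1, sx - half:sx + half + 1]
            cost = np.sum((cand[known] - target[known]) ** 2)  # known pixels only
            if cost < best_cost:
                best, best_cost = (sy, sx), cost
        if best is not None:
            sy, sx = best
            filled[y, x] = image[sy + (y - cy), sx + (x - cx)]  # copy matching pixel
    return filled
```

On anything bigger than a thumbnail this brute-force search is unusably slow, which is exactly the "cool idea" to "workable reality" gap that PatchMatch closed.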

Monday, April 12, 2010

Meritocracy is the new Aristocracy

It used to be that kings actually were better than everyone else. This was because of their diet; a prince or dauphin growing up would receive a lot more meat than a peasant child, not to mention how much more varied their diet was. So when they finally got to be king, they would be much more fit for their position than some random worker in the fields for whom meat was a special weekly treat. This is besides the fact that kings-to-be got a huge amount of training in matters both martial and intellectual. So it's fairly safe to say that kings were more fit to rule than their subjects.

Fast-forward a few hundred years, and the problem is well on its way to being fixed. The state tries its best to feed children as much as they need (though it fails miserably on several counts) and also to educate these children growing up (and again, tends to have decidedly mixed results) so that they can be useful members of society - if not at the upper echelons. In terms of intelligence and physical prowess among our youth, I think it's fairly safe to say that there's less of a divide between the rich and the poor than there was in medieval times.

But the big problem on the horizon is that education and nutrition are only the starting point. If you really want to improve your children's capacity for greatness, you'll engage in genetic engineering.

Intelligence, as a high level function, is going to be ridiculously difficult to engineer. Genetic engineering isn't all that sophisticated compared to where it will be in the end game, but that doesn't mean that we're not quickly approaching territory which used to be reserved for science fiction. In fact, the ability to engineer intelligence will most likely happen after human cloning becomes viable (especially if they lift the ban). Either way, once this happens, the divide between the rich and the poor will become even more pronounced, as rich children will be genetically superior to poor children, in addition to the host of other benefits they enjoy.


This is an interesting vision of the future that I don't think will come to pass. The big problem with extrapolating from current trends is that multiple trends happen at the same time. So while genetic engineering is moving fast, it's also competing with artificial intelligence and nanotechnology. Those three form the basic core of what's to be expected from the future, and they will all arrive into fullness in a series of quick steps that's already happening. So don't worry about the new aristocracy - worry about a million small things happening at once that will render this world unrecognizable.

Thursday, April 8, 2010

What I Believe: Part 2

Continued from Part 1.

Spirituality


The evidence for the existence of a god is weak. To me, of course, the question is one of proof. I think that's almost the antithesis of what most religions would teach, which is faith. I'm not saying that religious people have "God exists" as a simple axiomatic statement in their minds, because that's a somewhat reductionist view. People believe things for a whole host of reasons. Besides that, not all people have tried to build up their beliefs from a set of axioms - it's a somewhat stupid way to go about it. I also don't think that most people care if their beliefs are internally consistent (and I'm not really sure that mine are).

So while there might not be good evidence for the existence of a god, there's no evidence against it. In fact, I can conceive pretty easily of a being with massively more power than me - an entity capable of altering the laws of the universe at a whim and violating physical constraints. However, if such a being were to exist, I think that it would still follow a set of concrete laws, even if those laws aren't the same as those in normal existence.

I get there by imagining the universe as a virtual place, like a giant simulation being run on a massive scale. The simulation follows a set of rules, but the user running it can alter those rules at a whim or change variables while the simulation is in motion. That's what god is to me. But even in that case, god would have to follow a different set of rules and be constrained in some way by a bigger reality. To claim that there exist things that are not bound to any law or system is essentially nonsensical to me.

The biggest problem I've always had with the concept of a god is that pain and suffering exist in this world. So either God is not omnipotent, or not good. The argument against this is either that the divine plan is ineffable, or that suffering is a requirement for free will. I find both of those to be incredibly weak arguments. Even if I came to the logical conclusion that there was a god, how would I know what he wanted?

Morality

Morality has always been a difficult subject for me, mostly because it's hard to build from base principles. Most of the time, I just do like society tells me to, or follow my own particular compulsions. There's also a difference between what I think is morally right and what I feel to be right - a difference that I think is accounted for by the contrast between how I was raised and what an intellectual working through of things produces.

So if you start with the foundation laid down in the philosophy section, namely that existence is ultimately arbitrary and moral absolutes don't exist, where do you go from there? This is the basic problem with any atheistic stance. Trying to reconcile this brings people to many different conclusions. Evolutionary ethics says that we should do what we're programmed to do. An ethical egoist would say that we should do what's in our best self-interest. A humanist would say that we should do what's best for humans.

Objectivism starts with "You have chosen to be alive" as its founding principle, and works up from there. I've been thinking about this lately, mostly as a result of playing Bioshock and idly thinking about rereading The Fountainhead before remembering how much I hated it. At any rate, we choose to live, and we have to accept that choice as moral because without it, we're left with nonexistence. The decision to live is therefore presumptively privileged over not living.

The problem with this is that there are a huge host of situations where choosing your own life is clearly the wrong choice. A hypothetical situation would be choosing to add several years onto your own life in exchange for the murder of a few hundred other people. A system of morality that lacks empathy can only really work in the context of a totalitarian society, because utterly selfish people would naturally start to work against each other.

So when I think about the statement "You have chosen to be alive", I have to modify it somewhat, because "alive" is a somewhat stupid term. There are things that we would say are alive which are incapable of thought (and therefore choice). There are also things that I would consider capable of thinking but which are also not alive, such as a hypothetical computer simulation of the human brain. So in place of "alive", I need to insert something else - like "conscious". But the statement then becomes strictly untrue, because at least once a day I choose to sleep and lose consciousness.

You can probably see where I'm going with this. If I accept that particular discontinuity, then why shouldn't I accept others? Hypothetically, if I were able to destructively upload my brain into a computer, there might be no more of a discontinuity between that existence and sleep. The person who wakes up the next day is more like another instance of the same person than a strict continuity, especially given how much goes on in the brain during sleep that's completely outside of any conscious control. And yet these different instances don't engage in sabotage (like, say, living for the moment instead of the long term). It's a little odd to think of myself as a series of people, but I think it's instructive. The phrase above becomes not "You choose to live" but "You choose to be conscious when it's viable and it won't harm the collective". This is something that I can accept as fundamentally true, because that's the result of the conditions that I find myself in.

Mostly this is my attempt to reconcile the jump from "Care about yourself" to "Care about others". I don't think it logically works.

Wednesday, April 7, 2010

What I Believe: Part 1

Okay, so I figured it's time that I set about on a new project, besides the 652 project. The title should be pretty self-explanatory, but here are the reasons that I'm embarking on it. First, this serves as a sort of time capsule for me. In five to ten years, I'll be able to look back on this series of posts and figure out who I was. Everything I've put on the net has been a sort of time capsule for me; even now, there's a distinct thrill in calling up old articles to see what I thought. The second reason is that it'll help me to figure out what it is that I actually believe. I've long agreed with Socrates that the unexamined life is not worth living. So the third reason is that once I have all this down in writing, I can take quick mental shortcuts, or look things up instead of having to actually think.

Philosophy

Let's start with my axioms. In parentheses are the philosophical concepts that are most closely related to those beliefs. This whole section also comes with the caveat that I generally think philosophy is a bunch of wankery full of useless distinctions.

1. Reality exists. (Philosophical Realism)
2. I exist. (Cogito ergo sum)
3. My memory and senses are mostly reliable. (Critical Realism)
4. Logic is infallible. (Rationalism)

Of those, I think maybe number 4 needs the most explanation. Logic gets a bad rap, mostly because of Spock. Logic isn't absolutely opposed to emotion, and I'm not saying that it's the king of decision making. But logic, as a system, absolutely cannot fail. If A = B, and B = C, then A = C. There are obviously things that can't be proven logically (see Gödel's incompleteness theorems), but the basic axiomatic statement I'm making is that things, once established, do not change unless you got it wrong the first time (which is very probable).

From 3 and 4, I get another theory: that a combination of senses and logic can actually tell me things about the world (Empiricism) (5). (From there comes a disbelief in a large number of things, which are mostly defined by their innate inability to be proven, such as miracles and supernatural forces. If something supernatural were able to be explained by science, then it would cease to be supernatural. Nonexistence is one of those things that can't always be disproven.)

Eventually a study of existence seems to reveal (to me) that the whole of it is made up of stuff (energy, matter, etc.) which follows laws (Metaphysical naturalism) (6). This would imply that things happen because of prior events, including conscious choices (Determinism) (7). It would also imply that consciousness itself is somehow physical in nature (Materialism) (8).

In summation: free will is an illusion, consciousness is some kind of emergent phenomenon, and the universe is composed entirely of things which are natural and driven by laws which are likewise natural. There are some other philosophical questions on which I also hold beliefs, but which are somewhat less connected to the main axioms and derived truths.

1. The strong Church-Turing thesis is true.
2. My experience of consciousness is roughly equivalent to the experience of consciousness as experienced by other people.
3. I exist as the end result of mostly randomness.
4. Reality as we know it is (probably) virtual.
5. There are no moral absolutes.

Upcoming parts will probably include Morality, Spirituality, and Politics.

Tuesday, April 6, 2010

Videogame Meta-narratives

Alright, so I just got done with Assassin's Creed, and while jumping across the rooftops of Damascus and stabbing people in the throat is great fun, what I found really interesting is the story.

Spoilers Follow for Assassin's Creed and Bioshock

The story in Assassin's Creed is about a guy wandering through the 12th-century Holy Land and killing lots of bad guys. This is where about 90% of the game takes place. The frame story, on the other hand, takes place in the modern day; a twenty-five-year-old shut-in is reliving the genetic memories of his ancestor. While frame stories aren't at all uncommon in literature (Canterbury Tales, Arabian Nights, Frankenstein) or movies (The Usual Suspects, The Princess Bride) or television (How I Met Your Mother), you don't see them much in videogames.

This is a real shame, because having a narrative frame adds a lot to the interactivity. Videogames have never been real big on immersion for two reasons; first, the user-interface gets in the way, and second, the player is in at least partial control. Adding a frame narrative can solve some of those problems. So in Assassin's Creed, the reason you have a UI is that Desmond needs a UI to pilot the memory program. If you see a glitch, or something that's unrealistic, you can justify it as a side effect of the memory-reliving machine. This is used a few times in the second game, where the instruction manual or other characters talk about flaws in their memory-reliving machine that were fixed this time. For example, in the first game it was impossible to swim, which is chalked up to being a bug. The dialog sometimes shifts into full-blown Italian (the sequel takes place in Italy) which is again an "unintended" effect of the translation software.

Penning in a meta-narrative is a very post-modern thing to do. It's not enough to just present the story, there's a real need to present the story in such a way that we acknowledge that it's a story. Everything has to be done with a wink and a nod, because irony is hip now, and the worst thing that you can do is be earnest about your story. If done well, the effect can be great, as it allows a deeper immersion. All of the artifacts of story-telling - small casts, synchronicity, production constraints, symbolism - are present because it's a story, so is it really so implausible that our lead character is named Hiro Protagonist, or that it turns out that a series of coincidences have led to the killer being the main character's long lost father? Metanarratives excuse inherent artificiality with a wink and a nod.

Ubisoft must like this conceit, because they've used it twice; Prince of Persia basically takes the form of the Prince recounting his adventure to someone. In a similar way to Assassin's Creed, you're playing through a memory. Only this time, the story you're playing through is explicitly being told as a story. When you die in Prince of Persia, you hear the Prince say, "No no, that's not how it happened, let me start over," and you reappear at the last checkpoint. From a narrative standpoint, I think this is better than having your previous progress undone and reset to an arbitrary place without comment or explanation.

(Incidentally, I think that this is one of the funniest parts of Prince of Persia, because it means that - depending on how you play - he's one of the worst story-tellers of all time. "And then I swung from a post and fell into a pit of spikes. Wait, that's not how it happened," or "And I was fighting this huge sand monster and he stabbed me through the heart. Wait, that's not how it happened.")

There's another game I played recently, Bioshock, that does something similar. Huge Spoilers Follow. In Bioshock, you're playing a faceless character with no past, similar to many other shooters. Narratively, shooters use this as a way to get the player to associate more closely with the character - it's also one of the reasons that cutscenes have started to be phased out. This started around the introduction of Half-Life, because of the greater sense of immersion it allows. Sometimes the player will be forced to watch as something happens, but they'll still be able to move around and be in full control the whole time.

So in Bioshock, you follow the directions of a guy named Atlas, who's trying to get you to kill a guy named Andrew Ryan. There's an Art Deco aesthetic, banter about Objectivist philosophy, and some creepy moments. So you finally get to the end of the game and meet Andrew Ryan, and it's revealed to you that you've been under mind control the whole time. The entire linear path of the game was only followed by you because someone was saying code phrases to control you. Then Andrew Ryan tells you to kill him, which you do (in a cutscene), and you gain back your free will through applied science and go to kill the real bad guy (or maybe just the worse guy).

This was all very startling, because as the player you've been doing these things and following these orders because that's what the game wants you to do; if you try to disobey orders, nothing really happens because the game isn't designed for that. Bioshock is completely linear; there's no choice in what events will happen, or in what order they'll happen. In other words, it's sort of the perfect meta-narrative, because it calls attention to the narrative constraints and at the same time justifies them. I would like to see more of this, because it's the sort of thing that helps videogames develop as a medium.

On the other hand, if you're sticking to a meta-narrative, you have to be careful about how you use it. In Bioshock, the last third of the game is somewhat of a letdown, because the game doesn't really change once you have free will. You're still following a voice on a radio down linear levels. And in Assassin's Creed, even when you step out of the Animus, and the UI disappears, Desmond is being controlled from the third person perspective.

Tuesday, March 23, 2010

Cannibalism

I've always said that one of the great and terrifying things about the internet is that it allows all of the niche people to find each other. This means chat rooms and message boards that 99% of the population can't relate to, and online stores where you can buy pretty much anything.

Cannibalism, for one reason or another, has never been outlawed in 49 of the 50 states (Idaho being the exception). It's also something that crops up quite a bit in pop culture, usually when there needs to be some way for the antagonist to stand out - see Silence of the Lambs or The Hills Have Eyes. Alternately, there are stories - both fiction and non-fiction - about people who have had to resort to cannibalism to stay alive. Part of the reason I see a business opportunity in cannibalism is that it's one of the few taboos that we have left. If our society has proven anything, it's that we love to break our taboos.

So if you want to sell human flesh for consumption, I see two basic ways to go about it: either you open up a restaurant, or you sell the meat online. But before I go over the benefits and drawbacks to that, let's talk law.

While cannibalism itself might not be illegal, there are a huge host of laws concerning what's to be done with human remains, not to mention food safety laws. What this basically means is that you will need someone legally allowed to handle human remains (a list which includes morticians, policemen, medical examiners, forensic specialists, and other people in the medical field). The other problem is that it's illegal to sell or buy human remains. So a business that is established with just that purpose runs into a little bit of trouble. One of the standard tricks of prostitution is to redefine the service being performed into something else that's of no legal consequence. A masseuse who gives happy endings is being paid for her time, not for the sexual act. This isn't a very convincing argument, but it has kept prostitutes and other sex workers from jail time if the judge is lenient enough. Translating that to the sale of flesh, you would have to advertise it as complimentary to something else - like, say, a free gift that comes with a t-shirt.

So let's say that you want to start a restaurant. Your biggest hurdle is probably finding a location, and once you have one, keeping that location. I imagine that especially at the beginning, public pressure would be on you to move out once people realized what was going on. There would be news stories, protests, etc. In addition to that, you would need more staff - a chef, waiters, that kind of thing - and all of them would have to be okay with the idea of cannibalism and the reality of working with human remains every day. A restaurant also has a physical location, which means that you're cutting yourself off from a large amount of the population. However, there is some precedent in New York, where a chef made cheese out of his wife's breast milk. The New York Department of Health shut that down fairly quickly (and he was just giving it away, not selling it), but you can see the strategy that would have to be taken; the restaurant would sell other dishes as its main product, with the long pig being a specialty to draw in other business.

The other option is the internet. The great thing about the internet is that it's reasonably anonymous, which is why pretty much every dark thought that's ever passed through someone's head has its own private place online. This includes all manner of niche things - this is why Rule 34 exists. Because of the anonymity, people would be able to buy the meat without feeling social stigma for breaking the cannibalism taboo. Because the internet has no physical location, the business would be able to extend across the country, assuming that relevant laws about transporting human remains across state lines could be properly observed. And since it wouldn't have to be in a place with a large population, the business could be incorporated in whichever state has the most lenient laws on human remains.

So here comes the next inevitable question, which you might have been wondering since the beginning of this post: where is this flesh coming from? There are a few options that don't actually involve having someone die. Tumors get removed all the time, and limbs are occasionally unable to be reattached. The problem here is in finding someone who would be willing to sell those things to the company for someone else to eat. I have no doubt that those people exist, but probably not in enough quantity or regularity to base a business off of them. I feel it wouldn't hurt to pursue people with body integrity identity disorder, but again, there's the issue of quantity and regularity. This option is good, because no one can claim that the business is built on death.

A second option is to use flesh grown in labs. Since I'm not a biologist, I can't really speak to how difficult it would be to actually grow muscle (assuming that's what people want to eat). Tengion is already growing artificial organs for transplant. At any rate, it's something that will become easier with time, given that there are a huge number of medical technologies that result from the basic ability to grow parts of people. There would be less of a question about the safety and health issues of eating the meat. It would also remove some of the stigma of cannibalism, because it was never part of a person. However, this is a question of feasibility, because even if it's remotely possible now, it's sure to be damn expensive. In another thirty years, it might be possible to do from a garage.

And finally, we come to dead people. Dead people are a good source of flesh mostly because there are so many of them. Besides, through organ donor programs there are a huge number of people who are willing to promise away parts of their body for no monetary compensation. It's not too ridiculous to believe that people would promise away their flesh in return for a few hundred dollars, especially young people who are strapped for cash. Bringing contracts into the mix also allows for the use of more elaborate legal constructions which help to ensure that no lawsuits are filed against the business. Seeing as organ donation doesn't normally take the edible parts, there could be quite a bit of overlap between the two practices; both of them use parts of the body that would otherwise go to waste, and having them side by side allows for beneficial comparisons.

I think the bigger question here is whether the demand actually exists to support the costs that a business of this nature would entail, but I suppose that's a question that will only be answered when someone makes the effort.

Thursday, March 18, 2010

Are people digital or analog?

So I was watching the latest episode of Caprica, which features a digitized person trapped inside of a computer chip. The computer scientists were talking about why they were unable to make a copy of the chip, and the reason that they come up with is that the chip encodes something analog instead of digital.

But this makes no sense. The whole reason that computer chips work at all is that they're digital in nature; it all comes down to 1s and 0s. Though there were electronic analog computers, they were used mostly for solving problems that were also analog; circuits would be set up to represent hydraulic pressure. There are a huge number of problems with analog computing, which is why we don't use analog computers anymore. A possible fan wank explanation for the show would be that the chips they're using are actually some kind of hypereffective analog device, or that their computer chips run in a way that's completely different from how ours run, or that this computer chip was corrupted in such a way that it behaves in an analog fashion (though this is also stupid). But it brought up an interesting question for me, mostly because the underlying assumption is that people are analog. And I really like interesting questions.

So obviously the question will eventually come down to how the brain works. You might think people are analog simply because they're complex; people certainly seem to be partly irrational*. I know I'm making a large leap here by arguing that people are nothing more than their brains, and that this precludes the possibility of the soul or some kind of other outside force, but that's for another time. As for what neuroscience says about the nature of the brain, my quick Googling of the subject reveals quite a bit of disagreement on that subject. Since I'm not a neuroscientist, obviously my opinion on the subject holds little weight.

However, having a blog is all about making observations that hold no weight, so I'll go ahead with it. The brain is a feedback control mechanism; it has inputs and outputs, does something with them, and "controls" the body. For the purposes of digitization, it almost doesn't matter whether the brain is analog or not. Digitizing something that's analog means a loss of fidelity, but at a certain point that loss is so negligible that it's not worth worrying about. While that might well hold true for something like music (which, because of the way our eardrums communicate with our brain, ends up being digital anyway), it's another thing entirely to talk about the very essence of your being.
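
To put a number on "negligible," here's a small illustration of what digitizing an analog signal costs: sample a sine wave and round each sample to a fixed number of bits, then look at the worst-case error. The 440 Hz tone and sample count are arbitrary; the point is just how fast the error shrinks as you add bits.

```python
# Toy illustration: "digitizing" an analog signal and measuring the loss.
# More bits per sample -> smaller quantization error, quickly negligible.
import math

def quantize(x, bits):
    """Round x (assumed in [-1, 1]) to the nearest of 2**bits levels."""
    step = 2.0 / (2 ** bits - 1)
    return round(x / step) * step

def max_error(bits, samples=10_000):
    worst = 0.0
    for n in range(samples):
        t = n / samples
        analog = math.sin(2 * math.pi * 440 * t)   # a 440 Hz "analog" tone
        worst = max(worst, abs(analog - quantize(analog, bits)))
    return worst

for bits in (4, 8, 16):
    print(f"{bits:2d} bits -> worst-case error {max_error(bits):.6f}")
# At 16 bits (CD quality) the worst-case error is already around 1.5e-5
# of full scale.
```

The open question, if you buy the analogy, is what the equivalent "bit depth" for a brain would have to be before the loss stopped mattering.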

*This is a pun; π is analog, 3.14 is digital. Digital computers, for example, are forced to use approximations, while analog computers could theoretically use the actual irrational numbers (but they can't, because of noise).

Thursday, March 11, 2010

Will Google Fiber really help?

If you're the sort of person who reads this blog, you probably know that Duluth is in the running to become a test city for Google's proposed ultra-high-speed fiber experiment. At first, I really liked the idea of living in a city with 1-gigabit connection speeds, aside from the fact that Google has no real experience running a local ISP (they already control huge amounts of fiber across the country, but it's backbone stuff) and a history of privacy violations (though as I've stated before, privacy is overrated).

But my fair city, in one of their many attempts to show their worth, decided to organize an idea contest with a $500 prize. Since I'm poor, and I consider myself to be smart, I decided that I would give it a shot. Here's the big problem that I hadn't really considered though: the 1 gigabit speeds would only be local. So if I were communicating with a server here in town, I would get that full experience, but anywhere else in the country would still be roughly the same speed because of the bottlenecks on their end.

The possibilities for ultra-high-speed are immense. High-definition video is what most people think of right off the bat; the high-def you see on YouTube comes in at about 5 Mbps, meaning that it's not even close to the real thing (Blu-ray has a bitrate of 40 Mbps). At 1 Gbps, a whole movie can be downloaded in about four minutes. I'll confess that I've done a little movie pirating, and waiting a couple of hours for DVD-quality video is one of the reasons that owning a copy of movies isn't something that's done a lot (either legally or illegally). Fiber would allow a business model where people actually download movies for keeps - though it would probably be hamstrung by DRM. If your speeds are fast enough, there's no real need to ever download the movies in the first place; some large company would have a database table showing which movies you own, and you would be able to watch your movies from any browser with a fast enough connection. But there's not even any reason for notional ownership once you have that technology, because "rental" is instant, especially if the payment scheme is seamless. The paradigm will shift from "ownership" to "access". Movie rental places are already trying to do this, Netflix chief among them with their on-demand service.
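
That four-minute figure is easy to sanity-check with a couple of lines of arithmetic, assuming the local link is the only bottleneck (which, as I get into below, is exactly the catch):

```python
# Back-of-the-envelope check on the download times above.
def download_minutes(runtime_minutes, bitrate_mbps, link_gbps):
    size_megabits = runtime_minutes * 60 * bitrate_mbps   # total size of the stream
    link_mbps = link_gbps * 1000                          # link speed in megabits/s
    return size_megabits / link_mbps / 60                 # download time in minutes

print(download_minutes(120, 40, 1.0))    # 2-hour Blu-ray-quality movie over 1 Gbps: ~4.8 min
print(download_minutes(120, 40, 0.005))  # the same movie over a 5 Mbps cable line: ~960 min
```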

Aside from raw content, of which video is definitely the most bandwidth-intensive, the other lure of fiber is that it would allow the use of applications which are currently confined to your operating system. This isn't such a big shift, because most of the time when you "buy software" what you're really buying is a license to use the software (depending on your EULA, natch). So when speeds get high enough, you'll buy the license without any CDs, DVDs, or downloads, and whenever you want to use the program it's as fast as logging in to check your e-mail. But as stated above, once you're at that point "ownership" is entirely notional, and so you might as well just rent Final Cut Pro from Apple instead of paying for it normally. This is more or less the de jure reality, which goes counter to how we actually think about application ownership (a traditional EULA specifies no termination date).

Here's the problem with Google Fiber: no matter what town Google chooses, the population won't be large enough for the application and media giants to build the necessary infrastructure. If Duluth receives the contract, will Apple build a new data center here specifically so that Duluthians will be able to download a high-def movie in four minutes? Will they modify their existing data centers to push us high-def movies down whatever fiber they own or lease? Apple is perhaps a bad example, because they're in contention with Google, but the point stands; there's too little to gain by catering to a city the size of Duluth. Maybe I'm wrong, and the "last mile" problem is really all there is to it; maybe the huge rights holders will hop on the bandwagon right away. But I'm really curious as to whether this would actually change how we browse.

Friday, March 5, 2010

Free Will

So free will doesn't exist; it's just a convincing illusion. Here's why.

Scientists have been writing down sets of rules to describe the workings of the universe for as long as there have been scientists. These rules help us extrapolate what will happen next in a given situation, and if those extrapolations turn out to be wrong, the scientists will run to their chalkboards and write down new rules until the whole system of rules conforms to what we know about reality.

So the universe would appear to be rule based; most people will agree to this (in its general form). But if it's true that everything in the universe is rule based, then it also means that people must be based on rules. This goes against what we feel to be true about ourselves. This gut feeling exists, I think, because it's too difficult to extrapolate both our thoughts and our actions. In part this is because the brain's "processing power" is taken up by thinking about the brain when we try to do this, and in part it's because our information about the brain is incomplete in even the best of circumstances (e.g. under an fMRI). Even the best techniques of today can't predict a person's actions at even the most rudimentary level.

If the universe is rule based, then the people that inhabit it must also be rule based, and strict adherence to the rules means that any choice is essentially fated to happen - or, if you buy into some interpretations of quantum mechanics, the "choice" is not under your control but instead the result of electron spin etc. Consciousness itself is an illusion.

Even if things like consciousness and free will are illusory, it doesn't mean that they aren't useful. Obviously the justice system would have to work very differently if people thought that things were not your fault because there is no real "you" to speak of. Our society is founded on the belief that some things matter and others don't, and without these constructs society requires remodeling (especially if morality is itself a construct).

One of the reasons that I don't like writing about philosophical issues is that I'm very aware that they've been rehashed a thousand times before, and that I'm unable to actually add anything to the global, scholarly conversation. I actually feel this way about a lot of things; there are a large number of people who are much smarter than I, and typing away at my computer serves only selfish purposes. But to examine our beliefs requires conversation, and since I have no one to really talk about these things with, it needs to go out to the internet instead.

Thursday, March 4, 2010

Political Ideology and Free Will

So the more I think about it, the more I think that one of the basic differences between liberals and conservatives is a belief in free will. (Disclaimer: liberal and conservative are two labels which don't really map properly as a spectrum of belief, but I'll be talking about two general viewpoints on a number of issues)

I mostly arrived at this view by thinking about the approach those two camps take to the justice system. The conservative viewpoint on criminal punishment is that it should be punitive; if we make a punishment strong enough and we're "tough on crime", people will stop committing crimes. Criminals lack the willpower to make the right choices. The liberal viewpoint, on the other hand, stresses reformation and changing the person to be different. This is why they tend to be softer; it's not about second chances so much as it is about changing the person into someone who doesn't engage in criminal acts. People can be changed, not through acts of will but by conscious shaping by outside forces.

Another point of contention between liberals and conservatives is what some people would derisively call the "nanny state". This applies to things like gun control, drug enforcement, health insurance, safety protocols, and so on. Free will also explains this difference; liberals believe that people are literally not in control of themselves. Taxes on alcohol and cigarettes and bans on most other addictive drugs result from the belief that addicts are not capable of controlling their actions; the brain is a feedback control mechanism, and restricting the inputs results in different outputs. But for the conservatives, this is more a matter of will - if you don't want to die from lung cancer, you should stop smoking. If you don't want to get fat, stop eating so much. If free will exists, and humans are under their own agency, then these are personal failings rather than the result of outside conditions.

And finally, there comes the issue of gay rights. Liberals would have you believe (in the strong form) that homosexuality is something that you're born with or (in the weak form) something that occurs because of uncontrollable environmental factors. Conservatives will say that it's a choice. What more needs to be said on that issue?

It has long confused me why the conservative cause marries two seemingly different ideologies. Christian conservatism stresses a restriction on immoral things, while the free-market ideologues espouse the theory that people must be free to choose. Why should I be free to pay my workers a freely agreed-upon but unjust amount of money for their work, but not free to buy a magazine with lewd images in it? For me, this seems an inherent contradiction within the party. It seemed at first that there were just two groups that bound themselves together so as not to split the vote, another unsatisfactory result of the two-party system. Then I thought that perhaps this was too cynical, and that there had to be something which bound them together. I now think that this binding trait might be philosophical.

If free will does not exist, then the market is going to behave in certain ways depending on what all the variables within the market are, and what restrictions are placed on the market by both technologies and governmental interference. In this way, the market is no different than anything else in the universe. It is therefore in the best interests of the people (embodied in the government) to restrict the market in such a way that it does good things for the people (in the form of new technologies, a good distribution of resources, health and safety for workers, etc.) instead of bad things (pollution, child labor, defective products).

However, if free will does exist, then the market is instead determined by how the actors in the market choose to behave. Companies will stop polluting because they are good and honest instead of because they have incentives to stop. If you believe in free will, then I think you almost have to believe that people are by their nature good, or if not that, then at least you must believe that good will prevail in the end. So perhaps the lack of restrictions on the market shows that while individuals are not to be trusted in the area of personal choices, large companies are to be trusted on large issues.

Perhaps I'm simplifying the issues too much.

Tuesday, March 2, 2010

Digital Natives

So there's this theory that people of my generation have some huge advantage with technology because we were born into it; the buzzword is "digital native". The idea is that because we were exposed to digital technology while growing up, our brains have been wired differently, our neural networks better able to respond to fast visual stimuli.

This isn't bunk - there's some good science behind it - but where this theory fails is in assuming that there's some sort of concrete divide between those who grew up on technology and those who didn't. Technology doesn't work like that. Every single year, advancements are being made in computers. If we work from the assumption that technology actually does alter the mind, and that the brain becomes less plastic as we age, then we also have to pay attention to the fact that the current generation of "young people" have been exposed to vastly different levels of technology throughout their lives.

I was born in 1986, which means that the internet really started to move into full swing when I was nine or ten: 1995 was the year HTML was standardized and the true World Wide Web began its expansion. When I was 15, Google finally came to town and became the powerhouse of search, starting the slow transformation of the web into a pile of information to be sifted through rather than a series of interlinked pages. During my first year of college, Facebook came out, and social media started to hit it big.

So that's roughly how milestones in technology map onto someone my age. But for someone just a few years younger or a few years older, those milestones would look very different by virtue of having arrived in a different developmental context. For someone who's 15 right now, like my cousins, Google has been around for as long as they've been able to read, and social media will be around for their entire high school experience. If we're going so far as to say that technologies cause changes in the brain, can we really discount how different those developmental timelines are?

Yet when people, especially those over 30, talk about "digital natives" what they're really referring to is a group of people with different habits from them; habits that they don't really understand, and which they see as less valuable than the status quo. For kids born today, it's very likely that their entire life will be online, pictures of them posted to Flickr, Picasa, Facebook, etc. at every step of their life. We're entering into the era of full recording, where everything you do is accompanied by a stream of data.

I'm not going to mount a defense of the digital lifestyle, mostly because that's a little useless; technology keeps going, and any such defense would have to be constantly updated to explain why new thing X is not so bad. But I can at least look at the recommendations that are being made by those people who would have you believe that the Internet, and everything that comes with it, is a bad thing. This camp puts out fear-based books like iBrain, The Dumbest Generation, and The Cult of the Amateur. These books are written not to help understand young people, but to comfort the old.

The most common thing they suggest is a move away from the internet. If people just spent more time face-to-face, and sat down with each other to have actual conversations, we wouldn't have this problem of narcissism, echo-chambers, amateurism, piracy, or immaturity. The argument, in essence, is this: the old ways worked, why would we change them?

This betrays a fundamental misunderstanding of both history and human nature. On the history front: those Baby Boomers who are making these claims grew up in an era of ubiquitous television. There were reactionaries then (and even now) who claimed that television would rot the mind and create a nation of illiterates. When recorded music made its debut, there were people who wondered why anyone would want to listen to something that wasn't live; and when recordings started to become popular, those same people lamented that live music was becoming harder to find. Every time any job is automated, there are those people who seem to think that the amount of work in the world is finite, and that this is a permanent net loss for employment rates.

And yet the world continues on. Any worthwhile technology is unstoppable, because it appeals to people in some way; it increases value, provides entertainment, or makes someone money. Turning back the clock to a "simpler time" is simply impossible, and nearly every reactionary claim about some new technology has proven to be unfounded.

Besides this argument from history, there is this argument from human nature; simply telling people that they shouldn't do something is never enough if that thing has some sort of reward for them. Websites and social media provide a psychological reward, as well as offering utility. Telling people "you would be happier if you stopped" is not good enough; to change people, you need to offer them a stiff punishment or a greater reward. This is why we have taxes, and why we punish people for their crimes.

Here's one of the difficult issues then: there is not some grand committee somewhere deciding how the world will be structured. There is no Council on Technology that decides what will or will not be made, and what will or will not be popular. Instead, the path of technology is built mostly by human nature. Social media are evolving along the dual lines of customer satisfaction and profitability. Profitability, in almost all cases, comes from advertising, which is itself built around human nature: getting people to do things they wouldn't otherwise do.

And if advertising has taught us anything, it's that getting people to change their habits is usually something that needs to be accomplished by offering them rewards or punishments. So if you feel that people are going down the wrong path, the best way to convince them of the error of their ways is to set up a different system of thought that's more rewarding.

Monday, March 1, 2010

The Future of Narcissism

Alright, so my old friend Travis necroed a note I had posted on Facebook some four years ago about the disposability of content in the modern age. Because it was short, here's the entire note reposted:
Most people don't realize this, but we live in what used to be called the future.

Don't believe me? It's true. Historians are already prematurely calling this the Digital Age, because it can at times seem like the whole world is online and connected to your fingertips. Since we've officially entered into the Web 2.0 (that's a buzzword that you can show off to your boss with) era, there's been a massive outpouring of words, pictures, and videos of all shapes and sizes.

The problem I have with this is two-fold.

First, the noise has risen to stratospheric levels relative to the signal. For every piece of useful information, there are a hundred pictures of someone's cat. For every scrap of genuine human insight, there are a hundred teenage girls bitching about a hundred other teenage girls. It's sometimes possible to tell at first glance what is and what isn't time-wasting garbage, but the general clues of misspelled words and poor web-formatting aren't always enough.

Second, our digital media have very poor staying power, if any. There was a time in human history when everything that was written down was important, because writing itself was expensive. Papyrus kept well and can still be read today, whereas our computers don't come with floppy drives anymore, and the term paper you wrote last semester can't be opened on your new computer. If your parents made a Betamax home movie, chances are it would be incredibly difficult for you to find a way to play it.

It may not matter to you now, but this era in human history, this Digital Age, is leaving nothing of cultural value behind. There will be too much sewage for the historians to wade through, and reviving old technology from the dead will be too much work. This is, perhaps, the cost of cultural technology; because everyone can be heard, no one can be heard; because it is easy to create, it is easy to lose.
(Everything from this blog is auto-imported into Facebook and Google Buzz. If you're reading this post at one of those places, this is your warning: I like to talk about things that aren't really all that interesting.)

Anyway, in some hypothetical future where human society has collapsed and been rebuilt, and future historians/anthropologists/archeologists are looking through the remains of our society, they're going to run into a few problems, as stated above. Hardware and software keep shifting through phases of adoption and obsolescence, which means that the effective lifespan of any digital work isn't really all that long - even if it's still on a disk, the odds of the software and hardware supporting that file on that medium get lower and lower with every year.

But the other problem with time as it relates to digital media is that there are some hard limits on how long that stuff can even last. Here are some figures pulled from around the net:
  • Magnetic tape (VHS): 25 years
  • Optical media (CD/DVD): 100-200 years
  • Solid state (flash drives): 10 years
  • Paper: 100s of years
  • Plastic: indefinite

Now obviously there are a huge number of considerations involved in "how long something lasts". When I say that paper lasts for hundreds of years, that assumes ideal conditions: dry, cool, microbe-free environments, with acid-free paper. And for many of the things on that list, the time for decomposition is longer than the actual product in question has existed (I'm older than the DVD). In a way, that list is pretty pointless.

So what the future historians find will depend on how far into the future they are, and the extent of the destruction caused by whatever it was that wiped out all of the people. If they're a hundred years in the future, our history will be a strange sort of patchwork to them, the surviving evidence a scattering of discs and plastics. They'll be able to see all our movies and music, but none of our blogs and websites except whatever's been printed out. Of course, I have been known to underestimate the tenacity of those in the "soft sciences". It's also possible that someone will finally invent faster-than-light travel, move ahead of the outgoing radio signals, and learn about the past by intercepting those transmissions (or, without FTL, waiting for lucky signals to bounce off comets/asteroids/etc.).

Regardless of all that, my original point was about what they would find if they got access to a random sampling of all this information being produced by us. The answer, of course, is that they would get a giant load of irrelevant crap; part of the reason there's so much data floating around is that we, as a society, are falling further and further into the well of personalized content. This is standard long-tail distribution stuff; because it's free to read and write online, there's been a huge explosion of stuff. If you like model trains, you can find a whole host of websites, blogs, and forums dedicated to that one thing. The same goes for pretty much any subject on the face of the planet.

All of this is only really feasible because of search. Without search, the huge amount of data would be a confusing mess of hyperlinks. With it, the mess gets organized around whatever it is you were looking for.

That's all well and good, and there are many who would argue that this mess is the path to enlightenment. But the other side of this glut of information is that people isolate themselves into their particular interests, creating echo chambers that lock them out from the rest of the world. This has always happened, but online (where the vast majority of discourse takes place) the long tail (mostly) eliminates the need for conversational compromise.

That's where narcissism comes into play. On the web, you don't have to change anything about yourself, because you will always be able to find people who like you just the way you are. You can spew out whatever is on your mind, and odds are that at least a few people will find it interesting enough to read. In this way, people get turned in on themselves.

But this isn't anywhere near the endgame for narcissism. As I've theorized in my post The Future Will Be Customized, there will come a time in the future when pretty much every bit of media that you consume will be generated by artificial intelligence, synthesized to your preferences. This is a natural extension of long-tail dynamics; instead of stopping at a certain level of "nicheness", the tail continues on forever, until works are being produced that appeal to only a single person. It will happen because it's possible, and because there are economic/social/cultural benefits at every step of the process towards getting there.

So historians looking back on that future (which hasn't happened yet) will see media widening out to the point of oblivion. Eventually, reading a novel will tell you far more about the person it was crafted for than it will about the society that person inhabits. This also extends beyond the realm of fiction; Google News already gives me a customized feed of information based on what stories I've read in the past, gradually building up a news narrative specifically tailored to me.

This is the end game. People surround themselves with the world they think they want, cut off from everything that doesn't give a positive feedback. The machines don't take over through strength of arms or by holding our technology hostage, but instead by giving us exactly what we want.

Sunday, January 31, 2010

Exploiting Multiple Universe Time Travel

Let's say you're in a nondescript room with a box that can move things backwards in time. Every hour, on the hour, it moves whatever is in it to fifteen minutes ago, but offset by a few meters so that it ends up in a different box (the boxes are labeled). You have a thousand dollars.

In the stable-time-loop model of time travel, it is impossible for you to make money in this scenario. It's also impossible for you to make any changes at all, and is therefore only interesting insofar as it allows audiences to make observations about free will. This is the model used by The Time Traveler's Wife, Terminator, and Twelve Monkeys.

(Edit: I came across this post while searching Google today, and I feel the need to clarify that statement about stable time loops; it's very possible to make money if you have one, because you can look up stocks or lotto numbers or use what's referred to as Time Loop Logic. However, in the scenario presented, it is indeed impossible.)

In the multiverse model, each instance of travelling back in time spawns a new universe. (A brief note on terminology: when I say universe, I actually mean cosmos, because the universe is by definition everything. When I say multiverse, I mean the collection of all cosmoses.) Given what you have to work with, it's technically possible to make money, but you would never do it.

Here is the stupid plan: put the money in the time travel box (from now on, T-box), and when it hits the hour, it'll be sent fifteen minutes backwards to the recipient box (R-box). But then, when the money comes into the R-box, you run back over to the T-box and pull that money out before it can be sent back, leaving you with twice as much money. This would seem to create a paradox, but since we're actually dealing with different universes, there's you-A in universe-A with no money, and you-B in universe-B with double the money.

First, this plan is stupid because it redistributes the money in a stupid way. Diminishing marginal utility says that losing the $1000 will hurt you-A more than gaining an extra $1000 will please you-B. This is the same reason you shouldn't bet a large chunk of your money on a fair double-or-nothing coin flip.
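To put a number on that, here's a minimal back-of-the-envelope sketch in Python; the $2000 bankroll and the use of log utility as a stand-in for diminishing marginal utility are illustrative assumptions, not anything from the original setup.

    # A fair double-or-nothing flip on $1000: zero expected dollars,
    # negative expected utility under a concave (log) utility curve.
    # Bankroll and utility function are assumptions made for illustration.
    from math import log

    bankroll = 2000.0   # hypothetical total money before the bet
    stake = 1000.0      # the $1000 that gets duplicated or lost

    ev_dollars = 0.5 * (bankroll + stake) + 0.5 * (bankroll - stake) - bankroll
    ev_utility = 0.5 * log(bankroll + stake) + 0.5 * log(bankroll - stake) - log(bankroll)

    print(f"expected change in dollars:  {ev_dollars:+.2f}")   # +0.00
    print(f"expected change in utility:  {ev_utility:+.4f}")   # about -0.14

The dollar expectation is exactly zero, but the utility expectation comes out negative; that gap is the sense in which you-A's loss outweighs you-B's gain.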

Second, it won't work because there's no incentive for you-A. He'll see that there's not any money in the R-box, so he'll pull his money out of the T-box before it disappears forever. Which means that there's never going to be a universe-B, or (depending on how the T-box works) universe-B won't be much different from universe-A.

There are a few ways to circumvent this; the first, if you're comfortable with yourself, is to just climb into the T-box and wind up in universe-B's R-box. Of course, making a profit this way is difficult, because you'd only be able to take as much stuff with you as would fit in the box, and you would have to share an identity with the other you. The various benefits include being able to halve your rent, having someone to play co-op games on the Xbox with, and being able to work phenomenal hours in comparison with the rest of the workforce.

The second circumvention method is to send information back in time. Since information is practically free (for small amounts, you only need a pencil and paper, and you get to keep the pencil when you're done), it's something that you would do without any incentive - unlike the example wherein you waste $1000, it has some chance of actually happening. The real problem is finding some bit of information that's worth more 15 minutes ago than it is at the time the T-box activates.

And here's where we have to take the thought experiment outside the idyllic realm of nondescript rooms and boxes. The first thought that comes to mind is lottery tickets. Fifteen minutes isn't a long enough window, because ticket sales for any of the numbered lottos close more than fifteen minutes before the drawing (generally one to two hours before, depending on what state you live in). However, a similar effect can be achieved with scratch-off games. Your course of events would look something like this:

1) Wait by the R-box to see if a sheet of results comes back. If it does, go to step 2b.
2a) Run to the nearest gas station and buy some scratch-off tickets (if you're too far away from one, you can coordinate this by phone with another person).
3a) Record all the numbers on the cards.
4a) Have the employee run them through to see if you've won (this is faster than actually using a penny and checking for yourself).
5a) Make a note of the winning numbers.
6a) Put the sheet of paper with your records in the T-box.

2b) Take the sheet of results to the nearest gas station.
3b) Only buy the winning tickets (most clerks, if it's not busy, will let you buy scratch cards out of sequence, especially if you act really superstitious).
4b) Profit!

Of course, this plan requires some initial investment, because you have to actually play the cards to know what they contain. Still, you've given yourself a huge edge: each time you use this system, there's a 50% chance that you get to skip the whole "losing" part of gambling, and most of us would be willing to risk $3 for a shot at hundreds of dollars.
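For a rough sense of how big that edge is, here's a toy expected-value comparison in Python. The ticket price, prize, and win probability are invented for illustration; real scratch-off odds vary by game.

    # Compare ordinary scratch-off play against the b-track, where the sheet
    # from the future tells you which tickets win. All figures are made up.
    TICKET_COST = 1.00   # hypothetical price per ticket
    PRIZE = 5.00         # hypothetical prize for a winning ticket
    WIN_PROB = 0.15      # hypothetical chance any given ticket wins
    TICKETS = 3          # the $3 "initial investment" from above

    # Blind play: pay for every ticket, win on a fraction of them.
    ev_blind = TICKETS * (WIN_PROB * PRIZE - TICKET_COST)

    # Informed play: only pay for (and cash in) the tickets known to win.
    ev_informed = TICKETS * WIN_PROB * (PRIZE - TICKET_COST)

    print(f"expected profit, blind play:    ${ev_blind:+.2f}")
    print(f"expected profit, informed play: ${ev_informed:+.2f}")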

But we can actually increase those odds considerably. At the point you hit 4b, you should have two things: a sheet that tells you which tickets to buy and some amount of cash. One of those things is now worthless to you. Here is our replacement step that we'll do instead of screaming "Profit!":

4b) Put the sheet of paper with your records in the T-box.

What happens now is that instead of two universes, one in which you gamble and one in which you win without gambling, there are a nearly infinite number of universes. In one of them, you gamble; in the rest, you win without gambling. This means that any time you try this system, actually having to gamble is a freak occurrence, because the versions of you on the b-track vastly outnumber the single version of you on the a-track.

(Note that step 4b should be moved down to 5b, and the new step 4b should be something like "If the piece of paper is getting unreadable, write all the numbers down on a new sheet of paper and put that one in the T-box". You'll note that if those steps were followed as-is, the piece of paper would keep aging with every cycle - a million cycles of 15 minutes is about 30 years. By copying the information onto a new sheet of paper, you can ensure that information is the only thing traveling. Of course, then you get into the problem of transcription errors, but whether that would actually happen depends on what you believe about the universe as far as chaos theory and quantum mechanics go - whether the physical laws mean that things are predetermined, and how quickly events diverge.)
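And to put a number on "freak occurrence", here's a toy branch count. The model (exactly one new universe per re-send of the sheet, over some assumed number of cycles) is purely illustrative.

    # Count a-track vs. b-track universes under the assumption that each
    # re-send of the sheet spawns exactly one new universe.
    cycles = 1000                    # hypothetical number of re-sends
    a_track = 1                      # the lone universe where you gambled
    b_track = cycles                 # universes where the sheet just arrived
    odds = a_track / (a_track + b_track)
    print(f"chance a randomly chosen 'you' had to gamble: {odds:.2%}")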

So using a single player, with a highly restrictive time machine, the best exploit is to send information back to yourself, especially if there's minimal cost associated with that information. Time to expand the scope. Let's say someone else finds this room, works out the principles of how this mysterious box works, and turns it into just another piece of consumer electronics, one that can translate anything of any size to another time and place.

Lotteries would immediately stop, as would most other games of chance, because the house edge would absolutely evaporate. The stock market would most likely stop serving any useful function, because information from the future would (in almost all realities) be flooding in. A large number of information services would fold, because it's only necessary to pay for an application once, at which point you simply send the source code back to before you paid for someone to create it. Crime would be stopped before it happened, because the date and time of nearly every murder, theft, arson, etc. would be known beforehand. There would also be a mass exodus to the past, where information about the future is even more valuable than in the present, and the further back you go the easier it is to fake an identity.

Of course, this assumes a reality which has experienced more instances of time travel than the initial reality. Arranged left to right, we could order these universes in order of causality, with the leftmost universe having been the root cause of time travel and having never actually experienced anything coming from the future, and the rightmost universe being one in which nothing has ever actually traveled to the past because every attempt has been interrupted by people from the future.

The possibility of time travel has often been disputed with the simple observation that if it were possible, we would be knee-deep in people from the future. However, in this model, we can clearly see that this doesn't prove anything; it merely means that we find ourselves in a universe further to the left.

Tuesday, January 26, 2010

The Weird Inversions of Singularity Fiction

Alright, first a definition of terms: the Singularity is sort of the Geekpocalypse. Only instead of dying, or going to heaven, we all become super intelligent and transcend into almost unimaginable beings of immense power. Supposedly, this will happen because of a large number of advances in technology - brain uploading, artificial intelligence, nanotech, bioengineering, etc. It will be incomprehensible to us in the same way that the Pythagorean Theorem is incomprehensible to a dog.

Personally, I'm sort of a soft believer in the Singularity. This whole concept of exponential growth is all well and good, but it requires a little too much faith for my tastes. The truth is, there's so much we don't know about consciousness. There are also hard limits on technological progress, which means that what looks like an exponential curve must actually be an S-curve. There's also a question of economics; it might be that AI is possible, but that we'll never reach the point where it's actually pursued. Or AI will take a lot of time to make better AI, so instead of an intelligence spike, it's a gradual upwards slide, and there's no future shock, just slow and steady progress. Or we might blow ourselves up, or get hit by an asteroid, or exterminated by extraterrestrials. All that aside, we do seem on track to be hitting some serious boosts in technology if the next hundred years are anything like the last hundred.

Anyway, I've been reading a lot of science fiction recently, much of it focused on the Singularity. There's a weird theme running through these books; as technology progresses, civilization starts to regress.

For some reason, governments start to melt away or dissolve completely. Instead, you have the ungovs of Marooned in Realtime, the phyles of The Diamond Age, and the synthetic groupings of Accelerando. The reasons for this breakdown vary. Either the government is too big and slow, or the corporations gradually take over, or the system of taxation becomes infeasible. Some authors take it farther than others - sometimes the future is only of companies and their shifting alliances, and sometimes it's nothing more than independent agents, connected to other people only through tenuous agreements. Society becomes tribal once again, only this time humans - and our descendants - don't move through the plains or forests, but through the sea of humanity.

Another throwback is currency. Historically, we moved from barter, to backed currencies, to fiat currencies. In Singularity fiction, this also warrants a regression; typically this comes in the form of reputation markets, where people spend goodwill (basically, their upvotes). That system seems really dumb to me, primarily because it needs the same sort of belief supporting it that fiat currency does. Only instead of being under the control of government, that wonderful institution whose primary occupation is existing, it would have to be under the control of a private corporation that might fold. The other two dominant narratives are an information currency (in Accelerando they use computerized people) or a return to barter (on the theory that computerization can eliminate all the costs associated with it). Barter makes a certain sense if you've already postulated that civilization is doomed: you have to trade in things with actual value, since anything useful can't very well be put into wide circulation, and if you're using something without intrinsic value as your currency, then you're back to a currency founded on belief anyway.

Okay, so why do the writers choose to do it this way? First and foremost, I should point out that a writer can only be so clever. That's why if you read (bad) detective fiction, the detective solves easy problems and everyone pretends that they're hard. Most hard scifi writers aren't geniuses; they read scientific papers, look at trends, and then try to write an interesting story that's within the bounds of reality. So perhaps the reason that systems seem to be regressing is that we don't really know what form civilization might take. A monarchy is at least easy to understand.

I guess the other reason this happens is that it provides a nice sort of mirroring effect; humanity moves from tribes to kingdoms to nation-states, then back downwards until we are tribes once more. It has a certain irony to it.

Tuesday, January 12, 2010

The Future Will Be Customized

The archaic model of production was artisan based. If you wanted shoes, you went to a cobbler, who would take measurements of your feet, and a few days later you would come back to pick up your order. If you were poor, you would buy shoes secondhand, or just go barefoot.

The old model is mass production, of the "any color so long as it's black" variety. Eventually this becomes something more like "any color so long as it's black, blue, or green", which gives enough variety for possessions to be relatively distinct, especially since the added cost of something like different colors is trivial. The stuff that actually has to be engineered requires larger costs, and so won't be made unless it has a large enough market.

Eventually a variety of technologies come together to vastly expand available markets and greatly lower the cost to enter those markets, which leads to the current model - long-tail distribution. iTunes can sell not just the big hits (like a physical music store would have to) but really obscure songs that only a hundred people would want to buy. They can do this because the cost of having a song in the store (because of smart searching and digital hierarchies) is practically nothing. In the same way, anyone can start up a website and start selling something, with a cost of about $50, and the search and delivery infrastructures mean that it can actually be worth their while. This leads to things like etsy.com, where people can (for virtually no cost) put their wares online and sell them. Instead of profit being located exclusively at the head of our market size graph, it exists further and further down the tail. Note that this model works better for things that don't require huge amounts of capital to produce, like works of art or small consumer goods. Something like a car still requires a factory, so you're less likely to be able to buy something exclusively suited to you.
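As a back-of-the-envelope illustration of that shift, here's a sketch in Python that assumes demand follows a Zipf-style power law; the exponent, shelf size, and catalog sizes are made-up numbers, not real iTunes figures.

    # How much of the total demand lives outside what a physical shelf can
    # hold, under a Zipf-like popularity curve. All parameters are invented.
    STORE_SHELF = 10_000   # hypothetical number of titles a physical store stocks

    def zipf_mass(first_rank, last_rank, exponent=1.0):
        """Total relative demand for items ranked first_rank..last_rank."""
        return sum(1.0 / rank ** exponent for rank in range(first_rank, last_rank + 1))

    for catalog in (10_000, 1_000_000, 10_000_000):
        head = zipf_mass(1, min(STORE_SHELF, catalog))
        tail = zipf_mass(STORE_SHELF + 1, catalog) if catalog > STORE_SHELF else 0.0
        print(f"catalog of {catalog:>10,} items: {tail / (head + tail):.0%} of demand is in the tail")

The bigger the catalog gets relative to what a shelf can hold, the larger the share of demand that lives out in the tail - which is the long-tail argument in miniature.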

The long-tail distribution of goods represents the direction for the future. Production costs for nearly everything are falling, due to improvements in technology. There are two technologies coming down the pipeline, whether it be ten years from now or a hundred, that will drop those costs to practically zero. Those two are artificial intelligence and nanotechnology.

Artificial intelligence will allow the dynamic generation of pure information commodities; books, movies, music, software, and so on. Once those abilities are up to the point of matching human quality, a large number of things will happen, but one of the biggest will be the near-infinite extension of the long tail. Right now it's not uncommon to find books that only a handful of people read. But with hard AI, there will be books only read by one person, custom crafted to that person's tastes. In the same way that Amazon takes in all of your reviews, ratings, and prior purchases to suggest things that you might like, an AI would be able to take in all of that information to create an entirely new work. Alternately, large commercial works would be able to build variability into their pieces - a comedic movie would be able to respond to the audience's laughter in the way a stage production can, for example. This complete customization applies not just to information goods, but services as well. Instead of sitting in a classroom with thirty other students, you would be able to sit in front of your media center and learn from an AI program which customizes itself to your individual needs and learning style.
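As a toy version of the first half of that pipeline - profiling your tastes from past ratings, before any generation happens - here's a minimal content-based matcher in Python. The titles, features, and ratings are invented, and a real recommender (Amazon's or anyone else's) is far more involved than this.

    # Build a taste profile from rated works, then score unrated works by
    # cosine similarity to that profile. Everything here is made up.
    from math import sqrt

    CATALOG = {
        "space opera A":   {"scifi": 1, "romance": 0, "humor": 1},
        "regency novel B": {"scifi": 0, "romance": 1, "humor": 0},
        "time-travel C":   {"scifi": 1, "romance": 1, "humor": 0},
    }

    def taste_profile(ratings):
        """Sum the feature vectors of rated works, weighted by each rating."""
        profile = {}
        for title, rating in ratings.items():
            for feature, value in CATALOG[title].items():
                profile[feature] = profile.get(feature, 0.0) + rating * value
        return profile

    def cosine(a, b):
        """Cosine similarity between two sparse feature vectors."""
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    my_ratings = {"space opera A": 5.0, "regency novel B": 1.0}
    profile = taste_profile(my_ratings)
    for title, features in CATALOG.items():
        if title not in my_ratings:
            print(f"{title}: match score {cosine(profile, features):.2f}")

A generative step would then use that same profile to steer what gets produced rather than merely what gets recommended.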

Nanotechnology means pretty much the same thing for physical goods. In the science fiction version, this means a powder that transforms nearly anything into nearly anything else, usually by rearranging individual atoms and molecules. That's a long way away; in the short run, nanotechnology means that nearly anything will be able to be produced as a one-off. We can do some of this now with 3D printers, assuming that you want to make something out of plastic and at a fairly low resolution. As time goes on, the resolution will keep getting better, and mixed materials will become available. Since information is the only important input, the same rules that will apply to things like movies and books will start to apply to physical things like cars, dinner plates, and so on. Instead of buying a car off the lot, you'll go to a dealership (or a large rapid production facility), have an AI figure out what your needs and desires are, and have a vehicle custom made for you. The same thing will happen to larger things like houses, as nearly all aspects of home construction are outsourced to cheap AI and robotics. Instead of finding a house suited to you, one will be custom built.

In the future, when this technology comes around - whether that's twenty or a hundred years away - nearly everything will be made on a case by case basis. Everything that you wear, watch, and use will be custom made. The complications that arise from this are numerous; there's already been talk that with the availability of the internet, people are segregating themselves into diverse groups, reducing our ability to get along as a society. When we no longer have even our mass produced goods in common, and a culture of one, what happens then?