Tuesday, April 27, 2010

Arizona Nazis

So there's a new bill out of Arizona which, along with a number of other measures, (basically) requires people to carry their papers on them at all times in order to aid that state's police in cracking down on illegal immigration.

Cue comparisons to the Nazis.

There's a famous quote by Ben Franklin, "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." This has always struck me as a horribly vindictive statement. There are always trade-offs between liberty and safety; the real issue is managing the various exchanges so that you don't come out with a net loss, and the rate of exchange really depends on the individual person. It might be that Franklin was using careful wording when he said "essential liberty", but then again, I'm not convinced that the phrase "essential liberty" is all that meaningful. As the risk of getting stabbed in the back approaches zero, the number of liberties I would give up approaches infinity.

I'm also usually in favor of the government collecting more information. A lot of our infrastructure and services would run better if there were, say, ubiquitous fingerprinting. It would allow the identification of runaways and dead bodies, it would help solve crimes, and it would make identity verification much faster (though a system of that sort would always have problems). Divulging medical information would greatly increase the speed of medical research (though this would require that no one could turn you down for insurance, or fire you from your job, because of that information). Full demographic information would allow better government distribution of funding, as well as sociological research that would further our understanding of which government programs are working and which aren't.

So I don't really see the problem with requiring people to carry around their ID. That's not such a big loss of liberty, especially since it doesn't even affect the majority of people who are carrying wallets and driver's licenses in the first place. You might argue about the rights of the minority who don't want to have to carry that stuff, but I'm willing to make something as basic as identification a requirement for living in this country (just as I'm willing to require the payment of taxes).

The big problem with the bill is that it's being perceived as racist. This is more a problem with the general perception of the government than anything else. People assume that a law passed like this is just going to be used to give the police a bullshit reason to stop and detain brown people. It might - I'm not from Arizona, I don't know how deep the police and government prejudice runs.

All I'm saying is that requiring identification is actually a pretty good idea.

Monday, April 19, 2010

Should the First Amendment Include Corporations?

Alright, so the grievously stupid invocation of the First Amendment when talking about corporations is quickly becoming a pet peeve of mine, as I seem to be finding it everywhere I look. Also, Google recently made a post about their approach to free expression. (Made just a few days after my blog post. Coincidence? Yes.)

So a lot of people think that these major corporations are beholden to an ideal form of the First Amendment instead of the actual law. In part, this is because speech-through-intermediaries on today's scale is a somewhat new thing. In the past, mediated speech took the form of letters to the editor, or call-ins to radio shows, or actual employment with the press. In some cases, this included hand-cranking presses in a small basement somewhere - and once a government starts cracking down on do-it-yourself presses, it's a pretty good sign that totalitarianism is nigh.

So why do we even have such a concept as free speech in the first place? I suppose it's because of the belief that free speech is good for our society. Majorities are often mistaken, which is why minority voices need to be heard if we're going to have reasoned discussion about what should be done. In a sense, the founding fathers left free speech to market forces. Good ideas would float to the top, while bad ideas would sink to the bottom, and in the end, society would be the better for it. Obviously, as with the free market, restrictions eventually got placed on speech, hence "clear and present danger" etc.

As time went on, and America expanded, mediated communication grew. In the era of letters, this mostly took the form of the USPS, which, as a government agency, falls under the First Amendment, and is thus fairly uninteresting. Given more time, new technologies and new ways of communicating came along. In 1910, the Mann-Elkins Act made telephone and telegraph companies "common carriers," which meant that they had to provide their services to the public without discrimination. This is one of the ways in which the United States is somewhat unique, and demonstrates one of the reasons that common law is sort of stupid. The idea of a "common carrier" originally belonged to the transportation of people and goods, and was carried over from that context to apply to information. In this sense, telecommunications get roped in under the First Amendment, as they are services licensed and regulated by the United States government.

The original internet was built on the backbone of telephone infrastructure, which meant that it could be regulated by the FCC in the same way that telephone, television, telegraph, and radio services were. As time went on, and technology changed, DSL and cable got reclassified as an "information service", which is legally distinct from a "telecommunications service" and is the whole reason that anyone is arguing about net neutrality. But that's not what this post is about.

Even if the FCC under the Obama administration comes down on the side of net neutrality and reclassifies the internet as a telecommunications service, there will remain the larger question of how to regulate the huge companies that control the flow of content. I am speaking specifically of Apple, Google, and Amazon. All three of those companies exert enormous power in the market not just of things, but of ideas. Small companies have been known to collapse when Google tweaks its search algorithm and sends their website to the second page of results.

So here's the question - is Google a common carrier? It's obviously not in the legal sense, as it doesn't fall under the authority of the government, but it is in the sense that people depend on it. Yet Google's whole job is to separate the useful from the worthless: in other words, discrimination. If Google were a common carrier, how would it function, when by definition it needs to value some speech over other speech? In some senses this would be easier for Apple, as their app store would simply have to accept all submissions, and no song or podcast would be denied access to iTunes.

Right now, we depend on these companies to not cross any lines. A free market optimist might say that we have nothing to fear, as these huge companies have no real choice but to follow the will of the masses. I would respond that this is exactly why we should be afraid. On the other hand, if these companies trend liberal (and I believe they do), then it may mean that the undesirable parts of free speech, such as hate speech and conspiracy theories, will erode as they become less and less accessible.

(Note: this title has two potential meanings, but I'm pretty obviously talking about restrictions rather than protections, the latter having been decided in Citizens United v. Federal Election Commission.)

Thursday, April 15, 2010

Corporations and Free Speech

So there's recently been some murmuring about Apple and how awful it is that they block apps for political reasons. Phrases similar to "I guess Apple doesn't care about the 1st Amendment" keep cropping up. This also gets tossed around when any online service starts to censor people for any reason.

Let's get this straight right now: that is not what free speech means. The very first words of the First Amendment are "Congress shall make no law ..." You will note that those words mean the restriction applies to the government, not to people or corporations (because corporations are people too). So a corporation doesn't have to give people a voice, even if it's in the business of providing voices, and it can censor those voices however it wants. The First Amendment argument only really has a place when it's the government that is restricting speech - such as the FCC slapping down fines on people. So why do people think otherwise? It's a misunderstanding, sure, but there has to be a reason that so many people seem to misinterpret the law.

It might be a sense of entitlement. In this country, we have it hammered into our heads that we can say anything we want, short of a "clear and present danger" or defamation. But because of various technological advances, a huge majority of our speech is mediated by corporations. For me to make this blog post requires the use of a computer (made by HP), an OS (made by Microsoft), blogging software (made by Google), and internet access (provided by Charter). At any point in that chain, there is an opportunity for censorship, because none of those corporations has any legal requirement to allow me to do what I want - in fact, I have "signed" contracts with all of them, even if most of those were click-through EULAs that I didn't actually read.

The other reason might be that it doesn't happen all that often. People then assume that because it's not happening, it must be illegal. There is then the question of why it's not happening, and I have a few theories about that:

1) Restricting speech is bad for business.
If Company A restricts speech and Company B doesn't, people will be more likely to take their content creation to Company B. While speech restriction might raise the overall quality of Company A's offerings, overall quality is pretty unimportant in the era of search. There are also a few (weak) legal grounds on which to sue Company A - misrepresentation and discrimination being the big two.

2) Restricting speech is damned expensive.
This is one of the things that always bothered me about 1984. Who is watching all of these people? It would take a huge number of workers to police even a small slice of the content output of the internet, and it would be brutally inefficient to boot. In the future, this job will probably be handled by artificial intelligence - already there are algorithms that can pick out "content of note", but those are mostly used by our intelligence and advertising agencies.

3) Restricting speech happens, but it happens to people who aren't sympathetic.
There are certainly examples of this happening. This is why there aren't nude pictures on Facebook or graphic videos on Youtube. In those circumstances though, the company in question looks good, because they're removing something that most people don't want to see when they go to those places. In other words, it doesn't look self-serving. More problematic is the issue of copyrighted videos, which Youtube will remove even if they fall under fair use (though this is still its prerogative).

Of those three, I think that the second one makes the biggest impact. As time goes on, bots will get better at tagging objectionable posts for human review. They'll need to do this for all the stuff you're not allowed to put on any kind of commercial site, both of the "clear and present danger" type and the porno/graphic type. This is especially true now that the specter of cyberbullying has been raised. As the technology gets better, the temptation will grow to turn it on whatever kind of speech hurts the corporate bottom line. That temptation will be all the stronger if, like Apple, you have the infrastructure set up to go through content by hand, one piece at a time.
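
To make the "bots flag, humans review" idea concrete, here's a minimal sketch of what that kind of pipeline might look like. To be clear, this isn't any company's actual system; the flag terms, threshold, and function names are all made up for illustration.

```python
# Toy sketch of automated flagging followed by human review.
# The term list and threshold are invented purely for illustration.

FLAG_TERMS = {"bomb threat", "kill you", "graphic violence"}  # hypothetical

def flag_score(text):
    """Crude score: fraction of flag terms that appear in the post."""
    lowered = text.lower()
    hits = sum(1 for term in FLAG_TERMS if term in lowered)
    return hits / len(FLAG_TERMS)

def triage(posts, threshold=0.3):
    """Split posts into a human-review queue and an auto-approved list."""
    review_queue, auto_ok = [], []
    for post_id, text in posts:
        (review_queue if flag_score(text) >= threshold else auto_ok).append(post_id)
    return review_queue, auto_ok

if __name__ == "__main__":
    posts = [(1, "Lovely weather today"), (2, "I will kill you")]
    queue, ok = triage(posts)
    print(queue, ok)  # -> [2] [1]
```

Real systems presumably use statistical classifiers rather than a hand-written keyword list, but the shape is the same: cheap automated scoring up front, with expensive human judgment reserved for whatever gets flagged.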

Tuesday, April 13, 2010

Labor Implications of Content-Aware Fill

So I've been checking out a lot of the videos and commentary on Adobe's new Content-Aware Fill. I find several things about the online discussion to be fairly amusing.

Firstly, there are the people who claim that this is fake. I can sort of understand this, as it was originally posted around April Fool's. However, there's nothing all that funny about this particular technology, and nothing all that unbelievable. Of course, to some people, this is unbelievable - because Content-Aware Fill makes a lot of the menial labor parts of digital manipulation disappear.

Second, there are those people who see what it can do and get the wrong impression. They say "Finally! I'm going to have so much more time!" This betrays a basic misunderstanding of market economics. If it takes less time to do something, you have fewer billable hours. While it's possible to reduce the labor required and keep your prices the same, you'll quickly be undercut by your competition. This applies doubly to a profession that's less likely to have permanent contracts.

Third, there are those people who think that this will cost people their jobs. This is the other side of the "labor saving" coin. I've often heard the argument that the only result of new technology is a shifting of labor. Basically it goes like this: I invent the cotton gin, which decreases the work required to separate cotton fibers from cotton seeds by a factor of fifty. This makes cotton cheaper, which means more people will buy cotton, which means that I need to hire more people. Additionally, cheap cotton boosts a number of other industries, such as clothing manufacture.

I often question whether this is actually true. History has shown that increases in technology mean that labor will shift to less and less "essential" tasks, as seen in the movement over time from agriculture to industry to services. It's somewhat difficult to find the data to compare occupations over time adjusted for population increases, so I have no idea whether there are (for example) fewer farmers today than there were a hundred years ago. It actually seems likely that while the number of farmers has decreased over time, the number of people employed in secondary agricultural occupations (fertilizer, herbicide, and pesticide production, genetic engineering, tractor manufacture) has increased. Again, this is just a guess - if I were in grad school for something other than computer science, this is probably what I would study.

What would happen first though is that prices would fall, which means more people would be able and willing to pay for graphical work. This gives a bit of a cushion. Additionally, since retraining takes both time and money, a new technology will reduce wages before it cuts any actual jobs.

Finally, there are those people who say "GIMP has been able to do this for years with the ReSynth plug-in". This is (mostly) true. But for whatever reason (I've heard it's mostly the UI) most people who do image manipulation for a living use Photoshop, and for them, if a feature isn't in Photoshop, it doesn't exist. Personally, I get most excited about technologies when they're being researched at universities. Content-Aware Fill owes a lot to PatchMatch, which was developed at Princeton in collaboration with Adobe researchers. The problem, of course, is that it takes seemingly forever for any interesting technology to get from "cool idea" to "workable reality".
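
For the curious, the core PatchMatch trick is simple enough to sketch: instead of exhaustively searching for the best-matching patch, you start with random guesses and then alternate between propagating good matches from neighboring pixels and doing a shrinking random search around the current best. Below is a toy, heavily simplified version of that idea for two grayscale images - nothing close to Adobe's actual implementation, just an illustration of the three steps (random init, propagation, random search), with function names and parameters of my own invention.

```python
import numpy as np

def patch_ssd(a, b, ay, ax, by, bx, p):
    """Sum of squared differences between two p x p patches (top-left anchored)."""
    da = a[ay:ay+p, ax:ax+p].astype(np.float64) - b[by:by+p, bx:bx+p].astype(np.float64)
    return float(np.sum(da * da))

def patchmatch(a, b, p=5, iters=4, seed=0):
    """Toy approximate nearest-neighbor field from grayscale image a to grayscale image b."""
    rng = np.random.default_rng(seed)
    ah, aw = a.shape[0] - p + 1, a.shape[1] - p + 1  # valid patch positions in a
    bh, bw = b.shape[0] - p + 1, b.shape[1] - p + 1  # valid patch positions in b

    # 1. Random initialization: every patch in a gets a random match in b.
    nnf = np.stack([rng.integers(0, bh, (ah, aw)),
                    rng.integers(0, bw, (ah, aw))], axis=-1)
    cost = np.array([[patch_ssd(a, b, y, x, nnf[y, x, 0], nnf[y, x, 1], p)
                      for x in range(aw)] for y in range(ah)])

    for it in range(iters):
        # Alternate scan direction so good matches can spread both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(ah) if step == 1 else range(ah - 1, -1, -1)
        xs = range(aw) if step == 1 else range(aw - 1, -1, -1)
        for y in ys:
            for x in xs:
                # 2. Propagation: try the (shifted) match of the previously visited neighbor.
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < ah and 0 <= nx < aw:
                        cy, cx = nnf[ny, nx, 0] + dy, nnf[ny, nx, 1] + dx
                        if 0 <= cy < bh and 0 <= cx < bw:
                            c = patch_ssd(a, b, y, x, cy, cx, p)
                            if c < cost[y, x]:
                                nnf[y, x], cost[y, x] = (cy, cx), c
                # 3. Random search: sample around the current best in shrinking windows.
                radius = max(bh, bw)
                while radius >= 1:
                    cy = int(np.clip(nnf[y, x, 0] + rng.integers(-radius, radius + 1), 0, bh - 1))
                    cx = int(np.clip(nnf[y, x, 1] + rng.integers(-radius, radius + 1), 0, bw - 1))
                    c = patch_ssd(a, b, y, x, cy, cx, p)
                    if c < cost[y, x]:
                        nnf[y, x], cost[y, x] = (cy, cx), c
                    radius //= 2
    return nnf  # nnf[y, x] is the (row, col) of the best matching patch found in b
```

Even in this toy form you can see the appeal: each iteration does a roughly constant amount of work per patch instead of an exhaustive search over the whole source image, which is what makes interactive-speed hole filling plausible in the first place.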

Monday, April 12, 2010

Meritocracy is the new Aristocracy

It used to be that kings actually were better than everyone else. This was because of their diet; a prince or dauphin growing up would receive a lot more meat than a peasant child, not to mention how much more varied their diet was. So when they finally got to be king, they would be much more fit for their position than some random worker in the fields for whom meat was a special weekly treat. And that's before you consider the huge amount of training kings-to-be got in matters both martial and intellectual. So it's fairly safe to say that kings were more fit to rule than their subjects.

Fast-forward a few hundred years, and the problem is well on its way to being fixed. The state tries its best to feed children as much as they need (though it fails miserably on several counts) and also to educate those children as they grow up (and again, tends to have decidedly mixed results) so that they can be useful members of society - if not at the upper echelons. In terms of intelligence and physical prowess among our youth, I think it's fairly safe to say that there's less of a divide between the rich and the poor than there was in medieval times.

But the big problem on the horizon is that education and nutrition are only the starting point. If you really want to improve your children's capacity for greatness, you'll engage in genetic engineering.

Intelligence, as a high level function, is going to be ridiculously difficult to engineer. Genetic engineering isn't all that sophisticated compared to where it will be in the end game, but that doesn't mean that we're not quickly approaching territory which used to be reserved for science fiction. In fact, the ability to engineer intelligence will most likely happen after human cloning becomes viable (especially if they lift the ban). Either way, once this happens, the divide between the rich and the poor will become even more pronounced, as rich children will be genetically superior to poor children, in addition to the host of other benefits they enjoy.


This is an interesting vision of the future that I don't think will come to pass. The big problem with extrapolating from current trends is that multiple trends happen at the same time. So while genetic engineering is moving fast, it's also competing with artificial intelligence and nanotechnology. Those three form the basic core of what's to be expected from the future, and they will all arrive in fullness through a series of quick steps that is already happening. So don't worry about the new aristocracy - worry about a million small things happening at once that will render this world unrecognizable.

Thursday, April 8, 2010

What I Believe: Part 2

Continued from Part 1.

Spirituality


The evidence for the existence of a god is weak. To me, of course, the question is one of proof. I think that's almost the antithesis of what most religions would teach, which is faith. I'm not saying that religious people have "God exists" as a simple axiomatic statement in their minds, because that's a somewhat reductionist view. People believe things for a whole host of reasons. Besides that, not all people have tried to build up their beliefs from a set of axioms - it's a somewhat stupid way to go about it. I also don't think that most people care if their beliefs are internally consistent (and I'm not really sure that mine are).

So while there might not be good evidence for the existence of a god, there's no evidence against it. In fact, I can conceive pretty easily of a being with massively more power than me - an entity capable of altering the laws of the universe at a whim and violating physical constraints. However, if such a being were to exist, I think that it would still follow a set of concrete laws, even if those laws aren't the same as those in normal existence.

I get there by imagining the universe as a virtual place, like a giant simulation being run on a massive scale. The simulation follows a set of rules, but the user running it can alter those rules at a whim or change variables while the simulation is in motion. That's what god is to me. But even in that case, god would have to follow a different set of rules and be constrained in some way by a bigger reality. To claim that there exist things that are not bound to any law or system is essentially nonsensical to me.

The biggest problem I've always had with the concept of a god is that pain and suffering exist in this world. So either God is not omnipotent, or not good. The argument against this is either that the divine plan is ineffable, or that suffering is a requirement for free will. I find both of those to be incredibly weak arguments. Even if I came to the logical conclusion that there was a god, how would I know what he wanted?

Morality

Morality has always been a difficult subject for me, mostly because it's hard to build from base principles. Most of the time, I just do like society tells me to, or follow my own particular compulsions. There's also a difference between what I think is morally right and what I feel to be right - a difference that I think is accounted for by the contrast between how I was raised and what an intellectual working through of things produces.

So if you start with the foundation laid down in the philosophy section, namely that existence is ultimately arbitrary and moral absolutes don't exist, where do you go from there? This is the basic problem with any atheistic stance. Trying to reconcile this brings people to many different conclusions. Evolutionary ethics says that we should do what we're programmed to do. An ethical egoist would say that we should do what's in our best self-interest. A humanist would say that we should do what's best for humans.

Objectivism starts with "You have chosen to be alive" as its founding principle, and works up from there. I've been thinking about this lately, mostly as a result of playing Bioshock and idly thinking about rereading The Fountainhead before remembering how much I hated it. At any rate, we choose to live, and we have to accept that choice as moral because without it, we're left with nonexistence. The decision to live is therefore presumptively privileged over not living.

The problem with this is that there are a huge host of situations where choosing your own life is clearly the wrong choice. A hypothetical situation would be choosing to add several years onto your own life in exchange for the murder of a few hundred other people. A system of morality that lacks empathy can only really work in the context of a totalitarian society, because utterly selfish people would naturally start to work against each other.

So when I think about the statement "You have chosen to be alive", I have to modify it somewhat, because "alive" is a somewhat stupid term. There are things that we would say are alive which are incapable of thought (and therefore choice). There are also things that I would consider capable of thinking but which are also not alive, such as a hypothetical computer simulation of the human brain. So as a substitute for "alive", I need to insert something else - like "conscious". But the statement then becomes strictly untrue, because at least once a day I choose to sleep and lose consciousness.

You can probably see where I'm going with this. If I accept that particular discontinuity, then why shouldn't I accept others? Hypothetically, if I were able to destructively upload my brain into a computer, the discontinuity might be no greater than the one I already accept every time I fall asleep. The person who wakes up the next day is more like another instance of the same person than a strict continuation, especially given how much goes on in the brain during sleep that's completely outside of any conscious control. And yet these different instances don't engage in sabotage (like, say, living for the moment instead of the long term). It's a little odd to think of myself as a series of people, but I think it's instructive. The phrase above becomes not "You choose to live" but "You choose to be conscious when it's viable and it won't harm the collective". This is something that I can accept as fundamentally true, because that's the result of the conditions that I find myself in.

Mostly this is my attempt to reconcile the jump from "Care about yourself" to "Care about others". I don't think it logically works.

Wednesday, April 7, 2010

What I Believe: Part 1

Okay, so I figured it's time that I set out on a new project, besides the 652 project. The title should be pretty self-explanatory, but here are the reasons that I'm embarking on it. First, this serves as a sort of time capsule for me. In five to ten years, I'll be able to look back on this series of posts and figure out who I was. Everything I've put on the net has been a sort of time capsule for me; even now, there's a distinct thrill in calling up old articles to see what I thought. The second reason is that it'll help me to figure out what it is that I actually believe. I've long agreed with Socrates that the unexamined life is not worth living. And the third reason is that once I have all this down in writing, I can take quick mental shortcuts, or look things up instead of having to actually think.

Philosophy

Let's start with my axioms. In parentheses are the philosophical concepts that are most closely related to those beliefs. This whole section also comes with the caveat that I generally think philosophy is a bunch of wankery full of useless distinctions.

1. Reality exists. (Philosophical Realism)
2. I exist. (Cogito ergo sum)
3. My memory and senses are mostly reliable. (Critical Realism)
4. Logic is infallible. (Rationalism)

Of those, I think maybe number 4 needs the most explanation. Logic gets a bad rap, mostly because of Spock. Logic isn't absolutely opposed to emotion, and I'm not saying that it's the king of decision making. But logic, as a system, absolutely cannot fail. If A = B, and B = C, then A = C. There are obviously things that can't be proven logically (see Gödel's incompleteness theorems), but the basic axiomatic statement I'm making is that things, once established, do not change unless you got it wrong the first time (which is very probable).

From 3 and 4, I get another theory: that a combination of senses and logic can actually tell me things about the world (Empiricism) (5). (From there comes a disbelief in a large number of things, which are mostly defined by their innate inability to be proven, such as miracles and supernatural forces. If something supernatural were able to be explained by science, then it would cease to be supernatural. Nonexistence, of course, is one of those things that can't always be proven.)

Eventually a study of existence seems to reveal (to me) that the whole of it is made up of stuff (energy, matter, etc.) which follows laws (Metaphysical naturalism) (6). This would imply that things happen because of prior events, including conscious choices (Determinism) (7). It would also imply that consciousness itself is somehow physical in nature (Materialism) (8).

In summation: free will is an illusion, consciousness is some kind of emergent phenomenon, and the universe is composed entirely of things which are natural and governed by laws which are likewise natural. There are some other philosophical questions on which I also hold beliefs, but which are somewhat less connected to the main axioms and derived truths.

1. The strong Church-Turing thesis is true.
2. My experience of consciousness is roughly equivalent to the experience of consciousness as experienced by other people.
3. I exist as the end result of mostly randomness.
4. Reality as we know it is (probably) virtual.
5. There are no moral absolutes.

Upcoming parts will probably include Morality, Spirituality, and Politics.

Tuesday, April 6, 2010

Videogame Meta-narratives

Alright, so I just got done with Assassin's Creed, and while jumping across the rooftops of Damascus and stabbing people in the throat is great fun, what I found really interesting is the story.

Spoilers Follow for Assassin's Creed and Bioshock

The story in Assassin's Creed is about a guy wandering through the 12th-century Holy Land and killing lots of bad guys. This is where about 90% of the game takes place. The frame story, on the other hand, takes place in the modern day; a twenty-five-year-old shut-in is reliving the genetic memories of his ancestor. While frame stories aren't at all uncommon in literature (Canterbury Tales, Arabian Nights, Frankenstein) or movies (The Usual Suspects, The Princess Bride) or television (How I Met Your Mother), you don't see them much in videogames.

This is a real shame, because having a narrative frame adds a lot to the interactivity. Videogames have never been real big on immersion, for two reasons: first, the user interface gets in the way, and second, the player is in at least partial control. Adding a frame narrative can solve some of those problems. So in Assassin's Creed, the reason you have a UI is that Desmond needs a UI to pilot the memory program. If you see a glitch, or something that's unrealistic, you can justify it as a side effect of the memory-reliving machine. This is used a few times in the second game, where the instruction manual or other characters talk about flaws in the memory-reliving machine that were fixed this time around. For example, in the first game it was impossible to swim, which is chalked up to being a bug. The dialog sometimes shifts into full-blown Italian (the sequel takes place in Italy), which is again an "unintended" effect of the translation software.

Writing in a meta-narrative is a very post-modern thing to do. It's not enough to just present the story; there's a real need to present the story in such a way that we acknowledge that it's a story. Everything has to be done with a wink and a nod, because irony is hip now, and the worst thing that you can do is be earnest about your story. If done well, the effect can be great, as it allows a deeper immersion. All of the artifacts of story-telling - small casts, synchronicity, production constraints, symbolism - are present because it's a story, so is it really so implausible that our lead character is named Hiro Protagonist, or that it turns out that a series of coincidences have led to the killer being the main character's long-lost father? Meta-narratives excuse that inherent artificiality with the same wink and nod.

Ubisoft must like this conceit, because they've used it twice: Prince of Persia basically takes the form of the Prince recounting his adventure to someone. In a similar way to Assassin's Creed, you're playing through a memory - only this time, the memory is explicitly being told as a story. When you die in Prince of Persia, you hear the Prince say, "No no, that's not how it happened, let me start over," and you reappear at the last checkpoint. From a narrative standpoint, I think this is better than having your previous progress undone and being reset to an arbitrary place without comment or explanation.

(Incidentally, I think that this is one of the funniest parts of Prince of Persia, because it means that - depending on how you play - he's one of the worst story-tellers of all time. "And then I swung from a post and fell into a pit of spikes. Wait, that's not how it happened," or "And I was fighting this huge sand monster and he stabbed me through the heart. Wait, that's not how it happened.")

There's another game I played recently, Bioshock, that does something similar. Huge Spoilers Follow. In Bioshock, you're playing a faceless character with no past, similar to many other shooters. Narratively, shooters use this as a way to get the player to associate more closely with the character - it's also one of the reasons that cutscenes have started to be phased out. That started around the introduction of Half Life, because of the greater sense of immersion it allows. Sometimes the player will be made to watch as something happens, but they'll still be able to move around and stay in control the whole time.

So in Bioshock, you follow the directions of a guy named Atlas, who's trying to get you to kill a guy named Andrew Ryan. There's an Art Deco aesthetic, banter about Objectivist philosophy, and some creepy moments. So you finally get to the end of the game and meet Andrew Ryan, and it's revealed to you that you've been under mind control the whole time. You only followed the entire linear path of the game because someone was saying code phrases to control you. Then Andrew Ryan tells you to kill him, which you do (in a cutscene), and you gain back your free will through applied science and go to kill the real bad guy (or maybe just the worse guy).

This was all very startling, because as the player you've been doing these things and following these orders simply because that's what the game wants you to do; if you try to disobey orders, nothing really happens, because the game isn't designed for that. Bioshock is completely linear; there's no choice in what events will happen, or in what order they'll happen. In other words, it's sort of the perfect meta-narrative, because it calls attention to the narrative constraints and at the same time justifies them. I would like to see more of this, because it's the sort of thing that helps videogames develop as a medium.

On the other hand, if you're sticking to a meta-narrative, you have to be careful about how you use it. In Bioshock, the last third of the game is somewhat of a letdown, because the game doesn't really change once you have free will. You're still following a voice on a radio down linear levels. And in Assassin's Creed, even when you step out of the Animus and the UI disappears, Desmond is still being controlled from a third-person perspective.