Continued from Part 1.
The evidence for the existence of a god is weak. To me, of course, the question is one of proof. I think that's almost the antithesis of what most religions would teach, which is faith. I'm not saying that religious people have "God exists" as a simple axiomatic statement in their minds, because that's a somewhat reductionist view. People believe things for a whole host of reasons. Besides that, not all people have tried to build up their beliefs from a set of axioms - it's a somewhat stupid way to go about it. I also don't think that most people care if their beliefs are internally consistent (and I'm not really sure that mine are).
So while there might not be good evidence for the existence of a god, there's no evidence against it. In fact, I can conceive pretty easily of a being with massively more power than me - an entity capable of altering the laws of the universe at a whim and violating physical constraints. However, if such a being were to exist, I think that it would still follow a set of concrete laws, even if those laws aren't the same as those in normal existence.
I get there by imagining the universe as a virtual place, like a giant simulation being run on a massive scale. The simulation follows a set of rules, but the user running it can alter those rules at a whim or change variables while the simulation is in motion. That's what god is to me. But even in that case, god would have to follow a different set of rules and be constrained in some way by a bigger reality. To claim that there exist things that are not bound to any law or system is essentially nonsensical to me.
The biggest problem I've always had with the concept of a god is that pain and suffering exist in this world. So either God is not omnipotent, or God is not good. The usual responses to this are either that the divine plan is ineffable, or that suffering is a requirement for free will. I find both of those to be incredibly weak arguments. Even if I came to the logical conclusion that there was a god, how would I know what he wanted?
Morality has always been a difficult subject for me, mostly because it's hard to build from base principles. Most of the time, I just do as society tells me to, or follow my own particular compulsions. There's also a difference between what I think is morally right and what I feel to be right - a difference that I think is accounted for by the contrast between how I was raised and what an intellectual working-through of things produces.
So if you start with the foundation laid down in the philosophy section, namely that existence is ultimately arbitrary and moral absolutes don't exist, where do you go from there? This is the basic problem with any atheistic stance. Trying to reconcile this brings people to many different conclusions. Evolutionary ethics says that we should do what we're programmed to do. An ethical egoist would say that we should do what's in our own self-interest. A humanist would say that we should do what's best for humans.
Objectivism starts with "You have chosen to be alive" as its founding principle, and works up from there. I've been thinking about this lately, mostly as a result of playing BioShock and idly thinking about rereading The Fountainhead before remembering how much I hated it. At any rate, we choose to live, and we have to accept that choice as moral because without it, we're left with nonexistence. The decision to live is therefore presumptively privileged over not living.
The problem with this is that there are a huge host of situations where choosing your own life is clearly the wrong choice. A hypothetical situation would be choosing to add several years onto your own life in exchange for the murder of a few hundred other people. A system of morality that lacks empathy can only really work in the context of a totalitarian society, because utterly selfish people would naturally start to work against each other.
So when I think about the statement "You have chosen to be alive", I have to modify it somewhat, because "alive" is a somewhat stupid term. There are things that we would say are alive which are incapable of thought (and therefore choice). There are also things that I would consider capable of thinking but which are not alive, such as a hypothetical computer simulation of the human brain. So in place of "alive", I need to substitute something else - like "conscious". But the statement then becomes strictly untrue, because at least once a day I choose to sleep and lose consciousness.
You can probably see where I'm going with this. If I accept that particular discontinuity, then why shouldn't I accept others? Hypothetically, if I were able to destructively upload my brain into a computer, there might be no more of a discontinuity between that existence and sleep. The person who wakes up the next day is more like another instance of the same person than a strict continuation, especially given how much goes on in the brain during sleep that's completely outside of any conscious control. And yet these different instances don't engage in sabotage (like, say, living for the moment instead of the long term). It's a little odd to think of myself as a series of people, but I think it's instructive. The statement above becomes not "You have chosen to be alive" but "You choose to be conscious when it's viable and it won't harm the collective". This is something that I can accept as fundamentally true, because that's the result of the conditions that I find myself in.
Mostly this is my attempt to reconcile the jump from "Care about yourself" to "Care about others". I don't think it logically works.