The Problematic Scale of Morality

I love watching and listening to Sam Harris’ debates and podcasts. While I don’t agree with everything he says, I find him a powerful mind and voice for reason. While listening to him speak, I’ve noticed he has been cornered by religious scholars into having to take a position on morality. Specifically, he is often confronted with the claim that without religion, man would have no basis for morality and that science can’t provide one.

Sam’s rebuttal to this claim seems to be that in order for morality to be defined by science, one need only make a few basic assumptions: that not suffering is preferable to suffering, that suffering depends on an ability to experience, and that the ability to experience sits on a kind of sliding scale, beginning perhaps with plants and working up to humans. Sam can correct me if I’m wrong here, but that does seem to be the basic idea. An option which leads to less suffering is by default more “good.”

Being raised in Scientology, I was exposed to the Scientology Dynamics, which are categorizations used in its ethics procedures and concepts. The dynamics are:

  1. Self
  2. Family and procreation
  3. One’s groups (for example colleagues)
  4. All of mankind
  5. All living things
  6. All physical things, space, time
  7. The spirit or spiritual pursuance
  8. Infinity: everything, in all directions, forever

Morality in Scientology is taught as being relatively straightforward. If you’re tasked with deciding which of two actions is more ethical, you need only determine which does the greatest good for the greatest number of dynamics. That is your ethical choice. Scientologists believe that Scientology itself does everything for the higher dynamics. This is why they think attacking Scientology is such a crime. They view it as literally attacking all of mankind and more.

Having grown up with this mentality, I found it very easy to accept Sam’s theory of morality. After all, on closer inspection you’re likely to see it’s nearly identical. The problem with this theory is that it seems to be correct. And that’s a big problem.

If there is no God, and there is no basis for morality from any kind of ancient wisdom, no arbitrary set of rules or laws, then this does seem to be the most reasonable interpretation of actions: cause the least harm and/or cause the most benefit.

People have seemed very confused or disbelieving in the past when I’ve explained that Scientologists lie on a daily basis to any non-Scientologists they’re exposed to. It’s part of their nature: they are sure that while earthly mortals view what they’re doing as wrong because it’s a lie, it isn’t actually wrong, because it does much more good than bad. Lying to you by telling you Scientologists don’t believe in aliens isn’t a transgression, because the point of that lie is to get you into Scientology, where you’ll be set free and mankind itself will progress. And in the end you’ll see we’re all aliens anyway. That’s the logic. That’s why they lie. It also forms the backbone of the famous disconnection policy. Disconnecting from your family member forever is unimportant when faced with the reality of helping all of mankind forever.

But Scientologists lying isn’t the real problem with this theory. The problem with this theory is that it’s as callous and abrasive as nature itself. It’s the exact type of morality that, ironically, Sam is afraid of in artificial intelligence. Why would an all-powerful, super-intelligent super-being not just immediately wipe out all humans on the planet? It wouldn’t need their minds anymore, and clearly we do much more harm to the higher dynamics, specifically the physical universe, than we do good.

In fact, it could be argued the best step would be to wipe out all humans and create self-sustaining robots which seek only to populate the entire universe with plant life that doesn’t consume other plant life.

Another deep issue here is that determining the effects of actions requires an understanding of the future. While we may all like to imagine being able to go back in time to kill Hitler before his rise to power, we actually have no idea if that would have been an ethical action. What if we discovered after doing so that without Hitler, a chain of events kicks off which quickly ends in nuclear Armageddon? Certainly the horrible suffering of tens of millions of people pales in comparison to the suffering of billions. Without knowledge of the future, that unknowable abyss of infinity, we can never truly be certain our actions are right or wrong. We can only base the morality of our actions on our local time scales, which often turn out to be wrong.

That means that in order to evaluate whether an action is moral under this theory, you must depend on your local evidence for its repercussions. Without that future view, everything is up for debate, and the only thing you can strive for in the end is a good intention. And intentions are hard to prove, hard to verify, and even hard to define.

Without some unworkable point system or referee in place, there is no real way to determine the level of ability to experience, nor a way to determine the actual repercussions of an action, even on a local time scale. If I kick a girl in the leg for no reason on the street, and she jumps back and narrowly avoids getting hit by a car that neither of us saw, was that still a “bad” action? If we only consider intentions, then it was. If we don’t consider intentions, then I did the right thing. What is the suffering value of a child missing his father for an hour because you made him stay late at work out of spite? Is that better or worse than killing a giraffe? The problems are endless and straining.

I don’t have a better theory than his, or Scientology’s, for that matter. But if they are right, morality is a real problem that isn’t going to scale well.
