In an Existentialism class I attended recently, this scenario came up: Imagine we had a mobile app for morality that everyone could download. It would tell you at all times what the moral thing to do would be (if you were a consequentialist, then it would calculate based on consequentialism, and so on). If a person decided to just follow whatever the app told them to do, would we consider them a perfectly moral person? Or has something morally valuable been lost?
Before we can begin, there are some considerations we need to set aside. We ignore the very real concerns about the difficulty of building such an app, as well as the possibility of it malfunctioning. We also ignore the fact that different people do not agree on moral theories; this is a question of meta-ethics, and so the answer should hopefully not depend on which moral theory we take to be right.
After talking to a few people, it seems that intuitively something is indeed missing. For me, the scenario called to mind two things. The first was something Avital Ronell says in Examined Life:
If we’re not anxious, if we’re okay with things, we’re not trying to explore or figure anything out. So anxiety is the mood, par excellence, of ethicity, I think…If you feel that you’ve acquitted yourself honorably, then you’re not so ethical. If you have a good conscience, then you’re kind of worthless. Like, if you think- “Oh, I gave this homeless person five bucks. I’m great”- then you’re irresponsible. The responsible being is one who thinks they’ve never been responsible enough.
The anxiety involved in making tough moral choices would be missing from an app-using agent. We then need to ask if this anxiety is valuable in itself, or only in so far as it makes people make better decisions. If it’s the latter, the accuracy of the app would make the anxiety unnecessary, and nothing valuable would be lost.
The second thing I was reminded of was a speech in Tony Kushner’s Angels in America:
LOUIS: Jews don’t have any clear textual guide to the afterlife; even that it exists. I don’t think much about it… for us it’s not the verdict that counts, it’s the act of judgment. That’s why I could never be a lawyer. In court all that matters is the verdict… It’s the judge in his or her chambers, weighing, books open, pondering the evidence, ranging freely over categories: good, evil, innocent, guilty; the judge in the chamber of circumspection, not the judge on the bench with the gavel. The shaping of the law, not its execution… That it should be the questions and shape of a life, its total complexity gathered, arranged and considered, which matters in the end, not some stamp of salvation or damnation which disperses all the complexity in some unsatisfying little decision—the balancing of the scales.
Kushner seems less ambiguous than Ronell in insisting on the inherent worth of moral contemplation. But how do we connect this idea back to the main question?
We need to recognize that when we're talking about morality, we're not talking about one thing, but rather a cluster of closely related concepts. One aspect of morality is knowing moral truths (and acting on them), but another crucial aspect is being able to impute praise-worthiness and blame-worthiness to actions. To see that these two aspects are related but distinct, consider this scenario: imagine someone who contributes to charities, but only because they were threatened at gunpoint. Now, it might very well be the case that giving to charity was the right thing to do (in terms of having the best consequences, say), but we don't think that this person is praise-worthy in any way.
Once we’ve established that “morality” is actually a bundle of concepts, we can see why the case of the mobile app is complicated. While it probably would decrease the chance of errors in what we should do, it’s not clear whether we can think of the app-users as praise-worthy anymore.
People who take discovering moral truths to be the primary aspect of morality might insist that praise-worthiness is only instrumentally valuable (i.e., it is good only insofar as it helps create a system that promotes good consequences), and so isn't really all that important. On this view, praise-worthiness would now just be understood as the ability to use the app well. On the other hand, people who see praise-worthiness as the core of morality might insist that even if the app produces better consequences, it jettisons the very heart of what makes morality valuable in humans. Opponents will, of course, point out that this imagined "value" comes at the cost of real human suffering. Is that a trade-off moral people ought to make? (Or is this query question-begging?) Maybe the solution is to struggle all the time over whether to use the app, and then use it. Or maybe not.
Anyway, everyone will probably be too busy playing Pokémon GO on their phones to even use the morality app if it came out, so never mind.