Moral decisions are inescapable

…if you are not a nihilist. One of the greatest difficulties with uncertainty is that, although you are unsure what effects, if any, your actions have, you cannot escape moral decisions.

Provided you think there is some value, your actions or thoughts may affect the extent to which potential value is realised. ‘Inaction’ is as much of a decision as ‘action’: the two merely differ in how much your body has moved! If the effects of doing an action are uncertain, so are the implied effects of not doing anything. Let me give an example, then explain.

Bed time

You need to decide whether or not to get out of bed. Unfortunately, when you do a quick analysis of the situation (as you do), it seems to spiral out of control. It is 6.00am and you are tired. What are the potential health risks of not going back to sleep? How will it affect your stress levels? What are the traffic risks of commuting at different times? In this case you probably have a decent idea that any of these costs is far lower than the cost of worrying about them.

Can we put moral theorising to bed…

However, if you are unsure what the moral thing to do is, then these uncertainties can build up just as easily. You want to decide whether you should be a vegan. Which mammals (or fish, insects …) are conscious? How might they be affected by farming? What about effects on the environment? Yet – with over 3 chickens farmed per person worldwide – these decisions can have large consequences. If there are going to be (say) 5 animals alive at any time in pens to fuel your meat-guzzling habits, it matters what you do. If those animals enjoy or hate their lives, it should affect your decision.

What to do

Say an action A (getting out of bed/eating meat) might have a number of consequences. Let’s call them C1, C2, …, which would not otherwise have occurred. Then choosing ‘inaction’, or not-A (A’), has the equivalent consequences C1’, C2’, C3’ … If you are uncertain about the value of a consequence, then you must be uncertain about the value of not realising that consequence. This is because the value of C1’ is merely how much better or worse it is than C1. If you are truly uncertain, then either option is unpalatable.
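
As a minimal sketch of that symmetry – with the numbers invented purely for illustration – the point can be put in a few lines of Python: whatever (unknown) value a consequence of A has relative to its counterpart under A’, the counterpart has exactly the opposite value relative to it, so ‘doing nothing’ carries the same uncertainty.

```python
# Illustrative sketch only: the values below are invented.
# If consequence C1 (of action A) is worth v relative to C1' (of inaction A'),
# then C1' is worth -v relative to C1: declining to act does not remove
# the uncertainty, it just flips the sign of whatever the true value is.

def value_of_inaction_consequence(v_action: float) -> float:
    """Value of C1' measured against C1, given C1's value measured against C1'."""
    return -v_action

for v in (+10.0, -10.0, 0.0):  # possible (unknown) relative values of C1
    print(f"C1 worth {v:+}; C1' worth {value_of_inaction_consequence(v):+}")
```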

An Inkling

However, even the slightest inkling on one side – if that is all you have to go on – can tip the balance. That inkling means there is some reason to think there is a chance of a positive outcome. For example, I might have an inkling that animals are conscious and feel pain because of the scientific literature. This is despite me being somewhat unsure to what extent consciousness is a merely biological phenomenon.

For the uncertain consequences to outweigh the inkling would imply a contradiction. It would mean the ‘uncertain’ consequences have a negative expected outcome; otherwise, how could they outweigh the positive expected outcome of your ‘inkling’? And that contradicts those outcomes being genuinely uncertain.
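
To make the arithmetic concrete, here is a small sketch in Python. The probabilities and values are invented purely for illustration; the only structural assumptions are that a genuinely ‘uncertain’ consequence contributes zero in expectation, while the inkling contributes something slightly positive.

```python
# Illustrative sketch only: probabilities and values are invented.
# Consequences are scored for A relative to A', so a consequence we are
# genuinely uncertain about is as likely to favour A as A' and
# contributes 0 in expectation.

# C1: the 'inkling' -- a slight reason to think A comes out better here.
p_inkling_right = 0.55      # marginally more likely that A is better on C1
value_if_right = +1.0       # relative value if the inkling is right
value_if_wrong = -1.0       # relative value if it is wrong

expected_c1 = p_inkling_right * value_if_right + (1 - p_inkling_right) * value_if_wrong

# C2, C3, ...: consequences we are genuinely uncertain about -> 0 each.
expected_rest = 0.0

expected_gain_of_A = expected_c1 + expected_rest
print(expected_gain_of_A)   # roughly +0.1 > 0, so A wins on expectation
```

On these hypothetical numbers, the comparison could only be reversed by giving the ‘uncertain’ consequences a negative expectation – which is exactly the contradiction described above.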

Sad skeptics

In fact, the problem is most difficult for the skeptic. To be unsure of others’ and your own existence and experience, and of the effects of your actions and thoughts, and yet still to have to face this uncertainty is extraordinarily difficult. Especially if you are unsure of what you might know but be denying to yourself!


An endnote. I will write up these thoughts with greater clarity at a later date, but I wanted to give my first impressions here.

Expected values… what?

This all sounds horribly utilitarian. However, morality implies a preference between outcomes. That is what I am referring to here. You might take a Nietzschean line that there is value but no morality. If we are not being facetious about what we mean by value, this isn’t just numbers being applied to things. It’s not as though we could say that mass genocide or torture, in our opinion, is worth ‘+5’.

No: even if what is of value is not easily quantifiable, that does not mean morality does not exist. Maybe certain experiences and actions cannot be quantifiably compared, but as soon as you recognise value in your own life, you must question whether this exists for others. To deny that others’ experiences should hold weight when you make choices is absurd – the nature of experience is that it is not a nothing!

What about others?

Further, if it is the experiencing which you value, then if another person experiences, why should it be counted differently? If the experience is the same, then there is no difference. Should the two people have slightly different experiences, why should that be approached differently from when you experience two different things and evaluate them? Even if it should be, adding in the miscellaneous data attached to the experience (e.g. character, intelligence) scarcely changes things. Why should a set of experiences and ‘miscellaneous information’ for one person and another not be weighted as impartially as possible?

Again comes the uncertainty: what do others experience, if anything at all…?
