Morality In One Lesson
I'm going to taboo common moral terms for this post. Frankly, moral terms are highly confused and self-referential. It's common practice to use established terms the way everyone else uses them, lest readers' minds revert to the old meanings and you get misunderstood or, heaven forbid, you don't get referenced in journals. But I can't abide people using definitions in a wobbly manner. That is, terms are defined to be about something, but that something cannot be pinned down upon closer examination.
Well, I'm bucking the trend in this case because I want the words to be redefined to something which actually exists but which still carries a lot of the established mental associations. I want to do this as a not-so-covert attempt to convert people to a new way of thinking about morality, and the whole sphere of ought.
Against Universal, Absolute Moral Systems
I really hate breaking the news to people because so much of their worldview is wrapped up in the need for there to be someone at the wheel driving the whole thing. The concept that humans are the authors of and ultimately responsible for their own moral judgements is a frightening paradigm shift. If I push the issue too hard, I get outright rejection or anger.
Though I have long been in the relativist camp, it took a long time for me to let go of the last vestiges which kept me in a moral realist mindset. Being useful and being true don't necessarily overlap, and I put universal, absolute moral systems into the useful-but-not-true category.
The Moral Ecosystem
The previous two articles pretty fully explain the high-level view of how the process of morality works. I'm not going to delve into the neuroscience which undergirds being an agent susceptible to moral thinking and suasion because I'm not a neuroscientist.
People who I expect to know better are still looking for the brass ring of a moral system which will work based on their definition of human nature, or happiness, or well-being. The last section of the previous post touched on the moral ecosystem, but I felt it was important enough to get its own section.
Letting go of Moral Thought Entirely
I was recently made a moderator of anti-theism, anti-moralism, anti-statism and it's gotten me rethinking the use of moral terms and interactions. While I disagree with the premise that moral right and wrong do not exist at all (in the previous entry I showed how they can exist as mind-dependent relationships), I think that there's a lot of potential upside in scrapping the whole mysticism surrounding the terms. This poses some interesting problems for me.
The first is that I'm working on The Actualization Ethic, which is a moral system and, frankly, I don't want to give it up. I'm honest about it not being the one true way™ and, rather, adopt an "if you value X, judge action A positively and action B negatively" approach. I don't think moral eliminativism really throws a wrench in here too much, but it does scare me a bit. Maybe I need to grow up more.
The second is that I don't think it's possible to escape morality as a process. If morality is forming judgements about the actions of others, making statements of condemnation or approval to get others on your side, and applying punishment and reward so that others come to value what you value, then it's not going away even if it's conducted entirely through rational discussion. Evolution is a competition of material replicators for matter; morality is a competition of mental replicators for mindshare. Just because humans invented LASIK surgery doesn't mean evolution stopped, and just because humans rationally discuss things instead of shouting "good! bad!" doesn't mean morality stops. This stance alone prevents me from going full eliminativist.
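To make the replicator analogy concrete, here's a minimal sketch of values competing for mindshare. Everything in it (the agents, the attitude scale, the sway parameter, the update rule) is an invented illustration, not a model of real minds:

```python
import random

def simulate(n_agents=20, rounds=500, sway=0.1, seed=0):
    """Toy competition of 'mental replicators': each round a random speaker
    moralizes at a random listener, and approval/condemnation nudges the
    listener's attitude toward the speaker's."""
    rng = random.Random(seed)
    # each agent's attitude toward some action, from -1 (condemn) to +1 (approve)
    attitudes = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(rounds):
        speaker = rng.randrange(n_agents)
        listener = rng.randrange(n_agents)
        if speaker != listener:
            attitudes[listener] += sway * (attitudes[speaker] - attitudes[listener])
    return attitudes

final = simulate()
print(f"mean attitude after the shouting match: {sum(final) / len(final):+.2f}")
```

Swap in louder speakers or harsher punishments and the population's values drift accordingly; the competition doesn't stop just because the nudging gets polite.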
The third is that humans make decisions largely through habit, and I frankly don't find it practical to rationally figure things out in the moment, especially under time pressure. I think that acting unmindfully is unavoidable and the best that can be done, barring wetware augmentation, is to have well-thought-out judgements which are then cached. I'm still looking into what moral eliminativism has to say about this.
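In programming terms, this is just memoization: pay the deliberation cost up front, then act from the cache. A minimal sketch, where the scenarios and verdicts are placeholders I made up:

```python
from functools import lru_cache
import time

def deliberate(scenario: str) -> str:
    """Slow, careful moral reasoning -- far too slow to run mid-crisis."""
    time.sleep(2)  # stand-in for lengthy reflection
    verdicts = {
        "found a lost wallet": "return it",
        "asked to lie for a friend": "decline",
    }
    return verdicts.get(scenario, "no cached judgement; fall back to habit")

@lru_cache(maxsize=None)
def judgement(scenario: str) -> str:
    """First call pays the deliberation cost; repeats are near-instant lookups."""
    return deliberate(scenario)

judgement("found a lost wallet")         # slow: deliberation happens here
print(judgement("found a lost wallet"))  # fast: served from the cache
```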
Now, the big upsides - people can finally own their values when they stop pretending that they're doing anything other than projecting their values onto the world when making moral statements. This presumably opens the door for more dialog and idea trade rather than conversation-enders and brow-beating. Idea trade is a critical component of workable societies and, being a social creature, I have a vested interest in that.
Desirism : The Most Honest "Mainstream" Ethical Philosophy
I've been laying the groundwork for The Actualization Ethic for a while. Through lesswrong.org, I discovered a podcast run by Alonzo Fyfe and Luke Muehlhauser which discussed an ethical system called Desirism. It had a lot in common with my own Actualization Ethic but went a lot further in some cases and forced me to redesign a lot of my ethical system.
If I had to sum up the reason I like it so much in only a few words, those words would be "honest" and "explanatory". Desirism does away with Platonic forms and other magical, ungrounded terms and focuses on reductionist premises which can be backed by neuroscience. It's explanatory because, once one understands the basics, it predicts a lot of how humans will react to certain situations.
My largest complaint with it is that, like all ethical theories, it suffers from the problem of why anyone should adopt it and I've seen little evidence that it explains power disparities and the ability to have concurrent moral systems. The Actualization Ethic addresses these issues at the expense of being the "one true answer." It's influenced heavily by Desirism for descriptive positions, less so for normative ones.
Ought Implies Can
An apartment building is on fire and, on the top floor, a child is trapped. This child will die unless rescued. The fire department is unable to reach the top floor. You are on the street witnessing the fire. You fail to teleport the child out because, like all humans, you are not capable of teleporting matter*. Should a moral system consider that (in)action wrong?
There is a widely accepted ethical precept that ought implies can - that a moral system cannot "reasonably" make moral prescriptions which are impossible to follow. This is an almost universal human intuition. That doesn't guarantee it's correct, but it puts the onus on anyone who claims it's advisable to blame people for failing to do what they could not have done.
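In standard deontic notation (my formalization, nothing the precept's defenders are committed to), the precept reads:

$$O\varphi \rightarrow \Diamond\varphi \qquad \text{equivalently} \qquad \neg\Diamond\varphi \rightarrow \neg O\varphi$$

If $\varphi$ is obligatory, then $\varphi$ is possible; contrapositively, if $\varphi$ is impossible, no obligation to do it can attach. The bystander at the fire can't teleport, so there's no teleportation obligation for him to violate.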
It's not just intuitive, it's practical. Given how moral systems are used and the ever-hungry jaws of natural selection, it makes sense that functional moral systems require practicality - this, if for no other reason, is why practicality trumps idealism.
Viewing moral rules through the lens of virtue ethics (which Desirism has some nice overlap with) reveals yet another reason to accept ought implies can: you can't determine anything about the character of a person unless they have the opportunity to act, because values are expressed through actions! For instance, one isn't a noble pacifist because one doesn't know how to fight. One is a noble pacifist because one knows how to fight and chooses other actions. The first person may want to fight but realizes he can't win; others don't realize his inaction is a sign of an ignoble character.
Ok. So ought implies can makes sense... or does it?
You Don't Have a Right to Be Wrong
I'm a pretty easy-going person. I really try not to become so attached to my positions that I cannot abandon them in the face of new evidence. The fact that I've been changing my mind less often recently scares me because I can't tell whether that's indicative of more strongly bolstered ideas forming a highly-coherent conceptual framework or whether I'm just getting old and calcified in my thinking.
Regardless, one area where I'm practically a zealot is that of truth. I can't stand it when people are dishonest, especially to themselves about their motives. As a relativist, I'll grant that you may have different values than me, though I may not value them myself and may even have to do something about it if they're grossly incompatible with mine. Where we're going to have a big hurdle is if you're spewing a lot of bullshit. If you keep moving the goalposts, refuse to accept conclusions accordant with your stated beliefs, or just flat-out lie or dodge the question, then I'm going to call you out. I grant you no right to engage in such behavior.
There are NO Grounding Norms - The Cost of Action Indicates Values
Exploding Argumentation Ethics
Argumentation ethics is appealing because it claims to logically prove that, if you care about truth, logic, and consistency, you cannot engage in argument without accepting the corpus of libertarian theory. Now, I know what you're thinking: many an asshole has tried to hop the fence of Hume's is/ought problem without introducing an unjustified value-laden premise. This would be just another such case if it weren't for a bunch of loud libertarians rushing to Dr. Hoppe's defense.
The assertion isn't without its challengers. Bob Murphy and Gene Callahan criticized it pretty strongly, and Stephan Kinsella wrote a rebuttal to their criticism.
While I agree with much of what Murphy and Callahan wrote, most of their arguments relate to the physical and contextual requirements of argumentation. My approach is hopefully simpler: I want to attack the claims about the purpose of argumentation itself. If that falls apart, then anything contextually related to argumentation also falls apart.
The Limits of the Non-Aggression Principle
The non-aggression principle is a moral stance which asserts that aggression is inherently illegitimate or, to put things in terms I'd prefer to use, is a moral wrong. Taken no further, most people probably won't find fault with it. But wait! How does one define aggression? Threats? Force? Property? Depending on how these terms are defined, different results fall out.
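Here's a toy sketch of how the verdict flips with the definitions. The predicates and the example act are inventions of mine for illustration, not canonical libertarian doctrine:

```python
from dataclasses import dataclass

@dataclass
class Act:
    uses_physical_force: bool
    is_threat: bool
    crosses_property_line: bool

def aggression_narrow(act: Act) -> bool:
    # narrow reading: only initiated physical force counts as aggression
    return act.uses_physical_force

def aggression_broad(act: Act) -> bool:
    # broad reading: threats and property boundary-crossings count too
    return act.uses_physical_force or act.is_threat or act.crosses_property_line

# quietly walking across someone's land: force? no. threat? no. trespass? yes.
trespass = Act(uses_physical_force=False, is_threat=False, crosses_property_line=True)

print(aggression_narrow(trespass))  # False -> permitted under this definition
print(aggression_broad(trespass))   # True  -> prohibited under this one
```

Same principle, same act, opposite verdicts - everything hinges on definitions the principle itself doesn't supply.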
While I won't bring up much that hasn't been discussed before, I find it a useful exercise to point out to libertarians that things aren't as cut-and-dried as the ideologues among them like to believe. Ultimately, consequences matter, and adhering to a principle acontextually is not likely to get you what you want. Just look at how many people pay taxes to support an immoral war. "But I had to!" Bullshit! Go work under the table. You take the easy path because you care more about your concrete experiences than abstract principle. Welcome to humanity. That's also the reason why the Non-Aggression Principle doesn't work in many cases as stated by libertarians.
Still, given my stance on how humans need moral training, I think that the Non-Aggression Principle is good enough. As a desire utilitarian might say, people have reasons to promote the non-aggression principle in others.
Lying is Unlibertarian
Most, though not all, libertarians oppose fraud due to the effects it has on the property of others - namely deprivation through trickery. Many of these same libertarians don't have a problem with people spreading falsehoods about others to gain business or simply to cause others to lose business. Remove money from the equation and those same libertarians are even more ok with lying. The standard response is that "people have no right to their reputation - people have no right to what others think about them." I completely agree.
Overlooked in this view is the basis for determining property harm in the first place. If property harm can be summed up as a deprivation or degradation of the property such that it no longer satisfies the desires of the property owner (with a few clarifying sub-principles), and the self is assumed to be property, then the case can be made that lying is at least as unlibertarian as fraud.
Being Good in the Moment
All right. Pop quiz. The airport. Gunman with one hostage. He's using her for cover. He's almost to a plane. You're a hundred feet away. What do you do?
Ok, so maybe Speed isn't your thing. What about the numerous trolley problems used to evaluate moral reasoning? While fascinating in that they may indicate multiple centers of moral reasoning, including ones which lean more deontological and ones which lean more utilitarian, are they really useful normatively? Do moral systems really have to address highly unlikely edge cases?
As a computer programmer, I operate under the belief that the neocortex and limbic system behave differently and humans have a fast "cached pattern matcher" and a slower, but more flexible, deliberative thought capability. I'm not alone in this belief: see here and here.
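A minimal sketch of that two-system picture, with the dispatch keyed to how much time is available. The cached reactions and the one-second threshold are assumptions I picked for illustration:

```python
import time

# fast system: precompiled reactions, instant but inflexible
CACHED_REACTIONS = {
    "child runs into traffic": "grab the child",
    "someone is drowning": "call for help and act",
}

def deliberate(situation: str) -> str:
    """Slow system: flexible reasoning that needs time it may not have."""
    time.sleep(1.0)  # stand-in for actual reflection
    return f"carefully weighed response to: {situation}"

def respond(situation: str, seconds_available: float) -> str:
    if seconds_available < 1.0:
        # under time pressure: pattern-match against the cache or fall back to habit
        return CACHED_REACTIONS.get(situation, "freeze / default habit")
    return deliberate(situation)

print(respond("child runs into traffic", seconds_available=0.2))   # fast path
print(respond("should I take this job?", seconds_available=3600))  # slow path
```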
Furthermore, as a longtime observer of humans, I find that rationalization (not owning one's values, clouding the issue with excuses) is very prevalent in the species. Given that people can usually find some weird way to make a means to their end justified to themselves or others, can humans be trusted to be good using a system which employs rationality in the moment? Add in the time pressure on good decisions (satisficing) and there's a recipe for a lot of guesswork, inconsistency and, I ultimately figure, suffering.
Why not ditch the edge cases and focus on the practical? If Haidt's Elephant and the Rider metaphor is viable, then morality should be ingrained as proclivities and aversions deep in the mental substrate so they become the ends that the rational mind is attempting to justify. This is far stronger and more predictable than doing it in the rational mind, even if it misses some edge case. It still requires the rational mind to figure out which values should be ingrained, but it only calls on the rational mind when not under time pressure.
The Harm Test
People love to claim harm and many moral systems weigh harm strongly in their considerations rather than looking at what people have reasons to want to promote or inhibit in others.
Judging solely on harm creates moral systems many would balk at. I can show this through something I call The Harm Test - a scientific test for the notion of harm. This dovetails nicely with my view on the subjective/objective divide not being an impenetrable wall.
Harm must ultimately have a subjective component because "one man's poison is another's pleasure." It must also have objective, verifiable components for any moral system which addresses harm claims to function properly. The purpose of the harm test is solely to determine whether someone is objectively causing the claimed harm. It places no restrictions on what one does or does not subjectively like - what they consider harm - nor does it demand that something which passes the test is considered good, bad, or neutral. It simply traces the causal links between the victim and the alleged perpetrator. If no such link exists, then objectively no harm can be caused by the alleged perpetrator.
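Treated as an algorithm, the test is just reachability in a causal graph: given the claimant's (subjective) harm and the accused's act, ask whether any causal chain connects them. A minimal sketch, with a toy graph I invented:

```python
from collections import deque

def causally_linked(causes: dict, act: str, harm: str) -> bool:
    """Breadth-first search: does any causal path run from act to harm?"""
    seen, queue = {act}, deque([act])
    while queue:
        node = queue.popleft()
        if node == harm:
            return True
        for effect in causes.get(node, []):
            if effect not in seen:
                seen.add(effect)
                queue.append(effect)
    return False

# toy causal graph: each event lists the events it directly causes
causes = {
    "A spreads a rumor": ["B hears the rumor"],
    "B hears the rumor": ["B shuns C"],
    "B shuns C": ["C loses business"],
}

print(causally_linked(causes, "A spreads a rumor", "C loses business"))    # True: a chain exists
print(causally_linked(causes, "A wears a loud shirt", "C loses business"))  # False: no link, no objective harm
```

The hard empirical work is, of course, establishing the graph's edges; the test only says that where no path exists, the harm claim fails objectively no matter how sincerely it's felt.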