Desirism: The Most Honest "Mainstream" Ethical Philosophy
I've been laying out the framework for The Actualization Ethic for a while. Through lesswrong.org, I discovered a podcast run by Alonzo Fyfe and Luke Muehlhauser which discussed an ethical system created by Fyfe called Desirism. It had a lot in common with my own Actualization Ethic but went much further in some cases and forced me to redesign a lot of my ethical system.
If I had to sum up the reason I like it so much in only a few words, those words would be honest and explanatory. Desirism does away with Platonic forms and other magical, ungrounded terms and focuses on reductionist premises which can be backed by neuroscience. It's explanatory because, once one understands the basics, it predicts a lot of how humans will react to certain situations.
My largest complaint with it is that, like all ethical theories, it suffers from the problem of why anyone should adopt it, and I've seen little evidence that it explains power disparities or the ability to have concurrent moral systems. The Actualization Ethic addresses these issues at the expense of being the "one true answer." It's influenced heavily by Desirism for descriptive positions, less so for normative ones.
Refresher Course: Know your Ethical Approaches
Virtue Ethics
Deontology
Act Utilitarianism
Rule Utilitarianism
Where they Fail
Virtue Ethics
Deontology
Act Utilitarianism
Rule Utilitarianism
Desires: Rules you can't Ignore
As described above, rule utilitarianism devolves into act utilitarianism if an agent has any desires which are stronger than following certain rules. The only way rule utilitarianism can work is if that's not the case. What rules are at least as strong to an agent as desires? Desires!
Moreover, desires are the only reasons for intentional action within an agent; gods, intrinsic value, and spooky dualism need not apply. As an aside: Desirism tends to focus on intentional action, which I think is overly limiting since it ignores the power of habit in forming desires that produce apparently intentional action from cached responses. I prefer the term dispositions, but I will speak in the language of Desirism on this page.
An agent has desires, and it acts so as to attempt to fulfill the strongest of those desires given its beliefs at the time. This would have no real connection with ethics if it weren't also the case that desires (or at least their expression) can often be promoted or inhibited. That gives people a way to foster rules in others which those others can't ignore.
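The belief-mediated action selection just described can be sketched in a few lines of code. This is only an illustration of the model, not anything from Desirism itself; every name, structure, and number below is invented for the example.

```python
def choose_action(desires, beliefs, actions):
    """Pick the action the agent believes best fulfills its strongest desires.

    desires: dict mapping a state of affairs to its strength for this agent
    beliefs: function (action) -> set of states the agent believes it brings about
    actions: list of available actions
    """
    def fulfillment(action):
        # Sum the strength of every desire the agent *believes* the action
        # will fulfill; beliefs, not facts, drive the choice.
        return sum(strength for state, strength in desires.items()
                   if state in beliefs(action))
    return max(actions, key=fulfillment)


# Example: an agent that wants coffee (strongly) and quiet (weakly).
desires = {"has_coffee": 5, "has_quiet": 2}
beliefs = lambda a: {"go_to_cafe": {"has_coffee"},
                     "stay_home": {"has_quiet"}}[a]
print(choose_action(desires, beliefs, ["go_to_cafe", "stay_home"]))
# -> go_to_cafe
```

Note that the agent here never deliberates about rules; the "rules it can't ignore" are simply whichever desires carry the most weight when an action is chosen.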
Good and Bad, Right and Wrong
Various moral systems define different ways of determining whether an action is right or wrong. Act utilitarianism, for instance, determines the rightness of an action by how much it leads to the overall good - where good is typically defined as pleasure, thriving, etc. Desirism's standard is simple: an act is good if it's what an agent with good desires would do; an act is bad if it's what an agent with good desires would not do.
Ok... so that just pushes things to the definition of "good desires." Desirism defines a good desire as one which tends to fulfill other desires; a bad desire is one which tends to thwart other desires. This beautifully simple definition maps nearly perfectly to human intuition about moral rules as well as to how moral systems are actually practiced.
Since desires are reasons for action and, as Desirism claims, the only reasons for intentional action, pissing other people off (by thwarting their desires) is a good way to give them reasons to try to change your desires. Conversely, doing things which others want to encourage is a good way to get encouragement from others to strengthen your desire to do that thing. That other moral systems gloss over this obvious truth is a travesty.
Even though Desirism is considered a form of rule utilitarianism, it has an almost virtue-ethics quality to it - people's actions are judged by what those actions say about a person's character (desires).
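As an illustration only, the evaluation of a desire by its tendency to fulfill or thwart other desires can be sketched as a signed sum. The desires, impacts, and magnitudes below are invented for the example, not taken from Desirism.

```python
def desire_value(effects):
    """Net tendency of a desire to fulfill (+) or thwart (-) other desires.

    effects: list of (other_desire, impact) pairs, with impact > 0 for
    fulfillment and impact < 0 for thwarting.
    """
    return sum(impact for _, impact in effects)


# An aversion to lying tends to fulfill desires that depend on trust...
honesty = [("keep_agreements", 3), ("hold_accurate_beliefs", 2)]
# ...while a desire to steal tends to thwart owners' desires.
theft = [("keep_property", -4), ("feel_safe", -2), ("get_stuff", 1)]

print(desire_value(honesty))  # -> 5: a desire to promote via praise
print(desire_value(theft))    # -> -5: a desire to inhibit via condemnation
```

The sign of the result maps onto the social response Desirism predicts: positive-valued desires attract praise and encouragement, negative-valued ones attract condemnation and punishment.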
Desires are Incommensurable from Outside of Agents
Too many moral systems try to rigidly define the good: pleasure, happiness, thriving, the ambiguous term well-being...
Desirism makes the claim that there is no intrinsic value; nothing has value in and of itself - at least outside of an agent. Even fulfilling desires doesn't have intrinsic value: it's possible to imagine someone who wants to sacrifice themselves to save someone else. Such an individual, being dead, will no longer be fulfilling desires. Another aside: it's a curiosity that action indicates desire, yet all action may be falsely said to indicate a desire to fulfill desires - I believe this is a conflation of two senses of the term desire.
This is where Bentham and Mill and, recently, Harris have gone off the rails. Fyfe pointed out to Harris that the well-being of conscious creatures is not the only thing to which agents ascribe value.
Due to the problem of incommensurability, Fyfe chose to simply weigh values equally in the abstract - which is one of the reasons the ethic falls under utilitarianism. It's not directly about maximizing fulfillment of desires, but is about maximizing desires which tend to fulfill desires (or, conversely, minimizing desires which tend to thwart desires). I think this approach is on the right track, but flawed as I'll discuss near the end of this page.
Putting it Together: Some Examples
The examples where it lines up with utilitarian thought are not very interesting since it's a utilitarian theory - the places where it lines up with deontology, virtue ethics, and weird human intuition are far more interesting. I only list those weird examples where an action which would normally be bad from a utilitarian perspective is correct in Desirism (presuming that it does actually fulfill more desires).
- A mob is descending on an innocent man and will lynch him. If the sheriff intervenes, the mob will riot and destroy the town. If defending an innocent man is a desire which should be encouraged due to its fulfilling more present desires, then the sheriff should protect the man at the expense of the town even though more immediate harm will result.
- A surgeon has five patients who each need a different organ replaced. A healthy person comes in for a checkup. He is an exact match for all five of the patients. Desirism would say that he shouldn't be killed to save the five, so long as a desire to kill innocent people to save others is not a desire which should be encouraged in order to fulfill other desires.
- A personal favorite of mine, and one I use in T.A.E. - lying to the Nazis about hiding people in your house. Lying tends to thwart more desires than it fulfills and thus should be discouraged.
Where Desirism Influences The Actualization Ethic
To shamelessly plug myself, some of these were already in T.A.E., but are listed here due to being in both. Others were not.
- An act is good if it indicates good desires in the agent performing it - T.A.E. focuses on habit building to create desires which work most of the time even if they fail sometimes instead of thinking things through in the moment.
- A desire is good if it fulfills other desires - T.A.E.'s standard is one of maximizing actualization which has some similarities with fulfilling other desires, but is sufficiently different to not be Desirism-compatible.
- Desires are incommensurable
- Desires can be modified by punishment/shame or reward/praise
- Desires are rules which an agent cannot disobey
- Intrinsic value does not exist
- The belief-desire-intention model - T.A.E. strongly discounts intention in favor of a more mechanical view of dispositions, states, and opportunities, more akin to Dispositions + Belief -> Desires; Desires + Belief + Opportunity -> Actualization, Cost
- The instrumental ought allows crossing the is/ought bridge, even though it still requires an ought in the premises; ought statements are is statements.
- Moral systems are objective, subjective, and relative but not absolute.
- Ought implies can
- Desires and their expression can sometimes be determined by proxies
- Effective means to ends is not a matter of subjective opinion - it has a mind-independent truth value.
- Some of the theory of mind indicated by Desirism - T.A.E. strongly identifies with a theory of multiple minds in a human and the effectiveness of cognitive behavioral therapy and habits/rituals.
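As a loose, invented illustration of the mechanical pipeline in the list above (Dispositions + Belief -> Desires; Desires + Belief + Opportunity -> Actualization, Cost), here is one way it might be modeled. None of these names or structures comes from T.A.E. or Desirism; this is just a sketch of the shape of the mapping.

```python
def derive_desires(dispositions, beliefs):
    """Dispositions are standing tendencies; a disposition only becomes a
    live desire when beliefs make its trigger condition relevant."""
    return {target: strength
            for target, (strength, trigger) in dispositions.items()
            if trigger in beliefs}


def actualize(desires, opportunities):
    """Act on the strongest live desire for which an opportunity exists.
    Here the 'cost' is simply every weaker desire left unpursued this step."""
    feasible = {t: s for t, s in desires.items() if t in opportunities}
    if not feasible:
        return None, list(desires)
    chosen = max(feasible, key=feasible.get)
    cost = [t for t in desires if t != chosen]
    return chosen, cost


dispositions = {"help_neighbor": (3, "neighbor_in_need"),
                "read_book": (1, "has_free_time")}
beliefs = {"neighbor_in_need", "has_free_time"}

desires = derive_desires(dispositions, beliefs)
action, cost = actualize(desires, {"help_neighbor", "read_book"})
print(action, cost)  # -> help_neighbor ['read_book']
```

The point of the sketch is the two-stage structure: intention never appears as a primitive; action falls out of dispositions filtered by beliefs and bounded by opportunity, with cost as a byproduct.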
The Glaring Hole
The glaring hole I'm finding with Desirism is not that it's trying to be an objective moral system, because in the most limiting sense of objective-as-mind-independent, it's not. The hole relates to how desires are weighed.
In other parts of the site, I make the claim that desires are incommensurable without an agent. That's completely true. However, they are commensurable within an agent. A functional moral practice requires agents and will necessarily favor the desires its adherents tend to have. Desirism rightly recognizes that any foundational desire (if it isn't a damped circular reference) will be axiomatic at some point. It's not possible to create a complete, non-abstract moral system with an "insert foundational desires here" placeholder. Fyfe eliminates the arbitrariness by claiming that all desires are to be considered. This prevents drawing an arbitrary line and leaves the moral system up to what agents actually want.
That sounds good, but it's not how the world works. There are at least two heavily-practiced ways of drawing boundaries around desires even if the desires themselves are arbitrary: cost and possibility. There are numerous desires operating in an agent's mind simultaneously. In the abstract, one only stops working toward something when countervailing desires collectively become stronger or the opportunity disappears (such as is the case when natural selection says "no"). Costs circumscribe action. Possibility circumscribes action. Though desires themselves could be called subjective, their relationship to other desires and the power agents have to bring them to bear are objective, testable boundaries.
Note the pointed parallels between the lifecycles of biological and memetic entities: both are in-group and out-group competitions for resources. The boundaries of conversion to any particular entity or type of entity are determined by the cost of such a conversion: at the point where it becomes too hard for cheetahs to catch additional gazelles, the boundary of conversion of gazelle into cheetah becomes defined.
That's also exactly the way it functions between the memes of moral systems within morality, so long as one is willing to admit that there can be (and are) multiple moral systems operating concurrently. I know of no other situation which makes this as clear as human treatment of animals. Not surprisingly, this is an area where Fyfe has given unsatisfactory answers.
According to Fyfe, animal desires must be weighed (by humans). Why? If desires are the only reasons for action which exist and humans don't have desires to weigh certain animal desires, then there are no reasons which exist to weigh certain animal desires. To claim that there are is a false statement and highlights the biggest recurring problem with objective moral systems - the jump to caring about others.
In practice, animals are mistreated or at least eaten (which I think it's reasonable to assume is against the wishes of the animal - especially informed animals). Humans make up bogus reasons to justify eating animals instead of the honest one, which is "meat tastes awesome and we don't have to care about what the animals think." The second part of that justification boils down to the observation that neither animals nor their human advocates bring sufficient corrective force to bear on meat-eaters by changing beliefs, removing opportunities, or causing other desires to be conditionally fulfilled.
The consequences of this view are as follows:
- Multiple moral systems can operate concurrently
- Force can prevent other moral systems from gaining mindshare or can be used to capture mindshare
- Agents will only be as good (accordant with a moral system's rules) as they can afford to be (this relates to ought implies can)