I identified as a utilitarian for most of my life but stopped after SBF.
My main critique of utilitarianism is that as an ideology it is criminogenic. EA generated at least three crypto criminals over a one-year period, and SBF himself was the child of two utilitarian philosophers. The odds ratios here are undeniable.
I don’t really know what you do with that, but it’s made me more sympathetic to folk skepticism around it.
The issue seems to be that in utilitarianism, it is easy to come up with plausible-sounding self-justifications for antisocial behavior. If the rules are simple and apply to everybody equally (deontology), then there's no way to come up with a convincing self-interested excuse.
I've never bothered to justify myself. If people don't like what I do, they're welcome to commit suicide.
hard to disagree with that
If I had a dollar for every time someone read Nietzsche and came away with the wrong conclusion that master morality is 'good' and slave morality is 'bad', well...
(Americans, in particular, seem highly predisposed to a moralistic worldview, that is to say, treating moral lenses on society as primary.)
Sorry, bad take for several reasons.
>> "Charity exists to expand structures that are dysfunctional...if foreign aid was able to lift the poor out of poverty, it would be called loans and investments."
The charities EAs support are things like malaria nets for children and iodine for pregnant mothers in deficient areas. How do those "expand structures that are dysfunctional"? How would you structure these as a loan? Tell an illiterate pregnant woman in rural Pakistan "I promise you that this mysterious pill you know nothing about will increase the IQ of your child by, on average, 4 points - after twenty years, please test your child's IQ, test against the counterfactual, and pay me back the 40 cents that it cost"? Obviously in a perfect world, illiterate Pakistani women would themselves realize all of this was true and important and pay 40 cents for the pill, but American women have 1000x the resources that Pakistanis do and still don't take choline during pregnancy, so it doesn't seem like everyone is being a perfect rational actor here.
For a while everyone was trying microloans/microcredit, but then the evidence came in that this didn't work very well. I talk slightly more about considerations around capitalism vs. charity at https://www.astralcodexten.com/p/does-capitalism-beat-charity .
>> Note that none of the issues that they try to tackle are right wing - eugenics, deregulation, obesity prevention, reducing microplastic contamination, and banning toxic pesticides and additives.
A bunch of effective altruist money has gone into several of those causes - for example, EAs were the first funders of the YIMBY movement, one of the most important deregulation pushes of our time. EAs were one of the main funders of the charter cities movement, though Pronomos has since taken over. I personally put $100K of grant money into various embryo screening initiatives (I won't speak for anyone else, but consider that not everyone who does this announces it to the world!). EAs got national recognition for their work removing toxic heavy metals from the spice supply.
I agree not all of these were exactly framed in the way that maximally appeals to right-wingers. But think about what your leftist counterpart would say (and has said, dozens of times) - that no EA money goes into leftist causes like increased abortion access, defund-the-police advocacy, DEI programs, or organic farming. This is also true. The terminally-politics-brained person always looks at everything through the lens of politics - if you're not pursuing our left-wing agenda, it must be because you're an evil right-winger, or vice versa. I like EA because it's actually trying to do what's effective (mostly extremely boring infectious-disease-related technologies) and not what will placate various tribes of politics-brained morons. Like, seriously, you expect lobbying for removing microplastics from things, in the First World, before anyone has done decent research on how and whether they're bad, to be one of the most effective and evidence-based causes in the world, while a bunch of people are still eating lead or drinking from sewage-contaminated water?
>The charities EAs support are things like malaria nets for children and iodine for pregnant mothers in deficient areas. How do those "expand structures that are dysfunctional"?
Less death in a post-1800s undeveloped area means a larger population but not better infrastructure. Though interventions that increase health/intelligence should lead to better infrastructure, at least to some degree.
>How would you structure these as a loan? Tell an illiterate pregnant woman in rural Pakistan "I promise you that this mysterious pill you know nothing about will increase the IQ of your child by, on average, 4 points - after twenty years, please test your child's IQ, test against the counterfactual, and pay me back the 40 cents that it cost."
My mind initially drifted towards the government -- they could be loaned $10 million to spend on whichever nutrients the region is deficient in -- but then I realized that was a terrible idea because of rampant corruption. So I guess outside institutions are the best option.
>EAs were the first funders of the YIMBY movement, one of the most important deregulation pushes of our time
I consider YIMBY to be left-wing coded, though it does qualify as deregulation.
>EAs were one of the main funders of the charter cities movement
That is a good example of a right wing cause. I'll edit that in.
>I personally put $100K of grant money into various embryo screening initiatives (I won't speak for anyone else, but consider that not everyone who does this announces it to the world!)
I'm not too close to the embryo selection people, but I wouldn't doubt that wealthy people adjacent to EA/rationalism fund the process covertly.
>I agree not all of these were exactly framed in the way that maximally appeals to right-wingers. But think about what your leftist counterpart would say (and has said, dozens of times) - that no EA money goes into leftist causes like increased abortion access, defund-the-police advocacy, DEI programs, or organic farming.
The skew of the causes is still towards the left, reflecting their community, though I will concede that they do avoid funding clearly bad causes on the left (as in, DEI is clearly bad in a way that funding AI risk research is not).
>Like, seriously, you expect lobbying for removing microplastics from things, in the First World, before anyone has done basic research on how and whether they're bad, to be one of the most effective and evidence-based causes in the world?
That is a fair point, it would be better to fund research on them first.
Sorry, but this is all kinds of confused. Your presented argument isn't valid or even clearly constructed.
> People have values and preferences.
Yes, but it's hard to see how this could be a premise in an argument for utilitarianism. But let's consider it anyway and see if you can show that it is a good premise.
> There are states of existence that maximize those values and preferences.
Doubtful, depending on what you mean by "maximize." Due to the nature of infinity, it's likely that for any state of affairs, there's always some state of affairs that's more valuable than it. Also, "states of affairs" is better and more precise terminology because it might otherwise sound like you're arguing for ontological pluralism.
If by maximizing you mean something more like what is within our power to do, then you should make that clarification, as it is otherwise ambiguous and confusing.
> Those states are independent of the values of any one person.
You went from saying "values and preferences" to just "values." You should keep saying "values and preferences" to be precise or use some variable. To make it less ambiguous, you should also say "states of affairs."
> “Morality” (here defined as a rule or action that contributes to the maximized state of existence) is therefore objective.
Hm? Not only does this not follow from your premises, but based on the context of the post it sounds like you're trying to argue for utilitarianism, not objective morality; yet you've suddenly switched to arguing for objective morality anyway. You also shouldn't stipulate definitions in the middle of a formal argument like this.
Let me try cleaning it up a little using some of my suggestions. The argument is still terrible and doesn't even argue for utilitarianism, but at least it will be valid and less ambiguous.
1. People have values and preferences.
2. There are possible states of affairs where we can maximize those values and preferences.
3. Those possible states of affairs are independent of the values and preferences of any one person.
4. If people have values and preferences, there are possible states of affairs where we can maximize those values and preferences, and those possible states of affairs are independent of the values and preferences of any one person, then morality is objective.
5. Morality is, therefore, objective.
(1) is obviously true and uncontroversial.
(2) I'm still unsure of what "maximize those values and preferences" means, but we'll come back to it later.
(3) Basically, yes. Even if my maximized value and preference is to go to the moon, there are possible worlds where I go to the moon and it's not my maximized value or preference. It's still difficult to see how this could lead to objective morality or utilitarianism, but we'll see with this last premise.
As much as I dislike referencing the is-ought gap because it is frequently misused, (4) commits the fallacy. The descriptive point that we can maximize values and preferences is irrelevant to whether we *should* maximize those preferences.
Further, even if we were to make this an argument for utilitarianism by replacing "morality is objective" with "utilitarianism is true" in the fourth premise and "morality is, therefore, objective" with "utilitarianism is, therefore, true," the argument is still awful. Utilitarianism is not the idea that we should maximize our values and preferences. Utilitarianism says we should maximize utility regardless of our values and preferences, even if those values and preferences are deontological. Deontologists and others can also accept (1-3) without contradicting deontology. Non-utilitarians can accept that we can maximize our values and preferences, even if they think that's wrong. Even if we fix the is-ought gap in your argument, non-utilitarians also accept that we should fulfill our non-utilitarian obligations independently of our values and preferences, so this does nothing to contradict them.
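For what it's worth, the validity of the cleaned-up version is purely structural: it is just conjunction introduction plus modus ponens. A minimal sketch of that point (my own addition, with the premises left as opaque propositions):

```lean
-- P1–P3 stand for premises 1–3; Obj stands for "morality is objective".
-- Premise 4 is the bridge: (P1 ∧ P2 ∧ P3) → Obj.
variable (P1 P2 P3 Obj : Prop)

example (h1 : P1) (h2 : P2) (h3 : P3)
    (h4 : P1 ∧ P2 ∧ P3 → Obj) : Obj :=
  h4 ⟨h1, h2, h3⟩
```

So all the philosophical action is in the bridge premise (4), which is exactly where the is-ought problem bites.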
> deontology is arbitrary, natural law is for the religious, and the social contract is a universalist illusion.
Not at all; yes and no; and yes, it's an illusion, but I don't know what being "universalist" has to do with it.
Deontological side constraints on utility maximization aren't fabricated from thin air. They're based on intuitions, just like utilitarianism. Natural law (as an ethical rather than legal or political theory) attempts to use common secular intuitions to derive religious moral views, so it doesn't just appeal to divine revelation or anything. Also, just because it's religious doesn't mean it's wrong. I don't know what "universalist" means in this context, so I refer you to my initial response.
I don't have the time to answer the rest of this post right now, but you should read Michael Huemer's "Knowledge, Reality, and Value" to learn more about philosophical arguments and formulate them better. Here's an interview he's done on the book if you're curious. https://www.youtube.com/watch?v=-I_LIchUKwk
>Doubtful, depending on what you mean by "maximize." Due to the nature of infinity, it's likely that for any state of affairs, there's always some state of affairs that's more valuable than it.
This is a somewhat pedantic criticism. Perhaps there is no "ultimate" state of affairs because of the nature of infinity, but I was communicating the idea that some states are better than others.
>Utilitarianism says we should maximize utility regardless of our values and preferences
Not if "utility" is defined as the aggregation of those preferences.
>deontology is arbitrary
Yes it is. If the rules are not arbitrary, they have to be based on something else. And if they are based on something else, then they are not deontological. Kant's categorical imperative, for example, is covert rule utilitarianism.
A few years ago I would have made the mistake of going into painstaking depth in the original post as to why every other system of morality fails, and of attempting to respond to every single one of your criticisms, but I do not think there is any value in doing that. Even though I think utilitarianism, as a moral theory, is clearly correct, I am not really interested in convincing anybody that it is true, and in the post I more or less disavowed the effect it has on the way people reason morally.
>Not at all; yes and no; and yes, it's an illusion, but I don't know what being "universalist" has to do with it.
I specified universalist because social contract theory posits that w, x, y, and z are all in it together, but in practice what usually happens is something like: w is screwing over x, x is screwing over y, and z has fled to the mountains. I suppose "universalist" in this context is somewhat confusing without additional details.
>Utilitarianism is not the idea that we should maximize our values and preferences. Utilitarianism says we should maximize utility regardless of our values and preferences, even if those values and preferences are deontological.
Preference utilitarianism exists: https://en.wikipedia.org/wiki/Preference_utilitarianism
But I otherwise agree that the argument is invalid.
Yeah, but he said “preferences and values” so it wasn’t clear. Perhaps if he had specified that he was defending a desire-based theory of utilitarianism (preference utilitarianism) then I would have thought of that.
I despise Benthamite utilitarianism but enthusiastically endorse subjectivist consequentialism - persuading people of the worthiness of a considered course of action based on its consequences’ consistency with their individual value set.
I think morality is kayfabe nonsense and don't like other people or care what happens after I'm dead. In fact, I block people who try to argue moral philosophy, it's gay nonsense on the level of free will and God. Anyone who has an opinion on it just isn't welcome to interact with me, I don't value their opinions or lives.
Interesting podcast criticising EA. Not about eugenics :( but still worth listening to: https://open.spotify.com/episode/3TR5UQYxj9IRhW9WL5LqCf?si=ZO6isyUWRHujMeFd9LICQQ&t=3967&pi=5FuG17SUSAeCW
>The problem is that they do not want to advocate for them because they are either libtards (so not they are not altruists) or because they do not want to signal being right wing (still not altruists).
The actual reason is largely that it's much harder to reliably quantify the benefit gained per marginal charity dollar spent on these causes, and their methodology is based around focusing on easily quantifiable things. (I think AI risk is sort of the odd man out for them in these terms, but they get more severe public criticism for their beliefs here from leftists than from any rightists; the leftists in fact read EA beliefs as a right-wing dogwhistle, however delusionally. See Timnit Gebru and the like.)
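To make "quantify the benefit per marginal dollar" concrete, here is a toy sketch of the kind of back-of-the-envelope comparison EA charity evaluators run. Every number below is a placeholder I invented for illustration, not a real estimate:

```python
# Toy cost-effectiveness comparison. All figures are invented placeholders.
hypothetical_causes = {
    # cause: (cost per person reached in $, estimated DALYs averted per person)
    "malaria nets": (5.00, 0.05),
    "microplastic lobbying": (50.00, 0.001),  # effect size essentially unknown
}

for cause, (cost, dalys_averted) in hypothetical_causes.items():
    cost_per_daly = cost / dalys_averted
    print(f"{cause}: ~${cost_per_daly:,.0f} per DALY averted")
```

The point is not the specific numbers but that the first row can be backed by a large trial literature while the second is a guess, which is exactly why the methodology keeps steering money toward the boring, measurable stuff.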
Utilitarianism is basically a heuristic; it isn’t even well-defined enough to be called a philosophy, in my opinion. You can’t quantify harm because everyone experiences phenomena differently. And with this, you’re left with numcel window dressing on an ethical system just as aesthetic in origin as the Nicomachean Ethics or the Categorical Imperative. Furthermore, utilitarians are forced to create more axioms to avoid exactly what you propose, that nuclear war is ethical in the long term; namely, the notion that even the worst life is more desirable than no life at all. But this is obviously problematic for many of their positions, such as supporting voluntary euthanasia and being against animal domestication. The Buddhists realized it early on: suffering in this world is infinite; you can’t add or subtract from it in the long run. Everything goes to shit. Of course, everything that goes to shit eventually sprouts flowers, but it becomes all so tiresome. Moral consequentialism from this perspective is unwholesome because it is grounded on the false premise that you can control what happens “out there”. It’s unnecessary attachment.
All that being said, it’s still maybe a good way to operate a state that is stable and gets things done, even though, as you and others point out, it can still easily be used for selfish or clannish interests.
>Furthermore, Utilitarians are forced to create more axioms to avoid exactly what you propose, that nuclear war is ethical in the long term. Namely, the notion that even the worst life is more desirable than no life at all.
No utilitarian I know of thinks that the worst life is more desirable than no life. Maybe if you look really hard you can eventually find one, but it's crazy to suggest that this is taken as some standard entailment of utilitarianism. Actually, the closest thing I can readily think of is the exact opposite - David Benatar arguing that *any* realistic life is worse than no life, and his views are extremely fringe.
Regarding your scepticism that EA makes progress on animal welfare: https://open.substack.com/pub/farmanimalwelfare/p/ten-big-wins-in-2024-for-farmed-animals?utm_source=share&utm_medium=android&r=7h8xz
Utilitarianism makes the most sense, but is not true. Btw, I think someone should work on the whole adding-genetic-interests-to-utilitarianism thing that Emil was thinking about: https://www.emilkirkegaard.com/p/adding-genetic-interests-to-utilitarianism
Didn’t you reject objective morality in a different post on 10 pieces of advice? Incidentally I can’t find the piece anymore so I imagine you deleted it, or posted it on a different Substack, or I’m just confused.
Different substack: https://sebjensen.substack.com/p/what-i-have-learned-in-20-years-of
I changed my mind.
Then I'm unsubscribing. I'm not interested in this masturbatory, retarded subject.
ok
Your 4-step proof contains an assumption that begs the question/undermines the conclusion. You defined morality as 'a rule or action that contributes to a better state of affairs.' But this assumes that "better" means "more aligned with preferences" without any justification for that definition.
The ability to measure preference satisfaction objectively doesn't make morality objective unless we first prove that preference satisfaction is the correct measure of moral worth. In fact, this can actually be an argument that "morality" is subjective:
1. There are multiple competing frameworks for what makes something "better" or "morally good" (e.g. maximizing autonomy, promoting "excellence", universal applicability, etc.)
2. We have no objective way to measure what framework is "better"
3. Morality is subjective
1. There are multiple competing theories of what happened in the spot I’m standing exactly 1 billion years ago
2. We have no objective way of determining which of these is correct
3. The past is subjective
The past is different from moral values. Moral truths are abstract concepts, whereas the past consists of concrete events. There is something objective that happened 1 billion years ago (we assume). We know there is an objective past (unlike morality), so the different theories are subjective ways of determining an objective truth. The different moral frameworks are trying to determine an abstract concept, which we have no reason to say is objective.
You should know this was a silly counter-analogy.
You’re just straight-up begging the question
Any ideology that optimizes for increasing the quantity of human biomass (including utilitarianism) is bad. We need to optimize for quality of life and the progress of mankind, which is best served by a medium sized, high quality population.