Great post, Amos!
🫶
I've never felt comfortable with the argument that expected-value calculations justify giving priority to moonshots. I have made some counterarguments previously, but another occurred to me just now: the economy, which is how we accomplish most things, doesn't really run on moonshots; most of the work is done in pursuit of much surer things.
I do think EA as a whole sees the moonshot as one tool among many, and accordingly a lot of effort goes into higher-probability work.
An interesting corollary of the EV argument is that perhaps we would all be wealthier if more money were put into market moonshots. It's an interesting question whether we have the right ratio of unicorn-chasing, both in the economy and in EA.
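To make the expected-value framing concrete, here is a minimal Python sketch; the probabilities and payoffs are invented purely for illustration and are not drawn from the post or the thread.

```python
# A toy expected-value comparison. All numbers here are hypothetical,
# chosen only to show the structure of the moonshot-vs-sure-thing argument.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value of a single all-or-nothing bet."""
    return probability * payoff

# A "moonshot": tiny chance of success, enormous payoff.
moonshot_ev = expected_value(0.001, 1_000_000)   # 1000.0

# A "sure thing": near-certain success, modest payoff.
sure_thing_ev = expected_value(0.95, 1_000)      # 950.0

print(f"moonshot EV:   {moonshot_ev:,.0f}")
print(f"sure thing EV: {sure_thing_ev:,.0f}")

# On raw EV the moonshot narrowly wins here, yet roughly 999 out of 1000
# such bets pay nothing -- which is the intuition being pushed on above:
# neither an economy nor a movement can be built mostly out of 0.1% bets.
```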
Good point. For what it's worth I agree that Swinburne "is the most important living Christian philosopher and philosophical theologian"
"Anyway, at one point, Swinburne tries to explain why God gave us imperfect knowledge about the causes and effects of our actions ..."
Unfortunately this really does not address the problem of evil. Much of the evil in the world happens because of people who are sure of themselves, or who simply don't care what the effects of their actions are.
yeah, this is only supposed to be like 0.02% of swinburne's theodicy
I'm willing to bet the remaining bulk is not going to answer the problem either.
I agree (for the most part — there are some real insights, but I don’t buy the whole package.)
That's true, but only if her *overriding/ultimate* purpose is to be a morally good person with concern for ethics, i.e. virtue, rather than doing good in the world.
It seems self-evident to me that beneficence is more desirable as an ultimate aim than virtue (whose value is instrumental), and that easy moral choices which lead to better outcomes are as ultimately desirable as hard ones -- outside our limited human minds, concerned as they are with self-image and social status.
Pursuing "being a moral person" seems to me not a particularly moral goal but more of a self-development or well-being one. More like staying fit (similar, in fact, to not smoking) than feeding the hungry.
It's very possible I'm getting this wrong by using non-philosophical frameworks.
That's an interesting thought. Ultimately it depends on your moral framework: under utilitarianism, for example, being virtuous would be synonymous with having a character that makes you likely to undertake actions which produce the most welfare.
Christianity adds a further twist: being virtuous might have extremely valuable effects in the afterlife, regardless of how much good it produces in this life. Different theologies take different views on what exactly this would mean.
Yes, so it seems to me that under utilitarianism the "good character" is the tool (the means, the proximate aim, whatever philosophers call it), not the ultimate moral purpose.
Obviously, any system that treats actual life on earth as some kind of test or moral-strength/virtue challenge to "pass" is an entirely different matter.
As an aside, I kinda buy into virtue "ethics" in the Aristotelian(ish) sense of "flourishing" etc. as a general "how to live one's life" project. But it seems to extend the notion of morality/obligation beyond what feels like the realm of morality -- for example, we might end up with the idea that wasting one's talent, or killing oneself (even if one has no obligations to dependants), is a MORAL failure, a failure of obligation. Which is close to suggesting that our life is never fully our own. Which is not implausible (humans are so hypersocial, after all) but also hard to accept -- this extension of obligation beyond others to oneself. Ouch.
Generally I take morality to refer to rules or principles by which one OUGHT to live.
In that sense, "how to live one's life" (or "flourishing") is very much what morality is about.
OUGHT one "waste one's talents" or "kill oneself"? Aren't those personal decisions dependent on surrounding circumstances? These choices depend on the judgment of the one affected by them. Isn't it oppressive to impose one's choices on another?
If a person is making one of these decisions, they OUGHT to take into account how their act would affect others. But others OUGHT NOT interfere unless the contemplated act would cause unnecessary harm to third persons.
Being a good person means doing no unnecessary harm to others. It IS just that simple.
OK, but which is the ULTIMATE aim and which is the means (the instrumental aim) of achieving it? If they are 100% the same, then it's a moot point / hair-splitting. But I don't know that many people genuinely think or feel that way. For example, I think many would argue that avoiding a harm -- the same harm -- that is difficult for the avoider makes them a "better" person than if that avoidance were easier, even though the good achieved is exactly the same.
Trying to be a "better" person as compared to others is suboptimal.
Trying to be a "better" person as compared to one's previous state is desirable. This is self-improvement; always a good thing.
If one acts from two different aims ("avoiding harms" and "self-improvement"), why must either be the ultimate aim and the other instrumental? I see no difference anyway.