Derivative Value


While some groups seek to build societies with equality of opportunity and others with equality of outcome, utilitarians generally seek equal consideration of interests: assigning different beings moral weight based on their capacity for experience.

This sort of well-being, however, is rather difficult to quantify, and if you can't quantify it you probably can't maximize it. A common workaround is to assume everyone's capacity is equal. That works well enough for large groups of average people, but it can't handle animals, and once you have more information, the assumption that all humans have equal capacity for well-being isn't always easy to justify.

While we can't measure subjective well-being directly, it hardly seems plausible that the life of a person who remains miserable has as much value as the life of a person who lives happily. In healthcare, the Quality-Adjusted Life Year (QALY) is a metric by which treatments can be rationed. By surveying thousands of people to see how they rate different health outcomes, how many years of life they would trade for higher quality of life, and what survival odds they would gamble for it, we get a scaled window into how people actually value the states they live in.
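
To make those elicitation methods concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, not an official QALY instrument): the time trade-off and standard gamble answers each pin down a quality weight between 0 and 1, which then scales years lived into QALYs.

    def tto_weight(years_in_state, equivalent_years_full_health):
        """Time trade-off: if 10 years in a health state feels equivalent
        to 7 years in full health, the state's quality weight is 0.7."""
        return equivalent_years_full_health / years_in_state

    def sg_weight(indifference_probability):
        """Standard gamble: the weight is the cure probability p (with
        risk 1 - p of death) at which the respondent is indifferent
        between taking the gamble and staying in the state for certain."""
        return indifference_probability

    def qalys(weight, years):
        """QALYs = quality weight x years lived in that state."""
        return weight * years

    # Hypothetical respondent: indifferent between 10 years with a
    # chronic illness and 7 years in full health.
    w = tto_weight(10, 7)    # 0.7
    print(qalys(w, 20))      # 20 years in that state ~ 14.0 QALYs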

Ultimately, though, why assume that in full health everyone sits at the exact same point on the scale? People in full health trade away years of life for drugs that make them feel good in the moment. If those decisions are rational, the well-being scale has to go higher than 1 QALY per year in full health to explain them. If instead you consider such people irrational, or selfish toward their current self at the expense of their future self, then their QALY survey answers should be called into question too.
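
Spelling out that arithmetic with made-up numbers of my own:

    # Illustrative only: if a person rationally prefers 10 years on a
    # drug over 13 years in full health without it, their per-year
    # well-being w on the drug must satisfy 10 * w >= 13 * 1.0.
    years_on_drug, years_full_health = 10, 13
    w_min = years_full_health / years_on_drug
    print(w_min)  # 1.3 -- above the nominal ceiling of 1 QALY per year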

Many drugs work by mimicking or blocking neurotransmitters in the brain. Since people differ, their brain chemistry differs slightly, and you should expect some people to get more out of gaining one QALY than the average person does. Because this variation hasn't been quantified, QALYs remain a good metric: within a single species it is probably reasonable to assume the variance is not large on average. From this it seems fair to say that the inherent value of different humans is roughly equal: when people are physically and mentally healthy, it makes sense to act as though they have the same capacity for happiness and suffering. On the other hand, it doesn't make sense to trade the life of a suicidal person for that of someone with a very strong will to live.

However, the inherent value of the people alive now is not what matters most. If humanity isn't wiped out, there will be many more beings in the future, and their lives will come with their own inherent value. The ability of those alive now to affect the well-being of others is the derivative value of their lives.

Because the derivative value of many beings is likely to matter much more than their inherent value, when comparing interventions that save different beings, don't just tally QALYs or try to pin down exactly how much you value animals versus humans: also look at how those saved will impact the future.
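
As a toy sketch of that comparison (my own framing and assumed numbers, not a method from any source): score each intervention by the QALYs it saves directly plus a confidence-weighted estimate of the QALYs its beneficiaries will cause downstream.

    def intervention_value(direct_qalys, downstream_qalys, confidence):
        """Inherent value (QALYs saved directly) plus derivative value
        (QALYs the saved beings cause for others), discounted by our
        confidence in the downstream estimate."""
        return direct_qalys + confidence * downstream_qalys

    # Hypothetical: extending a great researcher's career has modest
    # direct impact but large, highly uncertain downstream effects.
    researcher = intervention_value(10, 10_000, confidence=0.01)  # 110.0
    direct_aid = intervention_value(100, 50, confidence=0.50)     # 125.0
    print(researcher, direct_aid)

Under different confidence assumptions the ranking flips, which is exactly the tension between far-future and present-day focuses discussed below.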

For most animals, you can probably safely assume that almost all of their value is inherent. Except when a species is endangered, or an animal has a large impact on humans (therapy, happiness, attacks, disease transmission, etc.), the life or death of any given animal is replaceable with respect to the far future. A happy cow will be a happy cow, and if this cow weren't here the market would have produced another; a sad rat will be a sad rat, and if this particular sad rat weren't here, evolution would soon fill the resource niche with a similar one, slightly better adapted on average.

For people there can be immense variance. Some people have saved hundreds of millions of lives; some have killed tens of millions. It is much harder to model the impact of specific humans, since our intelligence lets us do many different things, be replaceable in many different ways, and build convoluted causal chains to our impact, but our derivative value is still worth considering. All else being equal, it is better to save those who have lived ethical lives than those who have harmed others.1 Interventions that extend the cognitive lives of great scientists and researchers could be extremely important, since they let more knowledge accumulate in individuals and extend the peaks of their performance.2 Designing governments to prevent violent revolution and to keep the worst kind of leaders out of power is on a similar scale of importance, especially since such leaders don't just kill a lot of people: they can also disproportionately harm intelligent people and set development back for generations.

Obviously this sort of derivative-value reasoning also applies to Artificial Intelligence, and it is why a lot of Effective Altruists care about AI.3 For those more focused on people alive today, or who need higher certainty about their impact, the range between optimizing the far future and simply helping the people or animals alive now should contain effective interventions and opportunities for moral trade between the two focuses. Effective altruism movement building (meta-effective altruism) and policy work fall into this category.


Notes:

    1. People who have lived ethical lives are more likely to continue living ethical lives. Rationing medical care on the basis of your value to others seems, to some degree, like a good incentive and rationing system, provided the person controlling it doesn't selfishly define what it means to be helpful. One could argue that money already does this, but in that sense money is a measure of your value to other people with money… and even then, that value isn't necessarily value added.
    2. Due to risks from synthetic biology, new weapons, and Artificial Superintelligence, it is not clear that future research will be as net-positive as research has been in the past.
    3. Just because someone has high derivative value does not mean you should care more about their happiness; what you should care about is their ability to make good decisions (AI makes this obvious, since you can build AIs that make good or bad decisions but have no feelings). Happiness and suffering may still matter for incentives, but a core heuristic here is that while happiness should be widespread on utilitarian grounds, power should be concentrated in those with high derivative value and taken from those with high derivative disvalue. Evaluating the value of people's expected actions is still hard, of course.
    4. The whole reason I chose the creepy picture from Animal Farm for this post is that this sort of reasoning can easily be hijacked by powerful, selfish people; if one is going to use it, it is better to stick to bluntly obvious examples where being disproportionate in who you help will actually make the future better.