
What about unborn children?

Are we morally obligated to produce as many children as possible?

Overcrowding is often thought to be a big problem in modern society: it makes life more hectic, increases misery, and so on. What most people fail to acknowledge is that if a city or country is overcrowded, that means more people are alive. Keep in mind that we are cognitively biased when we think about aggregates, finding it much harder to reason about totals than about averages, so don’t trust your intuition here; try to think about it quantitatively to counter the bias.*1

If a life is always worthwhile (Good>Bad) …

Surely the value of a life outweighs the drain on resources from one extra person. (If you feel inclined to say no, substitute yourself or someone you care about here.) Even without other reasons why this might be, it seems to me to follow from the fact that when we take a “resource” and convert it into something used by or done by humans, that’s a good thing (e.g., eating food in order to compose music, plant more food, and so on).

You might be thinking, “No, that’s not true; eventually there’d be so many people it wouldn’t be worth it.” Where exactly is the cutoff, then? Isn’t it always better to be alive than not, given a few assumptions? The assumptions would vary from person to person but probably include, at minimum, not starving.

As long as you believe that people can largely pull themselves up and create the lives they want, that nobody is doomed to be inferior or unhappier against their will, doesn’t that especially increase our obligation? It also removes the excuse of only “breeding” more from people likelier, by genes and/or environment, to be successful and happy (i.e., it weakens a eugenic argument).

Total good and the mere addition paradox

You might be thinking that total good, or happiness (or whatever), is what matters, and I’d agree with you. Now consider two scenarios: (a) 100 billion people alive, with average quality of life 1 on a scale of 1-10, versus (b) 1 billion people alive, with average quality of life 5, five times that of the first scenario.

Pursuing this line of thought gets you into the “mere addition paradox”, the “repugnant conclusion” of (total) utilitarianism: that vastly more lives, even miserable (but still worthwhile) ones, are better than vastly fewer at a much higher quality of life. (Average utilitarianism “solves” that problem, but in my opinion that’s not how utilitarianism is supposed to work; i.e., you’re doing it wrong.) I, however, am inclined to think it’s not a paradox at all, agreeing with Torbjörn Tännsjö, as his argument is represented in the Wikipedia article anyway.
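The arithmetic behind the two scenarios is worth making explicit. Here is a minimal sketch, under the crude assumption that total utility is simply headcount times average quality of life (the function name is mine, not a standard one):

```python
def total_utility(population, avg_quality):
    """Total utility under a crude additive model: headcount times average quality."""
    return population * avg_quality

# Scenario (a): 100 billion people at average quality 1.
# Scenario (b): 1 billion people at average quality 5.
total_a = total_utility(100_000_000_000, 1)  # 100 billion utility units
total_b = total_utility(1_000_000_000, 5)    # 5 billion utility units

# A total utilitarian prefers (a) by a factor of 20,
# even though (b) has five times the average quality of life.
print(total_a > total_b)  # True
```

That factor-of-20 gap in totals, against the factor-of-5 gap in averages, is exactly where the two flavors of utilitarianism part ways.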

Perhaps to think well about this we would need an order-of-magnitude estimate of just how miserable the average life under overcrowding would be. Remember: if humans have the capacity to support themselves without limit, through ingenuity and breakthroughs, there may be no issue at all.

Beyond Racism: Existism

My grandma says that if she hadn’t carried her daughter Hannah, now 33, who has Down syndrome, to term, she would never have known Hannah, and never have grown. But what about the unborn children she never had?

Like “racist” or “sexist”, we can call it “exist-ist”: biased towards those alive rather than those who could have been. If that seems odd, even wrong (as it does even to me), make a substitution: say “biased towards those alive, rather than those who could be in the future”.

Most of us do care a little about the future of humanity, after all. And once you start thinking about the people who could exist in the future, oh boy, that leads you on to thinking about existential risk, and astronomical waste, and… well, you’ll have to peruse the writings of philosopher Nick Bostrom (nickbostrom.com) to know what I’m talking about (and soon enough you’ll know more about these things than I do).

Another Application: Relationships

Does this same kind of thinking mean we’re morally obligated to pursue each suitor, each potential life-mate (or, in the same vein, each potential best friend)? We always look back and say “it was for the best”, or “he was the one”. Well, was he really? Or was it just a confluence of random factors: that you’re both mature enough, or mature enough for each other in particular, or successful enough, or that at the time you were particularly receptive to a partner, or…

Such questions can lead into a void, an abyss, a darkness, from which you can’t pull yourself without effort. Is it useless to think of such things? It seems to me a matter of great importance. Perhaps our minds are just too feeble to deal with it, though.


*1 Kahneman demonstrates this in his book Thinking, Fast and Slow, which distills decades of research. He is one of the most distinguished psychologists alive, a researcher who pretty much established “cognitive biases” as an object of study. By the way, this inability to reason properly with aggregates leads to scope insensitivity, which has real (and huge) negative consequences for philanthropy.

  1. August 8, 2012 at 8:22 pm

    Are you saying that you think it is inherently better, all other things being equal, to have more people in existence? If so, why do you think so? Why would having, say, 7 billion people on the earth, all basically happy, be inherently better than having 5 billion basically happy people on the earth?

    • August 9, 2012 at 12:19 am

      The reason 7 billion people being alive is better than 5 billion would come down to exactly the same reason why it’s better for any one individual, or *you*, to be alive than not to be… wouldn’t it?
      I must admit here that I am not really arguing as to why it’s better to be alive than not, and Sid (below) disagrees with my… intuition that it is.

      • S.
        August 9, 2012 at 12:36 am

        I would say that it is (often) better to be alive than dead, but also that it is not meaningfully better to be alive than never-existent. I think this is a significant distinction and that it is misleading to squash/simplify the two concepts into one. Plus, one-value philosophy (here: life) is a terrible model for moral choice and ethical planning. If I *had* to pick one, I’d maybe pick something more like “sufficiently complicated consciousnesses holding special causal links, according to certain conditions of relative power/significance and relatedness with each other”, which I would have to write out properly. On this view, I do not accept that (human) life is in itself automatically valuable, or that the world is a better place because there are more people than there conceivably could have been.

      • August 9, 2012 at 4:26 pm

        Looking at things from a naturalistic point of view, there is of course nothing of objective value. (By “objective” value, I refer to a value that is inherent in the thing itself as an objective quality rather than simply being the subjective appraisal of an object from the perspective of the likes and dislikes of various finite beings in the universe). Therefore, there is no objective moral obligation to value anything in particular. What we are left with is simply the question that each of us will be inclined to ask: “Given that there is no objective value, what is it that I subjectively value and how can I attain it?”

        We all subjectively value our own happiness. Most of us find happiness partly in the happiness of others around us. Most of us vaguely and weakly find happiness partly in the general idea that people, or even living things, in general throughout the universe are happy. I don’t personally care how many people or living beings there are in the universe, and it wouldn’t bother me at all if there were fewer of them rather than more. I am more interested in how happy those beings who actually exist (or will exist) are. In other words, I would find two completely happy beings a perfectly satisfying concept, and I wouldn’t find the concept really greatly more satisfying by having, say, two thousand perfectly happy beings. I would find the concept of two perfectly happy beings much more satisfying than two thousand somewhat dissatisfied beings.

        But, of course, all this is very subjective within a naturalistic framework. Some people might, for some reason or other, simply have a desire to have more beings in the universe. Some people might subjectively feel that fewer beings are better. Since all of this really isn’t based in much more than varying whims, it is difficult to say much more about it. However, I think I might dare to say that when put to the test, most people motivated by a significant degree of compassion/empathy will conclude that they would rather work towards the greater happiness of fewer beings than the creation of many more beings that requires a diminishing of the overall happiness of them all. I suspect that if someone feels otherwise, it is due to the concept being thought of more abstractly rather than concretely and practically.

    • S.
      August 9, 2012 at 10:23 pm

      Reply to Mark’s second comment:

      I also think it is interesting to recognize that “capital-V” Value does not exist in the universe as a metaphysically fundamental substance (it would be bizarre if that *were* true, if not also impossible to interpret what that would even mean), and that values are created by finite beings based on their own dispositions and experiences.

      Using this view to offer deflationary intuitions about an ethical discussion, however, is a too-common cop-out. Moral and political reasoning/choice are still just as relevant–we do need models of moral reasoning (simplifying variables sometimes, as a start) in order to not merely understand behaviors but to pinpoint and rank and explore and debate on values and to make decisions honoring them in whatever way makes most sense to us finite beings acting, deciding, and feeling together.

      I also do not think the heterogeneity of values invalidates value-ing. Nor that a lack of handed-down/Universe-level moral obligations constitutes an actual lack of moral obligations. Finally, I think that the naturalistic worldview does have important information about moral life to offer us, but probably not nearly as much as many seem to think–the art of living often requires a sort of “working from the inside” reasoning within the gooey mess of our overlapping life-patterns, behaviors, commitments, etc that we are already participating in (“the role of emotion in good moral choice” is a very simple example of this); too much objectification away from ourselves–too much abstraction as you put it–can lead to intense alienation from our own way of life as well as our beyond-the-surface feelings and the importance of nourishing myths. It matters that a god does not legislate morality, and that people are cognitively biased and often lack empathy, and that science works by applying objective language to many types of problems, but there is nothing wrong with letting the subjective have its day when it comes without penalty or prejudice or disregard via stern scientism.

      • August 10, 2012 at 3:47 pm

        I think I basically agree with you. There are some objective conclusions to come to within a naturalistic worldview (granting that a naturalistic universe can exist at all, which I’m granting for the sake of argument here). People value lots of different things, but there are human universals and near-universals. That is why I said before that I think most compassion-motivated naturalists would put the greater happiness of beings who do actually exist, or likely will exist, over simply a greater number of beings existing. Although I don’t doubt that some people may have a desire for simply a greater number of beings in the universe, I doubt this desire is very deep or widespread overall, compared to the desire, rooted in compassion/empathy, that the beings that do and will exist should be reasonably happy.

        In short, I’m basically challenging the main thesis of your original article from the perspective of general human desires from the perspective of a naturalistic worldview. Sid articulated much the same basic idea earlier when he said, “I would say that it is (often) better to be alive than dead, but also that it is not meaningfully better to be alive than never-existent.” I imagine that sentiment, at least when it comes down to real, practical decision-making, would be somewhat dominant among human beings, rather than a strong sense of value for simply a greater number of non-existent beings to be brought into existence.

      • August 10, 2012 at 3:51 pm

        Addressing your second paragraph: I also imagine that sentiment is dominant in real, “practical” decision-making; one of the questions I’m posing is whether it should be (is ≠ ought). As for Sid’s sentence, “I would say that it is (often) better to be alive than dead, but also that it is not meaningfully better to be alive than never-existent”: that is challenging me (in a good way) and I’m not sure what to think right now.

  2. S.
    August 8, 2012 at 11:23 pm

    Ha, oh wow I totally disagree. Also: when my parents were born there were literally half as many people on the planet. Woah.

    Overcrowding’s miseries do indeed include many ills, such as a hugely increased unemployment rate even in cool/specialized fields of activity. As our population grows exponentially, it turns out that our need for (and even ability to employ) highly skilled/trained scientists, artists, writers, and leaders grows much more slowly. Maybe you want to do something completely unrelated, but it is harder to find meaning and accomplishment in at least these areas. Today you can even find yourself literally competing with a line of 100+ people for a coffee-house job (this happened to me). More people around cheapens every type of labor, thus demanding perfect records, increased training, lower wages, and less job security.

    Crude utility calculus can run lots of other unacceptable ways beyond just the mere addition paradox. [I tend to accept treating utility as falling between negative and positive infinity, with a score of zero as the edge of the Not-Worth-Living-Zone, which I believe exists].

    For example, to me it is still wrong to create a society in which a segment of your population is experiencing a high negative utility to rocket up the utility of everyone else: this could be an argument for slavery, inequality, (pardon) hunger games, etc. Depending on how much you believe in positive/negative utility monsters you can even plausibly imagine utilitarians defending scenarios where 49-99% of people suffer for the supreme benefit of the others.
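    The objection above can be put in numbers. A minimal sketch, with illustrative figures I am inventing purely for the example (not data):

    ```python
    # Crude additive calculus over two hypothetical societies (all numbers invented).
    # Exploitative: 1% of people at utility -50 (severe suffering), 99% boosted to +10.
    # Egalitarian: everyone at a flat +8.
    exploitative = 0.01 * (-50) + 0.99 * 10  # per-capita total = 9.4
    egalitarian = 1.00 * 8                   # per-capita total = 8.0

    # A purely additive calculus ranks the exploitative society higher,
    # which is exactly the kind of result being objected to here.
    print(exploitative > egalitarian)  # True
    ```

    The sums are per-capita, so the ranking holds at any population size; the additive calculus simply has no term that penalizes the concentration of suffering.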

    Note: You dislike average utility? I do think there should be some consideration for egalitarianism. A larger percentage of humans than ever live in a state of relative comfort. This is good. However, thinking in absolute terms, more suffering exists in the world than ever before: 1,000,000,000 people are experiencing starvation or semi-starvation, right now. Is the present worse than the past? (Depending on your evaluation of everyone else’s utility as neutral, happy, etc., it is of course possible to believe that both average and additive utilitarians should be happy.)

    Note: Also, I accept that animals experience the world in a way that means they should be factored into the utility calculus, even if less so than humans. I believe that humans can experience larger total utility/happiness than probably any animal on an individual basis, thanks to working out complicated relationships and life projects. Yet when you approach repugnant-conclusion scenarios of many people with increasingly tiny (but positive) barely-worth-living lives, the happiness of the average human diminishes until there is no appreciable difference between a happy pig or horse and a human. And then the argument is instead that we should just fill the universe up with barely happy Life of any type, even in the form of 10^30 unintelligent artificial brains experiencing minuscule utility (maybe they have constantly stimulated pleasure centers or something). You could believe this, but I think it is not only counter-intuitive but morally inappropriate. It also irks me that (probably) literally none of these humans could have near-globally valuable roles or effects even if they wanted to (I would find that universe sad, which is a different and subjective worry).

    Note: These considerations can also imply that we just kill off everyone with negative utility values.

    Note: I’d say that not all biases are bad. Or, if defined as so sturdily, I might argue that not caring about possible people isn’t a bias (if you have qualifications for the attitude of the right kind).

    Practically speaking, I also disagree with the anyone-can-pull-up-their-own-bootstraps idea and with the romantic view that human ingenuity is carrying us along an unending path of infinite transformations, accomplishments, and unsolvable-problem -> magic-unprecedented-solution scenarios. Contentious article: http://www.thebaffler.com/past/of_flying_cars

    • August 9, 2012 at 12:33 am

      I never considered that about overcrowding, about the increasing difficulty of good living and increasing inefficiency on a broad scale. That is very thoughtful.

      I’ve heard of utility monsters (Wikipedia article, and an SMBC comic before that). Not sure what to think, so I’ll pass for now.

      I dislike average utility because I view the value of seeing things from a utilitarian perspective partly as being a way of countering various biases we humans have when it comes to reasoning quantitatively that, on reflection, we don’t want.

      It seems so *very* counter-intuitive to me, too, to disregard average utility, but here’s why I think that is a bias we don’t want: when we think about it, we’re imagining *ourselves* in the scenarios of (a) high utility with fewer people, versus (b) low utility, life barely being worth living (which is indeed repugnant), with many other people. Aren’t we?
      The problem there is that we fail to acknowledge the good of *other people actually being alive*, as opposed to not. Perhaps it would be useful to pretend that in the first scenario there is a great likelihood that *we won’t even be alive at all*.
      In other words, I’m repeating myself by arguing that we can expect an unwanted bias, mediated by a feeling of uneasiness and counter-intuitiveness, to be influencing our acceptance of such reasoning.

      I hadn’t considered the equivalence of low-utility (maybe low-pleasure-experiencing) human life compared to animal life. And I do feel the same sense of sadness about a hypothetical universe in which no human

      As for your last paragraph, I kind of agree that an anyone-can-lift-themselves attitude is romantic and suboptimal, at least when people ignorant of behavior-modification principles and such things (that is, most people) blame others for their own problems, unhelpfully and scientifically untenably. Frankly, I don’t know what to think about the prospect of infinite technological progress, though my prior is high, in the absence of existential crises, given the cumulative nature of technology.

      In the end, I’m just starting to think about these things. Thanks.

      • August 9, 2012 at 12:35 am

        * [third to last] In which no individual human is highly effective or happy (or whatever).

      • S.
        August 9, 2012 at 12:51 am

        I agree that there is value in countering scope insensitivity through the attention additive utilitarian thinking gives to literally everyone. For me this is most relevant in philanthropic cases of combating suffering. I think, though, that our world’s most serious and common (material-scarcity) suffering is simpler than all the meaning-happiness-connectedness stuff that makes up high levels of positive utility, and that it is very difficult to imagine what happiness or utility, or reports of them, would even mean to the people in the extravagant limit cases. My point may be pragmatic rather than theoretical: simply increasing the number of people will not make the numbers converge to an average utility arbitrarily close to zero while also staying positive. Depending on, well, everything, the result could be generally negative (more probable in the real world, I think) or generally positive.

  3. S.
    August 9, 2012 at 12:59 am

    I would also add that I’m very glad you wrote this. I find this things interesting. I mentioned to you my favorite philosopher of ethics–this is a summary/obituary that first introduced me to him: http://bostonreview.net/BR28.5/nussbaum.html .

    I think you might find it interesting.

    • S.
      August 9, 2012 at 2:00 am

      *these things

