Chapter: Ethics and simulation
(Draft selection from the dissertation)

Steven Mascaro

September 2, 2006

Contents

1 Ethics and simulation
 1.1 Introduction
 1.2 Ethical concepts
 1.3 Ethical systems
 1.4 Utilitarianism: a brief history
 1.5 Utility
  1.5.1 Commensurability
 1.6 The scope of consequences
 1.7 Evolving utilitarianism
 1.8 Criticism of the utilitarian calculus
 1.9 The value of simulation in ethics
  1.9.1 Types of ethically valuable simulations
  1.9.2 Improving the utilitarian decision procedure
  1.9.3 Simulation and non-utilitarian ethical systems
 1.10 Conclusion

Chapter 1
Ethics and simulation

1.1 Introduction

Ethics is the study, be it practical or philosophical, of the ‘shoulds’ or ‘oughts’ of human behaviour. In ethics, we use the terms ‘morality’ and ‘descriptive ethics’ to describe the study of a group’s principles of behaviour (Singer 1994). Thus, descriptive ethics is a study of what is: namely, of what people believe ought to be. In contrast, we use the terms ‘moral philosophy’ and ‘normative ethics’ to describe the “systematic study of the reasoning of how we ought to act” (ibid). Thus, normative ethics (or simply ‘ethics’) is just the study of what ought to be. Normative ethics is normally conducted within a particular system for deciding what ought to be, called an ethical system. Many different ethical systems exist today -- virtue-based ethics, deontological ethics and utilitarianism, to name a few. Alongside descriptive and normative ethics, there are meta-ethics and applied ethics. The term ‘meta-ethics’ is normally reserved to describe the study of the meaning of ethical concepts, the nature of ethical study and the inter-relation between ethical systems. The term ‘applied ethics’ describes the application of ethics to solving specific problems (that is, where solving the problem is the principal concern, not the ethical system).

How do these terms apply to my research? To begin, my simulations are not a tool of meta-ethical investigation, nor are they an example of applied ethics (though they could be). Instead, my simulations fit somewhere between descriptive and normative ethics. This is because I use my simulations to calculate utilitarianism’s conclusions about the moral value of acts under various conditions. The results describe conclusions that utilitarians must accept (assuming the simulations are correct, relevant and so forth) without requiring me to advocate those conclusions or utilitarianism itself. In this narrow sense, the simulations are an instance of descriptive ethics. However, the simulations do not necessarily describe anything that utilitarians believe today. If they do describe new conclusions, and if I advocate those conclusions, then the simulations can also be considered an instance of normative ethics -- that is, they would give a better understanding of the ethical value of acts. Ultimately, the simulations are not tied to either category. I (and others) can use the results descriptively or normatively.

1. It is interesting that non-utilitarians such as Foot (1978) and Scheffler (1982) note an attraction to the utilitarian’s intent to improve the general welfare.

While my simulations do not commit me to adopting utilitarianism, I do believe that utilitarianism is the most important ethical system to consider when making decisions that affect others. However, I also hope that my simulations will interest non-utilitarians. While ethical systems tend to be adopted exclusive of one another, most of them are based on similar principles: namely, reducing suffering and increasing well-being (Pojman 1998).1 Even if the virtue ethicist, deontologist and utilitarian disagree on the reasons for the goodness of an act, they often still agree on the goodness itself. Perhaps because of this, Parfit (1994) suggests that one day a large consensus on a single ethical system may emerge. As such, it seems plausible that simulations such as mine will be of general interest.

In the following section, I give a very brief outline of some common ethical concepts and ethical systems, including utilitarianism. I then provide a more detailed discussion of utilitarianism, starting with a brief outline of its history. I then cover three topics within utilitarianism of particular interest to my research: the concept of utility, the scope of consequences and the possibility of evolving utilitarian behaviour. The first two topics help to explain the design of my simulation; the third topic explains why we can expect the simulation to produce ethically interesting results. In the final section, I break from the background discussion to explore how the simulations here, and simulations in general, can aid ethical inquiry.

1.2 Ethical concepts

2. Here, ‘right’ is being used in the sense of ‘correct’: e.g. the correct act to achieve some end.

Good, right, just and value The main concept in most ethical theories is the notion of good -- be it good acts, behaviours, principles or circumstances. There is no general agreement on what constitutes the good. For some, such as Moore (1903), the good is intuitive (unanalysable); for others, the good can be derived from other concepts; and for others again, the good can be derived from nature. Despite wide disagreement on what constitutes the good, there is often agreement on what happens to be good in any given case. We often take the right to describe that which is good or which leads to good. For example, the right act may be the best act, or it may be the act that leads to the best outcomes or the act that best upholds justice.2 If we are discussing ‘a right’, then we are discussing something (usually a freedom or service) that a person is owed. In some contexts (particularly legal), a system is considered just when it maintains the rights of those it governs. According to Rawls (1972), a just system is one that maintains fair and equal rights; on this view, justice is a form of fairness. Justice is also connected with right behaviour (as with Plato); in this case, a just act is a right act that adheres to an aesthetic standard.

Some take value to be quantified good; that is, if something is good, it has some (positive) value. Others use value to decide the good. In this case, that which yields the most value is good. Not all theories make (explicit) use of the notion of value.

Virtue Virtues are human behavioural traits that are considered good -- either in themselves, or through their consequences. Traits such as honesty, courage, loyalty and generosity normally count as virtues.

Deontology, teleological ethics and consequentialism Deontology has two closely related senses. One is that good inheres in specific acts, and the other is that good inheres in some set of duties, rules or rights. Teleological ethics, on the other hand, assumes that good inheres in the consequences of acts, duties, rules or rights. Consequentialism is simply another name for teleological ethics. However, the term consequentialism is often used more narrowly, to describe only utilitarianism and close variations.

Intuition Not strictly an ethical concept, but one that occurs frequently in ethics. Simply put, intuition is the ability of the mind to yield immediate knowledge without a conscious process of reasoning. On some accounts, an intuitive thing is a known thing, but one that cannot be explained or derived from anything else -- it is prior to all other knowledge (a priori). On other accounts (notably Hare 1981), it is immediate knowledge, common sense or a skill that is evolved or learnt.

Intuitionism and naturalism An intuitionist ethics holds that good is intuitive (a priori). A naturalistic ethics, as described by Moore (1903), holds that good is defined by reference to natural objects -- that is, it defines ought in terms of is. As such, these two forms of ethics are incompatible. Moore argued that naturalistic ethics is based on a fallacy. Earlier, Hume (1984) had noted that pure ought-statements of morality cannot be derived from pure is-statements, and Moore gave this false derivation the name ‘the naturalistic fallacy’. Moore then argued that naturalistic ethics commits the naturalistic fallacy. Note that I believe this fallacy has no bearing on the value of naturalistic ethics, since naturalistic ethics does not derive ought from is, but rather defines ought in terms of is.

Categorical and hypothetical imperatives Kant (1909) gives us the concept of an imperative, which is something that guides the human will. Imperatives may be conditional, so that we perform them dependent on something else; or they may be absolute, so that we perform them without considering anything else. The former are called hypothetical imperatives, an example of which is doing things because they make you happy (the condition is happiness). The latter are called categorical imperatives, an example of which is telling the truth under all circumstances. Kant holds that categorical imperatives are moral imperatives because our intent to follow them is based solely on our belief that they are good (rather than on their being prudent).

Utility Utility is a psychological concept that represents something similar to happiness. It is not, however, limited to happiness; it can also represent satisfaction, pleasure, usefulness, desirability, and many other positive psychological quantities. An important property of utility is that utility values can be compared to determine one’s preferences. I give a much more detailed discussion of utility later, in Section 1.5.

1.3 Ethical systems

3. Although it could be argued that this, too, is inconsistent.

Amoralism and ethical relativism Amoralism proscribes guides to behaviour. Stated this way, amoralism is inconsistent, since the statement is clearly a guide to behaviour. However, it can be made minimally consistent: no one ought follow or issue guides to behaviour beyond that of amoralism.3 Ethical relativism is the normative extension of the observation that different cultures often have different ethical norms: thus, we have no common basis to prefer any single set of norms. Ethical relativism allows individuals to have moral systems, but not to push them or to meaningfully compare them.

Kantian ethics Fundamental to Kantian ethics is the notion of an imperfect human will. To be clear, a will is the ability to make conscious choices. Something needs to guide the imperfect will in the choices it makes, and this guidance comes in the form of imperatives, either categorical or hypothetical. (In contrast, nothing needs to guide a perfect or divine will -- it will always make the right choice.) Since hypothetical imperatives are contingent on expected human conditions, they are subjective; on the other hand, categorical imperatives are objective and thus universal. Kant notes that restricting ethical principles to categorical imperatives gives ethical principles the status of natural laws, holding regardless of how humans and their customs might change. He maintains that categorical imperatives make it impossible to treat humans as means, unlike hypothetical imperatives. Kant develops a single universal categorical imperative, from which all moral duties derive. This imperative is: “Act only on that maxim through which you can at the same time will that it should become a universal law”. In other words, act according to those maxims that are universalisable. We appeal to universalisability whenever we ask, ‘What if everyone did that?’. A positive example of a universal maxim is ‘if you make a promise, keep it’, since it is possible for everyone to follow the maxim while keeping the value of a promise intact. The corresponding negative example is ‘if you make a promise, break it’, since, if everyone followed the maxim, promises would no longer have any value. Interestingly, Hare (1981) argues that universalisability and utilitarianism are thoroughly compatible.

Virtue ethics This system stems from Aristotle and has been supported in recent times by philosophers such as Anscombe (1958), Foot (1978) and MacIntyre (1981). Virtue ethics is a system based on character rather than on acts or duties. It is not so much that acts and duties are irrelevant, but that virtuous character is vital to producing the right acts and duties.

4. Some have argued that Rawls’ contract ethics is a form of average utilitarianism or least-worst utilitarianism, though this is certainly debatable.

Contract ethics Contract ethics (originating from philosophers such as Locke, Rousseau and Hume) is tightly tied to the concept of justice. Rawls (1972) has given it its present, most famous, form. The motivation for contract ethics is a thought experiment: before forming a society, all individuals (who are by design rationally self-interested) meet as equals in a so-called original position to decide on the terms of their relations in real society. To ensure the terms of their relations are as equal as possible, individuals in the original position have no specific knowledge of real society: they do not know their eventual roles, status or natural abilities. According to Rawls, individuals in the original position, being rational and self-interested, will all agree on the following: 1) rights, duties and opportunities of office (that is, positions in society) that are equal for all and 2) social and economic inequalities that are attached to offices open to all, but only so far as everyone receives compensating benefits from such inequalities, particularly those who are least advantaged. These conditions form the contract by which rationally self-interested members of a society would agree to abide, and embody the concept of justice as fairness: that no one is advantaged or disadvantaged by their choice of principles.4

Evolutionary ethics Evolutionary ethics is a naturalistic system of ethics that derives its principles of good behaviour from the biological concept of altruism. Evolutionary ethics is not to be confused with social Darwinism, which focuses unduly on progress, self-interest and competition. While the system is not yet well developed, Ruse (1995) gives an account of what an evolutionary ethics might look like. In particular, he suggests that our intuitive moral feelings are evolutionary adaptations that serve us biologically due to the unique biological altruism that humans exhibit. According to Ruse, the moral feelings that we have evolved are largely Kantian in nature. An interesting claim that Ruse makes is that there is no justification for the moral system that we have. This is because an evolutionary ethics describes how our moral system was caused, and Ruse maintains that something that has a causal explanation cannot also be explained with (intentional) reasons. Sober (1994) asserts that evolutionary ethics must depend on normative assumptions (for example, that hedonistic utilitarianism is correct), which would mean it is not a complete ethical system on its own.

Hedonism Hedonism is very different to most other ethical systems -- so much so that it is a little misleading to classify it as an ethical system. Importantly, while it involves care for something, it does not involve care for others. Nevertheless, it does deal with notions of value, virtue, good and right, and, since it is important in comparisons to utilitarianism, I include it here. Hedonism (from the Greek word for pleasure) dates back in part to Epicurus. Epicurus’ moral philosophy was quite broad, but it also contained the seeds of hedonism, and is often (mistakenly) identified solely with expressions such as ‘do that which leads to pleasure and that which leads to freedom from physical pain and mental anxiety’. In Epicurus’ philosophy, the use of the term pleasure may imply a focus on the sensual and the immediate, but Epicurus did not have this in mind: for example, he suggests that “. . . similarly we think many pains are better than pleasures, since a greater pleasure comes to us when we have endured pains for a long time” (Epicurus 1999). In fact, Epicurus was concerned mainly with the avoidance of pain (Johnson 1999, pg. 49).

5. Alternatively, we might assume pleasure includes such things as satisfaction, happiness, etc.

Hedonism as we know it today is based on maximising pleasure and minimising pain. We may generalise the concepts of pleasure and pain to that of utility. We can do this by quantifying them, equating pleasure with positive utility and pain with negative utility, and allowing other things (such as satisfaction, happiness, etc.) to also be considered utilities.5 We might more accurately call this ‘egoism’ (Sidgwick 1907, Bk 1, Ch. 7), but I will retain the term hedonism for simplicity. Thus, the maxim of hedonism can be put as follows: act so as to maximise your personal sum of utilities.

Given this maxim, the value of an act can be expressed simply as the personal sum of utilities derived from that act, and hedonism directs us to choose the act which has greatest value. If we were certain about the effects of an act, we could express hedonistic value with the following equation:

v_h(a) = \sum_k u_e(e_k)    (1.1)

where a is an action, ek is one of the (certain) effects of a, ue is the performer’s subjective utility function that maps effects to utilities, and, finally, vh is the hedonist’s subjective value function that maps actions to values. In order to simplify future equations, it will be useful to group all of the effects ek of an action in a set called an outcome o of a:

o = \{ e_k \}    (1.2)

and have a subjective utility function, u, that maps outcomes to utilities:

u(o) = \sum_k u_e(e_k), \quad \text{for all } e_k \in o    (1.3)

If we substitute this into our original equation (Equation 1.1), we have:

v_h(a) = u(o)    (1.4)

Recognising that the performer cannot be certain which outcome an action will produce, we need to alter the hedonist maxim to include this uncertainty: act so as to maximise your expected personal sum of utilities. This normative statement of hedonism also accords with the modern definition of rationality (see, for example, Parfit 1984). We can alter Equation 1.4 to bring it in line with this definition:

v_h(a) = \sum_j u(o_j) \, p(o_j \mid a)    (1.5)

where each outcome oj is a possible set of effects (i.e. a possible world) stemming from an action, and p(oj|a) is the probability of the outcome oj given that the action a is chosen. The equation represents the value both general hedonists and (some) rationalists would attach to an action.
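
To make Equation 1.5 concrete, here is a minimal sketch in Python of how such a value function might be computed. The outcomes, utilities and probabilities are invented purely for illustration; nothing here is drawn from my actual simulations.

```python
# A minimal sketch of Equation 1.5: the hedonist values an action by the
# expected utility of its possible outcomes. All numbers are illustrative.

def hedonist_value(outcomes, u, p_given_a):
    """v_h(a) = sum_j u(o_j) * p(o_j | a).

    outcomes  -- the possible outcomes o_j of the action a
    u         -- the performer's subjective utility for each outcome
    p_given_a -- the probability of each outcome given the action
    """
    return sum(u[o] * p_given_a[o] for o in outcomes)

# Hypothetical action with two possible outcomes.
outcomes = ["find food", "find nothing"]
u = {"find food": 10.0, "find nothing": -2.0}
p = {"find food": 0.3, "find nothing": 0.7}

print(hedonist_value(outcomes, u, p))  # 0.3*10.0 + 0.7*(-2.0) = 1.6
```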

6. Which I presume means we first find the greatest number, and then find the greatest good for that number.

Utilitarianism Utilitarianism is an ethical system that suggests we maximise the sum of utilities across the population. It is often identified with the maxim “the greatest good for the greatest number”, a phrase originating with Francis Hutcheson.6 There are several forms of utilitarianism, some based on differing notions of utility, others based on how best to act according to utilitarian principles in practice. Examples include (but are not limited to) act-, rule-, satisficing-, preference-, negative-, hedonistic-, ideal-, average-, least-worst- and total-utilitarianism (many of these are not mutually exclusive). Utilitarianism inherits much from hedonism, with one crucial difference sufficient to put it in a different category: it is based on care for others. Or, as Sidgwick put it, utilitarianism is universal hedonism (Sidgwick 1907). The idea of utilitarianism as universal hedonism (or universal egoism) tells us how we can alter the hedonist value equation from above (Equation 1.5). We simply need to include the utility functions of all individuals (not just our own utility function), and sum across them all:

v_u(a) = \sum_i \sum_j u_i(o_j) \, p(o_j \mid a)    (1.6)

where i represents an agent (e.g. a person, an animal, etc.), ui represents the utility function of the agent i, and vu is the utilitarian value function.
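
Continuing the sketch above, Equation 1.6 simply wraps the same expectation in a sum over agents. In the invented two-agent example below, sharing costs the actor some utility, but it is the sum across both agents that the utilitarian values.

```python
# A sketch of Equation 1.6: the utilitarian sums expected utilities across
# every affected agent, not just the actor. All numbers are illustrative.

def utilitarian_value(outcomes, agent_utilities, p_given_a):
    """v_u(a) = sum_i sum_j u_i(o_j) * p(o_j | a)."""
    return sum(
        u_i[o] * p_given_a[o]
        for u_i in agent_utilities
        for o in outcomes
    )

outcomes = ["catch shared", "catch kept"]
p_share = {"catch shared": 0.9, "catch kept": 0.1}  # p(o_j | a = share)
u_actor = {"catch shared": 4.0, "catch kept": 8.0}
u_other = {"catch shared": 6.0, "catch kept": 0.0}

# Expected value of the act of sharing, summed over both agents: 9.8
print(utilitarian_value(outcomes, [u_actor, u_other], p_share))
```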

This concludes the description of the major alternative ethical systems. Due to its importance in my simulation, I will now move to a more detailed description of utilitarianism.

1.4 Utilitarianism: a brief history

Utilitarian thought stretches back at least to Plato’s Republic, where Socrates raises the problem of whether to return a sword to a friend who is not in his right mind. Socrates says we must not return the sword because of the possible consequences, which is clearly an example of utilitarian reasoning. The forerunner to modern utilitarianism was Hume’s moral theory, the first secular theory of morality in modern philosophy (Mossner 1984). Hume believed that the morality of an act was to be decided by the reactions it produced in witnesses -- in particular, whether witnesses approved of the motives of the actor. Thus, his theory was consequentialist (but not strongly so). Hume also introduced the term ‘utility’ to moral theory, which Jeremy Bentham (amongst others) later adopted. For Hume, utility simply meant usefulness; he believed that the utility (usefulness) of an act had moral significance, because utility allowed witnesses to sympathise with the value of that act.

Jeremy Bentham was the first person to outline the principles of utilitarianism explicitly, in his ‘An Introduction to the Principles of Morals and Legislation’. His concern was to show that if one is to act in the “interests of the community”, what one must do is to serve the “sum of the interests of the several members who compose it” (Bentham 1987, pg. 66). He took it that the only intrinsically good thing is pleasure, and the only intrinsically bad thing pain. Thus, his was a hedonist utilitarianism. However, what Bentham includes as pleasures is quite vague: “By utility is meant that property in any object, whereby it tends to produce benefit, advantage, pleasure, good or happiness” or “to prevent the happening of mischief, pain, evil or unhappiness to the party whose interest is considered” (ibid). Bentham argues that all concepts of the good and right can be defined only in terms of utility, and that all other moral concepts are either vague, false or can be made consistent with the principle of utility.

John Stuart Mill generalised utilitarianism from the narrow form that Bentham had given it (Mill 1861). The problem had been that Bentham’s utilitarianism was described with government in mind, not with the practice of everyday life, which led to a restricted view of utility. Mill generalised the concept of utility so that it emphasised more of that which we consider valuable, but that we might not call pleasurable. As a well-known example, Mill suggests we would prefer to be Socrates dissatisfied than a fool satisfied to show that satisfaction is not the same as utility: the fool is satisfied, but derives less utility from his situation than a dissatisfied Socrates does from his.

Mill’s utilitarianism (often called ‘eudaemonistic’) does not differ in its content from Bentham’s. Rather, it emphasises certain aspects (such as liberty and the ‘higher pleasures’) that were largely ignored in Bentham’s treatment. Sidgwick (1907) gave the most comprehensive treatment of utilitarianism in his ‘Methods of Ethics’, detailing many of its implications, and rebutting many apparent arguments against utilitarianism. He describes how utilitarianism can be considered a universal hedonism, in which utilities are generalised across all the people affected by an act. At roughly the same time, Moore (1903) described his form of utilitarianism, called ideal utilitarianism, in which utility is treated as an intuitive concept (which Moore defines as one not susceptible to proof). None of these developments changed the form of utilitarianism -- they merely emphasised or clarified existing concepts.

In the last century, a distinction has arisen in utilitarianism stemming from differences in its practice. The distinction is that between act and rule utilitarianism (described by Smart 1971). Act utilitarianism is similar to how one would interpret ordinary utilitarianism, in that one weighs up utilities at the time of choosing to act. On the other hand, rule utilitarianism chooses a set of rules which one will follow such that the rules tend to maximise the sum of utilities. Some consider that rule utilitarianism reconciles utilitarianism, a teleological system, with deontology -- but others, such as Smart, have described why this is false. If rule utilitarianism holds that its rules are inherently moral, then rule utilitarianism is inconsistent, since it deems its rules to be moral only because of their expected consequences in society. On the other hand, if rule utilitarianism holds that it is expected consequences that decide what is moral, then in any given case it will be better to consider the expected consequences of acts in that case, rather than to blindly trust in the virtue of a rule (a practice Smart has called ‘rule worship’). Trusting rules displaces consequences and utility from the centre of utilitarianism. Smart (1971) asserts that act utilitarianism is the proper theory of right, and that, while act-utilitarians will indeed use moral rules, they will only be used as rules of thumb. (I will discuss the importance of rules of thumb to both utilitarianism and its simulation shortly.)

1.5 Utility

I now look at the two core concepts of utilitarianism: in this section, utility; in the next section, consequences. Both are obviously important to my simulation (for a discussion of the utilities in the simulation, see Section ??). We have seen above that utility is a concept that is, if not vague, then manifold: it spans a motley collection of concepts such as pleasure, usefulness, happiness and satisfaction. Philosophers, economists and psychologists, working in what is loosely called utility theory or decision theory, have given considerable thought to the concept of utility. And while I cannot give a sufficient treatment of their work here, I can describe some of the more relevant results.

Ramsey (1931) was the first to attempt a formalisation of value (or utility) and preference. He needed the formalisation to define the concept of subjective probability as degree of belief. However, the classic treatment of utility is that of von Neumann and Morgenstern (1947), given in terms of objective probabilities. According to this view, utility is a measure of a person’s preferences. However, utility does not measure a person’s preferences over a set of outcomes. Instead, utility measures preferences over a set of lotteries, which are probability distributions over a set of (mutually exclusive) outcomes. For example, given two outcomes a = receive $10 and b = receive nothing, a person would presumably prefer an act which leads to a lottery with probabilities p(a) = 0.8 and p(b) = 0.2, to another act which leads to a lottery with probabilities p(a) = 0.5 and p(b) = 0.5. What von Neumann and Morgenstern (1947) show is that a person’s preferences satisfy certain reasonable axioms (completeness, transitivity and the Archimedean axiom -- which essentially asserts continuity) if and only if there exists a utility function whose values are ordered in the same way as that person’s preferences. The consequence for utilitarianism is clear: so long as a person can express preferences for different lotteries that satisfy the axioms, we can assign them a utility function.
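
The lottery example in the text can be written out directly. A minimal sketch, assuming the simplest utility assignment consistent with preferring $10 to nothing (the particular utility values are arbitrary; only their ordering matters):

```python
# The lottery example from the text: utility measures preferences over
# lotteries (probability distributions over outcomes), not over outcomes
# alone. The utility values below are arbitrary up to order.

u = {"receive $10": 1.0, "receive nothing": 0.0}

def expected_utility(lottery):
    """A lottery maps each (mutually exclusive) outcome to its probability."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

lottery_1 = {"receive $10": 0.8, "receive nothing": 0.2}
lottery_2 = {"receive $10": 0.5, "receive nothing": 0.5}

# 0.8 > 0.5, matching the presumed preference for the first lottery.
print(expected_utility(lottery_1), expected_utility(lottery_2))
```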

Some have suggested that most people do not conform to some of these axioms. One axiom that has been questioned is that of transitivity (Lichtenstein and Slovic 1971). It appears that people will often reverse their preference between two lotteries (involving money outcomes) when the lotteries are replaced by their certain money equivalents. However, others have questioned whether transitivity is actually being violated in these experiments (Karni and Safra 1987) -- they suggest, instead, that the independence axiom is being violated. Allais (1979) and others recognised that the independence axiom is implied by the von Neumann-Morgenstern axiomatisation. The axiom states that a new lottery combined in equal proportions with all existing lotteries does not affect the preferences between those existing lotteries. It appears that humans violate the independence axiom when lotteries involve certain or near-certain outcomes, so that risk-averse individuals overweigh the value of certain outcomes. Kahneman and Tversky (1979) call this the ‘certainty effect’, and have suggested using subjective probability weights to account for this effect. Allais (1979) also noted that people might change their preferences between lotteries if the same outcome is added to each lottery. Specifically, this would occur in cases where the new outcome is related to the existing outcomes specified by the lotteries. However, this too appears often to be a result of the certainty effect.

1.5.1 Commensurability

The biggest concern for utilitarianism lies in the commensurability of different people’s utility functions (a problem obviously not shared by hedonism). The worry is that utilities may be incommensurable: that if we take a value a from one individual’s utility function, and another value b from another person’s utility function, then a cannot be described as being greater than, equal to or less than b in any sensible way (Griffin 1986). The most obvious way to understand this (using the von Neumann-Morgenstern axiomatisation to produce a utility function) is that if we take a utility function, u(x), representing an individual’s preferences, then any positive linear transformation, bu(x) + c with b > 0, is also a utility function that represents that individual’s preferences -- thus, simply summing the values from given utility functions of different individuals clearly yields a nonsensical result.
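
The point can be demonstrated with a toy calculation: rescaling one person’s utility function by a positive linear transformation -- which represents exactly the same preferences -- can reverse the ranking that a naive sum produces. The outcomes and numbers below are invented purely for illustration.

```python
# Why naively summing utility functions is nonsensical: b*u(x) + c (b > 0)
# represents the same individual preferences as u(x), yet rescaling one
# person's function can reverse the summed ranking. Numbers are illustrative.

u1 = {"A": 3.0, "B": 1.0}   # person 1 prefers A to B
u2 = {"A": 1.0, "B": 2.0}   # person 2 prefers B to A

def naive_sum(outcome, f1, f2):
    return f1[outcome] + f2[outcome]

print(naive_sum("A", u1, u2), naive_sum("B", u1, u2))  # 4.0 3.0 -> A 'wins'

# Rescale person 2's utilities: preferences unchanged (B still above A)...
u2_rescaled = {o: 5.0 * v + 1.0 for o, v in u2.items()}

# ...but the naive sum now favours B instead.
print(naive_sum("A", u1, u2_rescaled),
      naive_sum("B", u1, u2_rescaled))  # 9.0 12.0 -> B 'wins'
```

Person 2’s preferences are untouched by the rescaling, yet the ‘social’ ranking flips -- which is exactly why the raw sum is meaningless without some calibration of the kind Griffin proposes.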

To combat this problem, Griffin (1986), following Parfit (1984), suggests we create an objective list of factors to calibrate the different utility functions. But this would seem to make utilitarianism substantially more deontological; what justification can we have for this static, objective list? It also raises many questions. Does Socrates count more in the sum than the fool? Just how dissatisfied must Socrates become before we regard the fool’s life as the better one? What of the mentally disabled? How are we to sum the utility functions of babies, children, adults, and seniors? Or if we are to regard animals as having equal status in our utilitarian theories (as Singer does), then how are we to add their utility functions to our own? Commensurability is not just a theoretical problem: for example, Singer suggests that infanticide in some cases can in fact be moral, while many others disagree; many utilitarians are not happy with Mill’s idea of higher and lower pleasures; and, as a particularly relevant example, the utilities of agents in my simulations seem not to be comparable at all with the utilities that we as a species derive.

However, often our moral considerations concern a group of individuals whose utility functions -- or pleasures and pains -- are commensurate. We are normally concerned with what one typical person, whose acts affect other typical people, ought to do. In these cases, we can rely on our recent common evolutionary history, which has probably produced comparable utility functions in all of us (Hare 1981, pg. 139 implies this). That is, our environment of evolutionary adaptation (our EEA; see Section ?? for discussion) -- which has produced in all humans a prolific ability to use tools, to communicate with each other and to form social networks -- is likely to have produced in all humans comparable utility functions also.

7. While neither of the arguments given here shows that human and animal utility functions are commensurate, neither do they show that they are incommensurate.

There is another evolutionary reason to believe that our utility functions are commensurate. One of the abilities that evolved in our EEA is the ability to communicate with each other in both concrete and abstract ways. For this kind of communication to work -- and, what is more demanding, to evolve -- the referents must be very similar or identical for both talker and listener. If we set aside metaphysical problems, this suggests that the way one person experiences pleasure and pain will resemble the way another person experiences pleasure and pain, since we have evolved the ability to talk about them, often in ways that make explicit inter-subjective comparisons.7

Therefore, it seems highly likely that utility functions are commensurate for individuals in many of the ethical situations we want to consider. Commensurability is certainly an assumption I make within my simulation, in which all agents derive identical utilities for identical outcomes. Nevertheless, commensurability is no certainty, and it is an assumption that should be investigated in future work.

1.6 The scope of consequences

Utilitarians also need to decide the scope of consequences; they must decide who or what is affected (scope over space) and how far into the future consequences must be considered (scope over time).

Let us first consider scope over time. In the most inclusive -- and most usual -- case, the utilitarian will regard all future consequences as counting towards the sum (Smart 1973). It may seem an extreme view to consider all future consequences, which will include effects a thousand and a million years hence. However, matters are not so bad. First, the utilitarian’s decision procedure is separate from the utilitarian system; the former need only be an approximation of the latter. With this in mind, the probability of any particular outcome a million years hence is negligible. Indeed, even the probability of any particular outcome over shorter periods will often be negligible. So most of what we do does not change the course of history (at least, not in any regular or predictable way). Ordinarily, the most important consequences of our actions will be few and foremost in our mind. In cases where they are not, we can use techniques (including simulation) to help us discover the important consequences. Thus, viewing all future consequences as contributing to the utilitarian sum is not the burden it may at first seem.

Let us now turn to the scope over space. The most inclusive case here is to consider the set of things capable of feeling pleasure and pain (which is what Singer wants us to consider). Again, even if one holds this position, the burden is not as great as it may seem. The important effects of our actions are often limited to only a handful of individuals. When they do extend to larger populations (such as when politicians make political decisions), we often have some data on the spread of the effects, and the likely utilities people will derive. This may be through specific statistics, (evolved) intuitions about the preferences of others, or the points raised in public debate. It is also, as I will soon discuss, an area where simulation can be of help.

We meet a difficulty when considering the set of things capable of feeling pleasure and pain: it is not clear who or what is a member of the set. Presumably, the set would include many animals (though perhaps not all). Curiously, it may not include all humans -- unconscious humans (while unconscious) do not feel pleasure and pain. Singer (1993) has famously argued that babies less than a month old have no consciousness, which makes infanticide acceptable -- so long as the baby is expected to derive negative utilities for the remainder of his or her life. Regardless, it is clear that there is still much disagreement on the members of the set.

Ultimately, I assume that it is best to consider all utilities over time and space (as do others such as Smart 1973 and Feldman 1997) -- both in the real world and in my simulations. However, as noted, this does not imply an impractical ethical system. If we make the distinction between a utilitarian ethical system, and a utilitarian decision procedure, then there is no problem with how many consequences we believe ultimately count towards the moral value of an act.

1.7 Evolving utilitarianism

In my simulations, I assume that behaviour can evolve that is utilitarian, but not just a side-effect of narrow self-interest (or narrow hedonism). But is this possible? Singer (1981) gives an interesting account of how ethical behaviour could have evolved amongst humans, based on kin, reciprocal and group altruism. In general, evolutionary ethicists note that individuals will act to help others when biological altruism contributes to their inclusive fitness. Thus, biological altruism is consistent with ‘biological hedonism’ and evolution will produce states of high utility in hedonists when their behaviour promotes their inclusive fitness. In particular, the hedonist will act so as to maximise the weighted sum of expected utilities. The utilities are weighted according to the relatedness coefficient between others and the hedonist, because this is the expected percentage of genes the hedonist will share with others by descent (Hamilton 1964). Thus, the value of an act to a hedonist will be the following modification of the utilitarian value equation:

v_h(a) = \sum_i \sum_j r(i) \, u_i(o_j) \, p(o_j \mid a)    (1.7)

where r(i) is the relatedness coefficient of individual i to the hedonist, and vh indicates that this is the hedonist’s value function. This equation makes clear that kin-selected altruistic behaviour will be of as much interest to the hedonist as narrowly self-interested behaviour. That is, if a hedonist (or an amoralist) is left to act as they wish (rather than specifically taught to act in their own interest) they will likely act according to this equation.
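
As a minimal sketch of Equation 1.7, the example below weights each agent’s utility by a relatedness coefficient in the manner of Hamilton (1964) (self = 1, full sibling = 0.5). The outcomes are deterministic for simplicity, and all utility values are invented.

```python
# A sketch of Equation 1.7: the biological hedonist weights each agent's
# expected utility by the relatedness coefficient r(i). Outcomes here are
# deterministic for simplicity; all utility numbers are invented.

def kin_value(outcome_probs, agents):
    """v_h(a) = sum_i sum_j r(i) * u_i(o_j) * p(o_j | a).

    agents -- list of (r_i, utility_dict) pairs
    """
    return sum(
        r_i * u_i[o] * p
        for r_i, u_i in agents
        for o, p in outcome_probs.items()
    )

helping  = {"sibling fed": 1.0}     # p(o | a = help)
ignoring = {"sibling hungry": 1.0}  # p(o | a = ignore)

me      = (1.0, {"sibling fed": -1.0, "sibling hungry": 0.0})
sibling = (0.5, {"sibling fed": 4.0,  "sibling hungry": -2.0})

print(kin_value(helping,  [me, sibling]))   # -1.0 + 0.5*4.0  =  1.0
print(kin_value(ignoring, [me, sibling]))   #  0.0 + 0.5*-2.0 = -1.0
# Helping pays the biological hedonist here despite its direct cost.
```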

8. Reciprocal altruism, as I have discussed in Section xxx, is essentially delayed cooperation and, as such, synergistic.

In fact, there is another factor that affects the behaviour of hedonists: synergy. By synergy, I simply mean positive effects due to a combination of parts, greater than the sum of effects of the individual parts. An example of an evolved synergistic behaviour might be flocking behaviour, in which each animal that travels in a flock has a reduced probability of being attacked, in comparison to each travelling alone. Mating can also be considered synergistic: trying to mate alone in a sexual species will not help fitness. The synergy of mating is made clearer when we realise that there are other means of copulating (such as rape) in which the partner does not receive positive utility. Important to the concept is an act that can be carried out either alone or with others. In the flocking example, this act is ‘travelling’: it can be performed individually, or in a group. Under evolution, the hedonist will seek out social situations in which the hedonist’s behaviour is also synergistic.8

By Equation 1.7 above, synergy can be of value to the hedonist when performing an act synergistically alters outcomes (and thus utilities) or the probabilities of outcomes. This might alter a hedonist’s choice of action in one of three ways. First, synergy, by definition, provides more utility for an act when performed with others than when that act is performed alone. Thus, it will pay the hedonist to see whether she can perform her chosen act synergistically. Second, synergy can make some acts more attractive than alternatives if synergistic cooperation is not possible for those alternatives. Third, a synergistic act may make certain desirable outcomes more probable.

The consequence of synergy is to bring the hedonist still closer to the utilitarian ideal. Note that the hedonist’s value equation (Equation 1.7 above) is not altered by these considerations, since an ‘act performed synergistically’ will simply be another possible act the hedonist can choose. But it does alter what kind of acts the hedonist will value. More actions will be attractive to both the hedonist and utilitarian than would be the case without synergy, and thus the hedonist’s behaviour is more similar to that of the utilitarian. We can make the idea more formal by splitting an individual’s utility function between synergistic and non-synergistic utility:

u_i(o) = u_{d,i}(o) + u_{s,i}(o)    (1.8)

This equation clarifies that the utility that an agent i derives from an outcome o is a combination of direct (non-synergistic) utility (ud,i) from that outcome plus any additional synergistic utility (us,i). To reiterate, this only clarifies the composition of the individual’s utility function and does not change the form of the hedonist’s value equation (Equation 1.7).

There is one final thing that the hedonist must consider, at least in human society: our willingness to punish and intervene, even at a cost to ourselves (for recent studies on this behaviour in ultimatum and other games, see Henrich et al. 2004). Thus, the utility to a hedonist of some outcome o will be moderated by the negative utility of punishment and the reduced probability of successfully achieving the outcome due to another’s intervention. In the case of punishment, we can add further detail to the utility function:

u_i(o) = u_{d,i}(o) + u_{s,i}(o) - u_{p,i}(o)    (1.9)

where up,i is the punishment cost to the individual i of outcome o. We can also add detail to the probability function to show the possibility of intervention:

p(o_j \mid a) = p(o_j \mid I, a) \, p(I \mid a) + p(o_j \mid \neg I, a) \, p(\neg I \mid a)    (1.10)

where I represents intervention and \neg I its absence. We expect, of course, that if it is likely someone will intervene (p(I \mid a) \gg 0), then that intervention will reduce the probability of the outcome (p(o_j \mid I, a) < p(o_j \mid \neg I, a)). Further, we expect other (hedonistic) individuals to intervene exactly when the expected utility of intervening for that individual outweighs the expected utility of not intervening. This will bring the hedonist’s behaviour further in line with the utilitarian’s. (Of course, as with synergy, these alterations do not change the form of the hedonist value equation.) In my simulations involving rape, I allow the possibility of punishments and intervention -- and their presence can often suppress the evolution of rape.
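
A minimal sketch combining Equations 1.9 and 1.10: intervention makes the hedonist’s desired outcome less likely, and punishment subtracts utility from it. All probabilities and utilities below are invented for illustration.

```python
# A sketch of Equations 1.9 and 1.10. Intervention (I) makes the desired
# outcome less likely; punishment reduces its net utility. All numbers are
# invented for illustration.

def outcome_prob(p_o_if_intervention, p_o_if_none, p_intervention):
    """Equation 1.10: p(o|a) as a mixture over whether anyone intervenes."""
    return (p_o_if_intervention * p_intervention
            + p_o_if_none * (1.0 - p_intervention))

def net_utility(u_direct, u_synergy, u_punishment):
    """Equation 1.9: u_i(o) = u_d,i(o) + u_s,i(o) - u_p,i(o)."""
    return u_direct + u_synergy - u_punishment

# An antisocial act: likely to succeed only if no one intervenes, and
# punished when detected.
p_o = outcome_prob(p_o_if_intervention=0.1, p_o_if_none=0.9,
                   p_intervention=0.5)
u_o = net_utility(u_direct=5.0, u_synergy=0.0, u_punishment=4.0)

print(p_o * u_o)  # expected value of the act drops to 0.5 * 1.0 = 0.5
```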

9. An issue that is completely separate to the above discussion is that rape could be ethical under some circumstances. But such circumstances for rape will be so extreme and absurd (in the case of humans) that a) they do not merit serious consideration and b) our moral intuitions will not hold in such circumstances anyway -- which is just as Hare (1981) argues for absurd circumstances in general.

Thus, the biological hedonist will often behave like a utilitarian. Nevertheless, there will be important cases in which the hedonist will behave differently. Take an example from my rape simulations. Agents will obviously evolve to be biological hedonists, since they have no interesting phenotypic plasticity (i.e. no ability to change and develop through their lives). In that case, agents will perform an act like rape if a) they do not harm a relative (kin altruism), b) they do not derive more utility from the consensual act of mating (synergy), and c) they will not be stopped (intervention) or hurt (punishment). In such a case, this conditional form of rape would still be unethical, but may be biologically hedonist and thus become evolutionarily stable.9

1.8 Criticism of the utilitarian calculus

10. I do not discuss the most common criticism of utilitarianism, which is that it harms integrity -- that is, that we must always consider everyone equally, losing our sense of self and ignoring our personal relations (for discussion, see Scheffler 1982, who identifies Rawls and Williams as the main proponents). There are many responses to this; the best, I believe, is based on the fact that an individual will easily be the leading expert on herself, and the next-best expert on those close to her. For a defence of consequentialism (but not utilitarianism) against this problem, see Portmore (2001).

There are several common criticisms of utilitarianism (and also common rebuttals), but I will discuss just one that is relevant to the value of ethical simulations: that utilitarianism requires too much calculation.10 We can interpret this as making one of the following two claims: 1) there is fundamentally too much calculation involved in utilitarianism for it to be an acceptable ethical theory; or 2) there is too much actual calculation required in utilitarianism for it to be practical. One might make the first claim if one believes an ethical theory ought to have simple means that yield answers to ethical problems. This is a dubious metaphysical requirement; it does not seem to apply to any natural or social science, logical system or even philosophy. One might also make the first claim if one resents the idea of turning human dilemmas into mere computations. However, most reasoning tries to turn abstract ideas into concrete forms and, hopefully, make them susceptible to mechanical checking and computation. If one rejects utilitarianism on these grounds, one needs to also reject reasoning about ethics.

The second claim is a much more serious one for utilitarianism if correct -- indeed, one that can defeat utilitarianism on its own ground. If there is too much calculation required to ever apply utilitarianism in practice, then utilitarians ought not be utilitarian in practice. However, as Parfit (1984) argues, we must make a distinction between a decision procedure and a theory of right. The principal claim of most utilitarians is that utilitarianism gives us the latter.

11. Hare (1981), for instance, suggests that there are two levels of thinking: intuitive and critical. Critical thinking involves utilitarian calculation, weighing which of our intuitions we should trust. Intuitive thinking is what we should use in practice (in almost all cases).

Nevertheless, a theory of right that can never have a practical decision procedure is a useless theory of right. One possibility is to adopt rule utilitarianism instead of act utilitarianism. However, as Smart (1971; 1973) has argued, rule utilitarianism has undesirable properties (outlined earlier at the end of Section 1.4). Instead, we should develop rules of thumb -- heuristics -- as guides to behaviour in cases where no cues indicate that specific consequences may be important. This is precisely where simulations such as my own can help -- by helping us discover plausible consequences for which we can develop heuristics. Beyond heuristics, we can also appeal to two other methods of circumventing calculation: we can cultivate virtues; and we can rely on our ‘intuition’ (Hare 1981) -- either our evolved moral intuition (which, as noted earlier, will often be a good approximation of the utilitarian calculation) or learnt skills.11 Depending on how we interpret virtues and intuitions, simulation can also help us with these.

1.9 The value of simulation in ethics

Let us now turn to the potential utility of simulation to ethics. First, I wish to describe a concept that will hopefully clarify the discussion. The perfect utilitarian choice is the choice that we would make if our decision procedure were identical to our theory of right. That is, it would take into account all possible consequences over time and space, and take into account the correct utilities that every individual would derive (as described in Section 1.5). In almost all cases, an actual utilitarian choice will only be an approximation to the perfect utilitarian choice for that case. In the following, I explore how, in addition to traditional methods, we can use the results of simulation to improve such approximations.

1.9.1 Types of ethically valuable simulations

The use of simulation can be divided into four classes, corresponding to the cases produced by two independent distinctions (shown in Table 1.1). One distinction is between simulations that model utilities explicitly and those that do not, but that still have ethically relevant consequences. The other is between simulations used to decide the best action in a specific situation and those used to decide the best kind of action across many similar situations.


                        Ethically relevant           Ethically relevant
                        consequences                 consequences with utilities

Single situation        (A) Global climate models    (B) ?

Repeated situation      (C) Many economic and        (D) My simulations
                            social simulations

Table 1.1: A division of the potential uses of simulation in utilitarianism and consequentialist ethics, with examples in each cell

Consider simulations in class A. Suppose that we (as imperfect utilitarians) are deciding whether to perform a particular act, such as voting for a policy in government. We can, of course, use our own reasoning to estimate if it is more moral to perform the act than to not perform it. There is a certain chance that this reasoning will yield the perfect utilitarian choice, which will depend on how good we are at realising the important consequences. We may be able to improve this chance by using simulation to give us a better idea of the consequences. This will be true regardless of whether that simulation itself models utilities.

An example of this kind of simulation use is in the debate on global climate change. Global climate models improve our understanding of the consequences of our climate policies (Edwards 1999). Suppose that such a climate model confidently indicates that a policy of liberal emissions standards will lead to dangerous levels of greenhouse gases. We would then know that voting for that policy is unethical, since we would do great damage to (the utilities of) many future generations. In this example, the value of simulation is not simply limited to utilitarian ethics -- other systems will also find the results of the simulation informative. For if an act knowingly leads to widespread death, it is surely a relevant consideration in any ethical system.

This example demonstrates that simulations can aid an ethical decision without modelling utilities (simulations that fall into class A in the table). The global climate model is a physical model of weather, and it says nothing about good or bad. Only when we apply its conclusions to the fate of humans do we make our moral conclusions. Those conclusions carry moral weight because climate change is a global phenomenon that we can influence and that affects humans and animals. Another example of a physical simulation is a structural mechanics simulation of a bridge -- again, the simulation says nothing about good or bad, but we can use its conclusions to help us make a better ethical decision. There are many other simulations that can aid an ethical decision in a similar way, many of which fall into the field of social simulation (see Section ?? for discussion of this field). As examples, economic models can help us decide which policies to support, while models of disease transmission can help us decide the best response to an outbreak.

It is normally impractical to implement a simulation solely to help tell us the consequences of our options for a specific choice (simulations of class A). It is more practical (and still valuable) to develop simulations aimed at discovering the typical consequences of choices that arise frequently (simulations of class C). To take a random recent example, we can look at the simulations of Younger (2005). Younger concludes that his simulations show that we can increase the amount of mutual obligation (roughly, the amount of sharing) in society by encouraging indiscriminate sharing. Assuming we agree that this (fairly obvious) conclusion applies beyond Younger’s simulation, and believe it novel, the conclusion will have implications for our consequentialist ethical theories. In the case of utilitarianism, if we have reason to believe that more sharing leads to greater total utilities, we may adopt (or promote) the rule of thumb ‘Share with others indiscriminately when the option to do so arises’. That is, we can make better approximations of perfect utilitarian choices by using Younger’s conclusion to develop a heuristic.

12. Furthermore, I know of no examples of class B at all -- i.e. simulations that include utilities and model specific situations.
13. Indeed, we can also use simulation to explore the concept of utility further, which I do not consider here.

If we want a more direct way of approximating perfect utilitarian choices, we can develop heuristics from simulations that include utilities (simulations of class B and D). The only simulation that I am aware of that is like this is my own.12 We can include utilities either by direct assignment to outcomes for individuals (which is the method I use in my simulations), or by other means such as evolving them or calculating them from changes in the states of agents.13 Whichever method we choose, we can very easily calculate the typical total utilities that complex situations and behaviours yield (something not possible with reasoning alone), and use this in better approximating perfect utilitarian choices.

1.9.2 Improving the utilitarian decision procedure

In practice, our utilitarian decision procedure involves aids such as heuristics, the cultivation of virtues, and intuitions about right action -- and simulation will not change this procedure. However, simulation can help us improve the value of these aids. By far the most important improvement that simulation can make is to help us develop and test heuristics. However, simulation may also help us rank virtues, or decide whether putative virtues really are virtues. And it may also help us understand which of our intuitions to trust and which to discard. I will treat each of these possibilities in turn.

Simulation can help us develop heuristics of right action because it allows us to explore the conditions under which actions lead to an improvement in ethical value. For instance, if we speculate that lying to protect has bad consequences, we could set up one simulation in which lying to protect is not allowed, and another in which it is allowed, and check which produces the most utility. This will not give us a certain conclusion, but it will provide evidence in favour of one or the other. By accumulating studies of this sort, using many different environments and conditions, we may be able to reach a conclusion that we can apply generally -- in other words, a heuristic.
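
The design just described might be sketched as follows. Here run_simulation is a hypothetical stand-in -- a real study would substitute an actual agent-based model that returns the total utility produced in a run -- and its placeholder numbers encode no real result about lying.

```python
# A schematic of the two-condition experiment described above. run_simulation
# is a hypothetical stand-in for a real agent-based model; its placeholder
# output encodes no actual result about lying.

import random

def run_simulation(lying_to_protect_allowed, seed):
    """Stand-in: would run the model and return the total utility produced."""
    rng = random.Random(seed)
    base = 100.0 if lying_to_protect_allowed else 95.0  # placeholder effect
    return base + rng.gauss(0.0, 5.0)

def mean_total_utility(allowed, runs=30):
    # Average over many seeded runs, not a single pair of runs.
    return sum(run_simulation(allowed, seed) for seed in range(runs)) / runs

print("lying allowed:   ", mean_total_utility(True))
print("lying disallowed:", mean_total_utility(False))
```

Averaging over many runs under varied conditions, rather than comparing a single pair of runs, is what allows the accumulated evidence to support a general heuristic.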

In some cases, treating the conclusion as a heuristic of behaviour may be inappropriate. My simulations of suicide or abortion during droughts show that they can be ethical. Even if we were to become convinced that there were specific circumstances in which suicide or abortion were the ethical option, we could never promote such an option -- misapplication and misunderstanding of the heuristic would cause far more harm than correct application would cause good. In these cases, the conclusion we derive from a simulation is an extra (though very minor) point that can aid a person in their personal decision -- it is not a heuristic to be promoted as right or important.

The use of simulation in understanding virtues is much more restricted. Under utilitarianism, virtues are obviously moral traits -- traits that are only immoral in rare circumstances. Thus, we will rarely need to turn to simulation to tell us what traits count as virtues. However, simulation can help us decide between conflicting virtues -- one can easily imagine circumstances in which our honesty will conflict with our loyalty, and we are forced to choose between the two. In this case, we can imbue a population with these virtues, and then assign to all agents the same ranking over virtues. To oversimplify, we can then run one simulation in which honesty is ranked above loyalty, and another in which the opposite ranking is chosen, and discover which of the options leads to the greatest sum of utility.

14. If we assume intuition takes precedence, then utilitarianism and simulation can fill in the gaps. If we assume intuition is all there is to ethics, then utilitarianism and simulation obviously have no value.

As for intuition, if we assume utilitarianism takes precedence (or, as Hare 1981 puts it, that our intuitions can be evaluated using critical thinking which is utilitarian), then simulation can tell us which of our intuitions to trust and which to distrust.14 We can develop simulations to test our intuitions in much the same way we develop them to test our heuristics.

1.9.3 Simulation and non-utilitarian ethical systems

Simulation is of no value to ethical systems that deny a place to consequences. Simulations are causal models: their main value is in discovering unknown consequences, confirming known consequences, or discovering or confirming sufficient causes. However, most ethical systems do assign a place to consequences, even if that place is not as prominent as in explicitly consequentialist systems. Kant’s categorical imperative, which requires that all moral acts be universalisable, can only be decided empirically -- that is, if an act leads to bad consequences, it is not universalisable. Thus, it is easy to see how universalisability could be tested in a simulation experiment or thought experiment. For virtue ethicists, the consequences of virtues may be what makes them virtues -- it is the propagation of true information that makes honesty valuable, and it is the benefit to the poor that makes generosity valuable. Again, it is easy to see how the consequences of virtues could be tested in simulation. Furthermore, hedonism and economic rationality are obviously consequentialist. By implication, Rawlsian contract ethics also depends critically on consequences, since the members of the original position are all meant to be rationally self-interested -- according to Rawls, they agree to institutions based on the (worst) expected utilities derived in those institutions. While we cannot implement anything like an original position, we can use simulation to discover the (worst) expected utilities of institutions.

Thus, simulation can be of value to non-utilitarian ethical systems, but that value will depend on the degree to which non-utilitarians make use of consequences and, to a much lesser degree, utility.

1.10 Conclusion

The principal value of simulation to ethics (indeed, to any subject) is its ability to calculate consequences that we would have difficulty calculating unaided. As an added value for utilitarianism specifically, we can also model the utilities that individuals derive from those consequences and easily calculate the total utility. As a method of investigating ethical issues, the simulations I will present here are only first steps, but they show, I hope, that such investigation can be fruitful for utilitarianism (and perhaps for other systems as well). Indeed, such virtual experiments seem a natural fit with utilitarianism -- both in being fully compatible with its principles and in improving the value of the utilitarian decision procedure. The discussion in this chapter supports my principal argument in this thesis: that simulation allows us to experiment with contentious behaviour in a safe and informative way -- that we can explore the consequences of contentious behaviour, be they ethical or evolutionary, in unprecedented detail.

References

   Allais, M. (1979). Translation of ‘Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine’, in M. Allais and O. Hagen (eds), Expected utility hypotheses and the Allais Paradox, D. Reidel Pub. Co, Dordrecht; Boston.

   Anscombe, G. E. M. (1958). Modern moral philosophy, Philosophy 33: 1-19.

   Bentham, J. (1987). An introduction to the principles of morals and legislation, in A. Ryan (ed.), Utilitarianism and other essays, Penguin Books, New York.

   Edwards, P. N. (1999). Global climate science, uncertainty and politics: Data-laden models, model-filtered data, Science as Culture 8(4): 437-472.

   Epicurus (1999). title, in O. A. Johnson (ed.), Ethics: selections from classical and contemporary writers, Harcourt Brace College Publishers, Fort Worth; Sydney.

   Feldman, F. (1997). Utilitarianism, Hedonism, and Desert, Cambridge University Press, New York.

   Foot, P. (1978). Virtues and vices, and other essays in moral philosophy, Blackwell, Oxford.

   Griffin, J. (1986). Well-Being, Clarendon Press, Oxford.

   Hamilton, W. (1964). The genetical evolution of social behavior I & II, Journal of Theoretical Biology 7: 1-16 & 17-52.

   Hare, R. M. (1981). Moral thinking: its levels, method, and point, Oxford University Press, New York.

   Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E. and Gintis, H. (eds) (2004). Foundations of human sociality: economic experiments and ethnographic evidence from fifteen small-scale societies, Oxford University Press, Oxford.

   Hume, D. (1984). A treatise of human nature, Penguin, London.

   Johnson, O. A. (1999). Introduction to ‘Epicurus’, Harcourt Brace College Publishers, Fort Worth; Sydney.

   Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision under risk, Econometrica 47: 263-292.

   Kant, I. (1909). Fundamental principles of the metaphysic of morals, in T. K. Abbott (ed.), Kant’s critique of practical reason and other works on the theory of ethics, Longmans, London.

   Karni, E. and Safra, Z. (1987). Preference reversals and the observability of preferences by experimental methods, Econometrica 55(3): 675-685.

   Lichtenstein, S. and Slovic, P. (1971). Reversal of preferences between bids and choices in gambling decisions, Journal of Experimental Psychology 89: 46-55.

   MacIntyre, A. (1981). After virtue: a study in moral theory, Duckworth, London.

   Mill, J. S. (1861). Utilitarianism, Collins.

   Moore, G. E. (1903). Principia ethica, Cambridge University Press, Cambridge.

   Mossner, E. C. (1984). Introduction, in D. Hume, A treatise of human nature, Penguin, London.

   Parfit, D. (1984). Reasons and Persons, Clarendon Press, Oxford.

   Parfit, D. (1994). How both human history and the history of ethics may be just beginning, in P. Singer (ed.), Ethics, Oxford University Press.

   Pojman, L. P. (1998). Ethical Theory: Classical and Contemporary Readings, Wadsworth Pub. Co.

   Portmore, D. W. (2001). Can an act-consequentialist theory be agent relative?, American Philosophical Quarterly 38(4): 363.

   Ramsey, F. P. (1931). Truth and probability, in R. B. Braithwaite (ed.), The foundations of mathematics and other logical essays, K. Paul, Trench, Trubner & co., London, chapter VII.

   Rawls, J. (1972). A theory of justice, Oxford University Press, Oxford.

   Ruse, M. (1995). Evolution and ethics: the sociobiological approach, in L. Pojman (ed.), Ethical theory: classical and contemporary readings, pp. 91-107.

   Scheffler, S. (1982). The Rejection of Consequentialism, Clarendon Press, Oxford.

   Sidgwick, H. (1907). The methods of ethics, Macmillan, London.

   Singer, P. (1981). The expanding circle: ethics and sociobiology, Clarendon Press, Oxford.

   Singer, P. (1993). Practical ethics, Cambridge University Press, Cambridge; New York.

   Singer, P. (1994). Introduction to ‘ethics’, in P. Singer (ed.), Ethics, Oxford University Press.

   Smart, J. J. C. (1971). Extreme and restricted utilitarianism, in S. Gorovitz (ed.), [Mill’s] Utilitarianism. With Critical Essays, Bobbs-Merrill, Indianapolis, pp. 195-203.

   Smart, J. J. C. (1973). Utilitarianism: For and Against, Cambridge University Press.

   Sober, E. R. (1994). Prospects for an evolutionary ethics, in L. Pojman (ed.), Ethical Theory.

   von Neumann, J. and Morgenstern, O. (1947). Theory of Games and Economic Behavior, Princeton University Press.

   Younger, S. (2005). Reciprocity, sanctions, and the development of mutual obligation in egalitarian societies, Journal of Artificial Societies and Social Simulation 8(2).
URL: http://jasss.soc.surrey.ac.uk/8/2/9.html
