Models of the various adaptive specializations that have evolved in the human psyche
could become the building blocks of a scientific theory of culture. The first step in
creating such models is the derivation of a so-called “computational theory” of the
adaptive problem each psychological specialization has evolved to solve. In Part II, as
a case study, a sketch of a computational theory of social exchange (cooperation for
mutual benefit) is developed. The dynamics of natural selection in Pleistocene ecological
conditions define adaptive information processing problems that humans must be able
to solve in order to participate in social exchange: individual recognition, memory for
one’s history of interaction, value communication, value modeling, and a shared gram-
mar of social contracts that specifies representational structure and inferential pro-
cedures. The nature of these adaptive information processing problems places con-
straints on the class of cognitive programs capable of solving them; this allows one to
make empirical predictions about how the cognitive processes involved in attention,
communication, memory, learning, and reasoning are mobilized in situations of social
exchange. Once the cognitive programs specialized for regulating social exchange are
mapped, the variation and invariances in social exchange within and between cultures
can be meaningfully discussed.
KEY WORDS: Reciprocal Altruism; Cooperation; Tit for tat; Cognition; Reasoning;
Evolution; Learning; Culture.
INTRODUCTION
Human beings live in groups, and their behavior is affected by
information derived from the other individuals with whom they
live. The study of culture is the study of how different kinds of
information from each individual’s environment, especially from
predictive framework that facilitates the design of experiments that can map
the structure of the cognitive programs that guide social exchange in humans.
This article is divided into four sections:
Part 1. Only certain strategies for engaging in social exchange can evolve:
Natural selection’s game theoretic structure defines what properties these
strategies must have.
Part 2. The ecological conditions necessary for the evolution of social ex-
change were manifest during hominid evolution; hominid behavioral ecol-
ogy further constrains a computational theory of social exchange.
Part 3. These strategic and ecological constraints define a set of information
processing problems that must be solved by any human engaging in social
exchange. Computational theories of these problems are developed.
Part 4. Aspects of the computational theory of social exchange have been
tested. Experimental evidence from tests of logical reasoning verifies the
existence of algorithms for detecting cheaters; as predicted, these algo-
rithms operate on item-independent, cost-benefit representations of ex-
change interactions.
Parts 1 and 2 review the constraints from which a computational theory
should be built. Part 3 presents a first attempt to build a computational theory
of social exchange; Part 4 briefly reviews the results of experiments designed
to test aspects of the computational theory developed.
dominate the gene pool (Maynard Smith 1982). During the last 20 years,
game-theoretic models of the dynamics of natural selection have proliferated
in evolutionary biology. The elaboration of these methods now allows a more
precise characterization of the differences between those strategies that can
be selected for and those that will be selected against (e.g., Hamilton 1964;
Williams 1966; Maynard Smith 1982; Dawkins 1982). In the case of social
exchange (“cooperation” or “reciprocation”), Axelrod and Hamilton (1981)
and Axelrod (1984) have shown that only certain families of strategies with
certain distinctive properties can evolve.
Using the iterated Prisoner’s Dilemma as their paradigm of coopera-
tion,1 Axelrod and Hamilton (1981), and Axelrod (1984), following Williams
(1966) and Trivers (1971), explored the envelope of conditions limiting the
evolution of social exchange. It is the possibility of cheating (or defecting
once one side of a mutually beneficial exchange has been carried out) that
makes the evolution of cooperation difficult. For this reason, indiscriminate
cooperation under conditions that allow cheating is an unstable strategy that
would be quickly selected out under all models of biologically plausible
conditions. Virtually any nonsimultaneous exchange creates a possibility of
defection, and most “natural” opportunities for exchange are not simul-
taneous. For example:
1. A common “item” of exchange between primates is protection from con-
specifics or predators. Two or more individuals develop coalitional re-
lationships for mutual defense, aggression, or protection (e.g., baboons:
Hall and DeVore 1965; chimpanzees: Wrangham 1986; de Waal 1982). If
one individual is attacked, and another comes to its defense, there is
nothing the aided individual can do at that time to repay his rescuer.
Reciprocation is possible only at another time, when the rescuer is himself
attacked.
2. Interactants are foraging for patchy resources. One individual finds, for
example, a patch containing more than can be easily eaten by itself, and
gives a call to guide others to the patch (or returns and shares the resource
with others). Repayment in kind, to be valued, would have to take place
subsequently (e.g., chimpanzees: Goodall 1968, 1971; bats: McCracken
and Bradbury 1981; Wilkinson 1984).
3. In hunter-gatherer meat sharing, kills may be larger than can be easily
consumed by those who were directly in the hunt, and only irregularly
obtained. The value of consuming the whole kill is less than the value of
sharing the unneeded or less-needed portions with others, provided that
the act is reciprocated at some future time (Lee and DeVore 1968). Again,
repayment on the spot is both unlikely and not valuable.
1 Other models of social exchange are possible, but they will not change the basic conclusion
of this section: that reciprocation is necessary for the evolution of social exchange. For example,
the Prisoner’s Dilemma assumes that enforceable threats and enforceable contracts are im-
possibilities (Axelrod 1984), assumptions that are frequently violated in nature. The introduction
of these factors would not obviate reciprocation-in fact, they would enforce it.
The “items” of exchange are frequently acts that, once done, cannot be
undone (e.g., protection from attack, alerting others to the presence of
a food source).
Opportunities for simultaneous mutual aid are rare because the needs and
abilities of organisms are rarely exactly and simultaneously complemen-
tary: The female baboon is not fertile when her infant needs protection,
yet this is when the male’s ability to protect is of most value to her.
On those occasions when repayment could be made simultaneously and
in the same currency, declining marginal utilities make the exchange
senseless: If meat sharers both make a kill on the same day, neither
benefits from the other’s windfall.
2 Indeed, such factors are exactly why it is so useful to have a medium of exchange. One party
doesn’t have to be able to provide the particular goods or services the other party wants because
money can be converted into anything others are willing to exchange for it. Furthermore, money
permits a simultaneous exchange, in which either party can, in fact, withhold the money if he
or she suspects that the other is attempting to cheat.
3 The game "unravels" if they do. If we both know we are playing three games, then we both
know we will mutually defect on the last game. In practice, then, our second game is our last
game. But we know that we will, therefore, mutually defect on that game, so, in practice, we
are playing only one game. The argument is general to any known, fixed number of games
(Luce and Raiffa 1957).
[Figure: Prisoner's Dilemma payoff matrix; rows give "my" moves and columns the partner's ("cooperate" or "defect").]
two interactants in the social exchange will be designated "you" and "I,"
with appropriate possessive pronouns). If "I" defect when "you" coop-
erated, then you can retaliate by defecting on the next move;4 if I cooperate,
you can reward me by cooperating on the next move. In an iterated Pris-
oner’s Dilemma, a system can emerge that has incentives for cooperation
and disincentives for defection.
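To make these incentives concrete, here is a minimal simulation sketch in Python. The payoff values (T = 5, R = 3, P = 1, S = 0, satisfying T > R > P > S) are standard illustrative choices, not values from the text, and only two strategies are shown: TIT FOR TAT and an unconditional defector.

```python
# Minimal iterated Prisoner's Dilemma sketch. The payoffs are the standard
# illustrative values (temptation, reward, punishment, sucker), not from the text.
T, R, P, S = 5, 3, 1, 0

def payoff(me, other):
    """My payoff for one round, given two moves ('C' or 'D')."""
    if me == 'C':
        return R if other == 'C' else S   # sucker's payoff if I am cheated
    return T if other == 'C' else P       # temptation if I cheat; punishment if both defect

def tit_for_tat(partner_history):
    """Cooperate on the first move; thereafter copy the partner's last move."""
    return 'C' if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return 'D'

def play(a, b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)   # each strategy sees the other's past moves
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation compounds
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then no more cooperation
```

TIT FOR TAT is cheated exactly once and then withholds cooperation, so an indiscriminate defector gains little from it, while two reciprocators do far better together than two defectors would.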
The work of Trivers (1971), Axelrod and Hamilton (1981), and Axelrod
(1984) has shown that indiscriminate cooperation cannot be selected for
when the opportunity for cheating exists. A cooperative strategy can invade
4 In nature, I can also retaliate by inflicting a cost on you through the use of violence. However,
if I can, reliably, do this, the game is no longer a Prisoner’s Dilemma. Violent retaliation is a
"tax" on defection that wipes out the incentive to defect (i.e., T minus R). If T ≤ R, then the
situation no longer presents a dilemma-we both have an incentive to cooperate and no incentive
to cheat. The key word in the above scenario is reliably. From a “veil of ignorance” as to the
relative strength of two individuals, on average, half the time I (the cheated on) will be able to
inflict a cost on you, and half the time you (the cheater) will be able to inflict a cost on me. Of
course, most animals are not acting from a veil of ignorance, and one would expect them to
assess their relative strength and adjust their strategies accordingly.
a population of noncooperators if, and only if, it cooperates with other co-
operators and excludes (or retaliates against) cheaters. If a cognitive decision
rule regulating when one should cooperate and when one should cheat does
not instantiate this constraint, then it will be selected against. However,
Axelrod (1984) has shown that there are many decision rules that do in-
stantiate this constraint. Any of these could (other things being equal), there-
fore, have been selected for in humans; which decision rule, out of this
constrained family, actually evolved in the human lineage is an empirical
question. The most general statement about such decision rules that natural
selection theory permits is this: Humans have the ability to cooperate for
mutual benefit; this capacity could not have evolved unless it included al-
gorithms for detecting, and being provoked by, cheating.
5 For example, TIT FOR TAT is an ESS if, and only if, the probability that two individuals
will meet again is greater than the larger of these two numbers: (T-R)/(T-P) and (T-R)/(R-S)
(Axelrod 1984).
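As a worked instance of this footnoted condition, using the same illustrative payoffs assumed in the sketch above (not values from the text):

```python
# Worked example of the ESS condition for TIT FOR TAT from footnote 5,
# with the illustrative payoffs T=5, R=3, P=1, S=0.
T, R, P, S = 5, 3, 1, 0
threshold = max((T - R) / (T - P), (T - R) / (R - S))
print(threshold)  # max(0.5, 0.667) = 0.667: TIT FOR TAT is evolutionarily stable
                  # only if the probability of meeting again exceeds about 2/3
```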
6 An ability that some other primates also possess, to a lesser extent. For example, de Waal
(1982) shows pictures of chimpanzees who have discovered that they can get past an electrified
fence surrounding a tree with edible leaves. One chimpanzee holds a large branch against the
tree as a ladder, while another climbs it into the tree. The chimpanzee in the tree then throws
juicy leaves down to his compatriots on the ground.
David Marr has argued that the first and most important step in understand-
ing an information-processing problem is developing a "theory of the com-
putation" (Marr 1982; Marr and Nishihara 1978). This theory defines the
nature of the problem to be solved; in so doing, it allows one to predict
properties that any algorithm capable of solving the problem must have.
Computational theories incorporate “valid constraints on the way the world
is structured-constraints that provide sufficient information to allow the
processing to succeed” (Marr and Nishihara 1978, p. 41).
For humans, an evolved species, natural selection in a particular eco-
logical situation defines and constitutes “valid constraints on the way the
world is structured” for a particular adaptive information processing prob-
lem (Cosmides and Tooby 1987). In the case of social exchange, the eco-
logical and game-theoretic aspects of hominid social exchange discussed
above provide the ingredients for the construction of just such a computa-
tional theory. The ability to engage in a possible strategy of social exchange
presupposes the ability to solve a number of information processing prob-
lems that are highly specialized. The elucidation of these information pro-
cessing problems constitutes a computational theory of social exchange. Any
psychological theory purporting to account for the fact that humans are able
to engage in social exchange must be powerful enough to realize this com-
putational theory-that is, its information processing mechanisms must pro-
duce behavior that respects the constraints imposed by the selective process.
Thus, it must be powerful enough to 1) permit the realization of a “possible”
social exchange strategy, i.e., a strategy that can be selected for, and 2)
exclude "impossible" strategies, i.e., strategies that would be selected against.
The problems most specific to social exchange will be incorporated into
a “grammar of social contracts” in the second half of Part 3. A grammar
of social contracts is the set of assumptions about the rules governing a par-
The response requires that the defecting individual not be lost in a sea of
anonymous others. (Axelrod and Hamilton 1981)
7 Organisms that lack the ability to recognize different individuals can also evolve a limited
ability to cooperate, but only because of ecological restrictions on their interactions to a very
few partners with whom they are in constant and/or exclusive physical proximity (Axelrod and
Hamilton 1981).
8 One would expect people to assume, in the absence of information to the contrary, that such
intercontingent behavior occurs in face-to-face interactions. They should be more likely to
suspect someone of intending to cheat in delayed benefit transactions.
9 For example, a person’s facial expression might telegraph his or her intention to cheat. All
else equal, a person’s “likeability” should be a function of his or her tendency to reciprocate,
and cues that suggest “good intentions” ought to be judged more likeable (e.g., sneers and
aggressive scowls do not suggest good intent). Although other explanations are possible, it is
interesting that people remember unfamiliar faces better when, during initial inspection, they
are asked to judge the person’s likeability than when they are asked to assign sex (Carey and
Diamond 1980).
probability that one’s partner will cooperate, might be better adapted to the
complexities of exchange in nature.10 If so, then the need to take such factors
into account has implications regarding the organization of human memory.
Information about one’s history of interaction with a particular person ought
to be “filed” with that person’s “identity” and activated quickly and ef-
fortlessly when an exchange-relevant situation with that person arises. When
the payoff matrix of the current interaction is such that “you” will lose a
great deal if “I” cheat you, then more of our past exchange history should
become accessible than for trivial exchanges. When you believe that I have
cheated you in a major way, there should be a flood of memories about your
past history with me: You must decide whether it is worth your while to
continue our relationship. In addition, this information will help you ne-
gotiate with me if you choose to continue our relationship: You can com-
municate how large a cost I have inflicted on you now and in the past (so
I can make amends if I want to continue the relationship), tell me how close
you came to ending our relationship (i.e., categorizing me as a permanent
defector), convince me that I have become increasingly untrustworthy,
threaten to injure my reputation by telling others about my past transgres-
sions, and so on.
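A rough data-structure sketch of this proposed memory organization follows: interaction records are "filed" under partner identity, and retrieval depth depends on the stakes of the current exchange. The names, record fields, and the stakes threshold are all hypothetical illustrations.

```python
# Sketch: exchange history filed under partner identity, with stakes-dependent
# retrieval. Field names and the threshold are hypothetical, not from the text.
from collections import defaultdict

history = defaultdict(list)   # partner identity -> list of past interactions

def record(partner, cost_to_me, benefit_to_me, they_cheated):
    history[partner].append(
        {"cost": cost_to_me, "benefit": benefit_to_me, "cheated": they_cheated}
    )

def recall(partner, current_stakes, trivial_threshold=10):
    """High-stakes exchanges (or suspected major cheating) should flood back
    the whole record; trivial ones need only recent behavior."""
    if current_stakes > trivial_threshold:
        return history[partner]        # full history: decide whether to continue
    return history[partner][-1:]       # trivial exchange: last interaction suffices

record("Smith", cost_to_me=5, benefit_to_me=8, they_cheated=False)
record("Smith", cost_to_me=6, benefit_to_me=0, they_cheated=True)
print(recall("Smith", current_stakes=50))   # both interactions retrieved
```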
The activation of past situations in which I have cheated you may, in
turn, activate other11 affective mechanisms that communicate cost/benefit
information: They may cause you to cry, turn your back on me, scream at
me, or hit me. The extent and nature of the overt aspects of your affective
reaction communicates to me your view of the extent of my injury of you:
whether you view it as serious enough to require restitution, how much is
required and how soon, whether you intend to cut me off if I defect again,
etc. Emotion communication can be viewed as one way individuals com-
municate costs, benefits, and behavioral intentions to others in negotiative
situations (see Cosmides 1983).
10 An algorithm was submitted to Axelrod's computer tournament that computed the conditional
probability that an interactant would cooperate based on whether that individual had cooperated
or defected in past interactions (REVISED DOWNING). It cooperated only when this con-
ditional probability was greater than 50% (random). Its downfall was that it did not discount
past behavior relative to present behavior. Therefore, it was exploited by certain programs that
became more likely to cheat in later interactions. In a sense, it failed because it assumed that
competitor programs had static “personalities.”
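A minimal sketch of the repair this footnote implies: estimate the partner's probability of cooperating while discounting older interactions, so a partner whose "personality" has changed is not over-trusted. The decay constant is an illustrative assumption.

```python
# Sketch: conditional cooperation estimate with the past discounted.
# REVISED DOWNING's flaw, per the footnote, was weighting all past moves
# equally; the decay constant here is an illustrative assumption.
def cooperation_estimate(history, decay=0.6):
    """history: partner's past moves, oldest first ('C' or 'D')."""
    if not history:
        return 0.5   # no information: assume chance
    weights = [decay ** age for age in range(len(history) - 1, -1, -1)]
    cooperated = sum(w for w, move in zip(weights, history) if move == 'C')
    return cooperated / sum(weights)

# A partner who cooperated early but cheats now is trusted less than an
# undiscounted average (3/5 = 0.6) would suggest:
print(cooperation_estimate(['C', 'C', 'C', 'D', 'D']))   # ~0.31
```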
11 We say "other" because we see no principled way of drawing a dividing line between emotion
and cognition. The flood of memories commonly experienced when a person is betrayed is as
much a part of one’s “emotional reaction” as turning red and attacking (see Tooby 1985; Tooby
and Cosmides, in preparation, b).
range of items they can exchange is necessarily more limited. For example,
chimpanzees recruit support from others in aggressive encounters and fre-
quently form long-term coalitional relationships (e.g., de Waal 1982). These
coalitions are social exchanges in which the exchanged “item” is mutual
aid in fights. A chimpanzee under attack bares its teeth, emits a fear scream,
looks at the individual from whom it wants support, and holds out its hand,
palm up, toward that individual. If the attacked chimpanzee receives the
requested support, its demeanor changes radically: Its hair stands on end,
it emits aggressive barks, and it charges its opponent-looking over its shoul-
der frequently to see if its supporter is still with it. If the chimpanzee does
not receive support, it continues cowering with hair flat and teeth bared,
screaming and holding out its hand to solicit support.
One also must be able to communicate dissatisfaction with a defector.
This also can be done without language, as is vividly illustrated by an in-
teraction between Puist and Luit, two chimpanzees in the Arnhem chim-
panzee colony in the Netherlands. Puist and Luit had a long-standing coa-
litional relationship: Puist had a long history of aiding Luit whenever he
attacked or was under attack, and Luit had a long history of extending similar
aid to Puist.
This happened once after Puist had supported Luit in chasing Nikkie [an-
other chimpanzee]. When Nikkie later displayed [aggressively] at Puist she
turned to Luit and held out her hand to him in search of support. Luit,
however, did nothing to protect her against Nikkie’s attack. Immediately
Puist turned on Luit, barking furiously, chased him across the enclosure
and even hit him. (de Waal 1982, p. 207)
individuals who cannot yet speak a language (small children), and distant
individuals beyond the reach of speech but not of sight.
2. Emotion signals can function like intersubjective metrics, permitting an
observer to scale the values of the person emitting the signal: A very
loud scream indicates a greater cost to the screamer than a moderately
loud scream. Signals like screams, smiles, and trembling are “analog”:
The louder the scream, the wider the smile, the more noticeable the trem-
ble-the more strongly the person can be presumed to feel about the
situation causing her to scream, smile, or tremble. Words do not provide
such convenient indicators of magnitude, precisely because they are ar-
bitrary and discrete symbols. Verbal expressions indicating size of cost
or benefit are more "digital": One might reasonably use "very much"
to describe the degree of one’s desire in both these sentences: “I want
very much for my child’s cancer to go into remission” and “I want that
apple very much”-yet in these two cases the degree of desire is, pre-
sumably, vastly different.
3. Emotion signals allow the incidental communication of values to potential
interactants. By observing “your” emotional reactions to various situ-
ations, even though they are not directed at me, “I” can learn what you
value, and hence what sort of exchange you are likely to agree to (see
Proposition 4). The verbal alternative is a process akin to writing to Santa
Claus: Reciting a long list stating one’s preference hierarchy, with pe-
riodic updates.12
However, the very properties that make a natural language a poor me-
dium for communicating intensity of affect make it an excellent system for
indicating “items” of exchange. The variety of “items” that can be ex-
changed is severely limited in a species that uses only emotion signals. Pri-
mates appear to exchange primarily acts of aggression, protection, food,
sex, alarm, and grooming. The use of language does not, of course, eliminate
the problem of ambiguous reference. In the absence of a shared referential
semantics, knowing what a word refers to is no less problematic than know-
ing what a gesture refers to.13 But a natural language permits a potentially
infinite number of arbitrary, discriminable symbols to be attached to a po-
tentially infinite number of discriminable classes or entities. As new situa-
tions arise, new words can be opportunistically created to refer to them.
Consequently, language permits a range and specificity of reference impos-
sible in the purely gestural systems of most primates.
12 Actually, a list stating that you want X, Y, and Z is not sufficient. Your preferences, including
items you already have, would have to be hierarchically ordered using some sort of interval
scale or indifference curves, because the salient issue is: What would you be willing to give up
in order to get X, Y, and Z?
13 This problem has prompted developmental psycholinguists to posit that children have innately
specified “hypotheses” about what sorts of entities are likely to have words attached to them.
When coupled with articulated models of the world, this hypothesis and model system amounts
to a referential semantics (Gleitman and Wanner 1982).
14 At least for Homo sapiens sapiens. The Homo erectus tool kit appears surprisingly constant
over a wide range of different environments, from Asia to Africa, for over 1 million years
(Pilbeam, pers. comm.). Of course, this observation applies only to tools that are recognizable
as such in the fossil record. For example, a branch used as a ladder would not show up in the
fossil record.
15 The Arnhem chimpanzees discovered the ladder trick when one screaming chimpanzee,
fleeing from a very public attack, bounded up a broken branch that happened to be resting
against a tree.
16 Because the incidental communication of cost/benefit information is important (see Propo-
sition 4), one might predict that, all else equal, individuals are more likely to emit emotion
signals in the presence (or suspected presence) of potential reciprocators than when they are
alone. Similarly, they should be more likely to suppress emotion signals in the presence of
potential aggressors-value information helps aggressors; it tells them what they should threaten
to kill, destroy, or prevent.
17 Conditioned stimuli linked to events producing large costs or benefits should also recruit
attention, e.g., a fire engine siren on your street.
referential semantics. Because humans are tool users, planners, and cooper-
ators who can invent many alternative means for realizing a particular goal,
many specific items of human preference will differ from culture to culture
in ways that depend on that culture’s technology, ecology, social system,
and history. This does not mean, however, that desires are random. Evo-
lutionary theory is rife with hypotheses regarding what states of affairs the
typical human is likely to prefer (see Cosmides 1985, pp. 165-167). A cog-
nitive “list” of typical human preferences would still be inadequate, how-
ever, because there are complex interactions between competing prefer-
ences that evolutionary theory speaks to (e.g., what do you do if your spouse
beats you, but he is your only source of income?). Therefore, the algorithms
that guide our “marketing research” must include cost/benefit analysis pro-
cedures that allow one to take such complexities into account in modeling
other people’s values.
Although researchers from Bartlett (1932) to Schank and Abelson (1977)
have posited that pragmatic inference is guided by “schemas,” “frames,”
or "scripts"-domain-specific inference procedures-they have provided
little insight into their specific content. Using evolutionary biology as a
guide, the system so far proposed (default hypotheses about typical human
preference hierarchies plus procedures for combining factors) provides a
starting place for elucidating the content of “motivation scripts”-algo-
rithms that guide pragmatic inference about human preference and
motivation.
Motivation scripts should be powerful and sophisticated, for the ability
to model other people’s values is useful in a wide variety of evolutionarily
important social contexts, from social exchange to aggressive threat to mate
choice to parenting. They should prove to be strong organizational factors
in the construction and reconstruction of memories. Details that are normally
considered insignificant should be more easily recalled when activated mo-
tivation scripts allow them to be perceived as causally linked to biologically
significant variables.18 Veridical recall of stories that violate the assumptions
about human preference instantiated in our motivation scripts should be
difficult (as, indeed, it is: e.g., Bartlett 1932). Motivation scripts should
guide the reconstruction of such stories during recall, distorting the original
story in ways that make motivational sense. Implicit motivational assump-
tions are so pervasive in human communication that motivation scripts will
probably be an essential component of any artificial intelligence program
that can usefully converse in a natural language.
An emotion signal should not only recruit attention and activate one’s
own motivation scripts, it should arouse one’s curiosity. One would expect
increased tendencies to observe the emotion-arousing event and ask ques-
18 Owens, Bower, and Black (1979) present evidence of this kind. Interestingly, the most bi-
ologically significant motivational theme (an unwanted pregnancy) elicited the highest recall of
mundane details about a character’s day.
tions about it. Crowds gather around fights, children follow fire trucks to
the scene of a fire, onlookers bombard police with questions at the scene
of a crime. Journalists make a profession of gathering information about the
values and behavior of people who have a large (real or perceived) impact
on our lives. Motivation scripts may guide inferences about what exactly a
given emotion signal refers to, but it can do this only if it is fed concrete
information. The concrete information one acquires by witnessing an emo-
tion-arousing event fills in parameter values in motivation scripts, deter-
mining which data structures and inference procedures are appropriate in
decoding the reacting person's values.19
Acquiring information about the values of potential interactants is, in
itself, valuable. Decoding the value systems of potential interactants is there-
fore likely to become a cooperative enterprise in itself. We even have a name
for such exchanges of information and "analysis": gossip. Gossip is usually
about situations that cause emotional reactions in potential interactants-
exactly the kind of situations that provide a window into someone’s values.
The more biologically significant the information, the “hotter” the gossip:
Events involving sex, pregnancy, fights, windfalls, and death should be par-
ticularly “hot” topics, especially when they signal a change in someone’s
needs, values, or capacity to confer benefits. Hot gossip should be partic-
ularly interesting and easily remembered. Gossip about people who can have
a large impact on one’s well-being should be especially interesting; gossip
about people one does not know should be comparatively boring. Similarly,
cues or indicators of the character of potential interactants, including their
disposition to cheat or defect, are themselves extremely valuable informa-
tion, differentially attended to and exchanged. Reputation is an important
kind of social information, and it plays a significant role in the social life of
any stable group of humans.
The learning mechanisms that guide such “marketing research” should
produce person-specific models of the preferences and motivations of po-
tential and actual interactants. General motivation scripts help build person-
specific preference models; these become more elaborated the more contact
one has with that particular person. As this happens, inferences drawn from
a person-specific model will generate more accurate interpretations of that
person’s behavior and emotion signals than inferences drawn from the gen-
eral motivation scripts.
It would be useless for information about the preferences of different
individuals to be stored together in a semantic network, filed under “pref-
erences” or “values.” Like information about an individual’s history of
reciprocation, a model of an individual’s preferences and motivations should
19 There are, of course, other good reasons for being curious about biologically significant
events, e.g., you yourself might be confronted with the same situation at some point. However,
when such events impact potential interactants, they should be especially interesting: A fist
fight in your academic department provokes more interest than one among strangers in another
city.
be “filed” under his or her identity. When the opportunity to acquire more
preference information about an individual arises, the model appropriate to
that individual must be easily retrieved, not just a model of average pref-
erences. “Averaging” the fact that one person prefers Z to W but another
person prefers W to Z into one model of “average” preference does not
enhance one's ability to engage in social exchange.20 In contrast, learning
that “Smith values W more than X more than Y more than Z” and that
"Jones values Z more than X more than Y more than W" increases your
ability to make offers that benefit you given the limits imposed by what Smith
or Jones are willing to accept. Offering W to Smith is more likely to induce
him to give you Y than offering him Z; exactly the reverse is true of Jones.
If you value Z more than W, you are better off making Smith an offer; if
you value W more than Z, then strike a deal with Jones. The proper decision
can be made only if person-specific preference information can be conven-
iently retrieved.
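The Smith/Jones example can be rendered as a short sketch. The preference orderings follow the text; converting ranks into numeric values, and the helper names, are illustrative assumptions.

```python
# Sketch of person-specific preference models (the Smith/Jones example).
# Orderings follow the text; numeric ranks are an illustrative device.
preferences = {
    "Smith": ["W", "X", "Y", "Z"],   # W valued most, Z least
    "Jones": ["Z", "X", "Y", "W"],
}

def value(person, item):
    """Higher number = more valued by that person."""
    ranking = preferences[person]
    return len(ranking) - ranking.index(item)

def acceptable(person, item_given, item_received):
    """A partner accepts only if what they receive outranks what they give up."""
    return value(person, item_received) > value(person, item_given)

# Offering W to Smith for his Y should work; offering him Z should not:
print(acceptable("Smith", item_given="Y", item_received="W"))   # True
print(acceptable("Smith", item_given="Y", item_received="Z"))   # False
# Exactly the reverse holds for Jones:
print(acceptable("Jones", item_given="Y", item_received="Z"))   # True
```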
20 Although noting that most people in your culture prefer W to Z might enhance your ability
to recognize and participate in social exchanges with new interactants. One might expect such
culture-specific information to be incorporated into the “typical human” motivation scripts.
For example, a Mr. Michael Pastore of Dallas recently made the following
comment in an interview in the Wall Street Journal:
“I never pay for dinner with anything other than my [American Express]
Platinum Card when I’m on a first date,” says the 30-year-old seafood im-
porter, flashing his plastic sliver inside the glitzy Acapulco Bar. “Women
are really attracted to the success that my card represents.” (The Wall
Street Journal, April 17, 1985, p. 35)
Mr. Pastore perceives a clear causal link between his “plastic sliver” and
a biologically significant variable: the ability to attract sexual partners. His
perception that a Platinum Card can attract sexual partners is based in turn
on the perception that owning one is causally linked to a variable that is
biologically significant to females in choosing male sexual partners-the
ability to accrue resources.21 Knowing this, one can infer that Mr. Pastore
perceives owning an American Express Platinum Card as a benefit, and that
if he did not own one, he might well be willing to give up other items in
order to acquire one. It is a suitable item for social exchange.
The prediction, then, is that the algorithms regulating social exchange
in humans will be item-independent. Furthermore, they will operate on cost-
benefit representations of the interaction. As we will argue in the next sec-
tion, any interaction that is interpreted as having a particular, characteristic,
cost-benefit structure will be categorized as an instance of social exchange
and will call up procedural knowledge specialized for reasoning about this
domain.
21 In fact, cross-cultural evidence is accumulating that indicates that a potential mate's ability
to accrue resources is more important to women than to men, just as evolutionary theory predicts
(Buss 1987).
benefit to you (B(you)) if, and only if, it increases your utility above your
zero level baseline.22 Assuming you value having a million dollars (Q) more
than you value not having a million dollars (not-Q), then Q-having a million
dollars-constitutes a benefit to you. You will not accept this offer unless,
at the time of acceptance, you believe that Q constitutes a benefit to you.
Using terms defined with respect to your values (rather than mine), we can
rephrase my offer as: “If P then B(you).”
An item is a cost to you (C(you)) if, and only if, it decreases your utility
below your zero level baseline.23 In this offer, P-walking my dog-is the
item that I have made my offer of B(you) contingent upon. Usually, P will
be something that you would not do in the absence of an inducement; other-
wise, I would be incurring a cost (giving up Q, the million dollars) by making
the offer (if you were going to walk my dog anyway it would be foolish of
me to offer you the million dollars).24 If P is not something you expected to
do in the absence of my offer, then, in your value system, not-P (not walking
my dog) is part of your zero level baseline, O(you). This means that if not-
P comes to pass, you will not have moved from your zero utility baseline,
and you will be no worse off than if my offer had never been made. If we
posit that my dog is ugly and vicious, and that walking him would embarrass
you, endanger your health, and assault your aesthetic sensibilities, then P
(walking my dog) decreases your utility and is therefore a cost to you,
C(you).
Stated in terms of your value system, my offer can now be rephrased
as “If C(you) then B(you).” But other conditions must hold before you will
accept my offer. There is a constraint on the magnitudes (absolute values)
of B and C, namely, B(you) > C(you), or, equivalently, B(you) minus C(you)
> 0. For you to accept my offer, a million dollars must be more of a benefit
to you than walking my ugly dog is a cost. If this is not the case there would
be no point in your entering into the contract; it would not increase your
utility. The greater the magnitude of B minus C, the more attractive the
contract will appear. A contract that reversed this constraint (such that
C >> B) sounds perverse. For example, the following offer strikes people
as foolish: “If you break your arm then I’ll give you a penny.” Unsurpris-
ingly, Fillenbaum (1976) found that subjects consider such offers “extraor-
dinary” 75% of the time, compared to a 13% rate for offers that fit the
constraints described above.
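A minimal rendering of this acceptance constraint, with hypothetical utility magnitudes standing in for the dog-walking and broken-arm examples:

```python
# Sketch of the acceptance constraint: accept only if, in your own value
# system, B(you) - C(you) > 0. The utility magnitudes are hypothetical.
def accept_offer(benefit_to_you, cost_to_you):
    """Both arguments are magnitudes relative to your zero-level baseline."""
    return benefit_to_you - cost_to_you > 0

print(accept_offer(benefit_to_you=1_000_000, cost_to_you=200))   # True: walk the ugly dog
print(accept_offer(benefit_to_you=0.01, cost_to_you=5_000))      # False: a broken arm for a penny
```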
22 Presumably there are costs and benefits associated with any action. More precisely, B(you)
is a net benefit-the benefit to you of receiving $1 million is greater than the cost to you of
receiving $1 million.
23 Again, this is a net cost-the cost to you of walking my dog is greater than the benefit to
you of walking my dog.
24 P does not have to be a C(you) for you to accept my contract, although I must believe that
it is a C(you) in order to offer the contract in the first place. You could be trying to defraud
me into offering this contract by dissembling about your real intentions. Perhaps you have been
planning all along to walk my dog, but led me to believe that you are not planning to walk it
so I would make you an offer. See below, on baseline fraud.
25 Not-Q being part of my zero level baseline is not a necessary condition for my making an
offer, but it is necessary that you believe it is part of my zero baseline if you are to accept my
offer. Unknown to you, I might intend to give you $1,000,000 regardless, but want to get as
much as I can in return.
Table 1. Cost/Benefit Translation of My Offer into Your Value System and Mine
Baseline Fraud
There is a joke that runs like this:
A man from out of town walks up to a woman and says “If you sleep with
me three times I'll give you $15,000." She is hard up for cash, so she agrees.
After each session he pays her the money he promised. The woman decides
this is an easy way to make money, so after she has been paid the full
$15,000 she asks him if he would like to continue the arrangement. He says
he can’t because he must return home the next day. She asks “Where’s
home?" "Oshkosh," he replies. "Oh!" she says, "That's where my mother
lives!” He answers, “Yes, I know. She gave me $15,000 to deliver to you.”
The man in the joke has defrauded the woman by concealing information
about their zero level baselines.
A contract has been sincerely offered and sincerely accepted when each
party believes that the B > C constraint holds for the other, in this case,
when the contract has the following cost/benefit structure:
Man’s offer: “If you sleep with me three times then I’ll give you $15,000”
“If P then Q”
Woman’s point of view: “If C(woman) then B(woman)”
Man’s point of view: “If B(man) then C(man)”
[Figure: diagram relating the values labeled b and s to the price, depicting the "room" for a profit margin between them.]
The woman in the joke assumed that the man’s offer fit these requirements,
that he offered a sincere contract. However, the man knew that if the woman
knew what he knew about her baseline and his, they would both see the
structure of the contract as:
“If P then Q”
Woman’s point of view: “If C(woman) then O(woman)”
Table 2. Sincere Social Contracts: Cost/Benefit Relations When One Party is Sincere and
That Party Believes the Other Party is Also Sincere
acceptor's value system. The third column shows the contract's cost/benefit
structure in terms of the sincere acceptor’s value system; the fourth column
shows what the sincere acceptor believes the contract’s structure is in terms
of the offerer’s value system. The table shows that the sincere offerer and
the sincere acceptor view the contract’s cost/benefit structure in exactly the
same way.
Table 3 shows what conditions hold when one person offers or accepts
a contract sincerely, but is the victim of baseline fraud. The sincere person
believes the contract fits the conditions specified in Table 2. However, the
defrauder believes the contract fits the criteria specified in Table 3. Fur-
thermore, if the sincere person were to find out that he or she had been
tricked concerning baseline information, that person would share the de-
frauder’s view of the contract’s cost/benefit structure. An analysis of base-
line fraud is useful because it serves to distinguish the conditions that must
Table 3. Baseline Fraud: Cost/Benefit Relations When a Sincere Party Makes a Social
Contract with an Individual Perpetrating a Baseline Fraud
I try to defraud you; you accept sincerely: if you knew what I knew, we would both believe
You try to defraud me; I offer sincerely: if I knew what you knew, we would both believe
hold for a contract to be offered or accepted, from those that need not hold,
but usually do.
That people represent actions as costs and benefits with reference to a
zero point based on their current expectations is a psychological prediction
that is not strictly necessitated by natural selection theory in its simplest
form. However, reciprocation theory does require that the individual realize
a net increase in its fitness from participation; this could, in principle, be
computed using an ordinal preference scale without reference to a zero point.
We use this system because we believe it provides a powerful means
by which the individual can distinguish exchanges from other kinds of in-
tercontingent behavior. For example, most people would probably recognize
the utterance “If you call the police, then I’ll shoot you” as a threat. Yet
1) it has the same linguistic form as a contract-"If P then Q," and 2) like
the person who accepts a sincere contract, the person threatened will realize
an increase in fitness by obeying the threat instead of defying it.
However, the hypothesis that humans represent events as costs and
benefits with respect to a zero utility point based on their current baseline
expectations provides a straightforward means of distinguishing threats from
contracts. A contract has the form “If C(you) then B(you).” However a
threat has the form “If O(you) then C(you).” In the absence of the threat,
the hearer in the example intends to call the police; it is part of his or her
zero level baseline. Being shot is not in the hearer’s plans; it constitutes a
cost. This representational system allows the principled differentiation of
the various forms of intercontingent behavior. We hypothesize that recog-
nition of the form of intercontingent behavior at hand (social exchange,
threat, etc.) automatically activates the set of rules appropriate for reasoning
about it. In the next section, we sketch the rules appropriate to social
exchange.
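This representational distinction can be sketched as a classifier over cost/benefit forms. Only the contract and threat forms come from the text; the function name and the residual category are illustrative assumptions.

```python
# Sketch: classify "If X then Y" by the hearer's cost/benefit representation
# of X and Y relative to the zero-level baseline. 'B' = benefit, 'C' = cost,
# 'O' = already part of the hearer's baseline expectations.
def classify(antecedent, consequent):
    if antecedent == "C" and consequent == "B":
        return "social contract"   # "If you walk my dog, I'll give you $1M"
    if antecedent == "O" and consequent == "C":
        return "threat"            # "If you call the police, I'll shoot you"
    return "other intercontingent behavior"

print(classify("C", "B"))   # social contract
print(classify("O", "C"))   # threat: the hearer already intended the antecedent
```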
you “accept” my offer? Grice (1957, 1967) has provided a convenient struc-
ture for understanding the meaning of speech acts.
In committing a speech act,
Using this structure and the cost/benefit analysis above, when an actor, “I”,
offers a contract by saying, “If you give me P then I’ll give you Q,” the
actor means:
26 Given that hominids probably participated in social exchange long before they had language,
one would expect the act of accepting a benefit to frequently be interpreted as implicit agreement
to a social contract-as a signal that the acceptor feels obligated to reciprocate in the future.
(Of course, one would expect the donor to jump to this interpretation more readily than the
acceptor!) This view is formalized in British and U.S. contract law-a contract is invalid unless
some “consideration” has changed hands-even a symbolic $1 will suffice.
Table 4. How Do You and I Make Out When One of Us Cheats the Other?
accepted item P from you, but have not given you item Q. This means you
have paid C(you) (item P), but have not received B(you) (item Q). Your
payoff: C(you). My payoff: B(me). These relations are summarized in Table
4.
As mentioned in Proposition 5, social contract algorithms in humans
must represent items of exchange as costs and benefits to the participants,
and operate on those representations. The detection of cheating depends on
modeling the exchange’s cost/benefit structure from the point of view of
one’s partner, as well as from one’s own point of view. Thus, for any given
exchange, two descriptions of each item must be computed by the social
contract algorithms. For a sincere contract, “If you give me P, then I’ll give
you Q,” item P should be described as both B(me) and C(you), and item Q
should be described as both C(me) and B(you) (see Table 4). The cost/benefit
structure to oneself should be easily recoverable, even if the contract is
phrased in terms of the value system of one's exchange partner.27 There is
a structural parallel to transformational grammars as they were initially con-
ceptualized: The “surface structure” is the way the offer is actually phrased;
the deep structure is a cost/benefit description of the surface structure from
the point of view of each participant. The deep structure of the offer incor-
porates the information shown in Table 2 (or 3, if one party is “baseline
defrauding”). A prediction of this computational analysis is that these cost/
benefit structures are the descriptions from which participants construct
paraphrases and reconstruct the course of the interaction from memory.
Inference procedures for detecting cheaters must operate on a cost/
benefit description of the contract from the potential cheater’s point of view.
These procedures must allow one to quickly and effectively infer that in-
dividual X has cheated when one sees that X has accepted B(X) but not paid
C(X). When a transaction has not yet been completed, or when one’s in-
formation about a transaction is incomplete, “look for cheaters” procedures
should lead one to (see the code sketch after this list):
1. Ignore individual X if X has not accepted B(X).
2. Ignore individual X if X has paid C(X).
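A minimal sketch of these rules in code, with hypothetical argument names; True, False, and None stand for confirmed, disconfirmed, and missing information respectively:

```python
# Sketch of the "look for cheaters" procedure: X has cheated iff X accepted
# B(X) but did not pay C(X); by rules 1 and 2 above, X can otherwise be
# ignored or, with incomplete information, investigated.
def cheater_status(accepted_benefit, paid_cost):
    """Arguments may be True, False, or None (information incomplete)."""
    if accepted_benefit is False or paid_cost is True:
        return "ignore"         # rules 1 and 2: X cannot have cheated
    if accepted_benefit and paid_cost is False:
        return "cheater"        # took B(X) without paying C(X)
    return "investigate"        # incomplete information: keep watching X

print(cheater_status(accepted_benefit=True, paid_cost=False))   # cheater
print(cheater_status(accepted_benefit=False, paid_cost=None))   # ignore
print(cheater_status(accepted_benefit=True, paid_cost=None))    # investigate
```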
27 However, one might predict that an offer phrased in terms of the potential acceptor's value
system might sound more attractive, indicating that the offerer really understands (has a good
model of) what the potential acceptor wants.
[Figure 3, panel a: an abstract problem.]
"If a person has a 'D' rating, then his documents must be marked code '3'." (If P then Q)
You suspect the secretary you replaced did not categorize the students' documents correctly. The cards below have information about the documents of four people who are enrolled at this high school. Each card represents one person. One side of a card tells a person's letter rating and the other side of the card tells that person's number code.
Indicate only those card(s) you definitely need to turn over to see if the documents of any of these people violate this rule.
[Figure 3, panel b: the Drinking Age Problem.]
Indicate only those card(s) you definitely need to turn over to see if any of these people are breaking this law.
[Figure 3, panel c: the abstract structure of a social contract problem.]
Rule 1 - Standard Social Contract (STD-SC): "If you take the benefit, then you pay the cost." (If P then Q)
Rule 2 - Switched Social Contract (SWC-SC): "If you pay the cost, then you take the benefit." (If P then Q)
The cards below have information about four people. Each card represents one person. One side of a card tells whether a person accepted the benefit and the other side of the card tells whether that person paid the cost.
Indicate only those card(s) you definitely need to turn over to see if any of these people are breaking this law.
Cards: [Benefit Accepted] [Benefit NOT Accepted] [Cost Paid] [Cost NOT Paid]
Rule 1 - STD-SC: (P) (not-P) (Q) (not-Q)
Rule 2 - SWC-SC: (Q) (not-Q) (P) (not-P)
FIGURE 3. Content effects on the Wason selection task. The logical structures of these three Wason selection tasks are identical;
they differ only in propositional content. Regardless of content, the logical solution to all three problems is the same: to see if the
rule has been violated, choose the P card (to see if it has a not-Q on the back), and choose the not-Q card (to see if it has a P on
the back). This is because only the combination on the same card of a true antecedent (P) with a false consequent (not-Q) can
violate-and thereby falsify-a conditional rule. Therefore, the logically correct response is P and not-Q.
Only 4-25% of college students choose both these cards for abstract problems (see panel a). The most common responses are
P and Q, or P: subjects rarely see the relevance of the not-Q card. Yet about 75% see its relevance for the Drinking Age Problem
(panel b), a familiar "standard" social contract (see panel c, Rule 1), and choose both P and not-Q. Poor performance on the
abstract problem is not due solely to the use of “abstract” symbols; similar rates of responding are usually found for a number of
more familiar, “thematic” conditionals: relations between food and drink, cities and means of transportation, schools and fields of
study.
Panel c shows the abstract structure of a social contract problem. A “look for cheaters” procedure would lead a subject to
choose the “benefit accepted” card and the “cost not paid” card, regardless of which logical categories they represent. For a
standard social contract, like Rule 1 or the Drinking Age Problem, the correct social contract response, P and not-Q, converges
with formal logic. However, for a switched social contract, like Rule 2, the correct social contract response, not-P and Q, diverges
from formal logic. According to social contract theory, the Drinking Age Problem reliably elicits logically correct responses because
it is a standard social contract, and not because it is familiar. Note: The logical categories (Ps and Qs) marked on the rules and
cards (*) are here only for the reader's benefit; they never appear on problems given to subjects.
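The card-to-category mapping in panel c can also be expressed as a short sketch, showing why a cheater-detection choice coincides with formal logic on a standard contract but diverges from it on a switched one. The dictionary layout is an illustrative device.

```python
# Sketch of Figure 3, panel c: the same two cards interest a cheater
# detector under both rules, but their logical categories differ.
logical_category = {
    "standard": {"benefit accepted": "P", "benefit not accepted": "not-P",
                 "cost paid": "Q", "cost not paid": "not-Q"},
    "switched": {"benefit accepted": "Q", "benefit not accepted": "not-Q",
                 "cost paid": "P", "cost not paid": "not-P"},
}

def look_for_cheaters(rule_type):
    """Choose the cards that could reveal a cheater, blind to logical category."""
    chosen = ["benefit accepted", "cost not paid"]
    return [logical_category[rule_type][card] for card in chosen]

print(look_for_cheaters("standard"))   # ['P', 'not-Q']: matches formal logic
print(look_for_cheaters("switched"))   # ['Q', 'not-P']: diverges from formal logic
```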
responses (for review, see Cosmides 1985). This effect is known as the "con-
tent effect” on the Wason selection task.
When the content effect on the Wason selection task was first observed,
a number of researchers tried to account for it in terms of differential ex-
perience. Although it is difficult to do justice to the richness of these hy-
potheses briefly, fundamentally they proposed that familiarity (differential
experience) with the rule being tested increases the probability that a subject
will produce the logically correct response (however, different theorists pro-
posed different mechanisms to account for this phenomenon; see, e.g.,
Manktelow and Evans 1979; Griggs and Cox 1982; Johnson-Laird 1982; Pol-
lard 1982; Wason 1983). The problem with these hypotheses was that some
familiar content seemed to produce the content effect, whereas other familiar
content did not.
Cosmides (1985) reinterpreted the existing literature by pointing out that
virtually all of those few familiar rules that did produce a robust and repl-
icable content effect happened to have the cost/benefit structure of a social
contract, as described in Part 3: They could be translated as “If you take
the benefit, then you pay the cost.” Moreover, in reasoning about these
rules, subjects behaved as if they were “looking for cheaters”: They in-
vestigated persons who had accepted benefits (to see if they had failed to
pay the required cost) and persons who had failed to pay the required cost
(to see if they had illicitly absconded with the benefit). For “standard” social
contract rules, such as the ones that were tested, the “benefit accepted”
card and the “cost not paid” card happen to correspond to the logical cate-
gories P and not-Q, respectively (see Figure 3, panel C, Rule 1). This means
that a subject who is looking for cheaters will choose the two cards that
correspond to the logically correct response, P and not-Q, by coincidence,
because of the accidental correspondence between the logical and social
contract categories. This accounts for why subjects appeared to reason log-
ically about standard social contract rules, but not about familiar rules that
were not social contracts.
However, all the standard social contract rules tested were familiar. To
experimentally determine whether the content effect is due to the hypoth-
esized “look for cheaters” procedure, rather than to familiarity, Cosmides
tested unfamiliar rules that either did or did not correspond to social
contracts.
Cosmides took highly unfamiliar rules, such as “If a man eats cassava
root, then he has a tattoo on his face,” and embedded them in two different
contexts. One context transformed the rule into a standard social contract,
by telling the subject that cassava root was a rationed benefit and that having
a facial tattoo was a cost to be paid. The other context made the rule describe
some (non-social contract) aspect of the world (e.g., men with tattoos live
in a different place than men without tattoos; cassava root grows only where
the men with tattoos live; so maybe men are simply eating foods that are
most available to them). If the content effect is due to familiarity, then both
problems should yield low levels of logically correct responses, because both
test unfamiliar rules. However, if the content effect is due to the presence
of a “look for cheaters” procedure, then the unfamiliar social contract prob-
lem should yield high levels of logically correct responses, because for stan-
dard social contracts, the “benefit accepted” card and the “cost not paid”
card happen to correspond to P and not-Q, the logically correct response.
This is, in fact, what happened: while only 23% of subjects chose P and not-
Q for the unfamiliar descriptive problems, 73% made this response to the
unfamiliar standard social contracts. Moreover, the unfamiliar social con-
tract problems elicited more logically correct responses than familiar de-
scriptive problems did (the social contract effect was about 50% larger than
the effect that familiarity had on descriptive problems).
These experiments eliminated the hypothesis that familiarity alone can
account for the content effect on the Wason selection task. Furthermore,
they showed that when a rule has the cost/benefit structure of a social con-
tract, subjects are very good at “looking for cheaters,” even when the sit-
uation they are reasoning about is unfamiliar and culturally alien.
To eliminate the hypothesis that social contract content somehow en-
hances logical reasoning, Cosmides next tested unfamiliar social contracts
that were switched: These are rules that translate to, "If you pay the cost,
then you take the benefit.” If social contract content causes subjects to
reason logically, then they would choose the logically correct response, P
and not-Q, for switched social contracts, just as they did for standard ones,
even though these cards correspond to individuals who could not possibly
have cheated (see Fig. 3, panel C, Rule 2). However, if reasoning on social
contract rules is guided by a “look for cheaters” procedure, then subjects
would choose not-P and Q, a response completely at variance with formal
logic. This is because for a switched social contract, the “cost not paid”
card corresponds to the logical category not-P, and the “benefit accepted”
card corresponds to the logical category Q (see Fig. 3, panel C, Rule 2). A
“look for cheaters” procedure should be blind to logical category: It should
cause the subject to choose the "benefit accepted" card and the "cost not
paid” card, regardless of their logical category, because these are the cards
that represent potential cheaters.
This prediction was also confirmed. The switched social contracts elic-
ited the "look for cheaters" response, not-P and Q, from 71% of subjects,
even though this response is illogical according to the propositional calculus.
In comparison, the unfamiliar descriptive problems (i.e., those not depicting
a social contract) elicited this illogical response from only 2% of subjects,
and elicited the logically falsifying response from 14.5%.
In the above experiments, social contract rules were pitted against de-
scriptive rules. However, in a further set of experiments, social contract
rules were pitted against “permission” rules that lacked the cost-benefit
structure of a social contract. Cheng and Holyoak (1985) had proposed that
the modern individual's social experience of permissions causes people
CONCLUSION
processing problems, humans in all cultures and of virtually all ages both
understand and successfully participate in such social exchanges with ease.
The costs and benefits of participation in social exchange have also consti-
tuted an intense selection pressure over a significant fraction of hominid
evolutionary history. It is implausible that natural selection would have left
learning in such a domain to the vagaries of personal experience processed
through some kind of general-purpose learning mechanism. The evolutionary
expectation that humans do indeed have adaptively structured social exchange
algorithms receives substantial empirical support from experimental
investigations of human reasoning that 1) have falsified the competing
domain-general theories of reasoning performance on the Wason selection
task and 2) have confirmed the presence of the evolutionarily predicted
complex of design features that are diagnostic of adaptation in this
domain. We argue that this study of social exchange provides an example
of how evolutionary and cognitive techniques can be combined to elucidate
aspects of human culture and the psychological mechanisms that underlie
them.
It is worth noting that the kinds of “domain-general” theories that were
falsified as explanations for performance on the Wason selection task are
the same kinds of theories that are more generally advanced to account for
the human “capacity” for culture (Sahlins 1976; Geertz 1973). Because no
imaginable state of human affairs is forbidden by such domain-general views
of culture, there is little in the way of cultural phenomena that is inconsistent
with such models, and consequently very little that is predicted or illumi-
nated by them. The view that cultures are arbitrary symbolic productions
is widely and justly criticized by advocates of evolutionary approaches. But
by using evolutionary psychology, it is possible to go further and meet tra-
ditional anthropological theories of culture on their own ground.
As the process of identifying and mapping these innate mechanisms
proceeds, mechanism by mechanism, each of the differing domains of culture
can be analyzed by reference to the highly structured information
processing algorithms that govern its expression. The “interpretation of cul-
tures” can be changed from a post hoc literary exercise about arbitrary
symbolic productions into a principled investigation grounded in the evolved
psychology of humans and its systematic impact on the cultures it produces.
We are very indebted to Jerome Barkow, Martin Daly, Irven DeVore, Roger Shepard, Donald
Symons, and Margo Wilson for many stimulating discussions of the issues explored in this
paper. We are especially grateful to Jerome Barkow, Nicholas Blurton Jones, Michael McGuire,
and an anonymous reviewer for their comments on the various drafts, and to Jason Banfield
and Lisa Bork for their help with the manuscript. We would also like to thank Jeff Wine and
Roger Shepard (and NSF grant no. BNS 85-11685 to Roger Shepard) for their support.
value getting P from you more than I value keeping Q,” so this need not
be added as a separate statement.) Clause 2 is an implication of my offer
even if the sincere cost/benefit requirements do not hold. After all, even
defrauders intend their offers to be thought sincere.
3. “. . . and I intend that you realize . . .” In other words, I did not make
the offer accidentally. My having made the offer is a consequence of the
activation of my social contract algorithms (my belief that the contract would
result in a net benefit to me is a necessary condition for my making the offer;
see discussion of the meaning of “cause” in clause 5). If my social contract
algorithms had not been activated, I would not have made the utterance.
This is presumed for a contract that is offered verbally; there are virtually
no circumstances under which one can accidentally utter a sentence. How-
ever, for nonlinguistic primate species, one can imagine scenarios in which
“gestures” are accidentally produced. For example, in the course of a fight,
a chimp is chased up a tree. The tree limb supporting him breaks, causing
him to fall with his arm stretched out. An outstretched arm in the context
of a fight is usually a request for support. However, this gesture was made
accidentally rather than intentionally; it was not made as a consequence of
the chimp’s social contract algorithms having been activated. Therefore,
“. . . I intend that you realize . . .” is not part of the gesture’s meaning.
The fact that it was “accidentally” produced robs the “gesture” of its mean-
ing as a request for support.
5. My belief that you have given me P cannot cause me to give you Q in
just any old way. For example, the following is not the sense of causation
meant:
Let’s say you own a priceless statue, and I have some very compromising
pictures of you that you want destroyed. I keep these pictures in my car.
I make the offer “If you give me the statue (P), then I’ll destroy the pictures
(Q).” You agree, unaware that I have no intention of destroying the pictures
because I want to continue to enrich myself by blackmailing you. We ar-
range for you to leave the statue at a drop point. I retrieve it, and my belief
that you have given me this priceless statue makes me so agitated and
nervous that I have an accident, and the car blows up, destroying the pic-
tures. I have, in fact, done Q, and my belief that you gave me P caused
me to give you what you wanted (Q), but not in the right sense of “cause.”
(e.g., Nozick 1981, p. 369)
The correct notion of “cause” refers to the psychological realization
of (the algorithm instantiating) this computational theory and the fact that
it is guiding my behavior. My belief that you have given me P fills in the
parameter value in the algorithm; this triggers the set of procedures within
the algorithm corresponding to the contract’s conditions of satisfaction. Trig-
gering these procedures results in my giving you Q. This is the same sense
of “cause” as in a computer program: the information that P can cause a
computer to do something by virtue of that information’s functional relation
to various of its procedures. Let’s say I have written a program in Basic
instantiating all the conditions for making a social contract. The program
then offers, “If you type ‘P’ into me, then I’ll print ‘Q’ for you,” and I
accept. Part of the program would involve the computer waiting for me to
fulfill my obligation, and this part may be written thus:
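A minimal sketch of such a fragment, assuming a line-numbered BASIC dialect (the line numbers, the prompt string, and the variable name A$ are illustrative, not a canonical formulation):

100 REM The contract has been offered and accepted.
110 REM Wait for the other party to fulfill condition P:
120 INPUT "Your move"; A$
130 IF A$ <> "P" THEN GOTO 120
140 REM The belief "you have given me P" is now instantiated;
150 REM it triggers the procedure that satisfies my obligation, Q:
160 PRINT "Q"

Here the arrival of P causes the printing of Q only by virtue of P’s functional relation to the program’s procedures, which is precisely the sense of “cause” intended above.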
REFERENCES
Axelrod, R. The Evolution of Cooperation. New York: Basic Books, 1984.
-, and Hamilton, W.D. The evolution of cooperation. Science 211: 1390-1396, 1981.
Bahrick, H.P., Bahrick, P.O., and Wittlinger, R.P. Fifty years of memory for names and faces:
A cross-sectional approach. Journal of Experimental Psychology 104: 54-75, 1975.
Bartlett, F.C. Remembering: A Study in Experimental and Social Psychology. Cambridge,
U.K.: Cambridge University Press, 1932.
Buss, D. Sex differences in human mate selection criteria: An evolutionary perspective. In
Sociobiology and Psychology, C. Crawford, D. Krebs, and M. Smith (Eds.). Hillsdale,
N.J.: Erlbaum, 1987.
Carey, S., and Diamond, R. Maturational determination of the developmental course of face
encoding. In Biological Studies of Mental Processes, D. Caplan (Ed.). Cambridge,
Mass.: The MIT Press, 1980.
Cheng, P., and Holyoak, K. Pragmatic reasoning schemas. Cognitive Psychology 17: 391-416,
1985.
Chomsky, N. Reflections on Language. New York: Random House, 1975.
Cosmides, L. Invariances in the acoustic expression of emotion during speech. Journal of
Experimental Psychology 9: 864-881, 1983.
- Deduction or Darwinian algorithms?: An explanation of the “elusive” content effect on
the Wason selection task. Ph.D. diss., Harvard University, University Microfilms,
1985.
- The logic of social exchange: Has natural selection shaped how humans reason? Cognition,
in press.
-, and Tooby, J. From evolution to behavior: Evolutionary psychology as the missing link.
In The Latest on the Best: Essays on Evolution and Optimality, J. Dupre (Ed.). Cam-
bridge, Mass.: The MIT Press, 1987.
Cutting, J.E., Proffitt, D.R., and Kozlowski, L.T. A biomechanical invariant for gait perception.
Journal of Experimental Psychology 4: 357-372, 1978.
Dawkins, R. The Extended Phenotype. San Francisco: W.H. Freeman, 1982.
de Waal, F. Chimpanzee Politics: Power and Sex Among Apes. New York: Harper and Row,
1982.
Eibl-Eibesfeldt, I. Ethology: The Biology of Behavior, (2nd ed.). New York: Holt, Rinehart
and Winston, Inc., 1975.
Ekman, P. (Ed.) Emotion in the Human Face, 2nd ed. Cambridge, U.K.: Cambridge University
Press, 1982.
Fillenbaum, S. Inducements: On the phrasing and logic of conditional promises, threats, and
warnings. Psychological Research 38: 231-250, 1976.
Tooby, J., and Cosmides, L. Evolutionary psychology and the generation of culture, part I.
Theoretical considerations. Ethology and Sociobiology 10: 29-49, 1989.
-, and - The evolution of revenge. In preparation, a.
-, and - Evolution and cognition. In preparation, b.
-, and DeVore, I. The reconstruction of hominid behavioral evolution through strategic
modeling. In Primate Models for the Origin of Human Behavior, W.G. Kinzey (Ed.).
New York: SUNY Press, 1987.
Trivers, R.L. The evolution of reciprocal altruism. Quarterly Review of Biology 46: 35-57,
1971.
Wall Street Journal. Prestige cards: For big bucks and big egos. April 17, 1985, p. 35.
Wason, P.C. Reasoning. In New Horizons in Psychology, B.M. Foss (Ed.). Harmondsworth,
U.K.: Penguin, 1966.
- Realism and rationality in the selection task. In Thinking and Reasoning: Psychological
Approaches, J.St.B.T. Evans (Ed.). London: Routledge and Kegan Paul, 1983.
-, and Johnson-Laird, P.N. Psychology of Reasoning: Structure and Content, London:
Batsford, 1972.
Wilkinson, G.S. Reciprocal food sharing in the vampire bat. Nature 308: 181-184, 1984.
Williams, G.C. Adaptation and Natural Selection: A Critique of Some Current Evolutionary
Thought, Princeton, N.J.: Princeton University Press, 1966.
Wrangham, R.W. War in evolutionary perspective. In Emerging Syntheses in Science, D. Pines
(Ed.). Santa Fe: Santa Fe Institute, 1985.