
WHAT IS THIS THING CALLED "REPUTATION"?
Christopher W. Morris
Abstract: Concern for one's "reputation" has been introduced in recent
game theory, enabling theorists to demonstrate the rationality of
cooperative behavior in certain contexts. And these impressive
results have been generalized to a variety of situations studied by
students of business and business ethicists. But it is not clear that
the notion of reputation employed has much explanatory power once
one sees what is meant. I also suggest that there may be some larger
lessons about the notion of rationality used by decision theorists.
Why be honest? One answer, exploited recently by game theorists, is that a
reputation for honesty is important and that it may pay to develop one. In
many games repeated indefinitely over time, the story goes, it pays to have a
reputation for being cooperative. It can even pay to be cooperative in games repeated
only finitely many times, or in a single game with many sequential moves. The
specific analyses and the results in question have been cited by David Kreps as
one of the successes of recent game theory.1 Shopkeepers and business school
professors will concur, for they have long claimed that it pays to be moral.
What exactly does it mean to have a reputation in these contexts? Ordinarily,
a reputation is "what is generally said or believed about a person's or thing's
character or standing."2 We might say of a shopkeeper that he is honest, thus
contributing to his reputation for honesty. What is it that we believe of this shopkeeper when we remark on his honesty? We may mean a number of different
things. We might, for instance, believe the shopkeeper to be someone who behaves honestly only when observed, someone who merely acts honestly. Or we
might believe the merchant to be genuinely or really honest. Such a distinction
is common enough; in everyday life, we often distinguish between someone's
being honest and someone (merely) acting honestly because it pays to do so. The
distinction need not be made in terms of real and less-genuine forms of honesty;
that is, it need not be that between true and feigned honesty. Rather, it is something like that between character (as the dictionary says) and mere behavior.
One honest shopkeeper so acts because he is honest; he believes honesty to be
right and acts accordingly. The other shopkeeper acts honestly merely because it
pays to do so.
This distinction may be problematic, or, at the very least, in need of explication. My immediate concern is with the notion of reputation in recent work on
finite repeated games and applications of this work in social science. Here the
notion of reputation is ambiguous, sometimes designating character, sometimes
mere behavior. The developments that have been hailed as a success of recent
game theory may not, upon examination, do more than distract from certain
worries about the dominant conception of rationality. It is, I shall suggest, important to think more carefully about what it could mean to develop a reputation
for cooperation in these contexts.
In ordinary contexts, reputation may often be evidence of character. But character in this sense is something that does not fit well with contemporary rational
choice theory. Character here can be something that involves agents in counter-preferential choice, but it is not obvious that this need be a mark of irrationality.
I shall suggest that agents with character (in this sense) may often do better than
others, in terms of their preferences, because they don't always act in the way
the received view of rationality requires. These claims are compatible with the
celebrated recent results in game theory; indeed, I shall conjecture that they
provide these results with some explanatory import, which they would otherwise lack. In any case, the games in question can provide a test of some of these
claims, so the dispute between alternative conceptions of rationality may be in
part empirical. Some interpretations, then, of the recent results regarding the
value of reputation in finite repeated games may undermine rather than support
the received account of rational choice.
1.

In order to set the stage for the views I wish to discuss, I shall begin with a
discussion of the behavior of firms. I start with firms rather than with humans so
as to avoid certain issues about rationality and agency and so as to focus on
others, having to do with principles of choice.
Suppose that there are many different sorts of firms. Some are profit-maximizing (whatever this will turn out to mean exactly); others are not. Suppose
that these firms find themselves in a competitive environment. Then, in the long
run, only profit-maximizing firms survive. This, in brief, is one of the main arguments for expecting firms to maximize profits in competitive environments.3
The evolutionary argument for the profit-maximizing nature of firms in competitive environments may be understood to sidestep certain issues about whether
firms, or their managers, intended to maximize profits. In fact, they appear not
to, following instead a variety of heuristics. This was thought to be a problem, at
least until the evolutionary argument showed that, whatever the intentions of
their employees, firms that survived maximized profits; those that did not were
eliminated by competition.
Economists and other students of industrial organization, insofar as they are
solely interested in predicting the behavior of firms, can thus treat them as if
they were profit-maximizing entities. Note, however, that what we may say if
we are interested in prediction is not the same as what we may be entitled to say
when we wish to describe or to explain. For description and explanation, unlike
prediction, require premises that are literally true.4 And the literal truth of the
claim that a firm maximizes profits depends on what exactly we mean by this.
The sense in which firms are understood to be profit-maximizing may now
be clarified. A firm maximizes profits, in the sense required, if the consequences
of its behavior are such that, given its environment (which includes the behavior
of other firms), no greater profits can be obtained by its acting differently.5 This
account of profit maximization says nothing about the manner in which the firm
acts, much less the intentions or principles of action of its agents. It focuses on
the consequences of the firm's behavior.
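The required sense can be put compactly. Writing \pi(b, e) for the profits that behavior b yields in environment e (the notation is mine, not the author's), a firm behaving as b* maximizes profits just in case

    \pi(b^{*}, e) \;\ge\; \pi(b, e) \quad \text{for every feasible behavior } b.

The condition constrains outcomes only; it is silent about how b* was arrived at.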
Let us distinguish, then, the consequences of a firm's behavior from what we
might call its principle(s) of decision. A decision principle in this sense is a rule
or norm for choice. It may be rather abstract or relatively concrete, explicit or
implicit. It indicates how the firm or its agents are to evaluate and determine
their choices by specifying the aims or procedures they are to adopt. In a much
cited article, Kreps interprets "corporate culture" in terms of such principles and
the manner in which they are communicated to the various members of a corporate hierarchy. Such principles say "how things are done, and how they are meant
to be done in the organization."6
The evolutionary argument for the profit-maximizing nature of firms says
nothing about the decision principles of firms, that is, nothing beyond the general point that the consequences of acting on these principles must be such that
no other way of acting will bring about greater profits. Most importantly, the
argument offers no reason to think that successful firms must have maximum
profits as their aim or explicit goal, that is, that they must adopt a principle that
specifies maximizing profits as their rule for choice.
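That selection operates on realized profits rather than on the principles producing them can be illustrated with a toy simulation. The sketch below is mine, not the author's; both decision principles and all numbers are invented, and the two types are stipulated to have the same expected profit.

    import random

    # Selection sees only realized profits; the decision principle that
    # produced them is invisible to the selection step.

    def direct_maximizer():
        return random.gauss(10.0, 4.0)   # aims squarely at profit; volatile

    def norm_follower():
        return random.gauss(10.0, 2.0)   # honors self-imposed constraints; steadier

    firms = [direct_maximizer] * 50 + [norm_follower] * 50
    for _ in range(200):
        ranked = sorted(firms, key=lambda firm: firm(), reverse=True)
        survivors = ranked[:50]          # the less profitable half is eliminated
        firms = survivors + [random.choice(survivors) for _ in range(50)]

    print(sum(firm is norm_follower for firm in firms), "norm-followers of 100 remain")

Because the selection step keys on profit figures alone, norm-followers are not systematically eliminated; only a difference in realized profits, never a difference in principles as such, could drive them out.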
Consider different types of principles that firms can have. We might contrast
firms that have simple principles such as "do whatever makes money (or makes
the profit sheets impressive before the quarterly reports are issued)" with others that stress maintenance of market shares, quality, and the like. But that is not
the contrast that I seek. Rather, I wish to distinguish principles that recommend
the direct pursuit of certain ends from principles that (genuinely) constrain the
firm's behavior by reference to certain values or norms. In the case of the latter,
what is important is that the values or norms are not themselves derived from the
ends to be pursued by the firm. An example may be helpful. Contrast two firms
that make the same products for the same markets. Both pursue the same goals.
The only difference is that one of the firms constrains its actions by certain norms,
for instance, of honesty, fidelity, and fair dealing. These norms are self-imposed
(they are not required by law). More importantly, they genuinely constrain, that
is, they can require the firm to do things that apparently impede its pursuit of its
goals, things that the other firm will not be called to do.
If the contrast is sufficiently clear, let us immediately consider an objection.
The second sort of firm, it will be said, cannot survive in a competitive environment; it will not be profit-maximizing and will be eliminated by competition.
The appeal is to the evolutionary argument we have considered. But this is mistaken. Nothing of the sort need follow. The evolutionary argument says only that
competition will eliminate firms that are not profit-maximizing. Recall our characterization: a firm maximizes profits, in the sense required, if the consequences
of its behavior are such that, given its environment (which includes the behavior
of other firms), no greater profits can be obtained by its acting differently. The
second sort of firm could, in many environments, be profit-maximizing. Much
depends on the facts about human motivation, the competitive environment, etc.
But the mere fact that a firm constrains its behavior in certain ways—for instance, by accepting norms of honesty or fidelity—provides no a priori reason
to expect that it will be at a competitive disadvantage, any more than a firm that
follows some principle(s) that does not mention profits. If profit maximization is
the standard by which successful firms are evaluated, the evolutionary argument, by
itself, has no implications for the principles that successful firms (must) employ.
The point, expressed thus, should not be controversial. It has long been recognized that binding oneself and other forms of commitment—or what is called,
redundantly, "precommitment"—may be useful in a variety of interpersonal, as
well as intrapersonal, contexts. The principles that firms adopt may be analogous to the constitutions of states. Of course, it is not an easy matter to provide
an account of the constitutional structure of a state, and we should expect that it
will be equally difficult to explain what principles firms adopt (and how). But
that is not my task. I am arguing only that the evolutionary argument for the
profit-maximizing nature of firms in competitive environments has no a priori
implications for the particular decision principles of successful firms.
Some will be skeptical. The constraints I have mentioned tend to be moral
(e.g., fidelity, honesty), and the business world is comparatively cynical about
ethical norms and values. But this is not a serious problem, for I can make my
point without recourse to moral principles. Consider a competitive environment
where stable contractual relations are very important but where the temptation
to renege on agreements is often strong. In such a situation firms may have an
incentive to make threats against others that renege on agreements. Contrast two
types of firms, one kind that carries out its threats only when so doing is directly
consistent with its goals, and another that carries out its threats even when so
doing is not directly consistent with its goals. The contrast needs to be made
with greater precision (and this we shall do later). But the point should be clear.
The evolutionary argument has no a priori implications about the relative success of one of these firms as opposed to others.7
2.
Let us leave firms behind for now and consider some recent work in game
theory. It has been argued in the last decade that in certain sorts of games it pays
to develop a "reputation." In a variety of games—for instance, the finitely repeated Prisoners' Dilemma—it has seemed that rational players could achieve
only noncooperative (Pareto-inefficient) equilibria.8 The introduction of a (very)
small amount of uncertainty, however, is sufficient to upset these conclusions.
Under normal assumptions, only noncooperative equilibria can be sustained when
these games are played under conditions of complete (and perfect) information.
But with the introduction of uncertainty about the other player(s)'s utilities—
that is, under conditions of incomplete information—cooperative behavior can,
for some time, be sustained.
Consider the Centipede Game,9 represented in extensive form below:

[Game tree: A and B alternate moves along the top of the tree; at each of his nodes A can play R (right) or D (down), and at each of hers B can play r or d. The "down" payoffs, in order, are (1,1), (0,3), (2,2), . . . , (98,98), (97,100), (99,99), (98,101); if B plays r at the final node, the game ends with payoffs (100,100).]
A moves R (right) or D (down), B r (right) or d (down). (A's payoff is listed
before B's inside the parentheses.) As a game of complete and perfect information, the backwards induction "solution" to this game is for A to choose D on the
first round, with payoffs of 1 to each. This result is counter-intuitive. And, of
course, it is not supported by experimental findings.10
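The unraveling can be checked mechanically. Here is a minimal sketch, mine rather than the author's, assuming the payoff schedule reconstructed from the figure: A's i-th "down" ends the game at (i, i), B's i-th "down" at (i - 1, i + 2), and joint cooperation to the end pays (100, 100).

    N = 99  # decision nodes per player, assumed from the payoffs shown

    def solve(node, mover):
        """Backwards-induction payoffs of the subgame starting at this node."""
        if mover == "A":
            down = (node, node)
            cont = solve(node, "B")      # B moves next at the same index
            return down if down[0] >= cont[0] else cont
        down = (node - 1, node + 2)
        cont = (100, 100) if node == N else solve(node + 1, "A")
        return down if down[1] >= cont[1] else cont

    print(solve(1, "A"))  # -> (1, 1): A plays D at the very first node

At B's last node, d pays her 101 against 100 for r; given that, A's D pays him 99 against 98; and so the game unravels back to the first node.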
Now if the game is played under conditions of incomplete information, where
neither player is certain about what the other will do in response to the next
move, the noncooperative result is no longer guaranteed. For A may then choose
R in the hopes that B, uncertain about A, will respond by r. Cooperative responses may be forthcoming for quite some time in this way.11
One way of interpreting this general result is to think of A's choice of R in the
initial rounds as creating a "reputation" for cooperativeness. Cognizant of A's
reputation, B cooperates in turn. It thus pays to have a good reputation. But what
exactly does it mean to have "a reputation for cooperativeness" in these contexts? Of what, exactly, is B cognizant (when she learns of A's reputation)? Kreps
suggests that the uncertainty in these games concerns the "character" of the (other)
players. B is uncertain about A, given A's choice of R. A's act raises the possibility that A is, in Kreps's words, "a cooperative soul."12 The context, however,
makes it clear that this amounts to saying that B is uncertain about A's preferences. Assume that the payoffs above are pecuniary. Then we may not be sure
that the players' utilities are positive affine transformations of their monetary
gains.13 This interpretation emphasizes that the game is one of incomplete information, for the requisite uncertainty could not survive the common knowledge
of the players' (real) utilities.
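The affine condition is worth stating explicitly, since everything turns on it; the notation is mine. Player i's utilities are a positive affine transformation of monetary gains m when

    u_i(m) = a_i m + b_i, \qquad a_i > 0,

in which case ranking outcomes by utility and ranking them by money coincide. The uncertainty just described is uncertainty about whether the other player's utility function has this form.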
Another general interpretation is provided by Robert Aumann. I quote from
an article entitled "Irrationality in Game Theory":
In the "crazy perturbation" literature . . . one looks for Nash equilibria of a
repeated game, in which there is a small exogenous probability that a player
plays irrationally, but so as to motivate the other player to play in some
specific way; e.g., in a mutually beneficial way. Most of the results say
that in one sense or another, the rational types tend to mimic those of the
irrational types that are in some sense "best" for the player who does the
mimicking. Intuitively, one might say that the rational types "disguise"
themselves as irrational; they make believe they are crazy, thus "forcing"
the other player to play accordingly (i.e., to maximize against the selected
irrational type).14
On this interpretation the object of uncertainty is not so much the (other) player's
true preferences but the (other) player's rationality. A has an incentive to choose
R because, given his payoffs, so acting will suggest to B that he may be irrational
and this belief of B, induced by A's behavior, may lead B to choose r. This would
be another instance of the alleged rationality of acting irrationally.15
A reputation, we said, is ordinarily understood to be something that is said or
believed about someone's character. We shall leave the notion of character unanalyzed for the moment. What then is a good reputation, in these contexts, according
to Kreps or Aumann? On Kreps's view, someone has a good reputation if others
believe that he or she is, or may be, "a cooperative soul," someone with preferences
that dispose him or her to choose R in Centipede Games. On Aumann's view, someone has a good reputation if others believe that he or she is, or may be, irrational.
We could discuss both interpretations, but I shall restrict our attention to the
second (Aumann's). While the first interpretation (Kreps's) is always available
whenever the payoffs are stated, say, in pecuniary terms (as in experimental
games), we can best raise the questions I wish to discuss by assuming that the
payoffs are linear with utilities and by examining Aumann's interpretation.16 We
shall assume that the players know each other's preferences but that they are
uncertain about their rationality. Our players, then, are uncertain about each
other's rationality. Each has an incentive to exploit this uncertainty and to develop (and maintain) a reputation for irrationality.
There is something troublesome with this interpretation. Why infer from the
sole fact that a player does not choose his or her utility-maximizing move (using
backwards induction) that he or she is irrational?17 In a brief but telling discussion of a centipede game, James Friedman makes the following remark:
The reasonable solution proposed by Rosenthal rests on a small irrationality or act of faith by the players: Each supposes the other will, with some
positive probability, choose to continue the game at each node. I hesitate to
use the word irrational to describe this behavior, because it prejudges that
only best-reply behavior is rational.... To continue [the game] and trust to
the good sense of the other player also to continue seems a reasonable risk,
because both players stand to gain."18
Aumann and others clearly assume that "only best-reply behavior is rational."
That is precisely the question that is begged by the irrationality interpretation.
Friedman's counter-suggestion is that "trusting to the good sense of the other
player" may be "reasonable." I shall suggest that it can also be rational insofar
as a player can have a (sufficient) reason so to act.
3.
Let us return for a moment to firms and shopkeepers. To anticipate what I
shall say about rational agents, consider an analogy between firms and people.
We saw that firms that adopt principles imposing (genuine) constraints on their
behavior may do better (in terms of profits) than competitors. The evolutionary
argument from competition does not rule this out. Similarly, we saw that in a
variety of finitely repeated games, agents who forego their "best reply" and act
"cooperatively" may do better than others. (This formulation is insufficiently
precise, but it will do for now.) The question is whether this analogy has any
implications for the matter of individual rationality.
Consider next a story I once heard about some business practices in certain
Asian markets where transactions were based on trust. Apparently, newcomers
would be tested by offering them deals where they might be tempted to renege
on their part of the arrangement and where they could, very easily, do so without
expected loss. The purpose of the practice was to determine who might be a trustworthy business partner. Those who reneged would be excluded from future deals.
I do not know of the nature of these tests, but I assume that merchants and
others who need to interact in a variety of situations where monitoring is difficult or costly may employ them. The question is what is being tested for in these
situations? An obvious answer would be, once again, character. But consider
first the likely reaction of many economists and decision theorists to this tale.
They may say that if one knows that one may be observed, then "honesty (fidelity, etc.) is the best policy" in these contexts, and the test would fail to distinguish
as intended between different sorts of potential business partners.
Presumably, though I do not know the details about these practices, the tests
are often sufficiently robust to obviate this objection. Adapting a notion from
Robert Frank, let us call a golden opportunity a situation where one is presented
with the possibility of doing something (very) advantageous with low costs and
a very small probability of detection. One of Frank's examples is finding a wallet full of cash in a park, where "A person who returns the cash merely because
he fears he might appear in a 'Candid Camera' episode is being paranoid, not
prudent."19 Presumably the Asian business practice I referred to seeks to present
newcomers with apparent "golden opportunities" and thereby to acquire valuable information about their character.
Readers skeptical of the usefulness of such a test should consider their expectations about, for instance, the ways people respond to lost wallets in different
cultures. I do not expect that a wallet, full of cash, lost in New York would be
returned, even without the cash. By contrast, I should expect a lost wallet and all
of its cash would be returned, or turned in, almost anywhere in Japan. These
expectations are subjective, and I cannot offer any frequency probabilities.20
Ordinarily, one would infer here that some people (and members of some cultures) are more honest than others.21 This is a matter about people's character,
we might say.22
Behavior, what people do, when faced with temptation, is not the only evidence we have about character (intention, etc.). We can also make inferences
from what people say. I do not mean we can directly ask people whether they are
honest and expect to learn much of significance. Rather I am thinking, for instance, of the sort of information one picks up incidentally from the sorts of
normative conversations that are the centerpiece of Allan Gibbard's recent account of normative judgment.23 Gibbard suggests that one of the ways that we
discover, as well as influence, the norms that people accept is through conversation (e.g., discussion, gossip). What people say about actual and hypothetical
situations—"what do you think one should do in such a case?" "isn't it awful
what so-and-so did?"—can tell us something about what norms they accept as
relevant for such cases. Adapting Gibbard's picture to our purposes, it might be
possible to learn something useful, say, about a potential business partner, simply by asking for his or her reaction to a story about someone taking advantage
of a golden opportunity, or a story about a game theorist defecting early in a
finitely repeated PD. Someone who (sincerely) tells you that taking advantage
of a hypothetical golden opportunity or defecting in the repeated game is rational might be a less reliable business partner in some contexts than someone who
accepted different norms. This tells us nothing (so far) about rationality. My
point is merely that there may be differences of character that are important in
these contexts.
Let us turn briefly to shopkeepers. Let us contrast (crudely) "honest" and
"dishonest" shopkeepers. The latter short-change customers (especially children
and trusting tourists), dilute their merchandise, and the like. That is, they do
these things when they can get away with it, or, more precisely, when the expected benefits outweigh the expected costs. (Competition will presumably
eliminate dishonest shopkeepers who are not reliable estimators of expected
benefits and costs.) The "honest" shopkeepers typically don't cheat their customers, etc. Of course, dishonest merchants have an incentive to appear to be
honest, so it may not be easy always to distinguish the two, that is, to tell them
apart (an epistemic problem).
Let us return to our earlier distinction between two different kinds of honest
shopkeepers. One type acts honestly because he or she believes that "honesty is
the best policy." A policy here is to be understood more as a rule of thumb than
a principle (in the sense implicitly introduced earlier). By contrast, another kind
of honest shopkeeper acts honestly because he or she simply is honest. That is,
our second honest merchant is honest as a matter of principle; honesty is a fact
about his or her character. Let us label these policy-honesty and principle-honesty respectively.
My interest is in the comparative advantages of principle-honesty. Suppose
first that it is possible for humans to be principle-honest and second that we can,
with some reliability, distinguish between the two sorts of people. (The evolutionary story regarding the former possibility may, of course, depend on that
regarding the latter.) Then, it may be that principle-honest shopkeepers do better
than policy-honest ones in some environments. Principle-honest shopkeepers
may be presented with greater opportunities than others—opportunities where
trust is important—and they may do better when presented with some of the
same opportunities presented to policy-honest merchants.
4.
Let us return to the Centipede Game, but played under conditions of complete (and perfect) information. Suppose that there are two sorts of agents, rational
and irrational.24 The implication of the recent literature is that rational agents
will be very fortunate to find themselves playing against irrational ones. For,
then, cooperation may be sustained for quite some time.
Suppose instead that there are (at least) two sorts of agents, leaving aside for
the moment the question about rationality. The first sort of agent is a "straightforward maximizer," someone who maximizes at every decision point or node,
never allowing themselves to be moved by any backward-looking consideration
that is not already built into their expected utilities. Straightforward maximizers
are simply the standard agents of decision and economic theory.25 When paired
with one another, straightforward maximizers do not even get started in Centipede Games played under conditions of complete information with common
knowledge of rationality.
The second sort of agent is a "cooperator." He or she accepts a principle of
conditional cooperation, such as "Cooperate with (and only with) those who can
be expected to cooperate."26 This sort of agent is not a straightforward maximizer, for the latter cannot (rationally) act on any principle that requires
counter-preferential choice in certain situations. The principle of conditional
cooperation demands just that in, e.g., the last move of a finitely repeated game,
when the other player(s) cooperated on the previous move. Cooperators, then,
can adopt and act on such principles, where straightforward maximizers can only
choose their best reply. Straightforward maximizers accept, and cooperators reject, a separability condition on dynamic choice: for any sequence of choices,
choice at any decision point should be made in abstraction of past choices, as if
choosing anew.27
Recall the payoffs for the Centipede Game. Suppose the game is one of complete information. Given the appropriate common knowledge assumptions, two
straightforward maximizers can expect to achieve (1,1). By contrast, two cooperators will achieve (100,100). These results are simple and predictable only
because of the artificiality of our assumption of complete information. They are
interesting nevertheless. Cooperators do much better against one another than
do straightforward maximizers. Indeed, the latter seem straightforwardly stupid.
Even if there are few games of complete information of this kind in the real
world, partisans of the orthodox view cannot be very comfortable with that result.28
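The two pairings can be displayed side by side in a small sketch. It is mine, not the author's, and the behavior rules are simply stipulated to reproduce the two cases just discussed; nothing is derived from equilibrium analysis.

    def play(type_a, type_b, nodes=99):
        # "SM" plays its backwards-induction move: down at the first opportunity.
        # "CC" cooperates conditionally, passing so long as no one has defected.
        for i in range(1, nodes + 1):
            if type_a == "SM":
                return (i, i)            # A plays D at node i
            if type_b == "SM":
                return (i - 1, i + 2)    # B plays d at node i
        return (100, 100)                # both passed throughout

    print(play("SM", "SM"))   # (1, 1)
    print(play("CC", "CC"))   # (100, 100)

Measured by the very payoffs that the received theory treats as the standard of success, the pair of cooperators ends with a hundred times what the pair of maximizers achieves.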
If we add uncertainty to our game between cooperators and straightforward
maximizers, the matter quickly becomes complicated. (For instance, cooperators may have difficulty recognizing one another and distinguishing themselves
from pretenders.) I shall not, however, attempt to determine what we may expect under these conditions. Rather, I should like to focus on what it is that
agents believe about one another. Recall Aumann's interpretation: players know
each other's preferences but are uncertain about each other's rationality. (Agents,
if rational, are expected utility maximizers; otherwise, they are irrational.) Suppose that the agents find themselves cooperating; A has moved R, and B r, a
number of times. Each acquires a "reputation for being cooperative." What is it
that they believe when they believe the other to have "a reputation for being
cooperative"?
The problem is that the "reputation for being cooperative" seems to be the
ground or evidence for conclusions about the other player's rationality. But the
content of this notion of reputation depends essentially on what is believed about
the other's rationality.
Suppose that A believes of B that (1) she is rational (in the received sense)
and (2) she has "a reputation for being cooperative." What would this mean? If
(1) is true—B is rational—what could this "reputation for being cooperative"
amount to over and above the fact that B has moved r on previous moves? The
fact that B has moved r for the past several moves provides some reason to
believe that she may continue to do so, insofar as one has reason to believe that
the future will resemble the past. But note that "the reputation for being cooperative" has no explanatory value here. It amounts to little more than a summary
of past behavior. This information about past behavior, coupled with some sort
of principle that future behavior is likely to be like past behavior, may enable
one to predict. But it has no explanatory value (and it is hard to determine what
exactly grounds the prediction).
If A believes of B that (1') she is irrational and (2) she has "a reputation for
being cooperative," the matter is even less clear. The "reputation" may then be
nothing more than a summary of past moves with the added implication that
these moves are unmotivated or not rationally explicable.29
A's beliefs that (1) and (1') need not be understood to be contradictories.
They may be members of a continuum of beliefs: A ascribes probability p to the
proposition asserting B's rationality, with 0 < p < 1. On this interpretation the
argument sketched above rests on a false dilemma, assigning only values of 1 and
0 to the proposition in question.30 It does seem preferable to understand belief as
allowing of degrees in these contexts. So beliefs that (1) and (1') are only contraries. But this does not affect my point. It is still hard to determine what content
to give the notion of reputation here other than a mere summary of past behavior. A learns of B's "reputation," B's past behavior, and changes his subjective
expectations about B's future behavior. Certainly, ordinary notions of character
and related concepts play no role in grounding expectations.
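What the updating amounts to can be made explicit with Bayes's rule; this is my gloss, not a reconstruction of Aumann's model. Let p be A's prior probability that B is rational, and suppose a rational B would play r at a given node with probability q < 1 while an irrational B plays r for certain. After observing r,

    p' = \frac{pq}{pq + (1 - p)} \;\le\; p,

so each cooperative move can only lower the probability A assigns to B's rationality. Even in this explicit form, the "reputation" is nothing but a record of past moves fed into an updating rule; no notion of character does any work.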
It is hard to see what explanatory work the notion of reputation could play
here. Let us leave Aumann's interpretation for now. Suppose that the players
know each other's preferences but are uncertain about each other's character or
"type," straightforward maximizer or cooperator.^' Suppose that the agents find
themselves cooperating; A has moved R, and B r, a number of times. Each acquires a "reputation for being cooperative." What is it that they believe? Consider
two possibilities.32 Each might believe that the other is a straightforward maximizer or a cooperator. Whatever it is that they believe will, in this case, have
potential explanatory power. Suppose that A believes B to be a cooperator. Then
he could think that B's cooperative behavior is explained by her character or
"type." Further, what potentially explains B's behavior also grounds A's predictions about her future action. Suppose that A believes that B is a straightforward
maximizer. Then A may believe that B acts cooperatively so as to lead him to
infer, falsely, that she is a cooperator. A's beliefs about B may have explanatory
and predictive power in this case as well.
Insofar as there are agents with different characters or decision principles—
e.g., cooperators and straightforward maximizers—and insofar as one is able to
distinguish between them, the notion of reputation may have both explanatory
and predictive power. Otherwise, it is hard to see how these recent results about
finitely repeated games could be understood as one of the successes of recent
game theory.
5.
It will be objected that cooperators, as characterized, are not rational. The
separability condition is a requirement of rationality. The argument will be that
if one cares about consequences, then one must consider each decision node to
be, in effect, the start of a new choice problem.33 But this is a fallacious move.
Recall our discussion of the evolutionary argument. I argued that the argument
had no a priori implications about the decision principles of profit-maximizing
firms. From the sole fact that a firm maximizes profits one cannot infer that its
decision principles presuppose the separability condition.
The objector may then say that cooperators act against reason whenever they
act counter-preferentially. For preferences provide reasons, so acting against one's
preferences is acting against one's reasons. This takes us to the heart of the issue:
what are preferences? Consider a possible ambiguity in 'preferences provide reasons'. We could say that preferences are whatever provides reasons. Alternatively,
we could say merely that preferences provide reasons, among other things (e.g.,
intentions, resolutions). The former view is most common.34 But to assume or
assert it in this context is simply to beg the question. For to reject separability is
to allow that backward-looking considerations can, in addition to preferences,
offer reasons for actions.
Preferences are rankings of alternatives. As such, they provide an appropriate standard by which to evaluate outcomes, that is, how well an agent does in
terms of his or her ends or objectives. To claim that preferences are whatever
provides reasons is to say that the standard by which outcomes are properly
evaluated is also the standard (or principle) by reference to which choices should
be made. But that is (again) to beg the question. We have argued throughout that
the choice of appropriate principles of decision, for firms or for individuals,
cannot be determined a priori.
The defender of the received view may next argue that "cooperators" are
rational only insofar as their behavior can be interpreted as maximizing a different utility function than the one we have been referring to. In our Centipede
Game, for instance, B is rational in moving r on the last move only if she maximizes the satisfaction of a different set of preferences than those measured by
the original payoffs. We must distinguish, it is suggested, between two different
sets of preferences: the original ones and another set that the cooperator's behavior
can be interpreted as maximizing. The cooperator acts counter-preferentially only
by reference to the first set. It is the second set, however, that constitutes the agent's true
or real preferences.
It is this last claim that is mistaken, depending on what exactly one means by
'true' or 'real'. But I can reformulate my suggestion in terms of the two sets of
preferences. I can say that cooperators may do better, in terms of their first set of
preferences, by acting as if they were maximizing the satisfaction of the second
set of preferences. The important point is that it is the first, and not the second,
set of preferences that provides the appropriate standard for determining how
well cooperators do. Cooperators do well, in terms of these preferences, by not
acting as the defenders of straightforward maximization would recommend. Similarly, honest shopkeepers may do better than those who merely act honestly
because it so pays.
These remarks are inconclusive, and I do not claim to have established any
major claims. Invoking the views of Gauthier, McClennen, and others, I have
wished to suggest that the use of notions of reputation in recent game theory and
management theory has less explanatory value than first appears and that more
ordinary notions of reputation, which make reference to an agent's character,
may offer more interesting interpretations of these results. Of course, I wish to
express as well some skepticism about the received conception of rationality,
but I do not expect to have said enough to persuade the committed.
Notes
Earlier versions of this paper were presented to the rationality seminar, CREA, Ecole
Polytechnique (Paris), Simon Fraser University (Vancouver), and the Rationality Colloquium
in Cerisy (Normandy). I am very grateful to members of these audiences for their comments
and reactions; I am especially indebted to Jocelyne Couture and to Philippe Mongin for a
number of criticisms and to Peter Vanderschraaf for extensive and helpful comments. A French
translation of an earlier version of the essay appears in Limitations de la rationalité et constitution du collectif, volume 1, Rationalité et éthique, edited by Jean-Pierre Dupuy and Pierre
Livet (Paris: La Découverte, 1997), pp. 155-173.
1. Kreps 1990b, p. 82.
2. Oxford English Dictionary, p. 1227.
3. Alchian 1951. I do not wish to take a position on the debate to which Alchian was contributing, and I am invoking Alchian's famous argument only to set up some of the claims I
wish to make about explanatory notions of reputation.
4. This claim, at least with regard to explanation, is controversial. Certainly, a good explanation may have premises that are only "approximately" true. (See Nelson 1986.)
5. The evolutionary argument warrants only the expectation that surviving firms be more
profitable than those eliminated, not that they be profit-maximizing. My point, however, is
not to evaluate the evolutionary argument but to use it in order to make a point about norms
or principles of choice.
6. Kreps 1990a, p. 93. See also Weigelt and Camerer 1988, pp. 451-452.
7. Recent game theory supposedly provides us with the means to distinguish "credible"
from "incredible" threats (and commitments). Compliance with incredible threats (or commitments) will not be a subgame perfect equilibrium. But the evolutionary argument does
not rule out firms whose principles recommend compliance with some allegedly incredible
threats (or commitments).
(Some readers may be thinking of chain stores at this point—which is why I introduced
the example of threat principles with some reluctance. To counter likely objections, for the
moment let me just say that what I find problematic with these stories is the precise characterization of the behavior of the dominant chain store; it is assumed too quickly that retaliatory
policies are motivated uniquely by forward-looking considerations. See Selten 1978 and the
literature that follows from this article.)
8. 'Cooperative' here, of course, refers to strategies such as not confessing in the Prisoners' Dilemma or swerving in Chicken. I am assuming that the games in question are played
"noncooperatively" in the usual sense. The first cooperative/noncooperative distinction pertains to strategies and outcomes, the second to structure. The argument of this essay, however,
may suggest that the normal way of drawing the second distinction—i.e., in terms of the
absence of binding agreements—is problematic.
9. Rosenthal 1981.
10. "If one views the experiment as a complete information game, all standard game theoretic equilibrium concepts predict the first [player moves D]. The experimental results show
this does not occur." McKelvey and Palfrey 1992, p. 803.
11. "An alternative explanation for the data can be given if we reconsider the game as a
game of incomplete information in which there is some uncertainty over the payoff functions
of the players." McKelvey and Palfrey 1992, p. 803. The failure of mutual knowledge of
rationality blocks the backwards induction argument. For discussions of recent work on the
relationship between backwards induction and mutual knowledge of rationality, see Bicchieri
1993, esp. chs. 3-4.
12. Kreps 1990b, pp. 78-79, 90.
13. Kreps 1990b, pp. 116ff.
14. Aumann 1994, p. 11.
15. Ordeshook 1992, pp. 242-247. See also Schelling 1960 and Parfit 1984.
16. Kreps's apparent behaviorism may incline him to the Aumann interpretation. Elsewhere
Kreps emphasizes that game theorists try to understand payoffs
to represent numerically the players' preferences, which in turn are meant to
"represent" their choice behaviour. So if we see a player choosing in a fashion
that doesn't maximize his payoffs as we have modelled them, then we must
have incorrectly modelled his payoffs. (Kreps 1990b, p. 26, note 8)
In the chain store game our move here would obviate the distinction between "weak" and
"strong" chain stores. See note 7.
17. Note that inferring that someone is irrational is not the same as inferring that someone
is "irrational." The use of terms like 'irrational' in double quotes suggests that matters are
not as simple as they may appear. Double quotes in these contexts suggest at the least that the
authors wish to distance themselves from certain substantive claims. (See Aumann's use of
'best' in the long quotation above.)
18. Friedman 1990, p. 194. Aumann and Sorin conclude that "The work on equilibrium refinements since Selten's
'trembling hand' (1975) indicates that rationality in games depends critically on irrationality. In one way or another, all refinements work by assuming that irrationality cannot be
ruled out, that the players ascribe irrationality to each other with a small probability. True
rationality needs a 'noisy' irrational environment. . . . The rational agent comes to resemble
his irrational environment." (1989, pp. 37-38)
19. Frank 1988, p. 73. A different sort of case, rather commonplace, would be tipping at
restaurants where one does not expect to return. Many people appear to tip as generously in
these circumstances as in others, independently of their expectations that the waiter or waitress will publicly express disappointment or anger.
20. But I have noted what happens to unattended bags in New York and Tokyo.
21. I leave aside for now problems that Frank raises about making inferences from reputations. See Frank 1988, pp. 72-74.
22. Earlier I said that "Reputation may often be evidence of character. But character in this
sense is something that does not fit well with contemporary rational choice theory." Frank
quotes Lester Telser, a University of Chicago economist, who is skeptical about the value of
reputations. "The difficulty in [Telser's] view," Frank says, "is not that we rarely see events
that test a person's character but that people have no character to test." Telser claims that
people seek information about the reliability of those with whom they deal.
Reliability, however, is not an inherent personality trait. A person is reliable if
and only if it is more advantageous to him than being unreliable. Therefore,
his information about someone's return to being reliable is pertinent in judging the likelihood of his being reliable. For example, an itinerant is less likely
to be reliable if it is more costly to impose penalties on him. . . . someone is
honest only if honesty, or the appearance of honesty, pays more than dishonesty. (Quoted by Frank 1988, p. 75n)
"Gibbard 1990.
^"Rationality here, it should be noted, does not entail mutual or common knowledge of
rationality.
"The label, of course, is David Gauthier's (1975 and 1986).
"cooperators" are like Gauthier's "constrained maximizers." See Danielson 1992
for a specification of a number of alternative principles of cooperation. I do not distinguish
between these different sorts of "cooperators" in this essay.
"See McClennen 1990, pp. 120-122. This condition is implicit in the requirement that a
solution to a dynamic game be a subgame perfect equilibrium.
28. Cristina Bicchieri comes "to the conclusion that if players have bounded knowledge,
and in particular very limited knowledge of mutual rationality, the outcome of the game may
turn out better than if they knew more." (1993, p. 218) If rationality has to do with reasons
for action, this result ought to give pause.
29. Fudenberg and Tirole propose to investigate "the notion that a player who plays the
same game repeatedly may try to develop a reputation for certain kinds of play. The idea is
that if the player always plays in the same way, his opponents will come to expect him to
play that way and will adjust their own play accordingly." (1991, p. 367) But, again, what
exactly is the ground for the expectations of the other players? The use of the ordinary notion
of reputation suggests something (e.g., character) that is not present in the usual models.
^"Philippe Mongin and Allan Gibbard independently suggested this point to me.
^•"we suppose that there is incomplete information about each player's type, with different types expected to play in different ways." (Fudenberg and Tirole 1991, p. 367) Some
interpretations of the notion of type may give us a plausible reading of Kreps's notion of "a
cooperative soul."
32. Again, I simplify; there are as many possibilities as possible probability assignments.
33. The common injunction to disregard sunk costs is an implication of the separability
condition.
34. Note that the behaviorist interpretation of preference as "choice behavior" does not
allow one to infer that preferences (or anything) provide reasons. It may be charitable to
interpret many would-be behaviorists—e.g., Kreps (see note 16)—to be affirming that preferences are whatever provides reasons for action.
Bibliography
Alchian, Armen A. 1951. "Uncertainty, Evolution, and Economic Theory." Journal
of Political Economy 58:211-221.
Aumann, Robert J. 1994. "Irrationality in Game Theory." In Analyse économique
des conventions. Edited by André Orléan. Paris: Presses Universitaires de France.
Aumann, Robert J., and Sylvain Sorin. 1989. "Cooperation and Bounded Recall."
Games and Economic Behavior 1:5-39.
Bicchieri, Cristina. 1993. Rationality and Cooperation. Cambridge: Cambridge
University Press.
Danielson, Peter. 1992. Artificial Morality: Virtuous Robots for Virtual Games.
London and New York: Routledge & Kegan Paul.
Frank, Robert H. 1988. Passions Within Reason: the Strategic Role of the Emotions.
New York and London: W. W. Norton.
Friedman, James. 1990. Game Theory with Applications to Economics. 2nd ed. New
York: Oxford University Press.
Fudenberg, Drew and Eric Maskin. 1986. "The Folk Theorem in Repeated Games
with Discounting or with Incomplete Information." Econometrica 54:533-554.
Fudenberg, Drew and Jean Tirole. 1991. Game Theory. Cambridge, Mass.: MIT Press.
Gauthier, David. 1975. "Reason and Maximization." Canadian Journal of Philosophy
4:411-433.
Gauthier, David. 1986. Morals by Agreement. Oxford: Clarendon Press.
Gibbard, Allan. 1990. Wise Choices, Apt Feelings. Cambridge, Mass.: Harvard
University Press.
Kreps, David M. 1990a. "Corporate Culture and Economic Theory." In Perspectives
in Positive Political Economy. Edited by James A. Alt and Kenneth A. Shepsle.
Cambridge: Cambridge University Press.
Kreps, David M. 1990b. Game Theory and Economic Modelling. Oxford: Clarendon
Press.
Kreps, David M.; Milgrom, Paul; Roberts, John; and Wilson, Robert. 1982. "Rational
Cooperation in the Finitely Repeated Prisoners' Dilemma." Journal of Economic
Theory 27:245-252.
Kreps, David M., and Robert Wilson. 1982. "Reputation and Imperfect Information."
Journal of Economic Theory 27:253-279.
McClennen, Edward F. 1990. Rationality and Dynamic Choice. Cambridge:
Cambridge University Press.
McKelvey, Richard D. and Thomas R. Palfrey. 1992. "An Experimental Study of the
Centipede Game." Econometrica 60:803-836.
Nelson, Alan. 1986. "Explanation and Justification in Political Philosophy." Ethics
97:154-176.
Ordeshook, Peter C. 1992. A Political Theory Primer. London: Routledge and Kegan
Paul.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press.
Putterman, Louis, ed. 1986. The Economic Nature of the Firm. Cambridge: Cambridge
University Press.
Rosenthal, Robert. 1981. "Games of Perfect Information, Predatory Pricing, and the
Chain Store Paradox." Journal of Economic Theory 25:92-100.
Schelling, Thomas. 1960/1963. The Strategy of Conflict. New York: Oxford
University Press.
Selten, R. 1978. "The Chain Store Paradox." Theory and Decision 9:127-159.
Weigelt, Keith, and Colin Camerer. 1988. "Reputation and Corporate Strategy: A
Review of Recent Theory and Applications." Strategic Management Journal
9:443-454.