Robyn Dawes

Robyn Mason Dawes (July 23, 1936 – December 14, 2010) was an American psychologist who specialized in the field of human judgment.

Quotes

Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally (2001)

Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally. Boulder: Westview Press, 2001. ISBN 081336552X.
All quotes from this hardcover edition.
Italics as in the book. Ellipses are brief elisions of historical or illustrative material.
  • People treat reason as if it were the most minor and harmful aspect of a whole human being. It is as if a soldier standing guard were to say to himself: “What good would my rifle be if I were now to be attacked by a dozen enemies? I shall therefore lay it aside and smoke opium cigarettes until I doze off.”
  • Thus, even if we accept that the people are deviously neurotic rather than outright irrational, we still must specify exactly how they believe that the rest of us can be fooled by them. Throughout the book, I will assert that they are urging us (and themselves) to “associate, but not compare.” This book is written in partial hope that the readers will end up making appropriate comparisons, rather than simple associations—which generally lead to a deficient specification of the categories necessary to reach a rational conclusion.
    • Preface (p. xiv)
  • Unfortunately, there are many irrational conclusions and beliefs in our culture from which to choose. Those analyzed at some length—and as precisely as possible—are those with which I am most familiar. With public opinion polls indicating that more people in the United States believe in extrasensory perception than in evolution, it is not surprising that examples abound.
    • Chapter 1, “Irrationality is Abundant” (p. 2)
  • These people were saying in effect that “I would be rational except that you are so irrational that I can’t be.” This stance makes the person who adopts it just as culpable as anyone who flat-out endorses irrationality.
    • Chapter 2, “Irrationality Has Consequences” (p. 21)
  • At the very least, irrationality per se can be challenged. In contrast, acting irrationally because we believe that other people are so irrational that their irrationality cannot be challenged leads to no challenge at all.
    • Chapter 2, “Irrationality Has Consequences” (p. 24)
  • Thus, logic and mathematics are important in determining “what could be,” though not necessarily implying what is. “What is” must be consistent with logic, namely, with “what could be.”
    Why? I don’t know. To me one of the great mysteries of life is that by simply thinking logically we can determine a lot about the universe—or at least conclude what can’t be, which together with empirical observation leads us to some pretty good ideas about what is.
    • Chapter 4, “Irrationality as a ‘Reasonable’ Response to an Incomplete Specification” (p. 52)
  • The DID problem is an example of arguing from a vacuum. The argument is basically that if one type of procedure (diagnosis, therapy, business venture, or whatever) does not work, then something else will. Well, perhaps nothing will work, or perhaps the only reason we observe that something did not work is that we were ignoring the cases in which it did—often because, for some very compelling social reasons, they never come to our attention.
    I have discovered this argument from a vacuum often in the context of various “critiquing” studies of statistical versus clinical prediction. There is one overwhelming result from all the studies: When both predictions are made on the basis of the same information, which is either combined according to a statistical (actuarial) model or combined “in the head” of an experienced clinician, the statistical prediction is superior.
    • Chapter 4, “Irrationality as a ‘Reasonable’ Response to an Incomplete Specification” (p. 63)
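
The superiority of actuarial over clinical combination was Dawes's own research specialty; his 1979 paper “The Robust Beauty of Improper Linear Models” found that even equal weights on standardized cues tend to match or beat combination “in the head.” A minimal sketch of what combining information “according to a statistical (actuarial) model” can mean, using hypothetical cue names and data rather than anything from the book:

```python
# Unit-weighted ("improper") linear model: standardize each cue,
# then sum with equal weights. All data below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def unit_weighted_scores(cases):
    """cases: list of cue tuples; returns one composite score per case."""
    cols = list(zip(*cases))
    mus, sds = [mean(c) for c in cols], [std(c) for c in cols]
    return [sum((x - mu) / sd for x, mu, sd in zip(case, mus, sds))
            for case in cases]

# Hypothetical applicants: (aptitude score, GPA, work-sample rating).
applicants = [(120, 3.1, 7), (105, 3.9, 9), (130, 2.5, 5), (110, 3.4, 8)]
for i, score in enumerate(unit_weighted_scores(applicants)):
    print(f"applicant {i}: composite = {score:+.2f}")
```

The point of such a rule is not optimal weighting but consistency: it applies the same weights to every case, which is precisely what combination “in the head” fails to do.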
  • A particular example (i.e., of irrationality) involves interviews. Despite all the evidence about the uselessness of interviews in predicting future behavior, people remain convinced that some people—especially themselves—are superb at “psyching out” other people during an interview. In contrast, the research indicates that interviews are effective only insofar as they yield information that could more consistently and more validly be incorporated into a statistical model. One problem, of course, that leads to the belief in the superiority of the unstructured interview is that it is, in fact, not studied; there is almost no systematic feedback to most interviewers. Much of the time, the interviewer is in a particular position in an organization and never sees the interviewee again. Second, if the interviewer does see the interviewee later, then that means that the interviewee has been accepted, which often implies fairly reasonable performance. Moreover, it is always possible to rationalize failures.
    • Chapter 4, “Irrationality as a ‘Reasonable’ Response to an Incomplete Specification” (p. 65)
  • Finally, the irrationality resulting from incomplete specification can be affected by emotions in a very simple way. If the conclusion is consistent with our desires or needs, the specification may not be examined in detail—in particular, not examined for its incompleteness. How often, when we conclude what we wish to conclude, do we then decide to subject our conclusion to detailed scrutiny? On the other hand, when the conclusion is one that contradicts our wishes and needs, then clearly there is a motive to examine our logic. Then we reconsider or even restructure the possibilities, question whether we have examined them all, and so on.
    • Chapter 4, “Irrationality as a ‘Reasonable’ Response to an Incomplete Specification” (p. 68)
  • A closely allied type of irrationality is termed irrefutability. This name relates to the idea that a good scientific theory should be refutable: At least in theory, there should be some evidence that would lead us to doubt or reject the theory. If all evidence is simply interpreted as supporting it, then it is termed irrefutable, which is a hallmark of pseudoscience, not of science.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 96)
  • Prediction is not the same thing as understanding, but in the absence of prediction, we can certainly doubt understanding.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 97)
  • True scientific demonstration involves convincing an observer who is outside the process, particularly one not deeply and emotionally enmeshed in it.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 99)
  • Of course, our experience concerning such people also involves exposure to media. The selective-availability problems that arise because the media select interesting (if not sensational) news are well known. Consider the overestimation of murder as a cause of death relative to suicide.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 100)
  • The situation is very simple. Familiarity leads to availability and often to accuracy as well; hence availability is used as a cue to accuracy.
    The problem is that mere assertion and repetition also lead to availability, whether or not this assertion and repetition involve reality, as familiarity generally does. Thus, the “big lie” of Nazi propaganda minister Joseph Goebbels was based on the idea that if something is repeated often enough, people will believe it—in large part simply because they have heard it before…
    Goebbels apparently believed that providing a credible source, for him the German national government, was a critical component in having the repeated statements believed. Subsequent research has shown that the credibility of a source is not a necessary condition to develop beliefs…
    Worse yet, mere repetition—which creates an availability bias due to familiarity—can also make people confident of their own decision making in the absence of any feedback that they have made good decisions.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 105)
  • Believing you’re good at something just because you do it—without any information that you’re doing it well—is indeed irrational.
    • Chapter 6, “Three Specific Irrationalities of Probabilistic Judgment” (p. 106)
  • The limitation of the story to a single sequence and the essentially ad hoc nature of causal attributions call into question the whole procedure of using stories as evidence, and of thinking that they establish causality or patterns of reasons.
    • Chapter 7, “Good Stories” (p. 113)
  • Any of these antecedents could have been connected with different consequences—in particular with many scenarios involving safe landings. What we have done is a creative act, but the problem is that we do not really know what the general relationship is between these antecedents and the important consequence of whether the landing is a crash or a safe one; in fact we cannot do so by observing a single “story” of a crash. At the least, we would have to compare this story to additional stories involving safe landings (again, a nonevent). This comparison is made rather difficult, however, by the decision of the Federal Transportation Department to erase tapes following uneventful landings so that these tapes can be reused. Thus, critical comparisons are lacking in the story model of causality. The story model is compelling, but its compelling nature is essentially illusory.
    • Chapter 7, “Good Stories” (p. 121)
  • Prior to studies of unusually intelligent people that showed them to be generally much better adapted and happier than others, the popular belief in the United States was that exceptional intelligence was often associated with exceptional ability to “drive yourself nuts.” Hence, people believed that genius and lunacy were intimately connected. Perhaps nearly all of us “drive ourselves a little nuts” by virtue of creating stories that lead us to the illusion that we understand history, other people, causality, and life—when we don’t.
    • Chapter 7, “Good Stories” (p. 125)
  • Two biases of memory, however, tend to enhance the illusory nature of our retrospective “understanding” of our own and others’ lives. The first is that we tend to overestimate specific events relative to general categories of events. The second is that we tend to recall specific events and to interpret them in ways that make sense out of a current situation—“sense” in terms of our cultural and individual beliefs about stability and change in the life course. Thus, memories, which appear to be beyond our control as if we are observing our previous life on a video screen, are like anecdotes in that they are often (inadvertently) “chosen for a purpose.” The result is that they will tend to reinforce whatever prior beliefs we have, just as anecdotes tend to reinforce the points they are meant to illustrate.
    • Chapter 7, “Good Stories” (p. 127)
  • Unfortunately, good stories are so compelling to us when we take the role of psychologist or social analyst that we do not realize that at best they constitute just a starting point for analysis.
    • Chapter 7, “Good Stories” (p. 138)
  • As discussed in Chapter 7, we often substitute a good (internally generated) narrative or story for a comparative (“outside”) analysis when we attempt to understand something unusual. We also often substitute pure association for comparison. This reliance on coherent “explanations” provides what is really an illusion of understanding, rather than understanding.
    In this chapter, I present the other side of the coin. That is, even when we have a perfectly valid statistical explanation for a phenomenon, we may ignore it because no “good story” accompanies it to persuade us that we should believe it.
    • Chapter 8, “Connecting Ourselves with Others, Without Recourse to a Good Story” (p. 141)
  • Many people operate as if there are two separate and equal sources of information—the self and others, where the number of others is irrelevant. The result is a “truly” false-consensus effect in the context of knowing one’s own plus a certain number of others’ responses.
    • Chapter 8, “Connecting Ourselves with Others, Without Recourse to a Good Story” (p. 148)
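
An editorial illustration, not from the book, of why the number of others should matter: with a uniform prior on the population proportion endorsing a response, Laplace's rule of succession gives the rational estimate, and one's own response is just one of the n observed responses:

```latex
% Rule of succession: after observing k agreements among n responses
% (your own plus n - 1 others), the posterior mean estimate is
\[
  \hat{p} \;=\; \frac{k + 1}{n + 2},
\]
% so your own response carries weight 1/n among the observations and
% should fade as n grows; treating "self" and "others" as two equal
% sources fixes that weight at 1/2 no matter how many others you know.
```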
  • I know better than to say “that’s absurd” to someone trained in Freudian analysis, because such a therapist will simply interpret such an assertion as confirmation of whatever is proposed.
    • Chapter 9, “Sexual Abuse Hysteria” (p. 158)
  • The world as postulated by the recovered-memory theorists is not an impossible one—just an extraordinarily unlikely one.
    • Chapter 9, “Sexual Abuse Hysteria” (p. 176)
  • Again, irrationality can hurt, and here we have evidence that a particular form of it is widespread. The people accused are hurt, and the clients—be they children or grown adults—are hurt. Irrationality is not simply an amusing diversion provided by tarot cards or Ouija boards.
    • Chapter 9, “Sexual Abuse Hysteria” (p. 179)
  • The Milgram studies led to a great deal of criticism from other academic and professional psychologists. Ironically, the major focus of this criticism was that the studies would “destroy faith in psychologists as authorities,” to which Milgram’s response was “Fine!”
    • Chapter 10, “Figure Versus Ground (Entry Value Versus Default Mode)” (p. 187)
  • Again, what cannot be is not, and what is can be regarded as an instance of what can be. Individuals who make pseudodiagnoses on the basis of “typical” characteristics—by attending only to the numerator of the likelihood ratio rather than to both numerator and denominator—will similarly be doomed to failure by making diagnoses that are not empirically supported. Because such a diagnostic procedure is based on irrationality, it cannot in general succeed. And similarly, people who argue that both the evidence and its negation support the same conclusion are arguing irrationally, and hence the conclusions will be empirically flawed. The principle is the same.
    • Chapter 11, “Rescuing Human Rationality” (p. 194)
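
The numerator-and-denominator language is Bayes' theorem in odds form. The numbers below are hypothetical, chosen only to show how a characteristic that is “typical” of a diagnosis (high numerator) can still be nearly nondiagnostic when it is also common in the diagnosis's absence (high denominator):

```latex
% Posterior odds = likelihood ratio x prior odds:
\[
  \frac{P(H \mid E)}{P(\neg H \mid E)}
  \;=\;
  \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
  \cdot
  \frac{P(H)}{P(\neg H)} .
\]
% Hypothetical numbers: a trait shown by 90% of those with the diagnosis
% but also by 85% of those without it yields LR = 0.90 / 0.85 ~ 1.06,
% i.e., almost no evidential force, however "typical" the trait feels.
```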
  • What causes the lunatic to demand that ideas not be subject to scrutiny—and in particular that they not be contradicted? No one knows. It may be part of a deliberate campaign to maintain power, an implicit admission of some semiconscious fear that the ideas might not be good, or just a common aspect of types of behavior that we associate with historical monsters. At least, the correlation is there.
    • Chapter 11, “Rescuing Human Rationality” (p. 212)
  • If we reject the idea of the “intrinsic rationality of whatever we do” (at least if we are not some sort of superb expert or monstrous political leader), then we must value scrutiny, which brings me to my final point: the necessity and value of a free society. When we scrutinize arguments, we often do so in a collective way….
    If disagreement can lead to the presentation of one’s remains in a body bag to one’s spouse, this type of scrutiny is horribly constrained. Such constraint in turn implies that irrational conclusions will go unchallenged, and again because irrationality implies impossibility, that lack of challenge in turn implies belief in false conclusions. Such belief harms both societies and individuals.
    • Chapter 11, “Rescuing Human Rationality” (pp. 212-213)
  • The realpolitik view of the individual human—that we are slaves to our desires and attitudes and that knowledge and rationality are necessarily secondary to these other factors—is simply wrong. We have the competence to be knowledgeable and rational, especially when we interact freely with each other. We can indeed change our minds. We can “bend over backward to be defense attorneys against our own pet ideas.” We can reconsider. We can be rational.
    • Chapter 11, “Rescuing Human Rationality” (p. 214)

External links

Wikipedia has an article about: Robyn Dawes