Are People Bayesian? Some Empirical Research

 

 

Extracted and adapted for the Web from "Value and Belief", a PhD thesis accepted by the University of Bristol, 2003.

Topics: philosophy of mind, inductive logic, mathematical psychology, philosophy of rationality

 

Abstract

 

Descriptive Bayesianism is the idea that predicting and explaining human behaviour in terms of probability and utility functions is at least as good as everyday belief-desire explanation. Yet it is well known that people can be bamboozled by the simplest probability calculations. One way to reconcile these two points is to say that belief-desire explanation only seems to work. Another way is to say that while people have a hard time understanding probability, their behaviour fits probabilistic models very well. This short note does not offer a definite answer as to how Bayesian people actually are, but considers some surprising empirical results on the subject.


This note is mainly about the rational analysis research programme in psychology. This means that we will examine classic experiments purporting to demonstrate human irrationality, and then survey reinterpretations that explain the subjects' performance in terms of subjective probability and utility functions. Before that, I will consider some experimental results that offer a warning about intentional explanation.

 

Cognitive Illusions

 

The sheer ubiquity of belief/desire explanation in human beings' understanding of each other might be taken as an argument for the effectiveness of the intentional stance. While this argument may well support the theory that intentional explanations are much more successful than chance, there are at least a couple of considerations that undermine this support. Specifically, common cognitive biases may exaggerate the perceived success of a subject's attempts to explain someone else's behaviour.

 

One bias that could have this effect is hindsight bias, in which subjects overestimate their ability to have predicted a past event. Fischhoff (1975) reports a study in which subjects demonstrated overconfidence about their ability to make a prediction from limited information. Subjects read passages of text describing a historical event (a battle in India) and were then told an outcome of the event, with different groups being given different, conflicting outcomes. Asked to rate the likelihood, given the original information, of each of four possible outcomes, the subjects rated the outcome they had been told as the most likely consequence of the initial information.

 

This was open to the complaint that a judgement of the likelihood of an outcome is meaningless when one has been told what the outcome is. However, a further element of the experiment reinforces the interpretation in terms of overconfidence. Each group of subjects was asked to pick out the statements from the initial text that were most strongly evidential as to the outcome. The original text had been constructed to have an equal number of statements supporting each possible outcome, but each group of subjects picked out as evidential the ones which pointed to the outcome they had been given. In cases where we make post-hoc predictions of another person's behaviour, knowing a number of things about their motivations and beliefs, we seem to have a very strong analogy to the Fischhoff experiment. If Phil throws a surprise birthday party for his wife, we can explain it in terms of his eagerness to revive his marriage; if he doesn't, we can explain that by his never having been keen on doing things out of the ordinary.

 

Since the ability to understand and predict other people's actions is desirable, it may also be subject to illusory superiority. This is the tendency of an overwhelming majority of subjects to report that they possess positive characteristics to a higher degree than most others or than the average other (see Hoorens (1993) for a review of this and similar biases).

 

The Rule-Discovery Task

 

Now we turn from results which are downbeat about intentional explanation to some arguments (specifically, re-interpretations of classic experiments) which are upbeat, especially about descriptive Bayesianism. A supposed canonical demonstration of human irrationality is the rule discovery task used by Wason (1960). Subjects had to identify a rule governing triples of numbers, having been told that "2,4,6" fits the rule. They could perform "tests" by suggesting a triple and asking if it followed the rule. Subjects rapidly formed hypotheses, such as "Numbers increasing by two each time" or "Increasing even numbers." The correct rule, "Any numbers in increasing order," was broad enough to logically include the hypotheses that subjects usually came up with. One learns that one's hypothesis is insufficiently broad by offering a test which does not conform to the hypothesis (such as "3,5,6" for either of our examples) and finding that it does obey the experimenter's rule. Subjects, however, very rarely made this sort of test, almost always preferring tests that were in accordance with their hypothesis. Those who hypothesised "increasing by two each time" would suggest "4,6,8" and so on. Wason interpreted this as a "confirmation bias", meaning a bias towards seeking ambiguous confirmations of a hypothesis rather than decisive falsifications.

 

Klayman and Ha (1987) provided a strongly contrasting analysis of these results. To follow their argument we have to keep two distinctions separate. One distinction relates to the evidential significance of the test; whether it is expected to falsify or verify a hypothesis. The other distinction is how the test relates to the hypothesis; whether it is a positive case of the hypothesis or not. In Wason's experiment, these distinctions happen to cut the same way. Rather than concluding that subjects are avoiding a decisive falsification, we are free to conclude that they are attached to positive-case tests. This is the route taken by Klayman and Ha, and they justify this as the rationally correct procedure in terms of the expected informativeness of the result of the test. If the subject's hypothesis and the experimenter's rule are disjoint, for example, then a positive-case test will decisively falsify the hypothesis, because any triple that conforms to the hypothesis will not obey the experimenter's rule. If the correct rule is a narrow subset of the hypothesis, then positive-case tests are the only way to provide a falsification.

 

It follows that falsifications could be more likely to come from positive-case tests or from negative-case tests, depending on the likelihoods of various possibilities; that the target set is a subset of the initial hypothesised set, a superset of it, disjoint from it and so on. Klayman and Ha produced a probabilistic model that demarcated the conditions under which positive- and negative-case tests would be most informative.
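To make the point concrete, the sketch below (in Python) computes the probability that a single positive-case or negative-case test falsifies the hypothesis under different relationships between the hypothesised set and the target set. The universe and set sizes are illustrative assumptions of my own, not Klayman and Ha's model, but the qualitative pattern is the one their analysis describes.

```python
def falsification_probabilities(universe, hypothesis, target):
    """Probability that a single random test falsifies the hypothesis.

    A positive-case test picks an item the hypothesis says fits the rule; it
    falsifies if that item is in fact outside the target set. A negative-case
    test picks an item the hypothesis says does not fit; it falsifies if that
    item is in fact inside the target set.
    """
    positive_pool = hypothesis
    negative_pool = universe - hypothesis
    p_pos = len(positive_pool - target) / len(positive_pool) if positive_pool else 0.0
    p_neg = len(negative_pool & target) / len(negative_pool) if negative_pool else 0.0
    return p_pos, p_neg

universe = set(range(100))
hypothesis = set(range(0, 40))

scenarios = {
    # Target is a narrow subset of the hypothesis: only positive-case tests can falsify.
    "target inside hypothesis": set(range(0, 10)),
    # Hypothesis and target are disjoint: every positive-case test falsifies.
    "target disjoint from hypothesis": set(range(60, 70)),
    # Target is broader than the hypothesis (Wason's actual rule): only
    # negative-case tests can falsify.
    "target contains hypothesis": set(range(0, 80)),
}

for name, target in scenarios.items():
    p_pos, p_neg = falsification_probabilities(universe, hypothesis, target)
    print(f"{name}: P(falsify | positive-case test) = {p_pos:.2f}, "
          f"P(falsify | negative-case test) = {p_neg:.2f}")
```

When the target is narrower than the hypothesis, only positive-case tests can produce a falsification; when it is broader, as in Wason's actual rule, only negative-case tests can.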

 

The Wason task is explained to the subjects as an attempt to find a target rule (or set). Target sets are almost always minorities and usually very small minorities; a given property usually applies to only a small proportion of things. Hence the target set is more likely to be a subset than a superset of a broad initial hypothesis. These are the conditions under which positive-case tests (the tests that Wason's subjects used) are more likely to produce informative, decisive falsifications. Hence the subjects were not only adopting an entirely rational approach to the task but also generalising inductively from other cases of rule-discovery, thereby making use of information that might have had a bearing on the problem. It is still true that the subjects' strategies were irrational for discovering a rule that was extremely broad, but then they did not have the information that the rule was of this nature. Klayman and Ha point to the alternative version of the Wason experiment performed by Tweney et al. (1980). Rather than telling subjects that triples did or did not fit a rule, they classified the triples as "DAX" or "MED". This avoided labelling one set of triples as a "target", and subjects in this revised experiment were much more successful at finding the correct rule.

 

By reinterpreting this one experiment, one does not entirely disprove the existence of confirmation bias. It has been argued that confirmation bias is demonstrated in other experiments (Snyder and Swann (1978)). What it does show is that this supposedly irrational behaviour can be understood as optimising something desirable, namely the informativeness of a test, in light of the subject's probabilities.

 

Rational Analysis

 

Although the label was not in use at the time of their paper, Klayman and Ha were adopting a "Rational Analysis" approach, which has since been championed most prominently by Oaksford and Chater (see Chater and Oaksford (1999) for a review paper and Oaksford and Chater (1998) for an edited volume on the topic).

 

Oaksford and Chater's (1994) canonical demonstration of the rational analysis approach is a reinterpretation of the Wason selection task. There are many variations on this task, but they all require the subject to test a conditional statement, i.e. something of the form "For all x, (if Fx then Gx)". The subjects are told they can perform four possible tests:

  • Examine an F to see whether or not it is G.
  • Examine a not-F to see whether or not it is G.
  • Examine a G to see whether or not it is F.
  • Examine a not-G to see whether or not it is F.

 

The task is to identify which of these tests are needed to establish the truth or falsity of the conditional rule. In a version of the task at which people are particularly successful, the rule is that anyone drinking in a bar is over the legal drinking age. Subjects realise that they need to examine anyone drinking (to see if they are of the legal age) and anyone underage (to see if they are drinking). Wason's original task (Wason (1968)) presented a much more abstract rule: that if a letter written on a card is an A then a number written on the other side is a 3. Subjects mostly chose the A card and the 3 card, or just the A card, rather than the A card and a card showing a number other than 3. In terms of the abstract framework set out above, they tested the F and the G, or just the F, rather than the F and the not-G.

 

The "correct answer" in the above problem recalls the "paradox of the ravens" in the philosophy of science. No-one seriously suggests that "All ravens are black" should be tested by examining all non-black things (the not-Gs), so this should make us question whether examining not-Gs is really the universally best course of action. Chater and Oaksford (1999) give as an example the rule, "Each saucepan that is dropped makes a clanging sound" (my paraphrase). The exhortation "Test Fs and not-Gs" would lead us not only to drop saucepans but also to examine every event where we do not hear that sound, to see if a dropped saucepan is involved. We do not do this, but our failure to test not-Gs does not seem to be an example of irrationality.

 

Applying Information Theory

 

Oaksford and Chater's analysis assumes that subjects will treat the task as an inductive problem in which they try to decrease uncertainty, in the sense of Shannon's information theory. Like Klayman and Ha, they assume that subjects treat properties as rarely instantiated, meaning that the subjective baseline probability of any property is small. Let us assume that the probability of something being F or G is small. Then on testing a not-G, we would expect to find that it is not-F, even if the conditional rule does not hold and the properties are independent. So the test of not-G is not very informative, in that it is expected to produce only a small decrease in uncertainty, or to put it another way the expected outcome of the test does not distinguish the two rival hypotheses. If Fs and Gs are both rare, then, counter-intuitively, observation of a G can be informative. If the properties are independent, then a G is unlikely to be an F. Conversely, if all Fs are Gs and there are almost as many Fs as Gs then it follows that most Gs are Fs. Thus the observation of a G is actually informative as to whether or not the conditional rule holds.
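The sketch below works this out numerically. It is a minimal model in the spirit of Oaksford and Chater's optimal data selection analysis, not their exact formulation: the marginal probabilities P(F) = 0.1 and P(G) = 0.2 and the uniform prior over the two hypotheses (dependence versus independence) are illustrative assumptions of my own.

```python
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a two-hypothesis distribution (p, 1 - p)."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def expected_information_gain(p_f, p_g):
    """Expected reduction in uncertainty about whether 'all Fs are Gs' holds
    (the dependence hypothesis) versus F and G being independent, for each of
    the four cards. Assumes p_f <= p_g and a 0.5 / 0.5 prior over the hypotheses."""
    # P(hidden face | visible face, hypothesis), as (dependence, independence) pairs.
    likelihoods = {
        "F":     {"G": (1.0, p_g),
                  "not-G": (0.0, 1 - p_g)},
        "not-F": {"G": ((p_g - p_f) / (1 - p_f), p_g),
                  "not-G": ((1 - p_g) / (1 - p_f), 1 - p_g)},
        "G":     {"F": (p_f / p_g, p_f),
                  "not-F": (1 - p_f / p_g, 1 - p_f)},
        "not-G": {"F": (0.0, p_f),
                  "not-F": (1.0, 1 - p_f)},
    }
    prior = 0.5
    gains = {}
    for card, outcomes in likelihoods.items():
        expected_posterior_entropy = 0.0
        for p_dep, p_ind in outcomes.values():
            p_outcome = prior * p_dep + (1 - prior) * p_ind  # marginal probability of the hidden face
            if p_outcome > 0:
                posterior_dep = prior * p_dep / p_outcome    # posterior belief in the dependence hypothesis
                expected_posterior_entropy += p_outcome * entropy(posterior_dep)
        gains[card] = entropy(prior) - expected_posterior_entropy
    return gains

# Rarity assumption: both properties are uncommon.
for card, gain in sorted(expected_information_gain(p_f=0.1, p_g=0.2).items(),
                         key=lambda item: -item[1]):
    print(f"{card:>5}: expected information gain = {gain:.3f} bits")
```

Running this with the rarity assumption gives by far the largest expected gain for the F card and the smallest for the not-F card.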

 

The theoretical information content of each test follows the ordering [F > G > not-G > not-F]. This matches the patterns of answers that subjects give in the abstract Wason task. Again we have a putative demonstration of irrationality where subjects are giving answers that turn out to be optimal according to a rational choice model, and where they seem to be making an inductive inference from similar kinds of task. The probabilistic model invoked might be called ad-hoc, but it ascribes a sensible and plausible use of information to the subject. More importantly, the very fact that it is possible to fit a rational choice model to the subject's behaviour, even if it seems ad hoc, shows that the Bayesian intentional stance is not ruled out by the alleged irrationality. When presented with the more familiar task involving drinking age, subjects are perhaps bringing a lot more information into the hypothetical situation, both about the base rates of different properties and about the penalties for making a mistake.

 

Deduction versus Induction?

 

The deductive logic rationale which says that G is totally uninformative is very appealing and is obviously in some sense the right answer. However, the rationale in terms of uncertainty reduction in an inductive problem is also appealing, yet they give contrary answers. The resolution seems to be that the former, deductive solution holds when we are considering a closed world of only four items. On the other hand, when we have an indefinite number of items it is no longer the case that testing an F and a not-G resolves the question of whether the conditional rule holds, so one has to take an uncertainty-reduction approach.

 

The claim of rational analysis is not that people explicitly frame problems in terms of probabilities or expected information, or even that they understand these concepts. It just says that we can predict subjects' behaviour quite successfully by ascribing subjective probabilities to them. In this way, the rational analysis approach is an application of what I am calling descriptive Bayesianism in experimental contexts.

 

Rational Forgetting

 

Chater traces the origins of the rational analysis approach to the work of Anderson. Anderson (1990) gives a very counterintuitive account of the process of forgetting. It may seem obvious that the gradual loss of information over time due to forgetting is an irrational feature of human beings, leading to inconsistent preferences or choices over time and inefficient use of information. It would seem to be the canonical example of something to be explained in terms of a physical process rather than belief and desire. However, even this can be given a rational analysis.

 

In a storage area for many different pieces of information, such as a hard drive, library or brain, each piece of information needs a "key" such as a library catalogue number so it can be picked out from the others. The shortest length that this key can take is proportional to the logarithm of the number of items in the archive. The longer the key that has to be generated and processed, the slower the retrieval. Hence it is not always a good idea to put information into such an archive, nor is it a good idea to keep every piece of information permanently. As more information is packed into the archive, the key approaches the same size as the item being retrieved, and so the overall "efficiency" of the library is eliminated. Imagine generating catalogue numbers for a library with all possible books up to a certain length, or file names for a computer disk filled with all possible programs up to a certain length. In a system which recalls pieces of information on the basis of association relations such as similarity, as the brain seems to do, an excess of stored information will result in an avalanche of "similar" items being elicited by any input, potentially swamping the particular information that is most useful to the subject.
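As a small illustration of the key-length point (the archive sizes below are arbitrary examples, not drawn from the thesis):

```python
import math

# Minimum key length (in bits) needed to give every item in an archive a
# unique key: it grows with the logarithm of the number of items, and so does
# the work of generating and processing the key at retrieval time.
for n_items in (1_000, 1_000_000, 1_000_000_000):
    key_bits = math.ceil(math.log2(n_items))
    print(f"{n_items:>13,} items -> at least {key_bits} bits per key")
```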

 

Hence a good information storage system will only keep those items that are likely to be needed. Anderson created a model with a plausible distribution for usefulness, which says that, context being constant, the likelihood of an item being needed is a function of the time since it was last used. This model fits empirical data on forgetting; one unexpected and successful prediction is that the rate of forgetting is affected by the likelihood that the information will be tested in the near future.
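A minimal sketch of this kind of cost-benefit rule is given below. The power-law decline in need probability and the numerical cost and benefit values are illustrative assumptions of my own, not Anderson's fitted parameters; the point is only that a system which drops items once their expected usefulness falls below the cost of keeping them will exhibit a forgetting curve.

```python
def need_probability(days_since_last_use, decay=0.5):
    """Illustrative power-law decline in the probability that an item will be
    needed, as a function of the time since it was last used."""
    return (1 + days_since_last_use) ** (-decay)

def worth_keeping(days_since_last_use, benefit=1.0, retrieval_cost=0.05):
    """Keep an item only while its expected benefit (need probability times the
    value of having it) still exceeds the cost of storing and searching past it.
    The cost and benefit figures are made-up illustrative numbers."""
    return need_probability(days_since_last_use) * benefit > retrieval_cost

for days in (0, 1, 7, 30, 365, 3650):
    p = need_probability(days)
    print(f"{days:>5} days since last use: need probability ~ {p:.3f}, "
          f"keep = {worth_keeping(days)}")
```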

 

Again it should be explained that the cost-benefit analysis of keeping information in memory is not supposed to be a decision that the subject is in any sense conscious of making. What Anderson's analysis shows is the predictive power of a belief/desire model applied to putatively irrational human behaviour. Taking the rational analysis approach, one can still accept that human beings have many kinds of cognitive failing, but at least some central results in this area are actually successes for descriptive Bayesianism. In Part 4, "Rationality and Science," I will take a closer look at different kinds of rationality and the sense in which the basic rationality assumed by the intentional stance can underlie irrationality in the scientific or instrumental senses.

 

Rational Irrationality: Extreme Cases

 

As another example of the rationality of seemingly irrational behaviour, consider insanity and unreasonableness. It may seem tautological to say that unreasonableness is irrational, but irrationality in a certain sense can be rational and optimal in an underlying sense. Consider the game of "chicken" in which players drive cars at each other and the "loser" is the first person to swerve. An irrational player who will not swerve (and who is known to be irrational) is at an advantage over a rational player who does a cost-benefit calculation of swerving versus not swerving (Poundstone (1993)). Negotiations arguably work in a similar way, giving an advantage to the party that can credibly appear stubborn and unreasonable.
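A toy payoff table makes the logic explicit. The numbers below are made-up illustrative payoffs, not taken from Poundstone; the point is only that against a driver who is known never to swerve, the rational best reply is to swerve, which hands the "irrational" driver the better outcome.

```python
# Illustrative, symmetric payoffs for the game of chicken (made up for this
# sketch): the value is the payoff to the row player for (row action, column action).
payoff_to_row = {
    ("swerve",   "swerve"):   0,
    ("swerve",   "straight"): -1,    # backing down against a driver who keeps going
    ("straight", "swerve"):   1,     # winning the contest of nerve
    ("straight", "straight"): -10,   # crash
}

def best_reply(opponent_action):
    """A rational driver's best response to an opponent whose action is known."""
    return max(("swerve", "straight"),
               key=lambda action: payoff_to_row[(action, opponent_action)])

# A driver who is known to be committed to going straight forces the rational
# opponent to swerve; by symmetry the committed driver then collects the payoff of 1.
rational_reply = best_reply("straight")
print("Rational reply to a committed driver:", rational_reply)                    # swerve
print("Committed driver's payoff:", payoff_to_row[("straight", rational_reply)])  # 1
```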

 

Consider a case of out-and-out insanity, such as a responsible professional person who suddenly claims to be Napoleon and makes decisions seemingly at random. This could be explained as a rational choice in the following way. The pressures and responsibilities they face might make life unbearable, but an admission that this is the case would mean humiliation. Rather than admit an inability to fulfil one's responsibilities, it is much preferable to present oneself as prevented by disability from fulfilling them. Consider how someone who is unable to walk to the shops because of an injury is treated very differently from someone who is too lazy. The concept of insanity, which is thought of as a disability or illness, allows this sort of way out. Hence it is in the subject's rational interest to seem suddenly and radically out of touch with reality and unable to take decisions. A way to convey this is to claim something that is obviously not the case; claiming to be Napoleon, for example.

 

This is not to deny the existence of real disorders of mental function. Instead it shows that even in explaining "insane" behaviour we do not necessarily have to abandon intentional explanation.

 
