Thursday, September 30, 2010

Explaining Irrationality (by guest blogger G. Randolph Mayes)

In one of the last papers he wrote before dying almost exactly one year ago, John Pollock posed what he called “the puzzle of irrationality”:

Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question ... [Assuming] rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally?
Consider just one example, taken from Philip Johnson-Laird’s recent book How We Reason: Paolo went to get the car, a task that should take about five minutes, yet 10 minutes have passed and Paolo has not returned. Which is more likely to have happened?
1. Paolo had to drive out of town.

2. Paolo ran into a system of one way streets and had to drive out of town.
The typical reader of this blog probably knows that the answer is 1. After all (we reason) 2 can’t be more likely, since 1 is true whenever 2 is. But I’ll bet you felt the tug of 2 and may still feel it. (This human tendency to commit the ‘conjunction fallacy’ was famously documented by the Israeli psychologists Daniel Kahneman and Amos Tversky.)
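
For the quantitatively inclined, here is a minimal Monte Carlo sketch of that set-inclusion argument in Python. The base rates (30% and 50%) are invented purely for illustration; the only thing that matters is that scenario 2 is counted only on trials where scenario 1 also holds, so its estimated probability can never exceed scenario 1's.

```python
import random

random.seed(42)

# Minimal sketch: the 0.30 and 0.50 rates below are hypothetical,
# chosen only to illustrate the conjunction rule P(A and B) <= P(A).
N = 100_000
count_option_1 = 0  # Paolo had to drive out of town
count_option_2 = 0  # one-way streets AND out of town

for _ in range(N):
    out_of_town = random.random() < 0.30
    # Scenario 2 entails scenario 1: it can only occur on
    # trials where Paolo drove out of town in the first place.
    one_way_streets = out_of_town and random.random() < 0.50
    count_option_1 += out_of_town
    count_option_2 += one_way_streets

print(f"estimated P(option 1) = {count_option_1 / N:.3f}")  # about 0.30
print(f"estimated P(option 2) = {count_option_2 / N:.3f}")  # about 0.15, never above option 1
```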

So we feel the pull of wrong answers, yet are (sometimes) capable of reasoning toward the correct ones.

Pollock wanted to know why we are built this way. Given that we can use the rules that lead us to the correct answers, why didn’t evolution just design us to do so all the time? Part of his answer -- well supported by the last 50 years of psychological research -- is that most of our beliefs and decisions are the result of ‘quick and inflexible’ (Q&I) inference modules, rather than explicit reasoning. Quickness is an obvious fitness-conferring property, but the inflexibility of Q&I modules means that they are prone to errors as well. (They will, for example, make you overgeneralize, avoiding all spiders, snakes, and fungi rather than just the dangerous ones.)

Interestingly, though, Pollock does not think human irrationality is simply a matter of the error-proneness of our Q&I modules. In fact, he would not see a cognitive system composed only of Q&I modules as capable of irrationality at all. For Pollock, to be irrational, an agent must be capable of both monitoring the outputs of her Q&I modules and overriding them on the basis of explicit reasoning (just as you may have done above). Irrationality, then, turns out to be any failure to override these outputs when we have the time and information needed to do so. Why we are built to often fail at this task is not entirely clear. Pollock speculates that it is a design flaw resulting from the fact that our Q&I modules are phylogenetically older than our reasoning mechanisms.

I think on the surface this is actually a very intuitive account of irrationality, so much so that it is easy to miss the deeper implications of what Pollock has proposed here. Most people think of rationality as a very special human capacity, the ‘normativity’ of which may elude scientific understanding altogether. But for Pollock, rationality is something that every cognitive system has simply by virtue of being driven by a set of rules. Human rationality is certainly interesting in that it is driven by a system of Q&I modules that can be defeated by explicit reasoning. What really makes us different, though, is not that we are rational, but that we sometimes fail to be.

Brie Gertler and I Argue about Introspection on Philosophy TV

here.  For what it's worth, I thought it went pretty well.  We were able to home in on some of our central points of disagreement and push each other on them a bit.

Tuesday, September 28, 2010

Are Ethicists Any More Likely to be Blood or Organ Donors Than Are Other Professors?

Short answer: no.  Not according to self-report, at least.

These results come from Josh Rust's and my survey of several hundred ethicists, non-ethicist philosophers, and professors in other departments. (Other survey results, and more about the methods, are here, here, here, here, here, here, here, and here.)

Before asking for any self-reports of behavior, we asked survey respondents to rate various behaviors on a nine-point scale from "very morally bad" through "morally neutral" to "very morally good". Among the behaviors were:

Not having on one’s driver’s license a statement or symbol indicating willingness to be an organ donor in the event of death,
and
Regularly donating blood.
In both cases, ethicists were the group most likely to praise or condemn the behavior (though the differences between ethicists and non-ethicist philosophers were not statistically significant).  60% of ethicists rated not being an organ donor on the "morally bad" side of the scale, compared to 56% of non-ethicist philosophers and 42% of non-philosophers (chi-square, p = .001).  And 84% of ethicists rated regularly donating blood on the "morally good" side of the scale, compared to 80% of non-ethicist philosophers and 72% of non-philosophers (chi-square, p = .01).

In the second part of the questionnaire, we asked for self-report of various behaviors, including:
Please look at your driver’s license and indicate whether there is a statement or symbol indicating your willingness to be an organ donor in the event of death,
and
When was the last time you donated blood?
The groups' responses to these two questions were not statistically significantly different: 67% of ethicists, 64% of non-ethicist philosophers, and 69% of non-philosophers reported having an organ donor symbol or statement on their driver's license (chi-square, p = .75); and 13% of ethicists, 14% of non-ethicist philosophers, and 10% of non-philosophers reported donating blood in 2008 or 2009 (the survey was conducted in spring 2009; chi-square, p = .67).  A related question asking how frequently respondents donate blood also found no detectable difference among the groups.
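
For readers curious how such a comparison is run, here is a rough sketch of a chi-square test of independence in Python (using scipy's implementation of the test). The raw group sizes aren't reported above, so the counts below are hypothetical, assuming 100 respondents per group at the reported organ-donor percentages.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: 100 respondents per group, matching the
# reported 67% / 64% / 69% organ-donor rates. Columns are
# donor symbol present / absent.
donor_table = [
    [67, 33],  # ethicists
    [64, 36],  # non-ethicist philosophers
    [69, 31],  # non-philosophers
]

chi2, p, dof, _expected = chi2_contingency(donor_table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.2f}")
# With counts like these, p comes out around .75 -- no detectable
# difference among the three groups, consistent with the result above.
```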

These results fit into an overall pattern that Josh Rust and I have found: Professional ethicists appear to behave no better than do other professors.  Among our findings so far:
  • Arbitrarily selected ethicists are rated by members of their own departments as, overall, no morally better behaved than are arbitrarily selected specialists in metaphysics and epistemology (Schwitzgebel and Rust, 2009).
  • Ethicists, including specialists in political philosophy, are no more likely to vote than are other professors (though Political Science professors are more likely to vote than are other professors; Schwitzgebel and Rust, 2010).
  • Ethics books, including relatively obscure books likely to be borrowed mostly by professors and advanced students in philosophy, are more likely to be missing from academic libraries than are other philosophy books (Schwitzgebel, 2009).
  • Although ethics professors are much more likely than are other professors to say that eating the meat of mammals is morally bad, they are just about as likely to report having eaten the meat of a mammal at their previous evening meal (Splintered Mind post, May 22, 2009).
  • Ethics professors appear to be no more likely to respond to undergraduate emails than are other professors (Splintered Mind post, June 16, 2009).
  • Ethics professors were statistically marginally less likely to report staying in regular contact with their mothers (Splintered Mind post, August 31, 2010).
  • Ethics professors did not appear to be any more honest, overall, in their questionnaire responses, to the extent that we were able to determine patterns of inaccurate or suspicious responding (Splintered Mind post, June 4, 2010).
Nor is it the case, for the normative questions we tested, that ethicists tend to have more permissive moral views.  If anything (as with organ donation), they tend to express more demanding moral views.

All this evidence, considered together, creates, I think, a substantial explanatory challenge for the approximately one-third of non-ethicist philosophers and approximately one-half of professional ethicists who -- in another of Josh's and my surveys -- expressed the view that on average ethicists behave a little morally better than do others from socially comparable groups.

We do have preliminary evidence, however, that environmental ethicists litter less.  Hopefully we can present that evidence soon.  (First, we have to be sure that we are done with data collection.)

Friday, September 24, 2010

Graduate Student Conference on... Me?

Well, kind of. The full title is:

CoxiMAP: Mind, Action, and Perception II:
Graduate Conference on the Work of Eric Schwitzgebel, the Epistemological Status of First-Person Methodology in Science, and the Metaphysics of Belief

It's in Osnabrueck, Germany, Jan. 21-23, and presenters will be awarded 150 Euros toward travel costs.  The submission deadline is soon: October 20 (300-word abstract, with a full paper in English of 5000 words), sent to Sascha Fink at safink [at domain] uos.de.  Full-length call for papers here.

I have been assured that submissions in philosophy of mind and/or epistemology more generally will also be welcomed.

I will also be giving a series of talks in Osnabrueck:

Jan. 20: "Shards of Self-Knowledge"
Jan. 21: "The Problem of Known Illusion and the Problem of Unreportable Illusion"
Jan. 22: "The Moral Behavior of Ethics Professors"

Also on Jan. 21, I will lead a tutorial and discussion on experience sampling.

Thursday, September 23, 2010

Perplexities of Consciousness: cover design

MIT Press has shown me the cover design for my forthcoming book, Perplexities of Consciousness:


The mind-bending cover art?  That's Pete Mandik's exomusicology (acrylic on canvas, 2001, photo by Rachelle Mandik). (You can get a bit of a closer look at another version of exomusicology here.)

Monday, September 20, 2010

How to Get a Big Head in Academia

Step 1: Get a tenure-track job at a research-oriented institution.

Step 2: Publish some stuff.

Step 3: Get tenure.

After Steps 1-3 -- which are, admittedly, something of a challenge -- the rest comes naturally!

Step 4: Read some stuff. You might especially find yourself reading material related to the subtopics on which you yourself have been publishing -- especially if any of that material cites your own work. The stuff that you choose to read will become especially salient to you in your perception of your field. (So also, of course, will your own publications.)

Step 5: Attend some meetings. The talks you see, the people you gravitate toward, will tend to discuss the same things you do. The field will thus seem to revolve around those issues. If other presenters in your area know you are around, they will be especially careful to mention your important contributions. You might even be sought out by a graduate student or two. That student or two will seem to you representative of all graduate students.

Step 6: Acquire some graduate students. They will tell you that your work is terrific and centrally important to the field.

Step 7: Read some emails. The people who like your work and think it is important are much more likely to email you than the people who ignore your work and think it's crappy. Also, the content of people's emails will tend to highlight what they like or, if critical, will frame that criticism in a way that makes it seem like a crucial issue on which you have taken an important public stand. (Additionally, the criticism will almost always be misguided, demonstrating your intellectual superiority to your critic.)

Finally: Given all the valuable input you have received through reading, attending conferences, talking to graduate students, and professional correspondence, it will seem clear to you that your field (post-Schnerdfootian widget studies) is central to academia, that the issues you are writing on are the most important issues within that field, and that your own contributions are centrally important to the academic understanding of those issues.

Sadly, your colleagues will not seem to fully appreciate this fact.

Tuesday, September 14, 2010

Can We All Become Delusional with Hypnosis? by guest blogger Lisa Bortolotti

Recent studies on hypnosis have suggested that delusions can be temporarily created in healthy subjects (see work by Amanda Barnier and Rochelle Cox). When you are given a hypnotic suggestion that you will see a stranger when you look in the mirror, it is probable that your behaviour in the hypnotic session will strikingly resemble that of a patient with a delusion of mirrored-self misidentification. Both the hypnotic subject and the delusional patient deny that they see themselves in the mirror and claim instead that they see a stranger who looks a bit like them. Their beliefs are resistant to challenges and often accompanied by complex rationalisations of their weird experience.

Why would we want to create delusions in healthy subjects? It’s difficult to study the phenomenon of delusions in the wild, and especially the mechanisms responsible for their formation. Here are some reasons why we may need the controlled environment of the lab:

1. it is not always possible to investigate a clinical delusion in isolation from states of anxiety or depression that affect behaviour -- comorbidity makes it harder to detect which behaviours are due to the delusion under investigation, and which are present for independent reasons;

2. ethical considerations significantly constrain the type of questioning that is appropriate with clinical patients because it is important to avoid causing distress to them, and to preserve trust and cooperation, which are beneficial for treatment;

3. for delusions that are rare, such as the delusion of mirrored-self misidentification, it is difficult to find a sufficient number of clinical cases for a scientific study.
Evidence from the manifestation of hypnotically induced delusions has the potential to inform therapy for clinical delusions. Moreover, the use of hypnosis as a model for delusions can also inform theories of delusion formation, as analogies can be found in the underlying mechanisms. There are good reasons to expect that the hypnotic process results in neural patterns that are similar to those found in the clinical cases.

Given that during the hypnotic session healthy subjects engage in behaviour that is almost indistinguishable from that of clinical patients, reflecting on this promising research programme can not only help the science of delusions, but also invite us to challenge the perceived gap between the normal and the abnormal.

[This is Lisa's last guest post. Thanks, Lisa!]

Friday, September 10, 2010

How Big the Moon Is, According to One Three-Year-Old

A conversation I had last night with my daughter Kate, three years and seven months old:

Me: Which is bigger, the moon or the house?

Kate: The house.

Me: Which is bigger, the moon or a tree?

Kate: A tree.

Me: Which is bigger, the moon or a quarter?

Kate: No.

Me: No? What?

Kate: They're little.

Me: Which is smaller, the moon or a quarter?

Kate: A quarter.

Me: Which is smaller, the moon or a peanut butter jar?

Kate: The moon.

Me: So the moon is between a quarter and a peanut butter jar?

Kate: That's right, Daddy, you got it!

(See also my posts Development of the Moon Illusion? and How Far Away Is the Television Screen of Visual Experience?)

Sunday, September 05, 2010

Are People Responsible for Acting on Delusions? by guest blogger Lisa Bortolotti

Consider this case. Bill suffers from auditory hallucinations in which someone is constantly insulting him. He comes to believe that his neighbour is persecuting him in this way. Exasperated, Bill breaks into the neighbour’s flat and assaults him. Is Bill responsible for his action? Matthew Broome, Matteo Mameli and I have discussed a similar case in a recent paper. On the one hand, even if it had been true that the neighbour was insulting Bill, the violence of Bill’s reaction couldn’t be justified, and thus it is not obvious that the psychotic symptoms are to blame for the assault. On the other hand, psychotic symptoms such as hallucinations and delusions don’t come in isolation, and it is possible that if Bill hadn’t suffered from a psychiatric illness, then he wouldn’t have acted as he did.

In the philosophy of David Velleman, autonomy and responsibility are linked to self narratives. We tell stories about ourselves that help us recollect memories about past experiences and that give a sense of direction to our lives. Velleman’s view is that these narratives can also produce changes in behaviour. Suppose that I have an image of myself as an active person, but recently I have been neglecting my daily walk and spending the time in front of the TV. So I tell myself: “I have to get out more or I’ll become a couch potato”. I want my behaviour to match my positive self-image so I can become the person I want to be. Our narratives don’t just describe our past but can also issue intimations and shape the future.

According to Phil Gerrans, who has applied the notion of self narratives to the study of delusions, when experiences are accompanied by salience, they become integrated in a self narrative as dominant events. People with delusions tend to ascribe excessive significance to some of these experiences and, as a result, thoughts and behaviours acquire pathological characteristics (e.g. as when Bill is exasperated by the idea of someone insulting him). Gerrans’ account vindicates the apparent success of medication and cognitive behavioural therapy (CBT) in the treatment of delusions. Dopamine antagonists stop the generation of inappropriate salience, and by taking such medication, people become less preoccupied with their abnormal experiences and are more open to external challenges to their pathological beliefs (“How can I hear my neighbour’s voice so clearly through thick walls?”). In CBT people are encouraged to refocus attention on a different set of experiences from those contributing to the delusional belief, and to stop weaving the delusional experiences into their self narratives by constructing scenarios in which such experiences make sense even if the delusional belief were false (“Maybe the voice I’ve heard was not my neighbour’s”).

As Gerrans explains, self narratives are constructed unreliably in the light of abnormal experiences and delusional beliefs. If we take seriously the idea that self narratives may play an important role in the governance of behaviour, and accept that narratives constructed by people with delusions are unreliable, then it’s not surprising that people with delusions are not very successful at governing themselves.

Thursday, September 02, 2010

Philosophy TV Launch

The brand-new, launched-today Philosophy TV site promises to showcase conversations between philosophers, akin to Bloggingheads.tv. Unlike Bloggingheads.tv, Philosophy TV will be dedicated solely to philosophy. The inaugural episode features Tamar Gendler and me chatting/arguing about implicit association and belief. Unfortunately, I can't view the episode myself yet because my sound card is busted.

The next few weeks promise Jamie Dreier and Mark Schroeder, Craig Callender and Jonathan Schaffer, Peter Singer and Michael Slote, Ken Aizawa and Mark Rowlands, and Andy Egan and Joshua Knobe. Pretty impressive lineup!