Things are quiet. People are on break -- visiting family, like me -- or they're sweating it out at the Eastern APA.
But maybe some of you visitors will do me a favor and answer this: How do you know you're not dreaming? Presumably you do know, right? Genuine radical skeptics are few. What I'm asking is how you know -- on what basis or by what means.
I have my own opinions about this which I'll post later, but first I'm curious to hear from some of you....
Sunday, December 30, 2007
Monday, December 24, 2007
Friday, December 21, 2007
If we suppose that introspection is a causal process between two distinct events, it's hard to see how infallibilism could be plausible. What sort of event can't be brought about in strange ways? If we suppose, for example, that introspective judgment is a brain process, couldn't -- at least in principle, by dint of genius neuroscience, but probably much more easily than that -- that brain process be brought about non-standardly?
One way out of this is to deny that the introspective judgment and the introspected conscious experience are indeed distinct events ("distinct existences" in Shoemaker's sense). For example, one might "contain" the other, as is sometimes suggested (e.g., Shoemaker, Burge). Consider as an analogy: "This sentence contains the word 'pixie'". The sentence is infallibly true wherever it appears because the conditions of its existence are a subset of the conditions of its truth. Could introspection work the same way?
Well, one fella's modus ponens is another's modus tollens: If containment implies infallibility, the case against infallibility is, I think, so compelling that we ought to deny containment. But let's consider containment independently of that. Does the judgment, "I'm visually experiencing redness" (for example) contain a visual experience of redness? Does it itself, somehow, contain the phenomenology of red -- not merely assert the existence of red phenomenology but actually include that phenomenology?
Let's suppose -- I don't quite buy this, but it's probably close enough for the purposes of this argument -- that the components of judgments are concepts. Concepts may be reshuffled and combined to make new judgments, right? Now the judgment "I'm visually experiencing something caused by Martians" cannot literally contain something caused by Martians because nothing is caused by Martians. And the judgment "Looking at Mars can cause people to visually experience redness" cannot contain an actual experience of redness because it can be uttered by a blind woman. But now we can recombine elements of the two to get "I'm visually experiencing redness". It's odd to suppose that this recombined product must contain actual red phenomenology if it's composed only of elements none of which contain that phenomenology and that can occur independently of it.
Or: The judgment "I visually experienced redness" does not contain red phenomenology (since I might now be experiencing no redness). Similarly for the judgment "I will visually experience redness". Is the present-tense version of this judgment so radically different in structure from the past and future tenses that it must contain redness -- a totally different kind of thing from what the others contain -- while the others don't?
The most plausible case for something like containment might be the following (bastardized and simplified from Chalmers 2003): "I have *this* phenomenology" -- where *this* is an act of "inner ostension", cognitively pointing toward one's own phenomenology. Such a case might be a case of self-fulfilling containment, but it is no more substantive or necessarily introspective than "I'm located here".
Thursday, December 20, 2007
Tuesday, December 18, 2007
It's almost a ritual, in discussions of the phenomenology of vision, to praise "artists" -- meaning those in the visual arts -- for having an appreciation of visual phenomenology that most of the rest of us lack. However, I believe that the truth is the reverse.
Thomas Reid is typical:
I cannot therefore entertain the hope of being intelligible to those readers who have not, by pains and practice, acquired the habit of distinguishing the appearance of objects to the eye, from the judgment which we form by sight of their colour, distance, magnitude, and figure. The only profession in life wherein it is necessary to make this distinction, is that of painting. The painter hath occasion for an abstraction, with regard to visible objects, somewhat similar to that which we here require: and this indeed is the most difficult part of his art. For it is evident, that if he could fix in his imagination the visible appearance of objects, without confounding it with the things signified by that appearance, it would be as easy for him to paint from life, and to give every figure its proper shading and relief, and its perspective proportions, as it is to paint from a copy (An Inquiry into the Human Mind, 1764/1997, p. 82-83).

Now this much I'll grant Reid and others who share his view: Traditional, representational painters have the difficult skill of rendering on a two-dimensional canvas an arrangement of paint such that it produces for the eye an arrangement of light importantly similar to what would be produced by the actual three-dimensional scene they are rendering; and it takes much practice to see outward things in terms of how they can be presented on a canvas, for example foreshortened and rendered in the right two-dimensional shapes. But that skill is not the skill of appreciating real visual appearances.
For one thing, the view makes no geometric sense. Three-dimensional scenes cannot be rendered in two dimensions without geometric distortion in size and/or angle -- distortion that becomes more evident the greater the visual angle encompassed. This is why there is always something a little wrong with panoramic photographs. This geometrical difficulty could be avoided if artists drew on concave semispheres instead of flat rectangles. But they don't; and they'd have to relearn the rules of perspective to do so.
Even setting that issue aside: We should not infer from the fact that to create a sense of realism in the viewer an artist must color shadows in such-and-such a way that we really visually experience shadows as colored in that way. Nor should we infer from the fact that light, and water, and distance, and motion can be rendered a certain way on canvas that our visual experience of light and water and distance and motion matches such renditions (e.g., motion as either a series of freeze-frames or as blur). The painter learns the skill of seeing the world in a certain way for the purpose of a certain technique, not the skill of apprehending our visual experience as it is in itself.
Since most visual artists don't seem to appreciate this fact, their reports about their visual experience are likely to be less accurate than the reports of non-artists -- distorted by the false assumption that the world as seen for painting is the world as seen for life.
Monday, December 17, 2007
Saturday afternoon was, I think (believe it or not), my first time watching The Nutcracker. My wife, son, and I were bumped from our back-of-the-room seats and compensated with VIP seats, third row center. Early into the performance, I started thinking about the amazing opulence celebrated, maybe even taken for granted, in the ballet; and then about the opulence of symphonies and ballets in general and critics of luxury like Marx and Peter Singer and Mozi. Then I thought about the fact that I was thinking such things, while my wife was simply enjoying the ballet. [Update Jan 15, 2014: I doubt that my wife was "simply enjoying the ballet" (per the discussion in comments below); and I no longer even think I know what it would take for such a statement to be true.] I thought about why boys want to be soldiers, and about changing views of corporal punishment, the strangeness of wanting pearls, the sexuality of the costumes, whether too many pirouettes will damage the brain.
And then I wondered this: What if we gave everyone in the audience "beepers" that went off randomly a couple times during the show, asking people to report on their experiences, thoughts, feelings, sensations, whatever, just before the beep? (You know I've been getting into beepers!) Surely someone has done this sort of thing?
Russ Hurlburt and I have randomly beeped people during our talks. So far, among about 10 beeped experiences we've discussed with audience members, not a single person has reported being focused primarily on the content of the talk.
Prediction: People will, if asked after the fact, report much higher rates of absorption in movies, lectures, performances, etc., than one would see if one did a random sampling study. That wouldn't be a bad thing, necessarily. In a way, it's compatible with a much richer, personal, life-involving experience of the performance....
Friday, December 14, 2007
A comment on Wednesday's post reminded me of a delightful old case study by Andre Roch Lecours and Yves Joanette (1980) which seems to be largely unknown in the literature.
"Brother John" was a French monk who suffered severe, almost complete, aphasia (that is, incapacity with language) of both outer speech and, by his report, inner speech as well during epileptic episodes. Yet during these episodes he remained quite capable of rational thought and behavior. Here is Lecours and Joanette's description of one extended episode of severe global aphasia in Brother John:
While he was traveling by train from Italy to Switzerland, Brother John once found himself at the height of a paroxysmal dysphasia soon upon reaching the small town of his destination. He had never been in this town before but he probably had considered in his mind, before the spell began (or became severe), the fact he was to disembark at the next stop of the train. At all events, he recognized the fact he had arrived when the time came. He consequently gathered his suitcases and got off the train and out of the railway station, the latter after properly presenting his transportation titles to an attending agent. He then looked for and identified a hotel, mostly or entirely on non-linguistic clues since alexia was still severe, entered and recognized the registration desk, showed the attendant his medic-alert bracelet only to be dismayed and dismissed by a gesture meaning "no-room" and a facial mimic that perhaps meant "I-do-not-want-trouble-in-my-establishment." Brother John repeated the operation in search of a second hotel, found one and its registration desk, showed his bracelet again, and, relieved at recognizing through nods and gestures that there were both room and sympathy this time, he gave the receptionist (a "fat lady") his passport, indicating the page where she was to find the information necessary for completing his entry file. He then reacted affirmatively to her "do-you-want-to-rest-in-bed-now" mimical question. He was led to his room and given his key; he probably tipped as expected and went to bed. He did not rest long, however: feeling miserable ["It helps to sleep but sometimes I cannot because I am too nervous and jittery" (free translation)], then hungry, he went down to the hotel's lobby and found the restaurant by himself. He sat at the table and, when presented with the menu, he pointed at a line he could not read but expected to be out of the hors-d'oeuvres and desserts sections. 
He hoped he had chosen something he liked and felt sorry when the waiter came back with a dish of fish, that is, something he particularly dislikes. He nonetheless ate a bit ("potatoes and other vegetables"), drank a bottle of "mineral water," then went back by himself to his room, properly used his key to unlock his bedroom door, lay down, and slept his aphasia away. He woke up hours later, okay speechwise but feeling "foolish" and apologetic. He went to see the fat lady and explained in detail; apparently, she was compassionate (p. 13-14).

If Lecours and Joanette's understanding of Brother John is correct, there was no, or almost no, inner or outer speech production or recognition through the entire episode. Brother John was presumably not "thinking in words" -- or if he was, "thinking in words" must mean something very different from what I'd have thought it to mean.
Of course we shouldn't put much weight on a single anecdote transmitted second-hand....
Wednesday, December 12, 2007
I'm drafting an entry for Sage Press's forthcoming Encyclopedia of Perception. Since Sage seems to want a fairly relaxed, conversational style and the draft isn't much longer than a blog post, I thought I'd make it today's post.
Perceptual Experience and Attention
Do you have constant tactile experience of the shirt on your back? Constant auditory experience of the background rumble of traffic? Constant visual experience of the tip of your nose? Or, when you aren’t paying attention to such things, do they drop out of consciousness entirely, so that they form no part of your stream of experience – not even vaguely, peripherally, amorphously – no part of your phenomenology, no part of what it’s like to be you?
There is, of course, perceptual processing without attention. A gentle tug on the shirt or an unexpected movement in the visual periphery will generally call your attention, even if you are fully absorbed in other things. To call attention, such events must register, first, pre-attentively. While your attention centers on one or a few things, you monitor many others inattentively, ready to redirect attention when an inattentional process detects a large or important change. The interesting question is not whether there is perception without attention, but whether experience accompanies our inattentional perceptual processing or whether that processing is entirely nonconscious. We might think of consciousness as like a soup. Is it a rich soup, replete with experience across broad regions of several modalities simultaneously? Or is it a thin soup, limited to one or a few regions or objects or modalities at a time?
Ordinary people’s intuitions diverge considerably here. So also do the views of philosophers and psychologists. On intuitive or introspective grounds, William James and John Searle (among others) endorse the rich view, Julian Jaynes and David Armstrong the thin view. One widely discussed case is the absent-minded driver: You’ve driven to work a thousand times. Today you drive habitually, utterly absorbed in other thoughts. You arrive and seem suddenly to wake up: Ah, I’m here already! Now, did you actually visually experience the road – at all? very much? – on your way to work?
One might think the question easily settled. Simply introspect now. How much is going on in your consciousness? Unfortunately, the “refrigerator light phenomenon” frustrates any such straightforward test: The fact that you hear (or auditorially experience) the hum of traffic when you’re thinking about whether you hear the hum of traffic provides no evidence on the question of whether you hear the hum of traffic when you’re not considering the matter. Just as the act of checking the refrigerator light turns it on, so also might the act of checking for tactile experience of one’s shirt or visual experience of one’s nose produce those very experiences.
Often we fail to parse, respond to, or remember what might seem to be salient stimuli – a stream of speech we’ve decided to ignore, a woman in a gorilla suit walking through a fast-paced ballgame, substantial changes in a flickering picture, a geometric figure briefly presented in an unattended part of a visual display. Daniel Dennett and Arien Mack, among others, have interpreted such phenomena as evidence for the thin view. However, the conclusion does not follow. We may not parse unattended stimuli much or remember them well, but they may still be experienced in an inchoate or immemorable way, or the general gist may be remembered if not the details.
Ned Block has emphasized that it seems introspectively that we visually experience more of a visual display than we focally attend to. On the face of it, this fact (if it is a fact) seems to suggest that perceptual experience outruns attention. But might it, instead, be a matter of diffuse attention spreading more broadly than focal attention, perhaps along a gradient? Even if you visually experience this whole page while focally attending only to a few words at a time, it doesn’t follow that you also visually experience the wall in the far periphery when you’re not thinking about it, or the pressure of the shoes on your feet.
The issue of whether perceptual experience is, in general, rich or thin may also be addressed by gathering introspective or immediately retrospective reports about randomly sampled moments of experience. Eric Schwitzgebel, giving people beepers to wear during ordinary activities and asking them to reflect on the last undisturbed moment before each beep, found a majority of participants to report visual experience in 100% of sampled moments, tactile experience and peripheral visual experience somewhat less. However, as Schwitzgebel admits, it’s unclear how much credence to give such reports.
The rich and thin views draw radically different pictures of our experience. If the rich view is right, consciousness contains much more than adherents of the thin view suppose. Although there is room here for merely terminological confusion, it appears that there is also room for major substantive disagreement. Strange that this question, concerning an absolutely fundamental and pervasive aspect of human experience, is so poorly studied!
Suggested further readings:
Armstrong, D.M. (1981), The nature of mind. Ithaca, NY: Cornell.
Block, N. (forthcoming), Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences.
Dennett, D.C. (1991), Consciousness explained. Boston: Little, Brown & Co.
James, W. (1890/1981), The principles of psychology. Cambridge, MA: Harvard.
Jaynes, J. (1976), The origin of consciousness in the breakdown of the bicameral mind. Boston: Houghton Mifflin.
Mack, A., & Rock, I. (1998), Inattentional blindness. Cambridge, MA: MIT.
Reddy, L., Reddy, L., & Koch, C. (2006), Face identification in the near-absence of focal attention. Vision Research, 46, 2336-2343.
Rensink, R.A. (2000), When good observers go bad: Change blindness, inattentional blindness, and visual experience. Psyche, 6 (9).
Schwitzgebel, E. (2007), Do you have constant tactile experience of your feet in your shoes? Or is experience limited to what’s in attention? Journal of Consciousness Studies, 14 (3), 5-35.
Searle, J.R. (1992), The rediscovery of the mind. Cambridge, MA: MIT.
Simons, D.J. (2000), Attentional capture and inattentional blindness. Trends in Cognitive Sciences, 4, 147-155.
Note to blog visitors: One aspect of the question I've omitted is the old debate in introspective psychology about whether experiences of objects to which one is attending differ in some qualitative attribute like "clearness" or "attensity" from experiences of unattended objects. But this hasn't exactly been a hot topic in the last 100 years; and the more basic question of whether there even is experience outside of attention needs to be settled first.
Monday, December 10, 2007
A week and a half ago, I posted a brief, frustrated reflection on my failure to find any good research on the rates at which political scientists vote in public elections. After several more search attempts, I've given up. As far as I can see, no one has explored this issue since a few studies in the 1960s and 1970s -- studies so problematic as to be utterly useless.
I've informally asked a number of people to guess what Josh Rust and I will find when we analyze the data. Everyone except the political scientists said that they suspect we'll find that political scientists vote more often than other professors. The political scientists, however, were cagey. A couple mentioned a minority view in political science that voting is for suckers: Your vote never makes a difference, so voting is a waste of time. Yet one of these same professors said that he himself has voted in every single election, down to the tiniest little runoff, since the turn of the century.
So here's my prediction: Political scientists will have a more broadly spread distribution than other professors -- there will be more at the extreme of voting in almost every election but there will also be many who vote rarely or never. On average, though, I predict, the political scientists will vote more. Compared to the average non-political-science professor, the average political scientist will be more informed about elections, more invested in and interested in the outcomes, and more likely to have publicly embraced the view that one should vote.
Well, we'll see! Here's what Josh and I plan to do:
From university websites, we'll gather names of philosophers, political scientists, and a sample of professors in other fields. We'll then look for these names on voting records that have been provided to us by several states, calculating a rate of votes per year for each individual since that individual's first recorded vote in the state.
There are two main weaknesses in this method, and I especially welcome readers' reflections or suggestions about these. First, since we don't have street addresses for the professors in question, we will not be able to disambiguate between voters with identical names. If there are four John Millers who live within commuting distance of So-and-So College, we won't know which one is the professor, so we will have to discard the data. And second, if no voting record matches the professor's name, we will not know whether that professor is registered under a different name, registered in a different locale, a non-citizen, a felon, or simply a non-voter. So we'll have to exclude those professors too.
Because of these difficulties, we won't be able to reach conclusions about absolute rates of voting participation among political scientists, just comparative rates -- more or less than professors in other departments. But will these difficulties undermine our ability even to draw that conclusion? Although I don't see any reason to think there will be large differences in the rates at which professors in different departments are registered under different names or in different locales, there is reason to suspect that different departments may have different rates of common names and of non-citizens. But hopefully we can keep those confounds under control: We'll have an exact count of the common-name professors in the different departments, so we can attempt analyses that account for that; and hopefully we can estimate the rates of non-citizenship in departments by accessing c.v.'s or biographies of our non-voting professors where possible and by looking at general data on the citizenship of professors.
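To make the matching-and-exclusion procedure concrete, here is a minimal sketch of the computation we have in mind. Everything in it is my own illustration -- the tuple format, the field names, and the `voting_rates` function are hypothetical stand-ins, not code or data from the actual study:

```python
from collections import defaultdict

def voting_rates(professor_names, voting_records, as_of_year=2007):
    """Estimate votes-per-year for each matchable professor.

    professor_names: names gathered from university websites.
    voting_records: (name, voter_id, year_voted) tuples from state rolls.
    (Hypothetical formats, for illustration only.)

    Excludes names matching multiple distinct voters (ambiguous -- the
    four-John-Millers problem) and names matching no record at all
    (could be non-voters, non-citizens, felons, or people registered
    elsewhere or under another name).
    """
    voters_by_name = defaultdict(set)   # name -> distinct voter ids
    votes_by_name = defaultdict(list)   # name -> years in which a vote was cast
    for name, voter_id, year in voting_records:
        voters_by_name[name].add(voter_id)
        votes_by_name[name].append(year)

    rates = {}
    for name in professor_names:
        if name not in voters_by_name:
            continue  # unmatched: exclude rather than assume non-voter
        if len(voters_by_name[name]) > 1:
            continue  # ambiguous name: exclude
        years = votes_by_name[name]
        # Rate is computed from the first recorded vote onward.
        span = as_of_year - min(years) + 1
        rates[name] = len(years) / span
    return rates
```

As the sketch makes explicit, the output supports only comparative claims (department vs. department), since the excluded unmatched names mean absolute participation rates would be underestimates.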
What do you think?
Posted by Eric Schwitzgebel at 3:06 PM
Friday, December 07, 2007
Non-academics often think that skill in reading is measured by reading speed -- the faster the better. That is partly true, up to a point (up to about 7th grade, I suspect). I'm reminded of Woody Allen's joke about what he got from speed-reading War and Peace: "It's about Russia."
Philosophers, in contrast, sometimes seem to fetishize slow reading. "Deep" philosophy, it might seem -- or deep thinking about philosophy as one reads -- requires a glacial pace. Students sometimes excitedly report, "We spent the whole three-hour seminar reading a single page of Wittgenstein!"
I don't deny that glacial reading can, in the right mood, be exciting. And surely if you breeze through Wittgenstein or Heidegger at two minutes a page, you're missing something. But here's the compromise: If you cut your reading pace in half to get more out of what you read, you'll only be able to read half as much -- and that's another way of missing something.
The key to great philosophical reading, I think, is to vary your pace according to your projects and interests. In some ways, reading quickly is the harder skill. It's also the one less taught in philosophy seminars. How quickly can you assimilate the main ideas of 400 pages of articles on topic X? Can you detect and home in on, slow down for, those crucial few paragraphs on which the issues really turn? Indeed, unless you can read quickly, you're likely not to have the broad understanding necessary to see where one should read slowly.
I used to begin graduate seminars with student presentations on the assigned reading. The dull blow-by-blow that typically resulted, dedicating an equal amount of energy to every page of the reading, is exactly the opposite of the skilled reader's adjustment of pace and focus. Now instead I ask students to come prepared with one or two well-developed questions or objections. This, I hope, encourages focus rather than plodding. I haven't yet dared to assign students 400 pages of philosophy for a week, advising them to read it quickly and laser in on what seem to them to be key issues -- I think this might cause a riot! -- but the more I think about it, the more I'm tempted.
Reading philosophy quickly of course invites misunderstanding and oversimplification. But so does reading philosophy slowly, without a sufficient sense of context and alternative perspectives.
Wednesday, December 05, 2007
It seems to me that I sometimes have thoughts that linger after the inner speech that expresses them is done. I might say silently to myself, "Shoot, writing three posts a week is a lot of work!" and then that thought may briefly stay with me, in some sense that's hard to articulate, before I move on to new thoughts.
Can I say more about what that experience is like? Only through metaphor, it seems: It's like a resonance or an echo. But I don't think the inner speech literally resonates or echoes in the sense of, say, the last word or the last few words quietly buzzing or repeating themselves, slowly dying away.
I found it interesting, then, to contrast this sense I have of my inner speech with a report by Melanie, the subject Russ Hurlburt and I interviewed in our just-published book, Describing Inner Experience?, regarding a randomly-sampled (with a beeper) moment of her inner experience:
Russ: So you had said in inner speech, “they lasted for a nice long time,” just prior to the beep?
Melanie: Um hm, not at the beep but just prior to it.
Russ: But in some way the “nice long time” portion is still there. Is that right?
Melanie: Yeah, it was. The best I can liken it to is an echo.
Russ: Okay. And “echo.” I want to understand what you mean by “echo.” An echo gets softer and softer; did you mean to imply that? And echo sometimes is repeated and sometimes once but…
Melanie: No, it didn’t get softer and softer, it’s almost like [quizzically] it got blurrier and blurrier. Not in terms of visual blurry, but a sound blurry [again quizzically], where it just started overlapping itself until it just came to this jumble in which you can’t make any noise out. It sounds really weird but…
Russ: So are you saying that you said in inner speech something that was quite clear…
Melanie: Um hm.
Russ: … “It lasted for a nice long time,” and then there’s “nice long time,” “nice long time,” overlapped with “nice long time”…
Russ: … then “nice long time” overlapped with “nice long time” overlapped with “nice long time”…
Melanie: And it keeps going.
Russ: … until there’s sort of several of these things going?
Melanie: Yeah (Sixth Sampling Day, p. 207-208).

In the book, I express skepticism about this report. I wonder if Melanie is being taken in by her own metaphor (as, I think, people are often taken in by metaphors in describing their experience, e.g., in calling dreams black and white or visual experience flat). Russ, however, accepts the report.
What do you think? Any other ideas about the phenomenology, if any, of lingering thoughts?
Monday, December 03, 2007
I've been reading The Happiness Hypothesis, by Jonathan Haidt -- one of those delightful books pitched to the non-specialist, yet accurate and meaty enough to be of interest to the specialist -- and I was struck by Haidt's description of historian William McNeill's work on synchronized movement among soldiers and dancers:
Words are inadequate to describe the emotion aroused by the prolonged movement in unison that [military] drilling involved. A sense of pervasive well-being is what I recall; more specifically, a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life, thanks to participation in collective ritual (McNeill 1997, p. 2).

Who'd have thought endless marching on the parade-grounds could be so fulfilling?
I am reminded of work by V.S. Ramachandran on the ease with which experimenters can distort the perceived boundaries of a subject's body. For example:
Another striking instance of a 'displaced' body part can be demonstrated by using a dummy rubber hand. The dummy hand is placed in front of a vertical partition on a table. The subject places his hand behind the partition so he cannot see it. The experimenter now uses his left hand to stroke the dummy hand while at the same time using his right hand to stroke the subject's real hand (hidden from view) in perfect synchrony. The subject soon begins to experience the sensations as arising from the dummy hand (Botvinick and Cohen 1998) (Ramachandran and Hirstein 1998, p. 1623).

Also:
The subject sits in a chair blindfolded, with an accomplice sitting in front of him, facing the same direction. The experimenter then stands near the subject, and with his left hand takes hold of the subject's left index finger and uses it to repeatedly and randomly to [sic] tap and stroke the nose of the accomplice while at the same time, using his right hand, he taps and strokes the subject's nose in precisely the same manner, and in perfect synchrony. After a few seconds of this procedure, the subject develops the uncanny illusion that his nose has either been dislocated or has been stretched out several feet forwards, demonstrating the striking plasticity or malleability of our body image (p. 1622).

So here's my thought: Maybe synchronized movement distorts body boundaries in a similar way: One feels the ground strike one's feet, repeatedly and in perfect synchrony with seeing other people's feet striking the ground. One does not see one's own feet. If Ramachandran's model applies, repeatedly receiving such feedback might bring one to (at least start to) see those other people's feet as one's own -- explaining, in turn, the phenomenology McNeill reports. Perhaps then it is no accident that armies and sports teams and dancing lovers practice moving in synchrony, causing a blurring of the experienced boundary between self and other?
Friday, November 30, 2007
Well, I was hoping to work up a post today on the voting behavior of political scientists, but so far the only literature I can find on this is old and hideous -- and now I have to dash off to Cal State Long Beach to give a talk (on the moral behavior of ethicists)!
So a tidbit: Henry A. Turner and Charles B. Spaulding (1969) mailed questionnaires to academics in various disciplines, asking people about their voting histories. 61% of the questionnaires were returned (ah, the good old days!). 89% of the respondents said they voted in 1956 and 91% of the respondents said they voted in 1960. Were political scientists the most likely to have said they voted? Nope! Geologists were (95% and 97% in the two elections). The methodological shortcomings of this study are left as an exercise for the reader.
Chasing threads through citation databases, I found a cluster of articles in the same general vein in the 1960s and 1970s -- mostly focusing on the party affiliations of the respondents (overwhelmingly Democrat in the humanities). Then the citation thread peters out....
Hopefully next week I can dig up something more recent and methodologically better. Or will it be up to me? Surely someone must have studied whether political scientists actually vote!
(You ask why I care? Well, besides its being intrinsically interesting, I need a comparison group for when I go hunt down the data on whether political philosophers are more likely than others to vote.)
Posted by Eric Schwitzgebel at 10:13 AM
Wednesday, November 28, 2007
Two weeks ago, I posted some of Kant's remarks about the relationship between moral reflection and moral behavior. Kant suggests that people who tend to engage in moral reflection (like professional ethicists, presumably) will be less likely to fall for easy rationalizations of their inclinations, and so they will behave with more scruple.
Today, John Stuart Mill:
[Once an ethical or religious creed becomes dominant, believers] neither listen, when they can help it, to arguments against their creed, nor trouble dissentients (if there be such) with arguments in its favour. From this time may usually be dated the decline of the living power of the doctrine. We often hear the teachers of all creeds lamenting the difficulty of keeping up in the minds of believers a lively apprehension of the truth which they nominally recognise, so that it may penetrate the feelings, and acquire real mastery over the conduct. No such difficulty is complained of while the creed is still fighting for its existence: even the weaker combatants then know and feel what they are fighting for, and the difference between it and other doctrines; and in that period of every creed's existence, not a few persons may be found who have realised its fundamental principles in all the forms of thought, have weighed and considered them in all their important bearings, and have experienced the full effect on the character which belief in that creed ought to produce in a mind thoroughly imbued with it. But when it has come to be an hereditary creed, and to be received passively, not actively -- when the mind is no longer compelled, in the same degree as at first, to exercise its vital powers on the questions which its belief presents to it, there is a progressive tendency to forget all of the belief except the formularies, or to give it a dull and torpid assent, as if accepting it on trust dispensed with the necessity of realising it in consciousness, or testing it by personal experience, until it almost ceases to connect itself at all with the inner life of the human being....

Forgive the long quote. Mill writes so beautifully!
All Christians believe that the blessed are the poor and humble, and those who are ill-used by the world; that it is easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of heaven; that they should judge not, lest they be judged; that they should swear not at all; that they should love their neighbor as themselves; that if one take their cloak, they should give him their coat also; that they should take no thought for the morrow; that if they would be perfect they should sell all that they have and give it to the poor. They are not insincere when they say that they believe these things. They do believe them, as people believe what they have always heard lauded and never discussed. But in the sense of that living belief which regulates conduct, they believe these doctrines just up to the point which it is usual to act upon them....
The same thing holds true, generally speaking, of all traditional doctrines -- those of prudence and knowledge of life, as well as of morals or religion.... [M]uch more of the meaning even of these would have been understood, and what was understood would have been far more deeply impressed on the mind, if the man had been accustomed to hear it argued pro and con by people who did understand it (from On Liberty, Ch. II).
I hesitate to set my own prose next to Mill's, lest the contrast be too painfully evident, so I'll just briefly remark: If what Mill says is true, then professional ethicists, who know better than almost anyone the pros and cons of their moral creeds, who discuss them endlessly, who comprehend as well as people can the principles undergirding them, ought to display those moral principles in their character and behavior. Yet from what I see, they behave no differently than do others of similar social background.
Is Mill simply wrong, then? He seems so right! I cannot bring myself to reject the moral value of ethical reflection, consigning it to mere froth and rationalization with no power to alter and improve our behavior.
I call this the problem of the ethics professors.
Monday, November 26, 2007
In the last few weeks, it seems like I keep coming across references to Brett Pelham's work on name effects in life decisions. Striking stuff! Here are some data from his 2002 article:
Ratio of lawyers to dentists among people whose names start with "Den": 7.22.
Ratio of lawyers to dentists among people whose names start with "La": 8.98.
Odds of moving from one's home state to live in Virginia, as opposed to Georgia, among people named Virginia: 2.10.
Odds among people named Georgia: 0.97.
Increased likelihood of living in the city of "St. X" if your first name is X (e.g., Paul in St. Paul, Louis in St. Louis): 44%.
Increased likelihood if your last name is X: 55%.
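(To make these statistics concrete, here's a minimal sketch of how an "increased likelihood" figure like the 44% above could be computed from raw counts. The counts below are invented for illustration -- they are not Pelham's data.)

```python
# Hypothetical sketch: an "increased likelihood" statistic compares the
# observed count of, say, Pauls living in St. Paul against the count
# expected from name and population base rates alone.

def increased_likelihood(observed, expected):
    """Percent by which the observed count exceeds the expected count."""
    return 100 * (observed - expected) / expected

# Made-up numbers: 290 Pauls observed where base rates predict 200.
print(increased_likelihood(290, 200))  # 45.0 -- i.e., 45% more than expected
```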
Pelham and his coauthors go on through dozens of analyses. They conclude that people are attracted to locales and careers in part because of the similarity to their names. Although one could quibble with each analysis -- maybe people with family ties to Georgia are more likely to name their daughters Georgia, maybe people in St. Paul are more likely to name their children Paul (and so people named Paul don't choose to live in St. Paul) -- the effect is so widespread and consistent among so many different measures that Pelham's conclusion feels hard to resist by the end.
I suppose it's no surprise that we are as irrational in our big life decisions (where to live) as in our small decisions (where to eat lunch) and as influenced by silly little things.
Or is it irrational? Maybe there's nothing inherently unreasonable in choosing one's residence based on similarity to one's name rather than (say) climate or job opportunities. Probably people have little self-knowledge about the influences of such factors on their decisions -- but does that make the operation of such factors irrational? Acting on hunches and intuitions, without knowing their basis, is not always irrational. Might Virginia be a little happier living in Virginia? I don't see why not. If so, and if salary (above poverty levels) doesn't have much of an effect on happiness (as per recent research), then maybe Dennis is better off earning $60K in Denver than $70K in Atlanta, and his gut steered him right.
Well, I don't know. But as a Schwitzgebel, it's natural for me to be drawn to scepticism!
Wednesday, November 21, 2007
Materialism is the view that the world is entirely material (or physical). There are no immaterial souls or properties. There is no ghost in the machine.
David Chalmers argues against materialism as follows:
(1.) I can conceive of a world in which everything material is as it is in the actual world, yet in which there is no consciousness. (This would be a zombie world, which has a counterpart Eric Schwitzgebel who says and writes and does exactly the same things as I do, but who has no light of consciousness inside and is in the relevant sense an empty machine.)
(2.) Although such a world may not be naturally possible -- that is, although such a world may violate the laws of nature that hold at our world -- the fact that it is conceivable shows that it is metaphysically possible. (Compare: It is conceivable, and metaphysically possible, that my coffee cup rise of its own accord into the air and circle around my head in violation of the laws of gravity and inertia. A three-sided square, in contrast, is neither naturally nor metaphysically possible.)
(3.) Since in that world my counterpart does not have the property of being conscious though he shares all material properties with me, the property of being conscious must not be a material property.
(4.) So the world is not entirely material.
(Obviously, this argument is condensed. See Chalmers's 1996 book for the full details!)
What has always struck me as strange about this argument is how it derives a conclusion about the fundamental structure of reality from facts about what we can conceive. How could that possibly work? How could doing thought experiments in my armchair reveal whether the world is purely material or not?
Most materialist responses to Chalmers either deny that we can really conceive of such a world or deny that conceivability is an adequate test of metaphysical possibility. However, I find Chalmers convincing in his responses to both lines of attack. My thinking, instead, is that we should conceive metaphysical possibility as conceptual possibility but deny that materialism is (or should be) a thesis about what is conceptually possible.
To make this work, I need to play around with the concept of a "property". For a simple, concrete example, let's say that in all naturally possible worlds I'm in brain state #1117A if and only if I'm having the conscious experience of pain. My zombie counterpart without consciousness (in a conceptually possible but naturally impossible world) has #1117A but not conscious pain. We might define thinly-sliced properties as properties individuated such that if they diverge in even one conceptually possible world, they are different properties. Thickly-sliced properties, in contrast, might be individuated such that if two diverge only in conceptually possible worlds but never in naturally possible worlds, they really are only one property. Conscious pain and #1117A would thus be different thinly-sliced properties but the same thickly-sliced property.
Now the question is, should we think of materialism as a claim about properties thinly sliced or thickly sliced? Let me suggest that the proper spirit of materialism, as a scientific hypothesis, confines it to being a claim about what is naturally possible, not a claim about what is conceptually possible. So zombie worlds and thinly-sliced properties are irrelevant to its truth. Materialists can give Chalmers "property dualism" if "property" means thinly-sliced property. In some sense, I do have non-material properties, but that's just a function of the fact that such thinly-sliced "properties" are individuated in accord with the concepts of the person attributing them, and the human concept of consciousness pulls apart from the human concept of the material.
Monday, November 19, 2007
The caterpillar who thinks about how its legs work falls on its chin, the story goes. So similarly, Joshua Rust (my co-author on The Moral Behavior of Ethicists: Peer Opinion) suggests that in cases when our spontaneous responses would be morally appropriate, moral reflection can tangle up the works. If ethicists in fact act worse than non-ethicists, as suggested by about one-third of non-ethicist philosophers in our peer opinion survey, Josh believes the caterpillar effect may be the explanation why.
Consider my finding that ethics books are more likely to be missing from academic libraries. Here's a Rustian (Rusty?) explanation: Our normal, unreflective treatment of library books includes returning them when they're due and being sure to check them out before leaving the library. If we start to think about the ethics of returning books, these spontaneously virtuous responses might get thrown off. We might find ourselves, for example, rationalizing and justifying theft or carelessness.
Or, Bernard Williams-style, consider the person who pauses to reflect on the moral pros and cons before helping a person in need versus the person who unreflectively leaps to assist.
I'm not sure I'm quite ready to get on board with Josh on this one yet, though. It seems to me that often our spontaneous reactions are self-serving, and habits of ethical reflection can break us away from those. I'm inclined to think that overall (even if not in every particular situation) it's good to have habits of moral reflection. This, I suppose, is part of why I find it interesting and puzzling that ethicists, who presumably do tend to reflect morally more often on average than non-ethicists, seem to behave no better than anyone else.
I've posted Josh's draft essay on this in the Underblog. I'm sure he'd appreciate comments!
Friday, November 16, 2007
I've been thinking (again) about why Chalmers's dualism about consciousness dissatisfies me. Apropos of this, a metaphysics of ghosts.
First, let's distinguish between experiential and non-experiential ghosts.
Non-experiential ghosts are, let's say (for now), constituted of non-physical stuff, ectoplasm. They engage in stereotyped, repetitive actions (shaking shackles, gliding down hallways) but don't think and have no conscious experiences. The person whose living form they resemble is dead and gone with no personal psychic connection to the ghost.
Experiential ghosts, in contrast, think and have experiences, maybe engaging in more complex behavior. While there's something it's like to be an experiential ghost, there's nothing it's like to be a non-experiential ghost -- just as there's nothing it's like to be a mirror image or a shadow or a footprint. The following discussion is confined to non-experiential ghosts.
How do non-experiential ghosts come into being and how are they perceived? Here's a theory-sketch: A person (a projector) has a psychological trauma that creates a non-physical ectoplasmic entity resembling her. Once created, this ectoplasmic entity exists independently of the projector's experience. Such ghosts are seen and heard not by reflecting or creating photons or producing sonic vibrations in the air. Rather, they work directly on the perceiver's visual and auditory cortex. This direct action on the brain explains why ghosts cannot be photographed or audiotaped. Call this the ectoplasmic theory of ghosts.
Here's a competing theory -- the materialist theory. There is no ectoplasm. Rather, when a certain sort of trauma occurs, it directly affects the brains of ghost perceivers, through "paranormal" but perfectly physical action at a distance, cutting out the ectoplasmic middleman. A traumatic event in Schnerdfoot's brain as he is murdered directly causes visual and auditory cortical activity in the brains of other perceivers who walk by the scene of Schnerdfoot's death years later.
Now suppose, further, that the materialist theory of ghosts turns out to be true. It seems right to say, then, something like this: Ghosts are really nothing but effects on our brains from earlier trauma in other people's brains. There are no immaterial entities or properties (setting aside any qualms about whether consciousness itself might be immaterial).
Question: Could a philosopher (Chalmers's counterpart?) in Materialist Ghost World run the following argument? I know that it's a law of nature that whenever there's a ghost it's produced by trauma of such-and-such a sort and has such-and-such effects on perceivers' brains. Yet I can conceive of those causes and effects without the presence of a ghost. Therefore, "being haunted" is not the same property as "being a place in which past trauma causes certain effects in perceivers' brains". There's a possible world in which those properties come apart. Furthermore, since I know Schnerdfoot's house is haunted, I know that materialism is false: There are non-physical properties instantiated in my world!
Since it seems wrong to say of the Materialist Ghost World that it is a world in which non-physical properties are instantiated, there must be some flaw in the argument.
I initially conceived this post as a challenge to Chalmers, but now that I've arrived at the end, I've come to think it fails as a challenge. Here's why: Our Materialist Ghost World philosopher must, it seems, either conceive of ghosts ectoplasmically or conceive of them functionally (in terms of their causes and effects). If the first, it's false to say that he knows that ghosts exist in his world. If the second, it's false to say that there is a physically identical possible world that doesn't contain ghosts. It's not clear that Chalmers's argument for dualism fails in the same way, since it's not clear that he has to choose between an ectoplasmic and a functional conception of consciousness.
Next week I'll try another crack at Chalmers -- without the ghosts!
Wednesday, November 14, 2007
Will moral philosophers behave better than non-philosophers? Kant seems to imply as much. From the Groundwork (1785/2002, Ch. 1):
A wonderful thing about innocence -- but also something very bad -- is that it cannot defend itself very well and is easily led astray. For this reason even wisdom -- which otherwise is more a matter of acting than knowing -- also needs science [i.e., Wissenschaft: academic learning], not in order to learn from it, but in order to gain access and durability for what it prescribes. Human beings feel within themselves a powerful counterweight opposed to all the commandments of duty... the counterweight of needs and inclinations.... From this there arises a natural dialectic -- that is, a tendency to quibble with these strict laws of duty, to cast doubt on their validity or at least on their purity and strictness, and, if possible, to make them conform better to our wishes and inclinations....

Generally speaking, Kant interpretation is not for the faint-hearted, but this passage seems straightforward enough, even lucid: Without philosophy, our moral thinking is apt to be tangled up with self-serving impulses. We're apt to be led astray, illegitimately justifying just what it is that we desire. Philosophical reason, because it sees more accurately the true principles of morality, tends to counter such self-serving rationalizations.
In this way, common human reason is driven... to take a step into the field of practical philosophy. There it seeks instruction and precise direction as to the source of its own principle and about the correct function of this principle in contrast with maxims based on need and inclination. It ventures into philosophy so as to escape from the perplexity caused by conflicting claims and so as to avoid the risk of losing all genuine moral principles through the obscurity into which it easily falls.
From this it seems to follow that the more we beef up the philosophical end of the "dialectic" -- that is, the more we reflect on moral principles -- the more steadily we will see the moral right and the less will selfish desires entangle our understanding. This is the "science" that ordinary wisdom needs to "gain access and durability for what it prescribes".
As I see it, the issue is empirical. Does training in philosophical ethics help insulate one from ethical confusion due to self-serving impulses? Do ethicists engage in less rationalization? Does some principle, some unblinking knowledge of the right shine through?
Or, instead, does ethics tend to give one additional resources for rationalization? The ethicist may see more easily than others through the crudest, stupidest rationalizations -- but might this gain be offset, or even more than offset, by a talent for subtle, sophisticated rationalizations...?
Friday, November 09, 2007
Appeals to intuition have been central to analytic philosophy since at least the 1970s. Epistemologists rely on our intuitive judgments about whether someone looking at a real barn in (unbeknownst to her) Fake Barn Country knows that it's a barn she's seeing. Ethicists rely on our intuitive judgments about whether it's wrong to push someone in front of a runaway streetcar, killing him in order to save five others. Philosophers of mind rely on our intuitions about whether a cleverly enough designed machine would be conscious.
Several years ago, a number of young philosophers decided they were fed up with philosophers' armchair claims about "our" intuitions (especially when those claims contradicted each other). Such claims are empirically testable, they said, so let's test them! Hence "experimental philosophy" as a movement was born.
Experimental philosophy, so conceived, is a coherent and interesting movement -- even if it's debatable exactly how much polls of undergraduates about philosophical puzzles really tell us about deep philosophical questions.
But then the question arises: Some philosophers have done experiments that aren't a matter of polling intuitions. Should they, too, be called "experimental philosophers"? It turns out there aren't many such philosophers, but I happen to be one (e.g., this and this and this and this and this).
The consensus seems to be that "experimental philosophy" should be construed broadly to include people like me -- to include, basically, any philosopher who does experiments with an eye to philosophical issues. I'm honored to join the party (and the society and the blog and everything else!), but I'm concerned about this characterization of experimental philosophy. What if a psychologist runs an experiment with an eye to philosophical issues (as many have done)?
For example, I've given people beepers and asked about their stream of consciousness, with an eye to issues about the basic structure and epistemology of our experience (critiquing Descartes and James and Dennett and Siewert and many other philosophers). A psychologist could have done exactly the same thing, though -- and many have done similar things. If we count all such psychologists as experimental philosophers, then the movement is too big and broad to be a coherent entity. On the other hand, if we count me but not those psychologists, then it's hard to see how "experimental philosophy" could be a subdiscipline or movement defined by a set of research questions and methods. Instead, it would have to be some sort of sociologically defined movement in which departmental affiliation plays a key role. But is that what we want?
Wednesday, November 07, 2007
I'm hoping that the book rises above its current Amazon.com sales rank of #5,939,601! Barnes & Noble seems to be offering a 20% discount on it ($27.20 + free delivery). [Update, Nov. 13: Amazon now seems to be offering it for $26.66 + free delivery.]
Russ Hurlburt and I gave a subject a random beeper while she went about her normal day. When the beep sounded, she was to reflect on her "last undisturbed moment of inner experience immediately before the beep". Then we interviewed her about her sampled experiences. We did this for six days. At the core of the book are edited transcripts of the interviews, supplemented with sideboxes where we connect with existing and historical literature in philosophy and psychology. Russ and I have written separate introductory and concluding chapters -- he from the perspective of a proponent of this method for learning about consciousness, I as a skeptic.
Here are three unique things about the book:
(1.) It explores in unprecedented detail randomly sampled moments of an ordinary subject's stream of experience.
(2.) Rather than being a debate between opposing partisans regarding the accuracy of subjective reports about experience, it is a collaboration between opposing partisans, where we really try to get each other's views straight and find common ground, over many conversational turns.
(3.) It takes the question of the accuracy of introspective reports about experience, and the conditions of accuracy and failure, as seriously as has ever been done -- not just regarding beeper methodologies, but (in the extended opening and concluding chapters) regarding introspective reports about consciousness in general.
Russ and I have tried to write so the book would be accessible to non-specialists. I suspect parts of it will be drier than ideal for a broad audience, but if you enjoy the consciousness posts on this blog, I think -- or at least I hope! -- that you'll enjoy the book.
Tuesday, November 06, 2007
Monday, November 05, 2007
More useful would be to know about the differences between Kantians, utilitarians, and virtue ethicists. Based on my utterly non-scientific, anecdotal method, my conclusion is that you're safest with utilitarians and virtue theorists, and in mortal danger around Kantians (it's that combination of dogmatic rectitude and lack of judgment, I guess--or to quote Geuss again, "The Kantian philosophy is no more than at best a half-secularized version of...a theocratic ethics with 'Reason' in the place of God" [Outside Ethics, p. 20]).
Now I myself have no strong opinion about this question. I know too few ethicists who fall neatly into these categories, and their character seems to me too diverse. My sample size is too small, given the variance! However, I have noticed that everyone I've spoken to so far who thinks there are differences in ethical character between Kantians, utilitarians, and virtue ethicists thinks the Kantians are the worst of the lot. I'd be interested to hear readers' thoughts about this.
I note -- though by itself it shows little -- that utilitarian and virtue ethics books are as likely to be missing from academic libraries as Kantian books, maybe even more likely to be missing: See here.
Although Leiter seems to speak tongue-in-cheek at the end of his post when he calls this a "weighty matter", I myself think there is no matter in ethics weightier than the question of what sorts of moral reflection are prone to encourage or suppress actual moral behavior.
Friday, November 02, 2007
is here. Josh and I went to a meeting of the American Philosophical Association last spring and distributed questionnaires asking philosophers their opinion about the moral behavior of ethicists compared to non-ethicist philosophers and compared to non-academics of similar social background. The summary result (announced previously here) is this: The majority opinion among philosophers is that ethicists do not behave better. Ethicists themselves were about evenly divided between saying that ethicists behave better and saying they behave the same. Non-ethicists were about evenly divided between saying that ethicists behave better, the same, and worse.
In conversation, I've found that most philosophers seem untroubled by the view that ethicists are not better behaved than non-ethicists. But I think that if this is true it should be troubling -- both normatively and empirically!
Normatively, because it seems that philosophical reflection about ethical matters should have an impact on one's actual moral behavior. And empirically, because it seems that people who devote their careers to ethics should at least be more inclined than average to think that morality is important (and thus worth acting on) and should find violations of their favorite principles more salient than do non-ethicists.
Comments on the essay gratefully welcomed! Email me.
Wednesday, October 31, 2007
Monday, October 29, 2007
The more people I ask, the more people seem okay with the idea that most of what we see, most of the time (roughly, everything not in the region of optical focus), is double. Whoa! I get the heebie-jeebies. Either common sense is badly wrong -- if common sense is what I take it to be! -- or a substantial number of introspectors (including such eminent ones as Helmholtz and Titchener) are badly wrong. Grist for my skeptical mill. But not happy grist. Now I'm walking around thinking maybe I'm crazy for not seeing the persistent doubling that so many say is always there!
I take comfort, though, that Stephen Palmer, whose 1999 textbook Vision Science is generally considered standard in the field, analyzes the phenomenon much as I would:
One question that naturally arises from all this talk about disparity between the two retinal images is why don't we normally experience double images?... The answer has at least two parts. One is that points on or near the horopter [roughly, points at the same distance as the point on which you are focusing and on which your eyes are converging] are fused perceptually into a single experienced image. The region around the horopter within which such disparate images are perceptually fused is called Panum's fusional area. The second part of the answer is that for points that lie outside Panum's area, the disparity is normally experienced as depth. You can experience double images if you attend to disparity as "doubleness," however, or if the amount of disparity is great enough, as when you cross your eyes by focusing on your nose (p. 209).
Palmer seems to be saying that normally we don't see double, unless we attend to disparity as doubleness. He might then say -- as I would say -- that the reason so many people seem willing to attribute doubleness to their daily experience, when prompted to attend to double images created on the spot, is that they illegitimately infer that their normal visual experience is like their experience during such doubling exercises. But if so, that suggests an interesting instability and suggestibility among people in their judgments about ordinary visual experience!
Conversely, someone might argue against me, and against Palmer, that our experience when we think about doubling is our typical experience: We just ordinarily miss it in ordinary experience because we don't really think about or register the actual double-experiences we have of things off the horopter (or outside Panum's area) in the everyday run of life.
Friday, October 26, 2007
Wednesday, my senior seminar surprised me. We were talking about depth perception and how your eyes converge slightly when you focus on something nearby, when one student casually remarked that when he focused on a nearby object a more distant object in front of him (a student sitting on the opposite side of the seminar table, as it happened) appeared double.
I find it easy to get a double image by holding one finger about four inches before my nose and focusing in the distance, but I've always found it more difficult to get doubling by the converse operation of focusing on something close and attending to an object in the distance -- though many early introspective psychologists claimed that the phenomenon of doubling in the distance is common or even pervasive (e.g., Reid, Purkinje, J. Mueller, Helmholtz, Stout, Sanford, Titchener). Helmholtz, for example, writes:
When a person's attention is directed for the first time to the double images in binocular vision, he is usually greatly astonished to think that he had never noticed them before, especially when he reflects that the only objects he has ever seen single were those few that happened at the moment to be about as far from his eyes as the point of fixation. The great majority of objects, comprising all those that were farther or nearer than this point, were all seen double (1910/1962, III.7; see also this post).
This has always seemed to me introspective psychology gone awry. In a 2006 essay (Do Things Look Flat?), I conjectured that the attribution of pervasive doubling in visual experience had something to do with the popularity of stereoscopes in the late 19th century and the analogy between binocular vision and stereoscopy; but recently, especially in light of the view's relatively early roots, I've been more inclined to think it has to do with overemphasis of the theory of the horopter in binocular vision.
With this in mind, I asked the eight students in my seminar to converge their eyes upon their fingers before the nose and report on whether the student across the table seemed to them to double. I went to the board to write down poll numbers, yes or no. No need to write the numbers down, though -- all the students immediately said yes! (Well, one was quiet, but when I specifically asked him, he agreed with the others.) Evidently, I was the only one who didn't find the effect.
One student then said that he has always been very aware of the persistent doubling of things in vision. He or another student then recommended that we look at the Julesz random dot stereogram on p. 112 of Dennett's (1991) Consciousness Explained (which we were reading). Several students claimed that they could "fuse" it and see the square pop out by allowing their vision to double (though I should say not all the alleged pop-outers initially agreed on the shape that popped out). Here's the stereogram:
Now to me the prospect of trying to merge those two images in my mind to get a three-dimensional pop-out effect seems utterly hopeless!
I'm sitting in my office trying to get that doubling in the distance. I put my finger before my nose and compare its position to that of a V8 bottle six feet away. I close one eye, then the other, and notice how my finger seems to change position relative to the bottle. This gives me a sense of how far apart, maybe, to expect to see the doubled bottles when I converge my eyes upon my finger. Then I do converge my eyes. Maybe that bottle doubles -- but I'm not sure. I try again, and now it seems clear that there is no doubling.
But I've always had an unusually dominant left eye (I had "lazy eye" as a kid), so maybe I'm the one who's unusual? Do most of us always see most things double (per Helmholtz et al.)? Or does it take an unusual effort? My confidence that the Helmholtz quote is a bit of madness with which few ordinary observers would agree has been shaken.
Thursday, October 25, 2007
Readers of this blog may be interested to look at David Chalmers's and David Bourget's new bibliography of over 18,000 papers in philosophy of mind, with links to online versions where possible, here.
A shorter bibliography (only 5000 papers!) focuses on free online papers on consciousness, here.
Let's see. If I read 5 papers a day every day for 10 years, I should be fully up to date in philosophy of mind -- assuming, of course, that no one publishes anything in the interval! (Oh, and I guess I'll have to read a few books on the side.)
An embarrassment of riches! Fortunately, "expert" is a relative term.
A completely irrelevant etymological aside: Ever since an undergraduate pointed out to me, a few years ago, that his electronically submitted assignment wasn't really a "paper" in the strictest sense, I've been having these distractingly purist thoughts about what exactly qualifies as a "paper". Uh oh. Will you, too, now?
Posted by Eric Schwitzgebel at 8:14 AM
Wednesday, October 24, 2007
Almost no one who is a jerk thinks he's a jerk. So how do you know if you are one? The ordinary devices of introspection won't do the trick. You need to look, without blinkers, at your behavior. To do so, you need a situation where the line between jerk and not-jerk is clear and there are many others in essentially the same situation against whom to compare yourself.
Fortunately (or, rather, unfortunately) the freeways of California provide just such a situation. What I'm talking about, of course, is the guy who speeds by the long line of cars waiting in the congested exit lane and cuts in at the last second.
Some might doubt that this is jerkish behavior. Surely those are the very people who themselves cut in. But is that unorthodox opinion the cause of the aggressive driving or the (rationalizing, self-deceived) effect of it? Introspection, again, will be of no help here.
Consider Kant: Surely the maxim "skip the line to cut in at the last minute" is not universalizable. It's not a maxim that you could simultaneously will that everyone abide by, since their doing so would cause congestion in your own fast-flowing lane, exactly the kind of congestion you are aiming to avoid.
Or take a consequentialist tack: Does your cutting in at the last second maximize happiness or human flourishing? Well, you save time and you may feel good (perhaps even deliciously wicked), but you cost each of the many cars behind you a little time and you annoy those who see you scoot past; you may slow down your own faster-moving lane with your last-minute cut-in; and you increase the risks of an accident. It's hard to see how the calculus could be positive here, unless you have a very good reason for thinking your time is more precious than others'. (Maybe you're running late? Well, couldn't you have hit the road earlier?)
Or consider character exemplars: Would Confucius cut in at the last minute? How would Jesus drive?
With this behavioral measure in hand, each of us can reflect on our jerk-sucker ratio. Suppose for every 48 cars that wait in an orderly way (the suckers) there are 2 who cut in (the jerks). If you are among those two, that puts you in the 96th percentile for jerks! On the other hand, if there are 15 cars cutting in and 35 waiting, cutting puts you only in the 70th percentile. (If the ratio gets too balanced, though, the formula breaks down: 50-50 is just a jam, and 45-55 is probably just choosing one's lane wisely.)
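The percentile arithmetic in those examples can be sketched as follows. (The function name and the rounding choice are my own framing, not the post's; the post gives only the worked examples.)

```python
# Rough sketch of the jerk-percentile calculation: a cutter's percentile
# rank is the percentage of drivers in the lane who behaved better,
# i.e. the suckers as a share of all cars.

def jerk_percentile(jerks: int, suckers: int) -> int:
    """Percentile rank of a line-cutter among all drivers in the lane."""
    total = jerks + suckers
    return round(100 * suckers / total)

print(jerk_percentile(2, 48))   # 96 -- cutting in puts you in the 96th percentile
print(jerk_percentile(15, 35))  # 70 -- the more who cut in, the less jerky cutting is
```

As the post notes, the formula only makes sense while cutters are a clear minority; near 50-50 the jerk/sucker distinction collapses into an ordinary jam.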
Me, I find myself typically at about the 80th percentile. I'll wait patiently if almost everyone else is doing so -- but if enough people are cutting in, I'll break and run (or plan to do so next time around). But since I really loathe being either jerk or sucker, my preferred plan is to stay off the road!
Actually, at such times I think I would usually will the Kantian maxim. I'd be delighted if both lanes were equally plugged. Gladly, I'd sacrifice the jerk's time savings to avoid the jerk-sucker game entirely! But that still doesn't change the uncomfortable fact that, for the most part, I'd rather be in the 80th percentile for jerk than the 20th for sucker.
Now the question is, how well does this tendency to be self-serving carry across situations...?
Monday, October 22, 2007
I will try occasionally to visit that post and the original posts to look at the comments. Although I can't address particular students' situations, I can address general questions.
I would be especially interested to hear from professors who have served on admissions committees who think that my advice is inaccurate or misleading.
Part I: Should You Apply, and Where?
Part II: Grades and Classes
Part III: Letters of Recommendation
Part IV: Writing Samples
Part V: Statement of Purpose
Part VI: GRE Scores and Other Things
Part VII: After You Hear Back
When You'll Hear and When You'll Have to Decide
There's a general agreement among philosophy Ph.D. programs that applicants have until April 15 to decide whether to accept an offer of admission. This deadline drives the process.
Schools with a hard cap on their admissions offers might be permitted by the administration to admit only eight students, for example, or to offer funding (in the form of T.A.-ships and fellowships) to only eight students. These schools will try to admit those eight students quickly (in February, maybe) and will often pressure those students to make a decision as soon as they can so that if they decline, another student further down the list can be admitted or offered funding.
Other departments will target a certain entering class size and admit approximately twice that many students (or more or less, depending on the "yield" rates in recent years) with the expectation that about half of the admitted students will decline. (For example, UCR was aiming for 10-12 last year. We admitted 24 and got 11.) In principle, these departments could admit all those students early in the process, but in fact things often fall behind. If the number of students accepting offers seems to be falling short of expectations, a few may be admitted at the last minute.
If you're at the top of a department's list, expect (typically, depending on the committee's speed) to hear mid-February to mid-March. Applicants lower down on the list may not hear until April, even April 15th itself! You may not hear good news about funding, in particular, until very near the April 15th deadline, if the department has a hard cap on funding. Be ready on April 15th to make an immediate decision about an offer should one come -- and don't be too far from the phone! It's not unreasonable to ask for an additional day or two to decide, should you hear on April 15th, but the department may or may not comply with such a request.
It's generally in the interest of the applicants, then, to wait on their decisions until April 15. However, it is in the interest of departments to extract decisions from applicants as early as possible. Unfortunate!
Occasionally, if an entering class is looking smaller than expected, a department may admit someone after April 15th. That student may already have committed to another school. Generally speaking, it's good to keep to your commitments, but if the one program is much more appealing than the other, I'd recommend reneging with a heartfelt apology!
Most top-50 ranked Ph.D. programs do not expect students to pay their way through graduate school. They'll offer funding (at poverty levels) in the form of T.A.-ships and fellowships. When comparing funding offers between schools, don't just look at the raw dollar amounts. Some schools inflate their dollar amounts by adding the cost of tuition to their stated funding totals -- money which of course comes right back to them. Make sure, also, that your funding offer includes student medical insurance.
Most departments will guarantee students five years of support (though UCR typically offers only four years to students entering with an M.A.) in some combination of fellowship and T.A.-ship. If you're on fellowship you're paid just for being a student! (Sweet!) A typical offer at a typical department will be for one year of fellowship (your first year, when you aren't really advanced enough a student to be a T.A., anyway, in the eyes of most departments) and four years of T.A.-ship. Students especially targeted by the department may receive additional fellowship years. (Outstanding GPA and GRE scores help a lot here, since the high-level administrators who often give out those fellowship packages can evaluate those numbers better than they can evaluate writing samples and letters of recommendation.) Although most Ph.D. programs expect most of their students to pay their way through most of their years by T.A.-ing, a few schools -- especially the smaller private schools -- don't expect much T.A.-ing from their students and offer comparatively more fellowship support.
You might also consider how much is expected of a T.A.: Teaching one section of 25 students is much easier than teaching three sections of 25 which in turn is easier (usually) than teaching an entire course on your own. Also consider what happens when your guaranteed years of funding run out, since most students at most schools run out of guaranteed funding before they complete their degrees.
Don't expect too much wiggle room in negotiations about funding. But if a comparable department is offering you a better package than the school that would otherwise be your first choice, it can't hurt to politely mention that fact to the chair of the admissions committee.
Financial offers generally don't include summer funding, though often students can apply for a limited number of summer-school teaching positions. So how are you going to get through the summer?
Unless summer funding is dependable, I recommend considering writing test questions for ETS or a similar organization. Question writing often pays pretty well (by graduate student standards) and since it's piece work, you can do as little or as much of it as you like, on your own time. Such organizations often appreciate the precise turn of mind typical of philosophy students, who as a group do very well on standardized tests. (The organization I worked for, ACT, specifically recruited philosophy Ph.D. students, and the guide to writing questions used philosophical jargon and made reference to Quine!) Unfortunately, it can take several months to get training and certification to write questions, so if you consider this option, plan well in advance. Or -- again, if you're the sort who does well on standardized tests -- you can approach the issue from the other side and teach SAT or GRE prep courses. (Of course teaching philosophy is even better, if you can swing it!)
Letting People Know Where You've Been Admitted
Let your letter writers know where you've been admitted -- or even if you haven't been admitted anywhere -- and ultimately where you decide to go. It's only polite, since they put in work on your behalf. It helps them have a better sense, too, of what to expect for future students. And besides, they might have some helpful advice.
Admissions committee chairs also like to know where you've been admitted and where you decide to go (if not to their school) and why. You needn't share this information if you don't want to, but it helps them in thinking about future admissions. For example, if lots of admittees are going to comparably ranked schools because those schools have better funding offers, admissions committees can make a case for more funding to the college administrators. If admittees are declining mostly for much better-ranked schools, then committees know that their low yield rates are due to having a strong batch of applicants. Etc.
I highly recommend visiting the departments to which you've been admitted -- but only after you've been admitted. Admitted students, whom departments now want and are competing to attract, are treated very differently from students who have merely applied or who are on the "waiting list" (if there is one); those students are seen as petitioners. Unfortunately, then, it won't be possible to properly visit departments that admit you at the last minute.
Some departments have money to help students fly out to visit, others don't. It doesn't hurt to ask politely. In any case, let the admissions committee chair know you intend to visit. Even if funding isn't available, she can help arrange your stay -- for example by mentioning what times would be good or bad and maybe finding a graduate student willing to put you up for a night or two.
There are two main reasons to visit departments: First and obviously, it can help you decide where to go. But second, and less obviously, it is a valuable educational experience in its own right.
The second point first: As I mentioned in Part I, students who spend their whole time in one department often have a provincial view of philosophy. Even visiting another department for a few days can crack that provincialism and give an invigorating and liberating, broader perspective on the field. Also, you will never again be treated as well by eminent professors as you will when you are a prospective (admitted!) graduate student. The country's best-known philosophers will take you out to lunch or coffee for an hour and genuinely listen to your views on philosophical topics. They'll be solicitous of you. They'll value your opinion. I remember one extremely eminent professor spending a full day with me. We toured his campus and another nearby campus; we listened to music late into the night; he shared gossip about the state of the profession. (Spending a full day is highly unusual, though! Don't expect it. Aim for coffee. Interestingly, this particular professor had no idea who I was when I saw him again a few years later.) Graduate students -- who at top schools sometimes soon become influential professors themselves -- will engage you in long discussions about the state of philosophy, and you'll (sometimes) feel a real camaraderie. My own graduate school tour, for which I set aside three full weeks (for six campuses), was one of the highlights of my philosophical education.
To maximize all this, try to stay at each campus for a few weekdays. Weekends don't really count. If you have to cut classes, cut classes. This is much more important than whether you get an A or a B in Phil 176. Also, I'd recommend emailing in advance the professors you'd like to meet and asking them if they're willing to go out for coffee with you.
When you visit a school, the department will generally set you up with first- and second-year students to meet. No harm in that, but bear in mind that first- and second-year students are often still in the glow of having been admitted and they haven't yet started the most difficult part of their education, their dissertation. Insist on meeting students in their 5th year and beyond, especially students working with advisors you imagine you might be working with. In my experience, such students will generally be brutally honest. Unlike new graduate students and unlike professors they don't really care whether you come to their school or not, so they have little motive to draw a rosy picture. And often they're just itching to have someone to grouse to.
Meet the professors, but don't expect their solicitous treatment to continue after you've enrolled. The advanced students' opinions about the professors are probably a better gauge of how you'll actually be treated. Nonetheless, if you talk substance with professors on philosophical topics you care about, you can get a sense of whether you're likely to see eye-to-eye philosophically.
The Summer Before
Students often seem to be shy about showing their faces around the department to which they've been admitted until either classes start or there's some formal introductory event. No need for this. Move in early. Meet some professors and ask them for some reading suggestions pertinent to your shared interests or classes you'll be taking with them in the fall. Get a running start. Professors are often quite interested in meeting the new students -- until the inevitable disappointment of discovering that on average they're only average! But if you get a running start, maybe that's a sign that you'll be an unusually good student...?
Friday, October 19, 2007
In his seminal 1991 book Consciousness Explained, Daniel Dennett famously criticizes what he calls the "Cartesian Theater" view of the mind. I find the criticism odd.
The central "Cartesian" claim Dennett targets is that there is a specific location in the brain "arrival at which is the necessary and sufficient condition for conscious experience" (p. 106). His argument consists mainly in denying that there's always a fact of the matter about when, exactly, an experience occurs, if one considers events at very small time scales (on the order of tenths of a second). He appears to draw from this argument what seems to be the fairly radical anti-"Cartesian" conclusion that there are, in general, no definitive facts of the matter about the flow of conscious experiences independent of the changing "narratives" we construct about them. (Elsewhere in the book, however, Dennett writes as though there are such facts. I criticize his apparent inconsistency about such matters here.)
The argument is odd in two ways:
First: Dennett does not want to deny the intuitive idea that there are "afferent" (inbound) brain processes that are not in themselves conscious, such as early visual processes in the retina and early visual cortex. Nor does he want to deny that there may be similarly non-conscious "efferent" neural processes, going out from the brain -- for example, motor impulses travelling from the supplementary motor area down the spinal cord (p. 108-109). So evidently there is a center in the brain where everything comes together, on his view. The only question is how large that center is. But how could that question of size be theoretically deep enough to drive the general conclusions Dennett wants and his characterization of the issue as one on which most previous philosophers have gone radically wrong?
Second: Ordinary external events may also be temporally indeterminate, if one looks at narrow enough time slices (even independently of issues of Einsteinian relativity). Consider an example from a real theater: An elephant and an acrobat charge onto stage right and stage left respectively, at about the same time. The elephant's trunk comes in at t + 0 milliseconds but his tail doesn't come in until t + 600 milliseconds. The acrobat's leading foot comes in at t + 200 milliseconds but his trailing foot doesn't come in until t + 350 milliseconds. Did the elephant or the acrobat enter first? Obviously, many variations of this scenario are possible. But does this support any radical, general conclusion about the temporal order of events? Does it show that the best way to think of the processing of events is in terms of multiple scripts and that there are no facts independent of our narratives? Of course not! In real theaters as in Cartesian theaters, there is blurriness at the edges. That's how the world works in general (except maybe at the quantum level). Nothing radical follows.
Tuesday, October 16, 2007
Part I: Should You Apply, and Where?
Part II: Grades and Classes
Part III: Letters of Recommendation
Part IV: Writing Samples
Part V: Statement of Purpose
Part VI: GRE Scores and Other Things
GRE scores are less important to your application than grades, letters, writing sample, and statement of purpose. A few schools don't even require them. In my experience, some members of admissions committees take them seriously and others discount them entirely. My own opinion is that they add little useful information. However, since some committee members take them seriously, it's worth studying for the GRE and retaking it if you didn't do well. Also, since the higher-level administrators who oversee the process and often make the decisions about fellowship funding can really only evaluate your GPA and GRE scores, people who do well on these quantitative measures are likely to get better funding offers -- more years of fellowship without teaching, for example (being paid simply to be a student!). Also, it looks good for the department if the students they admit have better average grades and GREs than the students in psychology, economics, etc. We don't want to send too many 1100 GRE offers up to the dean's office for approval!
The GRE scores for this year's entering class at UCR ranged from 1230 to a perfect 1600, with most in the 1300s and 1400s. At UCR I'd say below 1250 is a strike against an applicant, above 1400 is a bonus. There is no GRE Subject Test in Philosophy.
Awards
Of course you made dean's list! If you list too many awards, the really good ones may escape notice. Among the most impressive awards: Magna Cum Laude, Phi Beta Kappa, departmental or college "outstanding student" or "outstanding essay" awards (if the department only selects one per year and the college only a few), awards from nationally- or internationally-recognized institutions such as the NSF or DAAD. Generally, though, even fairly impressive awards don't count for much. It's your grades, letters, and sample that really matter.
Race and Gender
Some schools give you the option of specifying your race and gender. Letter writers must also choose pronouns and can choose to mention race if they think it is relevant. (Some would never do so. Others think they help the applicant by doing so, if the applicant is a minority. If you prefer to keep the information confidential, tell your letter writers in advance.) Committees will often guess gender and ethnicity based on names.
Philosophy is largely a male discipline right now in the United States, and it's overwhelmingly non-Hispanic Caucasian. (Tenured men outnumber tenured women by a ratio of about 4-to-1. The ratio of non-Hispanic Caucasians to minorities is probably even more skewed.) I believe there are persistent systemic biases. However, I also believe that most admissions committees would like to counter these biases and see a broader diversity in the field. Admissions committees may nonetheless show bias implicitly in how they read a file from "Maria Gonzales" compared to a file from "Mark Johnson", unconsciously expecting less from the first file than the second. However, at least the admissions committees I've worked on have used conscious strategies in an attempt to counteract, maybe more than counteract, these biases. For underprivileged minorities, especially, an application might be seriously considered that would be quickly dismissed if the applicant were a white male.
While we white males might feel disadvantaged by this, we should bear in mind that we profit from persistent bias in our favor in other contexts. For example, it's generally much easier to fit a professor's stereotype for a "promising philosophy student" if you have a certain kind of look and diction, the tone of voice and cultural attitude, that is characteristic of upper middle class white men. Decades of psychological studies suggest that stereotype-driven expectations can have substantial effects not only on how one is perceived (and thus presumably on letters) but also on one's performance on objective tests (through being encouraged, supported, believed in, made comfortable, etc., by one's teachers).
Personal Contact and Connections
Such things don't help much, I suspect, unless they bring substantive new information. If a professor at some point had a good substantive, philosophical conversation with an applicant and mentions that to the committee, that might help a bit. But seeking out professors for such purposes could backfire if it seems like brown-nosing, or if the applicant seems immature, arrogant, or not particularly philosophically astute.
Some professors may be very much swayed by personal connections, I suppose. I myself, however, often have a slightly negative feeling that I'm being "played"; and even if I know the person hasn't sought me out for the purpose of improving her admissions chances, in aiming to be fair and objective in my evaluations I will tend to discount that person's application somewhat -- maybe even more than it deserves.
Your cover letters may be thrown away or lost. Don't include any important information in them.
Part VII: After You Hear Back