Thursday, February 25, 2010

The Review of Philosophy and Psychology

There's a new journal on the block. The first issue of The Review of Philosophy and Psychology is now out, and for a limited time all articles are available for free, here. The inaugural issue concerns experimental philosophy and was guest edited by Joshua Knobe, Tania Lombrozo, and Edouard Machery.

One of Josh Rust's and my papers is in it: Do Ethicists and Political Philosophers Vote More Often Than Other Professors? (Short answer: No.)

There's other good stuff in there too. I especially recommend Simon Cullen's critique of the methodology of experimental philosophy, Survey-Driven Romanticism.

Wednesday, February 24, 2010

How Far Away Is the Television Screen of Visual Experience?

... not that I think there really is one, even in a loose, metaphorical sense. (See here.) But:

David Boonin (a visiting speaker from Colorado), UCR graduate students Alan Moore and Matt Braich, and I were hiking up Mt. Rubidoux. From the top, we could see several miles across town to the UCR campus. We pointed out to Boonin the clock tower, and then Alan said that the humanities building housing the philosophy department was also visible nearby, down and to the right, "about an inch and a half away".

What, an inch and a half away?! Alan's statement -- as I'm sure he knew -- sharply conflicted with my published views about the nature of the visual experience of perspective. And yet I knew exactly what Alan meant. He had effectively pointed out the spot. It seemed to me that "an inch and a half" was a much better description of the apparent distance than, say, a millimeter or twenty feet. (Of course the real distance is much larger than any of those.)

My thumb is about 3/4 of an inch wide. Holding it at arm's length, I saw that it almost perfectly occluded the gap between the clock tower and the humanities building. Thus, if the television screen of visual experience were at arm's length, Alan should have said that the distance was 3/4 of an inch. Since the apparent distance (in some sense of "apparent distance"!) was instead an inch and a half -- twice my thumb's width -- the screen must, by similar triangles, be about twice arm's length away. I geometrically derive the conclusion that the television screen of visual experience is about five feet away. (Of course the real distance is much larger than any of those.)

There, proved!
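
For anyone who wants to check the geometry, here's a quick back-of-the-envelope sketch. It assumes an arm's length of about two and a half feet -- a figure I'm supplying for illustration, not one we measured on the mountain; the rest is just similar triangles.

```python
# Back-of-the-envelope check of the "five feet" conclusion.
# ARM_LENGTH_FT is an assumed value, not given in the post; adjust to taste.

ARM_LENGTH_FT = 2.5      # assumed distance from eye to outstretched thumb
THUMB_WIDTH_IN = 0.75    # thumb width, from the post
APPARENT_GAP_IN = 1.5    # Alan's "inch and a half"

# By similar triangles, the same visual angle projects onto a screen at
# distance d as a gap of size (d / ARM_LENGTH_FT) * THUMB_WIDTH_IN.
# Solve for the screen distance at which the gap measures APPARENT_GAP_IN.
screen_distance_ft = ARM_LENGTH_FT * (APPARENT_GAP_IN / THUMB_WIDTH_IN)

print(f"Implied distance of the 'television screen': {screen_distance_ft:.1f} feet")
# -> Implied distance of the 'television screen': 5.0 feet
```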

Thursday, February 18, 2010

Another Simple Argument Against Any General Theory of Consciousness

... related to my first Simple Argument, but spun out a bit differently.

(1.) The history of philosophy shows that no theory of consciousness can avoid having some highly unintuitive consequences. (Or more cautiously, the history suggests that. The strength of the conclusion turns in part on the strength of this premise.)

For example, if functionalism is true, some very weird assemblages will be conscious. If consciousness depends upon material constitution, then beings behaviorally indistinguishable from us but materially different might entirely lack consciousness. And: Intuitive notions of consciousness seem to involve sharp boundaries not present in the evolution or development of conscious systems. And so on.

(2.) Therefore, something apparently preposterous must be true of consciousness.

(3.) Therefore, reflection on what is intuitively true -- and metaphysical speculations that depend on such intuitions -- cannot be a reliable guide to consciousness. (What such speculations yield, as is evident from the literature, is a variety of idiosyncratic hunches.)

(4.) Empirical observation of physical structure and behavior also cannot settle the question of which preposterous things are true, because their interpretation depends on prior assumptions about consciousness. (For example: Does observing such-and-such a functional structure establish that consciousness is present? Only given such-and-such functionalist assumptions.)

(5.) So we're stuck.

If we are stuck, the live options seem to be mysterianism (we will never know the truth about consciousness) or eliminativism (the concept of "consciousness" is broken to begin with, so good riddance).

Tuesday, February 09, 2010

Cognitive Shielding

Here's a concept I'm playing with and may soon have occasion to deploy in my work on introspection: cognitive shielding.

Normally when I reach judgments, the processes driving the judgment are wild. I don't attempt to control the influences on my judgment. I just let the judgment flow from whatever processes might drive and affect it. I look out the window and think about whether it will rain. I'm not sure what exactly causes me to conclude that it will. Presumably the appearance of the clouds is a major factor, but maybe I'm also influenced by my knowledge of what month it is and how common rain is this time of year. Maybe I'm influenced, too, by wind and by temperature, reflecting sensitivity to contingencies between those and oncoming rain -- contingencies I may have no conscious knowledge of. Maybe I'm influenced by knowledge of yesterday's weather, of this morning's weather report, and who knows what else. I don't attempt to control any of this, and the judgment comes.

Sometimes, I intentionally launch processes with the aim of having those processes influence my judgment. So, for example, I might think to myself: "In the northern hemisphere, storms spin in such a way that the wind of the leading edge tends to come from the south. So I really should consider the direction of the wind in reaching my judgment about the likelihood of rain." [How true this generalization actually is, I don't know.] I notice that the wind is indeed from the south and this increases my confidence that it will soon rain. The decision to consider a particular factor launched a process that would not otherwise have occurred, with an influence on the conclusion.

And finally, sometimes I try to shield my judgments from certain influences. Maybe I know that I'm overly pessimistic and am biased toward anticipating rain whenever I'm planning a picnic. I am in fact planning a picnic, and I don't want the resulting pessimism to affect my judgment, so I attempt to put the picnic out of mind or compensate somehow for the bias it would otherwise introduce. Or -- a familiar example for professors -- in grading student essays I might be legitimately concerned that my like or dislike for the student as an individual might bias my grading. I might avoid this by not looking at the names on the essays, in which case no cognitive shielding is necessary. But sometimes I do know who has written the essay I am grading. I might then try to shield my judgment about the essay's quality from that potentially biasing influence. Wild judgment might unfairly favor the student if I like her, so I try to reach a judgment uninfluenced by my opinion about her as a person.

Two issues:

(1.) It's not always clear whether some series of thoughts is wild or launched. Similarly for shielding. Possibly there is a large gray area here. But if the distinction between spontaneously considering certain factors and intentionally considering (or setting aside) certain factors makes sense -- and I think it does -- then I think these distinctions can fly, despite the gray area.

(2.) It seems that launching will normally be successful. Shielding, on the other hand, may be difficult to execute successfully. One might try not to be influenced by certain things and yet nonetheless be influenced. But this is no objection to the taxonomy, as long as it's clear that we can try to shield our judgments from certain influences.

Thoughts? Reactions? Does this make sense? Is there someone in the literature who has already laid this out better than I?

Wednesday, February 03, 2010

My entry on "Introspection" is now up in the Stanford Encyclopedia of Philosophy

... here.

From the intro:

Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one's own currently ongoing, or perhaps very recently past, mental states or processes. You can, of course, learn about your own mind in the same way you learn about others' minds—by reading psychology texts, by observing facial expressions (in a mirror), by examining readouts of brain activity, by noting patterns of past behavior—but it's generally thought that you can also learn about your mind introspectively, in a way that no one else can. But what exactly is introspection? No simple characterization is widely accepted. Although introspection must be a process that yields knowledge only of one's own current mental states, more than one type of process fits this characterization.

Introspection is a key concept in epistemology, since introspective knowledge is often thought to be particularly secure, maybe even immune to skeptical doubt. Introspective knowledge is also often held to be more immediate or direct than sensory knowledge. Both of these putative features of introspection have been cited in support of the idea that introspective knowledge can serve as a ground or foundation for other sorts of knowledge.

Introspection is also central to philosophy of mind, both as a process worth study in its own right and as a court of appeal for other claims about the mind. Philosophers of mind offer a variety of theories of the nature of introspection; and philosophical claims about consciousness, emotion, free will, personal identity, thought, belief, imagery, perception, and other mental phenomena are often thought to have introspective consequences or to be susceptible to introspective verification. For similar reasons, empirical psychologists too have discussed the accuracy of introspective judgments and the role of introspection in the science of the mind.

Tuesday, February 02, 2010

Podcast of "An Empirical Perspective on the Mencius-Xunzi Debate about Human Nature"

... given to the Confucius Institute of Scotland on Jan. 19, here.

The podcast is audio-only, so you won't see the overheads. I don't think you need to see the overheads to understand the talk. But for completeness here they are (as MS PowerPoint 2003).

You may also be interested to see this article, which was part of the basis for the talk.