Tuesday, June 20, 2017

The Dauphin's Metaphysics, read by Tatiana Grey at PodCastle

My alternative-history story about love and low-tech body switching through hypnosis has just been released in audio at PodCastle. Terrific reading by Tatiana Grey!

PodCastle 475: The Dauphin's Metaphysics

This has been my best-received story so far, recommended by Locus Online, translated into Chinese and Hungarian for leading SF magazines in those languages, and assigned as required reading in at least two philosophy classes in the US.

The setting is Beijing circa 1700, post-European invasion and collapse, resulting in a mashup of European and Chinese institutions. Dauphin Jisun Fei takes a metaphysics class with the Academy's star woman professor and conceives a plan for radical life extension.

Story originally published in Unlikely Story, fall 2015.

Thursday, June 15, 2017

On Not Distinguishing Too Finely Among One's Motivations

I'm working through Daniel Batson's latest book, What's Wrong with Morality?

Batson distinguishes between four different types of motives for seemingly moral behavior, each with a different type of ultimate goal. Batson's taxonomy is helpful -- but I want to push back against distinguishing as finely as he does among people's motives for doing good.

Suppose I offer a visiting speaker a ride to the airport. That seems like a nice thing to do. According to Batson, I might have one (or more) of the following types of motivation:

(1.) I might be egoistically motivated -- acting in my own perceived self-interest. Maybe the speaker is the editor of a prestigious journal and I think I'll have a better shot publishing and advancing my career if the speaker thinks well of me.

(2.) I might be altruistically motivated -- aiming primarily to benefit the speaker herself. I just want her to have a good visit, a good experience at UC Riverside, and giving her a ride is a way of advancing that goal I have.

(3.) I might be collectivistically motivated -- aiming primarily to benefit a group. I want UC Riverside's Philosophy Department to flourish, and giving the speaker a ride is a way of advancing that thing I care about.

(4.) I might be motivated by principle -- acting according to a moral standard, principle, or ideal. Maybe I think driving the speaker to the airport will maximize global utility, or that it is ethically required given my social role and past promises.

Batson characterizes his view of motivation as "Galilean" -- focused on the underlying forces that drive behavior (p. 25-26). The idea seems to be that when I make that offer to the visiting speaker, that action must have been induced by some particular motivational force inside me that is egoistic, altruistic, collectivist, or principled, or some specific combination of those. On this view, we don't understand why I am offering the ride until we know which of these interior forces is the one that caused me to offer the ride. Principled morality is rare, Batson argues, because it requires being caused to act by the fourth type of motivation, and people are more normally driven by the first three.

I'm nervous about appeals to internal causes of this sort. My best guess is that these sorts of simple, familiar folk (or quasi-folk) categories don't map neatly onto the real causal processes generating our behavior, which are likely to be much more complicated, and also misaligned with categories that come naturally to us. (Compare connectionist structures and deep learning.)

Rather than try to articulate an alternative positive account, which would be too much to add to this post, let me just suggest the following. It's plausible that our motivations are often a tangled mess, and when they are a tangled mess, attempting to distinguish finely among them is usually a mistake.

For example, there are probably hypothetical conditions under which I would decline to drive the speaker because it conflicted with my self-interest, and there are probably other hypothetical conditions under which I would set aside my self-interest and choose to drive the speaker anyway. I doubt these hypothetical conditions line up neatly, so that I decline to drive the speaker if and only if it would require sacrificing X amount or more of self-interest. Some situations might just channel me into driving her, even at substantial personal cost, while others might more easily invite the temptation to wiggle out.

The same is likely true for the other motivations. Hypothetically, if the situation were different so that it was less in the collective interest of the department, or less in the speaker's interest, or less compelled by my favorite moral principles, I might drive or not drive the speaker depending partly on each of these but also partly on other factors of situation and internal psychology, habits, scripts, potential embarrassment -- probably in no tidy pattern.

Furthermore, egoistic, altruistic, collectivist, and principled aims come in many varieties, difficult to disentangle. I might be egoistically invested in the collective flourishing of the department as a way of enhancing my own stature in the profession. I might be drawn to different, conflicting moral principles. I might altruistically desire both that the speaker get to her flight on time and that she enjoy the company of the cleverest conversationalist in the department (me!). I might enjoy showing off the sights of the L.A. basin through the windows of my car, with a feeling of civic pride. Etc.

Among all of these possible motivations -- indefinitely many possible motivations, perhaps, if we decide to slice finely among them -- does it make sense to try to determine which one or few are the real motivations that are genuinely causally responsible for my choosing to drive the speaker?

Now if my actual and hypothetical choices were all neatly aligned with my perceived self-interest, then of course self-interest would be my real motive. Similarly, if my pattern of actual and hypothetical choices were all neatly aligned with one particular moral principle, then we could say I was mainly moved by that principle. But if my patterns of choice are not so neatly explained, if my choices arise from a tangle of factors far more complex than Batson's four, then each of Batson's factors is only a simplified label for a pattern that I don't very closely match, rather than a deep Galilean cause of my choice.

The four factors might, then, not compete with each other as starkly as Batson seems to suppose. Each of them might, to a first approximation, capture my motivation reasonably well, in those fortunate cases where self-interest, other-interest, collective interest, and moral principle all tend to align. I have lots of reasons for driving the speaker! This might be so even if, in hypothetical cases, I diverge from the predicted patterns, probably in different and complex ways. My motivations might be described, with approximately equal accuracy, as egoistic, altruistic, collectivist, and principled, when these four factors tend to align across the relevant range of situations -- not because each type of motivation contributes equal causal juice to my behavior but rather because each attribution captures well enough the pattern of choices I would make in the types of cases we care about.

Wednesday, June 07, 2017

Academic Pyramids, Academic Tubes

Greetings from Cambridge! Traveling around Europe and the UK, I am struck by the extent to which different countries have relatively pyramid-like vs relatively tube-like academic systems. This has moved me to think, also, about the extent to which US academia has recently been becoming more pyramidal.

Please forgive my ugly sketch of a pyramid and a tube:

The German system is quite pyramidal: There is a small group of professors at the top, and many stages between undergraduate and professor, at any one of which you might suddenly find yourself ejected from the system: undergraduate, then masters, then PhD, then one or more postdocs and/or assistantships before moving up or out; and at each stage one needs to actively seek a position and typically move locations if successful.

In contrast, the US system, as it stood about twenty years ago, was more tubular: fewer transition stages requiring application and moving, with much sharper cutdowns between each stage. To a first approximation, undergraduates applied to PhD programs, very few got in, and then if they completed there was one more transition from completing the PhD to gaining a tenure-track job (and typically, though of course not always, tenure after 6-7 years on the tenure track).

Philosophy in the US is becoming more pyramidal, I believe, with more people pursuing terminal Master's degrees before applying to PhD programs, and with the increasing number of adjunct positions and postdoctoral positions for newly-minted PhDs. Instead of approximately three phases (undergrad, grad/PhD, tenure-track/tenured professor), we are moving closer to a five-phase system (undergrad, MA, PhD, adjunct/post-doc, tenure-track/tenured).

This more pyramidal system has some important advantages. One advantage is that it provides more opportunities for people from nonelite backgrounds to advance through the system. It has always been difficult for students from nonelite undergraduate universities to gain acceptance to elite PhD programs (and it still is); similarly for students who struggled a bit in their undergraduate careers before finding philosophy. With the increasing willingness of PhD programs to accept students with Master's degrees, a broader range of students can earn a shot at academia: They can compete to get into a Master's program (typically easier for people with nonelite backgrounds than being admitted to a comparably-ranked PhD program) and then possibly shine there, gaining admittance to a range of PhD programs that would otherwise have been closed to them. A similar pattern sometimes occurs with postdocs.

The other advantage of the pyramid is exposure to a variety of institutions, advisors, and academic subcultures, which is valuable both for the variety of perspectives it provides and for the wider range of people one meets in the academic community. A Master's program or a postdoctoral fellowship can be a rewarding experience.

But I am also struck by the downside of pyramidal structures. In Europe, I met many excellent philosophers in their 30s or 40s, post-PhD, unsure whether they would make the next jump up the pyramid or not, unable to settle down securely into their careers. This used to be relatively uncommon in the US, though it has become more common. It is hard on marriages and families; and it's hard to face the prospects of a major career change in mid-life after devoting a dozen or more years to academia.

The sciences in the US have tended to be more pyramidal than philosophy, with one or more postdocs often expected before the tenure-track job. This is partly, I suspect, just due to the money available in science. There are lots of post-docs to be had, and it's easier to compete for professor positions with that extra postdoctoral experience. One possibly unintended consequence of the increased flow of money into philosophical research projects, through the Templeton Foundation and government research funding organizations, is to increase the number of postdocs, and thus the pyramidality of the discipline.

Of course, the rise of inexpensive adjunct labor is a big part of this -- bigger, probably, than the rise of terminal Master's programs as a gateway to the PhD and the rise of the philosophy post-doc -- but all of these contribute in different ways to making our discipline more pyramidal than it was a few decades ago.

Thursday, June 01, 2017

The Social-Role Defense of Robot Rights

Daniel Estrada's Made of Robots has launched a Hangout on Air series in philosophy of technology. The first episode is terrific!

Robot rights cheap yo.

Cheap: Estrada's argument for robot rights doesn't require that robots have any conscious experiences, any feelings, any reinforcement learning, or (maybe) any cognitive processing at all. Most other defenses of the moral status of robots assume, implicitly or explicitly, that robots who are proper targets of moral concern will exist only in the future, once they have cognitive features similar to humans or at least similar to non-human vertebrate animals.

In contrast, Estrada argues that robots already deserve rights -- actual robots that currently exist, even simple robots.

His core argument is this:

1. Some robots are already "social participants" deeply incorporated into our social order.

2. Such deeply incorporated social participants deserve social respect and substantial protections -- "rights" -- regardless of whether they are capable of interior mental states like joy and suffering.

Let's start with some comparison cases. Estrada mentions corpses and teddy bears. We normally treat corpses with a certain type of respect, even though we think they themselves aren't capable of states like joy and suffering. And there's something that seems at least a little creepy about abusing a teddy bear, even though it can't feel pain.

You could explain these reactions without thinking that corpses and teddy bears deserve rights. Maybe it's the person who existed in the past, whose corpse is now here, who has rights not to be mishandled after death. Or maybe the corpse's relatives and friends have the rights. Maybe what's creepy about abusing a teddy bear is what it says about the abuser, or maybe abusing a teddy harms the child whose bear it is.

All that is plausible, but another way of thinking emphasizes the social roles that corpses and teddy bears play and the importance to our social fabric (arguably) of our treating them in certain ways and not in other ways. Other comparisons might be: flags, classrooms, websites, parks, and historic buildings. Destroying or abusing such things is not morally neutral. Arguably, mistreating flags, classrooms, websites, parks, or historic buildings is a harm to society -- a harm that does not reduce to the harm of one or a few specific property owners who bear the relevant rights.

Arguably, the destruction of hitchBOT was like that. HitchBOT was a cute ride-hitching robot, who made it across the length of Canada but who was destroyed by pranksters in Philadelphia when its creators sent it to repeat the feat in the U.S. Its destruction not only harmed its creators and owners, but also the social networks of hitchBOT enthusiasts who were following it and cheering it on.

It might seem overblown to say that a flag or a historic building deserves rights, even if it's true that flags and historic buildings in some sense deserve respect. If this is all there is to "robot rights", then we have a very thin notion of rights. Estrada isn't entirely explicit about it, but I think he wants more than that.

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots deserve remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Estrada gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

[image source]

Update: I changed "have rights" to "deserve rights" in a few places above.