If you saw my dog, Sadie, as she scopes me out in the morning when I put on my shoes, you’d think the answer to the question is obvious. She’s always checking for the running shoes. When it’s clear we’re going running, she goes ballistic.
But does she really have “fun”? Or is it just a product of evolution, a dance between her genes and mine in which her ancestors gained some benefit (my ancestors fed them?) when they exhibited what looked like enthusiasm for going hunting, or out into the fields for the work day?
It’s a question I think about a lot, so I was totally intrigued by a piece in the April 8 Nature by Clive D. L. Wynne, from the University of Florida.
Wynne’s taking on the question of whether it is useful to anthropomorphize animal behaviors. It’s a distinguished tradition:
The complexity of animal behaviour naturally prompts us to use terms that are familiar from everyday descriptions of our own actions. Charles Darwin used mentalistic terms freely when describing, for example, pleasure and disappointment in dogs; the cunning of a cobra; and sympathy in crows. Darwin’s careful anthropomorphism, when combined with meticulous description, provided a scientific basis for obvious resemblances between the behaviour and psychology of humans and other animals. It raised few objections.
It’s also, Wynne argues, wrong.
He’s doing something careful here, and I am not. Wynne is looking for some sort of rigorous framework, while I’m just having a primitive relationship with my four-legged running buddy. I’m doing what he’d label “naive anthropomorphism” (“the impulse that prompts children to engage in conversations with the family dog”), while he’s after something far more disciplined. “Critical anthropomorphism,” in Wynne’s formulation (he’s quoting Gordon Burghardt here), “uses the assumption of animal consciousness as a ‘heuristic method to formulate research agendas that result in publicly verifiable data that move our understanding of behaviour forward.'”
And when you get into the realm of consciousness vs. hardwired behaviorism, that’s when things get sticky. It’s really a central debate of contemporary philosophy – how one might distinguish, as Kwame Anthony Appiah puts it, between a robot and our mom.
Suppose the computer in question is in a robot, which, like androids in science fiction, looks exactly like a person. It’s a very smart computer, so that its “body” responds exactly like a particular person: your mother, for example. For that reason I’ll call the robot “M.” Would you have as much reason for thinking that M had a mind as you have for thinking your mother does?
Of course that’s such a canned example that we surely have a way out by saying that of course no such robot exists, and we can tell the difference between silicon-based machines and Mom, and of course Mom has consciousness and the computer doesn’t, you silly argumentative philosopher you.
But what about Sadie? How might I distinguish between whether the run is “fun,” in the same way that our human consciousness experiences “fun,” and a hardwired behaviorist exhibition of fun-like symptoms? Wynne’s not arguing, I think, that Sadie doesn’t have consciousness, merely that when we say she does, we’re not saying anything terribly useful in terms of understanding what’s up.
I’ve solved it at a more practical level. Of course she’s having fun. The poor dog can barely sit still for me to put the leash on her. What else could it be?
I guess I’m just a naive anthropomorphist.