The cost – and benefits – of hostility to strangers

Bruce Schneier points to an interesting post by Stephen Dubner, who asks why we humans are so prone to fear strangers, given that strangeness is such a poor predictor of dangerousness. Dubner proposes, and Schneier agrees, that it has something to do with our tendency to focus on rare, shocking dangers:

Why do we fear the unknown more than the known? That’s a larger question than I can answer here (not that I’m capable anyway), but it probably has to do with the heuristics — the shortcut guesses — our brains use to solve problems, and the fact that these heuristics rely on the information already stored in our memories.

And what gets stored away? Anomalies — the big, rare, “black swan” events that are so dramatic, so unpredictable, and perhaps world-changing, that they imprint themselves on our memories and con us into thinking of them as typical, or at least likely, whereas in fact they are extraordinarily rare.

That’s probably right. But when I read Dubner’s post, I immediately thought of another factor: hostility toward outsiders is instinctive because it can help communities bond.

This idea actually grew out of an attempt to understand altruism. Altruism is something of a puzzle to evolutionary biologists – the easiest thing to assume, under a “survival of the fittest” framework, is that selfishness is always the winning strategy. Yet again and again in human and nonhuman societies, we see examples of altruism, in which individuals help each other without immediate repayment. Societies in which everyone is altruistic should be able to out-compete societies in which everyone is selfish – but a single selfish person in a mostly altruistic society can out-compete her neighbors, make more selfish babies, and eventually drive altruism to extinction. So, if you can come up with a way to make altruism stable in the long term, you’ve got a good shot at publishing in Science or Nature.
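
To make that tension concrete, here is a minimal public-goods sketch in Python. The payoff function and the numbers are my own illustrative choices, not taken from any of the models discussed below; they just show how a lone free-rider out-earns her altruistic neighbors even though an all-altruist group out-produces an all-selfish one.

```python
# Toy public-goods game: altruists pay a personal cost into a shared pool,
# the pool is multiplied and split evenly, and selfish members share for free.
# (Illustrative numbers only -- not taken from Choi & Bowles or any other paper.)

def payoffs(n_altruists, n_selfish, cost=1.0, multiplier=3.0):
    """Return (payoff per altruist, payoff per selfish member) for one group."""
    group_size = n_altruists + n_selfish
    pool = n_altruists * cost * multiplier      # only altruists contribute
    share = pool / group_size                   # everyone gets an equal share
    return share - cost, share                  # altruists also paid the cost

# Within a group, the lone selfish member out-earns her altruistic neighbors...
print(payoffs(n_altruists=9, n_selfish=1))       # approximately (1.7, 2.7)

# ...but an all-altruist group out-produces an all-selfish group.
print(payoffs(n_altruists=10, n_selfish=0)[0])   # 2.0 per member
print(payoffs(n_altruists=0, n_selfish=10)[1])   # 0.0 per member
```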


One such paper was published back in 2007. Co-authors Jung-Kyoo Choi and Samuel Bowles noted that tribal human societies spend a lot of time and blood in inter-tribal wars, and wondered if what they called parochialism – hostility to outsiders – helped stabilize within-tribe altruism [$-a]. They built a mathematical model of competing tribes, in which individuals within those tribes had one of four inheritable personality types: parochial altruists, tolerant altruists, parochial nonaltruists, and tolerant nonaltruists. Parochial altruists were something like the medieval ideal of a knight, willing to fight outsiders and die for the benefit of others in their tribe. Parochial nonaltruists shared that hostility to outsiders but weren’t willing to risk their lives for others, and the two tolerant types were, well, tolerant of outsiders, differing only in whether they contributed to their own tribe.

As I described above, nonaltruists were favored by within-tribe competition: altruists all contributed toward a common resource pool, which was shared among the whole tribe. So nonaltruists got a share without contributing, which benefited them individually but was ultimately bad for the tribe. Tribes that fought other tribes and won could expand their territories and take the losers’ resources. On the other hand, if tribes interacted peacefully, the tolerant individuals – and only the tolerant individuals – received a resource reward. (Is this putting anyone else in mind of certain new-school German board games?)
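
For the curious, here is a very rough sketch of the kind of two-stage setup described above: a within-tribe public-goods round followed by a between-tribe war-or-peace round. Every specific in it (the payoff numbers, the rule deciding whether two tribes fight, how a war is won) is my own stand-in for illustration; Choi and Bowles’s actual model is considerably more careful.

```python
import random

# The four heritable types: parochial/tolerant crossed with altruist/nonaltruist.
PAROCHIAL_ALTRUIST, TOLERANT_ALTRUIST = "PA", "TA"
PAROCHIAL_NONALTRUIST, TOLERANT_NONALTRUIST = "PN", "TN"
ALTRUISTS = {PAROCHIAL_ALTRUIST, TOLERANT_ALTRUIST}
PAROCHIALS = {PAROCHIAL_ALTRUIST, PAROCHIAL_NONALTRUIST}

def public_goods_payoffs(tribe, cost=1.0, multiplier=3.0):
    """Within-tribe stage: altruists pay into a pool that is multiplied and shared."""
    pool = sum(cost * multiplier for member in tribe if member in ALTRUISTS)
    share = pool / len(tribe)
    return [share - (cost if member in ALTRUISTS else 0.0) for member in tribe]

def intergroup_stage(tribe_a, tribe_b, peace_bonus=0.5, war_loss=2.0):
    """Between-tribe stage: parochials provoke war, tolerant members gain from peace."""
    adjust_a = [0.0] * len(tribe_a)
    adjust_b = [0.0] * len(tribe_b)
    fighters_a = sum(1 for m in tribe_a if m == PAROCHIAL_ALTRUIST)
    fighters_b = sum(1 for m in tribe_b if m == PAROCHIAL_ALTRUIST)
    any_parochials = any(m in PAROCHIALS for m in tribe_a + tribe_b)

    if any_parochials and fighters_a != fighters_b:
        # War: the tribe with more willing fighters wins; every loser pays a cost.
        losers = adjust_b if fighters_a > fighters_b else adjust_a
        for i in range(len(losers)):
            losers[i] -= war_loss
    else:
        # Peace: only tolerant individuals trade with the other tribe and get a bonus.
        for adjust, tribe in ((adjust_a, tribe_a), (adjust_b, tribe_b)):
            for i, member in enumerate(tribe):
                if member not in PAROCHIALS:
                    adjust[i] += peace_bonus
    return adjust_a, adjust_b

# One toy encounter between a mostly parochial tribe and a tolerant tribe.
random.seed(1)
tribe_a = [random.choice([PAROCHIAL_ALTRUIST, PAROCHIAL_NONALTRUIST]) for _ in range(10)]
tribe_b = [random.choice([TOLERANT_ALTRUIST, TOLERANT_NONALTRUIST]) for _ in range(10)]
war_a, war_b = intergroup_stage(tribe_a, tribe_b)
print([round(x + y, 2) for x, y in zip(public_goods_payoffs(tribe_a), war_a)])
print([round(x + y, 2) for x, y in zip(public_goods_payoffs(tribe_b), war_b)])
```

In the full model, individuals then reproduce in proportion to payoffs like these, and tribes that win wars take over their neighbors’ territory, which is what drives the long-run outcomes described below.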

Choi and Bowles found that their model led to two alternative stable outcomes: tribes dominated by either tolerant nonaltruists or parochial altruists. This is almost too tidy, because it looks like a dichotomy between peaceful-but-selfish “moderns” and mutually-aiding, warlike “primitives.” Yet tribal societies really do seem to be more prone to a certain kind of war (more like feuding, really), as Jared Diamond discusses in a 2008 essay for the New Yorker [$-a]. And, even in our modern, globalized society, we are immediately and instinctively suspicious of – hostile to – those different from us. Commenting on Choi and Bowles’s paper in the same issue of Science, Holly Arrow called this the “sharp end of altruism” [$-a] and wondered how to tease apart the apparent association between altruism to neighbors and hostility to outsiders.

The most obvious option may be to expand our definition of “neighbor.” In a world where an Internet user in Malaysia can see (selected portions of) my ramblings on this ‘blog, maybe I’m less of a stranger than I would be otherwise. That’s not much, really, but it’s a start. The wonderful thing about being human is that, understanding our own tendencies, we can seek to overcome them.

References

Arrow, H. (2007). The sharp end of altruism. Science, 318 (5850), 581-582. DOI: 10.1126/science.1150316

Choi, J.-K., & Bowles, S. (2007). The coevolution of parochial altruism and war. Science, 318 (5850), 636-640. DOI: 10.1126/science.1144237

3 thoughts on “The cost – and benefits – of hostility to strangers”

  1. Thanks for posting on recent evolutionary research findings — some very interesting findings that I haven’t come across on other evolutionary science blogs.

    As you point out, a strong basis for distrust/hostility towards strangers is that it acts to stabilise within-tribe altruism, and can help communities to bond, but I suspect there is a more intrinsic basis — which may have been touched on in the $-a papers that I haven’t read — and which you perhaps touch on in your final paragraph.

    As far as I can see, any species that has the intelligence to be capable of both selfish and altruistic behaviour, and that has some (but perhaps not a fully developed) consciousness of the dynamics of reciprocal altruism, will automatically exhibit distrust, and potentially hostility, between one tribe and another. Reciprocal altruism implies a continuous nexus of individuals recognising other individual conspecifics, observing and remembering their behaviour over time, and (consciously or unconsciously) rewarding co-operative behaviour in others and/or punishing selfish behaviour. An intrinsic element of this is “over time”. In a tribe that is somewhat conscious of these dynamics, and experiences incentives for both selfish and co-operative behaviour, it is this continuous, somewhat conscious nexus that builds up co-operation and trust. With increasing intelligence, there is also a reliance on indicators in the absence of directly observed behaviour, e.g. Ug spent a couple of weeks feeding off the rest of us while he went off in the forest by himself, but he brought back a newly-constructed canoe, so this is an indication that he has been pulling his weight.

    There is limited continuous contact with the tribe down the river, hence only a very limited capacity to sustain this continuous nexus, and an extreme reliance on a few indicators, such as the bearing of gifts and brides. However, there is a high level of distrust in this situation. Relying on a few indicators is a very risky business, because every tribe has the opportunity to free-ride off another by means of surprise attack — assuming the other tribe has anything worth taking (set against the risks and effort), such as tools, food stores, women and perhaps human flesh as food. This opportunity should have been particularly evident to the human species, for two reasons.

    First, hunter-gatherers were partly sustained by hunting, so they had the knowledge, skills and tools for surprise attack. Second, the more humans became conscious of the dynamics of reciprocal altruism, the more they would have been able to conceive of the dynamics of inter-group free-riding and/or co-operation. Indeed, it may have been just as much the other way round — the actuality of inter-group free-riding and/or co-operation, and the between-group anti-free-riding mechanism of revenge warfare, may well have driven more conscious awareness of the general dynamics of reciprocal altruism, and hence social intelligence. In fact, I suspect that both hunting and inter-group violence/co-operation have been vital ingredients in the emergence of whatever level of intelligence humans can be said to currently exhibit (despite the fact that I’m a life-long vegetarian and conscientious objector to war).

    Of course, as you indicate, there is now chatter across the internet, and this provides the opportunity for at least some degree of tracking/remembering/rewarding/punishing of each other worldwide.

    Do the dynamics I outline above make sense to you? If so, have they been factored into current evolutionary theory?

    Personally, I’m inclined to accord only a limited level of interest to models like that of Choi/Bowles because they are predicated on fixed, mindless behavioural strategies that radically simplify observable behaviour. Of course, simplification has its place, but the Choi/Bowles model makes me wonder how far advanced evolutionary theory is in 2009.

  2. Chris,

    I think the factors you’ve outlined sound perfectly plausible, and, put together, are probably a more accurate description of human behavior – and some of the specifics have indeed been considered in mathematical models of cooperation and altruism. (I refer to this body of work in this post, but it’s a lot deeper than I can go into.)

    Of course, all mathematical models are necessarily simplified – C&B have set theirs up to focus on a very specific kind of process, rather than consider interactions between all the factors that could be involved in human cooperation. But there are also lots of natural systems where cooperation and altruism occur between individuals who are, relative to humans, pretty much instinct-driven automatons.

    Yuccas and the yucca moths that pollinate them, for example: the moths try to lay as many eggs in the yucca’s flowers as they can (to save pollination work), but the plants kill off flowers that receive too many moth eggs. There’s a “negotiation”, over evolutionary time, that keeps a mutually beneficial relationship stable, as moths evolve tendencies to lay just enough eggs, and yuccas evolve to be sensitive enough to prevent over-exploitation, but not so sensitive that they don’t get enough pollination.

  3. Jeremy ~

    Sorry, I gave the wrong impression with my comment about Choi/Bowles. I didn’t mean to denigrate this model, or mathematical modelling in general. It has its place. I’m just contrasting what little I know of evolutionary modelling with, say, climate change modelling. By comparison, evolutionary modelling seems to be at the sandbox level. Of course, it’s vital to go through the sandbox level before taking the next step, but I sometimes feel a bit disappointed at how little evolutionary theory has evolved.

    I already read all the Evolution/Science posts on your blog, and appreciate the extent to which intraspecific and interspecific “negotiation” (and, for that matter, reliance on indicators and false indicators) is widespread in nature at a relatively automaton-like level. Humans seem to be relatively well advanced in an awareness of these processes at a conscious level, and I’m interested in the drivers by which this change in consciousness evolved, and perhaps continues to evolve.

    Since what I wrote before sounds plausible to you, I’d be interested in your thoughts on what seems to me to be the only viable long-term evolutionary strategy by which humans can make altruism stable, given our extreme reliance on complex, somewhat ambiguous indicators, with all the opportunities this provides for false indicators (whether consciously or unconsciously constructed). The strategy could be described as the Jesus Strategy — though I’m not holding my breath for this title to appear in Nature or Science any time soon. :-)

    The strategy involves being thoroughly astute and free from free-riding: as cunning as a snake and as innocent as a dove. Without astuteness (which implies such things as a willingness to intimately consider the essential nature of selfishness in 40 days in the desert, and to empathise with and understand tax-collectors and the like), there is automatically an ignorance of social reality that maximises the opportunity for others to free-ride, and provides an opening for self-deception in oneself.

    Conversely, without freedom from free-riding, one is automatically caught in the external conflict of deception and/or the internal conflict of self-deception, and one is thereby unable to fully comprehend social reality, with all the disadvantages this entails. One may secure short-term advantage, but this is invariably at the expense of the long term (another word for the fullness of which is eternity).

    Of course, this strategy is radically different from an apparent demonstrated, shared commitment to one of the various belief-systems based on the Golden Rule, which offer only a contingent, limited value as a social indicator of trustability or as a shield against selfishness.
