One way to successfully invade a habitat: eat the competition

An adult Harlequin ladybug, Harmonia axyridis. Photo by Ombrosoparacloucycle.

The Asian Harlequin ladybug, Harmonia axyridis, eats aphids like they’re Popplers, and it’s been repeatedly introduced into the U.S. and Europe to do exactly that. But since it was first introduced, H. axyridis has spread of its own accord, and displaced native ladybugs. This isn’t just because the Harlequin ladybug eats more aphids, or breeds faster, than the locals; it looks like part of the Harlequin’s success is due to the fact that it eats its native competition.

Although they’re known for eating aphids, most ladybugs are perfectly willing to engage in intraguild predation—that is, to eat other insects that are themselves primarily predators. Including other ladybugs. So a team at Wageningen University in the Netherlands set out to see whether H. axyridis might engage in a different kind of intraguild predation than its native competitors—do the Harlequins preferentially attack ladybugs of different species, and, when they do, are they more likely to win?

The team tested this in what they call a “semi-field” experiment, by creating encounters between ladybug larvae on individual leaves of small potted lime trees. They chose two other ladybug species, Coccinella septempunctata and Adalia bipunctata, for comparison to, and competition with, H. axyridis. Then, leaf by leaf, the researchers set up larval ladybug death matches.

Death matches for science, mind you.

The experiment was, basically, this: put two ladybug larvae on the same leaf, and watch what happens. The team staged every possible pairing of larvae from the three ladybug species, so they ended up with observations of each species interacting with (1) another member of its own species and (2 and 3) members of each of the other two species.

A Harlequin ladybug larva, Harmonia axyridis. Photo by kenjobro.

In a majority of these larval ladybug death matches, the paired larvae didn’t actually interact; either they failed to come into contact before the experiment timed out (the researchers gave the larvae 1000 seconds to start rumbling) or one or both larvae jumped off the leaf or crawled back onto the nearest branch. Across all the different possible species pairings, the larvae actually interacted in between 23 and 43 percent of the trials.

However, when the larvae did manage to make contact … they also didn’t attack each other that frequently. Out of hundreds of trials, some of the larval pairings only resulted in one or two aggressive interactions. Most of the time the larvae failed to react, or just turned around and departed the leaf. So, okay, “Larval Ladybug Deathmatch” is probably not coming to next year’s reality TV lineup.

However however, out of the small fraction of trials in which the larvae did interact, and did interact aggressively, Harlequin ladybug larvae were clearly the meanest ladybugs on the leaf: when they attacked the larvae of the other species, they went for the ladybug jugular, and ate what they killed a little more than half the time. (The study’s authors use the word “predate” to describe this kind of interaction, a usage for which I do not care.) Harlequin ladybug larvae would sometimes attack members of their own species, but they never ended up eating them.

In comparison, the other two species hardly engaged in any aggression, and the team recorded only two instances of ladybug-on-ladybug predation in which Harlequin larvae weren’t the predators.

So the authors conclude that Harlequin ladybugs successfully invaded Europe and North America, in part, by eating the larvae of species that would otherwise stand in their way. Based on this data set, though, it’s a bit hard for me to believe this could be a major contributor.

Even on the experimental leaves, the larvae either failed to make contact or didn’t interact aggressively most of the time. Then, there’s some reason to think that the larvae don’t come into such close contact on a regular basis when left to their own devices: the research team also tried to set up death matches by placing the larvae on different leaves of the same tree, and then never saw the larvae wander onto the same leaf. So unless ladybug larvae hang out in groups in nature—and maybe they can, if they happen onto the same aphid-ridden tree—Harlequin ladybugs are probably not chowing down on the competition very often.◼

Reference

Raak-van den Berg, C. L., De Lange, H. J., & Van Lenteren, J. C. (2012). Intraguild predation behaviour of ladybirds in semi-field experiments explains invasion success of Harmonia axyridis. PLoS ONE, 7 DOI: 10.1371/journal.pone.0040681


Color indicates poison in “poison dart” frogs—honestly!

A strawberry poison dart frog; apparently the San Cristobal color morph. Photo by Wilfredo Falcón.

Almost everyone knows the basic story behind the brilliant coloring of poison dart frogs. These tiny tropical rainforest amphibians secrete toxic alkaloids from their skin, and their bright colors are aposematic signals to warn away potential predators.

You’d expect species that are all sending the same message—Poison! Don’t eat!—to use the same signal to do it. Local studies confirm that birds are more likely to attack poison dart frogs who look different from other poison dart frogs in a given area. Yet not all poison dart frogs have the same color pattern, or even similar color patterns. Far from it—frogs within the same species can look completely different.

One possible explanation is that frogs with different coloration are not, in fact, sending the same signal. Brighter color could indicate greater toxicity. That seems to be the case for one highly variable species, the strawberry poison dart frog Dendrobates pumilio. A paper just published as an online, open-access article in The American Naturalist demonstrates that D. pumilio’s colors are “honest signals”—and those signals are directed at specific predators.

The many colors of Dendrobates pumilio. Figure 1 from Maan & Cummings (2012).

The new study’s authors, Martine Maan and Molly Cummings, selected a study species that is a veritable rainbow of aposematism, as you can see from the excerpted figure above. Different populations of Dendrobates pumilio are orange, red, green, blue, and yellow, with or without black spots. Maan and Cummings make sense of that colorful diversity in two major ways: first, by finding out whether there’s a relationship between color and poison, and second, by making an educated guess about how the different color morphs look to D. pumilio’s many predators.

For the first part, Maan and Cummings took an objective measure of color—reflectance spectrum of frogs’ skin, measured under standardized lighting—and compared it to an objective measure of toxicity—how much discomfort mice exhibited from an injection of frog skin extract. (The mouse injection method is apparently a standard toxicity assay, and I guess it makes sense if you don’t know the specific chemicals that make the frogs poisonous.) The coauthors found a strong relationship between skin brightness and toxicity—frogs with brighter coloring were more poisonous.

Objectively bright coloring isn’t quite the same thing as looking bright to a predator, though. Different animals have different color vision—a frog that looks brightly colored to a frog-eating bird might not be particularly showy to a frog-eating snake, because birds and snakes have different suites of sensory cells in their eyes. So the coauthors then fed the spectral readings from the frogs into mathematical models that estimate how the frogs look to different kinds of animal vision. (This approach has been used elsewhere—for instance, to determine how well brood-parasitic cuckoo eggs blend in with their hosts’.) Maan and Cummings applied models based on the visual sensitivity of crabs, snakes, two kinds of bird vision, and frog vision.
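Models of this general kind (often called receptor-noise models) can be sketched in a few lines. Here is a toy version for a hypothetical two-receptor viewer; every spectrum, receptor peak, and noise value below is an invented placeholder, not data from the study:

```python
# A toy receptor-noise color model, the general kind of model used to
# estimate how conspicuous a frog looks to a given viewer. All spectra,
# receptor peaks, and noise values are invented placeholders.
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible range, nm

def gaussian_receptor(peak, width=40.0):
    """Idealized bell-shaped spectral sensitivity for one receptor type."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

def quantum_catch(reflectance, sensitivity):
    """How much light reflected from a surface a receptor absorbs."""
    return float(np.sum(reflectance * sensitivity))

def dichromat_contrast(target, background, peaks=(450, 560), noise=0.1):
    """Chromatic contrast of target vs. background for a two-receptor
    (dichromatic) viewer, in units of 'just noticeable differences'."""
    deltas = []
    for peak in peaks:
        s = gaussian_receptor(peak)
        # Receptor signal: log ratio of target catch to background catch
        deltas.append(np.log(quantum_catch(target, s) /
                             quantum_catch(background, s)))
    # Distance between the two receptor channels, scaled by receptor noise
    return abs(deltas[0] - deltas[1]) / np.sqrt(2 * noise ** 2)

# Placeholder spectra: an "orange" frog vs. a green-leaf background
frog = np.where(wavelengths > 580, 0.8, 0.1)    # reflects long wavelengths
leaf = 0.4 * gaussian_receptor(550, 60) + 0.05  # green reflectance hump

print(dichromat_contrast(frog, leaf) > 0)  # prints True: conspicuous
```

Swapping in different receptor peaks and noise values is what lets one set of measured reflectance spectra be "viewed" through bird, crab, snake, or frog eyes.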

Another strawberry poison dart frog, this time the color morph found on Aguacate. Photo by Drriss.

They found strong relationships between the frogs’ toxicity and their colors as seen by birds, and as seen by other frogs. The crab vision model varied depending on what kind of material the frog would be viewed against—to a crab, the frogs were conspicuous against bark or leaf litter, but not against green leaves. Meanwhile, the snake vision model didn’t perceive any particular relationship between brightness and toxicity. Those results make a lot of sense. Birds are most likely to spot prey from a distance, and make a decision to pursue it or not without getting up close. Crabs aren’t likely to encounter frogs up in the foliage, but on the ground, in the leaf litter. And snakes are less likely to rely on sight than on chemical senses—taste or olfaction—in evaluating a potential meal.

This study doesn’t directly demonstrate the action of natural selection, and that leaves a significant question hanging: Why should Dendrobates pumilio signal its toxicity honestly? Certainly, if you’re a highly toxic frog, you’d want to let predators know; but if you’re less toxic than the frogs in the next population, why would you tell the world? Indeed, other species of poison dart frogs have evolved mimicry—bright colors without poison.

That suggests the honest coloration within D. pumilio is due to more than just selection by predators. Perhaps coloration serves social functions, and then more conspicuous color morphs need to be more toxic to fend off more frequent predator attacks. Or there may be genetic constraints that link bright color and toxicity within the species, and both have evolved local differences due to genetic drift. Finding out how selection and other evolutionary forces have created this pattern would be no small project, but I think it’ll make an interesting story in the end. ◼

References

Darst, C. (2006). A mechanism for diversity in warning signals: Conspicuousness versus toxicity in poison frogs. Proceedings of the National Academy of Sciences USA, 103 (15), 5852-7 DOI: 10.1073/pnas.0600625103

Maan, M., & Cummings, M. (2012). Poison frog colors are honest signals of toxicity, particularly for bird predators. The American Naturalist, 179 (1) DOI: 10.1086/663197


Frightened birds make bad parents

Song sparrow chicks. Photo by Tobyotter.

Predators have an obvious impact on their prey: eating them. But if the threat of predators prompts prey species to change their behavior, those behavioral changes can also affect prey population dynamics [$a]—and thereby, potentially, the prey’s evolution—even if the predators never actually catch any prey.

This is the effect documented in a short, sharp study just published in Science, in which Liana Y. Zanette and her coauthors show that song sparrows raise fewer chicks if they simply think that there are predators nearby [$a].

The team’s experimental design was simple but probably pretty work-intensive. Over the course of one summer on several small islands off the coast of British Columbia, they watched song sparrows choose mates and build nests. Once nests were established, the team surrounded them with anti-predator defenses: netting and electrified fences. They confirmed that these measures kept predators out with regular video surveillance. And then they turned on the loudspeakers.

At some nests, the team broadcast looped recordings of calls made by song sparrow predators—raccoons, crows and ravens, hawks, owls, and cowbirds. At control nests, the broadcast was instead a playlist of similar-sounding calls made by non-predators, including seals, geese, hummingbirds, and loons. The team then monitored the nests, recording the behavior of the mated pair at each nest, and the ultimate success of the eggs they laid.

An adult song sparrow, looking watchful. Photo by kenschneiderusa.

The results are pretty unambiguous. Pairs of song sparrows that heard predator calls laid fewer eggs than pairs that heard non-predators. Of the eggs laid by pairs who heard predator calls, fewer hatched, and of those hatched chicks, fewer survived to fledging. Just the continuous threat of predators—predators that were never visible—reduced the number of chicks the sparrows fledged.
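It’s worth noting that reductions at each breeding stage multiply together, so modest effects can compound into a large overall deficit. A toy sketch, using invented round numbers rather than Zanette and colleagues’ actual measurements:

```python
# Illustrative only: modest reductions at each breeding stage compound
# multiplicatively. These rates are invented round numbers chosen to land
# near the study's overall ~40% reduction, not the actual data.

def fledglings_per_pair(eggs_laid, hatch_rate, fledge_rate):
    """Expected fledged chicks from one nesting attempt."""
    return eggs_laid * hatch_rate * fledge_rate

control = fledglings_per_pair(eggs_laid=4.0, hatch_rate=0.9, fledge_rate=0.9)
scared = fledglings_per_pair(eggs_laid=3.4, hatch_rate=0.8, fledge_rate=0.72)

reduction = 1 - scared / control
print(f"{reduction:.0%}")  # prints 40%
```

No single stage here loses anywhere near forty percent of the chicks; the compounding does the damage.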

The reasons for the reduced offspring are apparent from other behavioral observations. Birds in the predator-call treatment were perpetually on high alert, as measured by “flight initiation distance,” the distance up to which a researcher could approach the nest before the birds took flight. Sparrows in the non-predator treatment let researchers get about 120 meters from the nest before taking off; sparrows in the predator treatment wouldn’t tolerate humans within twice that distance. In the predator treatment, sparrows spent less time sitting on their eggs, and visited to feed their chicks less frequently. Not surprisingly, chicks in the predator treatment also gained less weight than chicks in the non-predator treatment.

And, in what may be the most poignant data set I’ve ever seen in print, the team also measured the skin temperature of chicks in each nest 10 minutes after the parents had left. Chicks in the predator-call treatment were measurably, and significantly, colder.

So the simple fear of predators is enough to prompt free-living song sparrows to lay fewer eggs, and raise fewer of the eggs they do lay to fledging. However, the absolute difference in offspring between sparrow pairs in the predator and non-predator treatments—40%—probably reflects the maximum effect we might expect to see in natural populations.

That’s because, left to themselves, sparrows probably seek nesting spots with less predator activity. Here, all the sparrows had established nests in what, presumably, were the best spots they could find—but for half of them, the new neighborhood suddenly seemed to become a lot less safe shortly after they settled in. What Zanette et al. document is very much a behavioral, short-term response, and it’s one that many prey animals may be able to mitigate, or avoid altogether, with other behavioral responses. It’s hard to say how exactly it reflects the impact that fear of predators might have in sparrow populations unmolested by ornithologists.

Nevertheless, this result does suggest that for many prey animals, the fear of predators can, itself, be something to fear. ◼

References

Creel, S., & Christianson, D. (2008). Relationships between direct predation and risk effects. Trends in Ecology & Evolution, 23 (4), 194-201 DOI: 10.1016/j.tree.2007.12.004

Martin, T. (2011). The cost of fear. Science, 334 (6061), 1353-4 DOI: 10.1126/science.1216109

Zanette, L., White, A., Allen, M., & Clinchy, M. (2011). Perceived predation risk reduces the number of offspring songbirds produce per year. Science, 334 (6061), 1398-1401 DOI: 10.1126/science.1210908


Snake-eating opossums have evolved venom-resistant blood

The humble Virginia opossum can shrug off snakebites that would kill larger mammals. Photo by TexasEagle.

If you were going to pick the traits of a single animal to confer on a superhero, you probably wouldn’t pick the Virginia opossum. Possums are ubiquitous, scruffy, ratlike marsupials, their toothy grins giving the not entirely inaccurate impression that they don’t have much going on upstairs. Until recently, the nicest thing I could think to say about them is that they eat a lot of ticks.

Blood-sucking Lyme disease vectors are only a small part of the opossum’s eclectic diet, however. They also eat quite a few venomous snakes, and this has apparently led them to evolve a trait I could call a superpower without exaggeration: opossum blood is resistant to snake venom.

This curious and useful ability was first documented by J.A. Kilmon in a 1976 paper [$a], in which Kilmon reported field observations and laboratory trials showing that opossums tolerate snakebites without visible ill effect. (If animal experimentation makes you queasy, you might want to go read something else about now. Perhaps a nice post about gerbils?)

A natural bite was observed in the field by a 160 cm eastern diamondback on an adult opossum, Didelphis virginiana. The opossum displayed no apparent distress and this suggested a remarkable tolerance by that animal to envenomation. In order to ascertain if an actual envenomation did take place, Mr. Seashole conducted field experiments by manually causing snakes to inflict actual bites on captured opossums. None of the bites caused visible signs of distress to the opossums.

Kilmon brought possums back to the lab, anesthetized them, hooked them up to heart monitors, and “inflicted” bites on them from diamondback and timber rattlesnakes, water moccasins, and at least one cobra. (Kilmon reports he used 15 snakes in total, but doesn’t break that number down by species.) “None of the five opossums,” he wrote, “developed observable local reactions other than trauma attributable to fang penetration and none developed observable systemic effect, exhibiting negligible alteration of heart rate and respiration.”

A timber rattlesnake—no big deal to an opossum. Photo by Tom Sprinker.

Finally Kilmon injected an anesthetized opossum with enough water moccasin venom to kill five fifteen-kilogram dogs, and observed no reaction beyond a brief drop in blood pressure and small spike in pulse rate—when the possum awoke, it was “apparently healthy.” Upon sacrificing and dissecting the animal, Kilmon found no evidence of organ damage.

Kilmon concludes his brief scientific report with a weird aside about the evolutionary history of opossums, which, had he been writing in 2011, would have made me think his research consisted mainly of skimming the Wikipedia entry for Didelphis virginiana. In the course of reporting the opossum’s taxonomic affiliations and known diet, Kilmon notes offhandedly,

This polyprotodont marsupial is a primitive but also very successful mammal. The opossums of varying species are the only marsupials surviving in the placental world, the predominant marsupial and monotreme mammals of Australia having probably survived due to their isolation. The opossum has remained unchanged for millions of years and probably reached his peak of evolutionary specialization several millions of years ago.

I don’t think he could’ve gotten away with that last sentence in an evolutionary biology journal. It’s true that the common ancestor of opossums and placental mammals (i.e., us) diverged quite a long time ago, that opossum-like critters are known from the fossil record going back that far, and that many opossum traits are thought to be shared with early mammals. But that doesn’t mean opossums “remained unchanged for millions of years.” The lineage leading to modern opossums has been evolving exactly as long as the lineage leading to modern humans—and if the opossum’s lifestyle hasn’t led it to such evolutionary heights as the wheel, war, New York and so forth, then it also hasn’t left the opossum unchanged.

As it happens, a pretty good illustration of this point is the paper that led me to Kilmon’s morbid little study in the first place. Mammalogists Sharon Jansa and Robert Voss have just published a study of one blood protein that may underlie opossums’ resistance to venom. The venom of pit vipers like rattlesnakes and water moccasins targets the blood clotting system—one of the unpleasant effects of a snake bite is internal hemorrhage. So Jansa and Voss examined the evolution of a venom-targeted clotting protein called von Willebrand Factor, or vWF, comparing it across the entire family of opossums, the Didelphidae.

Photo by Maggie Osterberg.

Since the evolutionary origin of the family, the vWF of opossum species that prey on snakes has accumulated more changes than vWF in non-snake-eating species. That’s circumstantial evidence for the effect of natural selection continuously acting on vWF over millions of years. Jansa and Voss picked out several specific changes that are unique to snake-eating opossums, and found that they’re associated with a region of vWF that is known to bind with one of the toxins in pit viper venom.

The authors suggest that opossums may have been engaged in an evolutionary “arms race” against snake venom toxins since they first developed a taste for rattlesnake. In other words, not only is the opossum not unchanged since the early history of mammals, one of the traits that has changed continuously since then may be the very feature that piqued Kilmon’s interest.

References

Jansa, S., & Voss, R. (2011). Adaptive evolution of the venom-targeted vWF protein in opossums that eat pitvipers. PLoS ONE, 6 (6) DOI: 10.1371/journal.pone.0020997

Kilmon, J., Sr. (1976). High tolerance to snake venom by the Virginia opossum, Didelphis virginiana. Toxicon, 14 (4), 337-40 DOI: 10.1016/0041-0101(76)90032-5


Nibbled to distraction: Gerbils infested with fleas don’t watch for foxes

In natural communities, each species is embedded in a web of interactions with other species—predators, prey, competitors, mutualists, and parasites. The effects of all these other species combine in complex, unpredictable ways. I recently discussed a study of protozoans living inside pitcher plants that found predators and competitors can cancel out each other’s evolutionary effects. Now another study finds that parasites and predators can interact to make desert-living gerbils adopt less effective foraging strategies [$a].

Allenby’s gerbil is a small desert rodent native to Israel’s Negev Desert. It makes a living foraging for seeds, which might seem simple enough—but for small desert mammals, it’s a constant balancing act. Foraging requires continuously judging how profitable it is to continue gathering seeds in one spot compared to looking for another, maybe better, spot, all the while watching out for predators.

The red fox—a major threat if you’re a tiny rodent, but hard to watch for when you’re scratching fleas all the time. Photo by HyperViper.

For small mammals, parasites like fleas can impose a real physiological cost—but they might also cause irritation that interferes with effective foraging. This idea led a group of Israeli researchers to experimentally infest captive gerbils with fleas, and release them into an enclosure with a red fox.

It’s okay—the fox was muzzled! The research group was interested in how effectively the gerbils foraged in standardized patches of resources (trays of seed mixed with sand) in the presence of predators, and how being flea-ridden changed that foraging behavior. As a metric of foraging efficiency, they recorded how quickly the gerbils gave up on a single tray and moved on to another, and therefore how many seeds they left behind.

With no fleas, gerbils spent slightly—but not significantly—less time foraging in a single tray when a fox was in the enclosure with them. But gerbils infested with fleas moved on to a new tray substantially faster in the presence of a fox, leaving behind more seeds in the process. The study’s authors suggest that this is because the irritation caused by fleas distracted the gerbils too much to keep watch for a predator and forage at the same time—so flea-ridden gerbils made up for being less watchful by moving between patches of resources more rapidly.
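The metric at work here is what behavioral ecologists call a giving-up density: the food left behind when an animal decides a patch is no longer worth the risk of staying. A minimal sketch of that bookkeeping, with all seed counts invented for illustration:

```python
# Giving-up density (GUD): seeds left in a tray when the forager quits.
# A higher GUD means the patch "felt" more expensive to the forager.
# The seed counts below are invented, not the study's data.

def giving_up_density(seeds_start, seeds_eaten):
    """Seeds remaining when the forager abandons the tray."""
    return seeds_start - seeds_eaten

# Hypothetical mean seeds eaten per 30-seed tray in each condition
mean_eaten = {
    ("no fleas", "no fox"): 24,
    ("no fleas", "fox"):    22,  # slight, non-significant drop
    ("fleas",    "no fox"): 23,
    ("fleas",    "fox"):    15,  # distracted gerbils quit much earlier
}

for (fleas, fox), eaten in mean_eaten.items():
    gud = giving_up_density(30, eaten)
    print(f"{fleas} / {fox}: GUD = {gud} seeds")
```

The qualitative pattern to notice is the interaction: the fox alone barely moves the GUD, but fleas plus fox moves it a lot.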

So for gerbils, the presence of a second, different kind of antagonist amplifies the effects of a nearby predator. Fleas and foxes aren’t just a double whammy—the effects of both together are worse than the sum of each individually.

Reference

Raveh, A., Kotler, B., Abramsky, Z., & Krasnov, B. (2010). Driven to distraction: detecting the hidden costs of flea parasitism through foraging behaviour in gerbils. Ecology Letters DOI: 10.1111/j.1461-0248.2010.01549.x
