The “Big Four,” part I: Natural selection

This post is the first in a special series about four fundamental forces in evolution: natural selection, mutation, genetic drift, and migration.

This post was chosen as an Editor's Selection for ResearchBlogging.org

Among non-biologists, the best-known of the Big Four forces of evolution is almost certainly natural selection. We’ve all heard the catchphrase “survival of the fittest,” and that’s a pretty good, if reductive, summing up of the principle. In more precise terms, here’s how natural selection works:

  • Natural populations of living things vary. Deer vary in how fast they can run, plants vary in how much drought they can tolerate, birds vary in their ability to catch prey or collect seeds—no two critters of the same species are exactly alike.
  • Some of those variable traits determine how many offspring living things have. How well you avoid predators, fight off disease, and collect food all determine how many babies you can make.
  • Many of those variable traits are heritable, passed on from parents to offspring. Faster deer usually have faster fawns; drought-tolerant plants make drought-tolerant seeds.

With these three conditions in place, natural selection occurs: heritable traits that help make more babies become more common. That is, if you have a trait that lets you support more offspring than your neighbor, you’ll have more children than your neighbor, and they’ll have more children than your neighbor’s children, and so on.
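
Those three conditions are all it takes to run selection in silico, too. Here’s a toy sketch in Python with made-up numbers (a ten percent fitness edge for “fast” deer, perfect heritability), just to show the logic of the bullet points above:

```python
import random

def generation(pop, fitness, rng):
    """One round of selection and reproduction: parents are sampled in
    proportion to fitness, and offspring inherit the parental trait."""
    weights = [fitness[trait] for trait in pop]
    return rng.choices(pop, weights=weights, k=len(pop))

rng = random.Random(1)
# Variation in a heritable, fitness-affecting trait:
pop = ["fast"] * 100 + ["slow"] * 900
fitness = {"fast": 1.1, "slow": 1.0}  # fast deer leave 10% more offspring
for _ in range(100):
    pop = generation(pop, fitness, rng)
print(pop.count("fast") / len(pop))  # the fitter heritable type has spread
```

Even that modest fitness difference carries the “fast” type from a tenth of the population to nearly all of it within a hundred generations.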

Fitness-versus-phenotype regressions for directional, stabilizing, and disruptive selection. Graphic by jby.

Measuring selection

Put this way, natural selection is simply a relationship between fitness, the number of offspring an organism can produce (often reported in comparison to the rest of the local population), and phenotype, the value of one or more traits of that organism (wing length, running speed, number of flowers produced, &c). Biologists can measure selection in natural populations by estimating this relationship between fitness (or a proxy for fitness, like growth rate), and phenotypes. Such an analysis should produce something like the regression graphs to the right, in which the relationship might be directional, with greater- (or smaller-) than-average phenotypes having greater fitness; stabilizing, with the average phenotype value having greater fitness; or disruptive, with extreme phenotype values having greater fitness. The slope of the line, or the shape of the curve, is a measure of the strength of natural selection [PDF] on an organism’s phenotype. This approach to measuring selection has been widely applied, and in 2001 a group of biologists led by Joel Kingsolver collected more than 2,500 estimates of the strength of natural selection [PDF].
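
In practice, those estimates come from ordinary linear regression. Here’s a minimal sketch with invented finch-like data (not from any real study): regress relative fitness, offspring count divided by the population mean, on the standardized trait, and the slope is the standardized selection differential that Kingsolver and colleagues tabulated.

```python
import random

rng = random.Random(0)

# Invented data: beak depth (mm) and offspring counts for 500 birds,
# with deeper beaks conferring modestly higher expected fitness.
beak = [rng.gauss(10.0, 1.0) for _ in range(500)]
expected_w = [max(0.1, 1.0 + 0.3 * (b - 10.0)) for b in beak]
offspring = [max(0, round(rng.gauss(w, 0.5))) for w in expected_w]

# Regress relative fitness on the standardized trait; the slope is the
# fitness-phenotype regression described above.
mean_w = sum(offspring) / len(offspring)
rel_w = [w / mean_w for w in offspring]
mean_b = sum(beak) / len(beak)
sd_b = (sum((b - mean_b) ** 2 for b in beak) / len(beak)) ** 0.5
z = [(b - mean_b) / sd_b for b in beak]
# Since z has mean zero by construction, the OLS slope reduces to:
slope = sum(zi * wi for zi, wi in zip(z, rel_w)) / sum(zi * zi for zi in z)
print(round(slope, 2))  # positive slope: directional selection for deeper beaks
```

A slope around 0.1 would be typical of the Kingsolver et al. collection; the made-up birds here are under considerably stronger selection.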

How strong is selection?

Kingsolver et al. found that selection was usually surprisingly weak. Studies with the largest sample sizes, and the most statistical power to detect selection, mostly found directional selection strength (that is, the slope of the fitness-phenotype regression) less than 0.1, and the strength of stabilizing or disruptive selection was similarly low. Does this mean selection doesn’t matter in the short-term evolution of natural populations?

Probably not. The average selection strength estimates from the Kingsolver et al. dataset are actually stronger than selection strength assumed in most mathematical models of evolution. Furthermore, the collected estimates of selection had “long tailed” distributions—a small number of studies found quite strong selection, up to ten times as strong as the average. So maybe rare but strong bouts of selection have disproportionate impact over the long term.

Peter and Rosemary Grant have documented decades of shifting natural selection on Darwin’s finches (Geospiza spp.). Photo by Igooch.

Taking the finch by the beak

Part of the problem with assessing selection in nature is that most datasets measure selection over just one or a few years. One exception is the case of Darwin’s finches in the Galapagos Islands. The Galapagos offers a wide variety of habitat types, and experiences substantial year-to-year environmental variation—a landscape that should exert all sorts of natural selection on its occupants. Peter and Rosemary Grant have studied Galapagos finches for decades now, and found that selection is continuously at work on these unassuming birds. (The Grants’ book How and Why Species Multiply sums up their research program for a lay audience.)

Much of the Grants’ work has focused on the finches’ beaks, which largely determine what food the birds can eat. The distribution of seed sizes available on different Galapagos islands strongly predicts [PDF] the size of finches’ beaks on those islands. In 1989, the Grants published estimates of selection on beak size in the finch species Geospiza conirostris following a drastic wet-to-dry climatic shift that radically changed what foods were available to the finches. They found strong selection [$a], with fitness-phenotype regression slopes as high as 0.37. What’s more, the direction of selection changed dramatically from a very wet year to the dry year immediately afterward, as the finches were forced to move from feeding on small seeds and arthropods—which gave the advantage to shorter beaks—to hard-to-crack seeds, which required deep beaks.

The Grants’ longer-term study of selection on Galapagos finches confirms this image of selection swinging back and forth unpredictably [PDF]. From 1972 to 2001, they tracked populations of the finch species G. scandens and G. fortis, and saw both more gradual long-term changes in the finches’ body size and beak measurements as well as sudden sharp shifts. These changes continually altered the ability of the two species to hybridize, so that some years they were more reproductively isolated than others—and conditions in any one year were poor indicators of what would be going on five, ten, or twenty years later.

So when does selection matter?

The Grants’ study makes natural selection look as shifting and impermanent as the wind. How can it shape patterns of evolution over millions of years, then? One possibility is that trends may emerge over longer periods of time, as wobbly selection moves species in new directions in a drunkard’s walk, with two steps forward, then one step back, then four steps forward. Another is that lasting trends only occur when speciation intervenes to lock in fleeting changes due to variable natural selection [$a]. Much also depends on how selection interacts with mutation, genetic drift, and migration, as I’ll discuss in the rest of this series.

And here’s a shameless plug for a t-shirt. Photo by jby.

References

Futuyma, D. (1987). On the role of species in anagenesis. The American Naturalist, 130 (3), 465-73 DOI: 10.1086/284724

Grant, B.R., & Grant, P.R. (1989). Natural selection in a population of Darwin’s finches. The American Naturalist, 133 (3), 377-93 DOI: 10.1086/284924

Grant, P.R., & Grant, B.R. (2002). Unpredictable evolution in a 30-Year study of Darwin’s finches. Science, 296 (5568), 707-11 DOI: 10.1126/science.1070315

Grant, P.R., & Grant, B.R. (2008). How and Why Species Multiply: The Radiation of Darwin’s Finches. Princeton University Press. Google Books.

Kingsolver, J., Hoekstra, H., Hoekstra, J., Berrigan, D., Vignieri, S., Hill, C., Hoang, A., Gibert, P., & Beerli, P. (2001). The strength of phenotypic selection in natural populations. The American Naturalist, 157 (3), 245-261 DOI: 10.1086/319193

Lande, R. (1976). Natural selection and random genetic drift in phenotypic evolution. Evolution, 30 (2), 314-34 DOI: 10.2307/2407703

Johnson, T., & Barton, N. (2005). Theoretical models of selection and mutation on quantitative traits. Phil. Trans. R. Soc. B, 360 (1459), 1411-25 DOI: 10.1098/rstb.2005.1667

Schluter, D., & Grant, P. (1984). Determinants of morphological patterns in communities of Darwin’s finches. The American Naturalist, 123 (2), 175-96 DOI: 10.1086/284196


Evolving from pathogen to symbiont

Recently the open-access journal PLoS Biology published a really cool study in experimental evolution, in which a disease-causing bacterium was converted to something very like an important plant symbiont. The details of the process are particularly interesting, because the authors actually used natural selection to identify the evolutionary change that makes a pathogen into a mutualist.

Life as we know it needs nitrogen – it’s a key element in amino acids, which mean proteins, which mean structural and metabolic molecules in every living cell. Conveniently for life as we know it, Earth’s atmosphere is 78% nitrogen by volume. Inconveniently, that nitrogen is mostly in a biologically inactive form. Converting that inactive form to biologically useful ammonia is therefore extremely important. This process is nitrogen fixation, and it is best known as the reason for one of the most widespread mutualistic interactions, between bacteria capable of fixing nitrogen and select plant species that can host them.


Clover roots, with nodules visible (click through to the original for a nice, close view). Photo by oceandesetoile.

In this interaction, nitrogen-fixing bacteria infect the roots of a host plant. In response to the infection, the host roots form specialized structures called nodules, which provide the bacteria with sugars produced by the plant. The bacteria produce excess ammonia, which the plant takes up and puts to its own uses. The biggest group of host plants is probably the legumes, which include the clover pictured to the right, as well as beans – this nitrogen-fixation relationship is the reason that beans are the best source of vegetarian protein, and why crop rotation schemes include beans or alfalfa to replenish nitrogen in the soil.

For the nitrogen-fixation mutualism to work, free-living bacteria must successfully infect newly forming roots in a host plant, and then induce them to form nodules. The chemical interactions between bacteria and host plant necessary for establishing the mutualism are pretty well understood, and in fact many of the bacterial genes thought necessary to make it work, including those for nitrogen-fixation and nodule-formation proteins, are conveniently packaged on a plasmid, a self-contained ring of DNA separate from the rest of the bacterial genome that is easily transferred to other bacteria.

Transferring that plasmid is exactly what the new study’s authors did. They transplanted the symbiosis plasmid from the nitrogen-fixing bacterium Cupriavidus taiwanensis into Ralstonia solanacearum, a similar, but disease-causing, bacterium. With the plasmid, Ralstonia fixed nitrogen and produced the protein necessary to induce nodule formation – but host plant roots infected with the engineered Ralstonia didn’t form nodules. Clearly there was more to setting up the mutualism than the genes encoded on the plasmid.


Wild-type colonies of Ralstonia (tagged with fluorescent green) are unable to enter root hairs (A), but colonies with inactivated hrcV genes are able to enter and form “infection threads,” like symbiotic bacteria (B). Detail of Marchetti et al. (2010), figure 2.

This is where the authors turned to natural selection to do the work for them. They generated a genetically variable line of plasmid-carrying Ralstonia, and used this population to infect host plant roots. If any of the bacteria in the variable population bore a mutation (or mutations) necessary for establishing mutualism, they would be able to form nodules in the host roots where others couldn’t. And that is what happened: three strains out of the variable population successfully formed nodules. The authors then sequenced the entire genomes of these strains to find regions of DNA that differed from the ancestral, non-nodule-forming strain.

This procedure identified one particular region of the genome associated with virulence – the disease-causing ability to infect and damage a host – that was inactivated in the nodule-forming mutant strains. As seen in the figure I’ve excerpted above, plasmid-bearing Ralstonia with this mutation were able to form infection threads, an intermediate step to nodule-formation, where plasmid-bearing Ralstonia without the mutation could not. Clever use of experimental evolution helped to identify a critical step in the evolution from pathogenic bacterium to nitrogen-fixing mutualist.

References

Amadou, C., Pascal, G., Mangenot, S., Glew, M., Bontemps, C., Capela, D., Carrere, S., Cruveiller, S., Dossat, C., Lajus, A., Marchetti, M., Poinsot, V., Rouy, Z., Servin, B., Saad, M., Schenowitz, C., Barbe, V., Batut, J., Medigue, C., & Masson-Boivin, C. (2008). Genome sequence of the beta-rhizobium Cupriavidus taiwanensis and comparative genomics of rhizobia. Genome Research, 18 (9), 1472-83 DOI: 10.1101/gr.076448.108

Gitig, D. (2010). Evolving towards mutualism. PLoS Biology, 8 (1) DOI: 10.1371/journal.pbio.1000279

Marchetti, M., Capela, D., Glew, M., Cruveiller, S., Chane-Woon-Ming, B., Gris, C., Timmers, T., Poinsot, V., Gilbert, L., Heeb, P., Médigue, C., Batut, J., & Masson-Boivin, C. (2010). Experimental evolution of a plant pathogen into a legume symbiont. PLoS Biology, 8 (1) DOI: 10.1371/journal.pbio.1000280


Why aren’t there more sickle-cell anemics in the Mediterranean?

The story of sickle-cell anemia and its malaria-protective effects is a textbook case of how environmental context determines the fitness of a given genetic profile. However, the evolution of human blood disorders in response to selection from malaria parasites might be more complicated than that textbook story.



Malaria-causing parasites (dark-stained) among human red blood cells (top), and “sickled” red blood cells (bottom). Photos via WikiMedia Commons.

Malaria is caused by mosquito-spread parasites that attack their hosts’ oxygen-bearing red blood cells. A particular mutation in the gene that codes for part of the hemoglobin molecule – the molecule that actually stores oxygen inside red blood cells – leads to deformed, sickle-shaped blood cells. People who carry two copies of the sickle-cell gene develop sickle-cell disease, in which the sickle-shaped cells reduce oxygen transport efficiency and interfere with blood circulation. People with only one copy of the sickle-cell gene are healthy, and better able to resist malaria infection than those with no copies. The textbook story is that, in regions where malaria is common, such as sub-Saharan Africa, the advantage of malaria resistance is enough to offset the fitness cost of carrying the sickle-cell gene – namely, that one-fourth of the children born to parents who each carry one copy will inherit two copies and develop sickle-cell disease.
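
That trade-off can be written down as a standard one-locus model of balancing selection; the fitness values below are purely illustrative, not measured. If non-carriers (AA) pay a malaria cost s, sickle-cell homozygotes (SS) pay a much larger cost t, and carriers (AS) do best, the sickle-cell allele settles at an equilibrium frequency of s/(s+t) instead of spreading to fixation or vanishing:

```python
def next_allele_freq(p, w_aa, w_as, w_ss):
    """One generation of selection at a diallelic locus under random
    mating; p is the frequency of the sickle-cell allele S."""
    q = 1.0 - p
    w_bar = q * q * w_aa + 2 * p * q * w_as + p * p * w_ss  # mean fitness
    return (p * q * w_as + p * p * w_ss) / w_bar

s, t = 0.1, 0.8  # illustrative costs of malaria (AA) and sickle-cell disease (SS)
p = 0.01         # start the allele rare
for _ in range(1000):
    p = next_allele_freq(p, w_aa=1 - s, w_as=1.0, w_ss=1 - t)
print(round(p, 3))  # settles at s/(s+t) = 0.1/0.9 ≈ 0.111
```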

However, there are regions like the Mediterranean where malaria has historically been prevalent, but in which the human population hasn’t evolved the higher frequency of sickle-cell genes that you’d expect from the scenario outlined above. A new paper in PNAS demonstrates that this may be because of interactions between the sickle-cell gene and another genetic blood disorder, thalassemia [$a].

Thalassemia is a class of genetic disorders affecting the protein subunits that comprise hemoglobin. Each hemoglobin molecule is formed by binding together two “alpha”-type subunits, and two “beta”-type subunits. If there is a shortage of correctly-formed subunits of either type, then hemoglobin formation is impaired, resulting in anemia or (if the mutation stops subunit production altogether) death. However, like sickle-cell genes, thalassemic mutations can confer resistance to malaria; and if alpha-thalassemia is paired with beta-thalassemia, the reduced production of both subunits can balance out.

As it happens, in combination with alpha-thalassemia, the sickle-cell gene’s malaria protection is neutralized. Using population genetic models, the new study’s authors show that this effect may have actively prevented the sickle-cell gene from establishing in the Mediterranean, where alpha- and beta-thalassemias are more common than in Africa. In the Mediterranean, the presence of beta-thalassemia genes reduces the fitness cost of (mild) alpha-thalassemia genes; and in the presence of alpha-thalassemia genes, the sickle-cell gene confers no protection to people with one copy but still induces sickle-cell disease in people with two copies.

These interactions between genes are called epistasis, and they can have dramatic impacts on evolution. Although I haven’t seen many cases as well-characterized as this one, epistasis is probably widespread in the complex systems of genomes, where thousands of regulatory and protein-coding genes interact to build living things.
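
To put a number on “epistasis”: if two variants acted independently, their fitness effects would simply multiply, and any deviation from that product is epistasis. A sketch with hypothetical fitness values (not the paper’s estimates):

```python
# Hypothetical relative fitnesses in a malarial environment:
w_sickle_carrier = 1.10  # one sickle-cell copy, malaria-protected
w_alpha_thal = 1.05      # mild alpha-thalassemia, malaria-protected
w_both_observed = 0.95   # together, protection is lost (illustrative)

# Under independence (no epistasis), fitness effects would multiply:
w_both_expected = w_sickle_carrier * w_alpha_thal
epistasis = w_both_observed - w_both_expected
print(round(epistasis, 3))  # negative: the combination is worse than predicted
```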

References

Penman, B., Pybus, O., Weatherall, D., & Gupta, S. (2009). Epistatic interactions between genetic disorders of hemoglobin can explain why the sickle-cell gene is uncommon in the Mediterranean Proc. Nat. Acad. Sci. USA, 106 (50), 21242-6 DOI: 10.1073/pnas.0910840106


Picky eating, not genetics, splits leaf beetles

Many different factors can conspire to create reproductive isolation between populations and, ultimately, separate species. Disentangling them is often tricky, but a study recently published in PNAS takes a crack at it, demonstrating that two populations of leaf beetles are divided by food preferences, not genetics [$-a].




Neochlamisus larva, and two possible food plants, red maple and willow. Photos by Scott Justis/BugGuide.net, Mary Keim, and John Tann.

Some populations of the leaf beetle Neochlamisus bebbianae eat red maple, and others eat willow; each type grows better on its native host plant. Hybrids between the two forms are possible, but they don’t grow as rapidly when raised on either host. This might mean that ecology — adaptation to the different host plants — is creating reproductive isolation between the two forms of Neochlamisus. But it might also mean that the two forms are genetically incompatible.

Many species are separated by intrinsic genetic incompatibility. In these cases, hybrids have reduced fitness, or die outright, because the two species have evolved separately in such a way that mixed genomes cannot produce important proteins correctly. One example was recently found in two lines of the wildflower Arabidopsis thaliana — both lines had duplicate copies of an important gene, and in each line a different copy mutated into non-functionality, so some hybrids between the two lacked any functional copies [$-a].

To differentiate between this kind of genetic incompatibility and ecological isolation, coauthors Egan and Funk conducted not one but two generations of hybridization between maple and willow Neochlamisus populations. In the first (F1) generation, they bred parents from each host-specialized type; but in the second they performed a “backcross,” breeding the F1 hybrids with mates from one or the other of the parental populations.

This produced a population of backcrossed hybrids with 3/4 of their genes from one parental type, and 1/4 from the other. If intrinsic incompatibility separated the types, then these backcrossed hybrids would grow poorly no matter what their host plant. However, if adaptation to separate host plants isolates the types, then backcrossed hybrids would perform better on the host plant of the type with which they shared more genes. This is what Egan and Funk found — backcrossed hybrid larvae grew faster on maple if they shared more genes with maple-type Neochlamisus, and similarly for willow.
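
The three-quarters-to-one-quarter expectation behind the backcross design is easy to check with a toy simulation of unlinked loci (a hypothetical setup; nothing here comes from Egan and Funk’s actual crosses):

```python
import random

def make_offspring(mom, dad, rng):
    """Each locus of the offspring gets one randomly inherited allele
    from each parent (loci are unlinked, so they assort freely)."""
    return [(rng.choice(m), rng.choice(d)) for m, d in zip(mom, dad)]

def maple_ancestry(genome):
    """Fraction of all alleles that came from the maple-type lineage."""
    return sum(a == "M" for locus in genome for a in locus) / (2 * len(genome))

rng = random.Random(0)
n_loci = 100_000
maple = [("M", "M")] * n_loci   # maple-type parent
willow = [("W", "W")] * n_loci  # willow-type parent

f1 = make_offspring(maple, willow, rng)     # F1 hybrids: exactly 1/2 maple
backcross = make_offspring(f1, maple, rng)  # backcross: ~3/4 maple on average
print(maple_ancestry(f1), round(maple_ancestry(backcross), 2))
```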

References

Bikard, D., Patel, D., Le Mette, C., Giorgi, V., Camilleri, C., Bennett, M., & Loudet, O. (2009). Divergent evolution of duplicate genes leads to genetic incompatibilities within A. thaliana Science, 323 (5914), 623-6 DOI: 10.1126/science.1165917

Egan, S., & Funk, D. (2009). Ecologically dependent postmating isolation between sympatric host forms of Neochlamisus bebbianae leaf beetles Proc. Nat. Acad. Sci. USA, 106 (46), 19426-31 DOI: 10.1073/pnas.0909424106


How to synchronize flowering without really trying

One way plants can gain an advantage in their dealings with pollinators, seed dispersers, or herbivores is to act collectively. For instance, when oak trees husband their resources for an extra-big crop of acorns every few years instead of spreading them out, acorn-eating rodents are overwhelmed by the bumper crop, and more likely to miss some of the acorns, or forget some of the nuts they cache. These benefits of synchronized mass seed production, or “masting,” are straightforward, but how synchronization happens is less clear. A paper in the latest issue of Ecology Letters has an answer — synchronization happens accidentally [$-a].


Bumper acorn crops ensure that squirrels miss a few. Photo by douglas.earl.

When Dan Janzen first described masting as an adaptation in plants’ coevolution with seed predators, he proposed that “an internal physiological system” [$-a] acted as a timer between masting events, with masting ultimately triggered by weather conditions. However, mathematical models have suggested a different possibility, the “resource-budget hypothesis”: that masting synchronization arises through an interaction of resource and pollen limitation [$-a].

Resource limitation works in concert with pollen limitation by catching plants at two stages of the seed-production process. First, if the resources required for seed production are more than can be accumulated in a single year, or if the availability of resources varies from year to year, then some years will be spent building up reserves instead of producing flowers. When reserves are built up, seed production is limited by the availability of pollen to fertilize flowers. Plants that flower when most of the rest of the population doesn’t will fail to set much seed, so they’ll have reserves to make seeds in the next year. This doesn’t require Janzen’s “internal physiological system” for the plants to synchronize, although such a system might evolve to reduce the likelihood of wasting resources by flowering out of synch.
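
You can watch that two-part mechanism synchronize a population in a toy model (my own simplification, in the spirit of the resource-budget hypothesis rather than Satake and Iwasa’s actual equations). Plants gain one unit of reserves a year, flower once reserves reach two units, and spend reserves in proportion to seed set, which is pollen-limited:

```python
def simulate_masting(initial_reserves, years=20, threshold=2.0, cost=2.0):
    """Toy resource-budget model: each plant gains one unit of reserves
    per year and flowers once reserves reach a threshold; reserves spent
    scale with seed set, which is pollen-limited (proportional to the
    fraction of the population flowering that year)."""
    reserves = list(initial_reserves)
    n = len(reserves)
    flowering_fraction = []
    for _ in range(years):
        reserves = [r + 1.0 for r in reserves]          # annual resource gain
        flowering = [i for i, r in enumerate(reserves) if r >= threshold]
        p = len(flowering) / n                          # pollen availability
        for i in flowering:
            reserves[i] -= cost * p                     # spend in proportion to seed set
        flowering_fraction.append(p)
    return flowering_fraction

# Two cohorts that start out of phase:
frac = simulate_masting([0.5] * 50 + [1.5] * 50)
print(frac[:6])  # [0.5, 1.0, 0.0, 1.0, 0.0, 1.0]
```

Started out of phase, the minority flowerers set little seed, keep most of their reserves, and fall into step within two years; from then on the whole population alternates between mass flowering and rest.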

The new paper tests this model in populations of a western U.S. wildflower, Astragalus scaphoides, which flowers at high frequency every alternate year. The authors prevented seed production in the plants by removing their flowers, either in a “press” of three years in a row or in a single “pulse” during one high-flowering year. The plants’ response to these treatments would reveal the role of resource and pollen limitation in synchronizing seed production.

If resource depletion after fruit set prevents reproduction in successive years, we predicted that ‘press’ plants would flower more than control plants every year, as they were never allowed to set fruit. We predicted that ‘pulse’ plants would flower again in 2006, but not set fruit due to density-dependent pollen limitation in a low-flowering year.

The authors also measured the sugars stored in the roots of plants collected before and after flowering in a high-flowering year.


Seed predator in action. Photo by tombream07.

The resource-budget hypothesis worked. Plants prevented from setting seed were forced out of synch with the rest of the population. “Pulse” plants flowered the year after treatment, but because few other plants did, they received little pollen and set little seed. They then had resources to flower yet another year, with the rest of the population this time, and set much more seed, depleting their reserves and bringing them back into synch. “Press” plants continued to flower at high rates each year, as long as they were prevented from setting any seed. Sugar levels built up in the tested roots during non-flowering years, and dropped after high-flowering years.

So masting arises as an emergent result of two limitations acting on plants — the resources needed to make seed, and good access to pollen. A couple of simple rules lead, undirected, to an ordered system that affects entire natural communities.

References

Crone, E., Miller, E., & Sala, A. (2009). How do plants know when other plants are flowering? Resource depletion, pollen limitation and mast-seeding in a perennial wildflower. Ecology Letters, 12 (11), 1119-26 DOI: 10.1111/j.1461-0248.2009.01365.x

Janzen, D. (1971). Seed predation by animals Ann. Rev. Ecol. Syst., 2 (1), 465-92 DOI: 10.1146/annurev.es.02.110171.002341

Janzen, D. (1976). Why bamboos wait so long to flower Ann. Rev. Ecol. Syst., 7 (1), 347-91 DOI: 10.1146/annurev.es.07.110176.002023

Satake, A., & Iwasa, Y. (2000). Pollen coupling of forest trees: Forming synchronized and periodic reproduction out of chaos. J. Theoretical Biol., 203 (2), 63-84 DOI: 10.1006/jtbi.1999.1066


Social termites team up with non-relatives

In social insects, colonies of hundreds or thousands of workers and soldiers forgo reproduction to support one or a few “reproductives” — drones and a queen. In most cases, this isn’t as selfless as it might seem. Because the workers in a colony are all offspring of the queen, they’re really reproducing through her — because the queen shares genes with the workers, when she reproduces it contributes to their evolutionary fitness.

This is called kin selection, and in many cases it’s a good explanation for the way the interests and behavior of individual workers are overridden by the interests of the colony. There are, however, exceptions — and an open-access paper in the latest issue of PNAS describes what looks like a good case: mergers between unrelated colonies of termites.
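
The usual way to formalize kin selection is Hamilton’s rule: a self-sacrificing behavior is favored when rb > c, where r is the actor’s relatedness to the beneficiary, b the beneficiary’s fitness gain, and c the actor’s cost. A two-line sketch with illustrative numbers:

```python
def kin_selection_favored(r, b, c):
    """Hamilton's rule: altruism pays, genetically, when r * b > c."""
    return r * b > c

# Workers rearing the queen's offspring (full siblings in diploid termites, r = 0.5):
print(kin_selection_favored(r=0.5, b=3.0, c=1.0))   # True
# Helping an unrelated colony (r = 0) can never pay through kin selection alone:
print(kin_selection_favored(r=0.0, b=3.0, c=1.0))   # False
```

That zero-relatedness case is why colony mergers are puzzling: workers with no genetic stake in the other colony shouldn’t cooperate with it under kin selection alone.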


Zootermopsis nevadensis, a social insect inclined to negotiated settlements. Photo by BugGuide/ Will Chatfield-Taylor.

The termite Zootermopsis nevadensis lives in small, socially-stratified colonies that tunnel through rotting logs. Each colony has a pair of reproductive individuals, a king and queen, served by sterile workers and soldiers. Multiple unrelated colonies usually nest in a single log, and when they encroach on each other’s territory, something interesting happens — they merge.

In what the authors refer to obliquely as the “interaction” that precedes a merger, the king and queen of one or both colonies may die. Mergers occur in the aftermath, as workers from the two colonies begin to work in concert, and one or a few of them become replacement reproductives. This ability of sterile workers to start reproducing in the absence of a king and queen is unique to termites. DNA analysis shows what happened after mergers — new reproductives could come from either or both colonies, and in some cases they interbred.

It’s this possibility to become genetically invested in the newly merged colony, the authors say, that motivates workers from two unrelated colonies to work together. If this is the case, it means that kin selection is not what keeps merged colonies together. Group selection might be a better explanation. Kin selection is often contrasted with group selection, in which unrelated individuals sacrifice their own interests to those of a larger group, so that their colony can better compete against rival colonies. In a classic 1964 Nature paper [$-a], John Maynard Smith discussed the conditions under which kin selection operates well:

By kin selection I mean the evolution of characteristics which favour the survival of close relatives of the affected individual, by processes which do not require any discontinuities in population breeding structure.

And contrasts them to conditions necessary for group selection to work:

[Under group selection] … If all members of a group acquire some characteristic which, although individually disadvantageous, increases the fitness of the group, then that group is more likely to split into two, and in this way bring about an increase in the proportion of individuals in the whole population with the characteristic in question. The unit on which selection is operating is the group and not the individual.

The ecology of Zootermopsis nevadensis may set the stage for group selection to overpower kin selection. With many small colonies competing for a single rotting log, the benefits of possibly contributing to the reproduction of a larger, more competitive colony make mergers worthwhile. Something similar has been documented in ants, which can form supercolonies of unrelated colonies if there is some external threat (another ant species) to force them to band together — you can find discussion of a recent paper on this case over at Primate Diaries.

References

Maynard Smith, J. (1964). Group selection and kin selection. Nature, 201 (4924), 1145-7 DOI: 10.1038/2011145a0

Johns, P., Howard, K., Breisch, N., Rivera, A., & Thorne, B. (2009). Nonrelatives inherit colony resources in a primitive termite Proc. Nat. Acad. Sci. USA, 106 (41), 17452-6 DOI: 10.1073/pnas.0907961106


First step to mutualism doesn’t look so friendly

Ant-plant protection mutualism is a widespread and elegant species interaction. How do species strike a bargain like this, requiring specialized behaviors and structures in each partner, in the first place? A new report in The American Naturalist suggests an answer: maybe ants took the initiative [$-a].

In exchange for protection from herbivores and competitors [big PDF], “myrmecophytic” host plants grow hollow structures called domatia and often produce nectar to shelter and feed a colony of ants. This mutualism is really a sort of negotiated settlement between the partners; both ants and plants do what they can to get the most out of the interaction. We have evidence in some cases that host plants cut back support for ants if there aren’t any herbivores around; and, in other cases, that ants prune their host plants to prompt the growth of more domatia.



Domatium diversity: Ant domatia on Acacia (above) and Cordia nodosa (below). Photos by Alastair Rae and Russian_in_Brazil.

So it isn’t entirely surprising that there might be cases where that bargain hasn’t been established yet, and that’s what the new paper reports. The observation turned up in connection with one of the most interesting forms of the ant-plant mutualism: the “devil’s gardens” of the Amazonian rainforest. Devil’s gardens are created by colonies of the ant Myrmelachista schumanni, which attacks possible competitors to its preferred host [$-a], Duroia hirsuta, leaving patches where nothing but D. hirsuta grows.

Clued in by native research assistants, the group studying the devil’s garden interaction discovered that trees growing at the edge of a garden are often afflicted with swollen, distorted trunks. Cutting into the swellings, they found them riddled with passages and populated by M. schumanni. The trees in question are not known as myrmecophytes, and it’s not clear that they receive any benefit from hosting ants. In fact, the authors report that ant-occupied trunks are weakened, and prone to breakage under their own weight or under heavy wind.

The paper doesn’t present direct evidence that the ants create the galls, but as the authors explain, this seems likely — M. schumanni kills its hosts’ competition by injecting them with formic acid, which parallels the irritants other gall-making insects inject into their host plants. It makes sense that gall-making might have started as ants’ attempts to kill off trees that are too big to succumb to formic acid outright, but respond to it by growing galls like scar tissue. Furthermore — and this is pure speculation, of course — this looks like a first evolutionary step toward true ant-plant mutualism. Domatia may have originally evolved to redirect ants from more damaging gall-making, and since ants are naturally territorial about their nests, it might not take much behavioral change before they end up protecting their host.

References

Edwards, D., Frederickson, M., Shepard, G., & Yu, D. (2009). A plant needs ants like a dog needs fleas: Myrmelachista schumanni ants gall many tree species to create housing. The American Naturalist, 174 (5), 734-40 DOI: 10.1086/606022

Frederickson, M., Greene, M., & Gordon, D. (2005). “Devil’s gardens” bedevilled by ants Nature, 437 (7058), 495-6 DOI: 10.1038/437495a

Janzen, D. (1966). Coevolution of mutualism between ants and acacias in Central America Evolution, 20 (3), 249-75 DOI: 10.2307/2406628


A helpful invasive species?

Introduced species can wreak havoc on the ecosystems they invade. But what happens after they’ve been established for centuries? A new study in the latest Proceedings of the Royal Society suggests that, in one case, an introduced species has actually become an important part of the native ecosystem — and helps protect native species from another invader [$-a].



Dingoes (above) control red foxes, which is good for native critters. Photos by ogwen and HyperViper.

The introduced species in question is the Australian dingo, the wild descendant of domestic dogs [$-a] that moved Down Under with prehistoric human settlers. Today, some 5,000 years after their introduction, dingoes are the largest predator in much of Australia, and they were a prominent part of the ecosystem encountered by European settlers. Europeans, like previous waves of human arrivals, brought their own domestic and semi-domestic animals — including red foxes, which prey on small native mammals.

The new study’s authors hypothesized that because dingoes reduce red fox activity both through direct predation and through competition for larger prey species, dingoes should reduce fox predation on the smallest native mammals. At the same time, dingoes prey on kangaroos, the largest herbivore in the Australian bush — and reducing kangaroo populations should increase grass cover, providing more habitat for small native mammals. When the authors compared study sites with dingoes present to sites where dingoes had been excluded to protect livestock, this is what they found: increased grass cover, and greater diversity of small native mammals where dingoes were present.

Recently a news article in Nature discussed ragamuffin earth [$-a] — the idea that human interference in nature has so dramatically changed natural systems that it may often be impossible to restore “pristine” ecological communities. In these cases, some ecologists say, conservation efforts might be better focused on how to maintain and improve the diversity and productivity of the novel ecosystems we’ve inadvertently created. It looks as though the dingo could be a poster child for exactly this approach.

References

Letnic, M., Koch, F., Gordon, C., Crowther, M., & Dickman, C. (2009). Keystone effects of an alien top-predator stem extinctions of native mammals Proc. R. Soc. B, 276 (1671), 3249-3256 DOI: 10.1098/rspb.2009.0574

Marris, E. (2009). Ecology: Ragamuffin Earth Nature, 460 (7254), 450-3 DOI: 10.1038/460450a

Savolainen, P. (2004). A detailed picture of the origin of the Australian dingo, obtained from the study of mitochondrial DNA Proc. Nat. Acad. Sci. USA, 101 (33), 12387-90 DOI: 10.1073/pnas.0401814101
