Why aren’t there more sickle-cell anemics in the Mediterranean?

The story of sickle-cell anemia and its malaria-protective effects is a textbook case of how environmental context determines the fitness of a given genetic profile. However, the evolution of human blood disorders in response to selection from malaria parasites might be more complicated than that textbook story.



Malaria-causing parasites (dark-stained) among human red blood cells (top), and “sickled” red blood cells (bottom). Photos via WikiMedia Commons.

Malaria is caused by mosquito-spread parasites that attack their hosts’ oxygen-bearing red blood cells. A particular mutation in the gene that codes for part of the hemoglobin molecule – the molecule that actually stores oxygen inside red blood cells – leads to deformed, sickle-shaped blood cells. People who carry two copies of the sickle-cell gene develop sickle-cell disease, in which the sickle-shaped cells reduce oxygen transport efficiency and interfere with blood circulation. People with only one copy of the sickle-cell gene are healthy and better able to resist malaria infection than those with no copies. The textbook story is that, in regions where malaria is common, such as sub-Saharan Africa, the advantage of malaria resistance is enough to offset the fitness cost of carrying the sickle-cell gene: on average, one-fourth of children born to parents who each carry one copy will inherit two copies and develop sickle-cell disease.
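For readers who like to see the arithmetic, here’s a minimal sketch in Python of that balancing act – the classic heterozygote-advantage equilibrium. The fitness costs are invented for illustration, not estimates from any real population.

```python
# A minimal sketch of the textbook heterozygote-advantage story.
# The selection coefficients below are illustrative assumptions, not data.

def sickle_allele_equilibrium(s_malaria, s_disease):
    """Stable frequency of the sickle-cell allele under heterozygote advantage.

    Relative fitnesses: non-carriers (AA) = 1 - s_malaria (malaria-susceptible),
    carriers (AS) = 1 (protected), homozygotes (SS) = 1 - s_disease (sickle-cell
    disease). The standard result is s_malaria / (s_malaria + s_disease).
    """
    return s_malaria / (s_malaria + s_disease)

# Hypothetical numbers: a 15% malaria cost to non-carriers and an 80% cost of
# sickle-cell disease would hold the allele at roughly 16% frequency.
print(f"equilibrium allele frequency: {sickle_allele_equilibrium(0.15, 0.80):.2f}")

# The Mendelian ratio mentioned above: two AS parents expect 1/4 SS children.
print(f"chance a child of two carriers develops sickle-cell disease: {0.5 * 0.5}")
```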

However, there are regions like the Mediterranean where malaria has historically been prevalent, but in which the human population hasn’t evolved the higher frequency of sickle-cell genes that you’d expect from the scenario outlined above. A new paper in PNAS demonstrates that this may be because of interactions between the sickle-cell gene and another genetic blood disorder, thalassemia [$a].

Thalassemia is a class of genetic disorders affecting the protein subunits that comprise hemoglobin. Each hemoglobin molecule is formed by binding together two “alpha”-type subunits, and two “beta”-type subunits. If there is a shortage of correctly-formed subunits of either type, then hemoglobin formation is impaired, resulting in anemia or (if the mutation stops subunit production altogether) death. However, like sickle-cell genes, thalassemic mutations can confer resistance to malaria; and if alpha-thalassemia is paired with beta-thalassemia, the reduced production of both subunits can balance out.

As it happens, in combination with alpha-thalassemia, the sickle-cell gene’s malaria protection is neutralized. Using population genetic models, the new study’s authors show that this effect may have actively prevented the sickle-cell gene from establishing in the Mediterranean, where alpha- and beta-thalassemias are more common than in Africa. In the Mediterranean, the presence of beta-thalassemia genes reduces the fitness cost of (mild) alpha-thalassemia genes; and in the presence of alpha-thalassemia genes, the sickle-cell gene confers no protection to people with one copy but still induces sickle-cell disease in people with two copies.
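To get a feel for how that plays out, here’s a toy simulation in Python. To be clear, this is my own drastic simplification, not Penman et al.’s model: alpha-thalassemia is treated as a fixed genetic background rather than a co-evolving locus, and every fitness value is invented for illustration.

```python
# Toy model: the sickle-cell allele invading populations with and without a
# widespread alpha-thalassemia background that cancels heterozygote protection.
# Parameters and structure are illustrative assumptions, not the PNAS model.

def next_sickle_freq(q, f_thal, s_malaria=0.15, s_disease=0.80):
    """One generation of selection on the sickle-cell allele at frequency q.

    f_thal is the (assumed) fraction of the population whose alpha-thalassemia
    background neutralizes the heterozygote's malaria protection.
    """
    p = 1.0 - q
    w_AA = 1.0 - s_malaria                                  # never protected
    w_AS = (1.0 - f_thal) + f_thal * (1.0 - s_malaria)      # protection lost on thal background
    w_SS = 1.0 - s_disease                                  # sickle-cell disease either way
    w_bar = p * p * w_AA + 2 * p * q * w_AS + q * q * w_SS  # mean fitness
    return (p * q * w_AS + q * q * w_SS) / w_bar

for f_thal in (0.0, 0.9):
    q = 0.01                                  # sickle-cell allele introduced at 1%
    for _ in range(200):
        q = next_sickle_freq(q, f_thal)
    print(f"thalassemia background {f_thal:.0%}: sickle-cell allele settles near {q:.3f}")
```

With no thalassemia background the allele climbs to the familiar balanced polymorphism; with a widespread thalassemia background it barely gains ground – which is the qualitative pattern the paper explains.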

These interactions between genes are called epistasis, and they can have dramatic impacts on evolution. Although I haven’t seen many cases as well-characterized as this one, epistasis is probably widespread in the complex systems of genomes, where thousands of regulatory and protein-coding genes interact to build living things.

References

Penman, B., Pybus, O., Weatherall, D., & Gupta, S. (2009). Epistatic interactions between genetic disorders of hemoglobin can explain why the sickle-cell gene is uncommon in the Mediterranean. Proceedings of the National Academy of Sciences USA, 106 (50), 21242-6. DOI: 10.1073/pnas.0910840106

Evolution-proof insecticide?

In this week’s issue of PLoS Biology, an essay describes the perfect means for controlling malaria-carrying mosquitoes: an “evolution-proof insecticide.” By taking advantage of the life history traits of both mosquitoes and the malaria parasite, Read et al. argue it should be possible to create an insecticide that will cut malaria transmission without selecting for resistance in the mosquitoes.

Malaria remains a major public health problem in much of the world – according to the World Health Organization, a child dies of the disease every 30 seconds, and the cost of malaria may cut economic growth by as much as 1.3% in countries with high infection rates. In the absence of a vaccine, the best approach for malaria management is to control the mosquitoes that transmit the malaria parasite. This is usually done with insecticides, but these have a limited useful lifespan, as they create strong selective pressure for mosquito populations to evolve resistance.


Photo by LoreleiRanveig.

As Read et al. point out, it’s not that we need to kill off mosquitoes as such; we just need to stop them from transmitting malaria. If this can be accomplished without strongly reducing the mosquitoes’ fitness, it would reduce or eliminate selection for resistance. Malaria typically needs a long time to incubate inside a mosquito before it becomes transmissible to humans, and, in what Read et al. call “one of the great ironies of malaria,” this incubation time is longer than most mosquitoes live. That is, the mosquitoes who successfully transmit malaria are the small proportion of the population who live long enough to incubate the parasite.

Here’s where evolutionary biology interacts with the life histories of mosquitoes and malaria parasites in a highly convenient way: an insecticide that selectively targets older mosquitoes will have a smaller impact on the mosquito population’s fitness. This is because most of a female mosquito’s fitness – the total number of offspring she produces – is concentrated in her first one or two egg-laying cycles. Her fitness can increase if she survives to complete more cycles, but it’s pretty rare that she does. From natural selection’s point of view, that first batch of eggs counts much more than possible future batches, because those later batches aren’t very likely to happen.
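A quick back-of-the-envelope calculation makes the point; the per-cycle survival probability below is an assumption chosen for illustration, not a measured value.

```python
# If a female's chance of surviving from one egg-laying cycle to the next is
# modest, her expected lifetime reproduction is dominated by the first cycles.
# The survival probability and constant clutch size are illustrative assumptions.

def fitness_share_by_cycle(p_survive, n_cycles=10):
    """Fraction of expected lifetime egg output laid in each cycle, assuming a
    fixed per-cycle survival probability and one clutch per completed cycle."""
    expected = [p_survive ** k for k in range(1, n_cycles + 1)]
    total = sum(expected)
    return [e / total for e in expected]

shares = fitness_share_by_cycle(p_survive=0.5)
print(f"share of lifetime fitness from the first two cycles: {sum(shares[:2]):.0%}")
```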

For that hypothetical female mosquito to transmit malaria, she has to bite an infected human in the course of feeding to fuel one egg-laying cycle, then incubate the malaria parasites for an additional two to six cycles. Therefore, say Read et al., an insecticide that doesn’t harm mosquitoes until they complete their first few egg-laying cycles is the “evolution-proof” solution – the only offspring it “steals” from the affected mosquitoes were pretty improbable anyway, and it prevents the malaria parasites from incubating long enough to successfully infect a new human host.
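Here’s that asymmetry in made-up numbers: a hypothetical insecticide that only kills mosquitoes after their first few egg-laying cycles leaves most of their expected reproduction intact – so selection for resistance stays weak – while blocking the long survival the parasite needs.

```python
# Back-of-the-envelope comparison, with invented parameters: how much
# reproduction a late-acting insecticide leaves untouched versus how much
# malaria transmission it removes.

p_survive = 0.5      # assumed chance of surviving each egg-laying cycle
kill_after = 3       # insecticide acts only after this many completed cycles
incubation = 5       # cycles a mosquito must survive before it can transmit

# Expected lifetime egg output (one clutch per completed cycle, for illustration).
eggs_untouched = sum(p_survive ** k for k in range(1, 21))
eggs_with_insecticide = sum(p_survive ** k for k in range(1, kill_after + 1))
print(f"reproduction retained under the insecticide: "
      f"{eggs_with_insecticide / eggs_untouched:.0%}")

# Chance of living long enough to pass malaria to a new host.
transmit_before = p_survive ** incubation
transmit_after = p_survive ** incubation if kill_after >= incubation else 0.0
print(f"transmission chance: {transmit_before:.3f} -> {transmit_after:.3f}")
```

With these toy numbers, the insecticide leaves about 88% of expected reproduction in place while driving the chance of transmitting malaria to zero.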

As it happens, the evolution-proof insecticide might not be a chemical agent, but a biological one. A paper I discussed back in January suggested that infecting malaria-carrying mosquitoes with the parasitic Wolbachia bacterium could control mosquito populations [$-a] by, yes, reducing their total lifespan to something less than the malaria parasite’s incubation time. In short, it looks like the goal of a malaria-free world is not as improbable as it used to be.

References

McMeniman, C., Lane, R., Cass, B., Fong, A., Sidhu, M., Wang, Y., & O’Neill, S. (2009). Stable introduction of a life-shortening Wolbachia infection into the mosquito Aedes aegypti. Science, 323 (5910), 141-144. DOI: 10.1126/science.1165326

Read, A., Lynch, P., & Thomas, M. (2009). How to make evolution-proof insecticides for malaria control. PLoS Biology, 7 (4). DOI: 10.1371/journal.pbio.1000058