
Imagine walking into a pharmacy a few centuries ago, seeking relief from an ailment. You might expect an herbal concoction, perhaps some leeches, or a tincture. But what if the prescribed remedy included powdered human skull, dried blood, or even a piece of mummified flesh? This might sound like a macabre fantasy, yet for a significant span of our recorded past, the consumption of human remains for therapeutic purposes was not just common but considered a legitimate, even sophisticated, medical practice across various cultures and continents.
This peculiar chapter in medical history, sometimes termed medicinal anthropophagy or funerary cannibalism, isn’t confined to a single ancient civilization. Its roots stretch back thousands of years, evolving through the beliefs of diverse societies and persisting well into the early modern era. Understanding this practice requires suspending our modern sensibilities and examining the historical context in which these beliefs flourished. What drove people to believe that consuming parts of the dead could bring health to the living?
The story often begins with ancient Egypt. The highly sophisticated methods of mummification produced remarkably well-preserved bodies, giving rise to a unique commodity: mummia. Initially, mummia referred to bitumen or pitch, a dark, tar-like substance used in embalming, which was believed to possess healing properties. However, as trade routes opened and demand grew, whether through misunderstanding or deliberate deception, actual mummified human remains came to be substituted for the original substance, and the word mummia eventually became synonymous with powdered human flesh itself. From around the 12th century onwards, this powdered flesh became a widely sought-after ingredient in European apothecaries. It was prescribed for everything from epilepsy and headaches to internal bleeding and plague, thought to carry the life force or spirit of the deceased, thereby imparting strength or healing to the consumer.
The belief wasn’t limited to ancient Egyptian imports. European physicians and alchemists expanded upon the concept, often using locally sourced human remains. Think of “skull moss,” a preparation made from moss scraped off the skull of a person who had died violently, believed to be potent against epilepsy. Or consider the application of human fat, sometimes rendered from executed criminals, for treating bruises, sprains, or even nerve pain. The logic, however grim it appears today, often hinged on the idea that the body retained its vital essence after death, or that specific parts held unique powers. A healthy person’s blood, for instance, might be seen as a source of vitality for the anemic, and the brain as a remedy for disorders of the mind. This era highlights a broader historical tendency to seek remedies in the natural world, even if that “natural world” included the human body itself.
During the 17th century, the practice reached its peak. Distinguished figures like King Charles II of England famously consumed “King’s Drops,” a concoction that included powdered human skull, believed to be effective against a range of ailments. Pharmacists of the time proudly displayed human bones and dried flesh in their shops, much like they would exotic herbs or rare minerals. This wasn’t merely a fringe belief; it was integrated into the mainstream medical pharmacopeia, reflecting a blend of empirical observation, superstition, and a desperate search for cures in an age before modern pharmacology and diagnostics. The prevailing medical philosophy, often rooted in humoral theory, sometimes justified these practices by associating bodily fluids and parts with various temperaments and health states.
The gradual decline of medicinal cannibalism began in the 18th century, coinciding with the rise of Enlightenment thinking, greater scientific scrutiny, and a burgeoning understanding of human anatomy and physiology. Physicians started to question the efficacy of these remedies, realizing that the supposed benefits were often anecdotal or coincidental. Ethical considerations also began to surface more prominently, challenging the moral implications of desecrating graves and consuming human flesh. By the 19th century, while some remnants of the practice lingered in folk medicine, it had largely been relegated to the annals of curious historical errors within established medical practice. This shift marked a significant evolution in medical thought, moving away from mystical beliefs towards empirical evidence.
Reflecting on this peculiar historical phase offers a window into the complex interplay of cultural beliefs, scientific understanding, and human desperation. It reminds us that what is considered “medicine” is deeply contextual and evolves dramatically over time. From powdered mummies to royal elixirs, the journey of medicinal anthropophagy underscores humanity’s relentless quest for healing, even if that path sometimes led to remedies that now seem profoundly unsettling. Our understanding of the human body and the mechanisms of disease has come a long way since then, forging a different, more scientifically grounded future for medicine.