We Trust the Bones, But Not the Bodies
Why environmental health is held to a higher—and deadlier—standard of proof
In 2017, skull fragments from Jebel Irhoud, dated to about 300,000 years ago, pushed the origin of Homo sapiens deeper into the past—and farther across Africa—than many researchers had assumed. A handful of bones and stone tools was enough to rewrite a chapter of human history. Archaeologists debated, revised timelines, published papers, and moved on. No one demanded randomized trials. No one insisted on absolute certainty before updating the story.
Yet when observational epidemiology raises alarms about toxic chemicals, the reaction is very different.
We are told the studies are “correlational.” That other explanations can’t be ruled out. That causality hasn’t been proven. And so—again and again—we dismiss the evidence, delay regulation, and quietly allow harm to continue.
This double standard is not accidental.
Journalists routinely rely on anecdotes to hook readers and add depth to their reporting. Paleontologists redraw the human family tree based on fragments of bone. Geneticists invoke twin studies to claim that 50 to 80 percent of disease is inevitable.
But when epidemiologists document higher risks of cancer, autism, or cardiovascular disease linked to toxic chemicals or pollution, skepticism hardens into paralysis.
Why?
Correlation, Association, and a Convenient Confusion
Part of the problem lies in how we talk about evidence. Critics often lump all observational research together and dismiss it as “correlational.” The word sounds damning, but it hides an important distinction.
Most epidemiologic studies do not stop at correlation. They treat it as a starting point. Researchers test alternative explanations. They look for dose–response patterns, pay close attention to timing—especially early development—check consistency across populations, and ask whether findings make biological sense. They examine whether associations persist after accounting for age, income, smoking, diet, occupation, and co-exposures.
Ironically, some of the studies most often treated as near-causal—such as twin studies—are correlational by design, resting on strong assumptions about shared environments and gene–environment independence. I explore those limitations in more detail elsewhere (Why Disease Isn’t in Our DNA).
In plain terms: twin studies often accept correlation and interpret it. Epidemiology treats correlation as something to challenge—and tries to break it. And yet, it is epidemiology—not genetics—that is routinely dismissed as “just correlational.”
The point is simple: different kinds of evidence are judged by very different standards—and epidemiology consistently gets the harshest one.
The Chemical Blind Spot—and Who Benefits
Nowhere is this blind spot more consequential than with pollution and chemicals in commerce.
Lead is a textbook example. Observational studies linked low-level lead exposure to reduced IQ, behavioral problems, and later cardiovascular disease long before regulation caught up. Industry emphasized uncertainty, attacked scientists, and demanded impossible standards of proof. We now know there is no safe level—and the costs of delay are written into classrooms, communities, and shortened lives.
PFAS followed the same script. Observational studies linked them to immune suppression, reduced vaccine response, thyroid disease, lipid abnormalities, and obesity in children. These were not isolated findings. The associations persisted across studies, populations, and methods. Still, action lagged.
Asbestos. Tobacco. Benzene. Each time, observational evidence raised early warnings. Each time, it was dismissed as inconclusive. Each time, regulation arrived late.
These studies were never meant to deliver absolute proof. They were designed to serve as early warnings—especially for industrial chemicals and pesticides that undergo far less evaluation than drugs. Regulatory approval should be treated as provisional, not as a golden seal. When even a handful of well-conducted studies point to harm, action should follow.
This is not a failure of imagination. It is a system that protects short-term profits by shifting long-term costs—measured in disease, disability, and premature death—onto the public.
And who pays for the cleanup?
Who pays for the delay?
You do.
The taxpayer.
Early Warnings
None of this means observational studies should be accepted uncritically. They can be wrong. They require replication, transparency, and humility.
But they also deserve to be judged honestly—and proportionately.
When an exposure is widespread, persistent, and involuntary—and when studies repeatedly point in the same direction—waiting for perfect certainty is not scientific rigor. It is a policy decision. And history suggests it is usually the wrong one.
Archaeologists revise history when new bones appear.
Geneticists revise estimates when assumptions crumble.
Journalists revise narratives when facts change.
We should grant epidemiology the same courtesy.
Because long before certainty arrives, bodies—quietly, collectively—are already telling the story.

"Correlation may not prove causation, but it is a good hint!"
Observing correlations is the beginning of the scientific process. It is hair-pulling lunacy to ignore correlation. It's like my former MIL telling me that getting violently ill every time after eating certain foods didn't mean anything and that I should keep eating them. Yeah, she was a bitch, and so is anyone who tells you to keep consuming poison because it isn't "proved" yet.
There was an interesting Cochrane Review several years ago on sources of lead poisoning in children. It concluded that paint was not a source of lead exposure to children because it excluded any study that was not controlled. None of the studies showing children's blood lead levels dropping after lead paint abatement included a control group of children who received no intervention. Of course, such a control group would have been unethical.