This post was originally published on Mark's Daily Apple (http://www.marksdailyapple.com/).
I’m a huge proponent of self-experimentation. We can’t always rely on funding for research relevant to our needs, interests, and desires, and even the studies that are relevant use participants who are not us. We like control, when it comes down to it. We want to be the arbiters of our own destinies, and running formal or informal experiments of one can help us get there. But as helpful as it can be, self-experimentation has both inherent limits and common pitfalls that people fail to take into account when designing their experiments of one.
I’m not referring to the basics of experimentation, like the need to control for variables or the importance of limiting the number of interventions you test at once. You guys know that stuff. I’m talking about the limitations most people don’t foresee:
No baseline measurements.
The best studies establish a baseline against which any experimental effects are plotted; these are baked into the design. If you have no baseline, you can’t compare your “findings” to anything. In fact, you don’t even have any solid findings without a baseline. You just have feelings, or hunches, which are fine, but they don’t tell the complete story or establish causation. Since many self-experimenters tend to be less formal than clinical researchers, more “fly by the seat of their pants,” failing to establish a baseline is a common weakness.
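To make the baseline idea concrete, here's a minimal sketch (in Python, with made-up numbers for a hypothetical resting heart rate experiment, not anything from a real study) of what a baseline actually buys you: two weeks of measurements taken before the intervention, so the two weeks taken after it have something to be compared against.

```python
from statistics import mean, stdev

# Hypothetical daily resting heart rate (bpm), recorded for two weeks
# BEFORE starting the intervention: this is the baseline.
baseline = [62, 64, 61, 63, 65, 62, 60, 63, 64, 62, 61, 63, 62, 64]

# The same measurement for two weeks AFTER starting the intervention.
intervention = [59, 60, 58, 61, 59, 60, 57, 58, 60, 59, 58, 60, 59, 57]

print(f"Baseline:     {mean(baseline):.1f} bpm (sd {stdev(baseline):.1f})")
print(f"Intervention: {mean(intervention):.1f} bpm (sd {stdev(intervention):.1f})")
print(f"Change from baseline: {mean(intervention) - mean(baseline):+.1f} bpm")

# Without the first list, the second is just a pile of numbers with
# nothing to anchor it: a hunch, not a finding.
```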
Lack of placebo.
When applicable, a placebo — an inactive control against which the active experimental intervention is compared — strengthens an experiment’s impact. If a person takes the placebo (often a sugar pill) and it works just as well as the active intervention, researchers know the latter probably isn’t all that active after all.
Inserting a placebo into our self-experiments isn’t always viable, of course (how would you placebo-control an experiment testing the effect of cold water plunges?), and it adds a level of complexity to what we often expect to be a simple case of “let’s try this out.” Also, some people simply scoff at the notion that they’d be susceptible to placebo. (“Sure, those anonymous people in the other study might be ‘weak enough’ to fall for it, but not me!”) Which is why it’s easy to forget (willfully or otherwise) to use a placebo.
Lack of blinding.
The best experiments are blinded, which prevents the participants from knowing whether they’re receiving the active intervention or the placebo. This is important because the simple knowledge of what you’re taking can change your response to the intervention, especially when it’s something you “want” to work — which is exactly what most self-experimenters are testing. Blinding allows researchers to measure effects independent of the subject’s expectations, desires, or biases.
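Blinding yourself is trickier than blinding somebody else, but it can be approximated. Here's one hedged sketch of how you might do it (my own illustration, with hypothetical file names, not a prescription): have a friend fill identical capsules, half active and half inert, label the bottles only "A" and "B", and let a short script decide the schedule and seal the answer key away until all the data are in.

```python
import json
import random

# Hypothetical setup: two identical-looking bottles, "A" and "B". One
# holds the supplement, the other an inert filler. Which is which gets
# decided here and written to a file you agree not to open until the end.
labels = ["A", "B"]
random.shuffle(labels)
key = {"active": labels[0], "placebo": labels[1]}

# A 28-day schedule that only ever refers to the blinded labels.
schedule = [random.choice(["A", "B"]) for _ in range(28)]

with open("blinding_key.json", "w") as f:
    json.dump(key, f)  # no peeking until the experiment is over

with open("schedule.txt", "w") as f:
    for day, label in enumerate(schedule, start=1):
        f.write(f"Day {day}: take bottle {label}\n")

print("Schedule written to schedule.txt; the key is sealed in blinding_key.json.")
```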
Lack of double-blinding.
In a double-blinded study, neither the participants nor the clinicians know who’s receiving the placebo and who’s receiving the treatment. This matters because if clinicians know they’re giving subjects a placebo, their interactions with those subjects will change in subtle, subconscious ways, which can alter the treatment’s apparent effect. Ideally, double-blinding also extends to the data collectors, outcome adjudicators, and data analysts involved in the study to remove any trace of bias or expectation.
Lack of applicability to other people.
Most people who smoke cigarettes never get lung cancer. We all have an anecdote of a grandparent who smoked multiple packs a day and lived to the ripe old age of 90+ without ever developing a serious smoking-related disease. And if that cancer-resistant grandpa conducted a self-experiment to test the effects of smoking on cancer risk, he’d conclude that it was completely safe for him. And it would be, for him, in that instance. But because of research involving tens of thousands of subjects, we have a strong idea that smokers have a 15x greater chance of developing lung cancer than non-smokers. The advantage of funded research and experiments of 20, 50, or 1,000 people is that the results are more applicable to more people.
This limitation isn’t a problem if you acknowledge that your self-experiment is about you and you alone and withhold application to the rest of humanity, but it is a limitation.
Regression to the mean.
Eventually, we recover from illness on our own. Left to its own devices, the body will heal. And while it’s attractive to credit the vitamin C we took for stopping our flu in its tracks, the same thing could have happened had we taken nothing at all. This is even more likely when we realize that people generally turn to self-experimentation when symptoms are at their worst, when there’s nowhere to go but back toward the baseline. That’s regression to the mean: unusually bad measurements tend to be followed by more ordinary ones, whether or not we intervene.
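A quick simulation shows how strong this effect can be. The numbers below are invented and the "intervention" does literally nothing, yet because it's only started on the worst days, the next day looks like an improvement; a sketch, not data from any real study.

```python
import random

random.seed(42)

# A symptom score that just fluctuates around a stable average of 5
# (higher is worse). There is no trend and no treatment effect at all.
days = [random.gauss(5, 2) for _ in range(10_000)]

# "Start an intervention" only on unusually bad days (score above 8)...
start_days = [i for i, score in enumerate(days[:-1]) if score > 8]

# ...then check the very next day, having changed nothing.
avg_start = sum(days[i] for i in start_days) / len(start_days)
avg_next = sum(days[i + 1] for i in start_days) / len(start_days)

print(f"Average score on the days the 'intervention' began: {avg_start:.1f}")
print(f"Average score one day later, with no real effect:   {avg_next:.1f}")
# The apparent improvement is pure regression to the mean.
```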
Using short-term effects to extrapolate long-term ones.
You might feel great during your month-long experiment forgoing sleep at night for micronaps throughout the day, but does that mean it’s safe? It might be. Maybe it is an effective replacement for traditional sleep patterns. But one month is not enough time to tell us that.
This is a problem with most studies, of course. But self-experiments aren’t exempt, and since we tend to trust ourselves and our experiences over the results from a PubMed abstract, self-experimenters are particularly prone to unwarranted extrapolation.
Most of these limitations aren’t unique to self-experimentation; any type of trial, if it is to be powerful and valid, must account for bias, use blinding and placebos where possible, and avoid improper extrapolation. Most can be avoided, mitigated, or overcome with careful planning and rigorous design. But self-experimenters are all too susceptible to forgetting about or ignoring these limitations and skipping the rigorous design to get to the good stuff. Because, well, it’s kind of hard. (So I wrote a book to make it easier.) That’s the rub, though. We can cruise PubMed and read hundreds or thousands of studies whose authors accounted for everything for us. They did the work. But when we run a self-experiment, we must do the work, because we are subject, researcher, analyst, and clinician all at once.
We’re not just the participants in self-experiments; we’re also the authors. We handle everything, and we can’t relinquish control to an external authority. Self-experimenters can certainly use placebo controls and blind themselves, but doing so requires a level of dedication and rigor that, for some people, is simply unrealistic or beyond their means. Also, we often run self-experiments just to “try things out” or “see how it works,” and if it works, we’re happy. That the fix might be working because of the placebo effect doesn’t bother most people who just want to make themselves healthier. That’s fair, but it also means our results can’t establish causality.
I don’t expect everyone to establish definitive cause and effect for every decision they make, bite of food they take, and lifestyle modification they enact. That’s just ridiculous. We’re not trying to pass peer review here. We’re trying to live better. And not every “study” needs to be rigorous. But by guarding against some of these common self-experimentation pitfalls (not even necessarily all of them), or at least being aware of them, we can increase the power of our forays into personal science.
Are you a self-experimenter? If so, have you noticed yourself falling prey to any of the pitfalls mentioned in today’s post? Are there any that I’ve missed? How have you handled it in the past, and how will you do it in the future?
Thanks for reading, everyone. Take care!