Science versus Experience in Physical Medicine
The conflict between science and clinical experience and pragmatism in the management of aches, pains, and injuries
Science is not meant to cure us of mystery, but to reinvent and reinvigorate it.
Dr. Robert Sapolsky, from his classic book, Why Zebras Don't Get Ulcers
These days most good health care professionals take it for granted that treatment ideas should be blessed by science to some degree. But to what degree? Blessed how much? Blessed how?
Despite the good intentions, there is still a serious lack of evidence-based practice across the board. It is getting better, but it’s slow.1 There are some signs of improvement (with back pain particularly), but physical medicine2 is still a cocky teenager, just starting to come of age and figure out that it doesn’t know everything.
Back in the good old days there wasn’t evidence of anything one way or another (absence of evidence) and everyone pretty much did whatever they liked as long as it sounded good and the patients were happy. If you could get people to pay for it, that was good enough! Market-based medicine. Experience-based medicine. What could possibly go wrong? Entire modality empires sprang up out of the fertilizer of hunches and pet theories, many of them reasonable but definitely wrong, and many more “not even wrong.”
As standards have gone up and science has (finally!) started to test some of the 20th Century’s biggest treatment ideas, we’ve learned that there are a shocking number of low-value medical practices.3 Well-validated large effects in medicine are uncommon;4 in most cases nothing is going on except a creatively induced placebo (evidence of absence of any medical effect)… and placebo isn’t all that powerful and probably should never be justification for a therapy.5
In fact, science has become quite the buzzkill … especially for the treatment of pain,6 and manual therapists in particular (physical therapists, chiropractors, massage therapists) who are paying attention have started to wonder if anything actually works, why they read this damn website anyway, and how they can justify what they are selling without more encouraging trials to point to.
(Yes, a few things do work for pain. Just shockingly few.)
Evidence isn’t everything, and clinical experience and patient buy-in do still matter
Evidence-based medicine is like democracy: it’s the worst way of knowing what to do to help patients … except for all the other ways.
Despite the rise and importance of Evidence-Based Medicine™, evidence produced by good quality trials isn’t the only thing that matters. It is not and never has been the sole criterion for choosing health care interventions. There’s much more to it, and there always has been. Specifically, EBM has always formally, explicitly defined itself as the integration of the best available clinical evidence with clinical experience and with patient values, preferences, and expectations.
There are several variations on this chart, but the take-home message is always the same: the application of EBM isn’t just about the evidence. (But that doesn’t mean the evidence can be ignored.)
For instance, a physical therapist deciding whether or not to use dry needling might consider three things:
- the evidence supporting dry needling is a bit iffy,
- but in his experience it works well for most people,
- and yet this patient reacts poorly to it and doesn’t care for the risk, even if there’s still a possibility of benefit.
Dr. Brad Schoenfeld on achieving this balance:
Evidence-based practice isn’t merely deferring to research for answers. Rather, it involves synthesizing the body of literature to develop general guidelines, then using your personal expertise to customize prescription to the individual. This is why the best practitioners have spent considerable time in the trenches, experimenting with different strategies both personally and with clients to hone their understanding of how to bridge the gap between science and practice.
But professional experience cannot veto the evidence and scientific plausibility! Beliefs based on experience are on extremely thin ice when they are at odds with the science.
Therapy is a process
As physical therapist Jason Silvernail argues,7 “The manual therapy approach is a ‘process’ of care centred on a reasoning model, not a ‘product’ consisting of one or more manipulative techniques.” And that process may be effective even if individual techniques are unimpressive. Good care is more than the sum of its parts.
Patients cannot meaningfully apply their values and preferences until they are informed, but, once they are, “informed consent” has a lot of power. Professionals can legitimately do a lot of sketchy stuff if only they speak the magic words: “This is experimental. It may not work. I think it’s worth trying because yada yada yada and the risks are super low. Do you want to proceed?”
Patients really appreciate that approach. In my experience.
Absence of evidence is actually not a deal breaker, and it is still very common, even today. For all the progress we’ve made, pain and musculoskeletal medicine research has still only just scratched the surface. There is still a great deal that is “unproven” simply because no one has really checked properly yet.
All of this puts evidence in its place … but that is still a place of honour. Testing treatments matters!8
There’s a stand-up comedy routine in which Chris Rock makes fun of people who say things like ‘I take care of my kids!’ or ‘I’ve never been to jail!’ His punchline? ‘You’re supposed to take care of your kids. You’re supposed to stay out of jail. They aren’t things you can boast about.’
‘Evidence-based’ strikes me like that. You’re supposed to use evidence. It’s not something you get to brag about.
Just exactly how much evidence is needed for a treatment to be considered “evidence-based”?
At the very least, it has to make sense (plausible), and it can’t have already failed fair tests (evidence of absence). That is the bare minimum required. More would be nice, though. A little more detail:
- Biological plausibility. It cannot be at odds with any well-established biology, chemistry, physics — that’s a deal-breaker. Goodbye, therapeutic touch and Reiki. Goodbye homeopathy. Goodbye applied kinesiology. (And here’s an important but under-appreciated point: testing of highly implausible treatments tends to produce false positives.9 True story!)
- There can’t be evidence-of-absence. If there’s reasonably robust clinical trial evidence that shows no benefit, or damns a technique with only very faint praise (which is actually more common), that’s a deal-breaker. The results can’t just be “statistically significant” without clinical significance; there has to be an adequate “effect size,” big enough that we should care. It doesn’t have to be huge to matter — that’s what she said, ba-dum-tss — but it can’t be tiny either. (A quick simulation after this list shows how those two kinds of “significance” can come apart.) So goodbye glucosamine. Goodbye ultrasound. Goodbye platelet-rich plasma. And many others.
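To make that distinction concrete, here’s a minimal simulation of my own (the numbers are invented for illustration, not taken from any trial): with enough subjects, a trivially small benefit sails past the p < 0.05 bar while the effect size stays clinically meaningless.

```python
# Statistical vs. clinical significance: a toy example with made-up numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical trial: pain on a 0-100 scale with SD ~20, and a "treatment"
# that shaves off a mere 2 points, far below the ~15-point change usually
# considered clinically meaningful for pain.
n = 2000  # per group; a very large trial
control = rng.normal(50, 20, n)
treated = rng.normal(48, 20, n)

t, p = stats.ttest_ind(control, treated)
d = (control.mean() - treated.mean()) / np.sqrt(
    (control.var(ddof=1) + treated.var(ddof=1)) / 2
)  # Cohen's d, pooled SD

print(f"p = {p:.4f}")          # comfortably "statistically significant"
print(f"Cohen's d = {d:.2f}")  # ~0.1: a trivial effect size
```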
The bar gets raised quickly in proportion to the costs and risks, or if there’s no informed consent. Specifically, I want to see at least:
- Three good quality trials with obviously happy results.
- Including at least one replication from relatively unbiased researchers.
- And double that if the hypothesis is implausible or self-serving, or if the intervention is expensive or risky.
In other words, if the stakes are higher, or if it’s a bigger reach, the evidence has to actually impress me. (Exactly what that takes is spelled out in much more detail in a dedicated article, believe it or not. This is just a high-level summary!)
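For the code-minded, here’s that rule of thumb written as a toy function. It’s a sketch of my own; the function name and flags are illustrative inventions, not any formal scoring system.

```python
# A toy encoding of the evidence threshold described above (my own framing).
def trials_needed(implausible_or_self_serving: bool = False,
                  expensive_or_risky: bool = False) -> int:
    """Minimum good-quality positive trials (including at least one
    independent replication) before a treatment earns a raised eyebrow."""
    base = 3
    if implausible_or_self_serving or expensive_or_risky:
        base *= 2  # the bar doubles for bigger reaches and higher stakes
    return base

print(trials_needed())                                  # 3: cheap, safe, plausible
print(trials_needed(implausible_or_self_serving=True))  # 6: a bigger reach
```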
The bar for “proof” is far higher still. This is just the level of evidence I need to raise my eyebrows and say, “Okay, that might work, or probably works.” And if you’re thinking that not many treatments have even adequate or impressive evidence, let alone proof … you’re right. Even the lowest bar is often not cleared.
The more senior the colleague, the less importance he or she placed on the need for anything as mundane as evidence. Experience, it seems, is worth any amount of evidence. These colleagues have a touching faith in clinical experience, which has been defined as “making the same mistakes with increasing confidence over an impressive number of years.”
Isaacs et al., 2001, The Oncologist
“Dubious Study” xkcd #1847 © xkcd.com by Randall Munroe
What if there’s new, positive evidence? What then? Unfortunately, new “positive” studies rarely change the bottom line
Regardless of how unimpressive the evidence has been, I’ve noticed that clinicians are extremely eager to embrace any new evidence that seems to support what they do. Imagine that. But this also comes with an attitude. This truly happens quite a bit: a new study of a treatment comes out with positive results that contradict its discouraging evidence history … and it gets smugly thrown in my face, a “revenge citation.” Here’s how this goes, sometimes unfolding over years…
- Reluctant skepticism — In the beginning, someone cautiously embraces my cynical reporting on an underwhelming treatment. “Welp,” they say, “Ingraham says it doesn’t work, or not very well, anyway. Uh, thanks for that … I think….”
- Glorious confirmation bias — Much later, that cynical conclusion seems to be overturned by a shiny new scientific paper. After “reading” it — the abstract, or some fluff “science” reporting which is probably just regurgitating a press release — they decide that it sure seems positive. And so they think: Good news! Chlorine Gas Snorting Therapy works after all! I guess that Ingraham guy was wrong.
- Smug revenge — And then they write to tell me how wrong I was. Or, more likely, they post it on social media. The tone may be caustic: “New evidence proves skeptic was just a cranky dumb-dumb!” Once in a while they actually ask me what I think, if I agree that things have turned around.
- Skepticism rejected — Of course, I’m usually already aware of the paper, but I re-check it anyway, and confirm the inevitable: it’s a shite study that just muddies the water. And then I try to explain why it wasn’t so “promising” after all. Which rarely has much effect … because who’s going to embrace my curmudgeonly take when there’s a much happier version of the story backed by a Genuine Citation?
The sad truth is that new “positive” studies rarely change the bottom line, rarely restore the “evidence-based” status of an intervention, no matter how good they look to the average clinician. For instance, I have seen the story above play out ad nauseam about massage for delayed-onset muscle soreness …
A classic example of a revenge citation: massage for exercise soreness
A “new” (2017) analysis like Guo et al. certainly does seem positive. In the years since it was published, people have been citing it and declaring that I must have gotten it wrong about massage-for-soreness. After that close call with cognitive dissonance, those people went back to their original reassuring belief that the soreness can be rubbed out.
Except I did not get it wrong. Don’t get me wrong, I can get things wrong! But not this. I had that paper in my bibliography the whole time with a private note-to-self: “weak sauce ‘positive’ meta-analysis on massage for DOMS, worthless, changes nothing!” I hadn’t gotten around to referencing and dismissing it because it’s just kind of terrible. It was just a biased “garbage in, garbage out” review of precisely the same data that hadn’t been convincing before.
Unfortunately, just because a scientific review or meta-analysis seems superficially positive really doesn’t mean much. There are many gotchas! They are so complex that they are ripe for abuse, just as prone to bias-powered error and misrepresentation as clinical trials, if not more so. Meta-analysis is the major practical example of how data can be tortured until it tells you what you want to hear (see Ioannidis), and it’s particularly prone to it when studies are underpowered — a huge problem with pain and rehab science.
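Here’s a minimal simulation of my own (not from Ioannidis’s paper, and with all numbers invented) of how selectively pooling underpowered trials can manufacture a “positive” result for a treatment that does nothing at all:

```python
# The "winner's curse": 200 tiny trials (n = 15 per arm) of a treatment with
# exactly zero true effect, where only flattering results get "published."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_trials = 15, 200
published = []

for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n)  # true effect: zero
    t, p = stats.ttest_ind(treated, control)
    effect = treated.mean() - control.mean()
    if p < 0.05 and effect > 0:  # selective publication of happy accidents
        published.append(effect)

print(f"'published': {len(published)} of {n_trials} trials")
if published:
    # The pooled "effect" of a useless treatment looks substantial.
    print(f"pooled effect: {np.mean(published):.2f} SD")
```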
And that’s all true of reviews and meta-analysis even when they are reviewing and meta-analyzing good evidence! But there’s rarely much good evidence in the first place. (Find me a review that does not make a statement about the need for better data. I’ll wait.)
The modern scientific publishing industry spews low-quality studies like a crap firehose, and many of those are bogus for reasons we can’t even see. It would take a lot more than just one positive study to reverse the negative trend enough to hit my own threshold for “worth a try.” Given a history of negative results, it would probably take at least three strongly positive trials with no glaring methodological flaws or researcher biases … and that still wouldn’t be “proof,” not by a long shot. But it would swing the pendulum enough that I might endorse the gamble (depending on the costs and risks too, of course).
Surprise! My standards are low! (Sort of)
As long as the costs/risks are low enough, I’m actually not all that hard to please …
I have a reputation for being critical of many (or most?) theories and techniques, so many readers may be surprised by just how low my standards are for what constitutes adequate scientific support. But I really do think that many unproven theories and techniques are fair game — assuming they’re fairly safe, cheap, plausible. And if they haven’t been spanked by good trials yet. And if the patient is fully informed.
Here’s the “but” though, one big problem that sustains my militant skepticism …
What keeps me cranky and critical is not treatment based on too little evidence, but treatment based on too much hunch and bravado.
For example, I think trigger point therapy, despite its many problems,10 is still a defensible approach to some kinds of pain as long as the risks and costs are tamed and it’s presented with humble disclaimers. It’s just fine if a therapist puts it to patients like this:
“I do trigger point therapy, even though no one really knows what trigger points are. We have some theories. The science so far is not very encouraging, and there’s a bunch of controversy. Although there are still reasons for optimism, basically no one can really know yet if we can do anything about them. It’s a gamble, and not cheap. But we’ll be gentle and efficient and I won’t recommend a long expensive course of treatment without promising signs. Do you want to proceed?”
But I have a huge problem with this kind of thing (which is rarely actually said out loud, but is what’s actually going on):
“Trigger point therapy works! My results speak for themselves. I understand this kind of pain and I can treat it. Now enjoy my magic hands [or needles]… which are going to hurt both your body and your wallet, by the way.”
In the absence of good decisive science — which is all too often — it’s really all about the framing and the humility and the doing-no-harm.
What do you do when confronted with evidence that’s a bummer? At odds with your experience?
I want PainScience.com to be known as an EBM-friendly website, so what do I do when the evidence is contradicted by the clinical experience of my professional readers?
I’m a writer, not a magician: I just stay focused on reporting the evidence, and that’s more than enough for one lifetime.
The artful merging of evidence and experience with the unique special-flowerness of the patient in front of you is a clinical challenge … not my writing challenge. Clinicians have to make decisions based on all three of those factors, all day, every day. That’s their job. I left that challenge behind several years ago. These days, my new challenge is to provide clinicians (and patients) with as good a picture of the evidence as I can. I’m a specialist now, focussing on just one of the pillars of EBM: the science-y pillar.
On the other hand, I was also a clinician for ten years, and I correspond constantly with many extremely experienced clinicians now. So there are hat tips to clinical experience here, there, and everywhere on PainScience.com. I do write about what clinicians believe. But, mostly, I stick to what the evidence can support — because that’s all I have time for, if nothing else.
But for you clinicians…
When confronted with scientific evidence that’s a bummer, at odds with your experience, remember that your experience is a fully legit third of the EBM equation. But! You must be very cautious not to lean too hard on your experience, because “you are the easiest person to fool” (Feynman). It’s only a third of the equation. Not two thirds. Not half. Just a third, roughly, give or take (probably always less than a third for younger professionals). And it’s never a very reliable third. Just like science, experience is difficult to interpret and often wrong.
Is it possible to care about both research and patients? Yes! Duh!
I hear quite a bit of this: “I am more concerned with helping my patients than what the research says.” This sentiment is almost always a defensive response to criticism of a treatment method. For example, if I say, “Good studies have shown that dry needling isn’t effective,”11 I am likely to get a response like, “You’re an armchair therapist. All you care about is research. I care about my patients, and in my experience dry needling helps them.”
It’s based on the ungracious and incorrect assumption that professionals who are concerned about “what the research says” are less concerned about helping patients. That’s absurd. No one is special or unique for wanting the best for their patients.
You should care about research because it can help you help your patients. That’s what everyone wants.
I still routinely see patients and professionals recoiling from EBM flaws that just don’t exist, like the mistaken belief that applying science to healthcare will make it cold and impersonal. Here’s physical therapist Dr. Jules Rothstein addressing that fear in 2001 … and his reassurance is just as relevant today:12
We need to make certain that, as we move to a better form of practice, we continue to put patients first. Nothing could be more humanistic than using evidence to find the best possible approaches to care. We can have science and accountability while retaining all the humanistic principles and behaviors that are our legacy.
Prof. Jules Rothstein, PT, PhD
Science versus practice from the patient perspective
Reader Kirsten Loop asked these questions on the PainScience.com Facebook page:
Honest question here. So, the idea that pain practitioners of whatever sort (as opposed to clients) a) can’t make promises about their treatments for pain because most scientific studies are problematic and b) shouldn’t make promises based on pseudo-science (or unsupported opinion)...what are the clients in pain supposed to do? Apparently, we’re also not supposed to (gasp!) “self-diagnose” or otherwise draw conclusions about our own pain issues -- unless, of course, we can think deeply and critically about them. (Which, according to most professionals, we aren’t capable of doing. You’d think we’re all a buncha fools the way we apparently fall prey to the snake oil salespeople out there, according to many in the pain science community.) But even critical thinking about non-proven treatments is practically ‘sinful.’ So, we’re to do... nothing? Just wait for the scientific system to fix itself, pick up the pace, eventually, I presume, through interest, funding, rigorous testing, and unbiased reporting of results? That’s gonna take a long time. I’ll be dead by then. I’m not a manual therapist and therefore I’m not demoralized by the dawning realization that my original training was a biomedically based lie. I am a civilian with pain. People are not well served by deliberate and accidental flakes, and they are also not well served by the snail’s pace of science that takes even longer to translate into treatments that practitioners can use in the real world. Decades in many cases. So, I’m beginning to wonder what’s the point of all of this? No one knows sh*t :)
They were good, difficult questions, worthy of a high quality reply. Here is how I responded:
That frustration is justified and historically appropriate. It’s the right reaction to this annoying chapter in the history of pain medicine. Like a suffragette in the early 20th Century, we have every reason to be outraged by the circumstances in which we find ourselves. Or maybe a medical analogy is more appropriate: it’s like having a bacterial infection before antibiotics.
We are indeed awkwardly stuck between half-baked science and quacks/flakes trying to provide the answers that science still can’t. Fortunately, that doesn’t mean we are completely screwed. There is a functional compromise between the extremes. Like all good compromises, it tends to make everyone unhappy. But it exists.
Basically, the middle ground consists of experimental treatment and informed consent, prioritizing the most plausible options and rejecting the most ridiculous. It looks like this:
PROVIDER: I don’t know if this works. No one can know if this is effective. There are some minor risks, and it could be a waste of time and money. But it’s still a reasonable thing to try, as long as you understand and accept that it’s experimental. Are you cool with that?
PATIENT: 👍🏻
Or a more patient-o-centric version:
PATIENT: I want to try [whatever]. I know it’s not proven and I know there are risks. But let’s chat about them. Are you willing to provide that therapy, as long as I’m okay with the uncertainties?
PROVIDER: 👍🏻
Ideally there would be more discussion about WHY it’s a reasonable thing to try, of course. 😉
To date, there really is no such thing as strictly “evidence-based medicine” for most kinds of chronic pain. But that doesn’t mean that the half-baked science is useless: we can still use it to evaluate and prioritize treatment options. And we must! Because there is nothing else.
About Paul Ingraham
I am a science writer in Vancouver, Canada. I was a Registered Massage Therapist for a decade and the assistant editor of ScienceBasedMedicine.org for several years. I’ve had many injuries as a runner and ultimate player, and I’ve been a chronic pain patient myself since 2015. Full bio. See you on Facebook or Twitter.
Related Reading
- Speculation-Based Medicine — Alternative medicine prioritizes experience and speculation over evidence … and then ignores the evidence when it finally arrives
- Quackery Red Flags — Beware the 3 D’s of quackery: Dubious, Dangerous and Distracting treatments for aches and pains (or anything else)
- Most Pain Treatments Damned With Faint Praise — Most controversial and alternative therapies are fighting over scraps of “positive” scientific evidence that damn them with the faint praise of small effect sizes that cannot impress
- Why “Science”-Based Instead of “Evidence”-Based? — The rationale for making medicine based more on science and not just evidence… which is kinda weird
- Alternative Medicine’s Choice — What should alternative medicine be the alternative to? The alternative to cold and impersonal medicine? Or the alternative to science and reason?
- Ioannidis: Making Medical Science Look Bad Since 2005 — A famous and excellent scientific paper … with an alarmingly misleading title
- Statistical Significance Abuse — A lot of research makes scientific evidence seem much more “significant” than it is
- Insurance Is Not Evidence — Debunking the idea that “it must be good if insurance companies pay for it”
- The Power of Barking: Correlation, causation, and how we decide what treatments work — A silly metaphor for a serious point about the confounding power of coincidental and inevitable healing, and why we struggle to interpret our own recovery experiences
- ‘Reductionism’ Is Not an Insult — Reducing complex systems in nature to their components is not a bad thing
- Confirmation Bias — Confirmation bias is the human habit of twisting our perceptions and thoughts to confirm what we want to believe
What’s new in this article?
Five updates have been logged for this article since publication (2016). All PainScience.com updates are logged to show a long-term commitment to quality, accuracy, and currency.
Like good footnotes, update logging sets PainScience.com apart from most other health websites and blogs. It’s fine print, but important fine print, in the same spirit of transparency as the editing history available for Wikipedia pages.
I log any change to articles that might be of interest to a keen reader. Complete update logging started in 2016. Prior to that, I only logged major updates for the most popular and controversial articles.
See the What’s New? page for all recent site updates.
Mar 26, 2025 — Minor editing, sanding down some rough edges from the last update.
2025 — A significantly upgraded section (“What if there’s new, positive evidence? What then? Unfortunately, new “positive” studies rarely change the bottom line”) plus a new section (“A classic example of a revenge citation: massage for exercise soreness.”)
2020 — Expanded and polished. Added two new sections, “Science versus practice from the patient perspective,” and “Is it possible to care about both research and patients? Yes!” Added a citation about low-value medical interventions (Herrera-Perez), another about dry needling (Stieven). Added a terrific quote about eminence-based medicine from a class paper (Isaacs), and another about how to “bridge the gap between science and practice.”
2018 — Added minor point about how new positive evidence affects a record of negative evidence.
2017 — Merged in a couple older blog posts, added several references and footnotes, revised and re-framed et voila: this is now the official new “science versus experience” page for PainScience.com.
2016 — Publication.
Notes
- Grant HM, Tjoumakaris FP, Maltenfort MG, Freedman KB. Levels of Evidence in the Clinical Sports Medicine Literature: Are We Getting Better Over Time? Am J Sports Med. 2014 Apr;42(7):1738–1742. PubMed 24758781 ❐
- AKA “musculoskeletal” medicine, but I prefer “physical medicine.” This is the science and art of helping people recover from and cope with painful and disabling injuries, trauma, and disease. “Physical medicine” is a broad umbrella covering (or partially covering) physiatry, rheumatology, neurology, physical therapy, occupational therapy, sports medicine, orthopedic surgery, orthotics/prosthetics/pedorthics, massage therapy, chiropractic, osteopathy, and more. See Reviews of Pain Professions.
- Herrera-Perez D, Haslam A, Crain T, et al. Meta-Research: A comprehensive review of randomized clinical trials in three medical journals reveals 396 medical reversals. eLIFE. 2019 Jun 11;8(e45183). PainSci Bibliography 52236 ❐ “Low-value medical practices are medical practices that are either ineffective or that cost more than other options but only offer similar effectiveness.”
- Pereira TV, Horwitz RI, Ioannidis JPA. Empirical evaluation of very large treatment effects of medical interventions. JAMA. 2012 Oct;308(16):1676–84. PubMed 23093165 ❐
A “very large effect” in medical research is probably exaggerated, according to Stanford researchers. Small trials of medical treatments often produce results that seem impressive. However, when more and better trials are performed, the results are usually much less promising. In fact, “most medical interventions have modest effects” and “well-validated large effects are uncommon.”
- “We feel strongly that our patients deserve scientifically defensible care that is more than just artfully delivered placebo.”
Ingram et al., 2013, Journal of Orthopaedic & Sports Physical Therapy
- Cashin AG, Furlong BM, Kamper SJ, et al. Analgesic effects of non-surgical and non-interventional treatments for low back pain: a systematic review and meta-analysis of placebo-controlled randomised trials. BMJ Evid Based Med. 2025 Mar:bmjebm–2024–112974. PubMed 40101974 ❐
This big new review of common non-medical back pain treatments is bad news. It’s a sequel to a 2009 version, and, like any good sequel, it delivers more of what made the first one good: the most rigorous science, a meticulous and sensible review of placebo-controlled trials only, a high bar to clear. And it seems to be just about as discouraging as it could possibly be. But tilt your head like a curious doggo for a different view, and maybe it’s not as bad as it seems? Here’s some good news about the bad news:
- Given the amount of garbage out there, getting some relief from 1 in 10 treatments doesn’t actually strike me as being all that bad. And the winners are: NSAIDs, exercise, spinal manipulation, taping, antidepressants, and TRPV1 agonists (like capsaicin in spicy rubs).
- There’s a lot of “garbage in, garbage out” here, and it mostly confirms that we have inadequate rather than negative evidence. The review had such high standards that there weren’t enough high quality studies for many conclusions. (And yet consumers are spending billions of bucks on those not-clinically-proven treatments. 😬) But the silver lining here is that there could be good treatments that simply haven’t been tested enough yet.
And now the bad news about the bad news. Unfortunately, if you tilt your head the other way, the bad news looks even worse than it seemed at first. Quite a bit worse.
- The lucky handful of treatments that are being touted as effective here are offering really minor relief, all very damned-with-faint-praise.
- And some of those small benefits are likely illusory — evidence that registered as a thumbs up in this review, but likely won’t stand up to scrutiny/replication.
- And the reams of “inconclusive” evidence here? Most of that is extremely unlikely to ever become conclusive.
The bottom line is clear: back pain is largely immune to treatment. See also Wang et al., another 2025 review focusing on interventional medicine — with similarly high standards, and disappointing results.
- Silvernail J. Manual therapy: process or product? J Man Manip Ther. 2012 May;20(2):109–10. PubMed 23633891 ❐ PainSci Bibliography 54128 ❐
- Evans I, Thornton H, Glasziou P. Testing treatments: better research for better healthcare. 2nd ed. Pinter & Martin; 2011.
This excellent book is currently available for free from www.TestingTreatments.org. It’s a superb exploration of why research matters, and how it’s done.
- Pandolfi M, Carreras G. The faulty statistics of complementary alternative medicine (CAM). Eur J Intern Med. 2014 Sep;25(7):607–9. PubMed 24954813 ❐
- People experience muscle pain and acutely sensitive spots in muscle tissue that we call “muscle knots.” What’s going on? The dominant theory is that a trigger point is an isolated spasm of a small patch of muscle tissue. Unfortunately, it’s just a theory, trigger point science is half-baked and controversial, and it’s not even clear that trigger points are a problem with muscle at all. Meanwhile, people keep hurting, and massage — especially self-massage — is a safe, cheap, reasonable way to try to help. That’s why I have a large tutorial devoted to how to self-treat “trigger points” — whatever they really are. See Trigger Point Doubts: Do muscle knots exist? Exploring controversies about the existence and nature of so-called “trigger points” and myofascial pain syndrome.
- Stieven FF, Ferreira GE, Wiebusch M, et al. No Added Benefit of Combining Dry Needling With Guideline-Based Physical Therapy When Managing Chronic Neck Pain: A Randomized Controlled Trial. J Orthop Sports Phys Ther. 2020 Apr:1–21. PubMed 32272030 ❐
- Rothstein JM. Thirty-Second Mary McMillan Lecture: journeys beyond the horizon. Phys Ther. 2001 Nov;81(11):1817–29. PubMed 11694175 ❐ PainSci Bibliography 51998 ❐
