“This book is amazing!” – Not!
Starting with a Malcolm Gladwell quote (above) on the dust jacket, continuing with an overly aggrandizing foreword (also written by Gladwell), and then on through the contents of the book itself, I am very disappointed in Hutchinson’s final product.
I guess I expected more from Alex. Given his training as a physical scientist, one would expect a high standard in critically reviewing, parsing, and summarizing the raft of research he investigated on the subject. Instead, the reader is offered an almost trite treatment, with little evidence that the data and analyses used by the quoted researchers are relevant or even valid. Although Hutchinson does express some doubt about the conclusiveness of some of the research he references, that doubt is weak, insufficiently critical, and certainly not thorough.
Having taken the time to read the book twice and to read numerous important references, I find it unfortunate that we have, once again, a book that simply repeats the mostly unsupported claims of the authors of much of the research in this field (the study of human endurance and psychobiology). Do yourself a favor and look at some of the references Hutchinson uses, particularly at the methodology and the statistical analysis, and you will find studies with fundamentally flawed underpinnings and little or no statistical power. Add to this the “replication crisis” in the entire field of psychology (where less than 40% of all studies can be replicated) and other fields, and you have a situation that leads to crippling uncertainty.
For example (this is just one exemplar among many referenced in the book), here is the studied population and the data analysis methodology from one of the central reference works Hutchinson quotes in the text:
“Ten healthy male human subjects were recruited from Bangor University’s rugby league team.”
Data analysis methodology:
“Data were explored for normality and homogeneity of variance, and are presented as mean ± SD unless noted otherwise. The effects of exercise duration (0, 25, 50, 75, and 100% of time to exhaustion) on MVCP, heart rate, blood lactate concentration, and RPE were tested using one-way repeated measures ANOVAs. If the assumption of sphericity was violated, the Greenhouse–Geisser correction was employed. In such cases, the uncorrected degrees of freedom are reported between square brackets in conjunction with the respective epsilon values. The correlation between RPE at isotime (the highest common exercise duration achieved by all subjects during the time to exhaustion test) and time to exhaustion was assessed by Pearson product moment correlation coefficient. The difference between MVCP measured immediately after exhaustion and the power output required by the time to exhaustion test was tested using a paired-samples t test. Significance was set at 0.05 (two-tailed) for all analyses, which were conducted using the Statistical Package for the Social Sciences, version 14.”
Note the size of the studied population: 10 essentially uncontrolled subjects in a cross-sectional study! It does not take much understanding of data analysis to see that such a design yields essentially no statistical power. This is why the authors resort to pretzel-like contortions of statistical nonsense to try to derive something from a poorly designed study. And this is just one example: the quoted references I reviewed are rampant with such poorly designed and obfuscatingly analyzed studies. Of course, this does not stop Hutchinson (and many of the authors of the research he references) from making unsupported claims and then propagating this nonsense to a degree that is just embarrassing.
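You don’t have to take the low-power claim on faith either; it is easy to check. Here is a minimal simulation sketch, assuming (purely for illustration, these numbers are not from the study) a “medium” true effect of Cohen’s d = 0.5 and a paired t-test like the one the authors report. It estimates how often n = 10 subjects would actually detect such an effect:

```python
import math
import random
import statistics

# Monte Carlo estimate of statistical power for a paired t-test with
# n = 10 subjects. The effect size (Cohen's d = 0.5, a conventional
# "medium" effect) is an illustrative assumption, not a figure taken
# from the study under discussion.
random.seed(1)

N = 10                 # subjects, matching the quoted study
D = 0.5                # assumed true effect size (Cohen's d)
T_CRIT = 2.262         # two-tailed critical t, alpha = 0.05, df = 9
TRIALS = 20_000

rejections = 0
for _ in range(TRIALS):
    # Paired differences drawn from Normal(D, 1): the effect truly exists.
    diffs = [random.gauss(D, 1.0) for _ in range(N)]
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(N))
    if abs(t) > T_CRIT:
        rejections += 1

power = rejections / TRIALS
print(f"Estimated power with n={N}, d={D}: {power:.2f}")
```

The estimate comes out to roughly 0.3, far below the conventional 0.8 target: even when a real medium-sized effect exists, a study this small would miss it most of the time, and any “significant” result it does find is disproportionately likely to be an exaggeration or a fluke.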
In this example the authors claim the following:
“These results challenge the long-standing assumption that muscle fatigue causes exhaustion during high-intensity aerobic exercise, and suggest that exercise tolerance in highly motivated subjects is ultimately limited by perception of effort”.
Based on a study of 10 rugby players who are assumed to be “highly motivated”? How do the authors know that? There is no discussion of what “motivated” means in the text: no definition, no measurement, no data, no discussion. Add to this the myriad other confounding factors that any human subject brings to a physical test: simple things like sickness (the authors rely on self-reported recent sickness data) or pre-sickness, emotional state, recent training history, and recent or past injuries, as well as more complex factors such as biometrics, familiarity and efficiency with cycling on a trainer, the experience of exercising in a lab, and genetic and epigenetic factors. Surely one cannot be surprised that there is a replication crisis when studies rely on woefully deficient sample sizes of under-characterized subjects. You don’t have to take my analysis on faith; do your own. I have high confidence you will come to the same conclusion.
The popular literature (i.e., the stuff of writers like Gladwell in books like “Outliers”) is full of science fiction derived from the unsubstantiated claims of studies like the one above. The unaware but trusting readership falls hook, line, and sinker for the bravado and pushes this misinformation out in millions of lunch- and dinner-table conversations, all of which makes it that much harder to accept the reality that there is no there there.
I got nothing from this book beyond further confirmation of the low standards for analysis in this field of study. Sure, there are some interesting anecdotes and even some potential indications of possible breakthroughs in understanding, but neither is the basis for a book on the subject. A magazine article? Maybe, but why bother?