One must learn by doing the thing; for though you think that you know it, you have no certainty until you try.
Parts I and II of this series introduced a methodology and analysis protocol for assessing, in an objective manner, the competitiveness of running races, using the Road Marathon event-type as a demonstration example. It was found that normalizing the finishing time and rank order data to percentage back from the winning time and cumulative probability, respectively, results in a simple exponential functionality. This functionality is expected for athletic performance, and because the data are normalized, the approach allows for reliable event-to-event comparisons. In Part III of the series the analysis is extended to a shorter race length: the Road 10 km.
For reasons of continuity, stability, and the relevance of long-term aggregated data, I searched for a 10 km road race that has been run on the same course over an extended period of time. One such race is the Boulder Boulder 10 km road race that takes place each year on Memorial Day in Boulder, CO. This race was co-founded by Olympian Frank Shorter in 1979 and has been run every year since. The period used to analyze this race is 2001-2013. The 2014 results as reported on the Boulder Boulder race website are still preliminary, so they are not included here. Also, in 2011 the race starting point was moved to a new location, so aggregated results must take that into account; in this analysis any aggregated results will only include the 2001-2010 datasets.
A second, long-running 10 km-type event is also analyzed: the Falmouth Road Race. The Falmouth Road Race was started in 1973 and quickly gained status as one of the most competitive 10 km-type races (the race is actually 7 miles) in the US and the world. Early on, the likes of Bill Rodgers, Marty Liquori, and Frank Shorter were going head to head on Cape Cod; that level of racing continues today, and the race fields one of the most elite-encrusted road events of the year.
Boulder Boulder 10 km
Following the same analysis approach as described in Part I and applied to Marathons in Part II, presented below is an example of the data for the Boulder Boulder 10 km race, in this case the 2003 race.
As expected, the data fit nicely to a simple exponential function with a competitiveness index (CI) of about 0.17. Comparison of this CI value to those found in the “Big 5” Marathons reveals that the Boulder Boulder race is as competitive as the marathons; in fact, the 2003 Boulder Boulder is as competitive as any of the “Big 5” marathons over the period 2001-2014. The high CI for the 2003 Boulder Boulder is seen throughout the entire period analyzed (2001-2013). All of the analysis data (competitiveness index (CI), coefficient of determination (R^2), and cohort size (n)) for the Boulder Boulder race are presented in tabular form below, along with the same data for the Boston Marathon for the same period. Note: of the “Big 5” Marathons, Boston was, on average, the most competitive of the group.
The Boulder Boulder 10 km race exhibits an average CI for the study period of 0.168, compared to an average CI of 0.147 for the Boston Marathon: a 14% difference. This means that the Boulder Boulder is, on average and in all but one year of the study period, the more competitive event. Also note that, on average, many more competitors make up the 125% cohort in the Boulder Boulder than in the Boston Marathon. As will be shown in Part IV of this series, the cohort size is generally inversely proportional to the event length, meaning that the longer the event the smaller the 125% cohort, even when correcting for the total number of entrants in the particular race. This is likely due to multiplicative effects on time differentials, which are larger in longer races.
Falmouth Road Race
This race presents a good example of the “negative” information discussed briefly in Part I: data that in some way do not fit the developed model or thought process and that can therefore provide important insight into the details of mechanisms and governing laws.
In contrast to every other dataset analyzed to date, all but one of the Falmouth Road Race datasets do not follow an exponential functionality. Rather, the data are well described by a linear function. I present here the data for Falmouth and discuss why one might observe such linear functionality. We will also see evidence of linear functionality in some of the ultramarathon data to be presented in Part IV.
As a representative of the linear datasets from the study period, presented below is the same analysis as used throughout this study, applied to the 2009 Falmouth Road Race. Shown are the data fitted to exponential (red) and linear (blue) functions. Clearly the data are best described by the linear function, which exhibits an R^2 value of 0.987 and a slope of 0.039.
The exponential function significantly underestimates the proportion of the cohort in the 5-20% back region. The competitors in this performance range are performing at a much higher level (i.e. faster finishing times) than would be expected from an exponential distribution. As an example, this dataset shows that the athlete at the 10% back performance level is at a cumulative probability of about 0.35, or the 35th percentile (where 65% of the finishing times are slower), whereas an exponential distribution would predict this athlete to be at a cumulative probability of about 0.20, or the 20th percentile (where 80% of the finishing times are slower)*. Thus a greater proportion of the population is faster than an exponential distribution would predict: the cohort is out-performing the expected distribution. A direct consequence of this result is that the cohort is likely non-normal. This will be discussed below.
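A sketch of how one might compare the two candidate fits on such a dataset. The numbers below are illustrative, loosely patterned on the 2009 Falmouth slope of 0.039, and are not the actual race data; the exponential model is fit by least squares on ln(CP):

```python
import math

def linfit(xs, ys):
    """Least-squares intercept and slope of y on x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - m * xbar, m

def r_squared(ys, preds):
    """Coefficient of determination for a set of predictions."""
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Illustrative "stacked field": cumulative probability rises roughly
# linearly with percent back (slope near the 0.039 reported for 2009)
pb = [1, 4, 7, 10, 13, 16, 19, 22, 25]
cp = [0.05, 0.17, 0.28, 0.40, 0.52, 0.63, 0.74, 0.87, 0.98]

b0, m = linfit(pb, cp)                          # linear model
la, lk = linfit(pb, [math.log(y) for y in cp])  # exponential via ln(CP)
r2_lin = r_squared(cp, [b0 + m * x for x in pb])
r2_exp = r_squared(cp, [math.exp(la + lk * x) for x in pb])
print(r2_lin > r2_exp)  # True: the linear model fits a stacked field better
```

For a cohort following the usual exponential distribution the comparison comes out the other way around.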
To demonstrate how different the 2009 Falmouth race is when compared to the other races analyzed, presented below is a plot of both the Falmouth 2009 race and a representative dataset from the Boulder Boulder 10 km race (the 2003 race). Note the dramatic difference in the functionality of the competitor distributions for this 125% cohort.
Why are these two competitor performance distributions so different? Well, in two words, the reason appears to be: East Africans. The Falmouth Race offers travel stipends to top runners, there is a nice prize purse, and the race director obviously works hard at assembling the best field of world-class runners possible. As a result this race is consistently and heavily populated by world-class athletes from East Africa. For instance, in the 2012 race eight of the top ten runners were among the best runners from Kenya, and the other two were top runners from Uganda and Ethiopia. As expected, such a field also draws the best American runners, who want every chance to test their mettle against a world-class field. In the end you have a “stacked field” that is not representative of the tail of a distribution of all competitive runners. The 2013 race is an interesting counterpoint, as its finishing time distribution fits a simple exponential. This race, contrary to the others in the study period, was not heavily populated by East African runners: there were only six East Africans or runners of East African origin in the race. The other races in the study period had at least twice as many, and each of those races exhibits a linear functionality. Presented below are the analysis data in tabular form for the Falmouth Road Race 2001-2013.
In contrast to these results, the Boulder Boulder results summarized above uniformly follow a simple exponential functionality, and the race does not draw as large a population of world-class athletes. This is surprising, as the Boulder Boulder spends in excess of $200,000 on athlete travel, accommodation, and prizes, a level of support for attracting top talent similar to Falmouth's. For some reason Falmouth attracts a much more world-class field; I am certain that there is an identifiable reason for this, but not being a road runner I have not been exposed to the background.
Non-exponential Performance Distributions
Why does the performance distribution tend toward linear for many of the Falmouth Road Race editions? It is apparent that when the fast end of the competing population is skewed toward a world-class level (in this case by successful recruitment of such talent by the race director), the 125% cohort tail of the distribution tends toward a linear functionality. This can be partially reasoned from the fact that this portion of the population is out-performing what would have been a normal, exponential distribution and is better characterized by a linear relationship. There is an important takeaway from this observation: even though athletic performance is defined by an exponential relationship, the data presented here suggest that when one analyzes only the highest-performing population, a linear functionality obtains. This means that an athlete able to perform at this level is not necessarily facing the “exponential wall” of improvement discussed in Part I of this series and in the 10,000 hour rule post. Rather, some such athletes can evolve more rapidly through this, for them, linear space and achieve world-class status. This observation is also important for coaches, as it is extremely difficult with high-performance athletes to determine whether they have plateaued or are still improving. Analyzing an athlete’s progression against world-class competition, using percentage back from the winning time as the operative metric, could reveal whether the athlete is on a linear trajectory or is, in fact, hitting the “exponential wall”. This could allow coaches to identify the “true” world-class-capable talent and focus training and racing appropriately. I have personally seen far too many cases of very, very good athletes coming to this “exponential wall” of improvement and spending a great deal of time, effort, money, angst, and coaching resources, only to retire without achieving world-class results.
This is contrasted to the few athletes who somehow make it through the “exponential wall” and become top, world-class competitors. Identifying these athletes is a challenge and the methodology presented here may be one way to help in such identification. I am currently analyzing data that tests this hypothesis retrospectively with results of known world-class competitors- stay tuned.
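One minimal way to operationalize this idea is to fit a trend to an athlete's season-by-season percentage back against world-class winners and inspect the slope. This is a hypothetical sketch (the athlete data are invented for illustration), not the author's published protocol:

```python
def improvement_slope(seasons, pct_back):
    """Least-squares slope of percent back vs. season.  A clearly negative
    slope suggests the athlete is still closing on the winners; a slope
    near zero suggests a plateau (the 'exponential wall')."""
    n = len(seasons)
    xbar = sum(seasons) / n
    ybar = sum(pct_back) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(seasons, pct_back)) / \
           sum((x - xbar) ** 2 for x in seasons)

# Hypothetical athlete A keeps improving; athlete B is flattening out
seasons = [1, 2, 3, 4, 5]
athlete_a = [12.0, 9.5, 7.2, 5.0, 2.8]  # still on a linear trajectory
athlete_b = [6.0, 4.0, 3.2, 3.1, 3.0]   # approaching a plateau
print(improvement_slope(seasons, athlete_a) < improvement_slope(seasons, athlete_b))  # True
```

In practice one would want several seasons of data and a check of the residuals before declaring a plateau, but the slope alone already separates these two trajectories.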
The bottom line here is that when conducting this type of competitiveness analysis and an approximately linear relationship is found, one can expect to see a stacked field of competitors that skews an otherwise normal distribution of finishing time data. There is some evidence that this is occurring in some ultramarathon events as will be seen in Part IV.
We have extended a methodology for assessing competitiveness in running races from Marathons to a shorter race, the Road 10 km. The methodology is found to be extensible, as evidenced by the uniform exponential functionality exhibited by the Boulder Boulder 10 km race. However, it is also found that such exponential functionality can be replaced by a linear functionality when the competitor field includes a recruited population of world-class athletes. This “stacked field” alters the functionality for the 125% cohort in a way that leads to a linear performance distribution. Such a linear distribution of performance consistently out-performs the performance predicted by an exponential distribution, as expected given the superior talent represented in this tail of the general population of competing runners.
Next, in Part IV, we will extend this methodology and analysis to longer races- ultramarathons.
*Note: This may confuse some. In the case of finishing times, a faster time (lower time value) is ranked higher. Percentiles are more commonly reported for test scores, where a higher score is ranked higher; the analysis here is inverted from that more familiar usage.
True delight is in the finding out rather than in the knowing.
In Part I of this series, a methodology was developed to analytically assess the competitiveness of running races. The approach involves normalization of the finishing time data and the rank finish order data to provide a transformed dataset for any running event that can be analyzed and compared to any other timed running event. This normalization/transformation process utilizes the percentage back from the winning (or best ever) time for normalization of the finishing time data and cumulative probability (percentile rank) for normalization of the finishing rank order data. Once the dataset is transformed, the functionality of the cumulative probability versus the percentage back from the winning time is determined and tabulated using functional parametrics. In the case of running events it has been found that virtually all races that the author has analyzed exhibit a simple exponential functionality of the form:
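The equation itself appears to have been lost from this paragraph. Based on the fitted parameters reported elsewhere in the series (e.g. exponential factors of 0.1481 and 0.1349 for the London Marathon), the intended form is presumably something like the following reconstruction, where CP is cumulative probability, PB is percentage back from the winning time, A is a pre-exponential constant, and k is the exponential factor used as the competitiveness index (CI):

```latex
CP = A\, e^{\,k \cdot PB}
```

A steeper curve (larger k) corresponds to a greater proportion of the cohort close to the winner, i.e. a more competitive race.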
Analysis of the “Big 5″ Road Marathons
The “Big 5” marathons (London, Chicago, Berlin, New York, and Boston) serve as a group of events that unarguably represent prototypical competitive running races. Analysis of these events over a significant period of time allows for development of a calibration of the CI for competitive events, and therefore a standard against which other events and event types can be compared.
I show here the analysis for the London marathon for the period 2001-2013 as an example and then provide a tabular figure showing the results for all of the “Big 5″ marathons from the period 2001-2014. Presented below are the cumulative probability versus the percent back from the winning time for each year of the London Marathon, the fitted exponential curve, the exponential equation for the fit, and the R^2 value (coefficient of determination) for the fit. Also shown is the aggregated data analysis for all of these years taken together and re-analyzed using the same 125% cohort.
It is difficult to see the parametrics in these figures so the London data along with the population size for each analysis is presented below in tabular form:
Note that all of the R^2 values are about 0.92 or greater indicating very good fits to the data for the exponential function. It is seen that the CI varies from a low of 0.119 for the 2012 event to a high of 0.153 for the 2010 event. This represents a difference of about 28% meaning that the 2012 event was, by this measure, 28% less competitive than the 2010 event. The rest of the years show CI values around 0.130-0.140.
Presented below is the tabular data for all “Big 5″ marathons over the period 2001-2014 (or 2001-2013 for those marathon events that have yet to occur in 2014).
There is much to be gleaned from these data and I note here some of the important observations:
- The CI for these highly competitive events has general bounds of about 0.120 to about 0.170 or a range of about 40%.
- All of the fits to the data are very good- R^2 values are in excess of 0.918.
- The population sizes are sufficient to expect a very low error magnitude.
- Of the group, the New York marathon is, on average, the least competitive and the Boston Marathon is the most competitive.
- Interestingly, the last two Boston Marathons have been the most competitive events of the group by a good margin.
Point 1 (taken together with points 2 and 3) allows us to now have a calibration for expected magnitude for the competitiveness index for highly competitive events. We can now make meaningful comparisons with other marathon events and other running races.
The analyzed population size varies in this dataset, ranging from a low of 63 to a high of 368, a variation of almost a factor of 6. It is important to test the robustness of this analysis approach by determining the extent to which there is a relationship between the computed CI and the analyzed population size. Presented below is a graph of the CI versus the population size. As is clear from the graph, there is no correlated relationship, and this gives additional support to the efficacy of the analysis approach across events with very different populations in the “125% cohort”.
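The "no correlated relationship" claim can also be checked numerically with a Pearson correlation between CI and cohort size. A sketch with hypothetical (CI, n) pairs chosen only to fall within the ranges reported above, not the actual tabulated values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (CI, cohort size) pairs in the ranges reported above
ci = [0.119, 0.153, 0.134, 0.141, 0.128, 0.147]
n_cohort = [88, 140, 368, 63, 295, 210]
print(abs(pearson_r(ci, n_cohort)) < 0.3)  # True: no strong correlation
```

Applied to the real table, a correlation coefficient near zero would support the robustness argument made from the graph.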
“Other” Road Marathons
It seems that there exist almost as many Marathon events in the US as there are cities, towns, and villages, meaning that many thousands of Marathon events are held each year. I will make no attempt to survey a representative selection of such events, as the task is enormous. I will, however, present analysis of a few Marathon event results to begin to establish a “feel” for what one might find in a comprehensive study.
In a quite random way I selected the following “other” Marathons for analysis and comparison to the “Big 5″:
- Kansas City (MO) Marathon 2012
- Fox Cities (WI) Marathon 2014
- Columbus (OH) Marathon 2013
- Rochester (NY) Marathon 2014
- Wenatchee (WA) Marathon 2013
- Vermont City Marathon 2013
This selection of events includes a range of sizes and speeds. Although none of these marathon events has an elite-level winning time (< 2:10:00), two have winning times at the sub-elite level (2:10:00 – 2:20:00). Presented below are the cumulative probability versus percentage back from winning time plots for each event, showing the fitted exponential functions and the associated parametrics.
And here is the data in tabular form:
There are a few interesting observations that merit remarks:
- Three of these events (Kansas City, Rochester, and Wenatchee) exhibit very low competitiveness compared to the “Big 5” events.
- As noted earlier, an event can be competitive but still have a relatively slow winning time (Fox Cities). This is an important understanding because, although competitiveness and the “fastness” of a given race are not entirely independent, competitiveness can be high even in “slow” races since the computational basis is the cohort in the race.
- The two “fastest” races of the group (Columbus and Vermont) show competitiveness on par with the “Big 5″ events.
- All of the fits to the data are very good- R^2 values are all in excess of 0.92.
- The analysis appears to be robust down to very small populations, although the calculated error will be substantially higher for the small populations.
I continue to be encouraged by the robustness of this analysis approach across this very disparate selection of marathon events ranging from the largest and most “elite encrusted” events right down to the “neighborhood”-type events.
Shown here is the application of an analytical competitiveness methodology across a large range of marathon events. The results show consistent adherence to the expected exponential function resulting from normally distributed performance data. This work establishes a new basis for assessment of the competitiveness of a given running event using a very simple and straightforward analysis protocol and should provide an analytical context to evaluations of “competitiveness” in such events.
In Part III of this series we will look at a shorter distance race (10 km) and Part IV will extend the analysis to ultramarathons. There are some very interesting results!
Try again. Fail again. Fail better.
A recent example of a seemingly never-ending discussion on whether a certain running race was competitive or not has spurred me into writing this post.
The discussion presented in the comments of the above-mentioned article is commonplace whenever the subject comes up, particularly in discussions of competitiveness in ultramarathon races. Presumably, much of the assertion that ultramarathons are not competitive arises from the typically small fields (compared to other endurance running events such as road marathons) and from “naive” references to “slow” times by those who do not grasp the reality of racing such long distances over extended periods of time.
In my experience all of these discussions lack any sort of frame of reference with respect to what is a definition of competitiveness and therefore these discussions lack any sort of logical, arguable, and defensible position from which to derive constructive conclusions. Although there may be other quantifiable metrics for competitiveness, I will offer a data-based approach here and expand upon application to a variety of running races in succeeding posts. This approach is highly defensible as it uses only finishing time and rank order placement results for computation of “competitiveness.” Event “shallowness” can also be quantified with the same data.
Shallow? Competitive? These are two different things.
We often hear reference to “shallow competition” as a descriptor of a particular race or event type. As will be developed here, the degree to which a race is competitive is significantly (although not entirely) independent of how “deep” the field is. Therefore the term “shallow competition” really communicates nothing of substance: it is possible to have a shallow but competitive field as well as a deep but uncompetitive field. The following will provide a definition of and metric for competitiveness in running races, describe a method for assessing what constitutes a “deep” field, and offer tools for anyone to determine the competitiveness and “deepness” of the field of a given running race.
Definition of Competitiveness
A search of the literature has turned up very little work on defining and evaluating “competitiveness” from an analytical perspective in individual timed sporting events. Given the mountains of recorded finishing time data for such events all across the world, it seems odd that no one has taken up the task of defining competitiveness. I may have missed some publications, but a comprehensive search using numerous channels turned up nothing of substance on the subject.
It is clear that some events are more competitive than others, that some sports have deep and competitive fields and others do not, and that “new” sports (e.g. cross country mountain biking) become established and, in a relatively short time, demonstrate a transition to much greater competitiveness. However, there exists no basic fundamental analysis that describes and measures competitiveness.
Understanding and potentially measuring competitiveness is useful for numerous reasons. First, a measure for competitiveness can provide the competing athlete with a clear understanding of how competitive their sport is and, additionally, how competitive a particular race is. This understanding will allow the competitor to assess their performance in an objective way. Second, a measure for competitiveness can serve as a basis for “point” accumulation in ranking of competitors for “championship” awards and honors. These “point” accumulations can be adjusted as a function of the competitiveness of individual events to ensure that the greatest point accumulations are by those who compete well in the most competitive events. Third, it is commonly asserted by many among the “running” community that ultramarathons are not “competitive” and an analytic measure of competitiveness can determine whether or not this assertion is, in fact, supportable.
When one considers a definition of competitiveness for an event, it becomes abundantly clear that there are numerous sub-categorical levels of competitiveness: the competitiveness of the particular event itself (intra-event competitiveness (IEC)); the competitiveness of a particular event in aggregate over all, or a selected portion, of the years that the event has been in existence (aggregate intra-event competitiveness (A-IEC)); and the competitiveness of a given event type (e.g. road marathon) as it is compared from event to event over the history of the event type or over some selected time period (inter-event competitiveness (IrEC), and the aggregate (A-IrEC)).
In running races we are fortunate to have well defined results that can be rigorously analyzed without, to first order, any subjectivity. These data are the finishing time and the rank order of finish. This is great but, as we know, running race courses are all at least slightly different even if they are conducted on a track. In addition each event can have very large differences in the number of competitive participants; this is particularly true when comparing ultramarathons to other event types. Therefore it is imperative that we employ some method to normalize the finishing time and rank order data to be able to compare one race to another be it multiple intra-event results, inter-event results, inter event-type results, or aggregate, all-time inter-event results.
Separate from issues having to do with making comparisons, in framing a concept of competitiveness it is important to recognize that the competitiveness of an event is not only defined by how fast the winner runs but also by how other competitors in the race compare to the eventual winner. This means that any robust competitiveness evaluation must also normalize both the finishing time data and the finishing rank order data. The following will summarize the approach taken here.
Normalization of Finishing Time Data
Accepting the reality that every running course is different, that weather and/or atmospheric conditions can play an important role, and that even the fact that the same running course “runs” differently on different days due to surface conditions, it is crucial that one develop a method for normalizing finishing time data in a fashion that accommodates such differences to facilitate a robust analysis of competitiveness for a given event or event type.
For running race finishing time data the most direct way to accomplish this is via the calculated percentage time back from the winning time. The percentage back value represents a universal performance metric derived from the finishing time that is substantially independent of the race course, the length of the race, the weather, or other impacting variables that may arise, since all competitors face the same conditions on race day. In addition, it is much more informative to assess one’s performance using percentage back rather than raw finishing time (or placement), since improvements are better characterized by percent increases/decreases from the winning time than by raw time, and raw finishing times will differ from course to course even for races of the same length. The FIS uses percentage back in assessments for cross country skiing. National endurance sports organizations like the US Ski Team and many other national ski programs (e.g. Norway, Sweden, France, etc.) also use percentage back metrics to assess current and up-and-coming talent; in fact the US Ski Team has often used percentage back metrics in decisions on which team athletes will attend World Cup and World Championship races. Many coaches of endurance athletes likewise use percentage back to evaluate performance. In the following analysis we will use “percentage back from the winning time” as the fundamental normalization method for finishing time in the development of competitiveness metrics in running races.
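As a concrete illustration of this normalization, here is a minimal sketch in Python (the function name and sample times are my own, not from the original analysis):

```python
def percent_back(finish_time_s, winning_time_s):
    """Percentage back from the winning time for a single finisher."""
    return 100.0 * (finish_time_s - winning_time_s) / winning_time_s

# Example: a 2:20:00 marathon finish against a 2:10:00 winning time
winner = 2 * 3600 + 10 * 60    # 7800 s
finisher = 2 * 3600 + 20 * 60  # 8400 s
print(round(percent_back(finisher, winner), 2))  # 7.69
```

The same ten-minute gap in a race with a faster winning time would yield a larger percentage back, which is exactly the course-independence the metric is after.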
Normalization of Finishing Rank Order Data
When assessing performance of a given cohort (or an aggregated collection of comparable cohorts), the concept of “percentile rank” is commonly used. Percentile rank will likely be familiar to you from the extensive use of this metric by the College Board in assessing SAT scores for a given year, as well as in comparisons of test performances from year to year. Percentile rank values range from 0-100, and one’s percentile rank for a test indicates where one has scored relative to the cohort. For instance, a test score at the 85th percentile rank means that 85% of the participants scored lower and 15% scored higher. These percentile rankings serve to normalize the test scores within the cohort and allow for comparisons with other years (cohorts) where the population size may change significantly. Similarly for running races, the percentile rank is a useful metric for comparison not only within a cohort (the performance of a competitive field at a given race) but also between cohorts (the performance of competitors from numerous years of the same event), and it effectively allows for comparisons of races with very different competitive field sizes. The arithmetically related “cumulative probability” will be used here instead of percentile rank for normalization. Cumulative probability values range from 0-1 and represent the probability of a given result within the cohort. For instance, a cumulative probability value of 0.1 for a running race result means that this competitor has finished just within the top 10% of the field and has posted a time that is faster than 90% of the competitors in the cohort.
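The rank-order normalization described above can be sketched as follows; the function name and the ten sample times are illustrative assumptions:

```python
def cumulative_probabilities(times):
    """Rank-order normalization: the i-th fastest of n finishers
    (1-indexed) is assigned cumulative probability i / n."""
    n = len(times)
    return [(t, (i + 1) / n) for i, t in enumerate(sorted(times))]

# Ten finishing times (minutes); the fastest lands at CP 0.1,
# i.e. just inside the top 10% of the field
times = [31.0, 32.5, 33.0, 33.2, 34.0, 34.5, 35.0, 36.0, 37.5, 40.0]
print(cumulative_probabilities(times)[0])  # (31.0, 0.1)
```

Because the values depend only on rank within the cohort, a 10-runner club race and a 40,000-runner city marathon land on the same 0-1 scale.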
The Functionality of Running Race Results
As for any fundamental concept, derivation of a functional description is paramount to allowing for utility. In the case presented here for evaluations of running races, it is the functionality of the cumulative probability (percentile rank) versus the percentage back from the winning time that describes the competitiveness of the event. In other words the shape of the curve defined by the cumulative probability versus the percentage back defines the competitiveness and comparisons of the shape of such curves (and appropriate descriptive parametrics) will allow for evaluation of the competitiveness of a given event or event type.
For demonstrative purposes, presented below are two cumulative probability versus percentage back from the winning time plots for the Men’s results of the London Marathon in years 2005 (blue) and 2002 (green), using a cohort of the top 100 finishers. The top 100 finishers were used as this population typically captures the runners who finish within about 25% of the winning time. Although I would define “competitive” runners as those who finish within about 5% of the winning time, this population was chosen to allow for comparison to longer, ultramarathon races with much smaller populations; including results up to about 25% back yields sufficient population sizes for analysis and comparison of all races.
It is inarguable that the London Marathon represents a very competitive event, particularly among the top 100 finishers, so the following analysis is representative of a very competitive running race.
The top 100 finisher cohort of the 2005 London Marathon Men’s race exhibits a steeper ascending functionality than the shallower functionality of the 2002 data.
Graphical inspection of the curves reveals that the 2005 Men’s race was more competitive than the 2002 race. The two figures presented below show that at the same percentile rank/cumulative probability, or at the same percentage back from the winning time, there is a considerable difference in the percentage back value and the proportion of the population, respectively. Specifically, at an arbitrarily selected cumulative probability of 0.20, we see that in the 2005 race this value represents competitors whose finishing times are about 8% back from the winning time, whereas in the 2002 race it represents competitors whose finishing times are about 12.5% back, or about a 35% difference between the races. Similarly, at an arbitrarily selected percentage back of 10%, the 2005 results show that about 28% of the cohort was at or below this finishing time percentage, whereas in the 2002 race only 13% of the cohort was, or about a 55% difference between the races. It is clear that, in comparison of the selected cohorts, the 2005 race was more competitive, meaning there is a significantly greater proportion of the cohort of competitors closer to the winning competitor.
In a more fully analytical approach, one can fit the curves to a function and use the function metrics to characterize the level of competitiveness. In this case (and in all cases of running races studied by the author) the cumulative probability versus percentage back from the winning time data generally fit very well to a simple exponential function. This is expected from a population that follows a normal distribution, as athletic performance does. Presented below is a figure showing the fit of exponential functions to the race data. The fits are quite good although, in this example, they underestimate the differences; the trend, however, is captured. We see that the 2005 race is more competitive: it exhibits an exponential factor of 0.1481, larger than the 0.1349 of the less competitive 2002 race. These exponential factors characterize the steepness of the curves and therefore the level of competitiveness of the race or event. The following section provides a method for utilizing these exponential function parameters to capture an analytical measure of competitiveness (a competitiveness index).
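The fitting step can also be sketched compactly. The sketch below fits the exponential by linear least squares on the log-transformed data (the author's actual fitting tool is not specified, so this substitution is my assumption), checked against synthetic data with known parameters:

```python
import math

def fit_exponential(pct_back, cum_prob):
    """Least-squares fit of cum_prob ~ A * exp(a * pct_back) on the
    log-transformed data; 'a' plays the role of the competitiveness index."""
    ys = [math.log(y) for y in cum_prob]
    n = len(pct_back)
    mx = sum(pct_back) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(pct_back, ys))
         / sum((x - mx) ** 2 for x in pct_back))
    A = math.exp(my - a * mx)
    return A, a

# Synthetic curve generated with A = 0.05, a = 0.15; the fit should
# recover those parameters.
xs = [2.0 * i for i in range(11)]                 # 0..20 % back
ys = [0.05 * math.exp(0.15 * x) for x in xs]
A, a = fit_exponential(xs, ys)
print(round(A, 4), round(a, 4))  # 0.05 0.15
```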
One can also derive a metric for the level of the “deepness” of the field (cohort) from these data by assessing the density of competitors (data points) along the curve. A “deep” field would exhibit a high density of competitive times throughout the high performance end whereas a “shallow” field would show a paucity of competitors (data points) in this same region with large gaps between competitors. I will offer no analytical parametric for this evaluation as it is relatively straightforward to determine a sense of the “deepness” of the field from graphical observations.
Derivation of Competitiveness from the Exponential Data
It is inarguable that the road marathon (and specifically here the London Marathon) is a highly competitive running event where literally thousands (and perhaps even tens of thousands) of elite and sub-elite participants have recorded impressive finishing times in the 100 year recorded history of the event. That these data fit an exponential function is entirely consistent with performance excellence and highly competitive sport. The exponential function describes a finishing time distribution that includes a sparsely populated tail of ethereal performance followed by an increasingly populated distribution of less impressive finishing times. The degree of performance excellence is defined by the high performance tail, and the competitiveness of the event is defined by the “steepness” of the curve (which is proportional to the magnitude of the exponential term of the function). For example, an “other-worldly” performance at the far left of the curve (near or at zero percent back) with very few (or no) other recorded performances near it in the distribution is the definition of performance excellence. Similarly, the steepness of the curve just beyond the high performance tail defines how close other competitors are to the “netherland” of performance excellence. In other words, the steepness of the performance excellence curve determines how many competitors are “knocking at the door” of entry into the performance excellence club. The greater the number of such individuals, the higher the probability that one of these (very talented and hard-working) competitors will put everything together and score a finishing time in the high performance tail. In the case of a more shallow exponential curve (lower magnitude exponential term), performances are more widely distributed and there are therefore many fewer individual competitors who have demonstrated performances close to the high performance tail.
In this case the probability that a competitor will score a finishing time in the high performance tail is much smaller than in the population represented in the steeper distribution. This probability of performance excellence clearly scales with the steepness of the distribution (magnitude of the exponential term) and is a way to define the competitiveness of the event. Presented below are plots of simple exponential functions where only the exponential term is varying, showing the change in steepness of the curve as a function of the exponential. The range of exponential terms in the plot spans the range of such terms found in running finishing time data as will become apparent in subsequent sections.
From a functional perspective, two performances from an exponential population distribution that are close in linear time (the x axis in this plot) are actually exponentially different in “net performance” (the y axis in this plot- e.g. percentile rank). This means that although one competitor may be linearly “close” in time to another competitor in an event, they are actually exponentially further back from a performance perspective, and the magnitude of the difference is directly proportional to the exponential term that characterizes the fitted data. The steeper the performance excellence curve, the more difficult it is to progress. Many of us have experienced this reality in our own athletic endeavors as we approach our individual limit of ability- exponential improvement is not easy. A shallow(er) curve defines a population where even relatively large changes in finishing time (percentage back) do not lead to substantial changes in percentile rank. Such a population is the result of a sparse competitive field (in some cases due to a sport or event that is new or in a high-growth mode) and/or a current level of performance that is not challenging elite-level human limitations- meaning that most of the current competitors have not fully developed their potential for performance (either physiological or technical abilities or both).
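The “exponentially further back” point can be made concrete: under a fitted curve y = A·exp(a·x), moving Δx further back multiplies the cumulative probability by exp(a·Δx). A small sketch using the two London CI values reported above:

```python
import math

def rank_factor(ci, delta_pct_back):
    """Multiplicative change in cumulative probability for a move of
    delta_pct_back along a curve y = A * exp(ci * x)."""
    return math.exp(ci * delta_pct_back)

# A 5-point move in percentage back under the two fitted London curves:
print(round(rank_factor(0.1481, 5.0), 2))  # 2005 curve: ~2.1x
print(round(rank_factor(0.1349, 5.0), 2))  # 2002 curve: ~1.96x
```

The same 5-point move in finishing time “costs” more percentile rank on the steeper 2005 curve, which is exactly the sense in which it is the more competitive race.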
Now let’s take a look at this exponential functionality as the pre-exponential term varies. Plotted below is an exponential function with an exponential term similar to that exhibited by the road marathon data (an exponent of 1.2x) but with pre-exponential terms of increasing magnitude (1, 5, 10, 20). Note that as the pre-exponential term is increased, the rapidly increasing portion of the exponential function begins at lower values (lower percentage back, faster finishing time). Since the x values are generated with a basis of the fastest time ever (at 0% back), the lower the pre-exponential, the greater the degree of excellence (the more ethereal the performance) represented by the fastest time ever.
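The effect of the pre-exponential term can be quantified: multiplying A by a factor k is algebraically the same as shifting the curve left along the percentage back axis by ln(k)/a. A sketch using the same exponent of 1.2 as in the plot above:

```python
import math

def left_shift(k, a=1.2):
    """Leftward shift (in % back) equivalent to multiplying the
    pre-exponential term by k: k*A*exp(a*x) == A*exp(a*(x + ln(k)/a))."""
    return math.log(k) / a

for k in (1, 5, 10, 20):
    print(k, round(left_shift(k), 2))
```

So the pre-exponential terms of 1, 5, 10, and 20 in the plot correspond to progressively earlier onsets of the rapidly rising portion of the curve.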
Taking these two arguments together we now can construct a conceptual equation defining performance excellence: competitiveness and the degree (magnitude) of comparative excellence associated with the fastest time in the cohort. In a general, conceptual, equation form we have:
R ~ (1/E) • C (equation 1)
R = cumulative probability (percentile rank)
E = magnitude of comparative excellence of fastest (or fastest ever) time
C = exp(ax), where a=competitiveness index (CI) and x=finishing time or percentage back from the fastest time
Conceptually we have a functionality for competitiveness and excellence that states that, for a measured cohort, the higher the magnitude of the exponential factor, the greater the competitiveness; and the higher the magnitude of the pre-exponential factor, the smaller the difference between the best time and the “rest of the best”. What remains is calibration of the parameters as they map onto running event data. This will be addressed in following posts, but an estimate of the upper limit to the competitiveness index is provided below.
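Equation 1 above can be written as a tiny function. The parameter values in the example are hypothetical, chosen only to illustrate the behavior:

```python
import math

def rank(x, E, ci):
    """Conceptual form of equation 1: R ~ (1/E) * exp(ci * x), where E is
    the comparative-excellence magnitude of the fastest time and ci is
    the competitiveness index."""
    return (1.0 / E) * math.exp(ci * x)

# Hypothetical values: E = 20, CI = 0.15; percentile rank 10% back
print(round(rank(10.0, 20.0, 0.15), 3))  # → 0.224
```

Larger `ci` steepens the curve (more competitors near the front); larger `E` suppresses the pre-exponential (a more ethereal fastest time).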
Establishing an Upper Limit to the Competitiveness Index
To calibrate the approach outlined here it is important to establish an upper limiting value for the competitiveness index (CI). As shown above, this index is defined as the exponential factor in the function fitted to the finishing time data. It is inarguable that the road marathon is one of the most competitive of running events. Application of the analysis protocol developed here to the dataset consisting of the fastest 499 marathon finishing times ever provides a good estimate of the expected upper limit to how competitive the event can be. This is because the cohort represented in the data comprises the all-time best finishing times and represents a cohort of superstars all competing together in one fictional “dream” race of sorts. Since these data are the best efforts of all who have ever run the marathon event, they represent the ultimate level of competition as we know it today. Plotted below are the data shown previously for the 2005 and 2002 London Marathon along with the data for the fastest 499 marathon times ever. We see that the data for the fastest times fit very well to a simple exponential function (as expected) and that the competitiveness index is nearly an order of magnitude larger than that for the individual, single-race London Marathon data (CI = 1.2585 for the all-time data versus 0.1481 for the 2005 London Marathon). Based on this analysis it is expected that no single event or aggregated event data will be more competitive than the cohort represented by the all-time data, and therefore the CI of the all-time data represents an upper limit to the value of the CI. Establishing this value will allow for meaningful comparisons in the analysis of numerous other events and event types in follow-on posts.
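The “nearly an order of magnitude” comparison is simply the ratio of the two reported CI values:

```python
ci_all_time = 1.2585     # fastest 499 marathon times ever
ci_london_2005 = 0.1481  # 2005 London Marathon, top 100 cohort

print(round(ci_all_time / ci_london_2005, 1))  # → 8.5
```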
Take-aways from the Analysis
- The first important take-away here is that running event data fit very nicely to an exponential distribution of finishing time (or percentage back from the fastest time). This exponential behavior is fundamental to the nature of excellence in the sport of running.
- A second take-away is that via a simple analysis of the distribution of the finishing time data for a running event we can extract functional parameters that define the competitiveness of the event as well as establish a reasonable approximation of the degree of excellence of the fastest time. Should other event data of this type fit an exponential function then the exponential term can be used as a fundamental metric for defining the competitiveness of a given event and allow for comparisons between events. A simple process of calculating the cumulative probability, plotting this against the percentage back data, and then fitting the resulting curve will provide robust metrics for defining the competitiveness (steepness of the “excellence curve”) of the event data and therefore yield an analytical basis for comparison.
- A third take-away is that the factors that have led to such exponential differences in “net performance” are similarly exponential, and arguments (such as that espoused by the “10,000 hour rule” cult) that more practice (training) alone can close performance “gaps” are not well founded. One must introduce some sort of positive non-linearity to the process of improvement, since training time cannot be non-linearly increased by any meaningful magnitude for any meaningful time period. To put this in marathon running terms, a marathon competitor who has progressed to, say, a 2:15 performance standard over some considerable period faces an exponentially more difficult prospect in closing the gap to a 2:10 performance standard.
- A final important take-away is that the analysis provides perspective of exactly how exceptional performances in the tail of the finishing time (percentage back) distribution are- this is not a linear space as many seem to assume.
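To put rough numbers on the 2:15-versus-2:10 example in the third take-away, here is a sketch assuming a hypothetical 2:03 (123 minute) fastest-ever time and the all-time CI of 1.2585 reported above:

```python
import math

fastest = 123.0  # hypothetical fastest-ever marathon, minutes (2:03)

def pct_back(t):
    """Percentage back from the fastest-ever time."""
    return 100.0 * (t - fastest) / fastest

gap = pct_back(135.0) - pct_back(130.0)   # 2:15 vs 2:10, in % back
factor = math.exp(1.2585 * gap)           # multiplicative rank gap

print(round(gap, 2))   # ≈ 4.07 points of percentage back
print(round(factor))   # a "net performance" gap of well over 100x
```

A roughly 4-point move in percentage back, trivial-looking in linear time, corresponds to a rank gap of two orders of magnitude under the all-time distribution, which is the non-linear space the take-away describes.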
Subsequent posts in this series will analyze finishing time data from numerous distance road running events of varying lengths (10 km to marathon) and from trail ultramarathons. There are some interesting findings.
A Note on Scientific Process
Having written on numerous occasions on this subject and with continued development and refinement of robust analysis approaches to evaluate the “competitiveness” of running races in general and ultramarathons specifically, it is important to point out some parts of the scientific process that are critical to advancement. The first is to establish a null hypothesis and to test against it. In all of this work the working null hypothesis has been that ultrarunning races are just as competitive as other endurance running events. As is the imperative of science, I went about trying to prove this hypothesis wrong; this practice of attempting to falsify a hypothesis is something that is typically not well understood by those who do not engage in scientific inquiry. The following video, which was inspired by a favorite book of mine – “The Black Swan” by Nassim Taleb – does a good job of demonstrating how difficult and seemingly elusive inquiry can be even in the simplest of examples.
The key to progress is to obtain “negative” information, i.e. information that does not fit the null hypothesis and therefore provides positive insight as to what is the underlying law, rule, or function that is the subject of the hypothesis. This is what I have been engaging in with this project.
Second, science and scientific inquiry are not about agreement, and they certainly are not about stasis. Our understanding evolves and can be upturned due to new findings and refined due to new insight. All too many consider articles about advancements or discoveries in the popular press to be “definitive”. These same readers lament (sometimes publicly) when they are then exposed to another study that may undermine or refute the prior study’s conclusions. This is scientific inquiry: a constant, jittery series of disagreements, additional study, and resolution that, when successful, describes general progress toward understanding. There are very few fundamental discoveries that lead to sizable jumps in understanding of any complex inquiry. When the popular press presents any study as such, be wary. A contemporary example that the ultrarunning community has been exposed to is the back and forth among cardio researchers as to whether running long distances damages the heart- one study concludes that it does, the next that it does not. This is science: no giant leaps forward, just a bunch of back and forth, all the while developing an accumulation of data and interpretation that defines progress but may not fully answer the question at hand. If one is uncomfortable with uncertainty, conflicting data, or alternate interpretations of data, then science is not for you.
For the Fall/Winter 14/15 season, Salomon have introduced a new hydration belt specifically for Nordic skiing- the S Lab Insulated Hydro Belt Set. I recently received this hydration system and have had a chance to use it during some roller skiing sessions, and I am providing initial impressions here. A more thorough review will be provided once I get a chance to use the hydration system in true winter conditions over a reasonable distance.
Hydration systems for Nordic skiing have essentially been in the “stone age” as compared to other endurance sports. With the exception of Bryce Thatcher’s innovative water bottle-based belt system from the mid-80’s there has been virtually no further development beyond improved ergonomics. In fact, in recent years there has been a retrograde path followed by all of the major Nordic ski equipment companies to produce ridiculously inconvenient, poorly designed, and clunky hydration belts such as the one in the image below. In order to drink from this “nordic canteen” (and when I say “canteen” I mean it in reference to WWII-type technology) one must stop, take off the belt, screw off a top cap (or pop up a nipple valve), and tip the whole thing up in the air to get a swig. As far as I can tell this design was born out of Norway and is about as bad an example of design as I have ever experienced. As the local Swix rep says- “there is the right way, the wrong way, and the Nor-way”. Usually this “Nor-way” is unique and functional (like the Swix “Triac” pole, for instance) but in the case of these hydration belts it is nothing but bad design. Even Salomon jumped on the bandwagon and produces one of these. This model has a “pull top” nipple but they have made a screw top in the past.
Last season I gave up on the Nordic ski equipment makers and started using a Salomon running belt hydration system for training and longer races. These belts work but there are compromises, like difficulty in getting a soft flask out of the belt while skiing at speed. Some have taken to using backpack bladder hydration systems with the associated tube for access, but in skiing the last thing you want is something on your back as it interferes with poling and smooth technique. Given that Nordic skiing is essentially a “core”-dominated musculoskeletal activity, the waist is the area of the body that moves the least, as a strong and stiff core allows for and is essential to efficient power development via the limbs. So a belt is the right choice for this sport. It does not take any “non-linear” thinking to realize that a solution for hydration systems for Nordic skiers would be to combine a belt system with a bladder-and-tube reservoir. Well, Salomon have done this.
Making the obvious available
The S Lab Insulated Hydro Belt Set combines the design language from Salomon’s running belt hydration systems with a bladder-and-tube from Salomon’s running vest systems. The tube has a removable insulated sheath to help prevent freeze up in cold conditions. Salomon states that the tube insulation prevents freeze-up down to -20C (-4F). I have not tested this yet so it is the one feature that will remain a question until the snow moves in this Fall.
The belt has a large zippered insulated rear pocket for a 1.5 l Hydrapac bladder, a small water resistant zippered pocket (I assume this is for electronics or other water sensitive items that you may have with you), a small stretch mesh pocket, and a large stretch mesh pocket that makes up the entire width of the rear portion of the belt storage area. The whole assemblage zips up into a neat package with a stated volume capacity of 2 l, 1.5 l of which accommodates a full bladder, so that leaves about 0.5 l for other “stuff”.
As mentioned above, the design of this hydration system is a hybrid of the Salomon belt systems and the bladder-based vest systems. For the Nordic skiing application a number of other design features have been included. First, I will cover the basic functions and then address the other features.
The basic function of the hydration system is to conveniently provide water/fuel to the training/racing athlete. In this belt this is accomplished with a form-fitting bladder/storage chamber attached to an adjustable belt with a velcro closure. The images below show an overview of the components.
These are the elements of the basic functions- just fill the bladder, zip up the rear pocket, put on and fasten the belt, bring the feed tube around the front of your waist through an elastic band on the belt, fish the tube up your torso under your jacket, and then clasp the feed tube on an available piece of clothing (e.g. the collar of your jacket). Voila! Water/fuel is now readily available without stopping. One will likely need to adjust the exact position of the bite valve to get it into the mouth but this can be easily accomplished with the fingers of either hand even with a pole strap on. Perfect!
Fit and Use
The belt system is quite comfortable as a result of the form-fitting design and a highly adjustable belt. The belt length is adjusted by slipping it into a “slot” in the storage compartment and attaching it via a velcro fastener on the inside surface of the “slot”. The belt adjusts nicely down to my 27″ waist and it appears that there is enough room to get it down to about 24″ and perhaps even smaller. Based on some measurements, it looks like the belt can be expanded to at least 44″. The belt closure uses a simple d-ring slot that one puts the velcro-covered belt end through; the belt end is then folded back to attach to a companion velcro section of the belt. It is very secure and easily adjustable.
Once on, the form fit design is about as comfortable as a belt can be as Salomon have been working on this type of belt for over 5 years and they seem to have the design protocol down. One of the nice additional features is the three gel-pac pockets sewn into the belt across the front. These pockets can hold individual gel-pacs (or other such fuel pacs) and I have so far found them easy to extract from. We will have to see what cold weather does however as cold fingers are a different animal when it comes to fine motor skills. There is also concern that gel-pacs may become frozen and difficult to consume- not sure if there will be enough transmitted body heat to keep them soft. The pockets will accommodate other fuels as well- perhaps ones that are not temperature sensitive.
I loaded up the belt with about as much as I would typically carry for a 4 hour OD workout with some possibility of changing/wet weather. I was easily able to stow the following: 1.5 l of liquid in the bladder, three kick wax tins, a cork, a dry hat, an S Lab Hybrid rain jacket, a gel-pac, a fuel bar, and a camera. There was still room for more but I would likely never carry 1.5 l of fluid (I would probably carry more like .75 or 1 l) so there would be additional room for some dry gloves and more fuel. The images below show what the belt looks like when loaded with this stuff.
And here is a view of what was in the belt:
One of the key features contributing to the utility of the belt pack is the use of stretch mesh for the outer pocket. This material form-fits over the contents and holds them snugly up and into the pack, making for a compact and jiggle-free ski.
I have taken the belt on a few roller skiing sessions of about 20-30 km in rolling terrain for a total of about 100 km. The belt is very comfortable and one can forget that it is there at the waist. You are, however, aware of the feed tube/bite valve as it is right there next to your face. Very convenient and easy to fuel from. I was out on cool mornings so I had a jacket on and fished the feed tube up the inside of the jacket, as will be typical when using the belt system on snow. One could also route the tube up ones back and bring it forward over a shoulder. I have not tried this as I find the front orientation to be very functional.
If you are not wearing a jacket you can fish the feed tube up your shirt (or jersey) and clip it to the collar. It is possible to just bring the tube around on top of whatever you are wearing and clip it to the collar without fishing the tube underneath. This leaves the tube dangling out in front, ready to snag on a moving hand during poling, so I would not advise using the system this way. If you are not wearing a shirt, you are out of luck, although you could just fold the tube back on itself, stow it in the slot for the belt, and still use the fluid-carrying capability- but you would have to stop to drink. The intended use is in winter conditions where one will be wearing a jacket or some type of clothing (like a race jersey).
I will be putting in much more rollerskiing time with the belt system in the next two months so I will update this post with any further experience of note.
$100 US…. expensive but worth every penny just for the convenience, let alone the substantial storage capacity and comfort.
Finally, modern hydration system design meets Nordic skiing. The Salomon S Lab Insulated Hydro Belt Set is a unique hydration/fueling solution for Nordic skiers that allows for convenient, “on the move” hydration/fueling. In addition, the system has a substantial volume of carrying capacity for longer ski workouts. Although testing in truly cold, winter conditions is critical to confirmation of translation of my current experience to snow and cold, I highly recommend this hydration system for Nordic skiers. The only concern I will note at this point has to do with how well the insulated tube/bite valve works- i.e. does it stay free of ice ups. Salomon says it is good to -20C (-4 F) but only direct experience will tell. Stay tuned.
Salomon announced the S Lab Hybrid running jacket at the Winter 14 OR Show and Expo with some innovative and appealing features. I received an S Lab Hybrid jacket recently and have been able to put it through its paces thanks to an unusual “monsoonal” flow of moisture from the Gulf of California up here to the mountains of central Idaho. What this means is that our mountain weather transforms into something very similar to what the San Juans of Colorado always get in the summer- daily thunderstorms often accompanied by hail or snow. Having grown up hiking and running in the San Juans and Sangres, this weather system has provided a nostalgic respite from the normally “blue bird” perfect weather we usually get this time of year. It has also allowed for some good tests of the performance of the S Lab Hybrid jacket in difficult conditions. As one will see in the remainder of this review, this is a jacket that you will definitely want to consider if you are in the market for a super lightweight, waterproof, windproof, hooded mountain running (or fast-and-light alpinist) jacket.
There are numerous new features associated with this jacket and I will review the primary ones here.
Weight and packability
My men’s small weighs in at 108 g- a very light jacket and lighter than any other waterproof running jacket that I am aware of. Although weight below some reasonable value, say 150 g, is not a driver for use, it is generally true that the lower the weight, the more packable the garment. That holds true here- this jacket is very packable even if one does not use the integrated storage waistband. The jacket can be easily stowed in any of the Salomon waist packs or backpacks. Although this jacket does not pack down to as small a volume as the S Lab Light jacket (it is not significantly less packable), it has much more versatility.
Waterproofness, windproofness and breathability
The waterproof portions of the jacket are made with a fabric that Salomon calls “Advanced Skin Dry”, which upon inspection appears to be a 2-layer PTFE membrane/face fabric laminate. It could be GORE-TEX or it could be from a competing manufacturer. Clothing manufacturers who use GORE-TEX fabric commonly put a GORE-TEX identifier on the garment somewhere. The S Lab Hybrid jacket has no “GORE-TEX” labeling, so I suspect that this fabric is not GORE-TEX but rather some other super lightweight waterproof fabric (e.g. eVent, etc.)
The waterproof/breathable fabric utilized in this jacket is rated at 10k/10k* which represents very good waterproofness with significant breathability. The face fabric in the laminate is clearly a rip-stop nylon material with a “durable water resistant” (DWR) coating as is commonly used in such garments. The waterproof fabric portions of the jacket are also windproof as expected.
Integrated storage waistband
The most unique feature of the jacket is the integrated storage waistband. As can be seen in the image of the jacket above, a stretch-mesh waistband approximately 5″ (13 cm) wide makes up the bottom-most portion of the jacket body. This waistband fits snugly around the waist and can be utilized to stow the rest of the jacket when not needed. The three quarter length zipper facilitates both the stowage and the ease of shedding the upper part of the jacket.
Integrated hood headband and “skin fit” hood
The jacket also has an integrated hood that fits very snugly around the head with an integrated interior “headband” retention system. This very comfortably keeps the hood in place and the “skin fit” around the head and face ensures good visibility.
Design and Fit
The overall design approach for this jacket is “minimal”- minimal waterproof fabric, minimal weight, minimal fit (meaning = nearly “skin fit”).
The waterproof fabric portions are the entire front, the shoulders and outer parts of the sleeves, and about one third of the back. The inside portions of the sleeves, the side panels, and the lower two thirds of the back are all made from a lightweight very stretchy nylon material some of which has strategically placed laser cut ventilation grids.
The fit is snug but the entire jacket is quite stretchy so it does not bind or pull while running or reaching; I would say that the fit is best described as somewhere between the “active fit” and “skin fit” designations that Salomon uses. The hood is closer to a “skin fit” level, which is a nice feature since many hoods severely and annoyingly limit visibility. This one does not.
The cuffs are made of the same material as the elastic waistband and they have a soft, comfortable feel against one’s wrists. I typically fold the cuffs up back into the sleeves where they serve to not only guard against wind intrusion but also to increase comfort in the sleeve-end area and to allow space for a watch.
There is also an upper chest retention clasp that allows one to unzip the jacket, fasten the clasp, and not have the jacket flapping around. It works well and I use it often. This feature has been on many of the Salomon cross country skiing jackets and skiing/running vests and is now on this jacket as well.
I have spent a total of about 12 hours of trail running time in this jacket (100 km (62 miles)) including one 40 km run (2000 m D+/6500 feet vert, climbing to 2900 m (9500 feet)) where all manner of weather was evident throughout the run. On this 40 km run we had light rain, heavy rain, 48-64 km/h (30-40 mph) winds, heavy hail (about 45 cm (1.5 feet)), some snow, and horizontal winds. Temperatures ranged from about 27 C (80 F) down to 3 C (38 F). It was a wild day in the Pioneer Mountains. The S Lab Hybrid jacket performed exceedingly well. With the exception of a small amount of leak-through on the lower (non-waterproof and marginally windproof) portion of the back panel in horizontal, driven heavy rain, I stayed entirely dry and warm the entire time. My companion had a Salomon Fast Wing Hoodie (DWR-coated) and became quite wet under the same circumstances; her discomfort was alleviated by putting a windproof layer (the S Lab Light jacket) over the Fast Wing Hoodie, and this additional thermal layer provided a substantial decrease in heat loss. Within an hour we were in sunny skies and 21 C (70 F) weather, so the situation never became critical. But if the wet and stormy weather had persisted, the Fast Wing Hoodie/S Lab Light combo would likely not have been sufficient for protection. The S Lab Hybrid jacket was far superior in these conditions. The performance in wet and high winds was particularly impressive.
All of this performance was coincident with a highly breathable jacket. I experienced very little condensation on the inside of the garment. Even when running through some rain and then into dry, warmer conditions, I have not seen much condensation. This is very different from what I have seen in other products. It would appear that the breathability of these fabrics is getting better.
The “skin fit” hood and integrated retention headband is simply the best running jacket hood I have ever experienced. The hood stays in place (even in high wind conditions), fits snugly around the head and face leaving a sufficient opening for high visibility, and remains extremely comfortable. Hard to beat.
One of the unique features of the Hybrid jacket is the elastic waistband that serves as stowage for the jacket. This feature works quite well and does a good job of keeping the jacket contained comfortably and evenly distributed about one’s waist. In fact it is easy to forget that it is there in the stowed position. Salomon claims that you can disrobe from the upper portion of the jacket and stow it whilst still running. While I may not be the most coordinated, I currently find shedding the jacket down to the stowed position while running to be difficult. Practice and development of technique may improve this, but after about four separate tries on the run, in each attempt I have had to stop and get the thing fully tucked in properly. Once there it basically stays in place, but doing this on the run is a bit of a stretch, at least for me. I can see where this will become easier with practice, but do not expect to execute a smooth, flowing shed and stowage right away. It should also be pointed out that one does not need to use the “stowability” of the elastic waist, as stowage can also be accomplished by just tying the sleeves around the waist as you might do with a “regular” jacket. The advantage here is that you do not have to roll up the jacket and then tie it around the waist, since the S Lab Hybrid jacket hangs naturally in a convenient way. This “shed-and-tie” approach is easily accomplished while running and may be what Salomon intended. I prefer a neater package and therefore use the waistband as the stowage medium.
The one slightly negative use experience I have had is that, once stowed, the jacket can work itself a bit out of the fully stowed position, particularly on long, technical downhills where one is gyrating around in some uncommon body positions. However, the jacket has never fully “untucked” itself under these conditions. Being a bit of a neat-nik, I grate at imperfectly stowed stuff, so this may not bother anyone else.
I will also mention here that I expect this jacket to perform very well as a top layer for cross country skiing in wet snow or rain. Typically cross country skiers don full-on GORE-TEX raincoats in such conditions, as there have been very few products available that combine light weight, waterproofness, and a close fit. The S Lab Hybrid jacket should serve well as a wet-conditions cross country skiing layer.
The jacket currently comes in just one colorway- black with red accents. However, Kilian has been seen wearing one in red with yellow accents so expect to see that colorway come spring as well as some others. I prefer a brighter color for foul weather garments as it is usually best to maximize your visibility in case something goes wrong. The Euro-black colorway is basically camouflage in most mountain conditions.
$275 US…. ridiculous!….. but worth it as there is nothing else out there in the form of a running jacket that has the waterproofness, windproofness, breathability, low weight, and unique design functions (stowage and interior hood headband) that this jacket has.
I am not aware of another running jacket that combines all of the features and functions that the S Lab Hybrid jacket does in such a lightweight, stowable package. This is the one jacket that will likely cover your needs for anything you might encounter on mountain runs from spring through fall. Although expensive, it should see significant use and therefore, on a cost per mile used basis, may even be economical. Highly recommended.
*The 10k/10k rating means: The first number is the waterproofness, measured in the minimum height of a column of water (1 cm x 1 cm) before the fabric begins to leak. In this case the height is 10,000 mm, a very good rating. The second number is a measure of how breathable the fabric is using a metric that is the minimum number of grams of water vapor that can pass through one square meter of the fabric in a 24 h period. A value of 10,000 grams of water vapor/24 h is also a very good rating.
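The two numbers in a rating like “10k/10k” can be unpacked mechanically. As a minimal sketch (the function name and string format are my own, not an industry standard), the following parses such a rating into the two metrics defined above:

```python
# Illustrative sketch (not from any manufacturer spec): split a "10k/10k"
# style fabric rating into its two component metrics, per the definitions
# above. The function name and input format are hypothetical.

def parse_fabric_rating(rating: str) -> dict:
    """Parse a rating like '10k/10k'.

    First number:  waterproofness, the minimum water-column height (mm)
                   the fabric withstands before it begins to leak.
    Second number: breathability, grams of water vapor that can pass
                   through one square meter of fabric in 24 hours.
    """
    waterproof_str, breathable_str = rating.lower().split("/")

    def to_int(s: str) -> int:
        # "10k" -> 10000; plain numbers pass through unchanged
        return int(float(s.rstrip("k")) * 1000) if s.endswith("k") else int(s)

    return {
        "waterproofness_mm": to_int(waterproof_str),
        "breathability_g_m2_24h": to_int(breathable_str),
    }

print(parse_fabric_rating("10k/10k"))
# → {'waterproofness_mm': 10000, 'breathability_g_m2_24h': 10000}
```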
I posted an initial review of the Salomon S Lab Sense 3 Ultra in February and noted the changes from the Sense Ultra model from 2013 (final update here). To review, the primary changes between the two models are:
- simpler speed lacing design with a “bottom loading” lace pocket
- shape changes in the fused polymer overlay on the upper
- a higher heel counter (about 5 mm taller on my US size 7.5 (EU 40 2/3))
- a more dense upper fabric- similar material to Sense Ultra, just knitted differently
- a shorter Pro-feel rock plate and significantly more flexible shoe
- slightly higher midsole support of the heel counter
- removal of lugs from the arch area of the outsole
- polymer overlays on all previously exposed EVA areas on the outsole
- lower price!
I would put all of these into the category of minor changes, “tweaks” if you will.
This morning I added a third pair of Sense 3 Ultras to my shoe rotation and quickly realized that, after a little over 1200 km (about 750 miles) on pair #1, they were done. Running in the new pair emphasized exactly how worn out pair #1 is, particularly as it relates to cushioning. So the following is the postmortem, but suffice it to say that the Salomon S Lab Sense 3 Ultra is a shoe that is likely going to be hard to improve upon.
These shoes have about 770 miles (1240 km) on them, so at US$160/pair that gives a wear value of about US$0.21/mile (US$0.13/km). This is about the same calculated cost per mile as that experienced with the Sense Ultra from 2013. I should point out, however, that I have taken this pair of Sense 3 Ultras out of service earlier than I did the Sense Ultras, as I have become more attuned to final wear-out. I could have run in this pair for longer, but I have come to the realization that one must strike a balance between squeezing out maximum wear and retaining sufficient cushioning. I am erring on the more-cushioning side of that balance, as I do not want to risk foot bruising in two upcoming races: a 60 km with 10k of vert and a 100 miler with 22k of vert, both in rocky, technical mountainous terrain. Still, 750+ miles (1200+ km) is very good wear in my experience, even at the $160 price point.
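The wear-value arithmetic above is simple enough to sketch (numbers taken from the text; the function name is my own):

```python
# A quick sketch of the cost-per-distance arithmetic: price of the pair
# divided by the distance run in them. Figures are from the review text.

MILES_PER_KM = 0.621371  # standard mile/km conversion factor

def cost_per_distance(price_usd: float, miles: float) -> tuple:
    """Return (cost per mile, cost per km) for a pair of shoes."""
    km = miles / MILES_PER_KM
    return price_usd / miles, price_usd / km

per_mile, per_km = cost_per_distance(price_usd=160.0, miles=770.0)
print(f"${per_mile:.2f}/mile, ${per_km:.2f}/km")
# → $0.21/mile, $0.13/km
```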
I retired the Sense Ultras because holes had developed in the upper fabric. The precursors to the holes were seen in the upper fabric as early as about 900 km (550 miles). On this pair of Sense 3 Ultras there is no evidence of wear or holes in the upper after 700+ miles. It would appear that the minor modifications to the height of the fused polymer overlay and the upper fabric in the Sense 3 have succeeded in fixing this significant problem with the 2013 Sense Ultras. Close examination of the Sense 3 Ultra uppers reveals that the upper fabric is likely to be durable for many additional miles.
I quickly figured out a technique for reliably tightening and stashing the laces in the new “bottom loading” lace pocket: pull up on the lace pocket tab prior to pulling the laces tight. This allows the laces to snug up on the tongue and leaves the lace pocket open to facilitate stowage. I was a bit concerned about this design initially, but once you get used to it it works very well.
The higher heel counter did not bother me at all and this combined with the additional midsole heel support seems to provide more stability on steep downs, particularly long ones.
Although this has posed no issue from a running performance and/or comfort perspective, a new defect has appeared in the upper on both the left and right. This is a small crack in the material around the top at the ankle on the inside edge. I think the taller heel cup has changed the strain pattern in this area and the material is not tough enough to resist cracking. If the cushioning lasted longer, this might eventually lead to an issue.
As noted in the final update of the Sense Ultras, the midsole appears to reach a critical point with respect to cushioning somewhere around 1000 km (600 miles) for the type of use and terrain that I run on (about 50% rocky technical singletrack and 50% buffed singletrack). The same is true for the Sense 3 Ultras, as stated above. This may be an area where we might see Salomon come in with a different material that either lasts longer, is more cushioned, is lighter, maintains trail feel, or, hopefully, all of these.
The outsole of the Sense 3 Ultra shows very similar wear characteristics to that of the Sense Ultra. This is expected since there were no apparent changes in design or material in the heel and forefoot sections.
The shorter rock plate and more flexible arch region of the Sense 3 Ultra give substantially more trail feel without any adverse effects. Even with direct hits on sharp rock in this area, the Sense 3 Ultra still protected my foot, partly because some of the strain was accommodated by the additional flexibility. Additionally, I noted no decrease in traction from the removal of the lugs in the arch area. All of this represents a significant design improvement over the original Sense Ultra.
The S Lab Sense 3 Ultra remains a very comfortable shoe. This comfort is due, in part, to the additional flexibility offered in this latest model, accomplished in such a fashion as not to affect trail performance. Of course the “endofit” inner sock is the feature that primarily gives this shoe its performance and comfort. The trail feel continues to be outstanding, the “slipper-like” fit is second to none, and traction is at the highest available level in anything but mud.
Where to now?
Salomon have refined and tweaked the Sense to what seems to be an optimal level. So the question that arises is- where do they go from here? Not sure, but if history is any indicator we might see something very innovative from Annecy come spring 2015…. I hope so! Perhaps we will see something at OR this week (not likely), in the meantime the Sense 3 Ultra is still in the line-up for FW14-15.
Salomon have tweaked the S Lab Sense Ultra into a shoe that is even better, particularly as it concerns trail feel and overall comfort. This is an accomplishment, since the Sense Ultra was such a great shoe, but the Sense 3 Ultra goes to a (albeit only slightly) higher level. So as before, only more so: a great, lightweight, highly durable, low-drop, very comfortable shoe which can take on just about any terrain with confidence. Highly recommended.
Update 7 August 2014
Salomon are showing the Sense 4 Ultra and Sense 4 Ultra Soft Ground models for SS 2015 at OR Summer 2014 in Salt Lake. Based on the pictures, it appears that the outsole tread pattern has been changed significantly- perhaps to better shed mud? In any case it looks like we will not be seeing anything ground-breaking from Salomon in the Sense line for SS 2015.
House and Johnston have written an engaging, thorough, and well-illustrated training manual for alpinists, or indeed any endurance athlete. Although written for the alpinist, this book is particularly valuable for the mountain ultramarathon trail runner. The overlap between alpine-style climbing and mountain ultramarathon trail running is substantial, and the committed mountain ultramarathoner will benefit greatly from a focused read of this book. From the long “event” duration to the importance of core strength for optimal performance and injury prevention, alpinism and mountain ultramarathoning are nearly inseparable from a training perspective. In fact, in many sections of the text one can interchange the word “climb” with the word “run” and lose no meaning or relevance. Replace some of the upper body strength guidelines with similarly structured run-specific guidelines and one will find that the information in this book is nearly all directly applicable to mountain ultramarathon training. Given that there currently exists no comprehensive training manual specifically written for mountain ultramarathon training, this work is a great resource for the ultramarathon athlete. Although the importance of the mental aspects of training and competition (or expedition completion for alpinists) is well accepted, there is very little in the literature directed to the mountain ultramarathon athlete. Again, this book stands as a solid offering on the subject of mental training and development for intensely hard, long-duration endeavors, whatever form they might take. House and Johnston, with publisher Patagonia Books, have produced a beautiful book from a graphic perspective as well. The illustrations, pictures, and interesting “vignettes” from many of the world’s best alpinists serve, along with the excellent and thorough text, to make this book a classic tome that will be an important part of the canon for at least a generation.
I have always subscribed to the importance of “tangential reading” and have made such reading a fundamental part of the pursuit of knowledge. “Tangential reading” is the study of work in other (allied or disparate) fields to enrich and cultivate a deep understanding of concepts or subjects that are a primary focus. Examples include the study of chemical thermodynamics when pursuing a full understanding of statistical physics, or a reading of classical geometry texts whilst grappling with basic calculus. The approaches and perspectives brought forth by workers in “tangential” fields invariably bring new insight and a more thorough understanding of the subject matter at hand. Such studies are also important across non-allied fields as influences upon thinking in other, very far afield, endeavors. An example here is the rigorous study of mathematics and mechanical physics that substantially influenced the work of artist Richard Serra in his “torqued ellipses” and “torqued torus inversions”*. It was in this spirit that I came upon Steve House and Scott Johnston’s book Training for the New Alpinism – a Manual for the Climber as Athlete, recently published by Patagonia Books. Although I expected some overlap between training for mountain ultra-endurance running and cross country skiing and training for alpine-style climbing, I was surprised by just how large this overlap is. A reading of this book has not only reinforced many training principles that are part and parcel of any rigorous training for endurance sport, but the authors have also done an admirable job of distilling much of this information into a very readable, engaging, and well-illustrated discourse. The alpine “vignettes” that are strategically placed throughout the book nicely emphasize points made in the text and offer inspiring photographs of alpine pursuits.
But perhaps the most functional attribute of this book is the applicability of its contents to any number of endurance sports. House and Johnston have stripped away much of the sport-specific context that encumbers many other training texts when it comes to communicating fundamental training concepts. Here a neutral ground of endurance training is developed and then applied to the alpine discipline. As a result, endurance athletes of any variety will benefit from this book and, at the same time, be exposed to perspectives that are specific to the alpine discipline, much of which resonates with all endurance sports. Although Noakes, Daniels, Lydiard, Friel, Magness, and others have provided similar training precepts, each has done so within the atmosphere of running or cycling culture. In Training for the New Alpinism, alternative approaches based on the same training concepts are uniquely valuable to those athletes looking for broader perspectives, a deeper and richer understanding, and, perhaps, some new direction to enhance their own training regimen.
The organization of the book is nicely done and follows a logical sequence from introduction to fundamentals to specifics to sensible nutrition to mental aspects. Along the way many detailed plans, progressive approaches, and suggested protocols are offered and documented in a straightforward manner. Although the book runs in excess of 400 pages, a reading goes swiftly, due in part to the well written text but also to the quality of the book, including the paper, the illustrations, the pictures, and the collected short-form writings of some of the most accomplished alpinists. It is truly a pleasure to read. Thanks to House, Johnston, and publisher Patagonia Books for their focus on graphic excellence and structural quality. Low quality books with sub-standard graphics are commonplace today and it is refreshing to read such a well-done product.
A few take-aways
1. The oft-repeated but often ignored precept of 80/20 (or 90/10, depending) proportions of L1-L2 to L4(L5) intensity. Here again the authors point out the critical importance of limiting the intensity of workouts in order to perform at an optimal level on race day (or survive a climb) and during scheduled hard workouts. I often drift out of this protocol and need consistent reminders to back off and save the high-intensity capacity for the high-intensity workouts and races, and not spend much if any time in the no man’s land of L3.
2. The importance of max strength workouts to develop reserve capacity power at a high power to weight ratio. This is such an important factor for competition yet there is very little written on the subject. The authors provide excellent information here.
3. Begin additional reading on mental aspects and develop some sort of operative approach to enhance mental thought processes under highly stressful conditions.
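The 80/20 split in take-away 1 reduces to simple arithmetic. A minimal sketch, assuming a hypothetical 10-hour training week (the function name and hours figure are mine, not from the book):

```python
# Illustrative sketch of the 80/20 (or 90/10) intensity-distribution
# precept from take-away 1: allocate weekly training time between low
# intensity (L1-L2) and high intensity (L4-L5), deliberately leaving
# the L3 "no man's land" empty. Weekly-hours value is hypothetical.

def intensity_split(weekly_hours: float, easy_fraction: float = 0.8) -> dict:
    """Split weekly training hours per the 80/20 precept."""
    assert 0.0 < easy_fraction < 1.0
    return {
        "L1-L2 (easy) hours": round(weekly_hours * easy_fraction, 1),
        "L3 hours": 0.0,  # intentionally avoided
        "L4-L5 (hard) hours": round(weekly_hours * (1.0 - easy_fraction), 1),
    }

print(intensity_split(weekly_hours=10.0))
# → {'L1-L2 (easy) hours': 8.0, 'L3 hours': 0.0, 'L4-L5 (hard) hours': 2.0}
```

Passing `easy_fraction=0.9` gives the 90/10 variant the authors also mention.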
A few quotes from the text
“It is not our natural tendency to value struggle over success, a worldview that climbing sternly enforces. Embracing struggle for its own sake is an important step on your path”.
House and Johnston (p 21)
“Constantly overcoming difficult training challenges and examining ourselves along the way improves self-assurance. That confidence frees imagination. It opens doors to new, more difficult projects, and expands our problem-solving repertoire”.
Mark Twight (p 15)
“I couldn’t recover when I did go long, and the old days when I could move for twelve to twenty-four hours non-stop were a distant memory. Thus ended my love affair with short-duration, high intensity “cross training” to the exclusion of other forms”.
Mark Twight (p 98-99)
“You get these high-powered people who want to climb Mount Everest, they spend $85,000… there is a Sherpa in the front pulling, a Sherpa in the back pushing, carrying extra oxygen bottles so you can cheat the altitude. You haven’t climbed Everest. The purpose of climbing something like that is to affect some kind of spiritual or physical change. When you compromise the process, you’re an asshole when you start out, and you’re an asshole when you get back”.
Yvon Chouinard (p 365)
An enjoyable read with valuable training advice and programming, a slew of high quality illustrations and pictures, applicability across endurance athletics (in particular mountain ultramarathoning), and much insight into the operative physical training programs and associated mental training that has worked for many of the world’s top alpinists. Add this to your list of “must reads”. Highly recommended.
*Charlie Rose has done numerous interviews with Serra in which Serra makes it clear how important curiosity and the associated pursuit of “tangential” reading are to the creative process. One such interview can be viewed here, starting at about 23:00.