I first read about heart rate variability (HRV) in September of 2012. Back then, HRV was background information in a laboratory comparison of the relative effectiveness of HR-based training vs. power meter (PM)-based training. The next occurrence was in a team newsletter early this month. Needless to say…I took interest.
Basically, HRV is the variation in duration of your beat-to-beat time periods. Here’s what I’m talking about:
The RR (or N-N) periods are measured between the high points (R peaks) of successive QRS complexes
The variance is apparent when one RR period differs from the next, and so on throughout the tracing. So why is this variance important? Well, remember how training stress balance (TSB) uses exponentially weighted moving average equations to estimate your training stress level (form and freshness)? HRV is a direct (although inclusive) indicator of the level of stress that your system is experiencing.
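To make the idea concrete, here is a minimal sketch (the R-peak times are hypothetical numbers, not data from the post): given the times of successive R peaks, the RR intervals and their spread fall out directly. The standard deviation of the RR series is the SDNN metric discussed later.

```python
# Illustrative sketch: given R-peak times in seconds (hypothetical values),
# derive RR intervals (ms) and a simple measure of their spread.
r_peaks_s = [0.000, 0.812, 1.650, 2.440, 3.305]

# Successive beat-to-beat intervals in milliseconds.
rr_ms = [round((b - a) * 1000) for a, b in zip(r_peaks_s, r_peaks_s[1:])]

mean_rr = sum(rr_ms) / len(rr_ms)
# Sample variance of the RR series: a larger spread means higher HRV.
var_rr = sum((x - mean_rr) ** 2 for x in rr_ms) / (len(rr_ms) - 1)

print(rr_ms)           # the RR series
print(var_rr ** 0.5)   # its standard deviation (SDNN, in ms)
```

A stressed system tends to produce a smaller spread; a rested one, a larger spread.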
In a sense, your heart rate is controlled by two systems within your body, the sympathetic nervous system (SN) and the parasympathetic nervous system (PN). The SN system regulates the body’s unconscious actions, like fight-or-flight, and is constantly active at a basic level to maintain system stability. The PN system, on the other hand, is responsible for stimulating the “rest and digest” or “feed and breed” activities that occur when the body is at rest. Basically, each system is the opposite and complement of the other. The SN system increases the uniformity of the RR periods; the PN system increases their dissimilarity. Both systems “vie for control,” and as a result, the variability in the length of time between heartbeats rises and falls.
Understanding how HRV is affected becomes important to us when we train in our respective sports. The body reacts to various types of stress. In training, we call this stimulation and adaptation. For instance, our bodies compensate for anticipated future strain by making muscles stronger and/or more efficient. This occurs during the rest cycle, when we sleep. Accordingly, the balance between beat-to-beat uniformity and non-uniformity reflects this state. That is to say, when the body and/or mind is stressed, the variance tends to decrease. When the body is rested and has capacity to take on work, the variance increases. However, this “HRV gauge” isn’t specific to a particular stress source—it’s inclusive. This is the part that caught my attention. Why? Because I’m terrible at estimating just how depleted my athletic engine has become.
One research article studied changes in autonomic function related to athletic overtraining syndrome. Mourot et al. (2003) discussed how HRV provides information related to the regulation of heart rate in real-life conditions. Their study demonstrated that analysis of HRV using linear and non-linear methods (Poincaré plots specifically) could be used as a direct indicator of fatigue after prolonged exercise.
Note that you’ll have to make some choices about how you collect and interpret HRV data. You could download an Android application to your smartphone and have a super-simplified green/yellow/red scheme tell you your stress level, but this version might not give the detail that you want. Or you could spend nearly $175 USD for a more professional program (maybe with hardware) that gives a bit more detail, but again, you might not learn how that information is derived. I preferred to do the homework, learn the method(s), and gain an understanding at a deeper level. Choose what’s important for you.
I started learning how I could take advantage of this phenomenon. Equipment-wise, I have heart rate straps of the ANT+ variety, an ANT+ USB dongle, a PowerTap hub on one of my wheel sets, and a computer. Problem was, I didn’t have Windows-compatible software to analyse an ECG tracing. As it turns out, I didn’t need to mess around with tracings. After four or five days of searching the web, I found the HRV_Tracker application, which records RR intervals. This free program accepts the data stream that my ANT+ compatible HR strap produces and drops it into a pre-formatted .hrm file. It only produces the data file, however, and that’s when I found the Kubios HRV – Heart Rate Variability Analysis Software, an advanced tool for studying the variability of heartbeat intervals.
After you get the software installed and setup, the basic process is to:
- Wake up the next morning and create a daily data file using HRV_Tracker
- Open that file using Kubios HRV. Adjust for artifacts and frequency cut-off
- Determine which analysis method and metrics to track (time-domain, frequency-domain, or non-linear)
- Transfer chosen metrics over to a Minitab 16 statistics file (or Microsoft Excel, or other graphing application)
- Decide how the indicated data affects your training decision for the day
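The first two steps above hinge on getting the RR intervals out of the .hrm file. As a sketch of that plumbing, the helper below pulls the interval list out of a Polar-style .hrm text file, whose `[HRData]` section lists one value per line. The exact layout of HRV_Tracker’s output may differ, so treat both the section layout and the helper name `read_hrm_rr` as assumptions for illustration.

```python
# Hypothetical sketch: extract RR intervals (ms) from a Polar-style .hrm
# file. Assumes a "[HRData]" section with one interval per line; the real
# HRV_Tracker output format may differ in detail.
def read_hrm_rr(text):
    lines = iter(text.splitlines())
    for line in lines:                    # skip ahead to the data section
        if line.strip() == "[HRData]":
            break
    rr = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("["):   # next section begins
            break
        rr.append(int(line.split()[0]))        # first column is the interval
    return rr

sample = "[Params]\nMode=3 0 0\n[HRData]\n812\n838\n790\n"
print(read_hrm_rr(sample))   # [812, 838, 790]
```

From there, the RR list feeds directly into the analysis steps that follow.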
Recording Your Data File
I paraphrased the data collection steps from the Medicore SA-3000P clinical manual—simple and straightforward:
- Surrounding environment
- Keep the same collection time, since HRV is known to follow a circadian rhythm (morning/evening)
- The proper environment
- Avoid bright light or noise
- Maintain comfortable room temperature
- Before the measurement
- Avoid caffeine or smoking before recording your data file
- Do not eat breakfast
- Allow time to adjust to your seated position
- During the measurement
- Maintain a comfortable sitting position
- Don’t move or talk
- Don’t close your eyes or fall asleep
- Don’t intentionally control your breathing
I wake up, visit the restroom, return to my desk, put on my HR strap, and launch the HRV_Tracker software (Figure 1).
Figure 1. HRV_Tracker Application Interface
See the Device and Script Recording toolbars? Check the “Help and Support Documentation” section of the index.html file within your installation folder for details on how to set these up. One thing to pay attention to: if the application window is at any size other than full-screen, the stage time values will not display. I found this out by happenstance, and I now always maximize the application window when recording.
After launching the application, plug in your USB ANT+ stick. Your HR strap can now talk to your computer. When you’re ready (and if you’re using a script file), mouse over and click the “Start Script” button. Otherwise click “Start” to begin recording. You’ll see something like Figure 2:
Figure 2. Record file start screen
Within the “Heart rate status” window you’ll see your HR (self-explanatory) and the sequential R-R interval as it’s recorded, in milliseconds. Within the “Script status” window (if you’re using one), you’ll see the stage message and elapsed time that you’ve implemented in your script file. (Refer to the help file noted before.) I chose three minutes as my recording period, mainly because I decided to focus on the Low Frequency (LF) and High Frequency (HF) components, which are distinguishable in short-term recordings according to the HRV guidelines suggested by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Other common periods are five minutes, 20 minutes, and 24 hours. Whatever you decide on as your recording period, try not to mix files of different lengths.
Once you’ve reached the end of your chosen recording period, the application will automatically save your data file to your folder of choice, Figure 3.
Figure 3. Data file location and file name structure
Note the date and 24-hr time stamp within the file name; this is automatic, and it makes it easy to manage your information over time.
Preparing Your Data
There are a couple of things I do before copying my metrics over to my tracking file. In sequence: deal with artifacts (if any), and remove the very-low-frequency (VLF) band. An artifact is basically any RR interval point that can be considered an outlier. These points are easy to spot, and easier to deal with; consider Figure 4:
Figure 4. A raw data .hrm file opened in Kubios
Note the main graphing window: see the four points that lie far outside the rest of the data points? Those are artifacts. You’ll find their causes in the Kubios user manual. All I need to do is get rid of them; otherwise they’ll skew the LF and HF frequency information. In the left-side margin you’ll see a drop-down box labeled “Artifact correction.” Click it and you’ll see a range of options from “none” to “custom.” The idea is to pick the option that highlights only the outlying points and not the rest of the data stream. Note that retained data points temporarily turn from blue to green. I usually work down the list until this condition is true, as in Figure 5:
Figure 5. Artifacts selected with the correction drop box
Here, the “low” correction level has excluded all the artifact points. Click “Apply” to screen the points out. The graph will automatically re-range for you. Click the “Frequency-Domain” button under the horizontal scroll bar for the next step.
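As a rough stand-in for what the correction levels do, the sketch below flags any RR interval that strays too far from the series median and replaces it by interpolating its neighbours. The 350 ms threshold is my own approximation of a lenient level like “low”; Kubios’s actual algorithm is more sophisticated, so this is illustrative only.

```python
# Simplified artifact correction sketch (not Kubios's actual algorithm):
# flag RR intervals far from the series median, then replace each flagged
# point with the average of its neighbours.
from statistics import median

def correct_artifacts(rr_ms, threshold_ms=350):
    med = median(rr_ms)                     # robust centre of the series
    out = list(rr_ms)
    for i, x in enumerate(out):
        if abs(x - med) > threshold_ms:     # an outlier / artifact
            prev = out[i - 1] if i > 0 else med
            nxt = out[i + 1] if i + 1 < len(out) else med
            out[i] = round((prev + nxt) / 2)
    return out

rr = [810, 820, 1400, 815, 805]    # 1400 ms is an obvious artifact
print(correct_artifacts(rr))
```

The corrected series keeps its length, which matters because the frequency analysis in the next step assumes a continuous beat-to-beat record.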
Let’s remove the very-low-frequency (VLF) band from our analysis. (It’s the pink area in the FFT and AR spectrum graphs.) Basically this frequency zone (0 Hz to 0.04 Hz) is background noise for our purposes, so we can get rid of it. Select “Smoothness priors” from the “Method” drop box under “Remove trend components.” You’ll see that the Lambda value field (500) and the estimated frequency cutoff “fc = 0.035 Hz” appear. I debated changing the default value to “390” to give me “fc = 0.039 Hz,” but I’m not certain the effect would be statistically significant. [Edit: I set the default to 0.039 anyway.] To save yourself some time, set the detrending method and default Lambda value in the “Analysis options” section of your Preferences. Your screen should look something like Figure 6:
Figure 6. Detrended data file in Kubios
Note that most of the pink VLF band area is gone.
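For the curious, the smoothness-priors detrending behind that drop box can be written compactly. The sketch below solves for a smooth trend using a second-difference penalty weighted by Lambda squared and subtracts it from the RR series; this follows the general smoothness-priors formulation, though Kubios’s internal implementation details may differ.

```python
# Smoothness-priors detrending sketch: estimate the slow trend as
# (I + lam^2 * D2'D2)^-1 z, where D2 is the second-difference operator,
# then subtract it. lam=500 mirrors the Kubios default Lambda.
import numpy as np

def smoothness_priors_detrend(rr, lam=500):
    z = np.asarray(rr, dtype=float)
    n = len(z)
    D2 = np.zeros((n - 2, n))              # second-difference matrix
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam**2 * (D2.T @ D2), z)
    return z - trend                       # the detrended (stationary) part
```

A larger Lambda yields a smoother trend and therefore a lower frequency cutoff, which is why changing 500 to 390 shifts fc from 0.035 Hz toward 0.039 Hz.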
One of the neat things about Kubios is that there are various analysis methods to consider, depending on your disposition. Some pros and cons of my particular metric choices, by type of analysis:
Time-Domain Analysis
- Pro—the simplest to apply. There are several available metrics in this category (SDNN, pNN50, TINN, and Triangular Index), but the root mean square of successive differences (rMSSD) is considered the most convenient for non-expert users. One substantiated argument for rMSSD as the ideal metric comes from this article. Another reference preferring rMSSD over pNN50 because of its mathematical robustness is made by the Task Force for HRV.
- Con—on the other hand, George Moody of the Harvard-MIT Division of Health Sciences and Technology stated in an article published on PhysioNet that, “The commonly quoted scalar measures (…SDNN, pNN50, rMSSD) offer only a limited view of HRV.” He goes on to say that these measures were devised when the standard technology for estimating HRV was a pair of calipers and a hand calculator.
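The time-domain metrics named above can all be computed directly from the RR series; a straightforward sketch (values in milliseconds):

```python
# Time-domain HRV metrics from an RR series in milliseconds.
import math

def time_domain(rr_ms):
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    # SDNN: standard deviation of all RR intervals
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    # rMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # NN50 / pNN50: count and percentage of successive differences > 50 ms
    nn50 = sum(1 for d in diffs if abs(d) > 50)
    pnn50 = 100.0 * nn50 / len(diffs)
    return {"SDNN": sdnn, "rMSSD": rmssd, "NN50": nn50, "pNN50": pnn50}

print(time_domain([812, 838, 790, 865, 820]))
```

These are the same quantities Kubios reports in its time-domain results panel.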
Frequency-Domain Analysis
Available metrics in this category are very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) power, estimated by either of two approaches:
Fast Fourier Transform Approach (FFT)
Autoregressive Modeling Analysis (AR)
- Pro—this model smooths the frequency components, allowing discrimination of the bands without depending on pre-selection. In essence, the produced graph is easier to read than the FFT. According to the Kubios user manual, the AR model “…yields improved resolution especially for short samples.”
- Con—a limitation concerns the suitability of the value chosen for the model order. This value affects the determination of the center frequency and the amplitude of the frequency components, and non-optimal selections may introduce inaccuracies. Earlier Task Force guidelines recommended a range from 8 to 20. More definitively, a study by Boardman et al. (2002) recommended a model order of 16 to reduce spikes and smearing and to produce easily observable peaks. The default model order in the software’s preferences is now 16.
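To show where an LF/HF ratio comes from, here is a minimal FFT-style sketch: resample the irregular RR series onto an even 4 Hz grid, take a raw periodogram, and sum the power in the LF (0.04 to 0.15 Hz) and HF (0.15 to 0.4 Hz) bands. Kubios uses Welch averaging and other refinements, so treat this as an illustration of the principle rather than its implementation.

```python
# Sketch of an FFT-based LF/HF ratio: resample the RR series evenly,
# compute a raw periodogram, and compare power in the LF and HF bands.
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0              # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs) # evenly spaced time grid
    rr_even = np.interp(grid, t, rr)        # resampled RR series
    x = rr_even - rr_even.mean()            # remove the DC component
    pxx = np.abs(np.fft.rfft(x)) ** 2       # raw periodogram (relative units)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()
    return lf / hf
```

Feeding in a series whose RR intervals oscillate at 0.1 Hz produces a large ratio (LF dominant), while a 0.3 Hz oscillation, typical of breathing, produces a small one.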
Poincaré Plot
- Pro—using this graphical method, it can be easy to identify artifacts. Refer to Figure 7. In the main graph display, we find a couple of outliers (artifacts) near the 7:10:35 time mark. Correspondingly, with the “Nonlinear” view results button selected, we see three points within the Poincaré plot quite separated from the main cloud; these are the same indicators. I suppose that with practice, judging outlying points that lie closer to the cloud will become easier.
Figure 7. Poincare plot and locations of artifact points
- Con—even though Mourot et al. (2004) demonstrated that Poincaré plots reliably delineated shifts in parasympathetic balance, and that “…changes in SD1 and SD1n paralleled changes observed in rMSSD, pNN50, TP, HF, and HF/TP,” it might be difficult to make day-to-day training decisions based on the graph alone. Alternatively, a trend based on SD1 and/or SD2 fluctuation compared to a baseline could be a solution, although I have found no study to date that has demonstrated this idea.
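The SD1 and SD2 figures mentioned above fall out of the plot geometry: SD1 is the spread of the (RRn, RRn+1) points perpendicular to the line of identity (short-term variability), and SD2 is the spread along it (long-term variability). A small sketch:

```python
# SD1/SD2 from Poincare-plot geometry: rotate the (RR_n, RR_n+1) cloud
# 45 degrees and take the spread along each axis.
import numpy as np

def poincare_sd(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                      # (RR_n, RR_n+1) pairs
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)  # across the line of identity
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)  # along the line of identity
    return sd1, sd2
```

Note that SD1 is mathematically tied to the variability of successive differences, which is why changes in SD1 parallel changes in rMSSD.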
Approximate Entropy (ApEn)
- Pro—this method might work well for an HRV file with less than 50 points, can be applied in real-time, and is also less affected by noise. Additionally, research by Lau et al. (2005) shows that ApEn can be a natural measure of HRV.
- Con—based on the first reference in the Pro above, ApEn can estimate lower than expected for small records and can lack relative consistency. Relative difficulties in applying this method to ECG data analysis were examined in this study by Holzinger et al. Furthermore, “ApEn was shown to be a biased statistic” (Pincus, 1995).
Sample Entropy (SampEn)
An extension of ApEn. Technically explained, “Sample Entropy is the negative natural logarithm of an estimate of the conditional probability that sub-series (epochs) of length m that match point-wise within a tolerance r also match at the next point.”
- Pro—still searching for a reason to use this method by itself or in addition to the previous methods; none found to date.
- Con—values for the variables m and r must be chosen before analysis. As of this writing, various directions have been published. Aktaruzzaman and Sassi (2013) determined the following guidance: “The value m depends on the length of the series and it should be kept small (m = 1) for short series (length ≤ 120 points)” and “The recommended value of r in the range [0.1, 0.2] × STD has been shown to be applicable to a variety of signals.” Additionally, Heffernan et al., in their study of HR and HRV following resistance training, set their embedding value m = 2 and their filtering value (tolerance) r = 0.2 × STD. These latter settings happen to be the defaults in Kubios HRV.
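Working from the definition quoted above, sample entropy can be written as a short brute-force routine with the Kubios defaults m = 2 and r = 0.2 × STD. This is an O(n²) sketch for small morning recordings, not an optimized implementation.

```python
# Sample entropy sketch: -ln(A/B), where B counts template pairs of length m
# that match within tolerance r, and A counts pairs of length m+1.
# Defaults follow the Kubios settings noted above: m = 2, r = 0.2 x STD.
import numpy as np

def sampen(series, m=2, r_factor=0.2):
    x = np.asarray(series, dtype=float)
    r = r_factor * x.std(ddof=1)

    def match_pairs(length):
        templates = [x[i:i + length] for i in range(len(x) - length)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if np.max(np.abs(templates[i] - templates[j])) <= r
        )

    b = match_pairs(m)        # matches at length m
    a = match_pairs(m + 1)    # matches at length m + 1
    return -np.log(a / b)     # undefined if no length-(m+1) matches exist
```

Lower values mean a more regular (more predictable) RR series; higher values mean more complexity.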
Understanding Your Data
This is the tough part. Up to now, all effort has gone toward collecting and processing data. Now for the interpretation. Because analysis methods and related metrics all have pros and cons, I decided to approach any training decision-making holistically. That is to say, I wouldn’t make decisions based solely on a single method or metric; I would look at a multi-method, multi-metric “picture” of the statistics for that morning and the trend. Each morning I copy the following prepared metrics from Kubios into my Minitab 16 tracking sheet: Date, Mean RR, SDNN, rMSSD, NN50, pNN50, FFT (LF/HF), AR (LF/HF), and HF power (AR results). See Figure 8.
I input these metrics every morning after creating the .hrm file; it easily takes less than five minutes. Note that the last column, lnRMSSD, is actually a formula that I inserted into each cell of that column. It takes the rMSSD figure from column C4, computes the natural logarithm, then multiplies by 20. I then round to an integer for a nicer “real” number. In other words, it’s easier to look at a whole number in a graph than one with three decimal places.
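The lnRMSSD column formula described above is a one-liner; here it is as a sketch (the 51.5 ms input is a hypothetical rMSSD reading):

```python
# The lnRMSSD index: 20 x natural log of rMSSD, rounded to a whole number
# for easier graphing.
import math

def ln_rmssd_index(rmssd_ms):
    return round(20 * math.log(rmssd_ms))

print(ln_rmssd_index(51.5))   # 79
```

Multiplying by 20 stretches the narrow natural-log scale into a range where day-to-day changes are visible on a graph.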
When I initially started collecting HRV data, one of the first tracking graphs I created was my mean RR over time. This graph doesn’t carry much decision-making weight and I don’t really look at it anymore. It simply confirms that my RR average has decreased over the years, i.e., my average isn’t what it was in my twenties.
Note: all graphs contain a baseline of seven to ten days of non-training HRV measurements. This forms the position of the “mean.” Another way to do this would be to assign the baseline value as a horizontal reference line, and let the mean calculate and display as usual.
Based on my continued reading of research articles, and in some part observation of (then) current sports HRV measuring practice, I created LF/HF ratio tracking graphs from the FFT and AR analysis methods. Those are shown here as Figure 9 and Figure 10. Note that the vertical lines from the “Date” axis are days-off (no training) and that all data points are plotted in the morning before any workout or training:
Figure 9. Minitab 16 Individuals graph of HRV FFT (LF/HF) over time
For this method, a lower ratio is better, indicating “balance” between the sympathetic and parasympathetic systems. According to FFT, my system was stressed far above the green average of 4.76. (This coincides with when I started the off-season lifting phase.) Curiously, only the later date of Oct 29 shows higher stress, as I was lifting weights on the non-rest days. Or does this show adaptation?
Later on, again based on the research articles I was reading, the Autoregressive method seemed the way to go:
Figure 10. Graph of AR LF/HF ratio over time
The AR method reveals more stress days beyond what FFT showed in Figure 9. Again, for recovery, lower numbers are better. As I understand it, system stress should climb during successive training days and drop after a rest day. That’s basically the pattern here.
I’ve read plenty of studies suggesting that the HF band is indicative of parasympathetic activity. This was another graph I created to help me understand the big picture. For the HF graph, higher numbers are better (Figure 11).
Figure 11. Graph of HF values over time
In the HF graph, higher plots are better, showing an increased level of recovery or capacity for more work. What alarms me here is the downward trend starting Nov 1 (a rest day). A bit of recovery followed for the morning of Nov 3 (heavy lift day), a continued negative trend for Nov 4 (light lift day), and an expected decrease for the morning of Nov 5 (rest day). A day of rest should have manifested as an increase in the morning reading on Nov 6 (a scheduled heavy lift day), but instead the plot continued its downward trend. At this point I decided not to lift and to take another rest day instead. If the plot for the morning of Nov 7 shows an increase, then I know I’m recovering. However, if the plot decreases, then I know I’ve dug myself a good hole by training too much and not watching my trend line.
Nov 7: fortunately, the morning reading showed that my system recovery was back up where I wanted it to be: above the baseline (47). See Figure 12.
Figure 12. Graph of ln rMSSD over time
I’d like to think that I avoided a potential over-reaching scenario by understanding the warning signal as shown in Figure 12 for Nov 4 to Nov 6. Good meals and three nights rest allowed the regular workout as scheduled.
Control Limits & Historical Tendency
One of the challenges with this approach is the context of whole-system change over time. Throughout the Minitab 16 graphs above you’ll have noticed the red horizontal line(s) above and below the green mean line. These are warning lines, more or less, and for now they’re a simple way for me to tell when I’ve gone overboard. I’m setting the upper and lower control limits at one, and in some cases two, standard deviations because I haven’t yet installed a better mechanism for setting these “gates.”
One solution may be the exponentially weighted moving average (EWMA) equation like that used to calculate training stress balance (TSB). Using initial gain factors of 7 days for short-term and 42 days for long-term, I might be able to forecast the level of recovery. On the other hand, Dr. Larry Creswell, M.D., states in his blog post:
HRV responses to training are not only specific to an individual but also to both the recent and remote training history. The most important observation is that the relationship between HRV and fitness is simply different in well-trained athletes: there can be increases in HRV with no corresponding increase in fitness over a training cycle and there can also be decreases in HRV despite increases in fitness.
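Caveats aside, the TSB-style EWMA idea could be sketched like this: run 7-day and 42-day exponentially weighted averages over the daily lnRMSSD index and watch their difference. The “balance” framing here is my own illustration, not an established HRV metric, and the daily values are hypothetical.

```python
# EWMA sketch in the spirit of TSB: short-term (7-day) and long-term
# (42-day) exponentially weighted averages of a daily lnRMSSD index.
def ewma_series(values, time_constant_days):
    alpha = 1.0 / time_constant_days    # per-day gain
    out, avg = [], values[0]
    for v in values:
        avg = avg + alpha * (v - avg)   # move a fraction toward today's value
        out.append(avg)
    return out

daily = [78, 76, 74, 75, 71, 70, 73, 77]   # hypothetical lnRMSSD indices
short = ewma_series(daily, 7)
long_ = ewma_series(daily, 42)
balance = [l - s for s, l in zip(short, long_)]   # long-term minus short-term
print(balance)
```

The short-term average chases recent readings while the long-term average drifts slowly, so a widening positive balance would suggest recent readings are depressed relative to the athlete’s norm.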
Another example of a gate-keeping solution is the BioForce HRV software application. This program color-codes a daily, weekly, and monthly index to give criteria for training decisions. At first glance, the index seems to be an average of the current period compared to the previous respective period, but I haven’t read anything that explains the calculation method yet, so I’ll have to keep looking.
I know there are particular applications and authors who maintain that their method or metric is the way to track this stuff. Based on what I’ve read thus far, I’m not convinced that any single method and metric is the true and accurate way. So what I’ve done is collect methods and metrics to examine as a whole, giving me the “big picture” compared to what I expect to see. For example, the morning after a training day, the following methods and metrics should behave thus:
Autoregression (AR LF/HF) graph—plot point should increase from previous; higher ratio indicates additional system stress
Fast Fourier Transform (FFT LF/HF) graph—plot point should increase from previous; higher ratio indicates additional system stress
High Frequency (HF) graph—an indicator of parasympathetic activity or magnitude. The plot point should decrease from the previous.
Mean RR graph—plot point should decrease; a lower value indicates increased sympathetic action (or decreased parasympathetic action). An extended trend upward could mean general system adaptation. An extended downward trend could mean I’m moving into an over-reaching or over-training state.
Natural log of rMSSD index graph—plot point should decrease from the previous as this metric is supposed to be an indicator of parasympathetic system magnitude compared to the sympathetic system.
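The cross-check described above can be sketched as a simple agreement count: each metric has an expected direction of movement after a training day, and I can tally how many actually moved that way. The metric names, example values, and the tally itself are my own illustration of the idea.

```python
# Sketch of the morning-after cross-check: each metric should move in a
# known direction after a training day; count how many actually did.
EXPECTED_AFTER_TRAINING = {      # +1 = should rise, -1 = should fall
    "AR_LF_HF": +1,
    "FFT_LF_HF": +1,
    "HF": -1,
    "mean_RR": -1,
    "lnRMSSD": -1,
}

def agreement(yesterday, today):
    hits = 0
    for metric, direction in EXPECTED_AFTER_TRAINING.items():
        moved = today[metric] - yesterday[metric]
        if moved * direction > 0:        # moved the expected way?
            hits += 1
    return hits, len(EXPECTED_AFTER_TRAINING)

# Hypothetical readings for two consecutive mornings:
y = {"AR_LF_HF": 3.1, "FFT_LF_HF": 4.2, "HF": 310, "mean_RR": 940, "lnRMSSD": 78}
t = {"AR_LF_HF": 3.8, "FFT_LF_HF": 4.9, "HF": 260, "mean_RR": 905, "lnRMSSD": 74}
print(agreement(y, t))   # (5, 5): all five metrics moved as expected
```

A low agreement count is the cue to recheck data entry and pre-processing before blaming the “fuzziness” of the method.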
If the direction of one graph doesn’t agree with the others, or if a plot doesn’t go in the direction I thought it would, the first thing I look for is an input error from when I transferred the data values to the tracking spreadsheet (Minitab 16), or a pre-processing step done incorrectly. If the data checks out, then I attribute the disagreement between graphs to the “fuzziness” of the whole endeavor. After all, I’m not certain this is an exact science (yet?). So far there have been a few contradictions since my start date of October 6, but for the most part, graph behavior has been consistent.
My next issue concerns the size of the change from one day’s plot to the next. Does the amount of change make sense? Should the plot point be farther up or down? Well, this question is why I’m doing this in the first place: to try to predict or understand how much change is involved based on the level of training I did. It also relates to the “gating” function I spoke of earlier. In other words, when race weekend arrives, where will my system’s recovery level be? Will I be ready for a great effort, or should I wait for the next venue? I think if I can figure out the pattern, I’ll be able to back-plan my training (and recovery) from one of my target races, like an “A” race, stage race, etc.
That was a lot of stuff to put in one blog post. I only have one month’s worth of data based on Phase 1‘s training input. The next training level is the Strength Period of Phase 1, which starts this coming Monday. My next post should (hopefully) be much shorter.
Hey, thanks for reading, and let me know any questions you may have. Best of luck for your racing efforts!