I’ve finally succumbed and got round to writing a blog. Previously I’d not thought myself wise, interesting or opinionated enough to warrant jotting anything down, but some recent events, work with athletes and chats with colleagues really got me thinking. I’m passionate about sports science, but I’m perhaps most passionate about how ‘we’ communicate practice and how athletes receive and respond to that counsel. As such, this blog is all about case studies and the value surrounding and within them. With a couple of case study manuscripts in preparation, this post has also helped focus my mind on the tasks at hand, and how far we’ve come with those athletes.
Hopefully this post provokes some thoughts…
No two athletes are the same
This seems obvious, almost to the point of cliché. We’re currently living through the embryonic era of personalised medicine (or so we’re told), so the idea of treating a group of individuals as a homogeneous collective is already beginning to feel dated.
But when we take our investigations out of the lab and onto the playing field, we quickly become aware of an enormous degree of variation.
From my own work, supplementation springs to mind – when working with a large group of swimmers, we had differing age groups, genders, abilities and, as such, medal prospects at domestic championships.
Across the events we supplemented differently: beetroot juice shots (a small, more palatable dose established through single-blind performance testing in training) for endurance and medley swimmers, and beta-alanine and sodium bicarbonate for sprint events. I would call this within-sport variation.
Between-sport variation then occurs when we take either of the above interventions and apply it to another sport with similar training or competitive demands. Another client used beetroot and cherry juice as part of their taper for a marathon (running a PB of 2:52). So whilst the training volumes (i.e. time spent training) and intensities may have been similar between the endurance swimmers and our marathon runner, the nature of the intervention changed to match the event, sport and competition.
Secondly, no two interventions are ever the same in real life. Multiple tests are required, at differing phases of the training cycle, to establish ‘does this work for me?’ and the consistency of that response. This ultimately comes down to how and how frequently you test, and then what you do with that data to establish what’s working for the athlete. Don’t underestimate or undervalue the placebo effect here either.
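As a rough sketch of what answering ‘does this work for me?’ can look like in practice – my own illustration, not a method described above – one common approach is to compare an athlete’s mean change across repeated trials against the test’s typical error (its noise) and a smallest worthwhile change. All numbers and thresholds below are invented for the example:

```python
# Hypothetical sketch: judging an individual athlete's response to an
# intervention across repeated trials. The typical error and smallest
# worthwhile change (SWC) would come from your own reliability testing;
# the data here are made up.

def mean(xs):
    return sum(xs) / len(xs)

def individual_response(baseline_trials, supplemented_trials,
                        typical_error, swc):
    """Compare the mean change to the smallest worthwhile change,
    accounting for test noise (typical error)."""
    change = mean(supplemented_trials) - mean(baseline_trials)
    # The noise of a change score is roughly sqrt(2) * typical error
    noise = 2 ** 0.5 * typical_error
    if abs(change) < noise:
        return "unclear - change is within test noise, retest"
    if change <= -swc:  # negative change = faster time = better here
        return "worthwhile improvement"
    if change >= swc:
        return "worthwhile decline"
    return "real but trivial change"

# 200 m swim times (s) before and while supplementing, across a phase
baseline = [125.4, 125.9, 125.1]
supplemented = [123.8, 124.1, 123.6]
print(individual_response(baseline, supplemented,
                          typical_error=0.4, swc=1.0))
# -> worthwhile improvement
```

Repeating this check at differing phases of the training cycle, as described above, is what lets you separate a consistent individual response from a one-off (or placebo-driven) result.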
Sport doesn’t exist in a vacuum
Here we need to look beyond the obvious and get to know our athletes really well. This is something I still need to get better at, but something that really paid dividends in some work we’re hoping to publish in 2016 (see Best et al., 2015 here for preliminary findings). It quickly becomes apparent that working with an athlete goes beyond x ≠ y and starts to look more like a ⇒ z, with each letter in between being important too!
Personally, when getting to know an athlete and their sport, I try to look beyond the physiology and training schedule, thinking about the biochemical implications (and potential manipulations) that the sport and training for it carry (see this paper for a great summary diagram). Also consider the cultural demands of a sport: again with swimming, they tend to train early in the morning due to pool availability, so when it came to practising and testing interventions ahead of competition, it was important I was there at 5:30am too. We’re seeing similar factors being important in the ultra-endurance athletes we consult with: their sport is part of their identity, and our input must respond to that if we want the athlete to respond to our input.
Coaches are, by and large, ahead of you here too: they know their athletes and observe them in a very different way to a scientist. If you try to do anything in 2016, perhaps develop some strong relationships with the coaches of the athletes you work with, as this saves valuable hours in the long run.
Getting away from statistical significance
For those unaware, Teesside is something of a statistical hub, with Alan Batterham (@Alan_Batterham), Greg Atkinson (@Greg_at_TeesUni) and Matthew Weston (@MWeston73) all working here. Where appropriate, the tide has been turning away from classical statistical significance, which typically tells us only whether mean responses differ between groups at a conventional threshold (usually the 5% level), occasionally accompanied by a confidence interval, towards the contemporary approach of magnitude-based inferences (covered in great detail here, and more concisely here and here).
This more contemporary approach is fantastic for case studies, as we can express our work in a more fluid and meaningful way. Matthew Wright (@md_wright7), who heads up the Elite Athlete Scheme and FA Centre of Excellence at the university, has found this approach of great use when presenting data on female youth soccer players to their coaches. It lent itself to a traffic-light system by which players’ fitness scores could be coded and, in turn, their training programmes developed and implemented, with the effects of those programmes also represented in this way.
If data from testing can be better communicated and presented to athletes and coaches, this surely increases the ecological validity of what we’re doing? That’s not to say the approach is without limitations, just that I feel it lends itself to the current sports science climate – for a working example of the approach, see this meta-analysis by Jonny Taylor (@JTaylor45), or if you want to chat about it, Shaun McLaren (@Shaun_McLaren1) is the guy I tend to annoy with questions about this on a daily basis.
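To make the traffic-light idea concrete, here’s my own rough sketch of how change scores might be coded – this is not Matthew Wright’s actual implementation, and the smallest worthwhile change here follows the common convention of 0.2 × the squad’s between-athlete standard deviation; the test, player names and numbers are all invented:

```python
# Rough sketch of traffic-light coding of fitness-test changes.
# NOT the published implementation; SWC = 0.2 x between-athlete SD
# is a common convention, and all data below are made up.

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def traffic_light(change, swc):
    """Green: worthwhile improvement; amber: trivial; red: worthwhile decline."""
    if change >= swc:
        return "green"
    if change <= -swc:
        return "red"
    return "amber"

# Intermittent-running test distances (m) for a squad at baseline,
# then each player's change after a training block
baseline = [1240, 1480, 1320, 1560, 1400, 1280]
swc = 0.2 * stdev(baseline)  # smallest worthwhile change for the squad
changes = {"Player A": 160, "Player B": 20, "Player C": -120}
report = {name: traffic_light(d, swc) for name, d in changes.items()}
print(report)
# -> {'Player A': 'green', 'Player B': 'amber', 'Player C': 'red'}
```

The appeal for coaches is that the colour carries the inference: a glance at the report shows who responded meaningfully to the programme, without any discussion of p-values.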
Stories behind the numbers
We very rarely read, or hear, about the nuts and bolts of research. Yes, we have a methods section in papers, some of which are very lengthy, but we rarely get a feel for the effort required or the relationships forged in order to produce that particular set of results.
I’ve already hinted at the early mornings required when working with swimmers, but the 28-hour day we put in to support an ultra runner through a 100-mile race to completion, plus the food and sports science support prep surrounding that event, probably tallied up to 40 hours. Add in athlete and coach communication, other correspondence and the reading required to ‘nail it’ on the day, and it was likely somewhere in the region of 60 hours. The athlete completed the race in 23 hours 42 minutes (you can read about it here, with our manuscript also in preparation), but the time we amassed beyond those race hours wasn’t far off double that.
The above may sound like I’m complaining, but it was worth it – and we all have stories like that when we work 1-to-1 with athletes. The work surrounding the event furthered our understanding of the athlete and the event itself, and that was invaluable in allowing us to provide the best possible support to an athlete who was giving their best.
It certainly wasn’t glamorous standing in car parks and other remote locations in the pitch black and pouring rain, but it got the result. We don’t get that emotion from lab-based research, but in 1-to-1 work we grow as real-time practitioners, decision makers and data collectors.
I’m going to hit pause here, as this is a long post already, and conclude with four further points in a subsequent post.
If you’ve made it this far – thanks for reading.