Testing: If You Aren't Assessing, You're Guessing

Rich Clarke

Testing agility in practice environments is very rare. Some would say it is pointless; others get lots of value from it. There are loads of choices for how you begin to investigate things, and there isn't a right and wrong here: sport, culture, time and so on determine the answer more than 'best practice' does.

Options:

  • Look at the whole thing (both perception and action), either through a subjective or an objective process.
  • Look at each component part independently (COD speed, perception, deceleration, linear speed etc.)
  • Investigate the underlying qualities (e.g. strength parameters)

Looking at the whole: Objective

There are lots of validated agility tests out there, and pretty much all of them are performed with a cutting action in response to either a human stimulus, a 2D screen or a light. The light stimulus slides down the scale in terms of specificity a little, but it's still useful.

The main challenge is logistics. Testing is becoming more and more of a pain in the ass: force plate analysis of jumps, GroinBar, NordBord, sprints, conditioning, trunk robustness, strength testing and so on. To then think 'I'm going to set up a big screen with some pre-recorded videos' is a bit much. Not because of its lack of use, but because it is a logistical nightmare, and that is without considering recording athletes to determine decision time.

But as Mark has mentioned in the tweet below, even a light stimulus has some use. Test a movement both pre-planned and reactively: if the weakness shows up in the pre-planned attempt, focus on the physical side of things; if it shows up in the reactive one, make training more representative and reactive (a small sketch of this comparison follows after the tweet).

I have (old 2008ish) data that shows a statistical difference between starters and non starters in an international rugby squad, also between professional club players and international players (agility measured with sprint, react to lights and sprint).

— Mark Bennett (@MarkBennett07) March 28, 2020

The use of a light here isn't quantifying perceptual skill, but it does show how well athletes can control their movement and coordinate when there are time constraints and other things competing for their attention. Clearly impactful, as you can't argue with good data collected in professional environments.
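To make the pre-planned vs reactive comparison a little more concrete, here is a minimal sketch in Python. The 10% gap threshold, the squad-mean comparison and the function name are illustrative assumptions of mine, not validated cut-offs or anything Mark or the literature prescribes.

```python
# Illustrative sketch: compare pre-planned and reactive trials of the same
# movement and suggest a rough training focus. The 10% gap threshold and the
# squad-mean comparison are arbitrary placeholders, not validated cut-offs.

def agility_focus(preplanned_s: float, reactive_s: float,
                  squad_preplanned_mean_s: float,
                  gap_threshold: float = 0.10) -> str:
    """Return a rough training focus from two trial times (seconds)."""
    perceptual_cost = (reactive_s - preplanned_s) / preplanned_s

    if perceptual_cost > gap_threshold:
        # Disproportionately slower once a stimulus is added:
        # make training more representative and reactive.
        return "representative / reactive"
    if preplanned_s > squad_preplanned_mean_s:
        # Slow even with no decision to make: physical / movement-skill focus.
        return "physical"
    return "maintain current mix"


print(agility_focus(preplanned_s=2.45, reactive_s=2.90, squad_preplanned_mean_s=2.40))
# -> representative / reactive
```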

Looking at the whole: Subjective

This is again very useful, but my concern is that it is a bit of a default. It is easy to convince yourself that you do this, but it still takes attention and a conscious effort. Watching training or games and thinking 'they are looking fast' is a little different from trying to evaluate something and identify its different parts. In my opinion, it needs some structure in the questions you ask: you have to consider intent, what they are perceiving, external factors and so on. It is easy to look at something and say 'they should have done this here', but they might have done what they did for good reason, and their lack of success may have been completely out of their hands. If we don't use a more robust framework, we kind of watch, convince ourselves we are doing something useful, but miss a lot of information and may have no impact. I am working on a framework at the moment, so sign up to the newsletter and I'll let you know where I get to.

So ideally have a robust framework, but everyone's assessment process should be a subjective evaluation of competitive performance AND something else. Also, don't forget that evaluating competitive performance doesn't have to be completely subjective: a performance analyst or a coach is your best friend in this situation. How many tackles were made? Metres gained? Interceptions? Points? Know what your sport's key performance indicators are and pay attention to the ones that are most influenced by agility.

Looking at each component independently:

The chances are you are already partially doing this through your speed and strength testing. Interestingly, very few people use a COD test, which I genuinely think is a mistake if you work with a multi-directional sport. Even if the results of the test you choose are strongly related to linear speed, you still get the opportunity to focus in on movement skill and gain some understanding of different qualities. I have mentioned a few times that I am a fan of the 505 test (the traditional one, as the modified is bang average). This has nothing to do with it being 180 degrees, as I'm not particularly bothered whether the pattern of movement is highly prevalent in the game. I think its value comes from it being the only COD test that gives an opportunity to assess deceleration and each turning direction independently. This is the reason I developed the deceleration deficit and recommend the thought process below.

But there are 30+ COD tests to consider, and lots of them are highly correlated with each other, so don't overthink it. Choose what makes the most sense to you. Just remember that one of the reasons they all tend to predict each other is the linear speed component. As mentioned in the process above, I think COD deficit is very useful. It's not the most sensitive thing in the world (neither is deceleration deficit), but it helps zoom in on what we need to look at. I would recommend most people test a traditional 505 along with their sprints. If you can collect deceleration deficit too then great, but if not, COD deficit still ticks a really good box.
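To show how the numbers combine, here is a minimal Python sketch of COD deficit for each turning direction, using the common definition (505 time minus 10 m sprint time). The times and the asymmetry calculation are made-up illustrations; deceleration deficit needs additional timing data, so it isn't calculated here.

```python
# Minimal sketch: COD deficit per turning direction from a traditional 505
# and a 10 m sprint split (COD deficit = 505 time - 10 m time).
# Example times are made up for illustration.

def cod_deficit(five05_s: float, sprint_10m_s: float) -> float:
    """Time 'cost' of the 180-degree turn beyond pure linear speed."""
    return five05_s - sprint_10m_s


athlete = {"10m": 1.78, "505_left": 2.31, "505_right": 2.42}

left = cod_deficit(athlete["505_left"], athlete["10m"])    # 0.53 s
right = cod_deficit(athlete["505_right"], athlete["10m"])  # 0.64 s
asymmetry_pct = abs(left - right) / max(left, right) * 100  # ~17%

print(f"COD deficit L: {left:.2f} s | R: {right:.2f} s | asymmetry: {asymmetry_pct:.0f}%")
```

Looking at each side separately is the point here: the composite 505 time alone would hide the left/right difference.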

This is obviously only considering the physical component. Can we isolate the perceptual component? Yes. Is it worthwhile? Meh, not sure. And to be fair, I should say 'a' perceptual component, not 'the', as perception isn't a single thing; it is a complex interaction of lots of things. Testing some perceptual skills without the physical component is less of a pain in the ass than the COD set-up we spoke about earlier, as it can be done indoors and seated. This has been done in the literature and I think it may have some use, especially if you have a group who aren't very homogeneous and you want to work out who needs what. But remember that anything non-specific, such as a Stroop test or a general reaction test, is likely wasting your time.

Investigating the underlying qualities:

Everyone here is likely the most comfortable with this already, as we love a strength test. But similar to the rationale above for the 505 test, the most important thing is having some kind of braking assessment. You likely do sprints and jumps already, with various distances and styles, but unless you are isolating something like eccentric RFD from a force plate, or looking at the contact time component of RSI, everything is propulsive. And how many propulsive tests do you need? Multi-directional performance needs more than a jet pack, and this becomes even more important when we consider injury risk. Something eccentric, whether strength based, isokinetic, jumping, horizontal deceleration or whatever you fancy, really needs to go in there. Even a NordBord is a good start, as it will start to contribute towards multi-directional qualities a little more.
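As a small illustration of why the contact time component matters, here is a sketch that assumes a drop jump measured on a contact mat, with RSI taken as flight time divided by contact time; the numbers are made up. Two athletes can share the same RSI while getting there in very different ways, and the contact time is the piece that speaks to braking.

```python
# Sketch: pull the braking-relevant piece (ground contact time) out of a
# drop-jump RSI, rather than only reporting the composite score.
# Example values are made up; RSI here = flight time / contact time.

G = 9.81  # m/s^2

def drop_jump_summary(flight_s: float, contact_s: float) -> dict:
    jump_height_m = G * flight_s**2 / 8          # height estimated from flight time
    rsi = flight_s / contact_s                   # composite score
    return {
        "jump_height_m": round(jump_height_m, 2),
        "contact_s": contact_s,                  # braking/eccentric-relevant piece
        "rsi": round(rsi, 2),
    }


# Same RSI (2.5) can hide very different strategies:
print(drop_jump_summary(flight_s=0.55, contact_s=0.22))  # quick, stiff contact
print(drop_jump_summary(flight_s=0.70, contact_s=0.28))  # longer contact, higher jump
```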

As you can see, testing is a minefield and there is more to it than what I have outlined above. I have all of the details if someone wants to chat. But making agility a performance and training focus, regardless of how we assess it, is the first step.

ABOUT THE AUTHOR

Rich is the founder of Strength Coach Curriculums and an S&C coach who specialises in multi-directional speed. He runs the S&C provision for Bristol Flyers Basketball and consults with clubs across the globe, while also leading the MSc programme at the University of South Wales.