As coaches and sport scientists, we are often responsible for both the training and the testing of our athletes. I strongly encourage you to remove the athletes' names from the data once it is processed.
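If your testing data lives in a spreadsheet export, blinding it can take just a few lines of code. Here is a minimal sketch, assuming a CSV with an "athlete" name column; the file names and column names are placeholders for whatever your setup actually uses.

```python
# Minimal sketch of blinding a testing export.
# Assumes a CSV with an "athlete" name column; names/paths here are hypothetical.
import pandas as pd

df = pd.read_csv("testing_data.csv")  # e.g. jump, sprint, strength results per athlete

# Map each name to an anonymous code (A01, A02, ...) and keep the key separately
names = df["athlete"].unique()
key = {name: f"A{i+1:02d}" for i, name in enumerate(names)}

blinded = df.copy()
blinded["athlete"] = blinded["athlete"].map(key)

blinded.to_csv("testing_data_blinded.csv", index=False)    # review and share this version
pd.Series(key).to_csv("athlete_key.csv", header=["code"])  # keep this to add names back later
```

The key file is what lets you "add the names back in" later, so store it somewhere only you can access until the blinded review is done.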

Look at the data without assumptions or context: does it make sense? Which subject looks the strongest, fastest, most powerful, most robust as an athlete? Judging from these data alone, which athletes "should" be the best?

Present this to the coach and ask them the same questions: which of these athletes do you think is the best?

Then add the names back in.

Does it still make sense with the context? Yes? Great! You have a test that is probably robust enough to identify qualities important to the sport.

Does it still make sense with the context? No? Are you finding yourself explaining away all of the data?

"Well this athlete is so small they don't need to be as strong and powerful" "They've improved a ton on the field but they haven't improved on this test so they must still be out of shape" "I don't see any change on body composition even though they feel better, look bigger and their diet has improved, so they haven't gained any muscle"

Maybe you need to rethink these tests, or the value you put in them. Athlete monitoring is meant to inform the training process. Have you accounted for all possible fatigue and stress? How about the volume load in training and in the weight room? Has your sample size changed? Did you calculate effect sizes? What is the reliability of your test? Do the athletes always give a max effort on the test? Are the testers executing the test perfectly? When was the last time you calibrated your instruments? Was the data normally distributed? Are you monitoring other metrics in training (GPS, VL, 1RM, daily jumps) that could potentially answer the question this test is meant to answer, and do those daily/weekly measures agree with the conclusion of your test?
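Two of those checks, effect sizes and normality, are quick to run once the data is in a script. This is a rough sketch only; the pre/post numbers are made-up example values, not real athlete data, and other effect size conventions exist (e.g. dividing by the pre-test SD instead of the pooled SD).

```python
# Rough sketch of an effect size and a normality check on pre/post test scores.
# The numbers below are invented example values, not real data.
import numpy as np
from scipy import stats

pre = np.array([30.1, 32.4, 28.7, 35.0, 31.2, 29.8, 33.5, 30.9])   # e.g. CMJ height (cm), pre-block
post = np.array([31.0, 33.8, 29.1, 36.2, 32.5, 30.0, 34.9, 31.4])  # same athletes, post-block

# Cohen's d using the pooled SD (one common convention among several)
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
cohens_d = (post.mean() - pre.mean()) / pooled_sd
print(f"Cohen's d: {cohens_d:.2f}")

# Shapiro-Wilk test on the change scores as a quick normality check
change = post - pre
w_stat, p_value = stats.shapiro(change)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")  # a small p-value suggests non-normal change scores
```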

Monitoring is not perfect. Try considering your data without the athlete names, and comment below whether or not it helps.

Luke