The Coach Feedback Analysis System
“[The tool] has important implications for researchers and practitioners alike, allowing for the collection of nuanced data on coach feedback compared to systematic observation instruments of coaches’ broader behaviours”
Corbett, Cope, Partington, Gannon, Ryan, & Mason (2026)
Why is this topic important for coaches?
As a coach, you’re frequently giving feedback to your athletes – whether that’s pitch side during a game, on the training track, or maybe when studying footage of your last game.
A large body of research suggests that feedback helps athletes improve their performance, but its effects are mixed: it can be more or less impactful depending on several factors.
This is where coach developers and coaching researchers come into play. Using something known as a systematic observation tool, they can observe the way you coach and provide analysis of the types of things you said and did while coaching.
Until now, there hasn’t been a tool for analysing feedback specifically – most tools include one token feedback category but lack the depth required to get a clear picture of the types of feedback used by coaches.
It’s also important that the tools used to observe your coaching are consistent and accurately measure the things they claim to measure.
What’s the paper about?
The paper covered in this post aimed to describe a new tool called the Coach Feedback Analysis System (CFAS). The tool includes 6 different categories and 18 individual feedback types to consider when analysing feedback given by a coach. These are detailed below:
Valence: Was the feedback positive, negative, or neutral?
Information: Was the feedback descriptive or prescriptive?
Hattie/Timperley model: Did the feedback focus on the learner (self), the task, the process, or self-regulation processes?
Autonomy: Did the feedback support the athlete’s autonomy, control the athlete, or was it neutral?
Audience: Was the feedback directed to a full group, small group, or an individual?
Timing: Was the feedback delivered concurrently, when the play was stopped, or post action?
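The six categories above can be sketched as a simple data structure. This is purely illustrative: the category and option names below are paraphrased from the list in this post, not taken verbatim from the published instrument.

```python
# The six CFAS categories and their 18 individual feedback types,
# as described in this post (names paraphrased for illustration).
CFAS_CATEGORIES = {
    "valence": ["positive", "negative", "neutral"],
    "information": ["descriptive", "prescriptive"],
    "hattie_timperley": ["self", "task", "process", "self-regulation"],
    "autonomy": ["autonomy-supportive", "controlling", "neutral"],
    "audience": ["full group", "small group", "individual"],
    "timing": ["concurrent", "play stopped", "post action"],
}

# Sanity check: six categories, eighteen individual feedback types.
total_types = sum(len(options) for options in CFAS_CATEGORIES.values())
print(len(CFAS_CATEGORIES), total_types)  # 6 18
```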
A major goal of the paper was to find out whether the tool measures the things it claims to measure (known by researchers as validity) and whether the tool is consistent over time and in the hands of different users (known as reliability).
What did the researchers do?
The research team put the CFAS tool to the test by using it to analyse about 600 minutes of coaching footage in different sports – hurling, football, Australian Rules football, and rugby – at both amateur and professional levels.
Two researchers analysed each piece of footage independently to make sure there was agreement about the types of feedback being observed.
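The post doesn't say which agreement statistic the researchers used, but inter-coder agreement on categorical codes like these is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, using made-up valence codes from two hypothetical coders:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Agreement between two coders on the same items, corrected for chance."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: proportion of items where the coders match.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: probability both coders independently pick the
    # same category, given each coder's own category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical valence codes for five pieces of feedback.
coder1 = ["positive", "neutral", "positive", "negative", "positive"]
coder2 = ["positive", "neutral", "positive", "neutral", "positive"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.64
```

A kappa of 1.0 means perfect agreement; values above roughly 0.8 are conventionally read as strong agreement for observation instruments.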
The researchers also assembled an expert panel of coaches to see if the tool made sense to the people it’s built for.
As an example of the process used by the coders to analyse a single piece of feedback, consider the following:
Feedback: “Robbie, bloody excellent body work at the front of the contest.”
Step 1. Was the feedback positive, neutral, or negative?
Coding decision = Positive (as the coach makes it clear that he is happy with Robbie’s use of his body at the front of the marking contest).
Step 2. Was the feedback descriptive or prescriptive?
Coding decision = Descriptive (as the coach describes what was good about the previous action).
Step 3. Was the feedback self, task, process, or self-regulation level feedback?
Coding decision = Process (as the coach focuses on the aspect of the action that he felt enhanced the execution of the task, i.e. the use of his body).
Step 4. Was the feedback autonomy-supportive, neutral, or controlling?
Coding decision = Neutral (as the coach is neither telling the player what to do nor setting a challenge/posing a question).
Step 5. Was the feedback directed to a full group, a small group, or an individual?
Coding decision = Individual (as the coach mentions Robbie’s name specifically).
Step 6. Was the feedback delivered concurrently, while an activity was stopped by the coach, or post activity?
Coding decision = Concurrent (as the coach makes the comment directly after the contest has been cleared, while the activity is still live).
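The six-step walkthrough above can be captured as a single coded record. The field and option names here are paraphrased from this post for illustration and are not the official CFAS wording:

```python
from dataclasses import dataclass

# Allowed codes per category, paraphrased from the post (illustrative only).
OPTIONS = {
    "valence": {"positive", "negative", "neutral"},
    "information": {"descriptive", "prescriptive"},
    "focus": {"self", "task", "process", "self-regulation"},
    "autonomy": {"autonomy-supportive", "neutral", "controlling"},
    "audience": {"full group", "small group", "individual"},
    "timing": {"concurrent", "play stopped", "post action"},
}

@dataclass
class CodedFeedback:
    """One piece of coach feedback coded along the six CFAS dimensions."""
    utterance: str
    valence: str
    information: str
    focus: str
    autonomy: str
    audience: str
    timing: str

    def __post_init__(self):
        # Reject any code that is not a recognised option for its category.
        for field, allowed in OPTIONS.items():
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"{value!r} is not a valid {field} code")

# The worked example from the post, coded step by step.
robbie = CodedFeedback(
    utterance="Robbie, bloody excellent body work at the front of the contest.",
    valence="positive",         # Step 1
    information="descriptive",  # Step 2
    focus="process",            # Step 3
    autonomy="neutral",         # Step 4
    audience="individual",      # Step 5
    timing="concurrent",        # Step 6
)
print(robbie.valence)  # positive
```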
What did they find?
The CFAS was found to be a valid and reliable tool that could consistently and accurately measure the types of feedback that coaches provide.
The process of establishing reliability and validity ironed out some kinks in the tool and ensured that it is detailed enough to be useful in different settings.
The expert panel also agreed that the CFAS was a useful tool for analysing coach feedback.
So what?
The last five years have seen an explosion in the number of people working with coaches to improve their practice. It’s important that these practitioners are using tools that are accurate and consistent, so that the feedback coaches are getting on their performance is as useful as possible.
One way to ensure useful data as a coaching researcher or coach developer is to use an observation tool that has been through a process to establish reliability and validity. The CFAS is a tool that has now been deemed valid and reliable, so that coaches can be more certain that the feedback they are giving (and the feedback they are receiving on their feedback!) is of good quality.
Want to read more?
This post is a summary of the paper ‘Development, validation and reliability of the Coach Feedback Analysis System’, which was authored by Ross Corbett along with Ed Cope, Mark Partington, Evan Gannon, Lisa Ryan and Rob Mason. The full article appears in the Journal of Sports Sciences and was published in 2025. More information can be found here:
https://www.tandfonline.com/doi/epdf/10.1080/02640414.2025.2561346