Player reports: from descriptive to predictive judgements
When I was asked to write a short essay on how an organisation could push scouts to use data in their reports in order to improve them, I thought it was an absurd idea for several reasons. So I decided to write about something else: something I think might improve the way scouting reports are made at many football clubs.
One of the premises I want to make clear from the beginning is that many scouts are real experts in analysing and understanding the game and players' skill sets. This should go without saying, but it is worth stating just in case. On the other hand, there seems to be a significant gap between the ability of these football experts to understand the game and their ability to predict how a player will develop or perform in the future.
While there are inherent limits to how much this ability can be improved, for reasons such as long feedback loops and intrinsic variability in performance, I also believe there are several strategies organisations could employ to make the process more sophisticated and structured and, eventually, to improve its accuracy. Or, at the very least, to become more aware of its limits.
Limitations of standard scout reports
“The player shows good movement, good timing to offer a passing option and get into space on the blind side of the defender…”
This is what an excerpt from a standard player report could look like. It is a detailed description of what a player does, and it also explains why his/her actions are effective.
However, it says little or nothing about his/her future performance. It certainly gives a way of knowing the profile of the player, and it can be a first step towards making a predictive judgement, but it is not a prediction of any kind.
What these kinds of reports do is provide a description of what a player does and how he/she does it: the descriptive dimension of the analysis. They answer a different question from the one we are interested in, which is "how will the player perform in the future?".
In order to answer this question, obviously the more relevant one from a recruitment perspective, the scout has to project the player into the future and into a different context.
Is using data enough to improve the predictions?
Using data does not by itself avoid this issue either. Even if we have a very detailed and in-depth analysis of what a player does (style) and how successfully she/he does it (efficacy), we still need to make some kind of prediction, either systematic or intuitive. For example, knowing the base rate at which similar players reach a certain performance level.
Ultimately, from any analysis, regardless of how detailed it is, we need to extract some sort of insight that helps us predict a player’s future performance.
A scout report with no concrete prediction does not add more value than a well-structured data report, and it might also be less efficient. At best, the scout report can provide more context than the data, which is a valuable addition, but it does not solve the problem of prediction.
To recap: scouting reports usually include just a description (more or less detailed) and a superficial evaluation. This is useful to some extent for getting an idea of the type of player, but these reports are less efficient than a data report and do not answer the important question.
So what do I propose then?
Like any other piece of information, these reports can be used as input in a structured decision-making process. For that, the reports have to fulfil some conditions:
Reports need to be accurate and make specific, testable predictions of a player's future performance. Specifically, systematic predictions that can be tracked over a period of time.
This does not mean just guessing whether a player will be capable of playing at the top level or not, but rather using degrees of confidence, specific time frames, potential levels in different scenarios, reasons why he or she might not achieve the expected potential, and so on.
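As a minimal sketch of what such a formalised prediction could look like in practice (the field names, values and format are illustrative assumptions on my part, not an existing standard):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ScoutPrediction:
    """One testable, time-bound prediction made by a scout (illustrative fields)."""
    player_id: str
    scout_id: str
    made_on: date
    claim: str              # e.g. "becomes a regular starter at first-division level"
    horizon_months: int     # time frame over which the claim should be judged
    confidence: float       # stated degree of confidence, 0.0 to 1.0
    scenario: str           # context the claim assumes
    risks: str              # reasons the player might not reach the expected level

# Example: a hedged, dated, trackable prediction rather than a vague "top-level player".
prediction = ScoutPrediction(
    player_id="player_123",
    scout_id="scout_07",
    made_on=date(2021, 6, 1),
    claim="plays 2000+ league minutes at first-division level",
    horizon_months=24,
    confidence=0.65,
    scenario="joins a club that plays a high defensive line",
    risks="injury history; struggles when pressed onto the weaker foot",
)
```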
In short, scouts should not be just "collectors of information". The information is useful and provides context, but it will often be redundant, noisier than data in some cases, and more expensive to gather.
Scouts have two main ways of providing added value: collecting information that data cannot capture, or cannot capture as accurately – context, for example* – and using their expertise to make judgements and predictions that can be combined and weighted with other sources of information, resulting in a more robust output than any single source on its own.
Predictions have to be as systematic as possible to reduce noise and bias, and to make it possible to keep track of them and do retrospective accounting over time. If the predictions are formalised, they can be included in aggregated judgements.
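One way to make that retrospective accounting concrete (a sketch under the assumption that predictions are stored as probabilities, not a prescribed method) is to score each prediction once its time frame has elapsed, for example with a Brier score:

```python
def brier_score(predictions):
    """Mean squared error between stated confidence and what actually happened.
    Lower is better; constant 50/50 guessing would score 0.25."""
    return sum((conf - outcome) ** 2 for conf, outcome in predictions) / len(predictions)

# (confidence given at the time, outcome once the time frame elapsed: 1 = happened, 0 = did not)
scout_a = [(0.8, 1), (0.6, 0), (0.9, 1), (0.3, 0)]
scout_b = [(0.7, 0), (0.5, 1), (0.95, 1), (0.2, 0)]

print(f"Scout A: {brier_score(scout_a):.3f}")  # lower = better calibrated over this sample
print(f"Scout B: {brier_score(scout_b):.3f}")
```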
What does a scouting process look like?
A detailed explanation of what a scouting process within a football club could look like is beyond the scope of this text, but a simple example could be a four-stage system. First, flag players using player profiles defined beforehand by the staff; using data for this part is probably a sine qua non condition in 2021. Then, create a shortlist, assigning priorities based on things like squad needs, ageing players, suitable markets, etc.
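A minimal sketch of that first flagging stage, assuming hypothetical per-90 metrics and thresholds (the column names, figures and profile are purely illustrative, not a real provider's schema):

```python
import pandas as pd

# Hypothetical per-90 metrics for a candidate pool (illustrative numbers only).
players = pd.DataFrame({
    "player":          ["A", "B", "C", "D"],
    "minutes":         [2700, 900, 2100, 3000],
    "xg_per90":        [0.45, 0.60, 0.20, 0.38],
    "pressures_per90": [18.0, 9.5, 22.0, 15.0],
    "age":             [21, 27, 19, 24],
})

# A "pressing forward" profile defined beforehand by the staff.
profile = {
    "minutes":         ("min", 1800),   # enough sample to trust the numbers
    "xg_per90":        ("min", 0.35),
    "pressures_per90": ("min", 14.0),
    "age":             ("max", 23),
}

mask = pd.Series(True, index=players.index)
for column, (kind, threshold) in profile.items():
    if kind == "min":
        mask &= players[column] >= threshold
    else:
        mask &= players[column] <= threshold

print(players[mask])  # flagged players matching the profile -> candidates for the shortlist
```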
The next stage would be choosing targets by preference and asking scouts to rate a set of attributes, answer specific questions and, finally, make explicit predictions.
Some examples could be: How well would she/he fit into our system? What sort of things would he/she need to learn or improve? How is the positioning without the ball? In what kinds of games does she/he tend to play well or badly? We can also ask the scouts to compare two players we are considering for the same position, with specific questions regarding the role we want them to fill.
Finally, the decision maker could average the answers, weight them according to past records (retrospective scouting), or simply include the reports in some group decision-making process.
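For instance, a simple weighted average over several scouts' ratings of the same target might look like the sketch below (the ratings, the weights, and the idea of deriving weights from past prediction scores are assumptions for illustration):

```python
# Ratings for one target on a 1-10 scale, one per scout.
ratings = {"scout_a": 7.5, "scout_b": 6.0, "scout_c": 8.0}
# Hypothetical track-record weights: a better past record earns more influence.
weights = {"scout_a": 1.2, "scout_b": 0.8, "scout_c": 1.0}

weighted_avg = sum(ratings[s] * weights[s] for s in ratings) / sum(weights[s] for s in ratings)
simple_avg = sum(ratings.values()) / len(ratings)

print(f"Simple average:   {simple_avg:.2f}")
print(f"Weighted average: {weighted_avg:.2f}")
```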
* One thing scouts are able to do is bring more understanding of why the data looks the way it does. This point is about context, more specifically team effects. Let's say a player takes a large number of shots one season but not the next. While the shot metrics will show a decrease, the player may be filling a different role than the one he had at his previous team or under the previous coach, one which suits him less. This may mean he is not capable of filling the new role, but it might not imply anything about his shot-creation ability under conditions similar to those of the previous year. While this is a silly example, it shows clearly that providing an explanation for why the data looks the way it does is a key ability of expert scouts.
Pablo Peña works as Head of Innovation for StatsBomb.