

Performance analysis to improve sport performance

Author(s): Nic James
Affiliation(s): Reader in Performance Analysis of Sport, School of Sport, UWIC, Cardiff, UK.
Congress: I Congreso de Ciencias de Apoyo al Rendimiento Deportivo
Valencia, 26-28 November 2009
ISBN: 978-84-613-6128-1
Keywords: Performance analysis, sport performance

Abstract

The coaching process is about enhancing performance by providing feedback about that performance to the player or team. Researchers have shown that human observation and memory are not reliable enough to provide accurate and objective information on complex sports such as soccer (e.g. Franks and Miller, 1986), basketball and handball. Therefore, objective measuring tools are necessary to enable and facilitate the feedback process. The techniques associated with measuring sports performance are often referred to as performance analysis and usually take the form of video analysis, using either hand-based or computerised systems, both during and after the event, from a technical, tactical or movement analysis perspective.

Introduction

The coaching process is about enhancing performance by providing feedback about that performance to the player or team. Researchers have shown that human observation and memory are not reliable enough to provide accurate and objective information on complex sports such as soccer (e.g. Franks and Miller, 1986), basketball and handball. Therefore, objective measuring tools are necessary to enable and facilitate the feedback process. The techniques associated with measuring sports performance are often referred to as performance analysis and usually take the form of video analysis, using either hand-based or computerised systems, both during and after the event, from a technical, tactical or movement analysis perspective. Hand-based and computerised performance analysis systems essentially do the same thing, although computerised systems can speed up the data capture and analysis phases considerably. Recent developments in both computer and video technologies have transformed the approach of performance analysts and, consequently, their use in the coaching process. However, this progress in technology has led to a proliferation of commercial packages that are widely used but require significant amounts of training and specialist skills. A major concern surrounding the use of performance analysis techniques relates to the accuracy and reliability of the data gathered. If sensible conclusions are to be drawn from any analysis undertaken for a coach or player(s), the data gathering techniques must be tested for reliability. Furthermore, the statistical techniques applied to the data need to be chosen with respect to the type of data collected. Finally, the results need to be presented in such a way that coaches and players, who are usually not trained statisticians, can understand the message held within the output of the analyses.

The need for objective feedback

Coaches have typically viewed matches and formed opinions on the strengths and weaknesses of the players involved. This information has then guided the content of subsequent coaching sessions and player selection strategies. However, Franks and Miller (1986) showed that soccer coaches were less than 45% correct in their post-game assessment of what occurred during 45 minutes of a soccer game. While individuals will vary, forgetting is not surprising given the amount of information that needs to be memorised and the difficulty of subsequently retrieving it. Eyewitness testimony research (usually conducted on crime scenes) suggests that some events are more likely to be remembered than others (unusual ones), whilst others tend to be forgotten easily (events that occur just before or after well-remembered events). Furthermore, emotions and personal biases are significant factors affecting memory storage and retrieval. On top of this, there is a vast amount of information that can be used to guide coaching decisions, e.g. opponent and recruitment scouting, first and reserve team matches, training sessions etc. At the elite level of sport all of these aspects need to be considered and effectively utilised. This is predominantly achieved through teams of personnel amongst which the performance analyst plays a pivotal role. In an ideal situation the different sources of information will all be analysed using objective observation systems, meaning that the coaches can focus their attention on what they perceive to be critical incidents in their players’ and opponents’ performances. The goal is then to improve the performance of these players by planning practices based on these analyses. Objectivity can be obtained through the use of simple video, more complex biomechanical systems for fine analyses, e.g. Vicon (from Oxford Dynamics: http://www.vicon.com), or performance analysis software, e.g.
Focus (www.elitesportsanalysis.com), Observer XT (www.noldus.com), Silicon Coach (www.siliconcoach.com), Dartfish (www.dartfish.com) and SportsCode (www.sportstec.com). Hand notation systems are in general very accurate but have disadvantages: the more complex ones involve considerable learning time. In addition, the data these systems produce can take many man-hours to process into output that is meaningful to the coach or player: it can take as much as 40 hours just to process the data from one match. The introduction of computerised notation systems has to some extent solved the problem of data processing. Used in real-time analysis, or with video recordings in post-event analysis, they enable immediate, easy data access and the presentation of data in graphical and other pictorial forms that are more easily understood by coaches and players. The increasing sophistication, and reducing cost, of video systems has greatly enhanced post-event feedback, from playback with subjective analysis by a coach to detailed objective analysis by means of notation systems (see Brown and Hughes, 1995). Computers introduce extra problems, however, of which system users and programmers must be aware, such as operator errors, e.g. accidentally pressing the wrong key, and hardware and software errors. Whatever system is used, there is always scope for perception errors. These occur where the observer misinterprets an event or incorrectly fixes a position, and are particularly problematic in real-time analysis when the data must be entered quickly. To minimise these problems, careful validation of computerised notation systems must be carried out. Results from the computerised system and a hand system should be compared to assess the accuracy of the former. Reliability tests must also be performed on both hand and computerised systems to estimate the accuracy and consistency of the data.

What is performance analysis?

Performance analysis consists of both biomechanical and notational types of analysis. These two techniques have a number of things in common but also differ in significant ways. Notational analysis is an objective way of recording performance, so that critical events in that performance can be quantified in a consistent and reliable manner. This enables quantitative and qualitative feedback to be provided that is accurate and objective. Notational analysis of sport typically attempts to describe sports performance from a technical, tactical or movement analysis perspective, with the results of such analyses combined to form a database of performance (termed performance profiling). Sports biomechanics is concerned with fine detail about individual sports techniques, whereas notational analysis is more concerned with gross movements or movement patterns in games or teams. Furthermore, notational analysts are typically more concerned with strategic and tactical issues in sport than with technique analysis. However, both emphasise the development of systematic techniques of observation and have ‘key events’ as important features of their theoretical foundations. This paper will focus on notational analysis techniques as opposed to biomechanical ones.

Some findings from research in tactical aspects of performance

The definition of tactical patterns of play in sports has been of interest to a number of researchers. The tactics used at each level of development within a sport will inevitably depend upon technical development, physical maturation and other variables. Research in this area tries to assess whether patterns exist in how teams play and score, in the belief that this information can be useful for identifying strengths and weaknesses in performance.

Probably the most influential, and certainly the most controversial, notational analyst in Britain was Charles Reep, who died in 2002 (Pollard, 2002) having devoted over 50 years to analysing soccer (and other sports) in great detail. Whilst most of his work remains unpublished, his legacy remains as the primary advocate of the long-ball game, or direct play. His first published paper produced the finding that the structure of soccer is determined by near constants (Reep and Benjamin, 1968), and his work is thought to have influenced many researchers and football coaches, e.g. Charles Hughes, who was Assistant Director of Coaching for the Football Association (Hughes, 1987); Stan Cullis, manager of Wolverhampton Wanderers; and Graham Taylor, manager of Lincoln City, Watford, Aston Villa, England and Wolverhampton Wanderers, to adopt playing strategies based on his findings. Reep and Benjamin compared passing distributions for a large number of possessions which, when plotted, exhibited a negative binomial distribution (Figure 1).

Figure 1: Passing sequence distribution for 578 matches played between 1953 and 1967 (data taken from Reep and Benjamin, 1968). Content available on the CD Colección Congresos nº 12.

Small samples do not necessarily follow this distribution (as shown by Reep and Benjamin for 12 matches played by Arsenal during 1961-2, Table 2, page 582). However, this detail tends to be overlooked in comparison to the finding that events in soccer matches are very predictable when large data sets are used. Statisticians would recognise this phenomenon and give it the label “the law of large numbers” (first proved by the Swiss mathematician James Bernoulli in 1713). Notwithstanding this observation, Reep and his colleagues are best known for the findings that it takes on average 10 shots to get one goal (Reep and Benjamin, 1968), that 50% of goals are scored from possessions involving one pass or less (zero-pass possessions include penalties and free kicks) and 80% from three passes or less (Bate, 1988), and that regaining possession in the opponent’s half provides many goal scoring opportunities (Reep and Benjamin, 1968). These assertions were based on an extremely methodical analysis of over 2000 matches spanning fifty years of mainly British soccer. The magnitude of this work, even though most of it remains unpublished, means that his observations have been, and deserve to be, evaluated seriously. Given the previously mentioned associations with influential people in professional football, he is also linked with the long-ball game, which is more popular in England than in most other countries. The long-ball game places the tactical emphasis on getting the ball into the opposition’s half, in particular the penalty box, using a relatively large number of long passes from defensive and midfield areas. The logic behind these tactics is that the more times the ball enters goal scoring areas of the pitch, the more chance there is of scoring, which tends to encapsulate the findings of Reep.
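Reep and Benjamin's negative binomial observation can be illustrated with a short sketch: fitting a negative binomial by the method of moments to a passing-sequence distribution. The counts below are invented for illustration and are not their data.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of possessions by passing-sequence length
# (0, 1, 2, ... passes); illustrative numbers only.
observed = np.array([2800, 1600, 900, 500, 280, 160, 90, 50, 30, 20])
lengths = np.arange(len(observed))
total = observed.sum()

# Method-of-moments fit: for nbinom(r, p), mean = r(1-p)/p and
# var = r(1-p)/p^2, so p = mean/var and r = mean*p/(1-p).
mean = (lengths * observed).sum() / total
var = ((lengths - mean) ** 2 * observed).sum() / total
p = mean / var          # requires over-dispersion (var > mean)
r = mean * p / (1 - p)

expected = stats.nbinom.pmf(lengths, r, p) * total
for k, obs, exp in zip(lengths, observed, expected):
    print(f"{k} passes: observed {obs}, expected {exp:.0f}")
```

A chi-square goodness-of-fit test on the observed and expected counts would then quantify how well the negative binomial describes the data.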
Bate (1988) also argued that the more short-duration possessions there are, the more possessions there will be during a game, and therefore the more chances of turning these possessions into potential goal scoring opportunities. This viewpoint tends to emphasise the importance of the quantity of possessions in critical areas as opposed to their quality. Logically, however, situations where no defenders are able to intercept the ball would be more advantageous than those where defenders are able to challenge for it. Clearly, emphasising chance factors alone does not capture the full extent to which possessions relate to goal scoring opportunities, although further debate and research is necessary to determine whether quantity or quality of possessions is the more important determinant of success. Whilst advocates of “possession football”, which emphasises the quality of possessions in the critical areas, oppose the long-ball approach with its acceptance of chance factors determining outcomes, historically the long-ball game has proved successful. There are examples of teams predicted not to do particularly well by football pundits, usually due to relatively small numbers of available players of the required skill level, achieving far better than expected results. The best known of these is probably the Norwegian national team, who have published detailed accounts of how they analyse their matches and of their pursuit of a long-ball strategy in attack (Olsen and Larsen, 1997). Hughes and Franks (2005) used all matches from the 1990 and 1994 FIFA World Cup finals to assess whether the findings of Reep and Benjamin (1968) were still applicable. The results proved very similar, with approximately 80% of goals occurring from possessions containing 4 passes or less (cf. 3 or less). However, Hughes and Franks recognised that there were more zero-pass possessions than 1-pass possessions, more 1-pass than 2-pass possessions, and so on.
Consequently they removed this inequality by comparing the number of goals scored for each possession length normalised to a common 1000 possessions. When the data had been normalised in this way (i.e. the number of goals scored from possessions of a given length divided by the number of such possessions and multiplied by 1000) the conclusions changed dramatically: possession lengths of 3 to 7 passes appeared more likely to produce goals than shorter or longer possessions. Clearly, the reason for Reep’s finding that 50% of goals are scored from possessions involving one pass or less is simply that the majority of possessions are one pass or less. Taking the values from Hughes and Franks’ paper, the ratio of one-pass-or-less to two-pass-or-more possessions is 1.8:1. Perhaps a pragmatic view of the possession debate is therefore needed. My suggestion is that in situations where it is difficult to get good quality passes to players in goal threatening positions, it makes sense to try more speculative passes into the goal scoring areas, as by chance some goal scoring opportunities will occur. In this respect Ensum et al. (2005) found evidence suggestive of differences in types of possession resulting in shots at goal. They used logistic regression analysis to show that whilst South Korea created approximately the same number of shots as Brazil, their inferior shots-to-goals ratio was suggested to be the result of not creating good quality shooting opportunities (possession differences) as opposed to poor shooting ability (skill differences). Hughes, Robertson and Nicholson (1988) analysed the 1986 World Cup finals to assess the patterns of play of successful teams (those that reached the semi-finals) in comparison to unsuccessful teams (those eliminated at the end of the first round). Their main observations were:

  1. Successful teams played significantly more touches of the ball per possession than unsuccessful teams.
  2. The unsuccessful teams ran with the ball and dribbled in their own defensive area in different patterns to the successful teams: the latter played up the middle in their own half, the former used the wings more.
  3. This pattern was also reflected in the passing of the ball. The successful teams approached the final sixth of the pitch by playing predominantly in the central areas, while the unsuccessful teams played significantly more to the wings.
  4. Unsuccessful teams lost possession of the ball significantly more in the final one sixth of the playing area, both in attack and defence.
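The normalisation used by Hughes and Franks (2005) can be sketched in a few lines. The goal and possession counts below are invented for illustration; the point is that dividing goals by the frequency of each possession length removes the bias caused by short possessions being far more common.

```python
# Illustrative, made-up counts of goals and possessions by number of passes
# in the possession (index 0 = zero-pass possessions).
goals = [40, 35, 30, 28, 22, 15, 8, 5, 2, 1]
possessions = [9000, 7000, 5200, 3600, 2300, 1300, 700, 350, 200, 150]

# Hughes and Franks style normalisation: goals per 1000 possessions of
# each sequence length.
per_1000 = [g / n * 1000 for g, n in zip(goals, possessions)]
for length, rate in enumerate(per_1000):
    print(f"{length} passes: {rate:.1f} goals per 1000 possessions")
```

With these made-up numbers the raw goal counts peak at zero-pass possessions (Reep's picture), yet the normalised rate peaks in the mid-length range, mirroring the change of conclusion described above.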

Some findings from research in technical aspects of performance

To define quantitatively where technique fails or excels has very practical uses for coaches in particular, and also for sports scientists aiming to analyse performance at different levels of athlete development. This is very much the case in volleyball, where the quality of the skill being notated is highly dependent on the skill preceding it. Eom (1988) designed a computer-interactive recording system to store and analyse a sample of 164 games from the 3rd FIVB Korean Cup (1987). Playing actions were quantified using a five-point numerical rating system, which ranked the different outcomes of six key skill components (serve, reception, setting, attack, block and defence). Stochastic analysis using a Markov model was applied to detect the sequential dependencies of the events in each process. The data, represented in the form of a transition matrix, were analysed using a chi-square test to investigate whether a skill occurring at time (t + 1) or time (t + 2) is dependent on the skill occurring at time t. Further analyses were conducted to compare the transition matrix for the attack process with that for the counter-attack process, and among quick, medium and high attacks. Initial work with this system found that transition probabilities between the serve reception and the set (0.81), and between the set and the spike (0.83), were highly significant. Among the types of spike, success in first and second tempo attacks was highly dependent upon the quality of the set (0.87 and 0.92 respectively). The investigation did not find dependencies between the set and third tempo attacks, which was probably due to the nature of the skill: third tempo attacks are characterised by a slow, high-ball trajectory, which gives the hitter time to adjust if the set is not spatially and temporally accurate. This type of analysis reflects the interdependency of tactical and technical evaluations.
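The core of this kind of Markov analysis is estimating first-order transition probabilities from coded sequences. The sketch below is not Eom's system; it shows the minimal estimation step on a few synthetic rally sequences.

```python
from collections import Counter, defaultdict

# Synthetic rally sequences of volleyball skill events (not Eom's data).
rallies = [
    ["Serve", "Reception", "Setting", "Attack"],
    ["Serve", "Reception", "Setting", "Attack", "Block"],
    ["Serve", "Reception", "Attack"],
    ["Serve", "Reception", "Setting", "Attack", "Defence"],
]

# Count skill-to-skill transitions, then convert to estimates of
# P(skill at t+1 | skill at t).
pair_counts = defaultdict(Counter)
for rally in rallies:
    for current, nxt in zip(rally, rally[1:]):
        pair_counts[current][nxt] += 1

transitions = {
    skill: {nxt: c / sum(counts.values()) for nxt, c in counts.items()}
    for skill, counts in pair_counts.items()
}
print(transitions["Reception"])  # {'Setting': 0.75, 'Attack': 0.25}
```

The resulting transition matrix can then be tested for sequential dependency with a chi-square test, as in Eom's study.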
James and Bradley (2004) filmed a world-ranked male squash player (number 15 during data capture, but previously ranked in the top 5 for a year, mainly in 2001) in matches and in a training situation where the ball was fed to selected areas of the court. A high-speed camera (Motionscope PCI 1000S, Redlake Imaging Corporation, Morgan Hill, USA) captured the training scenarios at 250 frames per second. In the training situation the ball was hit to the player (short to the forehand from deep), who was instructed to play different shots, as he would during a match, but to actively try to disguise his intentions. These data were analysed using biomechanical video analysis software (Quintic Consultancy Ltd., Coventry, UK). When similar shots were played during the training scenario (off similar ball velocities) the ball was struck at very consistent time delays after the ball bounce (e.g. straight drop shots were between 0.2 s and 0.24 s). This very small variation may have been indicative of the player intentionally hitting the ball early or late. The swing characteristics (kinematics) of the racket, when playing either straight, crosscourt or short, were judged to be remarkably similar (James is a Level 4 squash coach and a previous national squad coach). Clear differences only became apparent between 0.07 s and 0.03 s before ball contact. This difference was the result of differing wrist angles employed to play the shot of choice at the last moment, and corresponded to swing times (initiation of forward travel of the racket to ball contact) of 0.20 s (±0.09).

Some findings from research in movement aspects of performance

A widely cited paper demonstrating that low-technology methods can be as effective and useful as more complex, time-consuming alternatives was presented by Reilly and Thomas (1976). They simply coded players’ movements into standing, walking, trotting, running and sprinting categories. This information, along with the pitch positions and time spans for each movement, allowed distances travelled to be calculated and further details regarding work rates to be measured. Although this particular notation system was liable to some inaccuracies due to the difficulty of assigning some movements to a category (e.g. when does trotting become running?), the information derived enabled soccer coaches to equate training schedules to actual match demands for the first time. More recently, high-technology solutions have been introduced, such as heart rate monitors and digital tracking of players, as well as similar techniques using more complex categorisations of movement, e.g. Bloomfield et al. (2004). These solutions essentially give the same information, although with more precision. From the coaching perspective this accuracy may be desirable, essential or unnecessary. O’Donoghue (2004) evaluated the reliability of time-motion data in soccer using a two-category distinction between work, which included all running activities faster than jogging plus on-the-ball activity and attempts to gain possession of the ball, and rest, which covered jogging and all slower movements. A dichotomous model simplifies the process as much as possible and would thus offer the greatest chance of producing reliable results. O’Donoghue analysed 658 performances recorded via Sky’s “PlayerCam” facility and initially found that agreement between two trained observers’ recordings of work and rest periods was insufficient, owing to an apparent systematic bias. Accordingly, O’Donoghue limited the rest of the analysis to his own data.
His subsequent finding of variation between and within players, within and between positions, and between matches was suggested to indicate that the concept of a typical work-rate may be erroneous, at least at the precision previously suggested. Perš et al. (2001) developed an automatic tracking system for detecting player movements in indoor sports. This system was further developed (the SAGIT/SQUASH tracking system) for use in squash (e.g. Vučković, Perš, James, and Hughes, in press; Vučković, Perš, James, and Hughes, 2009) and more recently basketball. The principle behind this software is a tracking algorithm that compares each image with a template of the empty court, so that the resultant value for each pixel can be compared with a threshold value to determine whether or not a player has been detected at that coordinate (a process known as binarization or thresholding). Some preliminary analysis of elite Slovenian basketball players suggests that as the match progresses players exchange higher intensity work for lower intensity work (Figs. 2-4).
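The binarization step described above can be sketched with a toy example: subtract an empty-court template from the current frame, threshold the absolute difference, and take the centroid of the flagged pixels as the player's position. This is only an illustration of the principle; the actual SAGIT algorithm is more involved, and the threshold value here is an arbitrary assumption.

```python
import numpy as np

# Toy greyscale frames: an "empty court" template and a current frame
# in which a 2x2 block of pixels is darkened by a player.
template = np.full((8, 8), 200, dtype=np.int16)
frame = template.copy()
frame[3:5, 4:6] = 60  # hypothetical player pixels

# Background subtraction followed by thresholding (binarization):
# pixels differing from the empty-court template by more than the
# threshold are flagged as "player present".
THRESHOLD = 50  # assumed value; real systems tune this to the venue/lighting
mask = np.abs(frame - template) > THRESHOLD

# Estimate the player's court coordinate as the centroid of the
# detected pixels.
ys, xs = np.nonzero(mask)
centroid = (float(xs.mean()), float(ys.mean()))
print(centroid)  # (4.5, 3.5)
```

Tracking the centroid frame by frame then yields the positional data from which distances and work-rate measures are derived.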

Figure 2: Work rate analysis for guards who played more than 200 seconds in a quarter. Content available on the CD Colección Congresos nº 12.

Figure 3: Work rate analysis for forwards who played more than 200 seconds in a quarter. Content available on the CD Colección Congresos nº 12.

Figure 4: Work rate analysis for centres who played more than 200 seconds in a quarter. Content available on the CD Colección Congresos nº 12.

Assessing the accuracy and reliability of the data gathered

Gathering any form of data with any type of equipment will always involve some form of error, and notational analysis is no exception: it is vital that the size of any error is quantified. To do this it is important to measure the repeatability and accuracy of the equipment (where applicable) and the accuracy of the operator using it, so that if conclusions are subsequently drawn from sets of data, they are made with knowledge of the potential error due to the operator and the equipment. James, Jones and Hollely (2002) suggested three main sources of error in notational analysis studies:

  • Operational errors: the observer presses the wrong button to label an event
  • Observational errors: the observer fails to code an event
  • Definitional errors: the observer labels an event inappropriately

Atkinson and Nevill (1998) produced a definitive summary of reliability techniques in sports medicine, which prompted Hughes, Cooper and Nevill (2004) to make suggestions on reliability and statistical processes for notational analysis. More recently, the International Journal of Performance Analysis of Sport published a special edition dedicated to reliability (volume 1 of the 2007 edition). There appear to be a number of relevant statistical procedures, including the weighted kappa (Robinson and O’Donoghue, 2007), Yule’s Q (James, Taylor and Stanley, 2007) and a modified Bland and Altman plot (Cooper, Hughes, O’Donoghue and Nevill, 2007). This area of research activity has seen an enormous improvement in the application and presentation of reliability.
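As a minimal illustration of agreement-based reliability statistics, the sketch below computes an unweighted Cohen's kappa for two observers' codings of the same events. The event labels are synthetic; the weighted kappa of Robinson and O'Donoghue (2007) extends this idea by giving partial credit to near-agreements.

```python
from collections import Counter

# Two observers' codings of the same 12 events (synthetic labels).
obs1 = ["pass", "shot", "pass", "tackle", "pass", "shot",
        "pass", "pass", "tackle", "shot", "pass", "tackle"]
obs2 = ["pass", "shot", "pass", "tackle", "shot", "shot",
        "pass", "pass", "tackle", "shot", "pass", "pass"]

n = len(obs1)
# Proportion of events on which the observers agree exactly.
p_observed = sum(a == b for a, b in zip(obs1, obs2)) / n

# Agreement expected by chance, from each observer's marginal frequencies.
c1, c2 = Counter(obs1), Counter(obs2)
p_expected = sum(c1[k] * c2[k] for k in c1) / n ** 2

# Kappa: chance-corrected agreement (1 = perfect, 0 = chance level).
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 3))
```

Percentage agreement alone (here about 83%) overstates reliability because some agreement occurs by chance; kappa corrects for this, which is why it is preferred in the reliability literature cited above.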

New methods for presenting results

Many studies in performance analysis compare the collective results of two or more randomly selected winning or losing teams. This is done to try to identify performance features that distinguish winning from losing sides, i.e. to find general rules for a particular sport, e.g. that the long-ball game is less effective than a passing game in soccer. However, James and colleagues (e.g. Taylor, Mellalieu, & James, 2005; Jones, James and Mellalieu, 2008) suggest that combining results in this way may mask any individual team’s performance, which may then mean that the inherent weaknesses and strengths of individual teams are not identified. In effect, combining the performances of many teams may produce a general pattern that does not actually hold true for any one team. The analysis of one team avoids this possibility but presents a number of different issues:-

  • Does a team play in a way that patterns of performance can be identified? If so, how many matches need to be analysed to determine whether any performance can be deemed as “unusual” or “as expected”?
  • What constitutes a good or bad performance by a team?
  • What factors impact on the performance of a team?

Identifying patterns in performance (performance profiling)

Trying to predict future performance on the basis of previous performances is an important goal for notational analysts. This is known as “performance profiling” and can be attempted in a number of ways. Typically the basis for any prediction model is that performance is repeatable to some degree; in other words, events that have previously occurred will occur again in some predictable manner. If this were not the case then prediction would obviously be impossible. The question therefore arises: to what extent are performances repeated? Consider two examples. In a team sport like soccer there are 22 interacting players performing a high number of different actions in different areas of the pitch. This complex situation is not likely to provide easily seen repeated situations. Squash, on the other hand, involves just 2 players (discounting the game of doubles) in a relatively small court with a more limited number of actions (shots) available. Scientists have thus, unsurprisingly, tried to assess the repeatability of performance in sports like squash before considering sports like soccer. Unexpectedly, repeatable performance, or invariant behavioural responses to similar situations, has largely not been found (e.g. McGarry and Franks, 1996). The reason for this is perhaps that the complexity of the analysis has not matched the complexity of the sporting situations examined. Alternatively, it may be that elite sports players do not necessarily respond in the same way to similar situations but have more than one response, which they alternate to confuse the opposition. This creates a complex pattern which is far more difficult to ascertain.

Hughes, Evans and Wells (2001) produced calculations to suggest the number of matches that need to be analysed to determine “normative” profiles and to assess whether performance variables had “stabilised” (i.e. invariant behaviour). They argued that it is an implicit assumption in notational analysis that, in presenting a performance profile of a team or an individual, a ‘normative profile’ has been achieved. To assess this they compared the profile of an 8-match sample with those of 9 and 10 matches using dependent t-tests; the absence of a significant difference was taken to show that a relatively stable profile was evident. Their conclusion, based on a number of such tests, was that the extent to which profiles were stable depended upon the nature of the data and, in particular, the nature of the performers. A second strategy to establish the extent to which the mean values became stable involved the use of set limits of error calculated as a percentage (10%, 5% and 1%) of the mean value. Cumulative mean values were then plotted to show how the mean value tended to “stabilise” (Fig. 5).
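One way to operationalise this limits-of-error idea is sketched below: compute the cumulative mean after each match and find the point from which it stays within a 5% band of the final mean. The per-match shot counts are synthetic, and treating the final cumulative mean as the reference value is an assumption of this sketch, not necessarily the exact procedure of Hughes, Evans and Wells (2001).

```python
# Synthetic per-match counts of shots (not real data).
shots = [34, 41, 29, 38, 36, 33, 40, 35, 37, 34, 36, 38]

# Cumulative mean after each additional match.
cum_means = [sum(shots[:i]) / i for i in range(1, len(shots) + 1)]

# The profile is taken as "stable" from the first match after which the
# cumulative mean stays within a set error band (here 5%) of the final
# mean for all remaining matches.
final = cum_means[-1]
band = 0.05 * final
stable_from = next(
    i for i in range(len(cum_means))
    if all(abs(m - final) <= band for m in cum_means[i:])
)
print(f"Profile within 5% of the final mean from match {stable_from + 1}")
```

Tightening the band to 1% would require more matches before stability is declared, which is the trade-off the original paper explores.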

Figure 5: The mean number of shots per game (5% error method of Hughes, Evans and Wells, 2001). Content available on the CD Colección Congresos nº 12.

Whilst this innovative approach by Hughes, Evans and Wells (2001) led researchers to consider the issue of representative data and the best method of calculating performance profiles, a number of issues were raised. Some researchers, e.g. O’Donoghue (2004), questioned whether some data would ever stabilise. His analysis of soccer players suggested that the variability of one particular midfield player was greater between games and 15-minute periods than the variability found for a group of midfield players. This led him to question whether it was sensible to suggest “typical” work rates for playing positions in soccer. James (unpublished work) recently presented a new method of profiling performance to performance analysts working for the English Institute of Sport. He suggested that the 95% confidence interval (derived using either a parametric or non-parametric method) for a data set could be used to determine upper and lower limits within which performance could be expected to fall. The parametric version is only suitable for normally distributed ratio data and the 95% confidence interval is calculated as:

x̄ ± t(0.05(2), N−1) × s / √N

where x̄ is the sample mean, s is the sample standard deviation, N is the sample size and t(0.05(2), N−1) is the upper critical value of the t distribution with N − 1 degrees of freedom. For the non-parametric version the confidence limits for a population median are calculated using the table in Zar (1999).
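The parametric 95% confidence interval described here can be computed directly; the per-match values below are hypothetical.

```python
import math
from scipy import stats

# Hypothetical per-match values for one performance indicator.
data = [34, 41, 29, 38, 36, 33, 40, 35]

n = len(data)
mean = sum(data) / n
# Sample standard deviation (N - 1 denominator).
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Upper critical value of the t distribution with n - 1 degrees of
# freedom (two-tailed 5%, i.e. the 97.5th percentile).
t_crit = stats.t.ppf(0.975, n - 1)
half_width = t_crit * s / math.sqrt(n)

lower, upper = mean - half_width, mean + half_width
print(f"95% CI for the mean: {lower:.1f} to {upper:.1f}")
```

Because the calculation only needs a mean, a standard deviation and a sample size, the limits can be produced from as few as two observations, which is the practical advantage noted below.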

Figure 6: The mean number of shots per game (James parametric 95% confidence limit method). Content available on the CD Colección Congresos nº 12.

Figure 7: The median number of shots per game (James non-parametric 95% confidence limit method). Content available on the CD Colección Congresos nº 12.

Comparing the three methods it is clear that the parametric methods employed by Hughes, Evans and Wells (2001) compare reasonably well against James’ parametric method (Table 1). However the advantage of James’ method is that the upper and lower limits can be calculated on any size data set (at least 2 observations are necessary to perform the calculation). Figure 6 suggests that the derived confidence limits tend to get closer to each other as the sample size increases but it is not clear at what point one could say that enough data had been collected to create a reasonable profile. This would seem to be a starting point for answering this question as opposed to a solution.

Table 1: A comparison of upper and lower limits (95%) for a 24 match sample. Content available on the CD Colección Congresos nº 12.

The non-parametric method of calculating the upper and lower confidence limits (for a population median) suggests much greater variability (Figure 7) and hence less certainty as to the derivation of a performance profile.
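The non-parametric limits can also be generated directly rather than read from a table, since Zar’s table is itself derived from the binomial distribution with p = 0.5. A minimal sketch of that standard construction:

```python
import math

def median_ci(values, alpha=0.05):
    """Conservative CI for a population median from order statistics.

    Finds the largest r such that P(Binomial(n, 0.5) <= r - 1) <= alpha/2;
    the limits are the r-th smallest and r-th largest observations. This is
    the construction tabulated in Zar (1999).
    """
    xs = sorted(values)
    n = len(xs)
    # Cumulative binomial probability P(X <= k) with p = 0.5
    def cdf(k):
        return sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    r = 0
    while cdf(r) <= alpha / 2:
        r += 1
    if r == 0:  # sample too small for the requested coverage
        raise ValueError("n too small for a CI at this level")
    return xs[r - 1], xs[n - r]
```

Because the limits are order statistics, the interval widens sharply for small samples, which is consistent with the greater variability seen in Figure 7.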

Assessing good and bad performances by a team

Presenting a team’s match performance relative to previous performances allows the analyst (or the coach, as the form chart is designed to be easy to interpret) to see very easily which aspects of the game were unusual. To illustrate this procedure, 10 matches for a British professional soccer team were analysed (post-event) using the Noldus ‘Observer Video Pro’ behavioural measurement software package (Noldus Information Technology). Nine performance indicators were arbitrarily selected from a study by Taylor et al. (2004), and the data for the tenth game were transformed relative to the previous 9 matches (see Jones et al., 2008 for comprehensive details). The formula for this transformation is shown in equation 1 (taken from James et al., 2005):

Equation 1: Content available on the CD Colección Congresos nº 12.

Where x = the PI value for the 10th match, Mdn = the median and IQR = the inter-quartile range for the previous 9 matches.

Whilst the calculations are relatively simple, the theory behind the above transformation is more complex and beyond the scope of this paper (see James et al., 2005 for details); the main principle advocated is the use of distribution-free (non-parametric) statistical techniques for notational analysis. The resultant form chart (Figure 8) is simple to interpret, with a standardised PI value of 50 indicating performance at the same level as previous matches. Standardised values greater than 65 indicate performance at least above the 75th percentile, and values less than 35 performance at least below the 25th percentile. The reason for this lack of precision is the unknown degree of skewness in the data set which, for extreme values, results in the limits given above; that is, with less skewed data a standardised score of 65 would be equivalent to a slightly higher percentile rank than the 75th, and 35 to one slightly lower than the 25th.
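The anchor points of this scale (median maps to 50; for symmetric data the quartiles map to roughly 35 and 65) can be reproduced with a T-score-like mapping. The scaling constant of 30 below is an assumption chosen to match those anchors, not the published formula (see James et al., 2005 for the actual transformation):

```python
from statistics import median, quantiles

def standardized_score(x, history):
    """Map a PI value onto a median/IQR-based scale where 50 = typical.

    Assumed scaling: score = 50 + 30 * (x - Mdn) / IQR, so that for
    symmetric data the quartiles fall at 35 and 65 (matching the anchor
    points described in the text); the published constants may differ.
    """
    mdn = median(history)
    q1, _, q3 = quantiles(history, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    return 50 + 30 * (x - mdn) / iqr

# Hypothetical PI values from nine previous matches
history = [2, 3, 4, 5, 6, 7, 8, 9, 10]
score = standardized_score(6, history)  # value at the median maps to 50
```

Because the transformation uses the median and inter-quartile range rather than the mean and standard deviation, a single unusual match does not distort the scale.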

The team analysed won the match represented in Figure 8 by 4 goals to 3 (see the actual match values given in the table incorporated into Figure 8). There were two reasons for including actual match values, along with previous median values, underneath the standardised score bar chart. The first was that some confusion can arise over whether a performance indicator should be reverse scored. Take, for example, the number of goals conceded. The form chart indicates a higher value than normal (standardised score = 60) because more goals were conceded (3) than the previous median value (2). However, some might suggest that since this is a worse performance for the analysed team, the bar representing this PI should be lower than the median value to represent poorer performance. Both the presented style and the alternative have merit, and having the actual values therefore enables the coach, or other interested party, to clarify the actual situation where doubt exists. The second reason for including the actual values was feedback from coaches, who liked to have this information in conjunction with the bar chart so that they could communicate it to players easily.
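The alternative convention discussed above, in which “negative” indicators such as goals conceded point downwards, amounts to mirroring the standardised score about 50. The indicator names and polarity flags below are illustrative only, not the nine indicators used in the study:

```python
# True = higher raw values are better (illustrative indicators only)
POLARITY = {"shots": True, "interceptions": True, "goals_conceded": False}

def display_score(indicator, standardized):
    """Return the score as plotted: mirrored about 50 for negative PIs."""
    return standardized if POLARITY[indicator] else 100 - standardized
```

Under this convention, conceding more goals than usual (standardised score 60) would be plotted at 40, below the 50 baseline, so that lower bars always denote poorer performance.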

Figure 8: Form chart of the 10th match compared against performances from the previous nine matches for a professional British soccer team. Contenido disponible en el CD Colección Congresos nº 12.

Inspection of the performance indicators suggests that a very low number of successful tackles were made (58% success rate compared to an expected performance of around 75%), which may have contributed to the high number of goals scored against the team. This could thus be deemed an ‘alerting variable’, such that the coach would need to consider why it occurred and perhaps focus on this area of the game in training. The fairly high number of interceptions (n = 23) is a positive feature, but the underlying reasons for it are unclear. Either the coach’s knowledge of the game or further analysis would be needed to ascertain whether this was a result of poor passing by the opposition, better movement by the analysed team than previously, or a consequence of the tactics employed.

The factors that impact on the performance of a team

Recently, researchers have started to adopt more appropriate and sophisticated statistical procedures to try to answer questions of greater relevance to sports teams and coaches. Whilst the early work in performance analysis often produced factual accounts of what happened in a tournament or a series of matches, and also tried to identify differences between winning and losing teams, it was often criticised for failing to account for potentially influential factors. Furthermore, inappropriate statistical procedures were often used, raising issues about the confidence that could be placed in the findings. A number of factors can influence the performance of any team or individual player. For example, the quality of play by the opponents will directly influence the success rate of any actions, as well as the decision-making process as to which actions are to be performed at any particular time. O’Donoghue (2009) refers to this as the interacting performances theory and rightly suggests that performance profiles should take into account the strength of the opposition. Other factors, such as the prevailing environmental conditions, can also influence performance in some sports, although the performance analysis literature does not seem to take this into account. These (confounding) variables are commonly referred to in the scientific literature as independent variables and would seem to be critical in determining how a team or individual is likely to play. Researchers have tried to account for many such variables, for example match location (e.g. Taylor, Mellalieu, James and Shearer, 2008) and match status (Jones, James and Mellalieu, 2004; Lago and Martin, 2007).
These have been investigated using a number of statistical procedures, including linear regression, log-linear modelling and logit modelling, which account for the fact that performance analysis data tend to be frequency counts of action variables with skewed distributions.
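As a sketch of the logit approach, the model below estimates the effect of a single independent variable (match location) on the probability of winning by maximising the log-likelihood with gradient ascent. The data are invented for illustration and are not taken from the studies cited above:

```python
import math

# Hypothetical (home, win) records for eight matches (illustration only)
matches = [(1, 1), (1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0), (0, 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logit(P(win)) = w0 + w1 * home by gradient ascent on the
# log-likelihood (the maximum-likelihood estimator for a logit model).
w0 = w1 = 0.0
learning_rate = 0.1
for _ in range(5000):
    g0 = g1 = 0.0
    for home, win in matches:
        residual = win - sigmoid(w0 + w1 * home)  # observed - predicted
        g0 += residual
        g1 += residual * home
    w0 += learning_rate * g0
    w1 += learning_rate * g1

p_home = sigmoid(w0 + w1)  # estimated P(win) at home, approx. 3/4 here
p_away = sigmoid(w0)       # estimated P(win) away, approx. 1/4 here
```

In published work the model would include several predictors (e.g. match status, quality of opposition) and be fitted with a statistical package, but the principle of modelling a binary or count outcome on independent variables is the same.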

Conclusions

Performance analysis has rapidly developed and expanded as a scientific discipline over the last 30 or so years. This rise in popularity saw the inception of the International Journal of Performance Analysis of Sport (IJPAS) in 2001, a journal that has published, and continues to publish, papers on topics that had previously not received sufficient attention within the discipline. International conferences also serve as popular forums for the dissemination of this type of work, the most active from a performance analysis point of view being the World Congress of Performance Analysis of Sport (biennial), Science and Football (every four years), Science and Racket Sports (every four years) and, to a lesser extent, the annual conferences of the British Association of Sport and Exercise Sciences.

There has been a rapid expansion in the number of jobs in this area, with many professional teams in a number of different sports now employing full-time performance analysts. Unfortunately, much of the work carried out by these professionals remains secret, as the advantages gained through it are not for public consumption. However, the strong link between academic and professional work is evidenced by the fact that most professionals have undertaken their training in academic institutions, continue to attend conferences, and regularly seek advice on current thinking and advances made in the scientific arena.

Bibliography

  • Atkinson, G. and Nevill, A. (1998). Statistical Methods for Assessing Measurement Error (Reliability) in Variables Relevant to Sports Medicine. Sports Medicine, 26, 217-238.
  • Bate, R. (1988). Football chance: Tactics and strategy. In: Reilly, T., Lees, A., Davids K. and Murphy W. (Eds.) Science and Football. E & FN Spon: London, pp. 293-301.
  • Bloomfield, J., Polman, R., and O’Donoghue, P.G. (2004). The ‘Bloomfield Movement Classification’: Motion Analysis of Individual Players in Dynamic Movement Sports. International Journal of Performance Analysis of Sport, 4(2), 20-31.
  • Brown, D. and Hughes, M.D. (1995) The effectiveness of quantitative and qualitative feedback on performance in squash. In Science and Racket Sports (edited by T. Reilly, M.D. Hughes, and A. Lees), pp. 232-237, London: E&FN Spon.
  • Cooper, S-M., Hughes, M., O’Donoghue, P. & Nevill, A.M. (2007). A simple statistical method for assessing the reliability of data entered into sport performance analysis systems. International Journal of Performance Analysis in Sport, 7, 1, 87-109.
  • Ensum, J., Pollard, R. and Taylor, S. (2005). Applications of logistic regression to shots at goal in association football. In: Reilly, T., Cabri, J. and Araújo, D. (Eds.) Science and Football IV. Routledge: London, 211-218.
  • Eom, H.J. (1988). A mathematical analysis of team performance in volleyball. Canadian Journal of Sports Science, 13, 55-56.
  • Franks, I. M., & Miller, G. (1986). Eyewitness testimony in sport. Journal of Sport Behavior,9, 39-45.
  • Hughes, C. (1987). The Football Association coaching book of soccer tactics and skills. Queen Anne Press: London.
  • Hughes, M.D., Robertson, K. and Nicholson, A. (1988) An Analysis of 1984 World Cup of Association Football. In Science and Football (edited by T. Reilly, A. Lees, K. Davids and W. Murphy), pp. 363-367. London: E&FN Spon.
  • Hughes, M., Evans, S. and Wells, J. (2001). Establishing normative profiles in performance analysis. International Journal of Performance Analysis of Sport, 1, 4-27.
  • Hughes, M., Cooper, S.M. and Nevill, A. (2004). Analysis of notation data: reliability. In Notational analysis of sport, 2nd Edition (edited by M. Hughes and I.M. Franks), pp. 189-204. London: Routledge.
  • Hughes, M. and Franks, I.M. (2005). Analysis of passing sequences, shots and goals in soccer. Journal of Sports Sciences, 23(5), 509-514.
  • James, N., Jones, N.M.P. & Hollely, C. (2002). Reliability of selected performance analysis systems in football and rugby. In Proceedings of the 4th International Conference on Methods and Techniques in Behavioural Research, pp. 116-118. Amsterdam, The Netherlands.
  • James, N. & Bradley, C. (2004). Disguising ones intentions: The availability of visual cues and situational probabilities when playing against an international level squash player. In A. Lees, J.-F. Kahn and I.W. Maynard (Eds.), Science and Racket Sports III. Abingdon, Oxon: Routledge, pp. 247-252.
  • James, N., Jones, N.M.P. and Mellalieu, S.D. (2005). The development of position-specific performance indicators in professional rugby union, Journal of Sports Sciences, 23, 1, 63-72.
  • James, N., Taylor, J. & Stanley, S. (2007). Reliability procedures for categorical data in Performance Analysis. International Journal of Performance Analysis in Sport, 7, 1, 1-11.
  • Jones, N.M.P., James, N. & Mellalieu, S.D. (2008). An Objective Method for depicting Team Performance in Elite Professional Rugby Union. Journal of Sports Sciences, 26, 7, 691-700.
  • Jones, P., James, N. & Mellalieu, S.D. (2004). Possession as a Performance Indicator in Soccer. International Journal of Performance Analysis in Sport, 4, 1, 98-102.
  • Lago, C and Martin, R. (2007). Determinants of possession of the ball in soccer. Journal of Sports Sciences, 25, 9, 969-974.
  • McGarry, T. and Franks, I.M. (1996). In search of invariant athletic behaviour in competitive sport systems: An example from championship squash match-play. Journal of Sports Sciences, 14, 445-456.
  • O’Donoghue, P.G. (2004). Sources of variability in time-motion data; measurement error and within player variability in work-rate. International Journal of Performance Analysis of Sport, 4, 2, 42-49.
  • O’Donoghue, P.G. (2009). Interacting performances theory. International Journal of Performance Analysis of Sport, 9, 1, 26-46.
  • Olsen, E. and Larsen, Ø. (1997). Use of match analysis by coaches. In: Reilly, T., Bangsbo J. and Hughes, M. (Eds.) Science and Football III. E & FN Spon: London, pp. 209-220.
  • Perš, J., Vučković, G., Kovačič, S. & Dežman, B. (2001). A Low-cost Real-time Tracker of Live Sport Events. In Proceedings of the 2nd international symposium on image and signal processing and analysis in conjunction with 23rd International conference on information technology interfaces (edited by S. Lončarić and H. Babić), pp. 362-365. Zagreb: University Computing Centre, University of Zagreb.
  • Pollard, R. (2002). Charles Reep (1904-2002): pioneer of notational and performance analysis in football. Journal of Sports Sciences, 20(10), 853-855.
  • Reep, C. and Benjamin, B. (1968). Skill and chance in association football. Journal of the Royal Statistical Society. Series A (General), 131(4), 581-585.
  • Robinson, G. & O’Donoghue, P. (2007). A weighted kappa statistic for reliability testing in performance analysis of sport. International Journal of Performance Analysis in Sport, 7, 1, 12-19.
  • Taylor, J.B., Mellalieu, S.D. and James, N. (2004). Behavioural comparisons of positional demands in professional soccer. International Journal of Performance Analysis in Sport, 4, 81-97.
  • Taylor, J.B., Mellalieu, S.D. & James, N. (2005). A Comparison of Individual and Unit Tactical Behaviour and Team Strategy in Professional Soccer. International Journal of Performance Analysis in Sport, 5, 2, 87-101.
  • Taylor, J.B., Mellalieu, S.D., James, N. & Shearer, D.A. (2008). The Influence of Match Location, Quality of Opposition, and Match Status on Technical Performance in Professional Association Football. Journal of Sports Sciences, 26, 9, 885-895.
  • Vučković, G., Perš, J., James, N. & Hughes, M. (in press). Measurement error associated with the SAGIT/Squash computer tracking software. European Journal of Sports Sciences.
  • Vučković, G., Perš, J., James, N. & Hughes, M. (2009). Tactical use of the T area in squash by players of differing standard. Journal of Sports Sciences, 27, 8, 863-871.
  • Zar, J.H. (1999). Biostatistical Analysis, 4th edn. Englewood Cliffs, NJ: Prentice-Hall.
