CSME|CAPDA Medico-Legal Summit - Eliyas Jeffay


Neuropsychological Outcomes Following the Post-Acute Phase of Mild Traumatic Brain Injury
Eliyas Jeffay, MA; Konstantine K. Zakzanis, PhD

The 6S model of evidence-based practice proposed by DiCenso, Bayley, and Haynes (2009) provides objective guidelines as to how single studies contribute to well-understood systems. In short, evidence from single studies guides clinical action by way of contributing to synopses of single studies, which contribute to syntheses, which contribute to synopses of syntheses, which contribute to summaries, and which ultimately contribute to well-understood systems. The steps between single studies and systems are largely review-type publications.

Narrative reviews have been the standard because of their ability to summarize and synthesize a research field. They are also excellent resources for getting caught up with a particular area of study. Moreover, they provide poignant suggestions for the future directions of the field, which may set a pseudo-standard for researchers in that field to follow. However, they have a number of limitations: they are highly subjective (by their very nature), susceptible to reviewer bias, and rarely comprehensive. Additionally, the precision of each individual study's estimate is usually ignored, with large but imprecise effects given attention while small but precise studies are overlooked. With all this in mind, and because they rely on the vote-counting method to determine whether a study contributes a positive, negative, or neutral effect to the studied phenomenon, narrative reviews are not very useful for deciphering trends when single studies yield mixed and discordant data.

The alternative to the narrative review is the quantitative review. One such example is the meta-analysis, which pools individual effect size estimates from different but conceptually similar studies into a mean estimate. The advantages here are that meta-analyses are a quantitative and thus objective review of the literature, emphasize cumulative data, are transparent with respect to study inclusion/exclusion via very specific criteria, and can statistically weight individual studies by precision and sample size in the overall mean effect size estimate. More practically, they provide objective answers in light of conflicting data; indeed, this is the primary strength of the meta-analysis.

One such area of research with an ample supply of conflicting data is the neuropsychological effects following a mild traumatic brain injury (mTBI). Indeed, the argument over whether or not there are robust neuropsychological effects following an mTBI has a 100-year history (Evans, 1992). More recently, however, it has been widely accepted that there are two general phases following an mTBI: an acute phase and a post-acute phase (Alexander, 1995; Binder, 1997; McCrea et al., 2009). The acute phase is considered to be the first 90 days following injury, and the post-acute phase is any time after that (>90 days). To this end, multiple reviews have cited support for the full recovery of neuropsychological function following an mTBI in the post-acute phase (Belanger et al., 2005; Binder et al., 1997; Schretlen & Shapiro, 2003), whereas other reviews have concluded that specific domains may continue to show impairment in the post-acute phase (Frencham et al., 2005; Pertab et al., 2009; Ruff & Jamora, 2009). Still other reviews cite a small population of individuals who continue to have persistent cognitive impairment well into the post-acute phase (termed the ‘miserable minority’ as per Rohling et al., 2012). Though useful, these reviews are limited. For example, many of them introduce serious heterogeneity by way of sample inclusion/exclusion criteria, neuropsychological test/domain classification, mechanism of injury (e.g., athletes incurring injury during play), combining adult and child mTBI samples, and/or including samples that meet different definitions of mTBI. Moreover, these studies have limitations regarding the statistics employed to derive their findings. Although it may seem simple to gather effect sizes from each individual study and average them, a good meta-analysis is more nuanced than that.
To this end, sample-weighted means, better estimates of the variance, tests of the heterogeneity of the sample of effect sizes (i.e., the choice between fixed- and random-effects models, whether assumed a priori or validated post hoc), and useful moderating variables need to be thoughtfully analyzed and factored into the interpretation of the findings.
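To make the pooling machinery concrete, the following is a minimal sketch of inverse-variance weighting, Cochran's Q heterogeneity test, and random-effects pooling. The DerSimonian–Laird estimator of the between-study variance (τ²) is used here as one common choice; the study itself does not specify which estimator was employed, so treat this as illustrative rather than a reproduction of the authors' analysis.

```python
import numpy as np

def random_effects_meta(g, v):
    """Pool per-study effect sizes under a random-effects model.

    g : per-study effect sizes (e.g., Hedges' g)
    v : per-study sampling variances
    Returns (pooled_g, tau2, Q) with tau2 estimated by
    the DerSimonian-Laird method (an assumption; other
    estimators exist).
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                           # fixed-effect (inverse-variance) weights
    g_fixed = np.sum(w * g) / np.sum(w)   # fixed-effect pooled estimate
    # Cochran's Q: variability beyond what sampling error alone predicts
    Q = np.sum(w * (g - g_fixed) ** 2)
    df = len(g) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)         # between-study variance, floored at 0
    w_star = 1.0 / (v + tau2)             # random-effects weights
    g_re = np.sum(w_star * g) / np.sum(w_star)
    return g_re, tau2, Q
```

When Q greatly exceeds its degrees of freedom, as with the Q = 1,368.1 reported below, τ² is pushed well above zero and a random-effects model is the appropriate choice.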

Accordingly, the aim of this study was to provide an updated meta-analysis of the neuropsychological effects of mild traumatic brain injury (mTBI) in the post-acute stage after injury using a random-effects model. We hypothesized that neuropsychological performance would not differ significantly from controls in the post-acute phase of mTBI. This was based on the previous meta-analytic reviews mentioned above, which indicated minimal differences from controls, with 95% confidence intervals that overlapped zero.

A literature search of the OVID Medline and PsycINFO databases was conducted for all mTBI-based neuropsychological studies from 1970 to June 2017. Furthermore, the reference sections of reviews in the field were combed through in an effort to avoid overlooking studies. The search returned over 600 results (using the deduplicate feature), which were individually and manually screened to determine whether they met the inclusion/exclusion criteria. The criteria were as follows: (1) no studies of athletes; (2) no “whiplash”-related studies; (3) mTBI had to be explicitly defined, ideally in line with the ACRM guidelines (ACRM, 1993); (4) the mTBI group had to be compared to a control group (either healthy or non-neurological); (5) if other severities were included, mTBI findings had to be reported separately; (6) samples had to be compared on cognitive tests (clinically validated or experimental); (7) enough statistics had to be reported to calculate an effect size; (8) adults and adolescents only (no children, as their progress following an mTBI has been reported to differ); and (9) time since injury had to be reported. Following a three-level filtration process to determine eligibility, the final sample comprised 31 studies with a total of 1,469 mTBI patients and 4,281 controls. Overall, 316 effect sizes were extracted and grouped into the following cognitive domains: Global cognitive ability; Attention & psychomotor speed; Executive functions; Fluency; Acquisition memory; Delayed memory; Language; and Visuospatial ability.
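The effect sizes reported throughout are Hedges' g. For readers unfamiliar with the statistic, the sketch below computes it from the group summary statistics that criterion (7) requires each study to report: Cohen's d on the pooled standard deviation, multiplied by the standard approximate small-sample correction factor J = 1 − 3/(4·df − 1). The exact correction formula the authors used is not stated, so this is an illustration of the conventional calculation, not their code.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g for two independent groups (e.g., mTBI vs. control).

    Cohen's d with the pooled standard deviation, corrected for
    small-sample bias by the approximate factor J = 1 - 3/(4*df - 1).
    """
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    df = n1 + n2 - 2
    J = 1 - 3 / (4 * df - 1)   # small-sample bias correction
    return J * d
```

With the mTBI group entered first, worse patient performance yields a negative g, matching the sign convention of the results below.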
The overall estimated mean effect size was small to moderate (g = -0.35, SD = 0.156) but statistically significantly different from zero (p < 0.001). By cognitive domain, mean effect size estimates were as follows: Global cognitive ability (g = -0.42, SD = 0.351); Attention & psychomotor speed (g = -0.30, SD = 0.207); Executive functions (g = -0.23, SD = 0.230); Fluency (g = -0.61, SD = 0.336); Acquisition memory (g = -0.42, SD = 0.285); Delayed memory (g = -0.32, SD = 0.239); Language (g = -0.73, SD = 0.396); and Visuospatial ability (g = -0.22, SD = 0.293). The largest effects were found in the Language and Fluency domains, whereas tests of executive functions yielded the smallest. Estimates of the mean effect size in all eight domains were statistically significantly different from zero (p < 0.05). Heterogeneity statistics confirmed the use of a random-effects model (Q = 1,368.1, p < 0.001; τ² = 0.125).

The results herein contrasted with preceding quantitative reviews and with our hypothesis: there was an overall estimated mean effect size, statistically significantly different from zero, that differentiated mTBI patients from controls across all cognitive domains and all studies. All effect sizes were small to moderate. These findings have different implications depending on the perspective one takes. Academically, they indicate that neuropsychological test performance can differentiate, albeit with a small effect, mTBI patients from controls in the post-acute phase. However, in terms of percent overlap between the two samples (Zakzanis, 1999), an effect size of -0.35 is equivalent to a 78% overlap between controls and mTBI patients. Even in the domain with the largest estimated mean effect size, Language, an effect size of -0.73 would be equivalent to a 57% overlap. Clinically, such differences are impractical and not useful for differentiating the two samples on the basis of neuropsychological test performance alone.
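Zakzanis (1999) tabulates overlap percentages for given effect sizes. One normal-theory reconstruction, assuming two normal distributions with equal variance, uses the complement of Cohen's U1 ("percent non-overlap") statistic; it reproduces the figures quoted above to within a couple of percentage points, with the residual difference plausibly due to rounding and interpolation in the published tables. This is a sketch of the idea, not necessarily the exact computation behind those tables.

```python
from statistics import NormalDist

def percent_overlap(g):
    """Approximate percent overlap of two equal-variance normal
    distributions whose means are separated by effect size g.

    Based on Cohen's U1: U1 = (2*phi - 1) / phi with
    phi = Phi(|g| / 2), so overlap = 1 - U1 = (1 - phi) / phi.
    """
    phi = NormalDist().cdf(abs(g) / 2)
    return 100 * (1 - phi) / phi
```

Under this formulation, g = -0.35 gives roughly 76% overlap and g = -0.73 roughly 56%, illustrating why even the largest domain effect leaves the two distributions mostly superimposed.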

There are multiple potential explanations for the high overlap, including but not limited to the poor sensitivity of neuropsychological tests: many were originally validated to determine the site and size of a lesion to assist the neurosurgeon prior to the invention of neuroimaging technologies (Manchester et al., 2004). The results could also be due to non-neurological factors that may have contributed to cognitive performance, such as pain, mood, and fatigue (McCrea et al., 2009; Garden & Sullivan, 2010). Lastly, non-head-injury variables such as age and gender could also have contributed to the variance in cognitive performance, as there is extensive history indicating that such factors can mask the effects of a traumatic brain injury (see Frencham et al., 2005).

This study is not without limitations. One study (Vanderploeg et al., 2005) contributed 71% of the total control sample and 17% of the total mTBI sample. Thus, a re-analysis of the data excluding this study and comparing it to the current results will be conducted in a follow-up analysis to determine the degree of influence this one study had on the overall estimated effect sizes. The lack of analysis of moderating variables limits our ability to explain, with reasonable correlational certainty, which potential factors influenced differences between the two groups. Collection of moderating variables was initially planned for this study but was abandoned because of considerable variability in reporting methods. Moreover, the lack of standardization in the field for reporting moderating variables has been a long-standing issue, with many researchers promoting standards to apparently deaf ears (for a review, see Maas et al., 2017). In addition, the largest estimated cognitive mean effect size came from the domain with the fewest comparisons contributing to it (Language, with 12 comparisons), reflecting the unequal and sometimes small numbers of effect sizes across domains. Lastly, and quite importantly, the majority of the included studies were cross-sectional, with time since injury ranging from 90 days to over six years. Prior studies have indicated a negative logarithmic trend between cognitive impairment and time since injury (Schretlen & Shapiro, 2003). It may therefore be that studies with samples far beyond the 90-day threshold contribute much smaller effect sizes overall. This was not accounted for or analyzed in the current study because, although all studies reported time since injury, they did so inconsistently.
Some reported a mean and standard deviation for all their patients, others reported a minimum time since injury as an inclusion criterion for their study, and still others reported individual time since injury estimates for their entire sample.

The current results indicate that there may be an overall difference between mTBI patients and controls in neuropsychological test performance in the post-acute phase, but this effect is small and the overlap between the two distributions is too large to be clinically useful at this time. Ideally, future primary studies should report consistent statistics and moderating data (Maas et al., 2017), with more emphasis on longitudinal rather than cross-sectional research.

Copyright © 2016 Canadian Society of Medical Evaluators
