Prospective Study Open Access
Copyright ©The Author(s) 2016. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Psychiatr. Sep 22, 2016; 6(3): 358-364
Published online Sep 22, 2016. doi: 10.5498/wjp.v6.i3.358
Agreement and conversion formula between Mini-Mental State Examination and Montreal Cognitive Assessment in an outpatient sample
Luqman Helmi, Geraldine McCarthy, Dimitrios Adamis, Sligo Medical Academy, NUI Galway and Sligo/Leitrim Mental Health Services, F91 CD34 Sligo, Ireland
David Meagher, Dimitrios Adamis, Cognitive Impairment Research Group, Graduate Entry Medical School, University of Limerick, V94 F858 Limerick, Ireland
David Meagher, Department of Psychiatry, University Hospital Limerick, V94 F858 Limerick, Ireland
Edmond O’Mahony, Donagh O’Neill, Owen Mulligan, Sutha Murthy, Geraldine McCarthy, Dimitrios Adamis, Sligo/Leitrim Mental Health Services, F91 CD34 Sligo, Ireland
Author contributions: All authors contributed to this paper.
Institutional review board statement: The study was approved by the Sligo Regional Ethics Committee.
Clinical trial registration statement: This study is not a clinical trial.
Informed consent statement: Verbal informed consent was obtained.
Conflict-of-interest statement: None.
Data sharing statement: No sharing of data.
Open-Access: This article is an open-access article which was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Correspondence to: Dimitrios Adamis, Consultant Psychiatrist, Sligo/Leitrim Mental Health Services, Clarion Rd Sligo, F91 CD34 Sligo, Ireland. dimaadamis@yahoo.com
Telephone: +353-71-9144829 Fax: +353-71-9144177
Received: May 20, 2016
Peer-review started: May 24, 2016
First decision: July 4, 2016
Revised: August 16, 2016
Accepted: August 27, 2016
Article in press: August 29, 2016
Published online: September 22, 2016
Processing time: 122 Days and 10.2 Hours

Abstract
AIM

To explore the agreement between the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) among community-dwelling older patients attending an old age psychiatry service, and to derive and test a conversion formula between the two scales.

METHODS

Prospective study of consecutive patients attending outpatient services. Both tests were administered by the same researcher on the same day in random order.

RESULTS

The total sample (n = 135) was randomly divided into two groups: one used to derive a conversion rule (n = 70) and a second (n = 65) in which this rule was tested. The agreement (Pearson’s r) between the MMSE and MoCA was 0.86 (P < 0.001), and Lin’s concordance correlation coefficient (CCC) was 0.57 (95%CI: 0.45-0.66). In the second sample, MoCA scores were converted to MMSE scores according to the conversion rule derived from the first sample; the converted scores achieved agreement with the original MMSE scores of 0.89 (Pearson’s r, P < 0.001) and a CCC of 0.88 (95%CI: 0.82-0.92).

CONCLUSION

Although the two scales overlap considerably, the agreement is modest. The conversion rule derived herein demonstrated promising accuracy and warrants further testing in other populations.

Key Words: Mini-Mental State Examination; Montreal Cognitive Assessment; Cognition; Equation; Assessment; Old age psychiatry

Core tip: In this study we examined the agreement between the Mini-Mental State Examination and the Montreal Cognitive Assessment in an older population attending outpatient mental health services. Although both scales assess the same construct (cognition), the agreement between them was modest. Further, we derived a conversion rule which allows conversion of scores between these scales. The converted scores had high agreement with the original ratings. Finally, this new conversion rule was superior to three previously suggested equating rules.



INTRODUCTION

The Mini-Mental State Examination (MMSE)[1] and the Montreal Cognitive Assessment (MoCA)[2] are cognitive screening tests that are widely used in both everyday clinical practice and research. However, some evidence suggests that the MMSE is less sensitive than the MoCA for detecting milder cognitive deficits, while other studies indicate that its relative insensitivity to visuospatial and executive deficits limits its suitability in particular populations, e.g., vascular cognitive impairment or Parkinson’s disease[3-6]. Comparisons of these two tests in specific populations such as patients with Parkinson’s disease[7,8], brain metastases[9] or subarachnoid haemorrhage[10] indicate that the MoCA is more suitable because it can detect mild forms of cognitive impairment, especially where this includes executive dysfunction. Similarly, population-based studies of mild cognitive impairment (MCI) indicate that the MoCA is more sensitive than the MMSE in detecting mild forms of cognitive impairment[11]. However, to our knowledge no specific comparison of these two tests has been conducted in a general psychogeriatric population, where cognitive testing is routine practice.

Furthermore, clinical trials vary in their use of these two scales, which makes comparisons between studies and meta-analyses difficult. Equating methodologies can facilitate comparison between studies that use different scales to measure the same construct. Previous studies have developed conversion rules for the MMSE and MoCA using equipercentile equating and/or log-linear smoothing methods. These studies relate to specific populations: Roalf et al[12] studied a selected population with Alzheimer’s disease (AD), MCI and cognitively intact participants; van Steenoven et al[8] studied a population with Parkinson’s disease; and Trzepacz et al[13] studied a selected population of AD, MCI and cognitively intact participants. Given that these studies used selected, and therefore potentially biased, samples, there is a lack of data for general elderly psychiatric patients. In addition, only one of these conversion rules (van Steenoven et al[8]) has been subjected to further examination[7], which, although conducted in a similar population, found only moderate agreement [Pearson correlation coefficient 0.66 (95%CI: 0.56-0.75)].

Given that both scales are widely used in clinical settings, as well as in clinical trials and cohort studies, a rule to facilitate conversion and comparison of data from different centres and from clinical trials that have used these instruments would have clear utility.

Therefore, the aims of the present study were threefold: (1) to estimate the level of agreement between the MMSE and MoCA within an old age psychiatry population; (2) to derive a conversion formula for the two scales and test it in a random sample from a similar setting; and (3) to compare the new conversion formula with those described in previous studies.

MATERIALS AND METHODS
Subjects and design

This is an observational, cross-sectional study of performance on two cognitive screening scales in consecutive patients attending an old age psychiatry outpatient clinic and Day Hospital. A “single group design” was used, reflecting that the same participants were assessed with both cognitive tests (MoCA and MMSE).

Procedures

All assessments were conducted by the same psychiatrist, who was trained in the use of the MoCA and MMSE (McCarthy G). Both tests were administered on the same day, with a maximum time gap of 3 h, to avoid boredom and/or learning effects. The tests were administered in random order.

Clinical assessments

Demographics: Demographic data (gender, age) were collected from medical records (files and hospital computerised database). In addition, information about years of education was collected from patients and relatives.

Diagnosis: ICD-10 psychiatric diagnoses were collected from the files and collapsed into the main ICD-10 F categories. Where multiple diagnoses were present, the predominant one was chosen.

Cognitive assessments: (1) MoCA[2]. The MoCA assesses visuospatial/executive function, naming, memory, attention, concentration, language, abstract thinking, delayed recall and orientation. It is scored on a 30-point scale, with higher scores indicating better cognitive performance. Administration typically takes about 12-15 min. Its psychometric properties have been investigated in many studies, and it has been found to be superior to the MMSE for the detection of MCI[14]. In addition, unlike the MMSE, it takes education level into account (one point is added for individuals with 12 or fewer years of education); and (2) MMSE[1]. The MMSE comprises 11 questions assessing orientation to time and place, attention, immediate and short-term recall, language and visuospatial abilities. It is a brief cognitive screening instrument that takes less than ten minutes to administer. Over the past 40 years it has been the most widely used tool in clinical and research settings for brief assessment of cognitive status in elderly individuals. Its psychometric properties have been thoroughly reviewed and indicate moderate-to-high levels of reliability and good evidence of criterion and construct validity[15]. It has a total score of 30, with higher scores indicating better cognitive performance. Disadvantages of the MMSE include a ceiling effect, the influence of education, especially for the serial sevens component[15,16], and a documented learning effect[17].

Ethics

The procedures and rationale for the study were explained to all patients, but because many patients had cognitive impairment at entry into the study it was presumed that many might not be capable of giving informed written consent. Because of the non-invasive nature of the study, the Sligo Regional Ethics Committee approved an approach whereby patient assent was augmented with proxy consent from next of kin (where possible) or a responsible caregiver for all participants, in accordance with the Helsinki guidelines for medical research involving human subjects.

Statistical analysis

Statistical analyses were conducted using the R “equate” package[18]. Z-scores were used to compare MMSE and MoCA scores because, although they come from the same sample, they follow different distributions. The overall agreement between the two scales was assessed using Pearson’s product-moment correlation coefficient (r). However, this measure has been criticised by Bland and Altman[19] as potentially misleading, and therefore the concordance correlation coefficient (CCC)[20] was also calculated. The CCC measures agreement by assessing how well the relationship between the measurements is represented by a line through the origin at an angle of 45 degrees (the line that would be obtained if the two measurements produced identical results).
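
For illustration, both agreement indices can be computed directly in R. The sketch below is not the authors' script: the lin_ccc helper implements Lin's formula[20] from first principles, and the mmse/moca vectors are hypothetical placeholders rather than the study data.

```r
# Agreement between two scales: Pearson's r and Lin's concordance
# correlation coefficient (CCC), implemented from Lin (1989).

lin_ccc <- function(x, y) {
  mx <- mean(x); my <- mean(y)
  sx2 <- mean((x - mx)^2)           # variances with n in the denominator, as in Lin's paper
  sy2 <- mean((y - my)^2)
  sxy <- mean((x - mx) * (y - my))  # covariance
  2 * sxy / (sx2 + sy2 + (mx - my)^2)
}

# Hypothetical paired scores (placeholders, not the study data)
set.seed(1)
mmse <- pmin(30, pmax(0, round(rnorm(70, mean = 24, sd = 5))))
moca <- pmin(30, pmax(0, round(mmse - 5 + rnorm(70, mean = 0, sd = 2))))

cor(mmse, moca)               # Pearson's product-moment correlation
cor.test(mmse, moca)$p.value  # its P value
lin_ccc(mmse, moca)           # Lin's CCC
```

Packages such as DescTools also provide a CCC estimate with confidence intervals, which can be used instead of the hand-rolled helper above.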

To convert MoCA scores to MMSE scores (and vice versa), we generated an equating table linking the two scales. The conversion was derived from a random subsample of the studied group and then tested in the remaining subsample. Given that both scales have the same minimum and maximum scores but differ in difficulty, we used the circle-arc method[21]. We also applied other equating models (linear, mean and equipercentile) and compared the standard errors of each model after bootstrapping (Figure 1).

Figure 1 Graphical representation of standard errors after bootstrapping of different equating methods.
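
In practice, the linking relies on the R equate package[18]. The sketch below illustrates how the circle-arc and comparison models could be specified; it uses hypothetical score vectors rather than the study data, treats the two score distributions under an equivalent-groups setup for simplicity rather than reproducing the single-group design exactly, and the concordance/yx element names reflect our reading of the package interface, not the authors' code.

```r
library(equate)  # observed-score linking and equating package[18]

# Hypothetical experimental-sample scores (placeholders, not the study data)
set.seed(1)
mmse <- pmin(30, pmax(0, round(rnorm(70, mean = 24, sd = 5))))
moca <- pmin(30, pmax(0, round(mmse - 5 + rnorm(70, mean = 0, sd = 2))))

# Univariate frequency tables on the common 0-30 scale
moca_tab <- freqtab(moca, scales = 0:30)
mmse_tab <- freqtab(mmse, scales = 0:30)

# Equate MoCA (x) to MMSE (y) under the circle-arc model,
# plus the comparison models whose bootstrap standard errors appear in Figure 1
eq_arc  <- equate(moca_tab, mmse_tab, type = "circle-arc")
eq_lin  <- equate(moca_tab, mmse_tab, type = "linear")
eq_mean <- equate(moca_tab, mmse_tab, type = "mean")
eq_eqp  <- equate(moca_tab, mmse_tab, type = "equipercentile")

# Conversion table for the circle-arc model, rounded to the nearest integer
conv <- eq_arc$concordance   # assumed element name: a data frame of scale values and equated scores
conv$yx <- round(conv$yx)
conv
```

The package also includes bootstrap facilities that can be used to estimate the standard errors compared in Figure 1.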

In applying these equating methods we made the following assumptions: (1) that both scales measure the same latent construct (cognition); (2) that the two scales are not free of error, but that the errors are small (both scales must have high reliability); and (3) that the ratings were conducted by experts and that the conversion rule will again be applied to measurements performed by experts.

Although both scales are treated as continuous, they are in fact discrete: a person’s MoCA (or MMSE) score will be, for example, 11 and never 11.2. The converted MMSE scores were therefore rounded to the nearest integer. Finally, in the second sample we also evaluated the conversion methods suggested by (1) Roalf et al[12]; (2) van Steenoven et al[8]; and (3) Trzepacz et al[13], using Pearson’s r and the CCC to measure the agreement between the original and converted scores.
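
This evaluation step can be written as a small, generic helper that rounds and clamps the output of any conversion rule before computing the two agreement indices. The sketch below is self-contained but illustrative only: toy_rule is a placeholder rather than one of the published conversions, and the score vectors are simulated.

```r
# Lin's CCC, as in the earlier sketch
lin_ccc <- function(x, y) {
  2 * mean((x - mean(x)) * (y - mean(y))) /
    (mean((x - mean(x))^2) + mean((y - mean(y))^2) + (mean(x) - mean(y))^2)
}

# Evaluate any MoCA-to-MMSE conversion rule against observed MMSE scores:
# round the converted scores to the nearest integer, keep them on the 0-30 scale,
# and report Pearson's r and the CCC.
evaluate_conversion <- function(moca, mmse, convert_fun) {
  est <- pmin(30, pmax(0, round(convert_fun(moca))))
  c(pearson_r = cor(est, mmse), ccc = lin_ccc(est, mmse))
}

# Placeholder linear rule for illustration only (not one of the published conversions)
toy_rule <- function(moca) 0.8 * moca + 7

# Hypothetical evaluation-sample scores (placeholders, not the study data)
set.seed(2)
mmse <- pmin(30, pmax(0, round(rnorm(65, mean = 24, sd = 5))))
moca <- pmin(30, pmax(0, round(mmse - 5 + rnorm(65, mean = 0, sd = 2))))

evaluate_conversion(moca, mmse, toy_rule)
```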

RESULTS

The total sample (n = 135) was randomly divided into two groups: one used to derive the equating table (the experimental sample, n = 70) and a second (the evaluation sample, n = 65) in which the derived conversion rule was tested.

Descriptive statistics

Table 1 shows demographic data as well as the MoCA and MMSE scores in the two samples. The two samples did not differ significantly in gender distribution (χ2 = 0.084, df: 1, P = 0.772), age (t = 1.25, df: 133, P = 0.214), MoCA scores (t = 0.406, df: 133, P = 0.686), MMSE scores (t = 0.31, df: 133, P = 0.976) or years of education (t = 1.29, df: 133, P = 0.200). In addition, Table 2 shows the principal diagnoses in the two samples, with percentages. A comparison of the two samples in terms of diagnoses did not identify a significant difference (χ2 = 0.644, df: 3, P = 0.886). However, as shown in Table 1, the total MoCA scores were significantly lower than the total MMSE scores in both samples (experimental sample: n = 70, z-scores: -4.77 and -8.19 for MMSE and MoCA respectively, P < 0.001; evaluation sample: n = 65, z-scores: -4.35 and -7.88, P < 0.001).

Table 1 Demographic characteristics and cognitive test scores of the two samples.
 | Experimental (n = 70) | Range (min-max) | Evaluation (n = 65) | Range (min-max)
Males | 21 (30%) | - | 21 (32.3%) | -
Age | 77.36 (SD: 7.06) | 62-89 | 78.83 (SD: 6.6) | 66-91
MoCA | 19.03 (SD: 6.35) | 4-29 | 18.57 (SD: 6.78) | 4-30
MMSE | 24.47 (SD: 4.87) | 9-30 | 24.45 (SD: 4.71) | 8-30
Years of education | 10.71 (SD: 2.47) | 6-18 | 10.20 (SD: 2.15) | 7-17
Table 2 Main diagnoses in the two samples.
Diagnoses | Experimental sample: n (%) | MMSE mean (SD) | MoCA mean (SD) | Evaluation sample: n (%) | MMSE mean (SD) | MoCA mean (SD) | Total
Organic, including symptomatic, mental disorders (F00-F09) | 32 (45.7) | 22.31 (4.95) | 15.93 (5.68) | 32 (49.2) | 21.75 (4.61) | 13.91 (5.11) | 64
Schizophrenia, schizotypal and delusional disorders (F20-F29) | 4 (5.7) | 23.5 (3.7) | 18.25 (4.43) | 2 (3.1) | 26 (5.66) | 20 (7.07) | 6
Mood (affective) disorders (F30-F39) | 18 (25.7) | 26.06 (4.42) | 21.61 (6.29) | 17 (26.2) | 26.17 (3.46) | 22.29 (5.47) | 35
Neurotic, stress-related and somatoform disorders (F40-F48) | 16 (22.9) | 27.25 (3.49) | 22.5 (5.33) | 14 (21.5) | 28.28 (1.85) | 24.5 (3.69) | 30
Agreement of the two scales in the experimental sample

The Pearson’s product-moment correlation coefficient for the MoCA and MMSE was 0.86 (P < 0.001), which indicates very good agreement. However, the more conservative CCC was 0.57 (95%CI: 0.45-0.66), indicating lower agreement between the two scales. Figure 2 depicts a scatterplot with fitted linear and cubic curves. As evident from the scatterplot, the scores are not well fitted by a linear model.

Figure 2 Linear, quadratic and cubic relationships between Montreal Cognitive Assessment and Mini-Mental State Examination scores.
Linking the two scales (MoCA and MMSE)

The circle-arc method was used; Table 3 shows the resulting conversion table. Other equating methods were also applied but, as expected, the circle-arc method had the smallest standard errors and the least bias. Figure 1 shows the standard errors of the different methods after bootstrapping.

Table 3 Conversion table.
MoCA score | MMSE score
1 | 3
2 | 6
3 | 8
4 | 10
5 | 11
6 | 13
7 | 14
8 | 15
9 | 16
10 | 17
11 | 19
12 | 19
13 | 20
14 | 21
15 | 22
16 | 23
17 | 24
18 | 25
19 | 25
20 | 26
21 | 27
22 | 27
23 | 28
24 | 28
25 | 29
26 | 29
27 | 30
28 | 30
29 | 30
30 | 30
Evaluation of the derived conversion

In the second (evaluation) sample we converted MoCA scores to MMSE scores according to the above table and then examined the agreement between the converted MMSE scores and the original ones. The Pearson’s product-moment correlation coefficient was 0.89 (n = 65, P < 0.001) and Lin’s CCC was 0.88 (95%CI: 0.82-0.92). Thus, the MMSE scores converted from the MoCA have a high level of agreement with the actual MMSE scores.
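
For convenience, the conversion in Table 3 can be encoded as a simple lookup; the following minimal R sketch uses the table values directly (the helper name convert_moca is ours, not the authors').

```r
# Table 3 as a lookup vector: position = MoCA score (1-30), value = converted MMSE score
moca_to_mmse <- c(3, 6, 8, 10, 11, 13, 14, 15, 16, 17,
                  19, 19, 20, 21, 22, 23, 24, 25, 25, 26,
                  27, 27, 28, 28, 29, 29, 30, 30, 30, 30)

convert_moca <- function(moca) moca_to_mmse[moca]  # expects MoCA scores in 1-30

convert_moca(c(4, 19, 28))  # returns 10, 25, 30, as in Table 3
```

Agreement between the converted and observed MMSE scores is then computed with Pearson's r and the CCC exactly as in the Methods sketches.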

Evaluation of the other methods suggested

With Roalf et al[12]’s method, the Pearson’s product-moment correlation coefficient was 0.88 (n = 65, P < 0.001) and the CCC was 0.86 (CI: 0.79-0.81). With van Steenoven et al[8]’s method, the agreement between the converted and the actual MMSE scores was high, with a Pearson’s product-moment correlation coefficient of 0.86 (n = 65, P < 0.001) and a CCC of 0.84 (CI: 0.76-0.90). Finally, using the method suggested by Trzepacz et al[13], the Pearson’s product-moment correlation coefficient was 0.85 (n = 65, P < 0.001) and the CCC was 0.82 (CI: 0.72-0.88). All three previously described conversion rules were therefore inferior to the one derived herein.

DISCUSSION

It is often assumed that, because the MoCA and MMSE measure the same general construct (cognition), they can be used interchangeably. However, they each emphasise different aspects of cognition and, as our results demonstrate, their agreement is modest. For instance, the MMSE allocates more points to orientation (10 out of 30) compared with only 6 out of 30 in the MoCA, while the MoCA places greater emphasis on visuospatial domains (5 out of 30) compared with only 1 point out of 30 in the MMSE. It is therefore not surprising that these two tests do not have a linear relationship (Figure 2). Furthermore, because visuospatial and executive tasks are more difficult than orientation items, MoCA scores were significantly lower than MMSE scores. In addition, although both tests are used as continuous scales (ranging from 0 to 30), neither is a true ratio scale: a score of 10 does not indicate half the cognitive ability of a score of 20. Similarly, both scales have arbitrary anchor points (e.g., a score of 0 does not mean that someone has no cognitive function at all).

Our second aim was to derive an equating rule to allow accurate conversion of scores between the two scales. This has important utility for standardising multiple assessments of patients who are assessed with different scales over time. Most importantly, such a conversion rule allows comparisons between centres in clinical trials that use the MoCA or MMSE alternately, and can be used for pooling data from different studies to facilitate meta-analyses. Our conversion rule compared very favourably with those described in previous studies, achieving higher agreement. We examined this issue using both Pearson’s correlation coefficient and the CCC; the latter provides a more conservative measure because, in comparison with Pearson’s correlation coefficient, it emphasises the level of actual agreement over the general pattern of the relationship[19]. Of note, the new method described herein performed better than previous methods on both measures of agreement (Pearson’s r and CCC). One explanation for these findings is that our sample is more representative of a general old age population than the three previous studies, in which the samples were more restricted. However, one of the assumptions of equating methods is that the equating relationship is group invariant and as such does not change across groups[22]; if the sample or sampling method influenced the converted scores, the conversion rule would not be valid.

Although the sample can influence some psychometric values, the most likely explanation for the higher agreement is the equating method that we used, as it provides a better fit for our data. The circle-arc method does not require the estimated equating transformation to be linear. It constrains the equating function to pass through two pre-specified end points and a middle point determined from the data, and it is thus robust even in small samples[21]. In addition, the circle-arc method produces the most accurate results across different sample sizes compared with other methods such as equipercentile equating with smoothing, linear equating and mean equating[23]. Therefore, it is likely that the greater accuracy of the conversion rule described herein relates substantially to the methods used in its development rather than to the sampling method. These observations are further supported by Armstrong et al[7], who found only moderate agreement between converted and actual scores when they applied the conversion rule suggested by van Steenoven et al[8], even though they tested the rule in a sample similar to that in which the rule was originally derived (i.e., patients with Parkinson’s disease). When two scales or tests differ in content, reliability or intended population, some loss of equivalence is expected[24]; however, this is not the case for the MoCA and MMSE, as both have high reliability, are assumed to measure the same construct (cognition), and are used in populations with possible cognitive deficits.

In conclusion, we found that the MMSE and MoCA have moderate agreement when used to assess general cognitive function, reflecting their different emphasis on particular neuropsychological domains. Further, we found that their relationship is non-linear, such that non-linear methods of equating should be used to compare performance on these scales. Finally, we derived a conversion rule which performed well in comparison with previously suggested methods and which merits further assessment in larger and more clinically diverse samples.

COMMENTS
Background

The Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) are widely used assessments of cognition in older populations.

Research frontiers

Given the variety of cognitive tests in use, the challenge is how to translate their scores into a common “language”.

Innovations and breakthroughs

This study applies advanced and robust techniques to address this challenge.

Applications

The authors have derived an equating rule to convert MoCA scores to MMSE scores, which can be used to pool data from different studies.

Terminology

Equating models can be used to transform scores from one scale or instrument to another.

Peer-review

The manuscript is a generally well-written and interesting paper. The topic is important because the prevalence of dementia is increasing around the world.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Psychiatry

Country of origin: Ireland

Peer-review report classification

Grade A (Excellent): 0

Grade B (Very good): B

Grade C (Good): C

Grade D (Fair): 0

Grade E (Poor): 0

P- Reviewer: Hosak L, Müller MJ S- Editor: Gong XM L- Editor: A E- Editor: Lu YJ

References
1. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189-198.
2. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, Chertkow H. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53:695-699.
3. Ihara M, Okamoto Y, Takahashi R. Suitability of the Montreal cognitive assessment versus the mini-mental state examination in detecting vascular cognitive impairment. J Stroke Cerebrovasc Dis. 2013;22:737-741.
4. Mai LM, Sposato LA, Rothwell PM, Hachinski V, Pendlebury ST. A comparison between the MoCA and the MMSE visuoexecutive sub-tests in detecting abnormalities in TIA/stroke patients. Int J Stroke. 2016;11:420-424.
5. Nys GM, van Zandvoort MJ, de Kort PL, Jansen BP, Kappelle LJ, de Haan EH. Restrictions of the Mini-Mental State Examination in acute stroke. Arch Clin Neuropsychol. 2005;20:623-629.
6. Zadikoff C, Fox SH, Tang-Wai DF, Thomsen T, de Bie RM, Wadia P, Miyasaki J, Duff-Canning S, Lang AE, Marras C. A comparison of the mini mental state exam to the Montreal cognitive assessment in identifying cognitive deficits in Parkinson’s disease. Mov Disord. 2008;23:297-299.
7. Armstrong MJ, Duff-Canning S, Psych C, Kowgier M, Marras C. Independent application of montreal cognitive assessment/mini-mental state examination conversion. Mov Disord. 2015;30:1710-1711.
8. van Steenoven I, Aarsland D, Hurtig H, Chen-Plotkin A, Duda JE, Rick J, Chahine LM, Dahodwala N, Trojanowski JQ, Roalf DR. Conversion between mini-mental state examination, montreal cognitive assessment, and dementia rating scale-2 scores in Parkinson’s disease. Mov Disord. 2014;29:1809-1815.
9. Olson R, Tyldesley S, Carolan H, Parkinson M, Chhanabhai T, McKenzie M. Prospective comparison of the prognostic utility of the Mini Mental State Examination and the Montreal Cognitive Assessment in patients with brain metastases. Support Care Cancer. 2011;19:1849-1855.
10. Wong GK, Lam SW, Wong A, Ngai K, Poon WS, Mok V. Comparison of montreal cognitive assessment and mini-mental state examination in evaluating cognitive domain deficit following aneurysmal subarachnoid haemorrhage. PLoS One. 2013;8:e59946.
11. Dong Y, Yean Lee W, Hilal S, Saini M, Wong TY, Chen CL, Venketasubramanian N, Ikram MK. Comparison of the Montreal Cognitive Assessment and the Mini-Mental State Examination in detecting multi-domain mild cognitive impairment in a Chinese sub-sample drawn from a population-based study. Int Psychogeriatr. 2013;25:1831-1838.
12. Roalf DR, Moberg PJ, Xie SX, Wolk DA, Moelter ST, Arnold SE. Comparative accuracies of two common screening instruments for classification of Alzheimer’s disease, mild cognitive impairment, and healthy aging. Alzheimers Dement. 2013;9:529-537.
13. Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ; Alzheimer’s Disease Neuroimaging Initiative. Relationship between the Montreal Cognitive Assessment and Mini-mental State Examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. 2015;15:107.
14. Cecato JF, Martinelli JE, Izbicki R, Yassuda MS, Aprahamian I. A subtest analysis of the Montreal cognitive assessment (MoCA): which subtests can best discriminate between healthy controls, mild cognitive impairment and Alzheimer’s disease? Int Psychogeriatr. 2016;28:825-832.
15. Bravo G, Hébert R. Age- and education-specific reference values for the Mini-Mental and modified Mini-Mental State Examinations derived from a non-demented elderly population. Int J Geriatr Psychiatry. 1997;12:1008-1018.
16. Brayne C. The mini-mental state examination, will we be using it in 2001? Int J Geriatr Psychiatry. 1998;13:285-290.
17. Olin JT, Zelinski EM. The 12-month reliability of the mini-mental state examination. Psychol Assess. 1991;3:427-432.
18. Albano AD. equate: An R package for observed-score linking and equating. Available from: http://CRAN.R-project.org/package=equate
19. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307-310.
20. Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45:255-268.
21. Livingston SA, Kim S. The circle-arc method for equating in small samples. J Educ Meas. 2009;46:330-343.
22. Mâsse LC, Allen D, Wilson M, Williams G. Introducing equating methodologies to compare test scores from two different self-regulation scales. Health Educ Res. 2006;21 Suppl 1:i110-i120.
23. Livingston SA, Kim S. Random-groups equating with samples of 50 to 400 test takers. J Educ Meas. 2010;47:175-185.
24. Holland PW, Dorans NJ. Linking and equating. In: Educational measurement. Westport, CT: Greenwood Press; 2006:187-220.