This article has Open Peer Review reports available.
Diagnosis of TIA (DOT) score – design and validation of a new clinical diagnostic tool for transient ischaemic attack
© Dutta. 2016
Received: 13 September 2015
Accepted: 19 January 2016
Published: 9 February 2016
The diagnosis of transient ischaemic attack (TIA) can be difficult and 50–60 % of patients seen in TIA clinics turn out to be mimics. Many of these mimics have high ABCD2 scores and inappropriately fill urgent TIA clinic slots. A TIA diagnostic tool may help non-specialists make the diagnosis with greater accuracy and improve TIA clinic triage. The only available diagnostic score (Dawson et al.) is limited in scope and not widely used. The Diagnosis of TIA (DOT) score is a new, internally validated diagnostic tool, delivered as a web and mobile app calculator, which encompasses both brain and retinal TIA.
The score was derived retrospectively from a single centre TIA clinic database using stepwise logistic regression by backwards elimination to find the best model. An optimum cutpoint was obtained for the score. The derivation and validation cohorts were separate samples drawn from 2010–2012 and 2013 respectively. Receiver Operating Characteristic (ROC) curves and area under the curve (AUC) were calculated and the diagnostic accuracy of DOT was compared to the Dawson score. A web calculator and smartphone app were designed subsequently.
The derivation cohort comprised 879 patients and the validation cohort 525. The final model had seventeen predictors and an AUC of 0.91 (95 % CI: 0.89–0.93). When tested on the validation cohort, the AUC of the DOT score was 0.89 (0.86–0.92) while that of the Dawson score was 0.77 (0.73–0.81). The sensitivity and specificity of the DOT score were 89 % (CI: 84 %–93 %) and 76 % (70 %–81 %) respectively, while those of the Dawson score were 83 % (78 %–88 %) and 51 % (45 %–57 %). Other diagnostic accuracy measures (DOT vs. Dawson) included positive predictive values (75 % vs. 58 %), negative predictive values (89 % vs. 79 %), positive likelihood ratios (3.67 vs. 1.70) and negative likelihood ratios (0.15 vs. 0.32).
The DOT score shows promise as a diagnostic tool for TIA and requires independent external validation before it can be widely used. It could potentially improve the triage of patients assessed for suspected TIA.
The diagnosis of transient ischaemic attack (TIA) can be difficult and studies show limited inter-observer agreement for clinical diagnosis. About 50 to 60 % of TIA referrals by non-specialists turn out to be non-cerebrovascular mimics [2–4]. Patients with TIA have a high risk of early stroke and subsequent adverse events [5, 6]. Following secondary prevention studies [7, 8] and the introduction of the ABCD2 score, rapid assessment TIA clinics have been set up to investigate and manage TIA. Inappropriate referrals to TIA clinics, however, can lead to delays for patients with TIA, and the misdiagnosis of non-cerebrovascular conditions as TIA leads to unnecessary anxiety and inappropriate initial management.
Stroke diagnostic tools such as FAST and ROSIER have been developed for use by prehospital assessors and emergency room clinicians [10, 11]. The ABCD2 score, too, has been used as a crude diagnostic aid for TIA. More recently, the ability of the ABCD2 score to reliably discriminate between those at high or low risk after a TIA has been called into question, and a third of mimics have been found to have ABCD2 scores ≥ 4. A TIA diagnostic tool could be used to improve TIA clinic triage by removing some mimics from urgent TIA pathways. There is only one existing TIA diagnostic tool, the score of Dawson and colleagues, which was not designed for retinal and some posterior circulation events and is not widely used. It has shown limited accuracy when used in a primary care setting. The Diagnosis of TIA (DOT) score is a new tool to help non-specialists make the diagnosis of TIA with greater accuracy. It includes retinal and posterior circulation events and is meant for use as a mobile app and web-based calculator.
The development cohort for the score was a subset of TIA clinic patients studied retrospectively from a TIA database. Briefly, all patients referred to the Monday to Friday TIA clinics of Gloucestershire Royal Hospital (GRH), Gloucester, UK between April 2010 and May 2012 were eligible for inclusion in the development cohort. The catchment area for GRH has a population of 560,000. Referrals are accepted from Emergency Departments, General Practitioners, paramedics and other departments such as ophthalmology.
Data collected included demographic information, past medical history, a detailed history, examination findings, ABCD2 scores, results of investigations (blood tests, ECG, same day carotid duplex ultrasound, same day CT brain scans) and final diagnosis. MRI scans were not done on the same day but later as required. The diagnosis was made by consultant stroke physicians with at least 7 to 10 years of stroke experience. Patients were classified as TIA, minor stroke or mimic. TIA was defined as an acute loss of focal cerebral or ocular function lasting < 24 h and presumed to be caused by embolic or thrombotic vascular disease, while a stroke was diagnosed if symptoms lasted > 24 h [16, 17]. A diagnosis of stroke was also made if symptoms lasted < 24 h but there was a new infarct visible on CT. The minor strokes in this cohort were patients who had minimal symptoms or signs and did not require hospital admission. TIAs and strokes included retinal as well as cerebral events. The traditional NINDS diagnostic criteria for TIA were used, with a few exceptions at the discretion of the diagnosing physician. In the absence of a gold standard for diagnosis and of same day diffusion weighted MRI, follow-up data for a median of 34.9 months (IQR 27.7–41.6) were accessed to look at subsequent vascular events and death, making the clinic diagnosis more robust. In a small number of cases, the final diagnosis was altered based on new information from follow-up.
Ethics, consent and permissions
The study had the necessary institutional permission from the Gloucestershire Hospitals NHS Foundation Trust (GHNHSFT). It was reviewed by the funding Research and Innovation Forum of the GHNHSFT and Gloucestershire Research Support Service (GRSS) with regard to the appropriate IRB/Ethical reviews necessary. The study was assessed as requiring no Research Ethics Committee/IRB review or formal patient consent. The basis for this decision, as per the regulations of the UK Health Departments' Governance Arrangements for Research Ethics Committees, was that the study was limited to secondary use of information previously collected in the course of normal clinical care and that no patient identifiers were recorded in the dataset for analysis. No patient contact or additional procedures were necessary for this study.
Selection of variables for analysis
Information derived from the history was coded into discrete binary variables such as “unilateral weakness”, “dysphasia”, “dysarthria”, “headache” and “amnesia”. These variables were selected from clinical features expected to predict stroke or TIA based on previous experience, as well as those likely to favour mimics such as migraine and seizures [10, 11, 14, 19, 20]. Age and risk factors were other potential predictors. Preliminary univariate analysis was used to identify predictive variables, although non-significant variables were not excluded automatically from fitted models.
Logistic regression and model selection
The dependent variable was “definite cerebrovascular disease”, which included TIAs and minor strokes of the brain or eye. Stepwise multiple logistic regression using the backwards elimination method was performed to select the best model using the Akaike information criterion.
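The comparison made at each backwards-elimination step can be sketched briefly. The snippet below is written in Python rather than the R used in the study, and the model names and log-likelihoods are hypothetical: it computes AIC = 2k − 2·ln(L) for a set of candidate models and keeps the one with the lowest value, which is the decision rule applied when considering which predictor to drop.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted models: name -> (maximised log-likelihood, number of
# estimated parameters including the intercept). At each backwards step the
# candidate with the lowest AIC is retained.
candidates = {
    "full model (20 predictors)": (-330.0, 21),
    "drop 'vertigo' (19 predictors)": (-330.6, 20),
    "drop 'headache' (19 predictors)": (-338.9, 20),
}

best = min(candidates, key=lambda name: aic(*candidates[name]))
print(best)  # here, dropping 'vertigo' lowers the AIC, so it is eliminated
```

In this made-up illustration, removing "vertigo" barely changes the log-likelihood, so the penalty for the extra parameter outweighs the fit lost and the smaller model wins; removing "headache" costs too much fit and is rejected.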
The actual score
The diagnostic score was derived from the coefficients of the final model and the intercept. Calibration of the models was tested by calibration plots and the Hosmer-Lemeshow statistic, and discrimination by Receiver Operating Characteristic (ROC) curves and area under the curve (AUC or c statistic). Optimal cutpoints for the score were derived from the ROC curve using two methods: criteria based on sensitivity and specificity alone using the Youden Index, and the cost of misclassification method (cost-benefit analysis of diagnosis), where an assumption of a 2:1 cost ratio was made (i.e. the cost of misclassifying a TIA as a mimic is twice that of misclassifying a mimic as a TIA). Sensitivity, specificity and other measures of diagnostic accuracy were calculated for each cutpoint. A web-based calculator and smartphone app were subsequently designed to calculate the logit and the probability of the outcome, and to present the result as “probable TIA”, “possible TIA” or “TIA unlikely”.
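The calculator's arithmetic is simple enough to sketch. In the outline below (Python; the published intercept of −3.365 is used, but the predictor coefficients shown are hypothetical stand-ins, and the assumption that the two published cutpoints of 0.297 and −0.547 sit on the logit scale and delimit the three output categories is mine), the tool sums the coefficients of the predictors that are present, converts the logit to a probability, and maps the logit to one of the three result labels.

```python
import math

INTERCEPT = -3.365  # intercept reported for the final model

# Hypothetical illustrative coefficients; the published values appear in the
# paper's predictors table.
COEFFICIENTS = {
    "unilateral_weakness": 1.9,
    "dysphasia": 1.4,
    "history_of_hypertension": 0.5,
    "headache": -0.8,
}

def dot_logit(features):
    """Sum the intercept and the coefficients of the predictors present.

    features: dict mapping predictor name -> bool.
    """
    return INTERCEPT + sum(
        coef for name, coef in COEFFICIENTS.items() if features.get(name, False)
    )

def probability(logit):
    """Convert a logit to a probability with the logistic function."""
    return 1.0 / (1.0 + math.exp(-logit))

def classify(logit, upper=0.297, lower=-0.547):
    """Three-way result using the two published cutpoints (logit scale assumed)."""
    if logit >= upper:
        return "probable TIA"
    if logit >= lower:
        return "possible TIA"
    return "TIA unlikely"

# Example patient with two positive predictors:
patient = {"unilateral_weakness": True, "dysphasia": True}
logit = dot_logit(patient)  # -3.365 + 1.9 + 1.4
print(round(probability(logit), 2), classify(logit))
```

Presenting a probability alongside a category is the usual design for such calculators: the category gives the triage decision while the probability conveys how close the patient is to a threshold.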
Although no formal sample size calculation was undertaken, the rule of thumb to satisfy the requirements for developing a score by logistic regression modelling was met by using the available sample; there were more than 10–20 outcome events per potential predictor variable studied [25, 26].
A separate validation cohort was taken from patients referred to the GRH TIA clinic from January to August 2013. Baseline characteristics of the derivation and validation cohorts were compared using the t-test and chi-squared test. The DOT and Dawson scores were calculated retrospectively by one observer blinded to the clinic diagnosis. The clinic diagnosis, made by a stroke consultant, was accepted as the gold standard. Predicted and observed diagnoses were plotted to test calibration, and discrimination was tested by the c statistic with 95 % CI. The c statistics for ABCD2 scores (as recorded by the referring clinicians) and Dawson scores were also calculated and ROC curves compared. Data were analysed using R.
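The c statistic used here has a direct interpretation: the probability that a randomly chosen case scores higher than a randomly chosen non-case, with ties counting half. It can therefore be computed from the two groups of scores without plotting a curve, as this small sketch shows (Python, with made-up scores; the study's analysis was done in R):

```python
def c_statistic(case_scores, noncase_scores):
    """AUC as the proportion of case/non-case pairs ranked correctly.

    Ties count as half a concordant pair (the Mann-Whitney formulation).
    """
    pairs = len(case_scores) * len(noncase_scores)
    concordant = 0.0
    for c in case_scores:
        for n in noncase_scores:
            if c > n:
                concordant += 1.0
            elif c == n:
                concordant += 0.5
    return concordant / pairs

# Made-up example: scores for three TIA patients and three mimics.
print(c_statistic([0.9, 0.7, 0.6], [0.8, 0.4, 0.2]))
```

A value of 0.5 means the score discriminates no better than chance, which is the benchmark against which the reported AUCs of 0.89 (DOT) and 0.77 (Dawson) should be read.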
The derivation dataset was a subset of 1067 patients. Of the 1067 records, 188 were rejected because the initial history had been recorded by trainee doctors or specialist nurses rather than experienced stroke consultants. Data for 879 patients were satisfactory in every respect and were used as the training dataset. In 12 of the 879 patients, the final diagnosis was altered based on new information from follow-up.
Characteristics of the development and validation datasets

| Characteristic | Development cohort (n = 879) | Validation cohort (n = 525) |
|---|---|---|
| Age in years, mean (SD) | – | – |
| Sex (proportion of females) | – | – |
| Proportion of TIA | 272 (30.9 %) | 160 (30.5 %) |
| Proportion of stroke | 174 (18.7 %) | 76 (14.5 %) |
| Proportion of mimics | 443 (50.4 %) | 289 (55.1 %) |
| Previous cerebrovascular disease | 128 (14.6 %) | 32 (6.1 %) |
| – | 365 (41.5 %) | 106 (20.2 %) |
| – | 117 (13.3 %) | 27 (5.2 %) |
| – | 101 (11.2 %) | 25 (4.8 %) |
| – | 165 (18.8 %) | 24 (4.6 %) |
Predictors in the final model with regression coefficients, standard errors (SE), odds ratios and 95 % confidence intervals. The intercept was −3.365.

| Predictor | Regression coefficient (SE) | Odds ratio | 95 % CI |
|---|---|---|---|
| History of hypertension | – | – | – |
| History of AF or new AF | – | – | – |
| Unilateral facial weakness (UMN) | – | – | – |
| Unilateral weakness (arm, leg or both) | – | – | – |
| Monocular visual loss | – | – | – |
| Bilateral visual loss | – | – | – |
| Visual aura (fortification spectra, scintillations or spreading scotoma) | – | – | – |
| Unilateral sensory loss | – | – | – |
| Ataxia (limb or gait) | – | – | – |
| Loss of consciousness or pre-syncope | – | – | – |
| Tingling/numbness/pins and needles | – | – | – |
Diagnostic accuracy of DOT (cutpoint 0.297), DOT (cutpoint −0.547) and Dawson scores on the full validation cohort and the cohort excluding retinal events. Confidence intervals (95 %) are shown where available.

Full validation cohort (n = 525)

| Measure | DOT (cutpoint 0.297) | DOT (cutpoint −0.547) | Dawson |
|---|---|---|---|
| Sensitivity | 81 % (76 %–86 %) | 89 % (84 %–93 %) | 83 % (78 %–88 %) |
| Specificity | 86 % (81 %–90 %) | 76 % (70 %–81 %) | 51 % (45 %–57 %) |
| Positive predictive value | 82 % (77 %–87 %) | 75 % (70 %–80 %) | 58 % (53 %–63 %) |
| Negative predictive value | 85 % (80 %–89 %) | 89 % (85 %–93 %) | 79 % (72 %–85 %) |
| Positive likelihood ratio | – | 3.67 | 1.70 |
| Negative likelihood ratio | – | 0.15 | 0.32 |

Validation cohort excluding retinal events (n = 485)

| Measure | DOT (cutpoint 0.297) | DOT (cutpoint −0.547) | Dawson |
|---|---|---|---|
| Sensitivity | 80 % (73 %–85 %) | 88 % (82 %–92 %) | 89 % (84 %–93 %) |
| Specificity | 86 % (81 %–90 %) | 76 % (70 %–81 %) | 51 % (45 %–57 %) |
| Positive predictive value | 79 % (73 %–85 %) | 71 % (65 %–77 %) | 55 % (50 %–61 %) |
| Negative predictive value | 86 % (82 %–90 %) | 90 % (86 %–94 %) | 88 % (82 %–92 %) |
| Positive likelihood ratio | – | – | – |
| Negative likelihood ratio | – | – | – |
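All of the measures in the table derive from a single 2×2 cross-tabulation of score result against final diagnosis. The sketch below (Python; the counts are approximate reconstructions from the published percentages, assuming 236 TIA/minor stroke events and 289 mimics in the validation cohort) shows the standard formulae:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table.

    tp/fn: events called positive/negative; fp/tn: mimics called positive/negative.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "lr_positive": sens / (1 - spec),  # positive likelihood ratio
        "lr_negative": (1 - sens) / spec,  # negative likelihood ratio
    }

# Counts chosen to roughly reproduce the DOT (cutpoint -0.547) column:
m = accuracy_measures(tp=210, fp=69, fn=26, tn=220)
print({k: round(v, 2) for k, v in m.items()})
```

Unlike predictive values, the likelihood ratios are insensitive to the prevalence of TIA in the cohort, which is why they are often preferred when comparing scores across settings with different case-mixes.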
Of the 24 patients in whom the DOT score wrongly missed TIAs or strokes, 15 were anterior circulation events, eight were posterior circulation events and one was a retinal TIA. However, only 7/24 (29 %) had imaging (CT, MR or carotid) abnormalities in keeping with cerebrovascular disease. Many of the histories in this group of patients were atypical, with non-focal as well as focal features, and the diagnoses were recorded as probable or possible TIA/stroke, suggesting some diagnostic uncertainty.
The DOT score is a new TIA diagnostic tool which performed well in comparison to the Dawson score when applied to the validation dataset. The sensitivities of the two scores were very similar, but other measures such as specificity, positive and negative predictive values and AUC were superior for the DOT score. Unlike the Dawson score, this score attempts to encompass the entire spectrum of TIA/stroke that would be expected in a TIA clinic, including brain (anterior and posterior circulation) and retinal events. In contrast to a previous study, the ABCD2 scores, which were those recorded by the referring clinicians, showed very poor discrimination for the diagnosis of TIA. This suggests, in keeping with a recent meta-analysis, that clinic triage based on non-specialist use of the ABCD2 score could be improved by the use of a diagnostic score to enable quicker assessment of patients with a higher a priori probability of TIA or minor stroke.
Seizure and dysarthria are two seemingly surprising omissions from the final model. There were only 18 patients with overt rhythmical movements in the derivation cohort (of which at least one had limb-shaking TIA), and other potentially postictal features such as amnesia and loss of consciousness were significant. In contrast to some other scores [10, 11, 14], the DOT score attempts to distinguish between dysphasia and dysarthria to improve its specificity. It is well known that dysarthria has a broader differential diagnosis and, particularly in isolation, does not necessarily suggest a TIA. Other common cerebrovascular symptoms which are often associated with dysarthria were significant in the final model, suggesting that the omission of dysarthria may not affect the sensitivity of the score. Many patients with transient monocular visual loss in the derivation cohort had been referred by the ophthalmology department, so alternative diagnoses may have been screened out, leading to a higher preponderance of ocular TIA in this group of patients. The recommendation that all patients with visual loss should have other ocular pathology excluded before attributing the visual loss to a TIA therefore extends to the presumed diagnosis of retinal TIA based on this score. It is also necessary to emphasise that patients with ongoing neurological signs or symptoms, or any appropriate lesion on imaging, should be considered as having a stroke and managed accordingly.
It has been suggested that it is unrealistic for a clinical scoring system to cover all types of TIA given the heterogeneity of their symptoms. However, this score was derived from a TIA cohort with a typical case-mix of anterior, posterior and retinal events as well as a high proportion of mimics. This explains why the score is not “parsimonious” and includes 17 items; more predictors are needed to sort TIA from mimics given their varied symptoms. A model as complex as this may be considered impractical on paper but becomes highly usable when presented as a calculator or mobile app. Once the history is taken, using the web calculator or app takes about 30 s. This approach also enables guidance notes to be incorporated in the tool to help non-specialist assessors select the appropriate predictors with greater accuracy. It is well known that predictive tools derived in one health care setting may not translate well to others. It is hoped, however, that the guidance provided will preserve the score’s predictive performance when it is used by primary care or emergency department physicians.
The sample size used to derive the DOT score may not be considered large, but it met the essential requirements of developing a score by logistic regression modelling [25, 26]. Although same day DWI-MR was not available, the diagnosis was made by experienced stroke physicians mostly using standard NINDS diagnostic criteria. Results of follow-up were taken into account to refine the diagnoses made. The dataset for the derivation cohort was complete in every respect. Validation, although internal and retrospective, used a separate set of patients from a different time point which differed in many respects from the derivation cohort. The DOT and Dawson scoring was done by one observer blinded to the clinic diagnosis. The differences between the two cohorts were probably due to chance alone, and the score’s performance despite them strengthens its external validity. The case-mix suggests that the score should be generalizable to all TIA services which accept unselected patients referred by primary care physicians, emergency or other departments.
In conclusion, the DOT score shows promise as a useful tool for the diagnosis of TIA, but will require external validation before it can be widely used. Impact studies would also be necessary to show that the score improves TIA clinic triage.
Chris Foy, medical statistician, provided invaluable advice but was not involved in the design and validation of the actual score. Emily Bowen, Steph Bridgland and Glynn Ward undertook data collection and data management. Funding for the author and all contributors was from the Research and Innovation Forum, Gloucestershire Hospitals NHS Foundation Trust.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1. Kraaijeveld CL, van Gijn J, Schouten HJ, Staal A. Interobserver agreement for the diagnosis of transient ischemic attacks. Stroke. 1984;15:723–5.
2. Dutta D, Bowen E, Foy C. Four year follow up of transient ischemic attacks, strokes, and mimics: a retrospective transient ischemic attack clinic cohort study. Stroke. 2015;46:1227–32.
3. Prabhakaran S, Silver AJ, Warrior L, McClenathan B, Lee VH. Misdiagnosis of transient ischemic attacks in the emergency room. Cerebrovasc Dis. 2008;26:630–5.
4. Martin PJ, Young G, Enevoldson TP, Humphrey PR. Overdiagnosis of TIA and minor stroke: experience at a regional neurovascular clinic. QJM. 1997;90:759–63.
5. Giles MF, Rothwell PM. Risk of stroke early after transient ischaemic attack: a systematic review and meta-analysis. Lancet Neurol. 2007;6:1063–72.
6. Clark TG, Murphy MFG, Rothwell PM. Long term risks of stroke, myocardial infarction, and vascular death in “low risk” patients with a non-recent transient ischaemic attack. J Neurol Neurosurg Psychiatry. 2003;74:577–80.
7. Rothwell PM, Giles MF, Chandratheva A, Marquardt L, Geraghty O, Redgrave JNE, et al. Effect of urgent treatment of transient ischaemic attack and minor stroke on early recurrent stroke (EXPRESS study): a prospective population-based sequential comparison. Lancet. 2007;370:1432–42.
8. Lavallée PC, Meseguer E, Abboud H, Cabrejo L, Olivot J-M, Simon O, et al. A transient ischaemic attack clinic with round-the-clock access (SOS-TIA): feasibility and effects. Lancet Neurol. 2007;6:953–60.
9. Johnston SC, Rothwell PM, Nguyen-Huynh MN, Giles MF, Elkins JS, Bernstein AL, et al. Validation and refinement of scores to predict very early stroke risk after transient ischaemic attack. Lancet. 2007;369:283–92.
10. Harbison J. Diagnostic accuracy of stroke referrals from primary care, emergency room physicians, and ambulance staff using the Face Arm Speech Test. Stroke. 2002;34:71–6.
11. Nor AM, Davis J, Sen B, Shipsey D, Louw SJ, Dyker AG, et al. The Recognition of Stroke in the Emergency Room (ROSIER) scale: development and validation of a stroke recognition instrument. Lancet Neurol. 2005;4:727–34.
12. Quinn TJ, Cameron AC, Dawson J, Lees KR, Walters MR. ABCD2 scores and prediction of noncerebrovascular diagnoses in an outpatient population: a case-control study. Stroke. 2009;40:749–53.
13. Wardlaw JM, Brazelli M, Chapell FM, Miranda H, Schuler K, Sandercock PA, et al. ABCD2 score and secondary stroke prevention: meta-analysis and effect per 1,000 patients triaged. Neurology. 2015;85:373–80.
14. Dawson J, Lamb KE, Quinn TJ, Lees KR, Horvers M, Verrijth MJ, et al. A recognition tool for transient ischaemic attack. QJM. 2009;102:43–9.
15. Lasserson DS, Mant D, Hobbs FDR, Rothwell PM. Validation of a TIA recognition tool in primary and secondary care: implications for generalizability. Int J Stroke. 2015;10:692–6.
16. Hankey GJ, Slattery JM, Warlow CP. The prognosis of hospital-referred transient ischaemic attacks. J Neurol Neurosurg Psychiatry. 1991;54:793–802.
17. Warlow C. Epidemiology of stroke. Lancet. 1998;352:S1–4.
18. Easton JD, Saver JL, Albers GW, Alberts MJ, Chaturvedi S, Feldmann E, et al. Definition and evaluation of transient ischemic attack: a scientific statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2009;40:2276–93.
19. Special report from the National Institute of Neurological Disorders and Stroke. Classification of cerebrovascular diseases III. Stroke. 1990;21:637–76.
20. Hand PJ, Kwan J, Lindley RI, Dennis MS, Wardlaw JM. Distinguishing between stroke and mimic at the bedside: the brain attack study. Stroke. 2006;37:769–75.
21. Royston P, Moons KGM, Altman DG, Vergouwe Y. Prognosis and prognostic research: developing a prognostic model. BMJ. 2009;338:1373–7.
22. Altman DG, Vergouwe Y, Royston P, Moons KGM. Prognosis and prognostic research: validating a prognostic model. BMJ. 2009;338:1432–5.
23. Lopez-Raton M, Rodriguez-Alvarez MS, Cadarso-Suarez C, Gude-Sampedro F. OptimalCutpoints: an R package for selecting optimal cutpoints in diagnostic tests. J Stat Softw. 2014;61:1–36.
24. Eusebi P. Diagnostic accuracy measures. Cerebrovasc Dis. 2013;36:267–72.
25. Steyerberg EW, Eijkemans MJ, Harrell FE, Habbema JD. Prognostic modelling with logistic regression analysis: a comparison of selection and estimation methods in small data sets. Stat Med. 2000;19:1059–79.
26. Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49:1373–9.
27. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/. Accessed 10 June 2015.