
In order to better evaluate neurocognitive impairment, we need effective tools that reduce medical errors. Specifically, variation in clinician scoring of the screening tests used for neurocognitive impairment inherently introduces significant observer and measurement bias. These screening tests include, but are not limited to, the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), the Mini-Cog, the Ascertain Dementia 8 (AD8), and the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). These flaws become apparent when different examiners grade the same patient differently because of subjective judgments about what constitutes an acceptable response to a question, deviations from the standard test administration protocol, and patient-physician barriers, including cultural and linguistic differences, all of which are accentuated when patients may be neurocognitively impaired. Furthermore, these tools fail to accommodate the exam to the patient’s vision, hearing, and manual dexterity.

Herein we present a possible solution to, or at least an alleviation of, these biases: standardizing these screening tests with a mobile app that can be used regardless of language or dialect, dexterity problems, and vision or hearing impairments. The app will automatically score and interpret inputs to reduce observer and measurement bias. It will also track inputs from a particular patient and construct an algorithmic baseline to highlight the minute changes in functional status that the patient undergoes over time.
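
As a rough illustration of the baseline-tracking idea, the sketch below derives a patient-specific baseline from prior assessment scores and flags a new score that falls well below it. The interfaces, function names, z-score approach, and deviation threshold are illustrative assumptions, not a validated clinical algorithm.

```typescript
// Hypothetical sketch of per-patient baseline tracking; the z-score
// approach and the -1.5 threshold are assumptions, not validated rules.

interface Assessment {
  date: string;       // ISO date of administration
  totalScore: number; // e.g., MoCA total out of 30
}

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// Flags a new score that deviates meaningfully from the patient's own baseline.
function flagDeviation(history: Assessment[], latest: Assessment): boolean {
  if (history.length < 3) return false; // too little data to form a baseline
  const scores = history.map((a) => a.totalScore);
  const sd = stdDev(scores) || 1; // guard against a perfectly flat history
  const z = (latest.totalScore - mean(scores)) / sd;
  return z <= -1.5; // assumed threshold; would require clinical validation
}
```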

Focusing on the MoCA screening test, the software will allow modifiable add-ons to address specific concerns, such as the ability to zoom in and out of the interactive app for a patient with vision impairment. Patients with severe deforming arthritis who are unable to hold a pencil will be able to draw with their fingers or knuckles and thus demonstrate their visuospatial capabilities against a new baseline that incorporates their disability.
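
One way such add-ons could be wired in is through a per-patient accessibility profile that each test module reads before rendering and records alongside the result. The field names and values below are hypothetical; they only sketch the kind of configuration the proposal describes.

```typescript
// Hypothetical per-patient accessibility profile; all field names and
// values are illustrative assumptions.

interface AccessibilityProfile {
  zoomLevel: number;                          // 1.0 = default; higher for low vision
  inputMode: "stylus" | "finger" | "knuckle"; // drawing tasks adapt to dexterity
  audioVolume: number;                        // 0..1, routed to headphones if connected
}

// A drawing task reads the profile before rendering and stores it with the
// result, so the score is interpreted against a baseline that includes the
// patient's disability.
function configureDrawingCanvas(
  profile: AccessibilityProfile
): { zoom: number; strokeWidth: number } {
  const strokeWidth = profile.inputMode === "stylus" ? 2 : 8; // wider strokes for fingers/knuckles
  return { zoom: profile.zoomLevel, strokeWidth };
}
```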

To reduce variation in the presentation of interview questions, the app will feature a virtual standardized examiner asking the questions in the form of replayable video clips. These clips will be offered in multiple languages and designed to minimize miscommunication due to an examiner’s accent, tone, or voice volume. They will also eliminate the examiner’s tendency to give conscious or subconscious cues to a patient who is struggling to answer a question. In a sense, every patient is examined by “one” virtual examiner, reducing examiner subjectivity while giving the patient clear lines of communication. Playback volume will be adjustable, and a headphone option will be available for patients with hearing impairment.
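
The virtual-examiner clips could be organized as a simple library keyed by question and language, with a deterministic fallback so every patient hears the same standardized recording. The question IDs, language tags, and URLs below are assumptions for illustration.

```typescript
// Illustrative clip library for the virtual examiner; identifiers and
// file paths are hypothetical.

interface ExaminerClip {
  questionId: string;
  language: string; // BCP 47 tag, e.g. "en-US", "es-MX"
  videoUrl: string;
}

const clipLibrary: ExaminerClip[] = [
  { questionId: "moca-q1", language: "en-US", videoUrl: "clips/moca-q1.en-US.mp4" },
  { questionId: "moca-q1", language: "es-MX", videoUrl: "clips/moca-q1.es-MX.mp4" },
];

// Returns the standardized clip for the patient's language, falling back
// to English so every patient receives an identical recording.
function selectClip(questionId: string, language: string): ExaminerClip | undefined {
  return (
    clipLibrary.find((c) => c.questionId === questionId && c.language === language) ??
    clipLibrary.find((c) => c.questionId === questionId && c.language === "en-US")
  );
}
```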

Answers will be scored against a database of evidence-based acceptable answers. Assessments from individual patients will be stored in a cloud-based database, allowing quick and efficient tracking of data, notification of changes in a patient’s function, and assessment of response to treatment. Additionally, this database can be used in a wide array of research, including comparative studies investigating the efficacy of treatments. This allows us to individualize treatment based on which facets of the exam a patient has failed, ideally allowing the timely implementation of proper treatment and health services, which would cut healthcare costs from excessive, unnecessary, or late-intervention care. This app will facilitate accurate patient care, rapid evidence-based adjustments to the therapeutic approach, and systematic data gathering for large research studies. Lastly, this process reduces the overall time needed to administer and score these neurocognitive screening exams, reducing clinician burden and saving healthcare dollars.
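
Below is a minimal sketch of scoring against such an answer bank, assuming simple text normalization and exact matching. The item identifiers and accepted answers are illustrative; real scoring would need clinically validated answer sets.

```typescript
// Minimal sketch of automated item scoring against a bank of acceptable
// answers; the answer sets shown are illustrative, not validated rules.

const acceptableAnswers: Record<string, string[]> = {
  "moca-naming-1": ["lion"],
  "moca-naming-2": ["rhinoceros", "rhino"],
  "moca-naming-3": ["camel", "dromedary"],
};

function normalize(response: string): string {
  return response.trim().toLowerCase();
}

// Awards 1 point if the normalized response matches any acceptable answer.
function scoreItem(itemId: string, response: string): number {
  const accepted = acceptableAnswers[itemId] ?? [];
  return accepted.includes(normalize(response)) ? 1 : 0;
}
```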

Proposal formulated in partnership with Eyuel S. Terefe.