Are Reading Room Interruptions Taking a Toll on Accuracy and Efficiency?

Ultimately, if we can identify factors that are particularly important in determining whether a given interruption will lead to adverse results, we may be able to use this research to design future reading room protocols that make these types of interruptions less frequent and/or less disruptive.

Drew et al
Journal of Medical Imaging
September 12, 2018

Ever wonder or worry about the toll reading room interruptions could take on the accuracy and/or efficiency of your personal practice? A group of researchers that included Booth Aldred, MD, from SR member practice Austin Radiological Association, looked into those questions and published their results in a recent issue of the Journal of Medical Imaging. Their research could help improve reading room protocols in the future.

The authors note that reading room interruptions are a common occurrence for radiologists, citing recent workflow studies that clocked interruptions every 4 to 12 minutes during regular working hours. Another analysis reported on-call radiologists received an average of 72 calls per 12-hour shift. “Despite the suggestive evidence that interruptions have negative consequences for radiologists, we know little about the risk associated with interruption in diagnostic radiology,” the authors wrote.

The researchers hypothesized that interruptions would cause radiologists to forget which areas of the image they had examined prior to the interruption. Drew et al set up two experiments in which radiologists were interrupted in two ways, in person and by telephone, beginning three minutes into the interpretation, then tracked time to interpretation and eye movements with a mobile eye-tracking system to quantify the costs of the two different types of interruption.

A total of 34 radiologists, fellows, and second-year residents representing a broad spectrum of subspecialties participated in two experiments, one at the University of Utah and the other at the RSNA meeting. The worklist for both experiments included complex CT cases.

In the first experiment, participants were given 45 minutes to complete a worklist of 11 cases, including four experimental cases that each contained at least one important finding (sternal fracture, evidence of appendicitis, and more) dispersed throughout the worklist. Half of the experimental cases were interrupted by a telephone request that required the radiologist to navigate to a different worklist to review a case before returning to the original worklist and the interrupted study.

In the second experiment, radiologists read four cases, including two experimental cases, only one of which was interrupted. Neither of the experimental cases included important findings. The disruption consisted of an assistant asking the radiologist to stop and fill out a nine-question form, with the interrupted case still on the screen. Eye-tracking was used to verify that the radiologist was focused on the form and not re-engaged with the interpretation of the image on screen.

The authors note that much of the previous research to quantify the impact of interruption has focused on resumption lag: the time it takes to restart the original task. Drew et al, interested in reducing diagnostic error, focused on time on task and task accuracy.

On the time-to-interpretation front, the researchers observed a “reliable time cost” in the first experiment, in which the interruption involved reviewing other medical images, but they did not observe a time cost associated with the interruptions in the second experiment, which did not involve review of medical images.

In the first experiment, the interruption added roughly two minutes to the interpretation of the interrupted case, as compared with a paired, uninterrupted case. However, the interruption in the second experiment (of the same duration as the first) resulted in no measurable difference in time to interpretation.

In the post-interruption periods of both experiments, the radiologists spent measurably more time looking at the dictation screen than at the images, suggesting that the interruptions reduced the amount of time spent examining the images. While the experiments were not designed to detect what would likely be small changes in accuracy due to interruption, the researchers suggested that less time spent looking at the images could explain the greater number of discrepancies observed in past studies during periods with more phone interruptions.

Explaining that using eye-tracking software to assess image “coverage” during interpretation is challenging, the authors attempted to assess coverage by focusing on the region of the sternal fracture in the first experiment. They found that a significantly higher proportion of the radiologists who were not interrupted documented the sternal fracture, and that those radiologists spent 5 seconds more on this region than the interrupted radiologists did.

Although they viewed the experiments as preliminary research, the authors were impressed that they were able to document reliable costs of interruption in an experiment that “allowed for just 1 or 2 weak interruptions.” Post-experiment interviews indicated that the interruptions some participating radiologists face in real life are greater than those simulated in the study.

“Ultimately, if we can identify factors that are particularly important in determining whether a given interruption will lead to adverse results, we may be able to use this research to design future reading room protocols that make these types of interruptions less frequent and/or less disruptive,” the authors conclude.

Access the article here.


Hub is the monthly newsletter published for the membership of Strategic Radiology practices. It includes coalition and practice news as well as news and commentary of interest to radiology professionals.