Assessing the usefulness of diagnostic tests.

AMERICAN JOURNAL OF NEURORADIOLOGY (1996)

Abstract
Ideally, to ascertain the usefulness of a given medical treatment, investigators organize broad multicenter trials, such as the North American Symptomatic Carotid Endarterectomy Trial (NASCET) (1), in which they can compare outcomes in a sufficiently large number of patients. One could argue that a comparable method should be used for the evaluation of diagnostic tools. After all, it ultimately matters only whether a new test, like a new therapy, helps or hurts patients. But despite the difficulty and expense of the trials needed to evaluate new therapies, such trials are still more straightforward than those that would be necessary to determine whether a diagnostic modality is useful. The problem is that a diagnostic technology is several steps removed from patient outcome. Interposed between the making of a diagnosis and the outcome for a patient are several factors, including how the clinician uses the diagnostic information and the effectiveness of the therapy. Thus, a perfectly good diagnostic technology, if evaluated solely by patient outcomes, might look worse than it actually is because of problems “downstream.” Additionally, as therapies change, the effectiveness of diagnostic techniques may need to be reassessed.

One way out of this difficulty is to analyze separately and sequentially the various components that lead to patient outcome. In 1977, Fineberg et al (2) outlined a hierarchical scheme that first consisted of four levels of efficacy and was later revised to five (Table 1). Other authors have presented similar schemes for the evaluation of diagnostic technologies (3–5). Each level of efficacy depends on the preceding level (hence the hierarchical arrangement). Thus, in order for a technology to provide useful information for diagnostic decision making (diagnostic impact), it must be an accurate test. Similarly, in order for a test to improve patient outcome, it must have a positive therapeutic impact.

Radiologists are not only the logical group to design and implement the studies that evaluate these various aspects of diagnostic technologies; doing so is also in their own best interest. However, the cost of these studies cannot be borne solely by radiologists. Instead, the health care system as a whole must agree on a mechanism to fund this type of research. The Society of Magnetic Resonance recently published a report that suggested several approaches to funding, including using a cooperative group to seek support from government, industry, payers, and providers (6).

Two main options are open to the investigator evaluating the usefulness of a diagnostic technology. The first is a decision-analysis approach, in which the researcher constructs a model combining known values for the test characteristics (sensitivity and specificity) with estimates of disease prevalence and of the outcomes of treatment. While estimates of test accuracy can be extracted from the literature, there will still be uncertainty. With sensitivity analysis, a crucial aspect of decision analysis, one would substitute the range of accuracy values that could reasonably be expected from each test. If the conclusions of the model are unchanged, then the model can be regarded as insensitive to changes of the variable in question for the range tested. Unfortunately, such models are only as valid as the probability estimates from which they are constructed. Since these estimates are culled from a literature that is frequently biased and incomplete, the usefulness of such models in drawing conclusions is limited.
Perhaps their most important function is to indicate the critical bits of knowledge that are …
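To make the decision-analysis approach described above concrete, the following is a minimal sketch of such a model with a one-way sensitivity analysis. Every input (the prevalence, the specificity, the utility values, and the range of sensitivities swept) is a hypothetical placeholder chosen for illustration, not a figure from the editorial or the cited literature.

```python
# Minimal sketch of a decision-analysis model with one-way sensitivity
# analysis. All inputs (prevalence, test accuracy, outcome utilities)
# are hypothetical placeholders, not values from the editorial.

def expected_utility_with_test(prevalence, sensitivity, specificity,
                               u_treated_diseased, u_untreated_diseased,
                               u_treated_healthy, u_untreated_healthy):
    """Expected utility of a 'test, then treat positives' strategy."""
    tp = prevalence * sensitivity                # diseased, positive -> treated
    fn = prevalence * (1 - sensitivity)          # diseased, negative -> untreated
    fp = (1 - prevalence) * (1 - specificity)    # healthy, positive -> treated
    tn = (1 - prevalence) * specificity          # healthy, negative -> untreated
    return (tp * u_treated_diseased + fn * u_untreated_diseased +
            fp * u_treated_healthy + tn * u_untreated_healthy)

# Hypothetical utilities on a 0-1 scale (1 = best possible outcome).
UTILITIES = dict(u_treated_diseased=0.85, u_untreated_diseased=0.40,
                 u_treated_healthy=0.95, u_untreated_healthy=1.00)

PREVALENCE = 0.20    # assumed pretest probability of disease
SPECIFICITY = 0.90   # held fixed for this one-way analysis

# Competing strategies that do not use the test at all.
treat_all = (PREVALENCE * UTILITIES['u_treated_diseased'] +
             (1 - PREVALENCE) * UTILITIES['u_treated_healthy'])
treat_none = (PREVALENCE * UTILITIES['u_untreated_diseased'] +
              (1 - PREVALENCE) * UTILITIES['u_untreated_healthy'])

# One-way sensitivity analysis: sweep the test's sensitivity across the
# range of values that could reasonably be expected, and check whether
# the preferred strategy changes anywhere in that range.
for sens in [0.70, 0.75, 0.80, 0.85, 0.90, 0.95]:
    eu_test = expected_utility_with_test(PREVALENCE, sens, SPECIFICITY,
                                         **UTILITIES)
    best = max([('test', eu_test), ('treat all', treat_all),
                ('treat none', treat_none)], key=lambda s: s[1])
    print(f"sensitivity={sens:.2f}  EU(test)={eu_test:.4f}  best={best[0]}")
```

With these toy numbers, the "test, then treat positives" strategy remains preferred across the entire sensitivity range swept; that stability is precisely what the text means by a model being insensitive to changes of the variable in question for the range tested.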
Keywords
commentaries; brain, magnetic resonance; efficacy studies; magnetic resonance, in treatment planning