The risk difference (RD), excess risk, or attributable risk[1] is the difference between the risk of an outcome in the exposed group and the unexposed group. It is computed as $RD = I_e - I_u$, where $I_e$ is the incidence in the exposed group, and $I_u$ is the incidence in the unexposed group. If the risk of an outcome is increased by the exposure, the term absolute risk increase (ARI) is used, and computed as $I_e - I_u$. Equivalently, if the risk of an outcome is decreased by the exposure, the term absolute risk reduction (ARR) is used, and computed as $I_u - I_e$.[2][3]
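As a minimal sketch, the definitions above translate directly into Python (the function name is illustrative, not from any standard library):

    def risk_difference(incidence_exposed, incidence_unexposed):
        """Risk difference RD = I_e - I_u.

        A positive value is an absolute risk increase (ARI);
        for a decreased risk, the absolute risk reduction is the
        negation: ARR = I_u - I_e.
        """
        return incidence_exposed - incidence_unexposed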
It is recommended to report absolute measures, such as the risk difference, alongside relative measures when presenting the results of randomized controlled trials.[4] Their utility can be illustrated by the following example of a hypothetical drug which reduces the risk of colon cancer from 1 case in 5000 to 1 case in 10,000 over one year. The relative risk reduction is 0.5 (50%), while the absolute risk reduction is 0.0001 (0.01%). The absolute risk reduction reflects the low probability of getting colon cancer in the first place, whereas reporting only the relative risk reduction risks readers overestimating the effectiveness of the drug.[5]
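A back-of-the-envelope check of the arithmetic in this example, under the stated assumption of risks of 1/5000 untreated and 1/10,000 treated:

    # Hypothetical drug: risk falls from 1 in 5000 to 1 in 10,000 over one year.
    risk_untreated = 1 / 5000    # 0.0002
    risk_treated = 1 / 10000     # 0.0001

    arr = risk_untreated - risk_treated   # absolute risk reduction
    rrr = arr / risk_untreated            # relative risk reduction

    print(f"ARR = {arr:.4f} ({arr:.2%})")  # ARR = 0.0001 (0.01%)
    print(f"RRR = {rrr:.1f} ({rrr:.0%})")  # RRR = 0.5 (50%)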
Authors such as Ben Goldacre believe that the risk difference is best presented as natural numbers: the drug reduces colon cancer from 2 cases to 1 case per 10,000 people treated. Natural numbers, which are used in the number needed to treat approach, are easily understood by non-experts.[6]
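The number needed to treat (NNT) is the reciprocal of the absolute risk reduction; continuing the sketch above with the same assumed risks:

    # NNT = 1 / ARR: how many people must be treated to prevent one case.
    arr = (1 / 5000) - (1 / 10000)  # 0.0001, from the example above
    nnt = 1 / arr
    print(f"NNT = {nnt:.0f}")       # NNT = 10000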
Ben Goldacre (2008). Bad Science. New York: Fourth Estate. pp. 239–260. ISBN 978-0-00-724019-7.