Illustration of a Friends Meeting in the 17th century, from Cassell’s Illustrated History of England, iii (London, 1864); via http://www.onemag.org/fwb_handbook.htm
Bayes’s theorem and its application to clinical diagnosis

Thomas Bayes was an 18th century British vicar [minister] and amateur mathematician. Bayes’s theorem states that the pre-test odds of a hypothesis being true, multiplied by the weight of new evidence (the likelihood ratio), generate the post-test odds of the hypothesis being true (Ref 2). If used for diagnosis of disease, this refers to the odds of having a certain disease versus not having that disease.
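In symbols (a standard odds-form restatement of the theorem, not taken verbatim from the excerpt), writing D for the disease and T for the test result:

```latex
\frac{P(D \mid T)}{P(\bar{D} \mid T)}
  = \frac{P(T \mid D)}{P(T \mid \bar{D})}
    \times \frac{P(D)}{P(\bar{D})}
% post-test odds = likelihood ratio x pre-test odds
```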
The likelihood ratio summarises the operating characteristics of a diagnostic test as the ratio of patients with the disease to those without disease among those with either a positive or negative test result, and is derived directly from the test’s sensitivity and specificity according to the following two formulas:
For a positive test result: likelihood ratio = sensitivity/(1 – specificity)
For a negative test result: likelihood ratio = (1 – sensitivity)/specificity
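As a quick check of these two formulas, here is a minimal Python sketch (my own illustration, not code from the post; the 90%/90% figures anticipate the worked example below):

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (positive LR, negative LR) for a diagnostic test."""
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return lr_positive, lr_negative

# An ECG assumed to be 90% sensitive and 90% specific for heart attack:
lr_pos, lr_neg = likelihood_ratios(0.90, 0.90)
print(f"LR+ = {lr_pos:.2f}")  # LR+ = 9.00 (a positive result multiplies the odds ninefold)
print(f"LR- = {lr_neg:.2f}")  # LR- = 0.11 (a negative result divides the odds by ~9)
```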
The following example shows how Bayes’s theory of conditional probability is relevant to clinical decision making. The figure shows an electrocardiogram with an abnormal pattern of ST segment and T wave changes. Because the test provides an answer, this process must start with a question, such as, “Is this patient having a heart attack?” The bayesian approach does not yield a categorical yes or no answer but a conditional probability reflecting the context in which the test is applied. This context emerges from what is generally known about heart attacks and electrocardiograms and the characteristics of the patient—for example, “Who is this patient?” “Does this patient have symptoms?” and “What was this patient doing at the time the test was done?” To illustrate this, assume this electrocardiogram was obtained from either of the following two hypothetical patients:
Figure 2 (Electrocardiogram of hypothetical patient)
Logically, our opinion of heart attack before seeing the electrocardiogram should have differed greatly between these two patients. Since patient 1 sounds like exactly the kind of person prone to heart attacks, we might estimate his pre-test odds to be high, perhaps 5:1 (prior probability = 83%). If we assume that this electrocardiogram has a 90% sensitivity and 90% specificity for heart attacks (ref 3), the positive likelihood ratio would be 9 (0.9/(1 – 0.9)) and the negative likelihood ratio 0.11 ((1 – 0.9)/0.9). With this electrocardiogram, patient 1’s odds of heart attack increase ninefold from 5:1 to 45:1 (posterior probability = 98%). Note that our suspicion of heart attack was so high that even normal electrocardiographic appearances would be insufficient to erase all concern: multiplying the negative likelihood ratio (0.11) by the pre-test odds of 5:1 gives post-test odds of 0.55:1 (posterior probability ≈ 36%).
By contrast, our suspicion of heart attack for patient 2 was very low based on her context, perhaps 1:1000 (prior probability = 0.1%). This electrocardiogram also increased patient 2’s odds of heart attack ninefold to reach 9:1000 (posterior probability = 0.89%), leaving the diagnosis still very unlikely…
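The whole calculation reduces to a few lines of Python (a sketch of the arithmetic above, not code from the post; the function names are my own, and the priors are the ones assumed in the worked example):

```python
def update_odds(pretest_odds, likelihood_ratio):
    """Bayes's theorem in odds form: post-test odds = pre-test odds * LR."""
    return pretest_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds (e.g. 5.0 for 5:1) to a probability."""
    return odds / (1 + odds)

LR_POSITIVE = 9.0    # abnormal ECG (90% sensitivity, 90% specificity)
LR_NEGATIVE = 1 / 9  # normal ECG (~0.11)

# Patient 1: pre-test odds 5:1 (prior probability 83%)
post = update_odds(5.0, LR_POSITIVE)
print(f"{post:.0f}:1 -> {odds_to_probability(post):.0%}")   # 45:1 -> 98%

post = update_odds(5.0, LR_NEGATIVE)
print(f"{post:.2f}:1 -> {odds_to_probability(post):.0%}")   # 0.56:1 -> 36%

# Patient 2: pre-test odds 1:1000 (prior probability 0.1%)
post = update_odds(1 / 1000, LR_POSITIVE)
print(f"{post * 1000:.0f}:1000 -> {odds_to_probability(post):.2%}")  # 9:1000 -> 0.89%
```

The same abnormal tracing thus shifts both patients’ odds by the same factor of nine, yet leaves them with very different posterior probabilities, which is the point of the two-patient contrast.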
Ref 2: Goodman SN. Toward evidence-based medical statistics. 2: The Bayes factor. Ann Intern Med 1999.
Abstract. Bayesian inference is usually presented as a method for determining how scientific belief should be modified by data. Although Bayesian methodology has been one of the most active areas of statistical development in the past 20 years, medical researchers have been reluctant to embrace what they perceive as a subjective approach to data analysis. It is little understood that Bayesian methods have a data-based core, which can be used as a calculus of evidence. This core is the Bayes factor, which in its simplest form is also called a likelihood ratio. The minimum Bayes factor is objective and can be used in lieu of the P value as a measure of the evidential strength. Unlike P values, Bayes factors have a sound theoretical foundation and an interpretation that allows their use in both inference and decision making. Bayes factors show that P values greatly overstate the evidence against the null hypothesis. Most important, Bayes factors require the addition of background knowledge to be transformed into inferences: probabilities that a given conclusion is right or wrong. They make the distinction clear between experimental evidence and inferential conclusions while providing a framework in which to combine prior with current evidence.
Source of the excerpt: Why clinicians are natural bayesians. BMJ 2005 (via PubMed/NCBI).
See also: earlier post.
Source: http://gmopundit.blogspot.com/2012/11/gmo-statistics-part-29-likelihood-ratio.html