Signal Detection Theory

At its simplest, SignalDetectionTheory or SDT is a model of a decision maker choosing between two hypotheses, H1 and H0, on the basis of a measurement x.

Under H1, x comes from the Signal distribution f1; under H0, x comes from the Noise distribution f0.

It is the job of the Observer to decide whether it was 'Signal' or 'Noise' that produced x.
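As a minimal sketch of this setup, the snippet below draws a measurement x from one of two assumed equal-variance Gaussian distributions (Noise f0 = N(0, 1), Signal f1 = N(1, 1)); these particular parameters are illustrative choices, not part of the description above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed example distributions (not specified in the text): equal-variance
    # Gaussians, Noise f0 = N(0, 1) and Signal f1 = N(1, 1).
    MU_NOISE, MU_SIGNAL, SIGMA = 0.0, 1.0, 1.0

    def generate_measurement(signal_present):
        """Draw one measurement x from f1 if the signal is present, else from f0."""
        mu = MU_SIGNAL if signal_present else MU_NOISE
        return rng.normal(mu, SIGMA)

    # The Observer sees only x and must decide which distribution produced it.
    x = generate_measurement(signal_present=True)
    print(f"observed x = {x:.3f}")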

The assumption that larger values of x are more typical under f1 than under f0 leads to a simple decision rule based on a criterion c: choose H1 when x > c, otherwise choose H0.
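The rule itself is just a threshold comparison; a small sketch, with the criterion value c = 0.5 chosen arbitrarily for illustration:

    def decide(x, c):
        """Criterion rule from the text: choose H1 ('Signal') when x > c, else H0 ('Noise')."""
        return "Signal" if x > c else "Noise"

    # c = 0.5 is an arbitrary illustrative criterion.
    for x in (-0.3, 0.4, 1.7):
        print(f"x = {x:+.1f} -> {decide(x, c=0.5)}")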

The performance of this criterion is given by the Hit Rate (P(x>c|f1)) and the False Alarm Rate (P(x>c|f0)). These two quantities are also known in Neyman-Pearson-land as Power & Size, or as Sensitivity and 1-Specificity (the complement of Specificity).
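For the assumed Gaussian example used above, both rates can be read directly off the survival functions of f1 and f0; a sketch using scipy (the distribution parameters are again the illustrative values, not anything fixed by SDT itself):

    from scipy.stats import norm

    # Same assumed Gaussians as above: Noise f0 = N(0, 1), Signal f1 = N(1, 1).
    MU_NOISE, MU_SIGNAL, SIGMA = 0.0, 1.0, 1.0

    def hit_rate(c):
        """P(x > c | f1): choosing 'Signal' when the signal really is present."""
        return norm.sf(c, loc=MU_SIGNAL, scale=SIGMA)

    def false_alarm_rate(c):
        """P(x > c | f0): choosing 'Signal' when only noise is present."""
        return norm.sf(c, loc=MU_NOISE, scale=SIGMA)

    c = 0.5
    print(f"Hit Rate         = {hit_rate(c):.3f}")          # Sensitivity / Power
    print(f"False Alarm Rate = {false_alarm_rate(c):.3f}")  # 1-Specificity / Size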

When Sensitivity (the Hit Rate) is plotted against 1-Specificity (the False Alarm Rate) as the criterion c is varied, the resulting curve is known as the ROC, or Receiver Operating Characteristic.
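Tracing out the ROC is then just a sweep of the criterion c, recording the (False Alarm Rate, Hit Rate) pair at each value; again a sketch under the same assumed Gaussian distributions:

    import numpy as np
    from scipy.stats import norm

    MU_NOISE, MU_SIGNAL, SIGMA = 0.0, 1.0, 1.0  # assumed example distributions

    # Each criterion value c gives one point on the ROC:
    # x-coordinate = False Alarm Rate, y-coordinate = Hit Rate.
    criteria = np.linspace(-4.0, 5.0, 200)
    false_alarms = norm.sf(criteria, loc=MU_NOISE, scale=SIGMA)
    hits = norm.sf(criteria, loc=MU_SIGNAL, scale=SIGMA)

    for c in (-1.0, 0.0, 0.5, 1.0, 2.0):
        fa = norm.sf(c, loc=MU_NOISE, scale=SIGMA)
        hr = norm.sf(c, loc=MU_SIGNAL, scale=SIGMA)
        print(f"c = {c:+.1f}: False Alarm Rate = {fa:.3f}, Hit Rate = {hr:.3f}")

    # To see the curve itself (requires matplotlib):
    # import matplotlib.pyplot as plt
    # plt.plot(false_alarms, hits)
    # plt.xlabel("False Alarm Rate (1-Specificity)")
    # plt.ylabel("Hit Rate (Sensitivity)")
    # plt.show()

As c sweeps from very large to very small values, the curve runs from (0, 0), where nothing is ever called 'Signal', to (1, 1), where everything is.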