Regressors are not normally orthogonalised in SPM, because there is rarely any need (see Lecture 2 of the SPM course slides - SpmMiniCourse2008 - about correlated regressors and orthogonalisation). However, one situation in which serial orthogonalisation is applied is for multiple parametric modulations of a single trial-type. The main rationale for this is that multiple modulators normally arise in the context of polynomial expansions (where a modulator is expanded into linear, quadratic, cubic etc terms), and here one normally wants to orthogonalise the Nth-order term with respect to the 1st to (N-1)th order terms (so assigning any shared variance to the lower-order terms - see course notes). However, another use for multiple parametric modulations is when one wants to covary out a parametric factor (eg RTs) across multiple conditions. This can be done by specifying a single trial-type (ie all onsets) with multiple modulators: one for the covariate of no interest, and others (binary variables) that code the different conditions. [If one instead specifies multiple event-types, one per condition, and modulates each of these by the RTs, this does not covary out the RT across all conditions, because each modulator is mean-corrected, and so any difference in mean RTs across conditions (or a common RT-regression slope) is not covaried out.]
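To see why per-condition RT modulators lose the between-condition RT difference, here is a toy numpy sketch (not SPM code; the RT values are made up for illustration):

```python
import numpy as np

# Hypothetical RTs (ms) for two conditions with different mean RTs
rt_A = np.array([400., 420., 440.])   # condition A trials
rt_B = np.array([600., 620., 640.])   # condition B trials

# Separate event-types: each modulator is mean-centred within its own
# condition, so the 200 ms between-condition difference vanishes entirely
mod_A = rt_A - rt_A.mean()
mod_B = rt_B - rt_B.mean()
print(mod_A)   # [-20.   0.  20.]
print(mod_B)   # [-20.   0.  20.] -- identical to mod_A

# Single trial-type: one RT modulator over all onsets keeps the mean difference
all_rt = np.concatenate([rt_A, rt_B])
rt_mod = all_rt - all_rt.mean()
print(rt_mod)  # [-120. -100.  -80.   80.  100.  120.]
```

After within-condition centring the two modulators are identical, so the mean RT difference between conditions stays in the residuals; the single-trial-type modulator retains it and can covary it out.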
However, when specifying each condition as a separate parametric modulator in a rapid event-related design, the resulting regressors become highly correlated (in the extreme case, they become linearly-dependent). This means that the automatic serial orthogonalisation causes later modulators to become closer and closer to zero (being successively orthogonalised with respect to the previous conditions). In some cases, this can produce a column in the design matrix that is effectively zero, and hence the model becomes inestimable (and an error results).
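The effect of serial orthogonalisation on linearly-dependent binary modulators can be illustrated with a small numpy sketch (the serial_orth function below is a hypothetical stand-in for SPM's hard-coded scheme, not SPM code itself):

```python
import numpy as np

def serial_orth(X):
    """Successively orthogonalise each column with respect to all earlier
    columns (a Gram-Schmidt-style scheme, mimicking what SPM applies to
    multiple parametric modulators)."""
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        prev = X[:, :j]
        # subtract the projection of column j onto the span of earlier columns
        X[:, j] -= prev @ np.linalg.pinv(prev) @ X[:, j]
    return X

# Three conditions coded as binary modulators on a single trial-type (6 trials)
C = np.array([[1, 0, 0], [1, 0, 0],
              [0, 1, 0], [0, 1, 0],
              [0, 0, 1], [0, 0, 1]], float)
C -= C.mean(axis=0)            # each modulator is mean-centred (as in SPM)
# The centred columns now sum to zero across conditions: linearly dependent
Xo = serial_orth(C)
print(np.abs(Xo[:, 2]).max())  # effectively zero: the last modulator vanishes
```

Because the mean-centred condition columns sum to the zero vector, the last column lies entirely in the span of the earlier ones, and serial orthogonalisation reduces it to numerical noise, hence the inestimable model.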
The solution in such cases is to turn off the (hard-coded) serial orthogonalisation of parametric modulators. This involves commenting out the following lines in (your local copies of) two separate SPM functions: line 229 in spm_get_ons.m, and lines 285-287 in spm_fMRI_design.m.
Note that, if your resulting modulators are linearly-dependent, this will mean that you cannot estimate certain contrasts (namely those that don't sum to zero) - but this doesn't matter if you are always interested in *differences* between conditions, rather than the unique effect of each.
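A quick way to see which contrasts survive linear dependence is to check estimability numerically: a contrast c is estimable iff it lies in the row space of the (rank-deficient) design matrix. A hedged numpy sketch, using made-up mean-centred binary modulators for three conditions:

```python
import numpy as np

# Rank-deficient toy design: mean-centred binary modulators for three
# conditions (two trials each); the three columns sum to the zero vector
X = np.array([[ 2, -1, -1], [ 2, -1, -1],
              [-1,  2, -1], [-1,  2, -1],
              [-1, -1,  2], [-1, -1,  2]], float) / 3.0

def estimable(c, X, tol=1e-10):
    # c is estimable iff projecting it onto the row space of X leaves it
    # unchanged (pinv(X) @ X is the projector onto that row space)
    return np.allclose(c, c @ np.linalg.pinv(X) @ X, atol=tol)

print(estimable(np.array([1., -1., 0.]), X))  # True: a difference contrast
print(estimable(np.array([1.,  0., 0.]), X))  # False: unique effect of cond 1
```

Here contrasts summing to zero (differences between conditions) are estimable, while a contrast picking out one condition's unique effect is not - matching the point above.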
[The reason we have not commented out these lines in the CBU versions of SPM is that in other cases (such as polynomial expansions), users may want to keep this serial orthogonalisation, particularly for multiple temporal basis functions and Volterra kernels, which are also affected by lines 285-287 in spm_fMRI_design.m. We could add it as a hidden option if we receive enough requests - or you could implement multiple-modulator designs like the above with the AA functions written by Rhodri Cusack.]
Finally, note also that this problem of linear dependence doesn't normally arise with more typical multiple modulations (eg by some set of continuous variables like RT, word-frequency, visual contrast, etc), which are generally unlikely to be linearly dependent. If these are simply confounds, it is fine to keep SPM's default serial orthogonalisation - or if you want to implement a step-down regression, then enter them in order of importance and keep the serial orthogonalisation. Only if you are interested in the unique effect of each modulator should you turn off SPM's orthogonalisation as described above.
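The step-down idea can be sketched in numpy: orthogonalising the second regressor with respect to the first credits their shared variance to the one entered first (all names and numbers below are illustrative, not SPM code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.standard_normal(n)
x2 = 0.7 * x1 + 0.3 * rng.standard_normal(n)    # correlated with x1
y  = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.standard_normal(n)

# Step-down: orthogonalise x2 with respect to x1, so the variance they
# share is assigned to x1 (the regressor entered first, i.e. "more important")
x2o = x2 - x1 * (x1 @ x2) / (x1 @ x1)
X = np.column_stack([x1, x2o])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(b)  # b[0] absorbs the shared variance (~1.7); b[1] stays ~1.0
```

With the original correlated pair, each regressor's coefficient reflects only its unique contribution; after orthogonalisation, the earlier regressor's coefficient also soaks up the shared part - which is why entry order matters for a step-down scheme.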
Hope this makes some sense!