Using the average correlation to evaluate Cronbach's alpha and alternatives
Cronbach's alpha is used as a means of testing the reliability of a set of items and is related to the (arithmetic) average of the absolute values of the off-diagonal Pearson correlations. For n variables in an n by n correlation matrix there will be n(n-1)/2 distinct off-diagonal correlations, i.e. taking either the upper or the lower triangle of correlations.
As an illustration Kenny (1979, p.125) gives an example based upon 4 variables relating to judgments of persons in a mock trial. The six distinct correlations are shown in the lower triangle of the table below.

                 Verdict   Sentence   Responsibility   Innocence
Verdict           1.000
Sentence          0.412     1.000
Responsibility    0.629     0.403      1.000
Innocence         0.585     0.270      0.500            1.000
Cronbach's alpha (Kenny, 1979, pp.132-133) is equal to
[n bar(r)] / [1 + (n-1) bar(r)], where bar(r) is the arithmetic mean of the absolute values of the off-diagonal correlations and n is the number of variables.
In the above example, n = 4 and there are (4 x 3/2 =) 6 distinct off-diagonal Pearson correlations, so the average bar(r) = (0.412 + 0.629 + 0.403 + 0.585 + 0.270 + 0.500)/6 = 0.4665 and Cronbach's alpha = [4(0.4665)] / [1 + 3(0.4665)] = 0.778.
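The calculation above can be sketched in a few lines of Python (using only the six correlations from Kenny's table; no statistical library is needed):

```python
# Cronbach's alpha from the mean absolute off-diagonal correlation,
# reproducing Kenny's mock-trial example.

# The six distinct off-diagonal correlations from the table
r = [0.412, 0.629, 0.403, 0.585, 0.270, 0.500]
n = 4  # number of variables

# Arithmetic mean of the absolute correlations
r_bar = sum(abs(x) for x in r) / len(r)

# alpha = n r_bar / [1 + (n-1) r_bar]
alpha = (n * r_bar) / (1 + (n - 1) * r_bar)

print(round(r_bar, 4))  # 0.4665
print(round(alpha, 3))  # 0.778
```

The same formula applies for any n, provided all n(n-1)/2 distinct correlations are averaged.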
Jeremy Miles also mentions a composite reliability measure based upon factor loadings (on the same factor) which, unlike Cronbach's alpha, does not assume the correlations are near equal.
If you have three items with three loadings (L1, L2, L3) and three error variances (E1, E2, E3) obtained from an exploratory factor analysis then:
Composite reliability = (L1+L2+L3)**2 / [(L1+L2+L3)**2 + (E1+E2+E3)]
Just keep adding terms to the sums for more items. Jeremy adds that unfortunately it is hard to find a reference which describes this clearly. Loehlin (1987), however, does explain the relationship between factor loadings and variance which underpins this method.
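A minimal sketch of the composite reliability formula above (the loadings here are made-up illustrative values, not from any dataset in the text; with standardized loadings each item's error variance is 1 minus its squared loading):

```python
# Composite reliability = (sum of loadings)^2 /
#                         [(sum of loadings)^2 + sum of error variances]

def composite_reliability(loadings, errors):
    """Composite reliability from factor loadings and error variances."""
    s = sum(loadings)
    return s * s / (s * s + sum(errors))

# Illustrative standardized loadings L1, L2, L3 (assumed values)
L = [0.7, 0.8, 0.6]
# For standardized items, error variance = 1 - loading^2
E = [1 - l * l for l in L]

print(round(composite_reliability(L, E), 3))
```

More items are handled by simply passing longer lists, exactly as the text says.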
Raykov (1998) has demonstrated that Cronbach's alpha may over- or underestimate scale reliability, with underestimation being the more common. He suggests obtaining item correlations using scores from a one-factor model to test how well the items are represented by a single factor. Because of the danger of underestimation with Cronbach's alpha, Raykov's rho is now preferred and may lead to higher estimates of true reliability. Rho is not available in most standard packages, but Raykov (1997) gives EQS and LISREL code for computing this measure. EQS is stand-alone software and, like LISREL, is available at CBSU. EQS example code is given here.
Values of Cronbach's alpha below 0.70 are deemed unacceptable and above 0.80 good (Cicchetti, 1994). Clifton (2019) suggests alphas above 0.70 are 'respectable' and above 0.80 'very good', with those greater than 0.90 suggesting one should consider shortening the scale.
* For other interpretations and confidence intervals for Cronbach's alpha see here.
Kuijpers, Andries van der Ark and Croon (2013) propose variants of classical tests with fewer assumptions, both for comparing Cronbach's alpha to a particular value and for comparing two Cronbach's alphas from independent or dependent samples.
Rodriguez, Reise and Haviland (2016) propose an alternative to Cronbach's alpha called omega (giving R code in their appendix), which assumes a general factor together with clusters of items loading on group factors when computing reliability across items.
An alternative to alpha, omega, is also recommended, along with a few lines of R code to compute it, in an online article reproduced here and in Peters, GJ Y (2014).
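For readers without R to hand, the omega-total and omega-hierarchical coefficients from a bifactor model of the kind Rodriguez, Reise and Haviland (2016) describe can be sketched as follows (the general-factor and group-factor loadings here are assumed illustrative numbers, not values from any dataset or from their paper):

```python
# Omega-total and omega-hierarchical from bifactor loadings (sketch).
# gen:    general-factor loadings, one per item
# groups: group-factor loadings, one list per group factor
# errors: item error (unique) variances

def omega_total(gen, groups, errors):
    """Reliability of the total score from all common factors."""
    common = sum(gen) ** 2 + sum(sum(g) ** 2 for g in groups)
    return common / (common + sum(errors))

def omega_hierarchical(gen, groups, errors):
    """Proportion of total-score variance due to the general factor only."""
    general = sum(gen) ** 2
    total = general + sum(sum(g) ** 2 for g in groups) + sum(errors)
    return general / total

# Six standardized items, two group factors of three items each
# (illustrative loadings; error variance = 1 - sum of squared loadings)
gen = [0.6] * 6
groups = [[0.4, 0.4, 0.4], [0.3, 0.3, 0.3]]
errors = [1 - 0.6**2 - 0.4**2] * 3 + [1 - 0.6**2 - 0.3**2] * 3

print(round(omega_total(gen, groups, errors), 3))
print(round(omega_hierarchical(gen, groups, errors), 3))
```

Omega-hierarchical is always at most omega-total, since it credits only the general factor with reliable variance.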
References
Cicchetti DV (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment 6 284-290.
Clifton JDW (2019). Managing validity versus reliability trade-offs in scale-building decisions. Psychological Methods 24(5) 1-12. Looks at ways of trading off between reliability and validity, with Cronbach's alpha and different types of validity described.
Kenny DA (1979). Correlation and Causality. New York: Wiley.
Kuijpers RE, Andries van der Ark L and Croon MA (2013). Testing hypotheses involving Cronbach's alpha using marginal models. British Journal of Mathematical and Statistical Psychology 66(3) 503-520.
Loehlin JC (1987). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Erlbaum.
Peters GJ Y (2014). The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach's alpha and the route towards more comprehensive assessment of scale quality. The European Health Psychologist 16(2) 56-69.
Raykov T (1997). Estimation of composite reliability for congeneric measures. Applied Psychological Measurement 21 173-184.
Raykov T (1998). Coefficient alpha and composite reliability with interrelated nonhomogeneous items. Applied Psychological Measurement 22(4) 375-385.
Rodriguez A, Reise SP and Haviland MG (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. Psychological Methods 21(2) 137-150.