Rules of thumb on magnitudes of effect sizes
The scales of magnitude are taken from Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates (see also here). The scales of magnitude for partial $$\omega^{2}$$ are taken from Table 2.2 of Murphy and Myors (2004). Bakker et al. (2019) note that contextual effect sizes should be used wherever possible rather than 'canned' effects like Cohen's.
There is also a table of effect size magnitudes at the back of Kotrlik JW and Williams HA (2003) here. An overview of commonly used effect sizes in psychology is given by Vacha-Haase and Thompson (2004). Whitehead, Julious, Cooper and Campbell (2015) also suggest Cohen's rules of thumb for Cohen's d when comparing two independent groups, with the additional suggestion that d < 0.1 corresponds to a very small effect.
Kraemer and Thiemann (1987, pp. 54-55) use the same effect size values (which they call delta) for both intraclass correlations and Pearson correlations. This implies that the rules of thumb below from Cohen (1988) for magnitudes of Pearson correlations could also be used for intraclass correlations. It should be noted, however, that the intraclass correlation is computed from a repeated measures ANOVA, whose usual effect size (given below) is partial eta-squared. In addition, Shrout and Fleiss (1979) discuss different types of intraclass correlation coefficient and how their magnitudes can differ.
The general rules of thumb given by Cohen and by Miles & Shevlin (2001) are for eta-squared, which uses the total sum of squares in the denominator, but these arguably apply more to partial eta-squared than to eta-squared. This is because partial eta-squared in a factorial ANOVA more closely approximates what eta-squared would have been for the factor had it been a one-way ANOVA, and it is presumably a one-way ANOVA which gave rise to Cohen's rules of thumb.
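The distinction can be made concrete with a small calculation. A hedged sketch using hypothetical sums of squares from a two-way ANOVA (the numbers and variable names are illustrative, not from any of the cited sources):

```python
# Eta-squared uses the total SS in the denominator, while partial
# eta-squared removes the other factors' SS from the denominator,
# mimicking what a one-way design for that factor would have given.

def eta_squared(ss_effect, ss_total):
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

# Hypothetical two-way decomposition: SS_A + SS_B + SS_AxB + SS_error
ss_a, ss_b, ss_ab, ss_error = 20.0, 50.0, 10.0, 120.0
ss_total = ss_a + ss_b + ss_ab + ss_error  # 200.0

print(eta_squared(ss_a, ss_total))          # 0.10
print(partial_eta_squared(ss_a, ss_error))  # 20/140, about 0.143
```

Partial eta-squared is never smaller than eta-squared for the same factor, since its denominator drops the other factors' sums of squares.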
| Effect Size | Use | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| Correlation (inc. Phi) |  | 0.1 | 0.3 | 0.5 |
| Cramér's V | r x c frequency tables | 0.1 (Min(r-1,c-1)=1), 0.07 (Min(r-1,c-1)=2), 0.06 (Min(r-1,c-1)=3) | 0.3 (Min(r-1,c-1)=1), 0.21 (Min(r-1,c-1)=2), 0.17 (Min(r-1,c-1)=3) | 0.5 (Min(r-1,c-1)=1), 0.35 (Min(r-1,c-1)=2), 0.29 (Min(r-1,c-1)=3) |
| Cohen's h | Comparing two proportions | 0.2 | 0.5 | 0.8 |
| $$\eta^{2}$$ | ANOVA | 0.01 | 0.06 | 0.14 |
| Partial $$\eta^{2}$$ | ANOVA; see Field (2013) | 0.01 | 0.06 | 0.14 |
| Multivariate $$\eta^{2}$$ | One-way MANOVA | 0.01 | 0.06 | 0.14 |
| Cohen's f | One-way AN(C)OVA (regression) | 0.10 | 0.25 | 0.40 |
| $$\eta^{2}$$ | Multiple regression | 0.02 | 0.13 | 0.26 |
| $$\kappa^{2}$$ | Mediation analysis | 0.01 | 0.09 | 0.25 |
| Cohen's f | Multiple regression | 0.14 | 0.39 | 0.59 |
| Cohen's d | t-tests | 0.2 | 0.5 | 0.8 |
| Cohen's $$\omega$$ | Chi-square | 0.1 | 0.3 | 0.5 |
| Odds ratio | 2 by 2 tables | 1.5 | 3.5 | 9.0 |
| Odds ratio |  | 0.55 | 0.65 | 0.75 |
|  | Friedman test | 0.1 | 0.3 | 0.5 |
Also: Haddock et al. (1998) state that $$\sqrt{3}/\pi$$ multiplied by the log of the odds ratio is a standardised difference equivalent to Cohen's d.
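This logistic-to-normal rescaling, d = ln(OR) × √3/π, can be sketched as follows; the odds ratios fed in are the 'small/medium/large' values from the table above:

```python
import math

def d_from_odds_ratio(odds_ratio):
    """Convert an odds ratio to a Cohen's d equivalent via
    d = ln(OR) * sqrt(3) / pi (Haddock et al.'s rescaling)."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

for odds in (1.5, 3.5, 9.0):
    print(round(d_from_odds_ratio(odds), 2))  # prints 0.22, 0.69, 1.21
```

Note that the tabled odds ratios of 1.5, 3.5 and 9.0 land somewhat above the d = 0.2/0.5/0.8 benchmarks on this scale.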
Comparing partial eta-squared and generalized omega-squared in ANOVAs
Further details on the derivation of the odds ratio effect sizes
A quick guide to choice of sample sizes for Cohen's effect sizes
A non-parametric analogue of Cohen's d and its applicability to three or more groups
Simulations with R code for a Bayesian power analysis, with details here if the link is broken. A t-test Bayesian power simulation is here, reproduced here if the link is broken. Jeon, M and De Boeck, P (2017) compare translational approaches, finding that a p-value of 0.01 is roughly equivalent to a Bayes factor of 3.
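As a rough companion to the linked simulations, here is a minimal frequentist (not Bayesian) Monte Carlo power sketch in Python, assuming numpy and scipy are available; n = 64 per group is the textbook sample size for about 80% power to detect d = 0.5 at two-sided alpha = 0.05:

```python
# Monte Carlo power estimate for a two-sample t-test (illustrative sketch,
# not the linked R code): true effect d = 0.5, n = 64 per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, alpha, n_sims = 0.5, 64, 0.05, 2000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)  # true mean difference of d SDs
    if stats.ttest_ind(treatment, control).pvalue < alpha:
        rejections += 1

power = rejections / n_sims
print(power)  # close to the theoretical value of roughly 0.80
```

Lowering n or d reduces the estimated power accordingly, and the same simulation skeleton extends to the Bayesian decision rules discussed in the linked pages.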
Definitions
For two-sample t-tests, Cohen's d = (difference between the pair of group means) / (averaged group standard deviation) = $$t\sqrt{(1/n_{1}) + (1/n_{2})}$$ (Pustejovsky (2014), p.95 and Borenstein (2009), Table 12.1).
For a one-sample t-test, Cohen's d = (difference between the mean and its expected value) / (standard deviation) = $$t/\sqrt{n}$$ for n subjects. Cohen's d also equals $$t/\sqrt{n}$$ in a paired t-test (Rosenthal, 1991), since $$t/\sqrt{n}$$ = (difference between the two means) / (standard deviation of the differences) and the t-test on the difference scores is regarded as a special case of a one-sample t-test. Dunlap, Cortina, Vaslow and Burke (1996) suggest in their equation (3) an alternative transformation of the paired t statistic, $$d = t\sqrt{2(1-r)/n}$$, for n pairs and a correlation, r, between the paired responses. They argue their estimator of d is preferred over Rosenthal's since it adjusts Cohen's d for the correlation resulting from the paired design. They do conclude, however, that for sample sizes of less than 50 the differences between the two effect size estimates for Cohen's d are 'quite small and trivial'.
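The transformations above can be collected into a few one-line helpers (a sketch; the numerical inputs below are hypothetical):

```python
import math

def d_two_sample(t, n1, n2):
    """d = t * sqrt(1/n1 + 1/n2) for two independent groups
    (Pustejovsky, 2014; Borenstein, 2009)."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def d_paired_rosenthal(t, n):
    """d = t / sqrt(n) for one-sample or paired t-tests (Rosenthal, 1991)."""
    return t / math.sqrt(n)

def d_paired_dunlap(t, n, r):
    """d = t * sqrt(2 * (1 - r) / n), adjusting for the correlation r
    between paired responses (Dunlap et al., 1996, equation 3)."""
    return t * math.sqrt(2.0 * (1.0 - r) / n)

# Hypothetical paired design: t = 2.5 from n = 25 pairs with r = 0.5.
# When r = 0.5 the Dunlap adjustment coincides with Rosenthal's t/sqrt(n),
# because sqrt(2 * (1 - 0.5)) = 1.
print(d_paired_rosenthal(2.5, 25))    # 0.5
print(d_paired_dunlap(2.5, 25, 0.5))  # 0.5
```

For r above 0.5 Dunlap's d is smaller than Rosenthal's, and larger for r below 0.5, which is the adjustment the authors argue for.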
Hedges and Olkin (2016) give easy-to-compute formulae to rescale Cohen's d to yield the proportion (and its variance) of observations in the treatment group which are higher than the control group mean. They do not, however, assess its robustness to distributional assumptions.
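Under normality with equal spread, one simple version of this rescaling is Cohen's U3, $$\Phi(d)$$, the proportion of the treatment group scoring above the control-group mean (this sketch assumes that normal-theory form rather than reproducing Hedges and Olkin's exact formulae):

```python
import math

def u3(d):
    """Cohen's U3: under normality with equal SDs, the proportion of the
    treatment group above the control-group mean is Phi(d), the standard
    normal CDF evaluated at d."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

for d in (0.2, 0.5, 0.8):
    print(round(u3(d), 2))  # prints 0.58, 0.69, 0.79
```

So even a 'large' d of 0.8 leaves roughly a fifth of treated observations below the control mean, which is one reason overlap measures are a useful complement to d.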
Other effect sizes using t-ratios
$$\eta^{2} $$ = SS(effect) / [ Sum of SS(effects having the same error term as effect of interest) + SS(the error associated with these effects) ]
Cohen's f = $$\sqrt{\eta^{2} / (1-\eta^{2})}$$
From here one can work out $$\eta^{2}$$ from an F ratio in a one-way ANOVA with k groups and N subjects in total, since
$$\eta^{2}/(1-\eta^{2}) = \frac{(k-1)}{(N-k)} F$$, so that $$\eta^{2} = \frac{(k-1)F}{(k-1)F + (N-k)}$$
There is also a $$\mbox{Partial } \eta^{2} $$ = SS(effect) / [ SS(effect) + SS(error for that effect) ]
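Putting the last few formulae together, a small sketch for recovering eta-squared and Cohen's f from a published one-way ANOVA F ratio (the F, k and N values below are hypothetical):

```python
def eta_squared_from_f(F, k, N):
    """One-way ANOVA with k groups and N subjects in total:
    eta^2 / (1 - eta^2) = (k - 1) * F / (N - k), so
    eta^2 = (k - 1) * F / ((k - 1) * F + (N - k))."""
    num = (k - 1) * F
    return num / (num + (N - k))

def cohens_f(eta_sq):
    """Cohen's f = sqrt(eta^2 / (1 - eta^2))."""
    return (eta_sq / (1.0 - eta_sq)) ** 0.5

# Hypothetical result: F(2, 57) = 4.0, i.e. k = 3 groups, N = 60 subjects.
eta_sq = eta_squared_from_f(4.0, 3, 60)
print(round(eta_sq, 3))           # 8/65, about 0.123 -- a medium effect
print(round(cohens_f(eta_sq), 3))
```

By Cohen's thresholds in the table above, this hypothetical eta-squared of about 0.12 sits between the medium (0.06) and large (0.14) benchmarks.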
Multivariate $$\eta^{2} = 1 - \Lambda^{1/s}$$ where $$\Lambda$$ is Wilks' lambda and s is equal to the number of levels of the factor minus 1, or the number of dependent variables, whichever is the smaller (see Green et al (1997)). It may be interpreted as a partial eta-squared.
$$\kappa^{2}$$ = ab / (maximum value of ab), where a and b are the regression coefficients representing the independent variable to mediator effect and the mediator to outcome effect respectively, used to estimate the indirect effect of the IV on the outcome. See Preacher and Kelley (2011) for further details, including the MBESS software for fitting this in R. There is also an online calculator for working out $$\kappa^{2}$$ here. For further details on mediation analysis see also here. Field (2013) also refers to this measure. Wen and Fan (2015) suggest limitations in using $$\kappa^{2}$$ and instead suggest using ab/c, where c is the sum of the indirect effect (ab) and the direct effect (c'), using the notation in Preacher and Kelley's paper.
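Wen and Fan's suggested ratio is straightforward to compute; a sketch with hypothetical path coefficients (a, b and c' are illustrative values, not from the cited papers):

```python
def indirect_over_total(a, b, c_prime):
    """Wen and Fan's (2015) suggested mediation effect size ab / c,
    where c = ab + c' is the total effect (a: IV -> mediator path,
    b: mediator -> outcome path, c': direct effect)."""
    ab = a * b
    return ab / (ab + c_prime)

# Hypothetical paths: a = 0.4, b = 0.5, direct effect c' = 0.3,
# so the indirect effect is 0.2 out of a total effect of 0.5.
print(indirect_over_total(0.4, 0.5, 0.3))  # 0.4
```

The ratio reads as the proportion of the total effect carried through the mediator, which is what makes it an attractive alternative to $$\kappa^{2}$$.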
Suggestion: use the square of a Pearson correlation for effect sizes for partial $$\eta^{2}$$ (R-squared in a multiple regression), giving 0.01 (small), 0.09 (medium) and 0.25 (large), which are intuitively larger values than those for eta-squared. Further to this, Cohen, Cohen, West and Aiken (2003), on page 95 of Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (third edition), suggest looking at semipartial effects of single predictors in a regression rather than an overall model R-squared, i.e. looking at sqrt(change in R-squared) from models with and without the regressor, and using the Pearson correlation rules of thumb for effect sizes.
Cohen's $$\omega^{2}$$ = sum over all the groups of $$(\mbox{observed proportion} - \mbox{expected proportion})^{2} / \mbox{expected proportion}$$
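Cohen's $$\omega$$ itself (as tabled above for chi-square) is the square root of this sum; a sketch for a hypothetical three-category goodness-of-fit comparison:

```python
import math

def cohens_w(observed_props, expected_props):
    """Cohen's omega (w) = sqrt(sum over cells of
    (observed proportion - expected proportion)^2 / expected proportion)."""
    return math.sqrt(sum((o - e) ** 2 / e
                         for o, e in zip(observed_props, expected_props)))

# Hypothetical 3-cell goodness-of-fit test against equal expected proportions.
w = cohens_w([0.5, 0.3, 0.2], [1 / 3, 1 / 3, 1 / 3])
print(round(w, 2))  # 0.37 -- between 'medium' (0.3) and 'large' (0.5)
```

Note the function takes proportions, not counts; counts must be divided by the total sample size first.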
References
Bakker, A, Cai, J, English, L, Kaiser, G, Mesa, V and Van Dooren, W (2019) Beyond small, medium, or large: points of consideration when interpreting effect sizes. Educational Studies in Mathematics 102(1) 1-8.
Borenstein, M (2009) Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 221-235). Russell Sage Foundation: New York, NY.
Cohen, J (1988) Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J, Cohen, P, West, SG and Aiken, LS (2003) Applied multiple regression/correlation analysis for the behavioral sciences. Third Edition. Routledge:New York.
Dunlap, WP, Cortina, JM, Vaslow, JB and Burke, MJ (1996) Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods 1(2) 170-177.
Field, A (2013) Discovering statistics using IBM SPSS Statistics. Fourth Edition. Sage:London.
Green, SB, Salkind, NJ & Akey, TM (1997). Using SPSS for Windows: Analyzing and understanding data. Upper Saddle River, NJ.
Haddock, CK, Rindskopf, D and Shadish, WR (1998) Using odds ratios as effect sizes for meta-analysis of dichotomous data: A primer on methods and issues. Psychological Methods 3 339-353.
Hedges, LV and Olkin, I (2016) Overlap between treatment and control distributions as an effect size measure in experiments. Psychological Methods 21(1) 61-68.
Jeon, M and De Boeck, P (2017) Decision qualities of Bayes factor and p value-based hypothesis testing. Psychological Methods 22(2) 340-360.
Kotrlik, JW and Williams, HA (2003) The incorporation of effect size in information technology, learning, and performance research. Information Technology, Learning, and Performance Journal 21(1) 1-7.
Kraemer, HC and Thiemann, S (1987) How many subjects? Statistical power analysis in research. Sage:London. In CBSU library.
Lenhard, W. & Lenhard, A. (2016) Calculation of effect sizes. Psychometrica. (Calculators for computing f and d amongst others).
Miles, J and Shevlin, M (2001) Applying Regression and Correlation: A Guide for Students and Researchers. Sage:London.
Murphy, KR and Myors, B (2004) Statistical power analysis: A simple and general model for traditional and modern hypothesis tests (2nd ed.). Lawrence Erlbaum, Mahwah NJ. (Alternative rules of thumb for effect sizes to those from Cohen are given here in Table 2.2).
Preacher, KJ and Kelley, K (2011) Effect size measures for mediation models: quantitative strategies for communicating indirect effects. Psychological Methods 16(2) 93-115.
Pustejovsky, JE (2014) Converting from d to r to z when the design uses extreme groups, dichotomization, or experimental control. Psychological Methods 19(1) 92-112. This reference also gives several useful formulae for variances of effect sizes such as d, and also shows how to convert d to a Pearson r.
Rosenthal, R (1991) Meta-analytic procedures for social research. Sage: Newbury Park, CA.
Shrout, PE and Fleiss, JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 86(2) 420-428. (A good primer showing how ANOVA output can be used to compute ICCs).
Tabachnick, BG and Fidell, LS (2007) Using multivariate statistics. Fifth Edition. Pearson Education:London.
Vacha-Haase, T and Thompson, B (2004) How to estimate and interpret various effect sizes. Journal of Counseling Psychology 51(4) 473-481.
Wen, Z and Fan, X (2015) Monotonicity of effect sizes: questioning kappa-squared as mediation effect size measure. Psychological Methods 20(2) 193-203.
Whitehead, A. L., Julious, S. A., Cooper, C. L. and Campbell, M. J. (2015) Estimating the sample size for a pilot randomised trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome variable. Stat Methods Med Res. (Available to read for free online here).