Diff for "MachineLearning" - Methods
Differences between revisions 31 and 44 (spanning 13 versions)
Revision 31 as of 2008-06-30 11:50:42
Size: 5561
Comment:
Revision 44 as of 2013-03-08 10:28:25
Size: 5462
Editor: localhost
Comment: converted to 1.6 markup
Deletions are prefixed with '-'. Additions are prefixed with '+'.
Line 5: Line 5:
- 1. Introduction (applications, supervised, unsupervised, semi-supervised, reinforcement learning, bayes rule, probability theory, randomness) attachment:Presentation1_LML.ppt , 27 May 2008, Eleftherios Garyfallidis.
+ 1. Introduction (applications, supervised, unsupervised, semi-supervised, reinforcement learning, bayes rule, probability theory, randomness) [[attachment:Presentation1_LML.ppt]] , 27 May 2008, Eleftherios Garyfallidis.
Line 7: Line 7:
- 2. Further Introduction (what is ML, bayes rule, bayesian regression,entropy, relative entropy, mutual information), attachment:Presentation2_LML.ppt , 3 June 2008, Eleftherios Garyfallidis.
+ 2. Further Introduction (what is ML, bayes rule, bayesian regression,entropy, relative entropy, mutual information), [[attachment:Presentation2_LML.ppt]] , 3 June 2008, Eleftherios Garyfallidis.
Line 9: Line 9:
- 3. Maximum Likelihood vs Bayesian Learning (Notes available upon request) attachment:Presentation3_LML.ppt , 10 June 2008, Hamed Nili.
+ 3. Maximum Likelihood vs Bayesian Learning (Notes available upon request) [[attachment:Presentation3_LML.ppt]] , 10 June 2008, Hamed Nili.
Line 11: Line 11:
- 4. Factor Analysis, PCA and pPCA, attachment:Presentation4_LML.ppt , 17 June 2008, Hamed Nili.
+ 4. Factor Analysis, PCA and pPCA, [[attachment:Presentation4_LML.ppt]] , 17 June 2008, Hamed Nili.
Line 13: Line 13:
- 5. Independent Component Analysis (ICA), attachment:Presentation5_LML.pdf , 24 June 2008, Jason Taylor.
+ 5. Independent Component Analysis (ICA), [[attachment:Presentation5_LML.pdf]] , 24 June 2008, Jason Taylor.
Line 15: Line 15:
- 6. ICA & Expectation Maximization (EM), attachment:Presentation6_LML.ppt, 1 July 2008, Eleftherios Garyfallidis.
+ 6. ICA & Expectation Maximization (EM), [[attachment:Presentation6_LML.ppt]] , 1 July 2008, Eleftherios Garyfallidis.
+
+ Up to this point the lectures were based to the presentations of ML course given by Zoubin Ghahramani at the department of engineering, University of Cambridge, http://learning.eng.cam.ac.uk/zoubin/ml06/index.html .
Line 19: Line 21:
- 8. Graphical Models 2, 9 September 2008, Ian Nimmo-Smith.
+ 8. Graphical Models 2, 13 January 2009. Ian Nimmo-Smith.
Line 21: Line 23:
- 9. Graphical Models 3, 16 September 2008, Ian Nimmo-Smith.
+ 9. Markov Chain Monte Carlo, 20 January 2009, Eleftherios Garyfallidis.
Line 23: Line 25:
- 10. Monte Carlo Sampling 1, 23 September 2008, Eleftherios Garyfallidis.
+ 10. ???, 27 January 2009, Hamed Nili.
Line 25: Line 27:
- 11. Monte Carlo Sampling 2 (MCMC), 30 September 2008, Eleftherios Garyfallidis.

- 12. Variational approximations (KL divergences, mean field, expectation propagation).

- 13. Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).

- 14. Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory

- 15. Reinforcement Learning 2.

- 16. General Discussion Q & A.

- Up to this point the lectures were based to the presentations of ML course given by Zoubin Ghahramani at the department of engineering, University of Cambridge, http://learning.eng.cam.ac.uk/zoubin/ml06/index.html .
Line 42: Line 31:
-  * Gaussian Processes 1.
-  * Gaussian Processes 2.
+  * Gaussian Processes.
Line 51: Line 39:
-  * Artificial Neural Networks 2.
Line 60: Line 47:
-  * and many meetings for discussion of specific papers. Any other suggestions ?
+  * Variational approximations (KL divergences, mean field, expectation propagation).
+  * Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).
+  * Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory
Line 63: Line 54:
Line 68: Line 58:
- 3. Netlab Algorithms for Pattern Recognition, I. T. Nabney, 2001.
+ 3. Netlab Algorithms for Pattern Recognition, I. T. Nabney, 2001.
Line 72: Line 62:
+ 5. Kernel Methods for Pattern Analysis, Shawe-Taylor and Cristianini, 2004.
Line 73: Line 65:
Line 75: Line 66:
Line 79: Line 69:
Line 84: Line 73:
- A maximum likelihood algorithm for ICA http://www.inference.phy.cam.ac.uk/mackay/ica.pdf
+ A maximum likelihood algorithm for ICA http://www.inference.phy.cam.ac.uk/mackay/ica.pdf
Line 87: Line 76:
Line 91: Line 79:
+ Christophe Andrieu, Nando de Freitas, Arnaud Doucet and Michael I. Jordan. (2003) [[attachment:Andrieu2003.pdf|An Introduction to MCMC for Machine Learning.]] Machine Learning, 50, 5–43, 2003.
Line 92: Line 81:
- Christophe Andrieu, Nando de Freitas, Arnaud Doucet and Michael I. Jordan. (2003) [attachment:Andrieu2003.pdf An Introduction to MCMC for Machine Learning.] Machine Learning, 50, 5–43, 2003.
+ Reversible Jump Markov Chain Monte Carlo [[attachment:DiffusionPapers/RJMCMC_Green_1995.pdf|(RJMCMC)]]
Line 99: Line 88:
- [http://cocosci.berkeley.edu/tom/papers/tutorial2.pdf Thomas Griffiths, Alan Yuille. A Primer on Probabilistic Inference. ]
+ [[http://cocosci.berkeley.edu/tom/papers/tutorial2.pdf|Thomas Griffiths, Alan Yuille. A Primer on Probabilistic Inference. ]]
Line 101: Line 90:
- [http://yudkowsky.net/bayes/bayes.html An Intuitive Explanation of Bayesian Reasoning Bayes' Theorem By Eliezer Yudkowsky]
+ [[http://yudkowsky.net/bayes/bayes.html|An Intuitive Explanation of Bayesian Reasoning Bayes' Theorem By Eliezer Yudkowsky]]
Line 103: Line 92:
- [http://homepages.wmich.edu/~mcgrew/Bayes8.pdf Eight versions of Bayes' theorem]
+ [[http://homepages.wmich.edu/~mcgrew/Bayes8.pdf|Eight versions of Bayes' theorem]]
Line 106: Line 95:
- [http://www.gatsby.ucl.ac.uk/~pel/papers/ppc-06.pdf Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. (2006) Bayesian inference with probabilistic population codes. Nature Neuroscience. 9:1432-1438]
+ [[http://www.gatsby.ucl.ac.uk/~pel/papers/ppc-06.pdf|Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. (2006) Bayesian inference with probabilistic population codes. Nature Neuroscience. 9:1432-1438]]
Line 108: Line 97:
- [http://cocosci.berkeley.edu/tom/papers/bayeschapter.pdf Griffiths,Kemp and Tenenbaum. Bayesian models of cognition.]
+ [[http://cocosci.berkeley.edu/tom/papers/bayeschapter.pdf|Griffiths,Kemp and Tenenbaum. Bayesian models of cognition.]]
Line 110: Line 99:
- [http://www.cvs.rochester.edu/knill_lab/publications/TINS_2004.pdf Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosciences, 27(12), 712-719.]
+ [[http://www.cvs.rochester.edu/knill_lab/publications/TINS_2004.pdf|Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosciences, 27(12), 712-719.]]
Line 112: Line 101:
- [http://www.inf.ed.ac.uk/teaching/courses/mlsc/HW2papers/koerdingTiCS2006.pdf Kording, K. & Wolpert, D.M. (2006) Bayesian decision theory in sensorimotor control. TRENDS in Cognitive Sciences,10, 319-326]
+ [[http://www.inf.ed.ac.uk/teaching/courses/mlsc/HW2papers/koerdingTiCS2006.pdf|Kording, K. & Wolpert, D.M. (2006) Bayesian decision theory in sensorimotor control. TRENDS in Cognitive Sciences,10, 319-326]]

Machine Learning Pages

These pages have been compiled by members of the CBU Learning Machine Learning (LML) Group.

Learning Machine Learning Course

1. Introduction (applications, supervised, unsupervised, semi-supervised, reinforcement learning, Bayes rule, probability theory, randomness), Presentation1_LML.ppt, 27 May 2008, Eleftherios Garyfallidis.

2. Further Introduction (what is ML, Bayes rule, Bayesian regression, entropy, relative entropy, mutual information), Presentation2_LML.ppt, 3 June 2008, Eleftherios Garyfallidis.

3. Maximum Likelihood vs Bayesian Learning (notes available upon request), Presentation3_LML.ppt, 10 June 2008, Hamed Nili.

4. Factor Analysis, PCA and pPCA, Presentation4_LML.ppt, 17 June 2008, Hamed Nili.

5. Independent Component Analysis (ICA), Presentation5_LML.pdf, 24 June 2008, Jason Taylor.

6. ICA & Expectation Maximization (EM), Presentation6_LML.ppt, 1 July 2008, Eleftherios Garyfallidis.

Up to this point the lectures were based on the presentations of the ML course given by Zoubin Ghahramani at the Department of Engineering, University of Cambridge: http://learning.eng.cam.ac.uk/zoubin/ml06/index.html

7. Graphical Models 1, 8 July 2008, Ian Nimmo-Smith.

8. Graphical Models 2, 13 January 2009, Ian Nimmo-Smith.

9. Markov Chain Monte Carlo, 20 January 2009, Eleftherios Garyfallidis.

10. ???, 27 January 2009, Hamed Nili.

Proposed next topics:

  • Non-parametric Methods (kernel density estimators, nearest-neighbour methods).
  • Gaussian Processes.
  • Sparse Kernel Machines (support vector machines (SVM)).
  • Sparse Kernel Machines 2 (relevance vector machines (RVM)).
  • Boosting.
  • Overview of clustering methods (k-means, EM, hierarchical clustering).
  • Mutual Information with applications to registration and neuronal coding.
  • Random Field Theory with applications in fMRI.
  • Artificial Neural Networks from a probabilistic viewpoint.
  • Machine Learning methods used in SPM.
  • Machine Learning methods used in FSL.
  • Signal processing basics.
  • Fourier Transform.
  • Wavelets.
  • Spherical Harmonics.
  • Spherical Deconvolution.
  • SNR in MRI experiments.
  • Variational approximations (KL divergences, mean field, expectation propagation).
  • Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).
  • Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory).

Books

1. Pattern Recognition and Machine Learning, C. M. Bishop, 2006. (Copy in our Library)

2. Information Theory, Inference and Learning Algorithms, D. J. C. MacKay, 2003. (Available online)

3. Netlab: Algorithms for Pattern Recognition, I. T. Nabney, 2001.

4. Gaussian Processes for Machine Learning, C. E. Rasmussen and C. K. I. Williams, 2006. (Available online)

5. Kernel Methods for Pattern Analysis, Shawe-Taylor and Cristianini, 2004.

Reading

EM

An online demo with mixtures of lines or mixtures of Gaussians: http://lcn.epfl.ch/tutorial/english/gaussian/html/
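
The demo above fits mixtures of Gaussians with EM. As a companion, here is a minimal NumPy sketch of the same E-step/M-step loop for a 1-D two-component mixture (the synthetic data, initial values and variable names are our own, not taken from the demo):

{{{#!python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data drawn from two Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses for mixing weights, means and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(w, mu, var)  # should approach the generating weights, means and variances
}}}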

ICA

An online demonstration of the concept is given at http://www.cis.hut.fi/projects/ica/icademo/

A tutorial is given at http://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf

A maximum likelihood algorithm for ICA http://www.inference.phy.cam.ac.uk/mackay/ica.pdf
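
MacKay's note derives ICA as maximum-likelihood fitting of an unmixing matrix W. Below is a minimal sketch of the resulting update in its natural-gradient (Infomax) form, assuming a super-Gaussian 1/cosh source prior, which yields the tanh nonlinearity; the mixing matrix, learning rate and iteration count are illustrative choices of ours:

{{{#!python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
s = rng.laplace(size=(2, n))                # two independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # "unknown" mixing matrix
x = A @ s                                   # observed mixtures

W = np.eye(2)                               # unmixing matrix estimate
lr = 0.01
for _ in range(200):
    y = W @ x
    # Natural-gradient ascent on the log-likelihood with p(s) ~ 1/cosh(s):
    # dW = lr * (I - E[tanh(y) y^T]) W
    W += lr * (np.eye(2) - np.tanh(y) @ y.T / n) @ W

print(W @ A)  # close to a scaled permutation matrix if unmixing succeeded
}}}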

ICA vs PCA

A simple graphical representation of the differences is given in http://genlab.tudelft.nl/~dick/cvonline/ica/node3.html
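
The contrast is easy to reproduce numerically: PCA returns orthogonal directions ordered by variance, while ICA returns the (generally non-orthogonal) independent source axes. A small scikit-learn sketch, with an arbitrary mixing matrix of our choosing:

{{{#!python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(2)
S = rng.uniform(-1.0, 1.0, size=(5000, 2))   # independent uniform sources
X = S @ np.array([[2.0, 1.0], [0.5, 1.5]])   # linearly mixed observations

print(PCA(n_components=2).fit(X).components_)                       # orthogonal variance axes
print(FastICA(n_components=2, random_state=0).fit(X).components_)   # independent axes
}}}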

MCMC

Christophe Andrieu, Nando de Freitas, Arnaud Doucet and Michael I. Jordan (2003). An Introduction to MCMC for Machine Learning. Machine Learning, 50, 5–43.

Reversible Jump Markov Chain Monte Carlo (RJMCMC)
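
The heart of the Andrieu et al. tutorial is random-walk Metropolis-Hastings, which needs only an unnormalised target density. A minimal sketch (the bimodal target, proposal scale and chain length are our own choices):

{{{#!python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Unnormalised log-density of an equal-weight two-Gaussian mixture.
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(scale=1.0)     # symmetric random-walk proposal
    # Metropolis acceptance test; the unknown normaliser cancels in the ratio.
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

print(np.mean(samples))  # near 0: the target is symmetric about the origin
}}}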

Bayes Rule

Highly recommended: chapter 1.2 of Bishop's book.

http://plato.stanford.edu/entries/bayes-theorem/

Thomas Griffiths, Alan Yuille. A Primer on Probabilistic Inference.

An Intuitive Explanation of Bayes' Theorem, by Eliezer Yudkowsky

Eight versions of Bayes' theorem
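
As a worked example, the mammography problem from Yudkowsky's essay above reduces to a few lines of arithmetic (the numbers 1%, 80% and 9.6% are the ones his essay uses):

{{{#!python
prior, sens, false_pos = 0.01, 0.80, 0.096   # P(disease), P(+|disease), P(+|healthy)

# Bayes rule: P(disease|+) = P(+|disease) P(disease) / P(+)
evidence = sens * prior + false_pos * (1 - prior)
posterior = sens * prior / evidence
print(round(posterior, 3))  # ~0.078: most positive tests are still false alarms
}}}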

Bayesian Methods in Neuroscience

Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. (2006). Bayesian inference with probabilistic population codes. Nature Neuroscience, 9, 1432-1438.

Griffiths, Kemp and Tenenbaum. Bayesian models of cognition.

Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712-719.

Kording, K. & Wolpert, D.M. (2006). Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10, 319-326.

Online Demos for ML

http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/PPRPAGES/pprdem.htm

Software

Public code for machine learning:

http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/MLPAGES/mlcode.htm
