Diff for "MachineLearning" - Methods
Differences between revisions 39 and 40
Revision 39 as of 2008-12-01 12:19:17
Size: 5632
Comment:
Revision 40 as of 2008-12-05 18:05:14
Size: 5328
Comment:
Deletions (from revision 39) are prefixed with "-". Additions (in revision 40) are prefixed with "+".

Line 17: Line 17:

+ Up to this point the lectures were based to the presentations of ML course given by Zoubin Ghahramani at the department of engineering, University of Cambridge, http://learning.eng.cam.ac.uk/zoubin/ml06/index.html .

Line 19: Line 21:

- 8. Graphical Models 2, unknown date, Ian Nimmo-Smith.
+ 8. Graphical Models 2, 9 December 2008. Ian Nimmo-Smith.

Line 21: Line 23:

- 9. Graphical Models 3, unknown date, Ian Nimmo-Smith.
+ 9. Markov Chain Monte Carlo, 16 December 2008, Eleftherios Garyfallidis.

Line 23: Line 25:

- 10. Monte Carlo Sampling 1, unknown date, Eleftherios Garyfallidis.
- 11. Monte Carlo Sampling 2 (MCMC), unknown date, Eleftherios Garyfallidis.
- 12. Variational approximations (KL divergences, mean field, expectation propagation).
- 13. Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).
- 14. Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory).
- 15. Reinforcement Learning 2.
- 16. General Discussion Q & A.
- Up to this point the lectures were based to the presentations of ML course given by Zoubin Ghahramani at the department of engineering, University of Cambridge, http://learning.eng.cam.ac.uk/zoubin/ml06/index.html .

Line 42: Line 29:

-  * Gaussian Processes 1.
-  * Gaussian Processes 2.
+  * Gaussian Processes.

Line 51: Line 37:

-  * Artificial Neural Networks 2.

Line 60: Line 45:

-  * and many meetings for discussion of specific papers. Any other suggestions ?
+  * Variational approximations (KL divergences, mean field, expectation propagation).
+  * Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).
+  * Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory).
+  * Reinforcement Learning 2.

Machine Learning Pages

These pages have been compiled by members of the CBU Learning Machine Learning (LML) Group.

Learning Machine Learning Course

1. Introduction (applications, supervised, unsupervised, semi-supervised and reinforcement learning, Bayes' rule, probability theory, randomness), attachment:Presentation1_LML.ppt, 27 May 2008, Eleftherios Garyfallidis.

2. Further Introduction (what is ML, Bayes' rule, Bayesian regression, entropy, relative entropy, mutual information), attachment:Presentation2_LML.ppt, 3 June 2008, Eleftherios Garyfallidis.

3. Maximum Likelihood vs Bayesian Learning (notes available upon request), attachment:Presentation3_LML.ppt, 10 June 2008, Hamed Nili.

4. Factor Analysis, PCA and pPCA, attachment:Presentation4_LML.ppt, 17 June 2008, Hamed Nili.

5. Independent Component Analysis (ICA), attachment:Presentation5_LML.pdf, 24 June 2008, Jason Taylor.

6. ICA & Expectation Maximization (EM), attachment:Presentation6_LML.ppt, 1 July 2008, Eleftherios Garyfallidis.

Up to this point the lectures were based on the presentations of the ML course given by Zoubin Ghahramani at the Department of Engineering, University of Cambridge (http://learning.eng.cam.ac.uk/zoubin/ml06/index.html).

7. Graphical Models 1, 8 July 2008, Ian Nimmo-Smith.

8. Graphical Models 2, 9 December 2008, Ian Nimmo-Smith.

9. Markov Chain Monte Carlo, 16 December 2008, Eleftherios Garyfallidis.

Proposed next topics:

  • Non-parametric Methods (kernel density estimators, nearest-neighbour methods).
  • Gaussian Processes.
  • Sparse Kernel Machines (support vector machines (SVM))
  • Sparse Kernel Machines 2 (relevance vector machines (RVM))
  • Boosting.
  • Overview of clustering methods (k-means, EM, hierarchical clustering).
  • Mutual Information with applications to registration and neuronal coding.
  • Random Field Theory with applications in fMRI.
  • Artificial Neural Networks from a probabilistic viewpoint.
  • Machine Learning methods used in SPM.
  • Machine Learning methods used in FSL.
  • Signal processing basics.
  • Fourier Transform.
  • Wavelets.
  • Spherical Harmonics.
  • Spherical Deconvolution.
  • SNR in MRI experiments.
  • Variational approximations (KL divergences, mean field, expectation propagation).
  • Model comparison (Bayes factors, Occam's razor, BIC, Laplace approximations).
  • Reinforcement Learning, Decision Making and MDPs (value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory).
  • Reinforcement Learning 2.

Books

1. Pattern Recognition and Machine Learning, C. M. Bishop, 2006. (Copy in our Library)

2. Information Theory, Inference, and Learning Algorithms, D. J. C. MacKay, 2003. (Available online)

3. Netlab: Algorithms for Pattern Recognition, I. T. Nabney, 2001.

4. Gaussian Processes for Machine Learning, C. E. Rasmussen and C. K. I. Williams, 2006. (Available online)

Reading

EM

An online demo of mixtures of lines and mixtures of Gaussians: http://lcn.epfl.ch/tutorial/english/gaussian/html/
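
To complement the demo, here is a minimal NumPy sketch of EM for a two-component, one-dimensional Gaussian mixture. The synthetic data, initial values and number of iterations are arbitrary illustrative choices, not taken from the demo above.

{{{
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 0.7, 200), rng.normal(3.0, 1.2, 300)])

# Initial guesses for mixing weights, means and standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: responsibility of each component for each data point.
    dens = pi * normal_pdf(x[:, None], mu, sigma)      # shape (N, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", pi, "means:", mu, "stds:", sigma)
}}}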

ICA

An online demonstration of the concept is given at http://www.cis.hut.fi/projects/ica/icademo/

A tutorial is given at http://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf

A maximum likelihood algorithm for ICA: http://www.inference.phy.cam.ac.uk/mackay/ica.pdf

ICA vs PCA

A simple graphical representation of the differences is given at http://genlab.tudelft.nl/~dick/cvonline/ica/node3.html
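
As a rough illustration of that difference, the sketch below (assuming NumPy and scikit-learn are available) mixes two made-up sources and compares what PCA and FastICA recover: ICA separates the independent sources, up to sign and ordering, while PCA only finds decorrelated directions.

{{{
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                           # sinusoidal source
s2 = np.sign(np.sin(3 * t))                  # square-wave source
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.5, 2.0]])       # mixing matrix (made up)
X = S @ A.T                                  # observed mixtures

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)

# Cross-correlations with the true sources: ICA shows values near +/-1 in
# some permutation, PCA returns orthogonal but still mixed components.
print(np.corrcoef(S.T, S_ica.T)[:2, 2:].round(2))
print(np.corrcoef(S.T, S_pca.T)[:2, 2:].round(2))
}}}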

MCMC

Christophe Andrieu, Nando de Freitas, Arnaud Doucet and Michael I. Jordan (2003). [attachment:Andrieu2003.pdf An Introduction to MCMC for Machine Learning.] Machine Learning, 50, 5–43.

Reversible Jump Markov Chain Monte Carlo [attachment:DiffusionPapers/RJMCMC_Green_1995.pdf (RJMCMC)]
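
For a feel of the basic idea (plain Metropolis-Hastings, not reversible jump), here is a small sketch that samples from a standard normal target using a Gaussian random-walk proposal. The target, proposal width, chain length and burn-in are arbitrary illustrative choices.

{{{
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # Unnormalised log density of the target (here a standard normal).
    return -0.5 * x ** 2

x = 0.0
samples = []
for _ in range(20000):
    proposal = x + rng.normal(0.0, 1.0)       # random-walk proposal
    log_alpha = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:     # accept/reject step
        x = proposal
    samples.append(x)

samples = np.array(samples[5000:])            # discard burn-in
print("mean ~ 0:", samples.mean(), "var ~ 1:", samples.var())
}}}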

Bayes Rule

Section 1.2 of Bishop's book is highly recommended.

http://plato.stanford.edu/entries/bayes-theorem/

[http://cocosci.berkeley.edu/tom/papers/tutorial2.pdf Thomas Griffiths, Alan Yuille. A Primer on Probabilistic Inference. ]

[http://yudkowsky.net/bayes/bayes.html An Intuitive Explanation of Bayesian Reasoning, by Eliezer Yudkowsky]

[http://homepages.wmich.edu/~mcgrew/Bayes8.pdf Eight versions of Bayes' theorem]
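
As a tiny worked example of the rule itself, the snippet below computes the posterior probability of a condition given a positive test; all the numbers (prevalence, sensitivity, false-positive rate) are made up for illustration.

{{{
# Bayes' rule: P(C|+) = P(+|C) P(C) / P(+)
prior = 0.01           # P(C): prevalence of the condition (assumed)
sensitivity = 0.95     # P(+|C)
false_positive = 0.05  # P(+|not C)

evidence = sensitivity * prior + false_positive * (1 - prior)  # P(+)
posterior = sensitivity * prior / evidence                     # P(C|+)
print(f"P(C|+) = {posterior:.3f}")  # about 0.161 despite the 95% sensitive test
}}}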

Bayesian Methods in Neuroscience

[http://www.gatsby.ucl.ac.uk/~pel/papers/ppc-06.pdf Ma, W.J., Beck, J.M., Latham, P.E. & Pouget, A. (2006) Bayesian inference with probabilistic population codes. Nature Neuroscience. 9:1432-1438]

[http://cocosci.berkeley.edu/tom/papers/bayeschapter.pdf Griffiths,Kemp and Tenenbaum. Bayesian models of cognition.]

[http://www.cvs.rochester.edu/knill_lab/publications/TINS_2004.pdf Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosciences, 27(12), 712-719.]

[http://www.inf.ed.ac.uk/teaching/courses/mlsc/HW2papers/koerdingTiCS2006.pdf Kording, K. & Wolpert, D.M. (2006) Bayesian decision theory in sensorimotor control. TRENDS in Cognitive Sciences,10, 319-326]

Online Demos for ML

http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/PPRPAGES/pprdem.htm

Software

Public code for machine learning:

http://homepages.inf.ed.ac.uk/rbf/IAPR/researchers/MLPAGES/mlcode.htm
