Technische Universität München Robotics and Embedded Systems

Machine Learning

Lecturer Prof. Dr. Jürgen Schmidhuber
Module Modul IN2064
Type Lecture
Language English
Semester WS 2009/2010
ECTS 6.0
Audience Obligatory course for students of RCI
Elective course for students of Informatics (Diplom 5+, Bachelor 5+, Master 1+)
Elective course for students of Business Informatics (Bachelor 5+)
Time & Location Thu 10:00 - 12:00 MI HS 2
Fri 08:15 - 10:00 MI 00.08.038
Certificate Final written exam: Sat, 06.02.2010, 10:30 - 12:30, MW1450


UPDATED: No lecture on Friday, 05/02/2010.

Final exam is closed book.

Online registration (via TUMonline) for the final exam is open (until 31.01.2010).

Information for the final exam is available.

New room for Thursday lectures: MI HS 2

We have a Google group for this course. Questions should be directed to Christian.

The lecture is announced for Wednesday, Thursday, and Friday; however, we will hold it on Thursday and Friday only. The times above are sharp, except for the first week.


The tutorial consists of (mostly) weekly worksheets to be completed by the students, together with a discussion of the exercises. The worksheets will be posted here.


This lecture will take you on a journey through the exciting and highly active field of Machine Learning, which has applications in areas as diverse as web search, robotics, data mining, environmental sciences, medical data analysis, and many more. The first part of the lecture loosely follows the textbook by Chris Bishop, referenced below, and uses much of his material. We highly recommend consulting it for answers to fundamental questions and for more in-depth information. Here is an overview of the topics covered (at least cursorily) by the lecture, where arrows indicate our flow of argument rather than historical derivation:


We will start by laying the "groundwork" of established statistical techniques such as Bayes classifiers and Linear Discriminant Analysis, as well as probabilistic regression. In this context, it helps to be somewhat familiar with the contents of Analysis I/II, Linear Algebra I/II, and Probability Theory. The basic principles of kernel methods, when combined with our probabilistic framework, will lead us to the very successful class of Support Vector Machines. They will also allow us to understand other well-established methods such as feed-forward neural networks and their recurrent relatives. Besides being the basis for neural network training, our discussion of gradient descent methods will also branch off to evolutionary algorithms and their modern descendants. Another important algorithm goes by the name of Expectation Maximization, and leads us to Hidden Markov Models (HMMs) and the Kalman filter, alternatives to recurrent networks when it comes to time series processing and dynamical systems prediction. Eventually, we will give a brief introduction to Reinforcement Learning techniques.
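As a small taste of the gradient descent methods mentioned above, here is a minimal sketch (our own illustration, not part of the course materials) that fits a straight line to noisy data by batch gradient descent on the mean squared error; the toy data, learning rate, and iteration count are illustrative choices:

```python
import numpy as np

# Hypothetical data: samples from y = 1 + 2x plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)

X = np.column_stack([np.ones_like(x), x])  # design matrix with bias column
w = np.zeros(2)                            # weights, initialised at zero
lr = 0.5                                   # learning rate (chosen by hand)

for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)      # gradient of the mean squared error
    w -= lr * grad

print(w)  # should end up close to [1.0, 2.0]
```

The same update rule, propagated layer by layer via backpropagation, underlies the training of the feed-forward networks discussed later in the course.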


Introduction (version of 23 Oct., 11:08 am: Minimal correction on page 28).

Linear Regression and the additional information on validation sets.

Linear Classification.

Feed Forward Neural Networks.

Introduction to Evolutionary Computing. Basic Algorithms Part 1, Part 2. Neuroevolution UPDATED (Jan 29, 2:00 pm)

Reinforcement Learning (Intro + Value-based), Python programming session and the source code of the programming examples. UPDATED (Jan 7, 2:20 pm)

Continuous Reinforcement Learning. UPDATED (Jan 7, 1:45 pm)

Introduction to Kernels and Introduction to SVMs

Sequence Learning, part I
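To illustrate the kernel idea from the kernel and SVM lectures, here is a short sketch (our own example, not taken from the slides) of kernel ridge regression, a close relative of the SVM, using a Gaussian (RBF) kernel; the toy data, bandwidth gamma, and regulariser are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * (A_i - B_j)^2)."""
    d2 = (A[:, None] - B[None, :]) ** 2   # pairwise squared distances (1-d inputs)
    return np.exp(-gamma * d2)

# Toy 1-d regression target: one period of a sine wave.
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train)

lam = 1e-3                                # ridge regulariser
K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

x_test = np.array([0.25, 0.75])
y_pred = rbf_kernel(x_test, x_train) @ alpha
print(y_pred)  # roughly [1.0, -1.0], the sine values at the test points
```

Note that the prediction is a weighted sum of kernel evaluations against the training points, the same "dual" form in which an SVM expresses its decision function via its support vectors.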


Exercises will be posted here. You need to solve at least 2/3 of the assignments to be admitted to the final exam.

If you implement some of the algorithms, you might want to have a look at a collection of data sets.

Have a look at WEKA for implementations of machine learning algorithms.

Starred exercises are optional.

Assignment for 2009/10/22 (hand in on Thu, 29 Oct, before lecture).

Assignment for 2009/10/29 (hand in on Thu, 5 Nov, before lecture). Have a look at the helpful notes by Sam Roweis.

Assignment for 2009/11/05 (hand in on Thu, 12 Nov, before lecture).

Assignment for 2009/11/12 (hand in on Thu, 19 Nov, before lecture).

Assignment for 2009/11/19 (hand in on Thu, 10 Dec, before lecture).

Assignment for 2009/12/17 (hand in on Thu, 7 Jan 2010, before lecture).

Assignment for 2010/1/14 (hand in on Thu, 21 Jan 2010, before lecture).

Assignment for 2010/1/21 (hand in on Thu, 28 Jan 2010, before lecture).

Suggested Reading

[1] David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2008.
[2] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, Berlin/New York, 2006.
[3] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[4] Tom M. Mitchell. Machine Learning. McGraw-Hill, Boston, Mass., 1997.