[visionlist] Regularization Methods for High Dimensional Learning (summer course in Italy)

Joel Z Leibo jzleibo at MIT.edu
Tue Mar 29 14:26:24 GMT 2011


We would like to announce a graduate student summer course on

"Regularization Methods for High Dimensional Learning",

within the PhD School in Computer Science and Information Technology
of the University of Genova.

Instructors:
Francesca Odone (DISI - Università di Genova) odone at disi.unige.it
Lorenzo Rosasco (Istituto Italiano di Tecnologia and Massachusetts
Institute of Technology) lrosasco at mit.edu
When: 6-10 June 2011
Where: Department of Computer Science (DISI), University of Genova
(http://www.disi.unige.it)
Web: http://slipguru.disi.unige.it/Teaching/odone_rosasco/
Registration: registration is free; to register, send an e-mail to
one of the instructors before 1 May 2011.
The course will run only if a minimum of 15 participants register.

Course Description:

Understanding how intelligence works and how it can be emulated in
machines has been an elusive problem for decades, and it is arguably
one of the biggest challenges in modern science. Learning, its
principles, and its computational implementations are at the very
core of this endeavor. Only recently have we been able, for the first
time, to develop artificial intelligence systems that solve complex
tasks long considered out of reach. Modern cameras can recognize
faces, smartphones recognize their users' voices, cars equipped with
cameras can detect pedestrians, and ATMs can automatically read
checks. At the root of most of these success stories are machine
learning algorithms, that is, software that is trained rather than
programmed to solve a task.

What are the principles and the computational implementations that
allow learning from high-dimensional data?
Among the variety of approaches and ideas in modern computational
learning, we focus on a core class of methods, namely regularization
methods, which provide a fundamental set of concepts and techniques
for treating a wide range of diverse approaches in a unified way,
while also providing the tools to design new ones.
Starting from the classical notions of smoothness, shrinkage, and
margin, we will cover state-of-the-art techniques based on the
concepts of geometry (e.g., manifold learning), sparsity, and low
rank, which allow the design of algorithms for tasks such as
supervised learning, feature selection, structured prediction,
multitask learning, and model selection.
Practical applications will be discussed, primarily from the field of
computational vision.
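
As a small taste of the kind of methods covered (this sketch is our
own illustration, not part of the official course material), the
Python snippet below shows classical Tikhonov (ridge) regularization
on synthetic high-dimensional data; the problem sizes and the
regularization parameter lam are arbitrary choices made only for the
example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: more features (d) than examples (n).
n, d = 50, 200
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = rng.standard_normal(5)           # only a few informative features
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Tikhonov (ridge) regularization:
#   w = argmin ||X w - y||^2 + lam * ||w||^2,
# solved in closed form as (X^T X + lam I)^{-1} X^T y.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Without the penalty the problem is ill-posed (d > n); the
# regularization term shrinks the solution and stabilizes it.
print("training error:", np.mean((X @ w_ridge - y) ** 2))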

The classes will focus on algorithmic and methodological aspects,
while also giving an idea of the theoretical underpinnings. Practical
laboratory sessions will provide hands-on experience.


