DATA8014 Principles of Deep Representation Learning (Foundation) [Section 1A, 2025]

Course category: 2025-26

Course Instructor: Professor Yi Ma

Teaching Assistants: Tianzhe Chu and Feng Chen

Lecture Time: Mondays 2:00pm - 4:50pm 

Course Description: 

This course aims to provide a rigorous and systematic introduction to the mathematical and computational principles of deep learning. We achieve this by centering the course around a common and fundamental problem behind almost all modern practices of artificial intelligence and machine learning, such as image recognition and generation: how to effectively and efficiently learn a low-dimensional distribution of data in a high-dimensional space, and then transform that distribution into a compact and structured representation. Such a representation can generally be regarded as a memory learned from the sensed data.
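
Stated a bit more formally (a minimal sketch in our own notation, not taken from the course materials), the central problem looks like this:

    % Samples in a high-dimensional space R^D, assumed to concentrate near a
    % low-dimensional set M (a subspace, a union of subspaces, or a manifold):
    \[
      x^1, \dots, x^n \in \mathbb{R}^D, \qquad
      x^i \sim p \ \text{with} \ \operatorname{supp}(p) \approx \mathcal{M}, \qquad
      \dim \mathcal{M} \ll D.
    \]
    % The goal is a learned map f to a compact, structured representation:
    \[
      f(\cdot\,;\theta) : \mathbb{R}^D \to \mathbb{R}^d, \qquad
      z^i = f(x^i; \theta), \qquad d \ll D,
    \]
    % with theta chosen so that Z = [z^1, ..., z^n] is compact (low-dimensional)
    % and structured (e.g., with independent or linearly separated components).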

We will start with the most basic and classical cases of PCA, ICA, and Dictionary Learning, which assume that the distribution has linear and independent structures. To generalize these classical models and solutions to general data distributions, we introduce a universal computational principle for learning low-dimensional distributions: compression. As we will see, data compression provides a unifying view of popular approaches to distribution or representation learning, such as Score Matching (for denoising) and Coding Rate Reduction. Within this framework, modern deep neural networks, such as ResNets and Transformers, can all be fully interpreted mathematically as (unrolled) optimization algorithms for achieving better compression and representation. To ensure that the learned representation is correct and consistent, we will study the Auto-Encoding architecture, which consists of both encoding and decoding (say, for denoising and diffusion). Finally, for a learning system to be fully automatic and continuous, we will also study the powerful framework of Closed-Loop Transcription, which enables the encoding and decoding networks to self-correct, and hence self-improve, via the ubiquitous mechanism of closed-loop feedback.
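
As a small taste of the classical starting point, here is a minimal PCA sketch in NumPy. It is illustrative only and not the course's own code; the function names and the toy data below are our own. It fits a d-dimensional linear subspace to centered data via the top singular vectors:

    import numpy as np

    def pca(X, d):
        """Fit a d-dimensional principal subspace to X (n samples x D features).

        Returns the mean, the top-d principal directions, and the projected codes.
        """
        mu = X.mean(axis=0)          # center the data
        Xc = X - mu
        # Top-d right singular vectors of the centered data span the subspace.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:d].T                 # D x d basis of principal directions
        Z = Xc @ W                   # n x d low-dimensional representation
        return mu, W, Z

    # Toy usage: 3-D points lying near a 1-D line, recovered by PCA.
    rng = np.random.default_rng(0)
    t = rng.normal(size=(200, 1))
    X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))
    mu, W, Z = pca(X, d=1)
    X_hat = mu + Z @ W.T             # reconstruction from the 1-D code
    print("relative reconstruction error:",
          np.linalg.norm(X - X_hat) / np.linalg.norm(X))

The course generalizes exactly this picture: replace the single linear subspace with general low-dimensional distributions, and the SVD with learned (deep) encoding and decoding maps.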

Prerequisites:

Undergraduate linear algebra, statistics, and probability. Background in signal processing, information theory, optimization, and feedback control may help you better appreciate certain aspects of the course material, but is not necessary, certainly not all at once. The course is open to senior undergraduates with consent from the instructor. If you’re curious about whether you would benefit from this course, contact the instructor for details.
