About: This course covers two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data such as natural images, audio waveforms, and text corpora. Strides in self-supervised learning have begun to close the gap between supervised and unsupervised representation learning, as measured by fine-tuning performance on unseen tasks. The course will cover the theoretical foundations of these topics as well as the applications they newly enable.
If you want to peek ahead, this semester's offering will be very similar to last year's.
Instructors: Pieter Abbeel, Peter Chen, Jonathan Ho, Aravind Srinivas
Teaching Assistants: Alexander Li and Wilson Yan
Communication: https://piazza.com/berkeley/spring2020/cs294158
Lectures: Wednesdays 5-8pm (first lecture on 1/22) in 250 Sutardja Dai Hall
Prerequisites: significant experience with probability, optimization, and deep learning
Office Hours (starting week of 1/27):
For homework questions, TA office hours are the best venue. For anything else (lectures, final project, research, etc.), any office hours are a good fit.
Pieter: Thursdays 6:30-7:30pm (242 Sutardja Dai Hall)
Alex: Mondays 5-6pm, Tuesdays 11am-noon (Soda 326)
Wilson: Wednesdays 3-4pm (Cory 557), Fridays 2-3pm (Soda 347)
HW1: Autoregressive Models (out 1/29, due 2/11) [solutions]