Learning with Limited Labeled Data (Fall 2025)
Course Description

One of the most remarkable abilities of recent systems created with machine learning is their flexibility. Large pre-trained models (so-called “foundation models”) can be adapted with relative ease to a wide range of tasks. Sometimes this adaptation can happen with no training examples at all. How is this ability achieved? What are its limits? And what should we do when it fails? This seminar course will survey recent research on these topics, including pre-training and transfer learning, instruction tuning, reinforcement learning from human and AI feedback, few-shot and zero-shot learning, weak supervision, and synthetic data generation. Students will lead discussions on recent research papers and develop final research projects.

Essential Info
Instructor: Stephen Bach a.k.a. Steve
Class Meetings: Tuesdays and Thursdays, 1-2:20 pm, CIT 316
Office Hours: See the Canvas homepage for information.
Textbook: None
Prerequisites: Previous experience in machine learning is required, either through CSCI 1420 or equivalent research experience.
Contact

For questions, discussion, and other course-related posts, use Canvas.

If you have an atypical question that you are certain does not belong on Canvas, email the instructor.

Course Schedule
Introduction
Sep 4
Introductions, an overview of the research topics we will cover during the semester, and how to read a research paper.
Supplemental reading:
  • On the Opportunities and Risks of Foundation Models. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang , Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. [PDF]
  • How to Read a CS Research Paper? Philip W. L. Wong. [PDF]
  • How to Read a Technical Paper. Jason Eisner. [Online]
  • How to Read a Paper. S. Keshav. [PDF]
Large Language Models
Sep 9
Language Models are Few-Shot Learners. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Neural Information Processing Systems (NeurIPS) 2020.
[PDF] [Reviews]
Supplemental reading:
  • Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Journal of Machine Learning Research (JMLR) 21(140):1-67, 2020.
    [PDF] [Blog] [Code]
Sep 11
Multitask Prompted Training Enables Zero-Shot Task Generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. International Conference on Learning Representations (ICLR) 2022.
[PDF] [Code] [Data] [Reviews]
Supplemental reading:
  • Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. ArXiv 2204.07705 2022. [PDF] [Data]
Sep 16
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts. Guanghui Qin and Jason Eisner. Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) 2021.
[PDF] [Supplemental] [Video]
Supplemental reading:
  • The Power of Scale for Parameter-Efficient Prompt Tuning. Brian Lester, Rami Al-Rfou, and Noah Constant. Conference on Empirical Methods in Natural Language Processing (EMNLP) 2021. [PDF] [Code] [Video]
Sep 18
Start-of-course survey due
LoRA: Low-Rank Adaptation of Large Language Models. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. International Conference on Learning Representations (ICLR) 2022.
[PDF] [Reviews] [Code]
Supplemental reading:
  • QLoRA: Efficient Finetuning of Quantized LLMs. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Neural Information Processing Systems (NeurIPS) 2023. [PDF] [Code]
Sep 23
Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Neural Information Processing Systems (NeurIPS) 2022.
[PDF]
Supplemental reading:
  • Low-Resource Languages Jailbreak GPT-4. Zheng-Xin Yong, Cristina Menghini, and Stephen H. Bach. NeurIPS Workshop on Socially Responsible Language Modelling Research (SoLaR) 2023. [PDF]
Sep 25
Proximal Policy Optimization Algorithms. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. ArXiv 1707.06347 2017.
[PDF]
Supplemental reading:
  • Direct Preference Optimization: Your Language Model is Secretly a Reward Model. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Neural Information Processing Systems (NeurIPS) 2023. [PDF] [Reviews]
Sep 30
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Neural Information Processing Systems (NeurIPS) 2022.
[PDF] [Reviews] [Blog]
Supplemental reading:
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. International Conference on Learning Representations (ICLR) 2023. [PDF] [Reviews]
Oct 2
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. DeepSeek-AI. ArXiv 2501.12948 2025.
[PDF]
Supplemental reading:
  • DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y.K. Li, Y. Wu, and Daya Guo. ArXiv 2402.03300 2024. [PDF] [Code]
Vision-Language Models
Oct 7
Learning Transferable Visual Models From Natural Language Supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. International Conference on Machine Learning (ICML) 2021.
[PDF] [Blog]
Supplemental reading:
  • Zero-Shot Learning through Cross-Modal Transfer. Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Y. Ng. Neural Information Processing Systems (NeurIPS) 2013. [PDF] [Reviews]
Oct 9
Learning to Compose Soft Prompts for Compositional Zero-Shot Learning. Nihal V. Nayak, Peilin Yu, and Stephen H. Bach. International Conference on Learning Representations (ICLR) 2023.
[PDF] [Reviews] [Code]
Supplemental reading:
  • Does CLIP Bind Concepts? Probing Compositionality in Large Image Models. Martha Lewis, Nihal V. Nayak, Peilin Yu, Qinan Yu, Jack Merullo, Stephen H. Bach, and Ellie Pavlick. Meeting of the European Chapter of the ACL (EACL) Findings 2024. [PDF] [Code]
Oct 14
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. International Conference on Machine Learning (ICML) 2022.
[PDF]
Supplemental reading:
  • Flamingo: a Visual Language Model for Few-Shot Learning. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Neural Information Processing Systems (NeurIPS) 2022. [PDF] [Reviews] [Blog]
Oct 16
Visual Instruction Tuning. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Neural Information Processing Systems (NeurIPS) 2023.
[PDF] [Reviews] [Code]
Supplemental reading:
  • MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. International Conference on Learning Representations (ICLR) 2024. [PDF] [Reviews] [Project]
Oct 21
Project proposal due
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Neural Information Processing Systems (NeurIPS) 2023.
[PDF] [Reviews] [Code]
Supplemental reading:
  • MMBench: Is Your Multi-modal Model an All-around Player? Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. European Conference on Computer Vision (ECCV) 2024. [PDF] [Code]
Oct 23
InstructPix2Pix: Learning To Follow Image Editing Instructions. Tim Brooks, Aleksander Holynski, and Alexei A. Efros. Conference on Computer Vision and Pattern Recognition (CVPR) 2023.
[PDF] [Project]
Supplemental reading:
  • Adding Conditional Control to Text-to-Image Diffusion Models. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. International Conference on Computer Vision (ICCV) 2023. [PDF] [Code]
Oct 28
Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. Conference on Computer Vision and Pattern Recognition (CVPR) 2024.
[PDF] [Project]
Supplemental reading:
  • Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, Austin Wang, Rob Fergus, Yann LeCun, and Saining Xie. Neural Information Processing Systems (NeurIPS) 2024. [PDF] [Reviews] [Project]
Oct 30
What matters when building vision-language models? Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh. Neural Information Processing Systems (NeurIPS) 2024.
[PDF] [Reviews]
Supplemental reading:
  • LAION-5B: An open large-scale dataset for training next generation image-text models. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks 2022. [PDF] [Reviews] [Project]
Adaptation
Nov 4
DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines. Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. ArXiv 2310.03714 2023.
[PDF] [Project]
Supplemental reading:
  • TextGrad: Automatic "Differentiation" via Text. Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, and James Zou. Nature 2025. [PDF (Preprint)] [Code]
Nov 6
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, and Yu Qiao. International Conference on Learning Representations (ICLR) 2024.
[PDF] [Reviews] [Code]
Supplemental reading:
  • LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. ArXiv 2304.15010 2023. [PDF] [Code (Same as V1)]
Nov 11
Jailbroken: How Does LLM Safety Training Fail? Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Neural Information Processing Systems (NeurIPS) 2023.
[PDF] [Reviews]
Supplemental reading:
  • LIMA: Less Is More for Alignment. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. Neural Information Processing Systems (NeurIPS) 2023. [PDF] [Reviews]
Nov 13
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining. Rosie Zhao, Alexandru Meterez, Sham Kakade, Cengiz Pehlevan, Samy Jelassi, and Eran Malach. Conference on Language Modeling (COLM) 2025.
[PDF] [Reviews]
Supplemental reading:
  • Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, and Gao Huang. ArXiv 2504.13837 2025. [PDF]
Nov 18
Project status report due
Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning. Maggie Huan, Yuetai Li, Tuney Zheng, Xiaoyu Xu, Seungone Kim, Minxin Du, Radha Poovendran, Graham Neubig, and Xiang Yue. ArXiv 2507.00432 2025.
[PDF]
Supplemental reading:
  • SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training. Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, and Yi Ma. International Conference on Machine Learning (ICML) 2025. [PDF] [Reviews] [Project]
Synthetic Data
Nov 20
Self-Instruct: Aligning Language Models with Self-Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Meeting of the Association for Computational Linguistics (ACL) 2023.
[PDF] [Code]
Supplemental reading:
  • Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. Conference on Empirical Methods in Natural Language Processing (EMNLP) 2022. [PDF] [Code]
Nov 25
Self-Alignment with Instruction Backtranslation. Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. International Conference on Learning Representations (ICLR) 2024.
[PDF] [Reviews]
Supplemental reading:
  • Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation. Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach. Meeting of the Association for Computational Linguistics (ACL) Findings 2024. [PDF] [Code]
Nov 27
Thanksgiving
(No class)
Dec 2
High-Resolution Image Synthesis with Latent Diffusion Models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022.
[PDF] [Code]
Supplemental reading:
  • Scalable Diffusion Models with Transformers. William Peebles and Saining Xie. International Conference on Computer Vision (ICCV) 2023. [PDF] [Code] [Project]
Dec 4
Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach. ArXiv 2311.15127 2023.
[PDF] [Code]
Supplemental reading:
  • VideoPoet: A Large Language Model for Zero-Shot Video Generation. Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, Krishna Somandepalli, Hassan Akbari, Yair Alon, Yong Cheng, Josh Dillon, Agrim Gupta, Meera Hahn, Anja Hauth, David Hendon, Alonso Martinez, David Minnen, Mikhail Sirotenko, Kihyuk Sohn, Xuan Yang, Hartwig Adam, Ming-Hsuan Yang, Irfan Essa, Huisheng Wang, David A. Ross, Bryan Seybold, and Lu Jiang. International Conference on Machine Learning (ICML) 2024. [PDF] [Project] [Blog]
Dec 15
Final project report due
(No class)
Learning Goals

Students who complete this course will:

Grading

The following standards will be used to assign grades. Anyone who does not meet the standards to earn a B will receive NC.

To Earn an A
To Earn a B
Estimated Time Commitment
Activity                                  Hours
Class Meetings                            28
Readings                                  65
Submitting Discussion Questions           10
Preparing to Lead Discussion              2
Project Research                          60+
Project Proposal / Status                 10
Project Final Report                      5
Total                                     180+
General Course Policies

Diversity & Inclusion

The Brown computer science department has made it its mission to create and sustain a diverse and inclusive environment in which all students, faculty, and staff can thrive. In this course, that responsibility falls on us all, students and teaching staff alike. In particular, Brown's Nondiscrimination and Anti-Harassment Policy applies to all participants.

If you have not been treated in an inclusive manner by any of the course members, please contact either me (Stephen) or the department chair (Prof. Tamassia). The Diversity and Inclusion Student Advocates are also available as a resource for members of underrepresented groups. Additional resources are listed on the department's website. We, the computer science department, take all complaints about discrimination, harassment, and other unprofessional behavior seriously.

In addition, Brown welcomes students from all around the country and the world, and their unique perspectives enrich our learning community. To empower students whose first language is not English, an array of support is available on campus, including language and culture workshops and individual appointments. For more information, contact the English Language Learning Specialists at ellwriting@brown.edu.

Academic Integrity

Academic dishonesty will not be tolerated. This includes cheating, lying about course matters, plagiarism, or helping others commit a violation. Plagiarism includes reproducing the words of others without both quotation marks and a citation. Students are reminded of the obligations and expectations associated with the Brown Academic Code and Brown Code of Student Conduct. For project work, feel free to build on third-party software, datasets, or other resources, as long as you credit them in your report(s) and clearly state what work is solely your own. As a general policy (for this course and for the rest of your academic career): if you use any idea, text, code, or data that you did not create, then cite it.

Accommodations

Brown University is committed to full inclusion of all students. Please inform me if you have a disability or other condition that might require accommodations or modification of any of these course procedures. You may email me, come to office hours, or speak with me after class, and your confidentiality is respected. I will do whatever I can to support accommodations recommended by SAS. For more information contact Student Accessibility Services (SAS) at 401-863-9588 or SAS@brown.edu.

Mental Health

Being a student can be very stressful. If you feel you are under too much pressure or there are psychological issues keeping you from performing well at Brown, I encourage you to contact Brown’s Counseling and Psychological Services (CAPS). They provide confidential counseling and can write notes supporting accommodations for health reasons.

Incomplete Policy

I expect everyone to complete the course on time. However, I understand that there may be factors beyond your control, such as health problems and family crises, that prevent you from finishing the course on time. If you feel you cannot complete the course on time, please discuss with me (Stephen) the possibility of being given a grade of Incomplete for the course and setting a schedule for completing the course in the upcoming year.

Thanks to Tom Doeppner, Laura Dobler, and Daniel Ritchie for borrowed text.