In this module, you will learn about specialized hardware for training neural network models, called Graphics Processing Units (GPUs). You will explore the tradeoff between model efficiency, that is, how fast a model can be trained and make predictions, and performance, that is, how well a model can solve a task. You will see that models with more parameters generally perform better but are also slower and require more computer memory. You will also map the stakeholders affected by the potential environmental impacts of AI, such as energy use, water consumption, and e-waste, in order to see how different groups experience both risks and potential benefits. This exercise will help you understand why environmental justice in AI requires considering diverse perspectives, from local communities to developers, policymakers, and future generations.
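To make the size-versus-memory part of this tradeoff concrete, here is a minimal back-of-the-envelope sketch. It assumes weights are stored as 32-bit floats (4 bytes per parameter); the model sizes used are illustrative, not taken from the lab.

```python
# Rough memory estimate for a model's weights alone (hypothetical sizes).
# Assumes each parameter is a 32-bit float (4 bytes); activations,
# gradients, and optimizer state during training would add to this.
def weight_memory_gb(num_params: float, bytes_per_param: int = 4) -> float:
    return num_params * bytes_per_param / 1e9

for name, n in [("small", 125e6), ("medium", 1.3e9), ("large", 7e9)]:
    print(f"{name}: {n/1e9:.2f}B params -> ~{weight_memory_gb(n):.1f} GB")
```

This is why larger models, while generally more capable, can exceed the memory of a single GPU and run more slowly at both training and prediction time.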
Exercise 1: Efficiency versus performance
Exercise 2: Maximize a GPU
Exercise 3: Learning objectives
Exercise 4: How to get the most out of this course
Exercise 5: Lab: Compare Models of Different Sizes
Exercise 6: The GPU architecture
Exercise 7: Environmental impacts of LLMs
Exercise 8: Stakeholder map of environmental impacts
Exercise 9: Quiz 1 - Question 1
Exercise 10: Quiz 1 - Question 2