This chapter covers results from the paper **On Calibration of Modern Neural Networks** by Guo et al., available in its PMLR 2017 version.

The lecture slides are available here.

---
primaryColor: steelblue
shuffleQuestions: false
shuffleAnswers: true
---
### According to this paper, model calibration is negatively affected by
- [ ] depth and width of the neural network.
- [x] depth, width, and batch normalization.
> Correct.
- [ ] batch normalization only.

---
primaryColor: steelblue
shuffleQuestions: false
shuffleAnswers: true
---
### Which calibration metric is better suited for high-risk applications?
- [ ] Expected calibration error (ECE)
- [x] Maximum calibration error (MCE)
> Correct.
- [ ] The average of MCE and ECE
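To make the distinction concrete, here is a minimal sketch of the binned ECE and MCE estimators described in the paper. The function name `ece_mce` and its inputs (per-sample confidences, i.e. max softmax probabilities, and a 0/1 correctness indicator) are illustrative choices, not from the paper's code:

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=10):
    """Binned ECE and MCE estimates (sketch, hypothetical helper).

    ECE averages the per-bin |accuracy - confidence| gap weighted by
    bin mass; MCE takes the worst single bin, which is why it is the
    more conservative metric for high-risk applications.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap   # mass-weighted gap -> ECE
        mce = max(mce, gap)          # worst-bin gap -> MCE
    return ece, mce
```

For example, a model that always reports 95% confidence but is right only half the time has a gap of 0.45 in its single occupied bin, so ECE = MCE = 0.45 here.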

---
primaryColor: steelblue
shuffleQuestions: false
shuffleAnswers: true
---
### Model miscalibration
- [ ] decreases when batch normalization is used.
- [x] increases when batch normalization is used.
> Correct.
- [ ] does not depend on batch normalization.

---
primaryColor: steelblue
shuffleQuestions: false
shuffleAnswers: true
---
### Temperature scaling is the simplest extension of:
- [ ] ECE and MCE metrics.
- [x] Platt scaling.
> Correct.
- [ ] BBQ and/or isotonic regression.

---
primaryColor: steelblue
shuffleQuestions: false
shuffleAnswers: true
---
### Temperature scaling
- [ ] changes the model accuracy.
- [x] has no impact on the model accuracy.
> Correct.
- [ ] is an algorithm to improve model accuracy.
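The reason temperature scaling cannot change accuracy is that dividing all logits by a single scalar T > 0 preserves their ordering, so the argmax prediction is unchanged while the softmax confidences are rescaled. A minimal sketch (the function name `temperature_scale` is an illustrative choice, not the paper's code):

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax over logits / T (sketch, hypothetical helper).

    A single scalar T rescales confidences (T > 1 softens them) but
    never reorders the logits, so the predicted class is untouched.
    """
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)
```

With T fit on a validation set (T > 1 for overconfident networks), the top-class probability drops toward better-calibrated values while the classification itself stays identical.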

There are no programming assignments for this lecture.