Warning: These are just my notes from the excellent fast.ai MOOC, which you can find here

Pixels: The Foundations of Computer Vision


path = untar_data(URLs.MNIST_SAMPLE)
path.ls()
(#3) [Path('labels.csv'),Path('valid'),Path('train')]
(path/'train').ls()
(#2) [Path('train/7'),Path('train/3')]
threes = (path/'train'/'3').ls().sorted()
sevens = (path/'train'/'7').ls().sorted()
threes
(#6131) [Path('train/3/10.png'),Path('train/3/10000.png'),Path('train/3/10011.png'),Path('train/3/10031.png'),Path('train/3/10034.png'),Path('train/3/10042.png'),Path('train/3/10052.png'),Path('train/3/1007.png'),Path('train/3/10074.png'),Path('train/3/10091.png')...]
im3_path = threes[1]
im3 = Image.open(im3_path)
im3
array(im3)[4:10,4:10]
array([[  0,   0,   0,   0,   0,   0],
       [  0,   0,   0,   0,   0,  29],
       [  0,   0,   0,  48, 166, 224],
       [  0,  93, 244, 249, 253, 187],
       [  0, 107, 253, 253, 230,  48],
       [  0,   3,  20,  20,  15,   0]], dtype=uint8)
tensor(im3)[4:10,4:10]
tensor([[  0,   0,   0,   0,   0,   0],
        [  0,   0,   0,   0,   0,  29],
        [  0,   0,   0,  48, 166, 224],
        [  0,  93, 244, 249, 253, 187],
        [  0, 107, 253, 253, 230,  48],
        [  0,   3,  20,  20,  15,   0]], dtype=torch.uint8)
im3_t = tensor(im3)
df = pd.DataFrame(im3_t[4:15,4:22])
df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 29 150 195 254 255 254 176 193 150 96 0 0 0
2 0 0 0 48 166 224 253 253 234 196 253 253 253 253 233 0 0 0
3 0 93 244 249 253 187 46 10 8 4 10 194 253 253 233 0 0 0
4 0 107 253 253 230 48 0 0 0 0 0 192 253 253 156 0 0 0
5 0 3 20 20 15 0 0 0 0 0 43 224 253 245 74 0 0 0
6 0 0 0 0 0 0 0 0 0 0 249 253 245 126 0 0 0 0
7 0 0 0 0 0 0 0 14 101 223 253 248 124 0 0 0 0 0
8 0 0 0 0 0 11 166 239 253 253 253 187 30 0 0 0 0 0
9 0 0 0 0 0 16 248 250 253 253 253 253 232 213 111 2 0 0
10 0 0 0 0 0 0 0 43 98 98 208 253 253 253 253 187 22 0

First Try: Pixel Similarity

seven_tensors = [tensor(Image.open(o)) for o in sevens]
three_tensors = [tensor(Image.open(o)) for o in threes]
len(three_tensors),len(seven_tensors)
(6131, 6265)
show_image(three_tensors[1]);
stacked_sevens = torch.stack(seven_tensors).float()/255
stacked_threes = torch.stack(three_tensors).float()/255
stacked_threes.shape, stacked_sevens.shape
(torch.Size([6131, 28, 28]), torch.Size([6265, 28, 28]))
len(stacked_threes.shape)
3
stacked_threes.ndim
3
mean3 = stacked_threes.mean(0)
show_image(mean3);
mean7 = stacked_sevens.mean(0)
show_image(mean7);
a_3 = stacked_threes[1]
show_image(a_3);
dist_3_abs = (a_3 - mean3).abs().mean()
dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()
dist_3_abs,dist_3_sqr
(tensor(0.1114), tensor(0.2021))
dist_7_abs = (a_3 - mean7).abs().mean()
dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()
dist_7_abs,dist_7_sqr
(tensor(0.1586), tensor(0.3021))
F.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt()
(tensor(0.1586), tensor(0.3021))

NumPy Arrays and PyTorch Tensors

data = [[1,2,3],[4,5,6]]
arr = array (data)
tns = tensor(data)
arr  # numpy
tns  # pytorch
tns[1]
tns[:,1]
tns[1,1:3]
tns+1
tns.type()
tns*1.5

Computing Metrics Using Broadcasting

valid_3_tens = torch.stack([tensor(Image.open(o)) 
                            for o in (path/'valid'/'3').ls()])
valid_3_tens = valid_3_tens.float()/255
valid_7_tens = torch.stack([tensor(Image.open(o)) 
                            for o in (path/'valid'/'7').ls()])
valid_7_tens = valid_7_tens.float()/255
valid_3_tens.shape,valid_7_tens.shape
(torch.Size([1010, 28, 28]), torch.Size([1028, 28, 28]))
# |a-b| averaged over the last two axes (the 28x28 pixel grid), so we get
# one distance per image even when a is a whole batch of images
def mnist_distance(a,b): return (a-b).abs().mean((-1,-2))
mnist_distance(a_3, mean3)
tensor(0.1114)
valid_3_dist = mnist_distance(valid_3_tens, mean3)
valid_3_dist, valid_3_dist.shape
(tensor([0.1290, 0.1223, 0.1380,  ..., 0.1337, 0.1132, 0.1097]),
 torch.Size([1010]))
tensor([1,2,3]) + tensor([1,1,1])
tensor([2, 3, 4])
(valid_3_tens-mean3).shape
torch.Size([1010, 28, 28])
valid_3_tens.shape, mean3.shape
(torch.Size([1010, 28, 28]), torch.Size([28, 28]))
def is_3(x): return mnist_distance(x,mean3) < mnist_distance(x,mean7)
is_3(a_3), is_3(a_3).float()
(tensor(True), tensor(1.))
is_3(valid_3_tens)
tensor([True, True, True,  ..., True, True, True])
accuracy_3s =      is_3(valid_3_tens).float() .mean()
accuracy_7s = (1 - is_3(valid_7_tens).float()).mean()

accuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2
(tensor(0.9168), tensor(0.9854), tensor(0.9511))
is_3(stacked_threes).float().mean(), 1 - is_3(stacked_sevens).float().mean()
(tensor(0.8912), tensor(0.9962))

Stochastic Gradient Descent (SGD)

gv('''
init->predict->loss->gradient->step->stop
step->predict[label=repeat]
''')
(rendered Graphviz flowchart: init -> predict -> loss -> gradient -> step -> stop, with a "repeat" arrow from step back to predict)
def f(x): return x**2
plot_function(f, 'x', 'x**2')
plot_function(f, 'x', 'x**2')
plt.scatter(-1.5, f(-1.5), color='red');

Calculating Gradients

x1 = -1.
xt = tensor(x1).requires_grad_()
xt
tensor(-1., requires_grad=True)
yt = f(xt)
yt.backward()
yt
tensor(1., grad_fn=<PowBackward0>)
xt.grad
tensor(-2.)
# line with slope xt.grad passing through (x1, f(x1)): the tangent to f at x1
def grad(x): return (x * xt.grad.item()) - yt.item()

x = torch.linspace(-2, 2, 100)  # recent PyTorch versions require the steps argument
fig,ax = plt.subplots(figsize=(6,4))
ax.plot(x, f(x))
ax.plot(x, grad(x))
plt.scatter(x1, f(x1), color='red');
plt.ylim(-.1, 4.1)
(-0.1, 4.1)

3D Gradients

xt = tensor([3.,4.,10.]).requires_grad_()
xt
tensor([ 3.,  4., 10.], requires_grad=True)
def f(x): return (x**2).sum()

yt = f(xt)
yt
tensor(125., grad_fn=<SumBackward0>)
yt.backward()
xt.grad
tensor([ 6.,  8., 20.])
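Quick sanity check (my own, not from the notebook): for f(x) = (x**2).sum() the gradient is 2*x, which is exactly what backward stored.

# the analytic gradient of (x**2).sum() is 2*x, matching [6., 8., 20.] above
print(torch.allclose(xt.grad, 2 * xt.detach()))  # True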

Stepping With a Learning Rate
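My notes have no code for this bit, so here is a minimal sketch of the idea: scale the gradient by a small learning rate and subtract it, because the gradient points uphill. It assumes params is a tensor whose .grad has already been filled in by loss.backward().

lr = 1e-5
params.data -= lr * params.grad.data   # step against the gradient
params.grad = None                     # clear it so the next backward() starts fresh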

An End-to-End SGD Example

time = torch.arange(0,20).float(); time
tensor([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19.])
speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1
plt.scatter(time,speed);

$f(x, a, b, c) = ax^2 + bx + c$

def f(t, params):
    a,b,c = params
    return a*(t**2) + (b*t) + c

$mse(predictions, targets) = \sqrt{\frac{\sum(predictions - targets)^2}{n}}$ (with the square root this is really RMSE, but the code keeps the name mse)

def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()

Step 1: Initialize the parameters

params = torch.randn(3).requires_grad_()

Step 2: Calculate the predictions

def show_preds(preds, ax=None):
    if ax is None: ax=plt.subplots()[1]
    ax.scatter(time, speed)
    ax.scatter(time, to_np(preds), color='red')
    ax.set_xlabel('time')
    ax.set_ylabel('predictions')
    ax.set_ylim(-300,100)
preds = f(time, params)
show_preds(preds)

Step 3: Calculate the loss

loss = mse(preds, speed)
loss
tensor(141.6118, grad_fn=<SqrtBackward>)

Step 4: Calculate the gradients

loss.backward()
params.grad
tensor([-164.9038,  -10.5370,   -0.7816])
params.grad * 1e-5
tensor([-1.6490e-03, -1.0537e-04, -7.8159e-06])
params
tensor([-0.7244,  0.3629,  1.9200], requires_grad=True)

Step 5: Step the weights

lr = 1e-3
params.data -= lr * params.grad.data
params.grad = None
preds = f(time,params)
mse(preds, speed)
tensor(114.4197, grad_fn=<SqrtBackward>)
show_preds(preds)
def apply_step(params, prn=True):
    preds = f(time, params)
    loss = mse(preds, speed)
    loss.backward()
    params.data -= lr * params.grad.data
    params.grad = None
    if prn: print(loss.item())
    return preds

Step 6: Repeat the process

for i in range(10): apply_step(params)
114.41970825195312
87.83831024169922
62.552642822265625
40.56702423095703
27.578344345092773
25.898353576660156
25.897171020507812
25.897003173828125
25.89684295654297
25.896682739257812
_,axs = plt.subplots(1,5,figsize=(15,3))
for ax in axs: show_preds(apply_step(params, False), ax)
plt.tight_layout()

Step 7: Stop

Summarizing Gradient Descent

gv('''
init->predict->loss->gradient->step->stop
step->predict[label=repeat]
''')
(rendered Graphviz flowchart: init -> predict -> loss -> gradient -> step -> stop, with a "repeat" arrow from step back to predict)

The MNIST Loss Function

train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)
train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1)
train_x.shape, train_y.shape
(torch.Size([12396, 784]), torch.Size([12396, 1]))
dset = list(zip(train_x,train_y))
x,y = dset[0]
len(dset), x.shape, y.shape
(12396, torch.Size([784]), torch.Size([1]))
valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)
valid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)
valid_dset = list(zip(valid_x,valid_y))
def init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_()
weights = init_params((28*28,1))
bias = init_params(1)
(train_x[0]*weights.T).sum() + bias
tensor([3.7602], grad_fn=<AddBackward0>)
def linear1(xb): return xb@weights + bias
preds = linear1(train_x)
preds
tensor([[ 3.7602],
        [10.0223],
        [15.1395],
        ...,
        [ 4.7646],
        [ 1.8502],
        [ 3.3399]], grad_fn=<AddBackward0>)
corrects = (preds>0.0).float() == train_y
corrects
tensor([[ True],
        [ True],
        [ True],
        ...,
        [False],
        [False],
        [False]])
corrects.float().mean().item()
0.4820910096168518
with torch.no_grad(): weights[0] *= 1.0001  # in-place edit of a leaf tensor that requires grad needs no_grad
weights.shape
torch.Size([784, 1])
preds = linear1(train_x)
((preds>0.0).float() == train_y).float().mean().item()
0.4820910096168518
trgts  = tensor([1,0,1])
prds   = tensor([0.9, 0.4, 0.2])
def mnist_loss(predictions, targets):
    return torch.where(targets==1, 1-predictions, predictions).mean()
torch.where(trgts==1, 1-prds, prds)
tensor([0.1000, 0.4000, 0.8000])
mnist_loss(prds,trgts)
tensor(0.4333)
mnist_loss(tensor([0.9, 0.4, 0.8]),trgts)
tensor(0.2333)

Sigmoid

def sigmoid(x): return 1/(1+torch.exp(-x))
plot_function(torch.sigmoid, title='Sigmoid', min=-6, max=6)
def mnist_loss(predictions, targets):
    predictions = predictions.sigmoid()
    return torch.where(targets==1, 1-predictions, predictions).mean()
mnist_loss(tensor([1., 0., 1.]), tensor([1.,0.,1.]))
tensor(0.3460)
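Why squash with sigmoid first (a quick illustration of my own): the raw model outputs are not confined to [0, 1], so without the sigmoid, 1 - prediction inside the loss could go negative.

raw = tensor([3.76, -1.4])   # raw activations like the ones linear1 produces
print(raw.sigmoid())         # roughly tensor([0.9772, 0.1978]); always strictly between 0 and 1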

SGD and Mini-Batches

coll = range(15)
dl = DataLoader(coll, batch_size=5, shuffle=True)
list(dl)
[tensor([ 3, 12,  8, 10,  2]),
 tensor([ 9,  4,  7, 14,  5]),
 tensor([ 1, 13,  0,  6, 11])]
ds = L(enumerate(string.ascii_lowercase))
ds
(#26) [(0, 'a'),(1, 'b'),(2, 'c'),(3, 'd'),(4, 'e'),(5, 'f'),(6, 'g'),(7, 'h'),(8, 'i'),(9, 'j')...]
dl = DataLoader(ds, batch_size=6, shuffle=True)
list(dl)
[(tensor([17, 18, 10, 22,  8, 14]), ('r', 's', 'k', 'w', 'i', 'o')),
 (tensor([20, 15,  9, 13, 21, 12]), ('u', 'p', 'j', 'n', 'v', 'm')),
 (tensor([ 7, 25,  6,  5, 11, 23]), ('h', 'z', 'g', 'f', 'l', 'x')),
 (tensor([ 1,  3,  0, 24, 19, 16]), ('b', 'd', 'a', 'y', 't', 'q')),
 (tensor([2, 4]), ('c', 'e'))]

Putting It All Together

weights = init_params((28*28,1))
bias = init_params(1)
dl = DataLoader(dset, batch_size=256)
xb,yb = first(dl)
xb.shape,yb.shape
(torch.Size([256, 784]), torch.Size([256, 1]))
valid_dl = DataLoader(valid_dset, batch_size=256)
batch = train_x[:4]
batch.shape
torch.Size([4, 784])
preds = linear1(batch)
preds
tensor([[ 9.7460],
        [14.8540],
        [ 6.6535],
        [11.8257]], grad_fn=<AddBackward0>)
loss = mnist_loss(preds, train_y[:4])
loss
tensor(0.0003, grad_fn=<MeanBackward0>)
loss.backward()
weights.grad.shape,weights.grad.mean(),bias.grad
(torch.Size([784, 1]), tensor(-4.8543e-05), tensor([-0.0003]))
def calc_grad(xb, yb, model):
    preds = model(xb)
    loss = mnist_loss(preds, yb)
    loss.backward()
calc_grad(batch, train_y[:4], linear1)
weights.grad.mean(),bias.grad
(tensor(-9.7085e-05), tensor([-0.0007]))
calc_grad(batch, train_y[:4], linear1)
weights.grad.mean(),bias.grad
(tensor(-0.0001), tensor([-0.0010]))
weights.grad.zero_()
bias.grad.zero_();
def train_epoch(model, lr, params):
    for xb,yb in dl:
        calc_grad(xb, yb, model)
        for p in params:
            p.data -= p.grad*lr
            p.grad.zero_()
(preds>0.0).float() == train_y[:4]
tensor([[True],
        [True],
        [True],
        [True]])
def batch_accuracy(xb, yb):
    preds = xb.sigmoid()
    correct = (preds>0.5) == yb
    return correct.float().mean()
batch_accuracy(linear1(batch), train_y[:4])
tensor(1.)
def validate_epoch(model):
    accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl]
    return round(torch.stack(accs).mean().item(), 4)
validate_epoch(linear1)
0.6554
lr = 1.
params = weights,bias
train_epoch(linear1, lr, params)
validate_epoch(linear1)
0.7372
for i in range(20):
    train_epoch(linear1, lr, params)
    print(validate_epoch(linear1), end=' ')
0.8886 0.932 0.9457 0.954 0.9584 0.9598 0.9623 0.9643 0.9648 0.9662 0.9672 0.9677 0.9682 0.9696 0.9701 0.9701 0.9706 0.9716 0.9721 0.9721 

Creating an Optimizer

linear_model = nn.Linear(28*28,1)
w,b = linear_model.parameters()
w.shape,b.shape
(torch.Size([1, 784]), torch.Size([1]))
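Note that nn.Linear stores its weight as (out_features, in_features), the transpose of the (784, 1) tensor we built by hand. A quick check (my addition) that it computes the same thing as our manual version:

# nn.Linear applies xb @ weight.T + bias under the hood
manual = batch @ w.T + b
module = linear_model(batch)
print(torch.allclose(manual, module))  # True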
class BasicOptim:
    def __init__(self,params,lr): self.params,self.lr = list(params),lr

    def step(self, *args, **kwargs):
        for p in self.params: p.data -= p.grad.data * self.lr

    def zero_grad(self, *args, **kwargs):
        for p in self.params: p.grad = None
opt = BasicOptim(linear_model.parameters(), lr)
def train_epoch(model):
    for xb,yb in dl:
        calc_grad(xb, yb, model)
        opt.step()
        opt.zero_grad()
validate_epoch(linear_model)
0.4562
def train_model(model, epochs):
    for i in range(epochs):
        train_epoch(model)
        print(validate_epoch(model), end=' ')
train_model(linear_model, 20)
0.4932 0.7354 0.8564 0.9179 0.9345 0.9487 0.957 0.9638 0.9658 0.9672 0.9692 0.9716 0.9746 0.9746 0.976 0.9765 0.9775 0.9775 0.978 0.9785 
linear_model = nn.Linear(28*28,1)
opt = SGD(linear_model.parameters(), lr)
train_model(linear_model, 20)
0.4932 0.8091 0.8481 0.916 0.9336 0.9487 0.956 0.9638 0.9663 0.9682 0.9692 0.9721 0.9736 0.9751 0.976 0.977 0.9775 0.978 0.978 0.9785 
dls = DataLoaders(dl, valid_dl)
learn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(20, lr=lr)
epoch train_loss valid_loss batch_accuracy time
0 0.637234 0.503652 0.495584 00:00
1 0.622813 0.099536 0.934740 00:00
2 0.224453 0.229068 0.785574 00:00
3 0.096522 0.119550 0.899411 00:00
4 0.049351 0.083758 0.929343 00:00
5 0.031013 0.065843 0.941119 00:00
6 0.023518 0.055016 0.954367 00:00
7 0.020242 0.047905 0.961236 00:00
8 0.018628 0.042962 0.964671 00:00
9 0.017691 0.039352 0.967125 00:00
10 0.017050 0.036603 0.968597 00:00
11 0.016553 0.034435 0.971541 00:00
12 0.016141 0.032674 0.972522 00:00
13 0.015789 0.031212 0.973994 00:00
14 0.015485 0.029979 0.975957 00:00
15 0.015220 0.028927 0.976448 00:00
16 0.014989 0.028021 0.977429 00:00
17 0.014785 0.027235 0.977429 00:00
18 0.014602 0.026546 0.977920 00:00
19 0.014437 0.025939 0.978410 00:00

Adding a Nonlinearity

def simple_net(xb): 
    res = xb@w1 + b1            # first linear layer
    res = res.max(tensor(0.0))  # ReLU: replace every negative with zero
    res = res@w2 + b2           # second linear layer
    return res
w1 = init_params((28*28,30))
b1 = init_params(30)
w2 = init_params((30,1))
b2 = init_params(1)
plot_function(F.relu)
simple_net = nn.Sequential(
    nn.Linear(28*28,30),
    nn.ReLU(),
    nn.Linear(30,1)
)
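To see what nn.Sequential actually created (my own quick inspection, not in the original notes): layers 0 and 2 are the two Linear layers, while layer 1, the ReLU, has no parameters.

for name, p in simple_net.named_parameters(): print(name, p.shape)
# 0.weight torch.Size([30, 784]), 0.bias torch.Size([30]),
# 2.weight torch.Size([1, 30]), 2.bias torch.Size([1])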
learn = Learner(dls, simple_net, opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(40, 0.1)
epoch train_loss valid_loss batch_accuracy time
0 0.316475 0.418682 0.504416 00:00
1 0.149121 0.231948 0.799313 00:00
2 0.082135 0.115575 0.915604 00:00
3 0.053724 0.077822 0.940137 00:00
4 0.040625 0.060725 0.955839 00:00
5 0.033974 0.051135 0.964181 00:00
6 0.030173 0.045101 0.965653 00:00
7 0.027711 0.040987 0.965653 00:00
8 0.025942 0.037986 0.968106 00:00
9 0.024574 0.035688 0.971050 00:00
10 0.023464 0.033859 0.972522 00:00
11 0.022537 0.032356 0.973013 00:00
12 0.021745 0.031094 0.973503 00:00
13 0.021058 0.030013 0.974975 00:00
14 0.020455 0.029072 0.975466 00:00
15 0.019920 0.028244 0.976938 00:00
16 0.019441 0.027508 0.977920 00:00
17 0.019008 0.026849 0.977920 00:00
18 0.018614 0.026254 0.977920 00:00
19 0.018253 0.025716 0.978901 00:00
20 0.017922 0.025225 0.978901 00:00
21 0.017615 0.024774 0.979392 00:00
22 0.017330 0.024360 0.979882 00:00
23 0.017064 0.023978 0.980373 00:00
24 0.016814 0.023624 0.980864 00:00
25 0.016579 0.023295 0.980864 00:00
26 0.016357 0.022990 0.981845 00:00
27 0.016148 0.022705 0.981845 00:00
28 0.015949 0.022439 0.982336 00:00
29 0.015761 0.022191 0.982336 00:00
30 0.015581 0.021958 0.982336 00:00
31 0.015410 0.021740 0.983317 00:00
32 0.015247 0.021534 0.983808 00:00
33 0.015091 0.021340 0.983808 00:00
34 0.014941 0.021157 0.983808 00:00
35 0.014797 0.020985 0.983317 00:00
36 0.014659 0.020822 0.983317 00:00
37 0.014525 0.020667 0.983317 00:00
38 0.014396 0.020520 0.983317 00:00
39 0.014272 0.020378 0.983317 00:00
plt.plot(L(learn.recorder.values).itemgot(2));
learn.recorder.values[-1][2]
0.983316957950592

Going Deeper

dls = ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet18, pretrained=False,
                    loss_func=F.cross_entropy, metrics=accuracy)
learn.fit_one_cycle(1, 0.1)
epoch train_loss valid_loss accuracy time
0 0.111226 0.045557 0.993621 00:19

Jargon Recap

Questionnaire

  1. How is a grayscale image represented on a computer? How about a color image?
  2. How are the files and folders in the MNIST_SAMPLE dataset structured? Why?
  3. Explain how the "pixel similarity" approach to classifying digits works.
  4. What is a list comprehension? Create one now that selects odd numbers from a list and doubles them.
  5. What is a "rank-3 tensor"?
  6. What is the difference between tensor rank and shape? How do you get the rank from the shape?
  7. What are RMSE and L1 norm?
  8. How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?
  9. Create a 3×3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers.
  10. What is broadcasting?
  11. Are metrics generally calculated using the training set, or the validation set? Why?
  12. What is SGD?
  13. Why does SGD use mini-batches?
  14. What are the seven steps in SGD for machine learning?
  15. How do we initialize the weights in a model?
  16. What is "loss"?
  17. Why can't we always use a high learning rate?
  18. What is a "gradient"?
  19. Do you need to know how to calculate gradients yourself?
  20. Why can't we use accuracy as a loss function?
  21. Draw the sigmoid function. What is special about its shape?
  22. What is the difference between a loss function and a metric?
  23. What is the function to calculate new weights using a learning rate?
  24. What does the DataLoader class do?
  25. Write pseudocode showing the basic steps taken in each epoch for SGD.
  26. Create a function that, if passed two arguments [1,2,3,4] and 'abcd', returns [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]. What is special about that output data structure?
  27. What does view do in PyTorch?
  28. What are the "bias" parameters in a neural network? Why do we need them?
  29. What does the @ operator do in Python?
  30. What does the backward method do?
  31. Why do we have to zero the gradients?
  32. What information do we have to pass to Learner?
  33. Show Python or pseudocode for the basic steps of a training loop.
  34. What is "ReLU"? Draw a plot of it for values from -2 to +2.
  35. What is an "activation function"?
  36. What's the difference between F.relu and nn.ReLU?
  37. The universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more?

Further Research

  1. Create your own implementation of Learner from scratch, based on the training loop shown in this chapter.
  2. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way.