---
title: "Case Study 2: IMDB -- Binary Classification of Movie Reviews"
output: html_notebook
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, warning = FALSE, message = FALSE)
ggplot2::theme_set(ggplot2::theme_minimal())
```

In this case study, our objective is to classify movie reviews as positive or
negative. This is a classic _binary classification_ problem, which aims to
predict one of two classes (_positive vs. negative_). To predict whether a
review is positive or negative, we will use the text of the movie review. [ℹ️](http://bit.ly/dl-02#21)

Throughout this case study you will learn a few new concepts:

* Vectorizing text with one-hot encoding
* Regularization with:
   - Learning rate
   - Model capacity
   - Weight decay
   - Dropout

# Package requirements

```{r load-pkgs}
library(keras)     # for deep learning
library(tidyverse) # for dplyr, ggplot2, etc.
library(testthat)  # unit testing
library(glue)      # easy print statements
```

# The IMDB dataset

Our data consists of 50,000 movie reviews from [IMDB](https://www.imdb.com/).
This data has been curated and supplied to us via keras; however, tomorrow we
will go through the process of preprocessing the original data on our own. First,
let's grab our data and unpack them into training vs test and features vs labels.

```{r get-data}
imdb <- dataset_imdb(num_words = 10000)
c(c(reviews_train, y_train), c(reviews_test, y_test)) %<-% imdb

length(reviews_train)   # 25K reviews in our training data
length(reviews_test)    # 25K reviews in our test data
```

# Understanding our data

The reviews have been preprocessed, and each review is encoded as a sequence of 
word indexes (integers). For convenience, words are indexed by overall frequency 
in the dataset. For example, the integer "14" encodes the 14th most frequent 
word in the data. Actually, the integers 0, 1, and 2 are reserved to identify:

1. padding (0)
2. the start of a sequence (1)
3. unknown words (2)

so the word indices are offset by 3 and the integer "14" represents the
$14 - 3 = 11$th most frequent word.

```{r first-review}
reviews_train[[1]]
```
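
Each review is a variable-length integer vector, so it can be helpful to get a
feel for how long the reviews are. The following is purely exploratory and not
required for the rest of the analysis:

```{r review-lengths}
# distribution of encoded review lengths (number of word indices per review)
reviews_train %>% 
  map_int(length) %>% 
  summary()
```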

We can map the integer values back to the original word index
(`dataset_imdb_word_index()`). This returns a named list where each value is a
word's frequency rank and each name is the word itself; sorting by rank and
keeping only the names gives us a vector whose $i$th element is the $i$th most
frequent word.

```{r map-review-to-words}
word_index <- dataset_imdb_word_index() %>% 
  unlist() %>%                                 
  sort() %>%                                   
  names()                                      

# The indices are offset by 3 since 0, 1, and 2 are reserved for "padding", 
# "start of sequence", and "unknown"
reviews_train[[1]] %>% 
  map_chr(~ ifelse(.x >= 3, word_index[.x - 3], "<UNK>")) %>%
  cat()
```
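
If you find yourself decoding reviews repeatedly, the same logic can be wrapped
in a small helper. Note that `decode_review()` is just a convenience function
sketched for this notebook; it is not part of keras:

```{r decode-review-helper}
# convenience wrapper around the decoding logic above
decode_review <- function(encoded_review, words = word_index) {
  encoded_review %>% 
    map_chr(~ ifelse(.x >= 3, words[.x - 3], "<UNK>")) %>% 
    paste(collapse = " ")
}

# e.g., decode the second review in the training data
decode_review(reviews_train[[2]])
```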

Our response variable is just a vector of 1s (positive reviews) and 0s (negative
reviews).

```{r labels}
str(y_train)

# our labels are equally balanced between positive (1s) and negative (0s)
# reviews
table(y_train)
```

# Preparing the features

All inputs and response values in a neural network must be tensors of either 
floating-point or integer data. Moreover, our feature values should not be
relatively large compared to the randomized initial weights _and_ all our 
features should take values in roughly the same range.

Consequently, we need to _vectorize_ our data into a format conducive to neural 
networks. For this data set, we'll transform our list of movie reviews into a
2D tensor of 0s and 1s indicating whether each word was used (aka one-hot encode).
[ℹ️](http://bit.ly/dl-02#22)

```{r prep-features}
# number of unique words will be the number of features
n_features <- c(reviews_train, reviews_test) %>%  
  unlist() %>% 
  max()

# function to create 2D tensor (aka matrix)
vectorize_sequences <- function(sequences, dimension = n_features) {
  # Create a matrix of 0s
  results <- matrix(0, nrow = length(sequences), ncol = dimension)

  # Populate the matrix with 1s
  for (i in seq_along(sequences))
    results[i, sequences[[i]]] <- 1
  results
}

# apply to training and test data
x_train <- vectorize_sequences(reviews_train)
x_test <- vectorize_sequences(reviews_test)

# unit testing to make sure certain attributes hold
expect_equal(ncol(x_train), n_features)
expect_equal(nrow(x_train), length(reviews_train))
expect_equal(nrow(x_test), length(reviews_test))
```
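
To make the encoding concrete, here is a toy, purely illustrative example: three
short "reviews" over a six-word vocabulary. Notice that the result only records
_whether_ a word appeared, not how often or in what order:

```{r toy-one-hot}
# three fake "reviews" over a 6-word vocabulary
toy_reviews <- list(c(1, 3, 5), c(2, 2, 6), c(4))
vectorize_sequences(toy_reviews, dimension = 6)
```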

Our transformed feature set is now just a matrix (2D tensor) with 25K rows and
roughly 10K columns (features).

```{r}
dim(x_train)
```

Let's check out the first 10 rows and columns:

```{r}
x_train[1:10, 1:10]
```


# Preparing the labels

In contrast to MNIST, the labels for binary classification are just one of two
values: 0 (negative) or 1 (positive). We do not need to do any further
preprocessing.

```{r prep-labels}
str(y_train)
```


# Initial model

Since we are performing binary classification, our output activation function 
will be the _sigmoid activation function_ [ℹ️](http://bit.ly/dl-01#44). Recall 
that the sigmoid activation is used to predict the probability of the output
being positive, so it constrains our output to values between 0 and 1 (i.e., 0-100%).
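
For reference, the sigmoid function

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

squashes any real-valued input into $(0, 1)$. A quick base R illustration:

```{r sigmoid-demo}
# sigmoid maps any real value to a probability between 0 and 1
sigmoid <- function(z) 1 / (1 + exp(-z))
sigmoid(c(-4, 0, 4))
```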

```{r architecture}
network <- keras_model_sequential() %>% 
  layer_dense(units = 16, activation = "relu", input_shape = n_features) %>% 
  layer_dense(units = 16, activation = "relu") %>% 
  layer_dense(units = 1, activation = "sigmoid")
```

```{r summary}
summary(network)
```
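
As a sanity check on the parameter counts that `summary()` reports: a dense
layer has one weight per input per unit, plus one bias per unit.

```{r param-count}
# first hidden layer: n_features weights per unit plus 16 biases
n_features * 16 + 16

# second hidden layer: 16 * 16 + 16 = 272; output layer: 16 * 1 + 1 = 17
```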

We're going to use _binary crossentropy_ since we only have two possible classes.
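
For a single review with true label $y \in \{0, 1\}$ and predicted probability
$\hat{y}$, binary crossentropy is

$$\ell(y, \hat{y}) = -\big[y \log(\hat{y}) + (1 - y) \log(1 - \hat{y})\big]$$

and the loss reported during training is the mean of $\ell$ over the batch.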

```{r compile}
network %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
```

Now let's train our network for 20 epochs and we'll use a batch size of 512
because, as you'll find out, this model overfits very quickly (remember, larger
batch sizes compute more accurate gradients but take fewer, smoother steps per
epoch, so they traverse the loss surface more slowly).

```{r train}
history <- network %>% fit(
  x_train,
  y_train,
  epochs = 20,
  batch_size = 512,
  validation_split = 0.2
)
```

Check out our initial results:

```{r initial-results}
best_epoch <- which.min(history$metrics$val_loss)
best_loss <- history$metrics$val_loss[best_epoch] %>% round(3)
best_acc <- history$metrics$val_accuracy[best_epoch] %>% round(3)

glue("Our optimal loss is {best_loss} with an accuracy of {best_acc*100}%")
```

In the previous module, we had the problem of underfitting; however, looking at
our learning curve for this model it's obvious that we have an overfitting
problem.

```{r initial-results-plot}
plot(history)
```

## YOUR TURN (3 min)

Using what you learned in the last module, make modifications to this model such
as:

1. Increasing or decreasing the number of units and layers
2. Adjusting the learning rate
3. Adjusting the batch size
4. Adding callbacks (e.g., early stopping, learning rate adjuster)

```{r your-turn-1}
network <- keras_model_sequential() %>% 
  layer_dense(units = ____, activation = "relu", input_shape = n_features) %>% 
  layer_dense(units = ____, activation = "relu") %>% 
  layer_dense(units = 1, activation = "sigmoid")

network %>% compile(
  optimizer = ____, 
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)

history <- network %>% fit(
  x_train,
  y_train,
  epochs = 20,
  batch_size = ____,
  validation_split = 0.2
)
```

Regardless of what you tried above, you likely had results that consistently
overfit. Our quest is to see if we can control this overfitting. Often, when we
control the overfitting we improve model performance and generalizability. To
reduce overfitting we are going to look at a few common ways to ___regularize___
our model.

# Regularizing how quickly the model learns

Recall that the learning rate controls how large a step we take as we descend
the gradient of the loss. When the loss curve has a sharp U shape, this can
indicate that your learning rate is too large.

The default learning rate for RMSprop is 0.001 (`?optimizer_rmsprop()`). Reducing
the learning rate will allow us to traverse the gradient more cautiously.
Although the learning rate is not traditionally considered a "regularization"
hyperparameter, it should be the first hyperparameter you start assessing.

Best practice:

- When tuning the learning rate, we often try factors of $10^{-s}$ where $s$ ranges
  between 1-6 (0.1, 0.01, ..., 0.000001); see the snippet after this list.
- Add `callback_reduce_lr_on_plateau()` to automatically adjust the learning rate
  during training.
- As you reduce the learning rate, reduce the batch size
   - Adds stochastic nature to reduce chance of getting stuck in local minimum
   - Speeds up training (small learning rate + large batch size = SLOW!)
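
As a concrete starting point for the first bullet above, the candidate learning
rates are easy to generate and drop into the optimizer (the `lr_grid` object is
just illustrative):

```{r lr-grid}
# candidate learning rates: 10^-s for s = 1..6
lr_grid <- 10^(-(1:6))
lr_grid

# e.g., compile with the fourth candidate:
# optimizer_rmsprop(lr = lr_grid[4])
```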

```{r regularize-lr}
network <- keras_model_sequential() %>% 
  layer_dense(units = 16, activation = "relu", input_shape = n_features) %>% 
  layer_dense(units = 16, activation = "relu") %>% 
  layer_dense(units = 1, activation = "sigmoid")

network %>% compile(
  optimizer = optimizer_rmsprop(lr = 0.0001),        # regularization parameter
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)

history <- network %>% fit(
  x_train,
  y_train,
  epochs = 25,
  batch_size = 128,
  validation_split = 0.2,
  callbacks = list(
    callback_reduce_lr_on_plateau(patience = 3),     # regularization parameter
    callback_early_stopping(patience = 7)
  )
)
```

Our results show a decrease in overfitting and an improvement in our loss score
and (possibly) accuracy.

```{r regularize-lr-results}
best_epoch <- which.min(history$metrics$val_loss)
best_loss <- history$metrics$val_loss[best_epoch] %>% round(3)
best_acc <- history$metrics$val_accuracy[best_epoch] %>% round(3)

glue("Our optimal loss is {best_loss} with an accuracy of {best_acc}")
```

```{r regularize-lr-results-plot, message=FALSE}
plot(history) + 
  scale_x_continuous(limits = c(0, length(history$metrics$val_loss)))
```

# Regularizing model capacity

In the last module, we discussed how we could add model capacity by increasing
the number of units in each hidden layer and/or the number of layers to reduce
underfitting. We can also reduce these parameters to regularize model capacity.

In the last module, we changed model capacity manually. Here, we'll use a 
custom function and a `for` loop to automate this process.

## Variant 1: Larger or smaller layers?

Here, we'll try a range of hidden layer sizes, from $2^2 = 4$ to $2^8 = 256$
neurons in each hidden layer.

To do this, we'll define a function `dl_model` that allows us to define 
and compile our DL network with the specified number of neurons based on $2^n$. 
This function returns a data frame with the training and validation loss and 
accuracy for each epoch and number of neurons:

```{r powerto-function}
dl_model <- function(powerto = 6) {
  
  network <- keras_model_sequential() %>%
    layer_dense(units = 2^powerto, activation = "relu",     # regularizing param
                input_shape = n_features) %>% 
    layer_dense(units = 2^powerto, activation = "relu") %>% # regularizing param
    layer_dense(units = 1, activation = "sigmoid") 
  
  network %>% compile(
      optimizer = "rmsprop",
      loss = "binary_crossentropy",
      metrics = c("accuracy")
      )
  
  history <- network %>% 
    fit(
      x_train,
      y_train, 
      epochs = 20,
      batch_size = 512,
      validation_split = 0.2,
      verbose = FALSE,
      callbacks = callback_early_stopping(patience = 5)
    )
  
  output <- as.data.frame(history) %>%
    mutate(neurons = 2^powerto)
  
  return(output)
  }
```
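
Before looping, you can optionally eyeball a single setting. This chunk is
marked `eval=FALSE` only because each call trains a full model:

```{r dl-model-example, eval=FALSE}
# a single run with 2^5 = 32 neurons per hidden layer; the output has one row
# per epoch/metric/data-split combination
single_run <- dl_model(powerto = 5)
head(single_run)
```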

Let's also define a helper function that simply pulls out the minimum loss score 
from the above output (this is not necessary, just informational):

```{r helper-fx}
get_min_loss <- function(output) {
  output %>%
    filter(data == "validation", metric == "loss") %>%
    summarize(min_loss = min(value, na.rm = TRUE)) %>%
    pull(min_loss) %>%
    round(3)
}
```

Now we can iterate over $2^2 = 4$ to $2^8 = 256$ neurons in each layer:

```{r iterate-over-n-neurons}
# so that we can store results
results <- data.frame()
powerto_range <- 2:8

for (i in powerto_range) {
  cat("Running model with", 2^i, "neurons per hidden layer: ")
  m <- dl_model(i)
  results <- rbind(results, m)
  loss <- get_min_loss(m)
  cat(loss, "\n", append = TRUE)
}
```

The above results indicate that we may actually improve our optimal loss score
as we constrain the size of our hidden layers. The plot below shows that we
definitely reduce overfitting.

```{r plot-results, warning=FALSE}
min_loss <- results %>%
  filter(metric == "loss" & data == "validation") %>%
  summarize(min_loss = min(value, na.rm = TRUE)) %>%
  pull()

results %>%
  filter(metric == "loss") %>%
  ggplot(aes(epoch, value, color = data)) +
  geom_line() +
  geom_hline(yintercept = min_loss, lty = "dashed") +
  facet_wrap(~ neurons) +
  theme_bw()
```


## Variant 2: More or fewer layers?

We can take a similar approach to assess the impact that the number of layers 
has on model performance. The following modifies `dl_model` so that we can 
dynamically alter the number of layers and neurons.

```{r nlayers-function}
dl_model <- function(nlayers = 2, powerto = 4) {
  
  # Create a model with a single hidden input layer
  network <- keras_model_sequential() %>%
    layer_dense(units = 2^powerto, activation = "relu", input_shape = n_features)
  
  # regularizing parameter --> Add additional hidden layers based on input
  if (nlayers > 1) {
    # add nlayers - 1 additional hidden layers
    for (i in seq_len(nlayers - 1)) {
      network %>% layer_dense(units = 2^powerto, activation = "relu")
    }
  }
  
  # Add final output layer
  network %>% layer_dense(units = 1, activation = "sigmoid")
  
  # Add compile step
  network %>% compile(
      optimizer = "rmsprop",
      loss = "binary_crossentropy",
      metrics = c("accuracy")
      )
  
  # Train model
  history <- network %>% 
    fit(
      x_train,
      y_train, 
      epochs = 25,
      batch_size = 512,
      validation_split = 0.2,
      verbose = FALSE,
      callbacks = callback_early_stopping(patience = 5)
    )
  
  # Create formatted output for downstream plotting & analysis
  output <- as.data.frame(history) %>%
    mutate(nlayers = nlayers, neurons = 2^powerto)
  
  return(output)
  }
```

Now we can iterate over a range of layers and neurons in each layer to assess 
the impact on performance. In the interest of time, we'll use hidden layers with
16 nodes and just assess the impact of adding more layers:

```{r iterate-over-n-layers}
# so that we can store results
results <- data.frame()
nlayers <- 1:6

for (i in nlayers) {
  cat("Running model with", i, "hidden layer(s) and 16 neurons per layer: ")
  m <- dl_model(nlayers = i, powerto = 4)
  results <- rbind(results, m)
  loss <- get_min_loss(m)
  cat(loss, "\n", append = TRUE)
}
```

It's unclear how much improvement in the minimum loss score we get from the
above results; however, the plot below illustrates that our 1-2 layer models
overfit less than the deeper models.

```{r plot-results2, warning=FALSE}
min_loss <- results %>%
  filter(metric == "loss" & data == "validation") %>%
  summarize(min_loss = min(value, na.rm = TRUE)) %>%
  pull()

results %>%
  filter(metric == "loss") %>%
  ggplot(aes(epoch, value, color = data)) +
  geom_line() +
  geom_hline(yintercept = min_loss, lty = "dashed") +
  facet_wrap(~ nlayers, ncol = 3) +
  theme_bw()
```

# Regularizing the size of weights

A common way to mitigate overfitting is to put constraints on the complexity of
a network by forcing its weights to take on small values, which makes the
distribution of weight values more regular. This is called _weight regularization_
and it's done by adding to the loss function of the network a cost associated
with having large weights.
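
Concretely, with L2 regularization the quantity being minimized becomes

$$\text{loss}_{reg} = \text{loss} + \lambda \sum_j w_j^2$$

where the sum runs over the weights of each layer the penalty is applied to and
$\lambda$ corresponds to the `l` argument of `regularizer_l2()`.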

If you are familiar with regularized regression [ℹ️](http://bit.ly/homlr-regularize)
(lasso, ridge, elastic nets), then weight regularization is essentially the same
thing. [ℹ️](http://bit.ly/dl-02#23)

Best practice:

- Although you can use L1, L2 or a combination, L2 is by far the most common and
  is known as _weight decay_ in the context of neural nets.
- Optimal values vary but when tuning we typically start with factors of $10^{-s}$
  where $s$ ranges between 1-4 (0.1, 0.01, ..., 0.0001).
- The larger the weight regularizer, the more epochs generally required to reach
  a minimum loss.
- Weight decay can cause a noisier learning curve, so it's often beneficial to
  increase the `patience` parameter for early stopping.

```{r}
network <- keras_model_sequential() %>%
  layer_dense(
    units = 16, activation = "relu", input_shape = n_features,
    kernel_regularizer = regularizer_l2(l = 0.01)    # regularization parameter
    ) %>%
  layer_dense(
    units = 16, activation = "relu",
    kernel_regularizer = regularizer_l2(l = 0.01)    # regularization parameter
    ) %>%
  layer_dense(units = 1, activation = "sigmoid")

network %>% compile(
    optimizer = "rmsprop", 
    loss = loss_binary_crossentropy,
    metrics = c("accuracy")
)

history <- network %>% fit(
    x_train,
    y_train,
    epochs = 100,
    batch_size = 512,
    validation_split = 0.2,
    callbacks = callback_early_stopping(patience = 15)
)
```

Unfortunately, in this example, weight decay negatively impacts performance. The
impact of weight decay is largely problem and data specific.

```{r regularize-weights-results}
best_epoch <- which.min(history$metrics$val_loss)
best_loss <- history$metrics$val_loss[best_epoch] %>% round(3)
best_acc <- history$metrics$val_accuracy[best_epoch] %>% round(3)

glue("Our optimal loss is {best_loss} with an accuracy of {best_acc}")
```

```{r regularize-weights-results-plot, message=FALSE}
plot(history) + 
  scale_x_continuous(limits = c(0, length(history$metrics$val_loss)))
```

# Regularizing happenstance patterns

_Dropout_ is one of the most effective and commonly used regularization
techniques for neural networks. Dropout applied to a layer randomly drops out
(sets to zero) a certain percentage of the output features of that layer. By
randomly dropping some of a layer's outputs we minimize the chance of fitting
patterns to noise in the data, a common cause of overfitting. 
[ℹ️](http://bit.ly/dl-02#25)
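
The following is a purely illustrative sketch of the mechanism. Keras applies
"inverted dropout": during training the surviving activations are scaled by
$1 / (1 - \text{rate})$ so that their expected magnitude is unchanged at
inference time, when no units are dropped.

```{r dropout-demo}
set.seed(123)
activations <- runif(8)
rate <- 0.5

# zero out each activation with probability `rate`, then rescale the survivors
mask <- rbinom(length(activations), size = 1, prob = 1 - rate)
rbind(original = activations, dropped = activations * mask / (1 - rate))
```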

Best practice:

- Dropout rates typically range between 0.2-0.5. Sometimes higher rates are
  necessary, but note that you will get a warning when supplying rate > 0.5.
- The higher the dropout rate, the slower the convergence, so you may need to
  increase the number of epochs.
- It's common to apply dropout after each hidden layer and with the same rate;
  however, this is not necessary.

```{r}
network <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = n_features) %>%
  layer_dropout(0.6) %>%                            # regularization parameter
  layer_dense(units = 16, activation = "relu") %>%
  layer_dropout(0.6) %>%                            # regularization parameter
  layer_dense(units = 1, activation = "sigmoid")

network %>% compile(
    optimizer = "rmsprop", 
    loss = loss_binary_crossentropy,
    metrics = c("accuracy")
)

history <- network %>% fit(
    x_train,
    y_train,
    epochs = 100,
    batch_size = 512,
    validation_split = 0.2,
    callbacks = callback_early_stopping(patience = 10)
)
```

Similar to weight regularization, the impact of dropout is largely problem and 
data specific. In this example, we do not see a significant improvement.

```{r regularize-dropout-results}
best_epoch <- which.min(history$metrics$val_loss)
best_loss <- history$metrics$val_loss[best_epoch] %>% round(3)
best_acc <- history$metrics$val_accuracy[best_epoch] %>% round(3)

glue("Our optimal loss is {best_loss} with an accuracy of {best_acc}")
```

```{r regularize-dropout-results-plot, message=FALSE}
plot(history) + 
  scale_x_continuous(limits = c(0, length(history$metrics$val_loss)))
```

# So which is best?

There is no definitive best approach for minimizing overfitting. However,
typically you want to focus first on finding the optimal learning rate and
model capacity that optimizes the loss score. Then move on to fighting
overfitting with dropout or weight decay.

Unfortunately, many of these hyperparameters interact, so changing one can impact
the performance of another. Performing a grid search can help you identify the
optimal combination; however, as your data gets larger or as you start using
more complex models such as CNNs and LSTMs, you are often too constrained by
compute to adequately execute a sizable grid search. Here is a great paper on how
to practically approach hyperparameter tuning for neural networks
(https://arxiv.org/abs/1803.09820).
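
For illustration, here is a minimal sketch of a manual grid search over two of
the hyperparameters discussed above. It assumes the `x_train`, `y_train`, and
`n_features` objects created earlier and is marked `eval=FALSE` because it
trains several models:

```{r manual-grid-search, eval=FALSE}
# a small, hypothetical grid over layer size and dropout rate
grid <- expand.grid(units = c(16, 64), dropout = c(0, 0.3, 0.5))

grid_results <- purrr::pmap_dfr(grid, function(units, dropout) {
  network <- keras_model_sequential() %>%
    layer_dense(units = units, activation = "relu", input_shape = n_features) %>%
    layer_dropout(dropout) %>%
    layer_dense(units = units, activation = "relu") %>%
    layer_dropout(dropout) %>%
    layer_dense(units = 1, activation = "sigmoid")

  network %>% compile(
    optimizer = "rmsprop",
    loss = "binary_crossentropy",
    metrics = c("accuracy")
  )

  history <- network %>% fit(
    x_train, y_train,
    epochs = 20,
    batch_size = 512,
    validation_split = 0.2,
    verbose = FALSE,
    callbacks = callback_early_stopping(patience = 3)
  )

  data.frame(units = units, dropout = dropout,
             min_val_loss = min(history$metrics$val_loss))
})

arrange(grid_results, min_val_loss)
```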

To see the performance of a grid search on this data set and the parameters
discussed here, check out [this notebook](https://rstudio-conf-2020.github.io/dl-keras-tf/notebooks/imdb-grid-search.nb.html).

# Key takeaways

* Preparing text data
   - Text data is usually stored as numeric data representing a word index
   - We typically apply a word limit (e.g., the top 10K, 20K, etc. most frequent words)
   - In this example we one-hot encoded the features into a 2D tensor but
     tomorrow we will look at better approaches
* When our model overfits, regularizing can improve model performance
* Common approaches to regularization
   - learning rate
   - model capacity
   - weight decay
   - dropout