Given that deep learning models can take hours, days, and even weeks to train, it is important to know how to save and load them from disk. In this notebook you will discover how you can save your Keras models to file and load them up again to make predictions. After completing this lesson you will know:

* How to save and load Keras model weights to HDF5 formatted files.
* How to save and load model weights and architecture together.

Keras can separate the concerns of saving your model architecture and saving your model weights. Model weights are saved to the HDF5 format, a grid format that is ideal for storing multi-dimensional arrays of numbers. The model structure can be described and saved (and loaded) on its own, for example as JSON or YAML, or the architecture, weights, and optimizer state can be saved together as a single artifact.

Each example demonstrates saving your model to disk and loading it back again. The examples use a simple network trained on the Pima Indians onset of diabetes binary classification dataset.

Requirements

library(keras)
library(mlbench) # for the data
data(PimaIndiansDiabetes)

head(PimaIndiansDiabetes)

Let’s create and evaluate a simple model so we can verify that each saving approach reproduces the same results.

# prep data
X <- PimaIndiansDiabetes[, 1:8] %>% as.matrix()
Y <- PimaIndiansDiabetes[["diabetes"]]
Y <- ifelse(Y == "neg", 0, 1)

# create model generating function
create_model <- function() {
  model <- keras_model_sequential() %>%
    layer_dense(units = 12, input_shape = ncol(X), activation = "relu") %>%
    layer_dense(units = 8, activation = "relu") %>%
    layer_dense(units = 1, activation = "sigmoid")

  # compile() modifies the model in place
  model %>% compile(
    loss = "binary_crossentropy",
    optimizer = "adam",
    metrics = "accuracy"
  )
  return(model)
}

# build, train and evaluate model
model <- create_model()
model %>% fit(X, Y, epochs = 150, batch_size = 10, verbose = 0)
model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4776175

$accuracy
[1] 0.7591146

Save your model weights to HDF5 format

The Hierarchical Data Format, or HDF5 for short, is a flexible data storage format that is convenient for storing large arrays of real values, such as the weights of a neural network. We can save our model weights to this format as follows.

save_model_weights_hdf5(model, "pima_weights.h5")

Note that this does not save the model architecture. Consequently, we must re-create the model architecture ourselves, load our saved weights into it, and we will then get exactly the same results as before.

# create new model
new_model <- create_model()

# load weights
load_model_weights_hdf5(new_model, "pima_weights.h5")

# evaluate the model
new_model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4776175

$accuracy
[1] 0.7591146
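As noted earlier, the architecture alone can also be serialized separately, for example as a JSON string. A minimal sketch using the keras R package's model_to_json() and model_from_json() helpers (the rebuilt model must be compiled before it can be evaluated):

# serialize the architecture (no weights) to a JSON string
json_string <- model_to_json(model)

# rebuild an untrained model from the JSON description and
# restore the saved weights into it
json_model <- model_from_json(json_string)
load_model_weights_hdf5(json_model, "pima_weights.h5")

# compile before evaluating; this should reproduce the same scores
json_model %>% compile(
  loss = "binary_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)
json_model %>% evaluate(X, Y, verbose = 0)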

Save your model configuration to HDF5 format

We can also save the model configuration along with the weights and optimizer configuration. Saving a fully functional model is very useful: you can load it in TensorFlow.js and then train and run it in a web browser, or convert it to run on mobile devices using TensorFlow Lite. For example, we can inspect our initial model’s architecture:

model

Now we can save it with save_model_hdf5():

model %>% save_model_hdf5("pima_model.h5")

When we reload the model we see that we have the same architecture:

imported_h5_model <- load_model_hdf5("pima_model.h5")
summary(imported_h5_model)
Model: "sequential_12"
___________________________________________________________________________________________
Layer (type)                            Output Shape                         Param #       
===========================================================================================
dense_36 (Dense)                        (None, 12)                           108           
___________________________________________________________________________________________
dense_37 (Dense)                        (None, 8)                            104           
___________________________________________________________________________________________
dense_38 (Dense)                        (None, 1)                            9             
===========================================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
___________________________________________________________________________________________
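These parameter counts follow directly from the layer sizes: the first dense layer has (8 inputs × 12 units) + 12 biases = 108 parameters, the second has (12 × 8) + 8 = 104, and the output layer has (8 × 1) + 1 = 9, giving 221 in total.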

It has also preserved the same weights, as we get the same evaluation results.

imported_h5_model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4776175

$accuracy
[1] 0.7591146
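Beyond matching evaluation metrics, we can sanity-check the weights themselves. A quick sketch comparing the raw weight arrays of the two models:

# the reloaded model should carry numerically identical weight arrays
all.equal(get_weights(model), get_weights(imported_h5_model))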

Since the optimizer state is recovered, you can resume training from exactly where you left off. For example, we can continue training this model for 10 more epochs and see some initial improvement.

imported_h5_model %>% fit(X, Y, epochs = 10, batch_size = 10, verbose = 0)
imported_h5_model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4664829

$accuracy
[1] 0.7669271

Save your model configuration to a serialized TF format

An alternative is to save your model to the serialized TensorFlow SavedModel format, which is compatible with TensorFlow Serving.

model %>% save_model_tf("pima_model")

Using save_model_tf() will create a model directory with the model and other assets.

list.files("pima_model")
[1] "assets"         "saved_model.pb" "variables"     

We can reload the fully configured model:

imported_tf_model <- load_model_tf("pima_model")
summary(imported_tf_model)
Model: "sequential_12"
___________________________________________________________________________________________
Layer (type)                            Output Shape                         Param #       
===========================================================================================
dense_36 (Dense)                        (None, 12)                           108           
___________________________________________________________________________________________
dense_37 (Dense)                        (None, 8)                            104           
___________________________________________________________________________________________
dense_38 (Dense)                        (None, 1)                            9             
===========================================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
___________________________________________________________________________________________

This provides the same evaluation scores as before (up to floating-point precision).

imported_tf_model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4776173

$accuracy
[1] 0.7591146

And similar to the HDF5 format, we can pick up where we left off with our training.

imported_tf_model %>% fit(X, Y, epochs = 10, batch_size = 10, verbose = 0)
imported_tf_model %>% evaluate(X, Y, verbose = 0)
$loss
[1] 0.4527401

$accuracy
[1] 0.7890625
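Finally, since the point of saving a model is to load it later and make predictions, here is a short sketch that scores the first five records with the reloaded model (the 0.5 cutoff is just the conventional threshold for a sigmoid output):

# predicted probability of diabetes for the first five records
probs <- imported_tf_model %>% predict(X[1:5, ])
probs

# convert probabilities to class labels using a 0.5 threshold
ifelse(probs > 0.5, "pos", "neg")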