
Training models with the Estimators API

1. Training models with the Estimators API

In this video, we'll take a look at the high-level Estimators API, which was elevated in importance in TensorFlow 2.0.

2. What is the Estimators API?

The Estimators API is a high-level TensorFlow submodule. Relative to the core, lower-level TensorFlow APIs and the high-level Keras API, model building in the Estimators API is less flexible. This is because it enforces a set of best practices by placing restrictions on model architecture and training. The upside of using the Estimators API is that it allows for faster deployment: models can be specified, trained, evaluated, and deployed with less code. Furthermore, there are many premade models that can be instantiated by setting a handful of model parameters.

3. Model specification and training

So what does the typical model specification and training process look like in the Estimators API? Well, it starts with the definition of feature columns, which specify the shape and type of your data. Next, you load and transform your data within a function. The output of this function will be a dictionary object of features and your labels. The next step is to define an estimator. In this video, we'll use premade estimators, but you can also define custom estimators with different architectures. Finally, you will train the model you defined. Note that all model objects created through the Estimators API have train, evaluate, and predict operations.

4. Defining feature columns

Let's step through this procedure to get a sense of how it works. We'll first define the feature columns. If we were working with the housing dataset from chapter 2, we might define a numeric feature column for size using feature_column.numeric_column. Note that we supplied the dictionary key, "size," to the operation. We will do this for each feature column we create. We may also want a categorical feature column for the number of rooms using feature_column.categorical_column_with_vocabulary_list.
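In code, the two columns described above might look like the following sketch, assuming TensorFlow 2.x with the `tf.estimator`/`tf.feature_column` modules available; the "rooms" vocabulary values are hypothetical.

```python
import tensorflow as tf

# Numeric feature column for house size, keyed by the dictionary
# key "size" that the input function will also use.
size = tf.feature_column.numeric_column("size")

# Categorical feature column for the number of rooms, restricted
# to a fixed vocabulary of string values (hypothetical choices).
rooms = tf.feature_column.categorical_column_with_vocabulary_list(
    "rooms", ["1", "2", "3", "4", "5"]
)
```

The key passed to each column must match the corresponding key in the features dictionary returned by the input function.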

5. Defining feature columns

We can then merge these into a list of feature columns. Alternatively, if we were using the sign language MNIST dataset, we'd define a list containing a single vector of features.
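Both variants can be sketched as below, again assuming TensorFlow 2.x; the column names, the "rooms" vocabulary, and the 784-pixel image shape (28x28 flattened) are illustrative assumptions.

```python
import tensorflow as tf

# Feature columns for the housing data (hypothetical names).
size = tf.feature_column.numeric_column("size")
rooms = tf.feature_column.categorical_column_with_vocabulary_list(
    "rooms", ["1", "2", "3", "4", "5"]
)

# Merge the housing columns into a single list of feature columns.
features_list = [size, rooms]

# Alternatively, for sign language MNIST: a single numeric column
# holding the flattened 784-pixel image vector.
mnist_features = [tf.feature_column.numeric_column("image", shape=(784,))]
```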

6. Loading and transforming data

We next need to define a function that transforms our data, puts the features in a dictionary, and returns both the features and labels. Note that we've simply taken three examples from the housing dataset for the sake of illustration. Using them, we've defined a dictionary with the keys "size" and "rooms," which map to the feature columns we defined. Next, we define a list or array of labels, which gives the price of each house in this case, and then return the features and labels.
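A minimal input function following this description might look like the sketch below; the three example sizes, room counts, and prices are hypothetical values, not the course's actual data.

```python
import numpy as np

def input_fn():
    """Return three hypothetical housing examples as (features, labels)."""
    # Features in a dictionary, keyed to match the feature columns.
    features = {
        "size": np.array([1340.0, 1690.0, 2720.0]),
        "rooms": np.array(["2", "3", "4"]),
    }
    # Labels: the price of each house (hypothetical values).
    labels = np.array([221900.0, 538000.0, 180000.0])
    return features, labels
```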

7. Define and train a regression estimator

We can now define and train the estimator. But before we do that, we have to define what estimator we actually want to train. If we're predicting house prices, we may want to use a deep neural network with a regression head using estimator.DNNRegressor. This allows us to predict a continuous target. Note that all we had to supply was the list of feature columns and the number of nodes in each hidden layer. The rest is handled automatically. We then apply the train function, supply our input function, and train for 20 steps.
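Putting the pieces together, defining and training the regressor might look like this sketch, assuming TensorFlow 2.x with `tf.estimator` available. All data values and hidden-layer sizes are illustrative; note that a deep network needs dense inputs, so the categorical column is wrapped in an indicator (one-hot) column here.

```python
import numpy as np
import tensorflow as tf

# Feature columns; the categorical "rooms" column is one-hot encoded
# via an indicator column so the DNN can consume it.
size = tf.feature_column.numeric_column("size")
rooms = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "rooms", ["2", "3", "4"]
    )
)

def input_fn():
    # Three hypothetical housing examples.
    features = {
        "size": np.array([1340.0, 1690.0, 2720.0]),
        "rooms": np.array(["2", "3", "4"]),
    }
    labels = np.array([221900.0, 538000.0, 180000.0])
    return features, labels

# Deep neural network with a regression head and two hidden layers
# of 32 and 16 nodes (hypothetical architecture).
model = tf.estimator.DNNRegressor(
    feature_columns=[size, rooms], hidden_units=[32, 16]
)

# Train for 20 steps using the input function defined above.
model.train(input_fn, steps=20)
```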

8. Define and train a deep neural network

Alternatively, if we want to instead perform a classification task with a deep neural network, we just need to change the estimator to estimator.DNNClassifier, add the number of classes, and then train again. You can also use linear classifiers, boosted trees, and other common options. Just check the TensorFlow Estimators documentation for a complete list.
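The classification variant can be sketched by swapping in `tf.estimator.DNNClassifier` and adding `n_classes`; the random image batch, class count, and layer sizes below are hypothetical stand-ins for the sign language MNIST setup.

```python
import numpy as np
import tensorflow as tf

# A single dense feature: the flattened 784-pixel image vector.
features_list = [tf.feature_column.numeric_column("image", shape=(784,))]

def input_fn():
    # Hypothetical batch of 10 random images with integer class labels.
    features = {"image": np.random.rand(10, 784).astype(np.float32)}
    labels = np.random.randint(0, 4, 10)
    return features, labels

# Same pattern as the regressor: only the estimator class and the
# number of classes change.
model = tf.estimator.DNNClassifier(
    feature_columns=features_list,
    hidden_units=[32, 16],
    n_classes=4,
)
model.train(input_fn, steps=20)
```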

9. Let's practice!

Estimators might seem confusing initially, but they're very useful once you master them, so let's practice with a few exercises.
