
Training a linear model in batches

In this exercise, we will train a linear regression model in batches, starting where we left off in the previous exercise. We will do this by stepping through the dataset in batches and updating the model's variables, intercept and slope, after each step. This approach will allow us to train with datasets that are otherwise too large to hold in memory.
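As context for this approach, pandas can read a CSV file lazily in fixed-size chunks, so only one chunk needs to be in memory at a time. A minimal sketch (the file name and chunk size match this exercise; the printed shape is only for illustration):

import pandas as pd

# Each `batch` is a DataFrame with at most 100 rows; the full file is never loaded at once
for batch in pd.read_csv('kc_house_data.csv', chunksize=100):
    print(batch.shape)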

Note that the loss function, loss_function(intercept, slope, targets, features), has been defined for you. Additionally, keras has been imported for you, numpy is available as np, and pandas is available as pd. The trainable variables should be entered into var_list in the order in which they appear as loss function arguments.
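The body of loss_function is not shown here. A minimal sketch consistent with the signature above, assuming a linear model and a mean squared error loss (an assumption for illustration, not necessarily the exact definition used in the course):

from tensorflow import keras

def loss_function(intercept, slope, targets, features):
    # Predict targets with a linear model, then return the mean squared error
    predictions = intercept + features * slope
    return keras.losses.mse(targets, predictions)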

This exercise is part of the course Introduction to TensorFlow in Python.

Exercise instructions

  • Use the .Adam() optimizer.
  • Load in the data from 'kc_house_data.csv' in batches with a chunksize of 100.
  • Extract the price column from batch, convert it to a numpy array of type 32-bit float, and assign it to price_batch.
  • Complete the loss function, fill in the list of trainable variables, and perform minimization.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Initialize Adam optimizer
opt = keras.optimizers.____

# Load data in batches
for batch in pd.read_csv('____', ____=____):
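	# Extract the lot size values for the current batch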
	size_batch = np.array(batch['sqft_lot'], np.float32)

	# Extract the price values for the current batch
	price_batch = np.array(batch['____'], np.____)

	# Complete the loss, fill in the variable list, and minimize
	opt.minimize(lambda: loss_function(____, slope, price_batch, size_batch), var_list=[intercept, ____])

# Print trained parameters
print(intercept.numpy(), slope.numpy())
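For reference after attempting the exercise, here is one possible completed version. It is a sketch rather than the official solution: the starting values for intercept and slope and the body of loss_function are assumptions carried over from the previous exercise, and the script expects kc_house_data.csv to be in the working directory.

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras

# Assumed starting values for the trainable variables (carried over from the previous exercise)
intercept = tf.Variable(0.1, dtype=tf.float32)
slope = tf.Variable(0.1, dtype=tf.float32)

# Assumed definition of the provided loss function: mean squared error of a linear model
def loss_function(intercept, slope, targets, features):
    predictions = intercept + features * slope
    return keras.losses.mse(targets, predictions)

# Initialize the Adam optimizer
opt = keras.optimizers.Adam()

# Load the data in batches of 100 rows
for batch in pd.read_csv('kc_house_data.csv', chunksize=100):
    # Extract the lot size values for the current batch
    size_batch = np.array(batch['sqft_lot'], np.float32)

    # Extract the price values for the current batch
    price_batch = np.array(batch['price'], np.float32)

    # Minimize the loss with respect to intercept and slope
    opt.minimize(lambda: loss_function(intercept, slope, price_batch, size_batch), var_list=[intercept, slope])

# Print the trained parameters
print(intercept.numpy(), slope.numpy())

Because opt.minimize() is called once per chunk, the model takes one gradient step per 100 rows and the full dataset is never held in memory at once.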