

Exercise

Avoiding local minima

The previous problem showed how easy it is to get stuck in a local minimum. Even for a simple optimization problem in one variable, gradient descent failed to deliver the global minimum when the path to it passed through local minima first. One way to avoid this problem is to use momentum, which allows the optimizer to carry enough speed to break through local minima. We will again use the loss function from the previous problem, which has been defined and is available for you as loss_function().

[Figure: plot of a single-variable loss function with multiple local minima and one global minimum.]

Several optimizers in TensorFlow have a momentum parameter, including SGD and RMSprop. You will make use of RMSprop in this exercise. Note that x_1 and x_2 have been initialized to the same value this time. Furthermore, keras.optimizers.RMSprop() has already been imported for you from TensorFlow.
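The exact loss_function() is defined in the previous exercise and is not reproduced on this page. As a hypothetical stand-in (not the course's actual function), any one-variable function with several local minima and a single global minimum will let you run the exercise code end to end, for example:

    import tensorflow as tf

    def loss_function(x):
        # Hypothetical stand-in, not the course's function: sin(5x) creates
        # a series of local minima, while the 0.1*x^2 term makes the dip
        # nearest zero the global minimum.
        return tf.math.sin(5.0 * x) + 0.1 * tf.square(x)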

Instructions

  • Set the opt_1 operation to use a learning rate of 0.01 and a momentum of 0.99.
  • Set opt_2 to use the root mean square propagation (RMSprop) optimizer with a learning rate of 0.01 and a momentum of 0.00.
  • Define the minimization operation for opt_2.
  • Print x_1 and x_2 as numpy arrays.
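A minimal sketch of one possible solution is below. The starting value of 6.0 and the 100 training steps are assumptions rather than values given on this page; the Optimizer.minimize(callable, var_list) pattern follows the TF 2.x Keras API used throughout this course.

    import tensorflow as tf

    # Both variables start at the same initial value (6.0 is an assumption).
    x_1 = tf.Variable(6.0)
    x_2 = tf.Variable(6.0)

    # Set opt_1 to RMSprop with a learning rate of 0.01 and a momentum of 0.99.
    opt_1 = tf.keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.99)

    # Set opt_2 to RMSprop with a learning rate of 0.01 and a momentum of 0.00.
    opt_2 = tf.keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.00)

    for j in range(100):
        # Perform one minimization step with the momentum optimizer.
        opt_1.minimize(lambda: loss_function(x_1), var_list=[x_1])
        # Define the minimization operation for opt_2 (no momentum).
        opt_2.minimize(lambda: loss_function(x_2), var_list=[x_2])

    # Print x_1 and x_2 as numpy arrays.
    print(x_1.numpy(), x_2.numpy())

With a momentum of 0.99, x_1 should carry enough speed to escape shallow local minima and settle near the global minimum, while x_2, with no momentum, is more likely to get stuck in the first local minimum it reaches.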