Unconstrained optimization

1. Unconstrained optimization

Hello again. In this chapter, we'll delve deeper into unconstrained optimization.

2. What is unconstrained optimization?

Unconstrained optimization means finding the maxima or minima of a function without any constraints or restrictions on the input variables. So far, all of our examples have been unconstrained optimization problems. Previously, we used the principles of calculus to find an optimal solution. SciPy has tools that simplify the process even further and solve optimization problems numerically with a single line of code.

3. Univariate unconstrained optimization

Let's begin by solving a univariate unconstrained optimization problem. We will use the minimize_scalar function from the scipy.optimize library. Suppose we have an objective function with the formula x squared minus 12 x plus 4 and we want to find the minimum. We pass the function to minimize_scalar, save the result, and print it to see that the minimum is at x equals 6. We can confirm this by visualizing the formula and observing the minimum when x is 6. The printed output of result gives us several details. Let's take a closer look.
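Here is a minimal sketch of that call; the function name objective is my own choice, not from the slide:

```python
from scipy.optimize import minimize_scalar

# Objective function: f(x) = x**2 - 12x + 4
def objective(x):
    return x**2 - 12*x + 4

# Minimize and print the full result object
result = minimize_scalar(objective)
print(result)
```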

4. Univariate unconstrained optimization result

Fun is the function value at the minimum. Message provides information on the status of the process. nfev is the number of times the algorithm evaluated the function. nit is the number of iterations it took to reach the solution. The success boolean tells us whether an optimum was found, and x is the optimal value. We can directly access each part of the result using attributes. For example, the optimal value can be accessed with result.x.
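For instance, a quick sketch of attribute access on the result above:

```python
# Access individual pieces of the result via attributes
print(result.x)    # optimal input, 6.0 for our function
print(result.fun)  # function value at the minimum, -32.0 here
```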

5. Finding the maxima

We can also find a maximum of a function with SciPy, but it requires a little trick. SciPy doesn't have a maximize_scalar function, so we have to use the same minimize_scalar function to find the maxima. Recall our previous maximization function for the furniture manufacturer that wanted to maximize profit. When we visualized the function, we noted that it was shaped like a hill, meaning it had a maximum value. We create a negation of the original function by multiplying the entire thing by negative one. This reflects the function and changes our maximization problem into a minimization problem. Our original objective function was 40 q minus 0.5 q squared; the negation is negative 40 q plus 0.5 q squared. When we apply minimize_scalar to the negated function, we match our previous answer of an optimal maximum at q equals 40. Note the syntax in the f-string within the second print statement. We use :.2f to print the result to two decimal places.
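A sketch of that trick, assuming the same profit function; the variable names and print wording are mine:

```python
from scipy.optimize import minimize_scalar

# Negated profit: maximizing 40q - 0.5q**2 is equivalent to
# minimizing -(40q - 0.5q**2) = -40q + 0.5q**2
def neg_profit(q):
    return -40*q + 0.5*q**2

result = minimize_scalar(neg_profit)
print(f"Optimal quantity: {result.x:.2f}")   # 40.00
print(f"Maximum profit: {-result.fun:.2f}")  # negate back: 800.00
```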

6. Multivariate unconstrained optimization

Those were univariate examples; the process is similar for multivariate unconstrained optimization. In this case, the function we use is minimize from scipy.optimize. The minimize function requires its arguments to be in vector form, so our multivariate objective function only takes one argument, a. We're making lemonade: a[0] represents one variable, the lemons, and a[1] represents the second variable, the sugar. The function represents the recipe, and the goal is to minimize costs. SciPy's minimize function takes two parameters: the objective function and an initial guess for the amount of lemons and sugar, given as an array or list. This is conventionally stored as x0. For this example, we are going to guess that the correct answer is x=1 and y=2. This format tells the function how many variables are in the problem, two in our case.
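A minimal sketch of the call; the slide doesn't show the actual recipe cost function, so the one below is a hypothetical stand-in whose minimum sits at lemons=2, sugar=3 to match the result discussed next:

```python
from scipy.optimize import minimize

# Hypothetical cost function (not the course's actual recipe):
# minimized when lemons = 2 and sugar = 3
def cost(a):
    lemons, sugar = a[0], a[1]
    return (lemons - 2)**2 + (sugar - 3)**2

# Initial guess x=1, y=2; its length tells minimize there are two variables
x0 = [1, 2]
result = minimize(cost, x0)
print(result)
```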

7. Multivariate unconstrained optimization result

The result gives us a bit more information this time. We now also have hess_inv, the inverse of the Hessian, a matrix of all second partial derivatives, and jac, the Jacobian, which is the gradient for a scalar function of many variables. A status of zero means the process was successful. We may need the Hessian and Jacobian information for more advanced cases, but for now, there's no need to worry about them. The optimal solution that minimizes the function is an x of 2 and a y of 3. So much simpler than programming the calculus ourselves.
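Attribute access works the same way here; a quick sketch based on the result above:

```python
# Individual pieces of the multivariate result
print(result.x)         # optimal inputs, close to [2., 3.]
print(result.hess_inv)  # estimated inverse Hessian
print(result.jac)       # gradient at the solution
```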

8. Let's practice!

Great job! Now it's time to practice!