## Description

Study the theory of penalty methods in the textbook. Part II contains exercises that we recommend you prepare for by formulating the KKT conditions in advance. If you feel a slight anxiety in the proximity of computers, you may want to take a look at MATLAB’s optdemo beforehand.

# Part I: Constrained optimization: penalty methods

Consider the problem to

minimize *f*(*x*), subject to *g*(*x*) ≤ 0^{m},

where *f* and *g* are continuously differentiable functions. Penalty methods are generally of one of two different kinds: *exterior* and *interior* penalty methods, depending on whether the method generates an infeasible or a strictly feasible sequence of iteration points. We have implemented one method of each kind in MATLAB.

To start, download the zip-file from the course homepage and follow the directions given. Move to the directory LAB2 and start MATLAB by simply typing matlab. In order to run the programs, you should move from the directory LAB2 to the directory LAB2/epa (exterior penalty algorithm) or LAB2/lipa (interior penalty, or interior point, algorithm for linear programming). Both algorithms are started by typing go in MATLAB’s command window.

Note that the problems are given with the constraints on “≤”-form, while Nash–Sofer describes the methods using the “≥”-form.

The *exterior *penalty method (sometimes called just the “penalty method”) works with the relaxation

minimize_{*x* ∈ ℝ^{n}} *f*(*x*) + *ρ*_{k} *ψ*(*x*),

where *ρ*_{k} > 0 and *ρ*_{k} → +∞ as *k* → +∞, and the penalty function is the quadratic function

*ψ*(*x*) := Σ_{i=1}^{m} (max{0, *g*_{i}(*x*)})².
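As an illustration (this is not the lab’s MATLAB code), here is a minimal Python sketch of the exterior penalty scheme on a toy instance, minimizing x₁² + x₂² subject to x₁ ≥ 2. The subproblem solver and the toy problem are assumptions made for the example; note how the iterates approach the constrained minimum (2, 0) from *outside* the feasible set as ρ_k grows.

```python
# Exterior (quadratic) penalty method on a toy instance:
#   minimize x1^2 + x2^2   subject to   x1 >= 2,  i.e. g(x) = 2 - x1 <= 0.
# The subproblem minimizers approach the solution (2, 0) from OUTSIDE
# the feasible set as the penalty parameter rho grows.

def grad_penalized(x, rho):
    """Gradient of f(x) + rho * max(0, g(x))^2 for this toy problem."""
    viol = max(0.0, 2.0 - x[0])            # constraint violation, if any
    return [2.0 * x[0] - 2.0 * rho * viol, 2.0 * x[1]]

def solve_subproblem(x, rho, iters=2000):
    """Plain gradient descent on the penalized objective."""
    step = 0.4 / (1.0 + rho)               # step size kept safe for this rho
    for _ in range(iters):
        d = grad_penalized(x, rho)
        x = [x[0] - step * d[0], x[1] - step * d[1]]
    return x

x = [0.0, 1.0]                             # infeasible starting point
for k in range(6):
    rho = 10.0 ** k                        # slowly increasing penalty
    x = solve_subproblem(x, rho)
    print(f"rho = {rho:8.0f}   x = ({x[0]:.5f}, {x[1]:.5f})")
```

For each fixed ρ the unconstrained minimizer satisfies x₁ = 2ρ/(1 + ρ) < 2, so every iterate is slightly infeasible, exactly as the theory predicts.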

The *interior *penalty method (sometimes called the “barrier method”) works with the relaxation

minimize_{*x* ∈ ℝ^{n}} *f*(*x*) + *µ*_{k} *φ*(*x*),

where *µ*_{k} > 0 and *µ*_{k} → 0 as *k* → +∞, and where the penalty function is the function

*φ*(*x*) := −Σ_{i=1}^{m} log(−*g*_{i}(*x*)).
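For comparison, a minimal Python sketch of the barrier scheme on the same toy instance as above (min x₁² + x₂² s.t. x₁ ≥ 2) is given below; this is an illustration, not the lab’s implementation. Since the x₂ component has minimizer 0 for every µ, only the x₁ component is iterated on; the iterates stay *strictly* feasible and approach the solution from inside.

```python
import math

# Interior (log-barrier) method on the toy instance
#   minimize x1^2   subject to   x1 >= 2,  i.e. g(x) = 2 - x1 <= 0,
# with barrier term  -mu * log(x1 - 2)  defined only for x1 > 2.

def newton_1d(x1, mu, iters=50):
    """Damped Newton on d/dx1 [x1^2 - mu*log(x1 - 2)] = 0, keeping x1 > 2."""
    for _ in range(iters):
        grad = 2.0 * x1 - mu / (x1 - 2.0)
        hess = 2.0 + mu / (x1 - 2.0) ** 2
        step = grad / hess
        while x1 - step <= 2.0:            # never cross the barrier at x1 = 2
            step *= 0.5
        x1 -= step
    return x1

x1 = 5.0                                   # strictly feasible starting point
for k in range(8):
    mu = 10.0 ** (-k)                      # slowly decreasing barrier parameter
    x1 = newton_1d(x1, mu)
    print(f"mu = {mu:.0e}   x1 = {x1:.8f}   (x1 - 2 = {x1 - 2:.2e})")
```

For each fixed µ the minimizer is x₁ = 1 + √(1 + µ/2) > 2, so the sequence of subproblem solutions traces a path inside the feasible region that converges to the boundary as µ → 0.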

In order to avoid numerical problems one usually lets the sequences *ρ*_{k} and *µ*_{k} converge slowly.

## Description of the interface

After choosing the example from the drop-down list in the lower-left part of the window, press the “Load” button. In the left window you will see the level sets of the objective function (you can think of them as a topographic map), the directions of the *negative* gradient, as well as the curves constraining the feasible set. The coordinates of the current iteration point can be read below the left window.

After adjusting the desired penalty value, press the “Optimize” button. The right window will show the level sets of the penalised function, exactly as the algorithm would “see” it, were it not near-sighted. The current iteration point is plotted in the right window (pink “x” cross); the left window will contain the optimization “path”, showing the progress of the algorithm (pink curve).

*Note! *In our implementation of EPA we solve the penalised problem using a gradient algorithm to obtain a globally optimal solution. Instead, one can perform only a few iterations of the gradient algorithm.

In IPA, we perform *only one *iteration of the modified Newton method (with an Armijo line search). We show the global minimum point in the right window using a red “o” circle; its evolution as the penalty parameter changes (so-called “central path”) is shown in the left window as a red line.
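A generic backtracking (Armijo) line search of the kind mentioned above can be sketched as follows; this is a textbook version in Python, offered for intuition, and the lab’s actual MATLAB implementation may differ in its parameter choices.

```python
# Backtracking (Armijo) line search: accept the largest step t = t0 * beta^j
# giving "sufficient decrease"  f(x + t*d) <= f(x) + sigma * t * grad_f(x)'d.

def armijo(f, grad_fx, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    """Return an Armijo step length; d must be a descent direction."""
    fx = f(x)
    slope = sum(gi * di for gi, di in zip(grad_fx, d))   # directional derivative
    t = t0
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + sigma * t * slope:
        t *= beta                                        # shrink until accepted
    return t

# Toy usage: f(x) = x1^2 + x2^2, steepest-descent direction d = -grad f(x).
f = lambda x: x[0] ** 2 + x[1] ** 2
x = [3.0, 4.0]
g = [2 * x[0], 2 * x[1]]
d = [-gi for gi in g]
t = armijo(f, g, x, d)
print("accepted step:", t)
```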

*Hint! *You can change the starting point for the algorithm by modifying the variables x1_start and x2_start in the .m-file corresponding to the example you solve. What is more, you can add your own problems by providing a corresponding example*.m file!

## Exercises

There are four nonlinear problems, two convex (example_nl{01,02}.m) and two non-convex (example_nl{03,04}.m), as well as three linear problems (example_lin{01,02,03}.m); you can find the problem formulations in the Appendix.

- Using the interior point method, solve the LPs 01–03. Do we always find an optimal extreme point (problem 03)? Notice how the algorithm follows the central path closely and goes “directly” to the global minimum point (i.e., it skips visiting the extreme points) if you change the penalty parameter smoothly. Compare with the Simplex method.
- Change the directory to LAB2/kkt and type go at the Matlab prompt.

Find the KKT points of the nonlinear problems (example_nl{01,02,03,04}.m). Are the KKT conditions sufficient for global optimality (problem 03)? Are they necessary (problem 04)?

*Hint: *There is a built-in tolerance in the graphical interface that sometimes fools you. Make sure to verify the results analytically. This hint is especially important for problem 04.

- Using the exterior penalty algorithm, solve the nonlinear and linear problems. Can you get different “optimal” solutions by changing the penalty parameter in a different manner or by starting from different points (problem 03)? Tricky: Can you think of a reason for the slow convergence in problem 04 (hint: KKT)?
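As a warm-up for the KKT exercise, here is a small Python check for example_nl01 (see the Appendix) at the candidate point x* = (2, 1), where both lower-bound constraints are active. The point and multipliers below are worked out by hand for this particular problem; the snippet only verifies stationarity, complementarity, and sign of the multipliers.

```python
# KKT check for example_nl01 at the candidate point x* = (2, 1):
#   min x1^2 + x2^2   s.t.  x1 >= 2,  x2 >= 1,  1/2 x1 + 1/4 x2 <= 2.
# On <=-form: g1 = 2 - x1, g2 = 1 - x2, g3 = x1/2 + x2/4 - 2.
x = (2.0, 1.0)

g = [2 - x[0], 1 - x[1], x[0] / 2 + x[1] / 4 - 2]
grad_f = (2 * x[0], 2 * x[1])                      # = (4, 2)
grad_g = [(-1, 0), (0, -1), (0.5, 0.25)]

active = [i for i, gi in enumerate(g) if abs(gi) < 1e-9]

# Stationarity: grad_f + sum_i lam_i * grad_g_i = 0 over the active set.
# With g1, g2 active this is a diagonal 2x2 system, solved by inspection;
# complementarity forces lam_3 = 0 since g3 is inactive.
lam = {0: 4.0, 1: 2.0, 2: 0.0}

residual = [grad_f[j] + sum(lam[i] * grad_g[i][j] for i in range(3))
            for j in range(2)]
print("active constraints:", active)
print("stationarity residual:", residual)
print("multipliers nonnegative:", all(v >= 0 for v in lam.values()))
```

Since the problem is convex, the satisfied KKT conditions certify that (2, 1) is globally optimal.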

# Part II: Constrained optimization: MATLAB’s Optimization Toolbox

In this part of the lab, you are to use MATLAB’s optimization routines to solve some nonlinear problems.

*Note! *This part of the lab should be prepared by formulating the KKT conditions for the exercise problems. We have created an example to show how MATLAB’s constrained minimization routine, fmincon, works. In order to understand the command, it is helpful to read MATLAB’s help about it (type help fmincon). If you would also like to know more about various options of the solver, type help optimset. Example:

s.t. *x*_{1} ≥ 0, *x*_{1}^{2} + *x*_{2} ≥ 2

The code for this example can be found in the file LAB2/fmincon1 and the example can be run by typing run fmincon1 in the MATLAB command window. Study the code that implements this example; to solve the other problems you need to create similar files.

## Exercises

- Given is the problem

s.t. *x*_{1}^{2} − *x*_{2} ≤ 0, 2*x*_{1} − *x*_{2} ≥ 0.

- Solve the problem using fmincon.
- State the KKT conditions, examine the convexity of the problem and verify that the obtained solution is a *global maximum*.

- Given is the problem

min *f*(*x*) := *x*_{1},

s.t. (*x*_{1} − 1)^{2} + (*x*_{2} + 2)^{2} ≤ 16, *x*_{1}^{2} + *x*_{2}^{2} ≥ 13.

Solve the problem from at least five starting points. Describe what happens. Which point is the best one? Can you guarantee that this is a *global minimum*? Fun points to try are (1, 1)^{T}, (0, 0)^{T}, (3.7, 0)^{T} and (−1, −1)^{T}.
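If you want to reproduce this multi-start experiment outside MATLAB, the sketch below uses scipy.optimize.minimize with the SLSQP method, which plays a role similar to fmincon (SciPy is an assumption here; the lab itself uses fmincon). SLSQP expects inequality constraints in the form c(x) ≥ 0, so both constraints are rewritten accordingly.

```python
import numpy as np
from scipy.optimize import minimize

# The problem above:  min x1  s.t. (x1-1)^2 + (x2+2)^2 <= 16,  x1^2 + x2^2 >= 13.
# SLSQP convention: each constraint function must be >= 0 at feasible points.
cons = [
    {"type": "ineq", "fun": lambda x: 16 - (x[0] - 1) ** 2 - (x[1] + 2) ** 2},
    {"type": "ineq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 13},
]

starts = [(1, 1), (0, 0), (3.7, 0), (-1, -1), (-3, -3)]
solutions = []
for x0 in starts:
    res = minimize(lambda x: x[0], np.array(x0, dtype=float),
                   method="SLSQP", constraints=cons)
    solutions.append(res)
    print(f"start {x0}: x* = ({res.x[0]: .4f}, {res.x[1]: .4f}), "
          f"f = {res.fun: .4f}, success = {res.success}")
```

Because the second constraint makes the feasible set non-convex, different starting points may produce different local solutions, which is precisely what the exercise asks you to observe.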

# Appendix

## Linear problems

### example_lin01.m

min *x*_{1} + 3*x*_{2},

s.t. *x*_{1} + 2*x*_{2} ≥ 2,
  *x*_{1} − 3*x*_{2} ≤ 2,
  −*x*_{1} + 3*x*_{2} ≤ 12,
  *x*_{1}, *x*_{2} ≥ 0.

### example_lin02.m

min *x*_{1} + 3*x*_{2},

s.t. *x*_{1} + 1/10 *x*_{2} ≥ 1,
  1/2 *x*_{1} + 1/9 *x*_{2} ≥ 1,
  1/3 *x*_{1} + 1/8 *x*_{2} ≥ 1,
  1/4 *x*_{1} + 1/7 *x*_{2} ≥ 1,
  1/5 *x*_{1} + 1/6 *x*_{2} ≥ 1,
  1/6 *x*_{1} + 1/5 *x*_{2} ≥ 1,
  1/7 *x*_{1} + 1/4 *x*_{2} ≥ 1,
  1/8 *x*_{1} + 1/3 *x*_{2} ≥ 1,
  1/9 *x*_{1} + 1/2 *x*_{2} ≥ 1,
  1/10 *x*_{1} + *x*_{2} ≥ 1,
  *x*_{1} ≤ 20,
  *x*_{2} ≥ 0.

### example_lin03.m

min *x*_{2},

s.t. *x*_{1} + 1/10 *x*_{2} ≥ 1,
  1/2 *x*_{1} + 1/9 *x*_{2} ≥ 1,
  1/3 *x*_{1} + 1/8 *x*_{2} ≥ 1,
  1/4 *x*_{1} + 1/7 *x*_{2} ≥ 1,
  1/5 *x*_{1} + 1/6 *x*_{2} ≥ 1,
  1/6 *x*_{1} + 1/5 *x*_{2} ≥ 1,
  1/7 *x*_{1} + 1/4 *x*_{2} ≥ 1,
  1/8 *x*_{1} + 1/3 *x*_{2} ≥ 1,
  1/9 *x*_{1} + 1/2 *x*_{2} ≥ 1,
  1/10 *x*_{1} + *x*_{2} ≥ 1,
  *x*_{1} ≤ 20,
  *x*_{2} ≥ 0.

## Nonlinear problems

### example_nl01.m

min *x*_{1}^{2} + *x*_{2}^{2},

s.t. *x*_{1} ≥ 2,
  *x*_{2} ≥ 1,
  1/2 *x*_{1} + 1/4 *x*_{2} ≤ 2.

### example_nl02.m

min *x*_{1}^{2},

*,*

s.t.

### example_nl03.m

min *x*_{1} sin(*x*_{1}) + *x*_{2} sin(*x*_{2}),

s.t. *x*_{1} ≥ 3/4,
  *x*_{2} ≥ 1/3,
  (*x*_{1} − 1)^{2} + (*x*_{2} − 2)^{2} ≤ 5,
  *x*_{1} − sin(*x*_{2}) ≥ 0.

### example_nl04.m

min (*x*_{1} + 1)^{2} + 1/2 *x*_{2}^{2},

s.t. *x*_{1} ≤ 3,
  ,