MM621-TMA937 Lab 2 - Constrained optimization: penalty methods


Study the theory of penalty methods in the textbook. Part II of the exercise contains some problems that we recommend you prepare by formulating the KKT conditions beforehand. If you feel a slight anxiety in the proximity of computers, you may want to take a look at MATLAB’s optdemo in advance.

Part I: Constrained optimization: penalty methods

Consider the problem to

minimize f(x), subject to g(x) ≤ 0^m,

where f and g are continuously differentiable functions. Penalty methods are generally of one of two kinds: exterior and interior penalty methods, depending on whether the method generally gives an infeasible or a strictly feasible sequence of iteration points. We have implemented one method of each kind in MATLAB.

To start, download the zip-file from the course homepage and follow the directions given. Move to the directory LAB2 and start MATLAB by simply typing matlab. In order to run the programs, you should move from the directory LAB2 to the directory LAB2/epa (exterior penalty algorithm) or LAB2/lipa (interior penalty, or interior point, algorithm for linear programming). Both algorithms are started by typing go in MATLAB’s command window.

Note that the problems are given with the constraints in “≤”-form, while Nash–Sofer describes the methods using the “≥”-form.

The exterior penalty method (sometimes called just the “penalty method”) works with the relaxation

minimize f(x) + ρk ψ(x)  over x ∈ R^n,

where ρk > 0 and ρk → +∞ as k → +∞, and the penalty function is the quadratic function

ψ(x) := Σ_{i=1}^{m} (max{0, gi(x)})².
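To make the construction concrete, here is a minimal MATLAB sketch of one exterior penalty sweep. It is not the code behind LAB2/epa; the objective, constraints, starting point and penalty schedule are all illustrative assumptions, and the inner minimization uses the derivative-free fminsearch only to keep the sketch self-contained (LAB2/epa uses a gradient algorithm instead, as described below).

    % Minimal exterior (quadratic) penalty sketch -- illustrative, not the LAB2/epa code.
    f   = @(x) (x(1) - 2)^2 + (x(2) - 1)^2;      % assumed objective f
    g   = @(x) [x(1) + x(2) - 2; -x(1)];         % assumed constraints in the form g(x) <= 0
    psi = @(x) sum(max(0, g(x)).^2);             % quadratic penalty: sum_i (max{0, gi(x)})^2

    x   = [0; 0];                                % assumed starting point
    rho = 1;                                     % initial penalty parameter rho_1
    for k = 1:12
        % approximately minimize the penalised function for the current rho_k
        x   = fminsearch(@(y) f(y) + rho*psi(y), x);
        rho = 2*rho;                             % let rho_k grow slowly towards +infinity
    end
    disp(x)                                      % the iterates approach feasibility from outside

Note how the iterates are typically infeasible until ρk becomes large, which is the sense in which the method is “exterior”.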

The interior penalty method (sometimes called the “barrier method”) works with the relaxation

minimize f(x) + µk φ(x)  over x ∈ R^n,

where µk > 0 and µk → 0 as k → +∞, and where the penalty function is the function

φ(x) := −Σ_{i=1}^{m} log(−gi(x)).

In order to avoid numerical problems, one usually lets the sequences ρk and µk approach their limits slowly.
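In the same spirit, a hedged sketch of the barrier relaxation could look as follows; again this is an illustration, not the LAB2/lipa code, and the LP data, starting point and µ-schedule are assumptions. The max(·, 0) construction simply makes the barrier return +Inf as soon as some gi(x) ≥ 0, so the inner minimization never accepts points outside the strictly feasible region.

    % Minimal interior (logarithmic barrier) sketch -- illustrative, not the LAB2/lipa code.
    f   = @(x) x(1) + 3*x(2);                          % assumed linear objective
    g   = @(x) [2 - x(1) - 2*x(2); -x(1); -x(2)];      % assumed constraints in the form g(x) <= 0
    phi = @(x) -sum(log(max(-g(x), 0)));               % barrier; equals +Inf when some gi(x) >= 0

    x  = [2; 2];                                       % strictly feasible starting point, g(x) < 0
    mu = 1;                                            % initial barrier parameter mu_1
    for k = 1:12
        x  = fminsearch(@(y) f(y) + mu*phi(y), x);     % approximate minimizer for the current mu_k
        mu = mu/2;                                     % let mu_k decrease slowly towards 0
    end
    disp(x)                                            % follows the central path towards a minimizer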

Description of the interface

After choosing the example from the drop-down list in the lower-left part of the window, press the “Load” button. In the left window you will see the level sets of the objective function (you can think of them as a topographic map), the directions of the negative gradient, as well as the curves constraining the feasible set. The coordinates of the current iteration point can be read below the left window.

After adjusting the desired penalty value, press the “Optimize” button. The right window will show the level sets of the penalised function, exactly as the algorithm would “see” it, were it not near-sighted. The current iteration point is plotted in the right window (pink “x” cross); the left window will contain the optimization “path”, showing the progress of the algorithm (pink curve).

Note! In our implementation of EPA we solve the penalised problem using a gradient algorithm to obtain a globally optimal solution. Instead, one can perform only a few iterations of the gradient algorithm.

In IPA, we perform only one iteration of the modified Newton method (with an Armijo line search). We show the global minimum point in the right window using a red “o” circle; its evolution as the penalty parameter changes (so-called “central path”) is shown in the left window as a red line.
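For reference, the Armijo (backtracking) line search mentioned above can be sketched as below; the parameter values β and σ are typical choices, not necessarily the ones used in IPA, and d is assumed to be a descent direction for the merit function F at x.

    function t = armijo(F, gradFx, x, d)
    % Backtracking (Armijo) line search sketch -- save as armijo.m.
    % F: merit function handle, gradFx: its gradient at x, d: a descent direction.
    beta  = 0.5;      % step-length reduction factor (assumed value)
    sigma = 1e-4;     % sufficient-decrease parameter (assumed value)
    t = 1;            % try the full (Newton) step first
    while F(x + t*d) > F(x) + sigma*t*(gradFx'*d)
        t = beta*t;   % shrink until the Armijo sufficient-decrease condition holds
    end
    end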

Hint! You can change the starting point for the algorithm by modifying the variables x1_start and x2_start in the .m-file corresponding to the example you are solving. What is more, you can add your own problems by providing a corresponding example*.m file!

Exercises

There are four nonlinear problems, two convex (example_nl{01,02}.m), and two non-convex (example_nl{03,04}.m), as well as three linear problems (example_lin[01–03].m); you can find the problem formulations in the Appendix.

  1. Using the interior point method, solve the LPs 01–03. Do we always find an optimal extreme point (problem 03)? Notice how the algorithm follows the central path closely and goes “directly” to the global minimum point (i.e., it skips visiting the extreme points) if you change the penalty parameter smoothly. Compare with the Simplex method.
  2. Change the directory to LAB2/kkt and type go at the MATLAB prompt.

Find the KKT points of the nonlinear problems (example_nl[01–04].m). Are the KKT conditions sufficient for global optimality (problem 03)? Are they necessary (problem 04)? (The general form of the KKT conditions for the “≤”-form problem is recalled after this list.)

Hint: There is a built-in tolerance in the graphical interface that sometimes fools you. Make sure to verify the results analytically. This hint is especially important for problem 04.

  3. Using the exterior penalty algorithm, solve the nonlinear and linear problems. Can you get different “optimal” solutions by changing the penalty parameter in a different manner or by starting from different points (problem 03)? Tricky: can you think of a reason for the slow convergence in problem 04 (hint: KKT)?
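For reference when working on the KKT exercises above (and as preparation for Part II), recall the KKT conditions for the “≤”-form problem considered in this lab: a feasible point x* is a KKT point if there exists a multiplier vector λ ∈ R^m such that

∇f(x*) + Σ_{i=1}^{m} λi ∇gi(x*) = 0^n,
λi gi(x*) = 0 for i = 1, …, m,
λ ≥ 0^m,  g(x*) ≤ 0^m.

Under a suitable constraint qualification these conditions are necessary for local optimality, which is precisely what the questions on problems 03 and 04 probe.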

Part II: Constrained optimization: MATLAB’s Optimization Toolbox

In this part of the lab, you are to use MATLAB’s optimization routines to solve some nonlinear problems.

Note! This part of the lab should be prepared by formulating the KKT conditions for the exercise problems. We have created an example to show how MATLAB’s constrained minimization routine, fmincon, works. In order to understand the command, it is helpful to read MATLAB’s help for it (type help fmincon). If you would also like to know more about the various options of the solver, type help optimset. Example:

s.t.  x1 ≥ 0,  x1² + x2 ≥ 2.

The code for this example can be found in the file LAB2/fmincon1 and the example can be run by typing run fmincon1 in the MATLAB command window. Study the code that implements this example; to solve the other problems you need to create similar files.
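For orientation, a call to fmincon for constraints of the above type might be structured as in the sketch below. This is not the contents of LAB2/fmincon1; in particular the objective here is only a placeholder assumption.

    % Sketch of an fmincon call -- the objective is a placeholder, not the LAB2/fmincon1 one.
    fun     = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;       % assumed objective
    x0      = [1; 1];                                 % starting point
    lb      = [0; -Inf];                              % x1 >= 0 passed as a simple bound
    nonlcon = @(x) deal(2 - x(1)^2 - x(2), []);       % c(x) <= 0 encodes x1^2 + x2 >= 2; no equalities

    opts = optimset('Display', 'iter');               % show the iteration log (see help optimset)
    [x, fval] = fmincon(fun, x0, [], [], [], [], lb, [], nonlcon, opts)

The nonlinear constraint function must return the inequality constraints in the form c(x) ≤ 0 and the equality constraints as a second output, which is why x1² + x2 ≥ 2 is rewritten as 2 − x1² − x2 ≤ 0 above.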

Exercises

  1. Given is the problem

,

s.t.  x1² x2 ≤ 0,  2x1 x2 ≥ 0.

  • Solve the problem using fmincon.
  • State the KKT conditions, examine the convexity of the problem and verify that the obtained solution is a global maximum.
  2. Given is the problem

min f(x) := x1,

s.t.  (x1 − 1)² + (x2 + 2)² ≤ 16,
      x1² + x2² ≥ 13.

Solve the problem from at least five starting points. Describe what happens. Which point is the best one? Can you guarantee that this is a global minimum? Fun points to try are (1, 1)^T, (0, 0)^T, (3.7, 0)^T and (−1, −1)^T.
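A minimal sketch of such an experiment is given below, assuming the objective f(x) = x1 and the two constraints stated above; the loop simply reruns fmincon from each suggested starting point and prints the result.

    % Multi-start sketch for the problem above (assumes f(x) = x1 and the two constraints).
    fun     = @(x) x(1);
    c       = @(x) [(x(1) - 1)^2 + (x(2) + 2)^2 - 16; 13 - x(1)^2 - x(2)^2];  % both as c(x) <= 0
    nonlcon = @(x) deal(c(x), []);                     % fmincon wants [c, ceq]; no equalities here

    starts = [1 1; 0 0; 3.7 0; -1 -1]';                % suggested starting points, one per column
    for x0 = starts
        [x, fval, exitflag] = fmincon(fun, x0, [], [], [], [], [], [], nonlcon);
        fprintf('start (%5.2f, %5.2f) -> x = (%7.4f, %7.4f), f = %8.4f, exitflag %d\n', ...
                x0(1), x0(2), x(1), x(2), fval, exitflag);
    end

Different starting points may end up at different local minima, which is exactly the phenomenon the exercise asks you to describe.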

Appendix

Linear problems

example_lin01.m

min x1 + 3x2,

s.t.  x1 + 2x2 ≥ 2,
      x1 − 3x2 ≤ 2,
      x1 + 3x2 ≤ 12,
      x1, x2 ≥ 0.

example_lin02.m

min x1 + 3x2,

s.t.  x1 + 1/10x2 ≥ 1,
      1/2x1 + 1/9x2 ≥ 1,
      1/3x1 + 1/8x2 ≥ 1,
      1/4x1 + 1/7x2 ≥ 1,
      1/5x1 + 1/6x2 ≥ 1,
      1/6x1 + 1/5x2 ≥ 1,
      1/7x1 + 1/4x2 ≥ 1,
      1/8x1 + 1/3x2 ≥ 1,
      1/9x1 + 1/2x2 ≥ 1,
      1/10x1 + x2 ≥ 1,
      x1 ≤ 20,
      x1, x2 ≥ 0.

example_lin03.m

min x2,

s.t.  x1 + 1/10x2 ≥ 1,
      1/2x1 + 1/9x2 ≥ 1,
      1/3x1 + 1/8x2 ≥ 1,
      1/4x1 + 1/7x2 ≥ 1,
      1/5x1 + 1/6x2 ≥ 1,
      1/6x1 + 1/5x2 ≥ 1,
      1/7x1 + 1/4x2 ≥ 1,
      1/8x1 + 1/3x2 ≥ 1,
      1/9x1 + 1/2x2 ≥ 1,
      1/10x1 + x2 ≥ 1,
      x1 ≤ 20,
      x2 ≥ 0.

Nonlinear problems

example_nl01.m

min x1² + x2²,

s.t.  x1 ≥ 2,
      x2 ≥ 1,
      1/2x1 + 1/4x2 ≤ 2.

example_nl02.m

min x1²,

s.t.  …

example_nl03.m

min x1 sin(x1) + x2 sin(x2),

s.t.  x1 ≥ 1/3,
      x2 ≥ 3/4,
      (x1 − 1)² + (x2 − 1)² ≤ 5,
      x1 − sin(x2) ≥ 0.

example_nl04.m

min (x1 + 1)² + 1/2x2²,

s.t.  x1 ≤ 3,
      …
