COMP6211E Assignment 3 Solved

Description

Theoretical Problems
1. (3 points) Consider x ∈ R_+. Consider

   h(x) = −ln x.

   • Find the Bregman divergence D_h(x, y).
   • Consider the function

     f(x) = 0.5x − ln(x + 1).

     What are the smoothness and strong convexity parameters of f with respect to h?
   • Consider the h-mirror descent method for solving f(x). What is the formula to obtain x_t from x_{t−1} with a learning rate η_t?
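
As a numerical sanity check for the first bullet, recall the standard definition D_h(x, y) = h(x) − h(y) − h'(y)(x − y). The sketch below (illustrative only, not part of the assignment) evaluates it for h(x) = −ln x and confirms the two basic properties of a Bregman divergence: it vanishes on the diagonal and is nonnegative for convex h.

```python
import math

def h(x):
    # h(x) = -ln x, defined on the positive reals
    return -math.log(x)

def h_prime(y):
    # derivative of h: h'(y) = -1/y
    return -1.0 / y

def bregman(x, y):
    # generic Bregman divergence D_h(x, y) = h(x) - h(y) - h'(y) * (x - y)
    return h(x) - h(y) - h_prime(y) * (x - y)

print(bregman(2.0, 2.0))  # zero on the diagonal
print(bregman(1.0, 2.0))  # strictly positive off the diagonal
```

Working out the algebra by hand for this particular h and comparing against the numeric value is a quick way to verify a closed-form answer before submitting it.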
2. (3 points) Consider a composite optimization problem

   f(x) + g(x),

   with non-convex regularizer g(x) given by

   g(x) = λ Σ_{j=1}^d ln(1 + |x_j|),

   for some λ > 0. Since the logarithm grows slowly as |x_j| increases, this regularizer leads to less bias than L1 regularization. Assume we want to apply the proximal gradient method

   prox_{ηg}[x − η∇f(x)]

   to solve this problem, where the proximal optimization problem becomes

   prox_{ηg}(z) = argmin_x { (1/2)||x − z||_2^2 + η g(x) }   (1)
   • Show that when η > 0 is sufficiently small, the proximal optimization problem (1) is strongly convex for all z. What is the range of such η?
   • Find the closed-form solution of prox_{ηg}(z) when η is sufficiently small so that (1) is convex.
   • If f(x) is L-smooth and λ2-strongly convex, does the proximal gradient algorithm converge? If so, how many iterations are needed to obtain an ε primal suboptimal solution?
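
To get a feel for the method before deriving the closed form, the sketch below runs proximal gradient descent on an illustrative instance: the quadratic f(x) = (1/2)||x − b||² and the values η = 0.5, λ = 0.2 are assumptions chosen for the demo, not part of the assignment. The prox step is computed numerically (ternary search on each coordinate, valid in the regime ηλ < 1 where the scalar objective is strictly convex) rather than via the closed form the second bullet asks for.

```python
import math

def prox_scalar(z, eta_lam, iters=200):
    # Numerically minimize 0.5*(x - z)^2 + eta_lam * ln(1 + |x|).
    # Assumes eta_lam < 1, so the objective is strictly convex; the
    # minimizer is shrunk toward 0 and therefore lies between 0 and z.
    phi = lambda x: 0.5 * (x - z) ** 2 + eta_lam * math.log(1 + abs(x))
    lo, hi = (0.0, z) if z >= 0 else (z, 0.0)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) <= phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def prox_grad(b, lam=0.2, eta=0.5, steps=300):
    # Proximal gradient for f(x) = 0.5 * ||x - b||^2, so grad f(x) = x - b.
    # Iterate: x <- prox_{eta*g}(x - eta * grad f(x)), applied coordinatewise.
    x = [0.0] * len(b)
    for _ in range(steps):
        y = [xi - eta * (xi - bi) for xi, bi in zip(x, b)]
        x = [prox_scalar(yi, eta * lam) for yi in y]
    return x

print(prox_grad([3.0, -2.0, 0.05]))
```

Note how large coordinates of b are shrunk only slightly (the logarithmic penalty flattens out, which is exactly the "less bias than L1" point above), while the small coordinate 0.05 is driven to zero.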
