CSC412/2056 Assignment #1


Problem 1 (Variance and covariance, 6 points)
Let X and Y be two continuous independent random variables.
(a) Starting from the definition of independence, show that the independence of X and Y implies that their covariance
is zero.
(b) For a scalar constant a, show the following two properties, starting from the definition of expectation:
$E(X + aY) = E(X) + aE(Y)$
$\operatorname{var}(X + aY) = \operatorname{var}(X) + a^2 \operatorname{var}(Y)$
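For reference, the definitions presumably intended as starting points (stated for continuous random variables with joint density $f_{X,Y}$ and marginals $f_X$, $f_Y$) are
$$f_{X,Y}(x, y) = f_X(x)\, f_Y(y) \ \text{for all } x, y \qquad \text{(independence)},$$
$$E[g(X, Y)] = \iint g(x, y)\, f_{X,Y}(x, y)\, dx\, dy, \qquad \operatorname{cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big].$$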
Problem 2 (Densities, 5 points)
Answer the following questions:
(a) Can a probability density function (pdf) ever take values greater than 1?
(b) Let X be a univariate normally distributed random variable with mean 0 and variance 1/100. What is the pdf of
X?
(c) What is the value of this pdf at 0?
(d) What is the probability that X = 0?
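As a reminder of the standard form being asked about, the pdf of a univariate normal with mean $\mu$ and variance $\sigma^2$ is
$$\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right),$$
which parts (b)–(d) specialize to $\mu = 0$ and $\sigma^2 = 1/100$.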
Problem 3 (Calculus, 4 points)
Let $x, y \in \mathbb{R}^m$ and $A \in \mathbb{R}^{m \times m}$. Please answer the following questions, writing your answers in vector notation.
(a) What is the gradient with respect to $x$ of $x^T y$?
(b) What is the gradient with respect to $x$ of $x^T x$?
(c) What is the gradient with respect to $x$ of $x^T A$?
(d) What is the gradient with respect to $x$ of $x^T A x$?
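A standard convention, presumably the one intended here, is that the gradient of a scalar-valued function $f : \mathbb{R}^m \to \mathbb{R}$ is the column vector of partial derivatives:
$$\nabla_x f(x) = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_m} \right)^{T}.$$
Note that $x^T A$ in part (c) is vector-valued rather than scalar-valued, so its derivative with respect to $x$ is a matrix of partial derivatives.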
Problem 4 (Linear Regression, 10pts)
Suppose that $X \in \mathbb{R}^{n \times m}$ with $n \gg m$, $Y \in \mathbb{R}^n$, and $Y \sim \mathcal{N}(X\beta, \sigma^2 I)$. In this question you will derive the result that the maximum likelihood estimate $\hat{\beta}$ of $\beta$ is given by
$$\hat{\beta} = (X^T X)^{-1} X^T Y$$
(a) What are the expectation and covariance matrix of $\hat{\beta}$, for a given true value of $\beta$?
(b) Show that maximizing the likelihood is equivalent to minimizing the squared error $\sum_{i=1}^{n} (y_i - x_i \beta)^2$. [Hint: Use $\sum_{i=1}^{n} a_i^2 = a^T a$.]
(c) Write the squared error in vector notation (see above hint), expand the expression, and collect like terms. [Hint: Use $\beta^T X^T Y = Y^T X \beta$ (why?) and the fact that $X^T X$ is symmetric.]
(d) Take the derivative of this expanded expression with respect to $\beta$ to show that the maximum likelihood estimate $\hat{\beta}$ is as given above. [Hint: Use results 3(c) and 3(d) for derivatives in vector notation.]
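As an optional numerical sanity check of the closed form above (not part of the required derivation), the following sketch assumes NumPy and synthetic data, and compares $(X^T X)^{-1} X^T Y$ with NumPy's least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 500, 5, 0.1                        # n much larger than m, as in the problem setup
X = rng.normal(size=(n, m))
beta_true = rng.normal(size=m)
Y = X @ beta_true + sigma * rng.normal(size=n)   # Y ~ N(X beta, sigma^2 I)

# Maximum likelihood estimate: beta_hat = (X^T X)^{-1} X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Should agree with the least-squares solver up to floating-point error
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```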
Problem 5 (Ridge Regression, 10pts)
Suppose we place a normal prior on $\beta$; that is, we assume that $\beta \sim \mathcal{N}(0, \tau^2 I)$.
(a) Show that the MAP estimate of $\beta$ given $Y$ in this context is
$$\hat{\beta}_{\mathrm{MAP}} = (X^T X + \lambda I)^{-1} X^T Y$$
where $\lambda = \sigma^2 / \tau^2$.
Estimating $\beta$ in this way is called ridge regression because the matrix $\lambda I$ looks like a “ridge”. Ridge regression is a common form of regularization that is used to avoid the overfitting that happens when the sample size is close to the output dimension in linear regression.
(b) Show that ridge regression is equivalent to adding $m$ additional rows to $X$, where the $j$-th additional row has its $j$-th entry equal to $\sqrt{\lambda}$ and all other entries equal to zero, adding $m$ corresponding additional entries to $Y$ that are all 0, and then computing the maximum likelihood estimate of $\beta$ using the modified $X$ and $Y$.
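As an optional numerical illustration of the equivalence claimed in part (b) (again assuming NumPy and synthetic data, with all constants chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 200, 8, 2.5
X = rng.normal(size=(n, m))
Y = X @ rng.normal(size=m) + rng.normal(size=n)

# Closed-form ridge / MAP estimate: (X^T X + lambda I)^{-1} X^T Y
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ Y)

# Augmented-data construction from part (b): append sqrt(lambda) * I as extra rows of X
# and m zeros to Y, then take the ordinary maximum likelihood (least-squares) estimate.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(m)])
Y_aug = np.concatenate([Y, np.zeros(m)])
beta_aug = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ Y_aug)

print(np.allclose(beta_map, beta_aug))   # True: the two estimates coincide
```

Both routes should return the same coefficients up to floating-point error.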
Problem 6 (Gaussians in high dimensions, 10pts)
In this question we will investigate how our intuition for samples from a Gaussian may break down in higher dimensions.
Consider samples from a $D$-dimensional unit Gaussian, $x \sim \mathcal{N}(0_D, I_D)$, where $0_D$ indicates a column vector of $D$ zeros and $I_D$ is a $D \times D$ identity matrix.
1. Starting with the definition of the Euclidean norm, quickly show that the distance of $x$ from the origin is $\sqrt{x^T x}$.
2. In low dimensions our intuition tells us that samples from the unit Gaussian will be near the origin. Draw 10000 samples from a $D = 1$ Gaussian and plot a normalized histogram for the distance of those samples from the origin. Does this confirm your intuition that the samples will be near the origin?
3. Draw 10000 samples from $D = \{1, 2, 3, 10, 100\}$ Gaussians and, on a single plot, show the normalized histograms for the distance of those samples from the origin. As the dimensionality of the Gaussian increases, what can you say about the expected distance of the samples from the Gaussian's mean (in this case, the origin)?
4. From Wikipedia, if $x_i$ are $k$ independent, normally distributed random variables with means $\mu_i$ and standard deviations $\sigma_i$, then the statistic $Y = \sqrt{\sum_{i=1}^{k} \left( \frac{x_i - \mu_i}{\sigma_i} \right)^2}$ is distributed according to the $\chi$-distribution. On the previous normalized histogram, plot the probability density function (pdf) of the $\chi$-distribution for $k = \{1, 2, 3, 10, 100\}$. (A plotting sketch covering items 2–4 appears after this problem.)
5. Taking two samples from the $D$-dimensional unit Gaussian, $x_a, x_b \sim \mathcal{N}(0_D, I_D)$, how is $x_a - x_b$ distributed? Using the above result about the $\chi$-distribution, how is $\|x_a - x_b\|_2$ distributed? (Hint: start with a $\chi$-distributed random variable and use the change of variables formula.) Plot the pdfs of this distribution for $k = \{1, 2, 3, 10, 100\}$. How does the distance between samples from a Gaussian behave as dimensionality increases? Confirm this by drawing two sets of 1000 samples from the $D$-dimensional unit Gaussian. On the plot of the $\chi$-distribution pdfs, plot the normalized histogram of the distance between samples from the first and second set. (See the second sketch after this problem.)
6. In lecture we saw examples of interpolating between latent points to generate convincing data. Given two samples from a Gaussian, $x_a, x_b \sim \mathcal{N}(0_D, I_D)$, the linear interpolation between them, $x_\alpha$, is defined as a function of $\alpha \in [0, 1]$:
$$\mathrm{lin\_interp}(\alpha, x_a, x_b) = \alpha x_a + (1 - \alpha) x_b$$
For two sets of 1000 samples from the unit Gaussian in $D$ dimensions, plot the average log-likelihood along the linear interpolations between the pairs of samples as a function of $\alpha$. (I.e., for each pair of samples compute the log-likelihood along a linear space of interpolated points between them, $\mathcal{N}(x_\alpha \mid 0, I)$ for $\alpha \in [0, 1]$, and plot the average log-likelihood over all the interpolations.) Do this for $D = \{1, 2, 3, 10, 100\}$, one plot per dimensionality. Comment on the log-likelihood under the unit Gaussian of points along the linear interpolation. Is a higher log-likelihood for the interpolated points necessarily better? Given this, is it a good idea to linearly interpolate between samples from a high-dimensional Gaussian?
7. Instead we can interpolate in polar coordinates: for $\alpha \in [0, 1]$ the polar interpolation is
$$\mathrm{polar\_interp}(\alpha, x_a, x_b) = \sqrt{\alpha}\, x_a + \sqrt{1 - \alpha}\, x_b$$
This interpolates between two points while maintaining Euclidean norm. On the same plot from the previous question, plot the probability density of the polar interpolation between pairs of samples from two sets of 1000 samples from $D$-dimensional unit Gaussians for $D = \{1, 2, 3, 10, 100\}$. Comment on the log-likelihood under the unit Gaussian of points along the polar interpolation. Give an intuitive explanation for why polar interpolation is more suitable than linear interpolation for high-dimensional Gaussians. For items 6 and 7 you should have one plot for each $D$, with two curves on each. (A sketch covering items 6 and 7 appears after this problem.)
8. (Bonus, 5pts) In the previous two questions we computed the average log-likelihood of the linear and polar interpolations under the unit Gaussian. Instead, consider the norm along the interpolation, $\sqrt{x_\alpha^T x_\alpha}$. As we saw previously, this is distributed according to the $\chi$-distribution. Compute and plot the average log-likelihood of the norm along the two interpolations under the $\chi$-distribution for $D = \{1, 2, 3, 10, 100\}$, i.e. $\chi_D\!\left(\sqrt{x_\alpha^T x_\alpha}\right)$. There should be one plot for each $D$, each with two curves corresponding to the log-likelihood of the linear and polar interpolations. How does the log-likelihood along the linear interpolation compare to the log-likelihood of the true samples (endpoints)? Using your answers to questions 3 and 4, provide geometric intuition for the log-likelihood along the linear and polar interpolations. Use this to further justify your explanation for the suitability of polar vs. linear interpolation.
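The following is a minimal sketch for items 2–4, assuming NumPy, SciPy, and Matplotlib; the sample counts follow the problem statement, while everything else (variable names, bin counts, axis range) is an illustrative assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi

rng = np.random.default_rng(0)
n_samples = 10000
dims = [1, 2, 3, 10, 100]
r = np.linspace(0, 15, 500)

for D in dims:
    # Items 2-3: distances sqrt(x^T x) of samples x ~ N(0_D, I_D) from the origin
    x = rng.normal(size=(n_samples, D))
    dists = np.linalg.norm(x, axis=1)
    plt.hist(dists, bins=50, density=True, alpha=0.4, label=f"D = {D} (samples)")
    # Item 4: overlay the chi-distribution pdf with k = D degrees of freedom
    plt.plot(r, chi.pdf(r, df=D), label=f"chi pdf, k = {D}")

plt.xlabel("distance from the origin")
plt.ylabel("normalized density")
plt.legend()
plt.show()
```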
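A similar sketch for item 5, under the same assumptions; the histogram of distances between paired samples is drawn on the same axes as the $\chi$ pdfs, and relating the two analytically is left to the written part of the question.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi

rng = np.random.default_rng(1)
n_pairs = 1000
r = np.linspace(0, 20, 500)

for D in [1, 2, 3, 10, 100]:
    xa = rng.normal(size=(n_pairs, D))             # first set of samples
    xb = rng.normal(size=(n_pairs, D))             # second set of samples
    pair_dists = np.linalg.norm(xa - xb, axis=1)   # ||x_a - x_b||_2 for each pair
    plt.hist(pair_dists, bins=50, density=True, alpha=0.4, label=f"D = {D} (pair distances)")
    plt.plot(r, chi.pdf(r, df=D), "--", label=f"chi pdf, k = {D}")

plt.xlabel("distance between paired samples")
plt.ylabel("normalized density")
plt.legend()
plt.show()
```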
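Finally, a sketch for items 6 and 7, again assuming NumPy, SciPy, and Matplotlib; `scipy.stats.multivariate_normal` is used only as a convenient way to evaluate $\log \mathcal{N}(x_\alpha \mid 0, I)$, and the grid of $\alpha$ values is an arbitrary illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n_pairs = 1000
alphas = np.linspace(0, 1, 50)

for D in [1, 2, 3, 10, 100]:
    xa = rng.normal(size=(n_pairs, D))
    xb = rng.normal(size=(n_pairs, D))
    unit_gauss = multivariate_normal(mean=np.zeros(D), cov=np.eye(D))

    lin_ll, polar_ll = [], []
    for a in alphas:
        x_lin = a * xa + (1 - a) * xb                    # item 6: linear interpolation
        x_pol = np.sqrt(a) * xa + np.sqrt(1 - a) * xb    # item 7: polar interpolation
        lin_ll.append(unit_gauss.logpdf(x_lin).mean())   # average log N(x_alpha | 0, I)
        polar_ll.append(unit_gauss.logpdf(x_pol).mean())

    plt.figure()
    plt.plot(alphas, lin_ll, label="linear interpolation")
    plt.plot(alphas, polar_ll, label="polar interpolation")
    plt.xlabel(r"$\alpha$")
    plt.ylabel("average log-likelihood under N(0, I)")
    plt.title(f"D = {D}")
    plt.legend()
plt.show()
```

Producing one figure per $D$, each with a linear and a polar curve, matches the "one plot for each $D$ with two curves on each" requirement stated in item 7.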
