ISYE6740 Homework 3 Solved


1.     Order of faces using ISOMAP

This question aims to reproduce the ISOMAP results from the original paper, J.B. Tenenbaum, V. de Silva, and J.C. Langford, Science 290 (2000) 2319-2323, which we have also seen in lecture as an exercise (isn't it exciting to go through the process of generating the results of a high-impact research paper!).

The file isomap.mat (or isomap.dat) contains 698 images, corresponding to different poses of the same face. Each image is given as a 64 × 64 luminosity map and hence represented as a vector in R^4096. This vector is stored as a row in the file. [This is one of the datasets used in the original paper.] In this question, you are expected to implement the ISOMAP algorithm by coding it up yourself. You may use the provided functions in ShortestPath.zip to find the shortest paths, as required by one step of the algorithm. To load the data in Python, for instance, you can use from scipy.io import loadmat; images = loadmat('isomap.mat')['images']. To load the data in Matlab, you can directly use the load() function.

Choose the Euclidean distance (i.e., in this case, a distance in R^4096) to construct the nearest-neighbor graph, with vertices corresponding to the images. That is, construct a similarity graph whose vertices correspond to the images, and tune the threshold ε so that each node has at least K = 50 neighbors (this approach corresponds to the so-called ε-ISOMAP).
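As a rough illustration, a minimal sketch of this graph-construction step in Python (numpy/scipy, with the file and key names from the loading hint above) might look like the following; taking ε as the largest 50th-nearest-neighbor distance is just one reasonable way to guarantee that every node has at least K = 50 neighbors.

```python
import numpy as np
from scipy.io import loadmat
from scipy.spatial.distance import cdist

# Load the 698 images; make sure each row is one flattened 64x64 image in R^4096
# (transpose if your copy of the file stores images as columns instead of rows).
images = loadmat('isomap.mat')['images']
X = images if images.shape[0] == 698 else images.T    # shape (698, 4096)

# Pairwise Euclidean distances between all images.
D = cdist(X, X, metric='euclidean')                   # shape (698, 698)

# Tune epsilon so that every node has at least K = 50 neighbors:
# use the largest "distance to the K-th nearest neighbor" over all points.
K = 50
eps = np.sort(D, axis=1)[:, K].max()                  # column 0 is each point's distance to itself

# Weighted adjacency matrix of the epsilon-neighborhood graph;
# np.inf marks "no edge" for the shortest-path step of ISOMAP.
A = np.where(D <= eps, D, np.inf)
np.fill_diagonal(A, 0.0)
```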

  • (10 points) Visualize the similarity graph (you can either show the adjacency matrix or, similar to the lecture slides, visualize the graph using a graph visualization package such as Gephi (https://gephi.org)), and illustrate a few images corresponding to nodes at different parts of the graph (e.g., mark them by hand or use software packages).
  • (20 points) Implement the ISOMAP algorithm yourself to obtain a two-dimensional (low-dimensional) embedding; a sketch is given after this list. Plot the embedding using a scatter plot, similar to the plots in the lecture slides. Find a few images in the embedding space and show what these images look like. Comment on whether you see any visual similarity among them and in their arrangement, similar to what you have seen in the paper.
  • (10 points) Now choose the ℓ1 distance (or Manhattan distance) between images (recall the definition from the "Clustering" lecture). Repeat the steps above: use ε-ISOMAP to obtain a k = 2 dimensional embedding and present a plot of this embedding. Do you see any difference from choosing a different similarity measure, comparing the results in Part (b) and Part (c)?

  • (10 points) Perform PCA (you can now use your implementation written in Question 1) on the images and project them onto the top 2 principal components. Again show them on a scatter plot. Explain whether or not you see a more meaningful projection using ISOMAP than PCA.
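Continuing from the adjacency matrix A sketched above, the remaining ISOMAP steps could look roughly like this (scipy's shortest_path is used here in place of the provided ShortestPath.zip functions, followed by classical MDS on the geodesic distances):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse.csgraph import shortest_path

# 1. Geodesic distances: shortest paths over the epsilon-neighborhood graph A built above.
#    (With at least K = 50 neighbors per node the graph should be connected;
#     otherwise G would contain inf entries and the next step would break.)
G = shortest_path(A, method='D', directed=False)       # (698, 698) graph distances

# 2. Classical MDS on the squared geodesic distances (double centering).
n = G.shape[0]
H = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
C = -0.5 * H @ (G ** 2) @ H

# 3. Two-dimensional embedding from the two leading eigenvectors.
vals, vecs = np.linalg.eigh(C)                         # eigenvalues in ascending order
top = np.argsort(vals)[::-1][:2]
Z = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))   # (698, 2) embedding

# Scatter plot of the embedding; annotate a few points with their face images if desired.
plt.scatter(Z[:, 0], Z[:, 1], s=8)
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.show()
```

For Part (c), the same pipeline applies after swapping metric='euclidean' for metric='cityblock' in cdist; for Part (d), PCA amounts to centering the data matrix and projecting onto its top two principal components.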

2.     Density estimation: Psychological experiments.

We will use this data to study whether or not the two brain regions are likely to be independent of each other, also considering different types of political views. For this question, you can use a proper package for the histogram and KDE; there is no need to write your own.

The data set n90pol.csv contains information on 90 university students who participated in a psychological experiment designed to look for relationships between the size of different regions of the brain and political views. The variables amygdala and acc indicate the volume of two particular brain regions known to be involved in emotions and decision-making, the amygdala and the anterior cingulate cortex; more exactly, these are residuals from the predicted volume, after adjusting for height, sex, and similar body-type variables. The variable orientation gives the students' locations on a five-point scale from 1 (very conservative) to 5 (very liberal). Note that in the dataset, we only have observations for orientation from 2 to 5.

Recall that in this case, the kernel density estimator (KDE) for a density is given by

p̂(x) = (1/n) Σ_{i=1}^{n} (1/h^d) K((x_i − x)/h),

where the x_i are the data points (two-dimensional vectors in the joint case, so d = 2), and h > 0 is the kernel bandwidth, which you can choose based on the criterion we discussed in lecture. For one-dimensional KDE, use a one-dimensional Gaussian kernel

K(u) = (1/√(2π)) exp(−u^2/2).

For two-dimensional KDE, use a two-dimensional Gaussian kernel: for x = (x1, x2) ∈ R^2,

K(x) = (1/(2π)) exp(−(x1^2 + x2^2)/2),

where x1 and x2 are the two dimensions, respectively.

  • (10 points) Form a one-dimensional histogram and KDE to estimate the distributions of amygdala and acc, respectively. For this question, you can ignore the variable orientation. Decide on a suitable number of bins so you can see the shape of the distribution clearly. Set an appropriate kernel bandwidth h > 0; for example, for one-dimensional KDE, you are welcome to use the rule-of-thumb bandwidth estimator

h ≈ 1.06 σ̂ n^(−1/5),

where n is the sample size and σ̂ is the sample standard deviation; this is shown to be optimal when Gaussian kernel functions are used for univariate data.
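A minimal Python sketch for this part, assuming n90pol.csv has columns named amygdala, acc, and orientation as described above (the bin count and the rule-of-thumb bandwidth are illustrative choices):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('n90pol.csv')

def gaussian_kde_1d(x, grid, h):
    """Evaluate the one-dimensional Gaussian KDE (formula above) at the grid points."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

for col in ['amygdala', 'acc']:
    x = df[col].to_numpy()
    n = len(x)
    h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)        # rule-of-thumb bandwidth from above
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 200)

    plt.figure()
    plt.hist(x, bins=20, density=True, alpha=0.4)   # histogram; tune bins as needed
    plt.plot(grid, gaussian_kde_1d(x, grid, h))     # KDE overlay
    plt.title(f'{col}: histogram and KDE (h = {h:.4f})')
    plt.show()
```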

  • (10 points) Form a 2-dimensional histogram for the pair of variables (amygdala, acc). Decide on a suitable number of bins so you can see the shape of the distribution clearly.
  • (10 points) Use kernel density estimation (KDE) to estimate the 2-dimensional density function of (amygdala, acc) (this means that for this question, you can ignore the variable orientation). Set an appropriate kernel bandwidth h > 0.

Please show the two-dimensional KDE (e.g., a two-dimensional heat map, a two-dimensional contour plot, etc.).

Based on the results, please explain whether you can infer that the two variables (amygdala, acc) are likely to be independent or not.
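For the two-dimensional KDE, a library estimator is allowed; one possible sketch uses scipy.stats.gaussian_kde (its default Scott's-rule bandwidth is used here, and you may override it with your own choice of h):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

df = pd.read_csv('n90pol.csv')
xy = df[['amygdala', 'acc']].to_numpy().T            # shape (2, n), as gaussian_kde expects

kde = gaussian_kde(xy)                               # default bandwidth (Scott's rule)

# Evaluate the density on a grid and draw a filled contour plot.
gx = np.linspace(xy[0].min(), xy[0].max(), 100)
gy = np.linspace(xy[1].min(), xy[1].max(), 100)
GX, GY = np.meshgrid(gx, gy)
Z = kde(np.vstack([GX.ravel(), GY.ravel()])).reshape(GX.shape)

plt.contourf(GX, GY, Z, levels=20)
plt.xlabel('amygdala')
plt.ylabel('acc')
plt.colorbar(label='estimated density')
plt.show()
```

Comparing this joint density with the product of the two marginal KDEs from the previous part gives an informal, visual check of independence.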

  • (10 points) We will now consider the variable orientation and look at conditional distributions. Please plot the estimated conditional distribution of amygdala conditioned on political orientation: p(amygdala|orientation = c), c = 2,…,5, using KDE. Set an appropriate kernel bandwidth h > 0. Do the same for the volume of the acc: plot p(acc|orientation = c), c = 2,…,5, using KDE. (Note that a conditional distribution can be understood as fitting a distribution to the data with the same orientation. Thus you should plot 8 one-dimensional distribution functions in total for this question.)

Now, based on the results, please explain whether you can infer that the conditional distributions of amygdala and acc, respectively, differ across c = 2,…,5. This is a type of scientific question one could infer from the data: whether or not there is a difference in brain structure across political views.

Now please also fill out the table of conditional sample means for the two variables (a sketch for computing these is given below the table):

             c = 2     c = 3     c = 4     c = 5
  amygdala
  acc
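One possible sketch for the conditional (per-orientation) KDEs and the conditional sample means in the table above, again using scipy.stats.gaussian_kde with its default bandwidth:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

df = pd.read_csv('n90pol.csv')

# Conditional KDEs: one curve per orientation value c = 2, ..., 5 for each variable.
for col in ['amygdala', 'acc']:
    plt.figure()
    grid = np.linspace(df[col].min(), df[col].max(), 200)
    for c in [2, 3, 4, 5]:
        x = df.loc[df['orientation'] == c, col].to_numpy()
        plt.plot(grid, gaussian_kde(x)(grid), label=f'orientation = {c}')
    plt.title(f'p({col} | orientation = c)')
    plt.legend()
    plt.show()

# Conditional sample means to fill in the table above.
print(df.groupby('orientation')[['amygdala', 'acc']].mean())
```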

Remark: As you can see from this exercise, you can extract much more information from density estimation than from simple summary statistics (e.g., the sample mean) in terms of exploratory data analysis.

  • (10 points) Again we will consider the variable orientation. We will estimate the conditional joint distribution of the volume of the amygdala and acc, conditioned on political orientation: p(amygdala, acc|orientation = c), c = 2,…,5. You will use two-dimensional KDE to achieve this goal; set an appropriate kernel bandwidth h > 0. Please show the two-dimensional KDE (e.g., a two-dimensional heat map, a two-dimensional contour plot, etc.).

Based on the results, please explain whether you can infer that the conditional joint distribution of the two variables (amygdala, acc) differs across c = 2,…,5. This is a type of scientific question one could infer from the data: whether or not there is a difference in brain structure across political views.
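A compact sketch for these conditional joint densities, drawing one contour plot per orientation value (again with gaussian_kde and its default bandwidth, which you may replace with your own choice of h):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

df = pd.read_csv('n90pol.csv')
gx = np.linspace(df['amygdala'].min(), df['amygdala'].max(), 100)
gy = np.linspace(df['acc'].min(), df['acc'].max(), 100)
GX, GY = np.meshgrid(gx, gy)
grid = np.vstack([GX.ravel(), GY.ravel()])

fig, axes = plt.subplots(2, 2, figsize=(10, 8))
for ax, c in zip(axes.ravel(), [2, 3, 4, 5]):
    sub = df[df['orientation'] == c]
    kde = gaussian_kde(sub[['amygdala', 'acc']].to_numpy().T)   # per-group 2-D KDE
    ax.contourf(GX, GY, kde(grid).reshape(GX.shape), levels=15)
    ax.set_title(f'orientation = {c}')
    ax.set_xlabel('amygdala')
    ax.set_ylabel('acc')
plt.tight_layout()
plt.show()
```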
