## Description

**Homework 2**

**Mathematics:**

Consider a classification problem where we wish to determine whether a human subject is likely to get a concussion in the next year. We use five features – x_{1} (Age), x_{2} (concussHistory), x_{3} (FavoriteSport), x_{4} (LastEducation), x_{5} (Gender). Each feature takes on one of a discrete number of values, shown below:

| Feature | Possible values |
| --- | --- |
| Age | Child, Teen, Adult |
| concussHistory | Never, Recent, DecadesAgo |
| FavoriteSport | Boxing, Golf, Rugby, Baseball |
| LastEducation | Masters, HighSchool, College |
| Gender | Female, Male |

We wish to classify each subject as either y^{i} = LikelyConcuss or y^{i} = NotLikelyConcuss.

- How can the features above be transformed to use a logistic classifier? Explain in 1-4 sentences.

- How many parameters are required to learn a separating hyper-plane (**w** and any other necessary elements) for logistic classification with the features converted in question 1? (Work from your answer to question 1. If you could not figure out question 1, assume we have a new space of 6 continuous numeric features x_{1}, x_{2}, …, x_{6} – this may or may not be a valid result from question 1.)

- Which of the following vectors has (approximately) unit magnitude? (At most 1.1, at least 0.9.)

(a) [0.3, −0.6, 0.7, −0.6]^T &nbsp; (b) [0, 0.6, −0.4, 0.7]^T &nbsp; (c) [1, −1, 0, −1]^T &nbsp; (d) [2, −1, 0, 0.5]^T

- What is the projection (dot product) of the vectors below onto the unit magnitude vector from question 3? (If you believe none of the vectors in question 3 have unit magnitude, just use vector (d) from question 3.)

(a) [2, 1, 0, 0]^T &nbsp; (b) [−2, 1, 0, 2]^T &nbsp; (c) [0.5, 1.5, −1, 1.5]^T
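As a refresher on the mechanics, the projection onto a unit vector is simply the dot product. A toy check with made-up vectors (not the vectors from these questions), assuming numpy is available:

```python
import numpy as np

# toy vectors (not the homework inputs): u has unit magnitude
u = np.array([0.6, 0.8])
v = np.array([2.0, 1.0])

# projection of v onto u is the dot product v . u
print(np.dot(v, u))  # 2.0
```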

- Let us assume we have a classifier with **w** = [1, −4, 6]^T and b = 2. Provide **two** distinct inputs **x**^{i} that would be on the boundary between class 0 and 1; specifically, find two inputs **x**^{i} where g(**x**^{i}; **w**) = 0.5.

- In class, we define the logistic/sigmoid function as g(h) = 1/(1 + e^{−h}), producing the curve in solid blue shown to the right. Note that while g(h) ≈ 0 when h is a very low negative number (h < −4) and g(h) ≈ 1 when h is a very high positive number (h > 4), when h is close to zero (−2 < h < 2), g(h) transitions roughly linearly between 0.1 and 0.9. Consider in contrast g^{alt}(h), the dashed **red** curve, which stays close to 0 for input h = −2 and stays close to 1 for input h = 2.

  __How can we adjust the definition of g(h) to create the dashed **red** curve g^{alt}(h)?__ (You do not need to give the exact equation for g^{alt}(h), but explain what alterations, if any, are made to the numerator, denominator, exponent, etc.)
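The saturation behavior described above is easy to verify numerically; a small sketch assuming numpy (g here is the standard sigmoid):

```python
import numpy as np

def g(h):
    # standard logistic/sigmoid: g(h) = 1 / (1 + e^(-h))
    return 1.0 / (1.0 + np.exp(-h))

print(g(-4.0), g(4.0))  # ~0.018 and ~0.982: nearly saturated
print(g(-2.0), g(2.0))  # ~0.119 and ~0.881: the roughly linear 0.1-0.9 zone
```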

- We now wish to have a modified sigmoid g^{alt2} where g^{alt2}(h) < 0.05 only when h < −10 and g^{alt2}(h) > 0.95 only when h > 10. Provide an exact algebraic formula for g^{alt2}(h) to fit this specification.

In class, we considered the logistic function output as a pseudo-probability and used it to define a likelihood for data given the parameters in **w** (for the following three questions we will assume b is included in **w**).

$$L(y \mid \mathbf{x}; \mathbf{w}) = \prod_i g(\mathbf{x}^i; \mathbf{w})^{y^i} \left(1 - g(\mathbf{x}^i; \mathbf{w})\right)^{1 - y^i}$$

Note for binary classification, y=0 or y=1. The goal is to find **w** to maximize L for all data (**x**^{i},y^{i}). Let us say we skip the logistic function and learn the two-way classifier **w** to maximize a sum:

$$S(y \mid \mathbf{x}; \mathbf{w}) = \sum_i d \, \mathbf{w}^T \mathbf{x}^i$$

- Replace *d* with a valid expression such that S(y|**x**; **w**) will be largest when **w** is correctly selected to classify data from class 0 (y^{i} = 0) as **w**^T**x** < 0 and data from class 1 (y^{i} = 1) as **w**^T**x** > 0. *d* should be an algebraic expression involving some of the variables **w**, **x**^{i}, and/or y^{i}. For example: d = |**x**^{i}| or d = 10 + y^{i}.

- What is the derivative of S(y|**x**; **w**) with respect to a single element of **w**, w_{j}? (Consider the definition of **w**^T**x**.)

- Recall L2 regularization affects the gradient ascent learning rule by adding $+\lambda(\mu_j - w_j)$.

  Let us now create a **new** regularization with the prior probability $P(w_j) = e^{-|w_j - \mu_j|^3 / \alpha}$.

  **What is the resulting change to the gradient ascent learning rule?**

# B. SVMs

- Let us use SVM to define a linear classifier with the following support vectors and α's:

  **x**^{1} = [−2, 3, 0, 0]^T, **x**^{2} = [−2, 0, 0, 3]^T, **x**^{3} = [2, −4, 2, 0]^T, **x**^{4} = [1, −4, 0, 5]^T

  y^{1} = +1, y^{2} = +1, y^{3} = −1, y^{4} = −1

  α^{1} = 1.2, α^{2} = 0.9, α^{3} = 1.2, α^{4} = 0.9

  What is the resulting **w**?

- Consider the same support vectors as before but a different set of 𝛼’s:

α^{1} = 1.5, α^{2} = 1, α^{3} = 2, α^{4} = ???

Presuming **x**^{1}, **x**^{2}, **x**^{3}, and **x**^{4} are the only support vectors available, what value must 𝛼^{4} have?

What is the resulting **w**?

- The respective update rules for logistic classification and SVM learning are

  $$\mathbf{w} \leftarrow \mathbf{w} + \varepsilon \, \mathbf{x}^i \left(y^i - g(\mathbf{x}^i; \mathbf{w})\right)$$

  and

  $$\mathbf{w} \leftarrow \mathbf{w} - \varepsilon \left( 2\mathbf{w} - \sum_i y^i \lambda_i \mathbf{x}^i \right)$$

- Explain in 1-2 sentences how logistic classification learning minimizes the impact of correctly-labeled data points far from the hyperplane.

- Explain in 1-2 sentences how SVM learning minimizes the impact of correctly-labeled data points far from the hyperplane.

- Which of the two methods (or both) allows all data points to be used in the final hyperplane definition?

We did not yet define “Kernel functions”, but I recommend students try this question anyway … just treat it as a norm function that takes in two vectors. It will count for a small amount of credit.

- (6 points) Consider the following kernel function:

$$K(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a}^T \mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|}$$

Further consider the following support vectors:

**x**^{4} = [1, −2, 0, −2]^T &nbsp; **x**^{5} = [0, 0.5, 0, 1]^T &nbsp; **x**^{6} = [−0.2, 1, 1, −0.5]^T

What is the output of the kernel function for each pair of inputs below?

K(**x**^{4},**x**^{6}) K(**x**^{6},**x**^{5})
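To see how this kernel behaves, here is an illustrative sketch assuming numpy, using made-up vectors rather than **x**^{4}, **x**^{5}, **x**^{6}:

```python
import numpy as np

def K(a, b):
    # normalized dot product: a^T b / (|a| |b|)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# toy inputs (not the support vectors from the problem)
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0, 0.0])
print(K(a, a))  # 1.0: parallel vectors
print(K(a, b))  # 0.0: orthogonal vectors
```

Because the dot product is normalized by both magnitudes, the output depends only on the angle between the inputs, not their lengths.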


# C. Programming

In this question you will implement algorithms for learning parameters and classifying with Logistic Classification.

To submit Part C, leave your code in the Python file hw2.py inside your private/CIS5800 directory.

**Accessing our data **

The file hw2data.mat is available on our website (and on erdos using `cp ~dleeds/MLpublic/hw2data.mat .`). Load this file into Python to get access to the fullData numpy array. For this array, each row is one example student. Columns 0 through 9 represent **ten** features, corresponding to diverse information about the student – her/his GPA, SAT scores, age, level of financial aid, etc. The last column (column fullData[:,10]) represents the student major y^{i} – 0 (History) or 1 (Chemistry).
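One possible way to load the array (a sketch, not required code; it assumes scipy is installed, and it first writes a small stand-in hw2data.mat so the snippet is self-contained; with the real file you would call loadmat directly):

```python
import numpy as np
from scipy.io import savemat, loadmat

# create a tiny stand-in file so this sketch runs anywhere;
# with the real hw2data.mat, skip straight to loadmat
savemat('hw2data.mat', {'fullData': np.zeros((3, 11))})

data = loadmat('hw2data.mat')
fullData = data['fullData']
X = fullData[:, 0:10]    # ten feature columns
y = fullData[:, 10]      # major label: 0 (History) or 1 (Chemistry)
print(X.shape, y.shape)  # (3, 10) (3,)
```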

- Write a function called **sigmoidLikelihood** that takes in the *m* features for *n* students as a np-array of dimension (n,m), the corresponding class labels for the *n* students as a np-array of dimension (n), and the separating hyper-plane represented as **w** by a np-array of dimension (m+1). The scalar offset b is included as the final element of **w**. The function returns the sigmoid-based pseudo-likelihood of each student being in the corresponding class, as a np-array of dimension (n).

Use the following syntax:

LVector = sigmoidLikelihood(X, y, w)

The pseudo-likelihood is computed using the sigmoid function (which you must implement for this homework).
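For illustration only (these names and toy values are not part of the required solution): since b is stored as the final element of **w**, the sigmoid argument **w**^T**x** + b for every row can be computed by appending a column of ones to the data.

```python
import numpy as np

def sigmoid(h):
    # logistic function 1 / (1 + e^(-h)), applied elementwise
    return 1.0 / (1.0 + np.exp(-h))

# illustrative: with b as the last element of w,
# w^T x + b = [x, 1] . w for each data row
X = np.array([[1.0, 2.0], [0.0, -1.0]])  # (n, m) toy data
w = np.array([0.5, -0.5, 0.25])          # (m+1,): last entry is b
h = np.hstack([X, np.ones((X.shape[0], 1))]) @ w
print(sigmoid(h))                        # one pseudo-probability per data row
```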

- The function sigmoidLikelihood returns an np-array of pseudo-likelihoods for *n* data points. If we wish to calculate the full pseudo-likelihood across all inputs, we multiply the function outputs: ℒ = ∏_{i} P(**x**^{i}, y^{i}; **w**), or prod(LVector). However, if too many small probabilities are multiplied together, the total pseudo-likelihood will be rounded to exactly zero, losing potentially valuable information. This collapse to zero can be avoided by instead taking the pseudo-**log-likelihood** np.sum(np.log(LVector)).

- If LVector contains all values of 0.05 ([0.05, 0.05, 0.05, …, 0.05]), how many data points (elements in LVector) are needed for np.prod to estimate the pseudo-likelihood as exactly 0?

- What is the pseudo-log-likelihood equivalent given the number of data points from part (a)?

*Include your answers as a comment in hw2.py*.
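The underflow issue itself can be demonstrated with an arbitrary count (500 here is illustrative and is not the answer to part (a)):

```python
import numpy as np

# 500 moderate probabilities: the product underflows to exactly 0.0,
# while the sum of logs remains a finite, informative number
LVector = np.full(500, 0.05)
print(np.prod(LVector))         # 0.0
print(np.sum(np.log(LVector)))  # about -1497.9
```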

- Write a function called **learnLogistic** that takes in the initial hyper-plane vector **w**^{0} as a np-array of dimension (m+1) with b as the final element, training data **X** with n data points and m features as a np-array of dimension (n,m), the correct labels for each point y as a np-array of dimension (n), and the number of learning loops *K*. The function outputs the new weights **w** as a np-array of dimension (m+1) and the log-likelihood of the input data after each loop as a np-array of dimension (K).

Use the following syntax:

w, LHistory = learnLogistic(w0, X, y, K)

**Note:** For each “loop,” learnLogistic should loop through each data point in the training set and use gradient ascent to update **w** for each data point.

```
# this is pseudo-code!
for dataPt i:
    for feature j:
        update[j] += stepSize * updateRule(x[i,j], y[i])
    for feature j:
        w[j] += update[j]
```

Assume stepSize=0.01.

- In class, we discussed that learnLogistic can run faster if you use fewer loops. For example, instead of looping on feature j, numpy allows you to write:

```
# this is pseudo-code!
update += stepSize * updateRule(x[i,:], y[i])
```

- Write learnLogisticFast to accept the same inputs and produce the same outputs as learnLogistic above, but using vector math to remove at least one loop (you may be able to remove more than one, but only have to remove one).

- Compare the speeds of learnLogistic and learnLogisticFast by running them on the data in hw2data.mat with at least 10 loops. **Provide your answer in a comment.**

Specifically, use time.time() to determine the time it takes each function to run:

```python
# this part is not pseudo-code
# comment this out/delete it before submitting hw2.py
# JUST SUBMIT THE TIME IT TAKES TO RUN learnLogistic AND learnLogisticFast
import time

timeStart = time.time()
w, LHistory = learnLogistic(w0, X, y, K)
timeEnd = time.time()
print(timeEnd - timeStart)
```

- Write a function **logisticClassify** that takes in the *m* feature values for the *n* data points as a np-array with dimensions (n,m) and the weight vector **w** as a np-array with dimension (m+1) (with b as the final element of **w**). The function returns the 0/1 label for each data point as a np-array of dimension (n).

Use the following syntax:

classLabels=logisticClassify(x,w)