CS7642 Project #3: Correlated Q-Learning


Problem

Description

As you encountered in the first project, replicating previously published results can be an interesting and challenging task. You learned that researchers often leave out important details, which forces you to perform extra experimentation to reproduce the reported results.

 

For this project, you will read “Correlated Q-Learning” by Amy Greenwald and Keith Hall. You are then asked to replicate the results found in Figure 3 (parts a–d). You can use any programming language and libraries you choose.

Procedure

  • Read the paper.
  • Develop a system to replicate the experiment found in section “5. Soccer Game”
    ○ This will include the soccer game environment (see the environment sketch at the end of this section)
    ○ This will include agents capable of Correlated-Q, Foe-Q, Friend-Q, and Q-learning (see the equilibrium-value sketch at the end of this section)
  • Run the experiment found in section “5. Soccer Game”
    ○ Collect data necessary to reproduce all the graphs in Figure 3
  • Create graphs demonstrating
    ○ The Q-value difference for all agents (see the plotting sketch at the end of this section)
    ○ Anything else you may think appropriate

  • We’ve created a private Georgia Tech GitHub repository for your code. Push your code to the personal repository found here: https://github.gatech.edu/gt-omscs-rldm
    ○ The quality of the code is not graded. You don’t have to spend countless hours adding comments, etc. However, it will be examined by the TAs.
    ○ Make sure to include a README.md file for your repository
      ■ Include thorough and detailed instructions on how to run your source code in the README.md
    ○ You will be penalized by 50 points if you:
      ■ Do not have any code or do not submit your full code to the GitHub repository
      ■ Do not include the git hash for your last commit in your paper

  • Write a paper describing your agents and the experiments you ran
    ○ Include the hash for your last commit to the GitHub repository in the paper’s header.
    ○ The rubric includes a few points for formatting. Make sure your graphs are legible and you cite sources properly. While it is not required, we recommend you use a conference paper format; just pick any one.
    ○ 5 pages maximum — really, you will lose points for longer papers.
    ○ Describe the game
    ○ Describe the experiments/algorithms replicated: implementation, outcome, etc.
    ○ Explain your experiments
    ○ The paper should include your graphs
      ■ And discussions regarding them
    ○ Discuss your results
      ■ How well do they match the published results?
      ■ Are there significant differences?
    ○ Describe any problems/pitfalls you encountered (e.g. unclear parameters, contradictory descriptions of the procedure to follow, results that differ wildly from the published results)
      ■ What steps did you take to overcome them?
      ■ What assumptions did you make?
        • Justifications for such assumptions
    ○ Save this paper in PDF format
    ○ Submit!
  • Celebrate your mastery of Reinforcement Learning!

Your grade will largely be based upon your report and analysis.
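To make the environment step concrete, here is a minimal sketch of the soccer game, assuming the rules as commonly read from Section 5 and Figure 4 of the paper: a 2x4 grid, five actions (N, S, E, W, stick), simultaneous action choice with random execution order, a collision rule that blocks the move and hands over the ball, and ±100 rewards when the ball carrier reaches a goal column. The grid size, start state, goal assignment, and collision rule here are assumptions you must verify against the paper, and all names are illustrative.

```python
import random

# Minimal soccer-game sketch. Grid size, start state, goal columns, and the
# collision rule are assumptions based on Section 5 / Figure 4 of
# Greenwald & Hall (2003) -- verify each detail against the paper.
ROWS, COLS = 2, 4
ACTIONS = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1), 'stick': (0, 0)}
GOAL_COLS = {'A': 0, 'B': 3}   # assumed: A's goal is column 0, B's is column 3

class SoccerGame:
    def reset(self):
        # Assumed start state (Figure 4): A at (0, 2), B at (0, 1), B holds the ball.
        self.pos = {'A': (0, 2), 'B': (0, 1)}
        self.ball = 'B'
        return self.state()

    def state(self):
        return (self.pos['A'], self.pos['B'], self.ball)

    def step(self, action_a, action_b):
        """Execute both actions in a random order; return (state, r_A, r_B, done)."""
        moves = {'A': action_a, 'B': action_b}
        for player in random.sample(['A', 'B'], 2):      # random execution order
            other = 'B' if player == 'A' else 'A'
            dr, dc = ACTIONS[moves[player]]
            r, c = self.pos[player]
            nr = min(max(r + dr, 0), ROWS - 1)           # stay on the grid
            nc = min(max(c + dc, 0), COLS - 1)
            if (nr, nc) == self.pos[other]:
                # Collision: the move fails, and a ball carrier loses possession.
                if self.ball == player:
                    self.ball = other
            else:
                self.pos[player] = (nr, nc)
            if self.ball == player:
                col = self.pos[player][1]
                if col in GOAL_COLS.values():
                    # The owner of the reached goal column gets +100, the other -100,
                    # regardless of who carried the ball in (assumed from the paper).
                    r_a = 100 if col == GOAL_COLS['A'] else -100
                    return self.state(), r_a, -r_a, True
        return self.state(), 0, 0, False
```

Episodes restart via reset() whenever done comes back True; run enough iterations that the monitored Q-value differences in Figure 3 have visibly converged.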
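The four agents differ mainly in how they compute the value of the stage game at the next state that is plugged into the Q update: plain Q-learning maximizes over its own actions, Friend-Q maximizes over joint actions, Foe-Q uses the maximin value, and Correlated-Q solves a linear program for a correlated equilibrium (the paper's uCE-Q variant uses a utilitarian objective; check which variant Figure 3 reports). Below is a hedged sketch of the Friend-Q, Foe-Q, and uCE-Q value computations with scipy.optimize.linprog. Q1 and Q2 are the two players' joint-action Q matrices at one state (rows indexed by player 1's actions, columns by player 2's); the function names are illustrative, and the exact update rule, learning-rate schedule, and discount handling must be taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def friend_value(Q1, Q2):
    """Friend-Q: each player assumes the joint action most favourable to it is played."""
    return Q1.max(), Q2.max()

def foe_value(Q1):
    """Foe-Q: maximin value of the zero-sum stage game Q1
    (rows = this player's actions, columns = the opponent's).
    In the zero-sum soccer game, the opponent's value is the negation of this."""
    n_rows, n_cols = Q1.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                     # variables [p_0..p_{n-1}, v]; maximize v
    A_ub = np.hstack([-Q1.T, np.ones((n_cols, 1))])  # v <= sum_i p_i * Q1[i, j] for every j
    b_ub = np.zeros(n_cols)
    A_eq = np.ones((1, n_rows + 1))
    A_eq[0, -1] = 0.0                                # probabilities sum to 1
    bounds = [(0, 1)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[-1]

def uce_value(Q1, Q2):
    """Utilitarian correlated-equilibrium (uCE) value of a two-player stage game.

    Finds a joint distribution p over joint actions maximizing the players' summed
    expected Q-values, subject to the correlated-equilibrium rationality constraints."""
    n1, n2 = Q1.shape
    c = -(Q1 + Q2).flatten()                         # maximize total expected payoff
    rows = []
    for a in range(n1):                              # player 1: no profitable deviation a -> a2
        for a2 in range(n1):
            if a2 != a:
                row = np.zeros((n1, n2))
                row[a, :] = Q1[a2, :] - Q1[a, :]
                rows.append(row.flatten())
    for b in range(n2):                              # player 2: no profitable deviation b -> b2
        for b2 in range(n2):
            if b2 != b:
                row = np.zeros((n1, n2))
                row[:, b] = Q2[:, b2] - Q2[:, b]
                rows.append(row.flatten())
    A_eq = np.ones((1, n1 * n2))                     # distribution sums to 1
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, 1)] * (n1 * n2))
    p = res.x.reshape(n1, n2)
    return float((p * Q1).sum()), float((p * Q2).sum())
```

Each agent's update then has the general shape Q[s][a1][a2] += alpha * (r + gamma * V_next - Q[s][a1][a2]), where V_next comes from one of the functions above (or a plain max over the agent's own actions for the ordinary Q-learner); the learning-rate and exploration schedules are among the under-specified details you will likely have to experiment with.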
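Finally, the quantity plotted in Figure 3 is the change in one monitored Q-value from one update to the next, so data collection amounts to logging the absolute difference between successive values of Q(s, a) for one fixed state and joint action (take that choice of state and action from the paper's description of Figure 3). A minimal logging-and-plotting sketch, assuming matplotlib and illustrative names, follows.

```python
import matplotlib.pyplot as plt

# err_history collects |Q_t(s, a) - Q_(t-1)(s, a)| for one monitored state and
# joint action; call record_update() in the training loop every time that
# particular Q-value is written.
err_history = []

def record_update(old_q, new_q):
    err_history.append(abs(new_q - old_q))

def plot_errors(errors, title):
    """One panel in the style of Figure 3: Q-value difference vs. simulation iteration."""
    plt.plot(errors, linewidth=0.5)
    plt.ylim(bottom=0)          # match the axis ranges of the published panel you replicate
    plt.xlabel('Simulation Iteration')
    plt.ylabel('Q-value Difference')
    plt.title(title)
    plt.show()
```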

Resources

The concepts explored in this project are covered by:

  • Lectures
    ○ Game Theory (all of them)
  • Readings
    ○ Greenwald and Hall (2003), “Correlated Q-Learning”

Submission Details

The due date is indicated on the Canvas page for this assignment.

Due Date: Indicated as “Due” on Canvas

Late Due Date [20 point penalty per day]: Indicated as “Until” on Canvas

Make sure you have set your timezone in Canvas to ensure the deadline is accurate.

The submission consists of:

  • Your written report in PDF format (Make sure to include the git hash of your last commit)
  • Your source code in your personal repository on Georgia Tech’s private GitHub

To complete the assignment, submit your written report to Project 3 under your Assignments on Canvas: https://gatech.instructure.com

You may submit the assignment as many times as you wish up to the due date, but we will only consider your last submission for grading purposes.

Late submissions will receive a cumulative 20 point penalty per day. That is, any projects submitted after midnight AOE on the due date get a 20 point penalty. Any projects submitted after midnight AOE the following day get a 40 point penalty, and so on. No project will receive a score less than zero, no matter the penalty. Any projects more than 4 days late and any unsubmitted projects will receive a 0.

Note: Late is late. It does not matter if you are 1 second, 1 minute, or 1 hour late. If Canvas marks your assignment as late, you will be penalized. Additionally, if you resubmit your project and your last submission is late, you will incur the penalty corresponding to the time of your last submission.

Finally, if you have received an exception from the Dean of Students for a personal or medical emergency we will consider accepting your project up to 7 days after the initial due date with no penalty. Students requiring more time should consider withdrawing from the course (if possible) or taking an incomplete for this semester as we will not be able to grade their project.

 
