CS 7642: Project #3, Reinforcement Learning and Decision Making

Correlated-Q Project #3
1 Problem
1.1 Description
As you encountered in the first project, replicating previously published results can be an interesting and challenging task. Researchers often leave out important details, forcing you to perform extra experimentation to produce the right results.
For this project, you will read "Correlated Q-Learning" by Amy Greenwald and Keith Hall (Greenwald, Hall, and Serrano 2003). You are then asked to replicate the results found in Figure 3 (parts a-d). You can use any programming language and libraries you choose.
1.2 Procedure
• Read the paper.
• Develop a system to replicate the experiment found in Section 5, "Soccer Game"
– This will include the soccer game environment
– This will include agents capable of Correlated-Q, Foe-Q, Friend-Q, and Q-learning
• Run the experiment found in Section 5, "Soccer Game"
– Collect data necessary to reproduce all the graphs in Figure 3
• Create graphs demonstrating
– The Q-value difference for all agents
• We’ve created a private Georgia Tech GitHub repository for your code. Push your code to the personal repository found here: https://github.gatech.edu/gt-omscs-rldm
• The quality of the code is not graded, so you don't have to spend countless hours adding comments; however, the TAs will examine it.
• Make sure to include a README.md file for your repository
– Include thorough and detailed instructions on how to run your source code in the README.md
– If you work in a notebook, like Jupyter, include an export of your code in a .py file along with your notebook
• You will be penalized by 25 points if you:
– Do not have any code or do not submit your full code to the GitHub repository
– Do not include the git hash for your last commit in your paper
• Write a paper describing your agent and the experiments you ran
– Include the hash for your last commit to the GitHub repository in the header on the first page of your paper.
– Make sure your graphs are legible and that you cite sources properly. While it is not required, we recommend you use a conference paper format, for example https://www.ieee.org/conferences/publishing/templates.html
– 5 pages maximum – really, you will lose points for longer papers.
– Describe the game
– Describe the experiments/algorithms replicated: implementation, outcome, etc.
– Explain your experiments.
– The paper should include your graphs and discussions regarding them
– Discuss your results in the context of the game and the algorithm
∗ How well do they match?
∗ Significant differences?
∗ Justify your results. Why do they make sense?
∗ What was the purpose of the experiment, i.e., what is the significance of your results?
– Describe any problems/pitfalls you encountered (e.g. unclear parameters, contradictory descriptions of the procedure to follow, results that differ wildly from the published results)
∗ What steps did you take to overcome them?
∗ What assumptions did you make, and how do you justify them?
– Save this paper in PDF format.
– Submit to Canvas!
• Using a Deep RL library instead of providing your own work will earn you a 0 grade on the project and you will be reported for violating the Honor Code.
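For reference, the soccer game in Section 5 is a small two-player grid Markov game. The sketch below is one possible environment implementation under my reading of the paper's rules; the class name, start positions, goal sides, and reward sign conventions are assumptions on my part, not details taken verbatim from the paper, so verify them against Section 5 before relying on this.

```python
import random

# Sketch of the 2x4 soccer grid game: two players, one ball, simultaneous
# actions executed in a random order. Names and start state are assumptions.
ROWS, COLS = 2, 4
ACTIONS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1), "stick": (0, 0)}

class Soccer:
    def reset(self):
        # Positions are (row, col); B starts with the ball (an assumption).
        self.pos = {"A": (0, 2), "B": (0, 1)}
        self.ball = "B"
        return self._state()

    def _state(self):
        return (self.pos["A"], self.pos["B"], self.ball)

    def step(self, a_A, a_B):
        # Actions are chosen simultaneously but executed in random order.
        for player, action in random.sample([("A", a_A), ("B", a_B)], 2):
            other = "B" if player == "A" else "A"
            dr, dc = ACTIONS[action]
            r, c = self.pos[player]
            nr, nc = r + dr, c + dc
            if not (0 <= nr < ROWS and 0 <= nc < COLS):
                continue  # moves off the board do nothing
            if (nr, nc) == self.pos[other]:
                # Collision: the move fails; if the mover had the ball,
                # possession transfers to the stationary player.
                if self.ball == player:
                    self.ball = other
            else:
                self.pos[player] = (nr, nc)
        # Scoring: here the left column is A's goal, the right column B's.
        _, ball_col = self.pos[self.ball]
        if ball_col == 0:
            return self._state(), (100, -100), True
        if ball_col == COLS - 1:
            return self._state(), (-100, 100), True
        return self._state(), (0, 0), False
```

A typical training loop would call `reset()`, then repeatedly sample joint actions and call `step(a_A, a_B)` until the episode-terminating goal flag comes back `True`.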
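Among the four learners, Foe-Q is the most mechanical to implement: at each state it values the stage game defined by the current Q-values as a zero-sum game, which means solving a minimax linear program. Below is a hedged sketch of that LP using `scipy.optimize.linprog`; the helper name is mine, and the paper does not prescribe any particular solver.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q):
    """Return the minimax value and maximin mixed strategy for a zero-sum
    stage game Q[a_self, a_other], payoffs to the maximizing player."""
    n_self, n_other = Q.shape
    # Variables: [v, p_0, ..., p_{n_self-1}]; maximize v <=> minimize -v.
    c = np.zeros(n_self + 1)
    c[0] = -1.0
    # For every opponent action o: v - sum_a p_a * Q[a, o] <= 0.
    A_ub = np.hstack([np.ones((n_other, 1)), -Q.T])
    b_ub = np.zeros(n_other)
    # The strategy's probabilities must sum to 1.
    A_eq = np.hstack([[[0.0]], np.ones((1, n_self))])
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, 1.0)] * n_self
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0], res.x[1:]
```

On matching pennies, `minimax_value(np.array([[1., -1.], [-1., 1.]]))` should return a value of 0 with the uniform strategy. Correlated-Q replaces this LP with a different one over joint-action distributions subject to correlated-equilibrium constraints, but the plumbing (build constraint matrices, call an LP solver, read off the value) is the same shape.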
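For the Figure 3 graphs, the quantity plotted at each step t is the change ERR_t = |Q_t(s, a) - Q_{t-1}(s, a)| in one monitored Q-value, for a fixed state and joint action. A minimal recording loop might look like the following, where `q_update` and `monitored` are placeholders for your own learner's update and your choice of monitored entry, not names from the paper:

```python
def record_errors(n_steps, q, q_update, monitored):
    """Record |Q_t - Q_{t-1}| for one monitored Q-table entry.

    q         -- a mutable mapping from keys to Q-values
    q_update  -- performs one learning step, mutating q in place
    monitored -- the fixed (state, joint action) key to track
    """
    errs = []
    for _ in range(n_steps):
        before = q[monitored]
        q_update(q)
        errs.append(abs(q[monitored] - before))
    return errs
```

The resulting list can be plotted directly (e.g. with matplotlib) to reproduce the convergence curves; expect the Q-learning curve to behave differently from the equilibrium learners, which is much of the point of the figure.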
1.3 Resources
1.3.1 Lectures
• Lesson 11A: Game Theory
• Lesson 11B: Game Theory Reloaded
• Lesson 11C: Game Theory Revolutions
1.3.2 Readings
• Greenwald-Hall-2003.pdf Greenwald, Hall, and Serrano 2003
1.4 Submission Details
• Your written report in PDF format (Make sure to include the git hash of your last commit)
• Your source code in your personal repository on Georgia Tech’s private GitHub
To complete the assignment, submit your written report to Project 3 under your Assignments on Canvas: https://gatech.instructure.com
Note: Late is late. It does not matter if you are 1 second, 1 minute, or 1 hour late. If Canvas marks your assignment as late, you will be penalized. Additionally, if you resubmit your project and your last submission is late, you will incur the penalty corresponding to the time of your last submission.
1.5 Grading and Regrading
When your assignments, projects, and exams are graded, you will receive feedback explaining your errors (and your successes!) in some level of detail. This feedback is for your benefit, both on this assignment and on future assignments. Internalizing this feedback is itself one of the course's learning goals, alongside others such as understanding game theory, random variables, and noise.
It is important to note that because we consider your ability to internalize feedback a learning goal, we also assess it. This ability is considered 10% of each assignment. We default to assigning you full credit. If you request a regrade and do not receive at least 5 points as a result of the request, you will lose those 10 points.
References
[GHS03] Amy Greenwald, Keith Hall, and Roberto Serrano. “Correlated Q-learning”. In: ICML. Vol. 20. 1. 2003, p. 242.