# Proof Assistance Evals Project

> Review the study design first by reading the Evaluation Design. It explains the methodology, task set, rubric, scoring protocol, and limitations.

## Sources

- [Evaluation Design](/evals.guide.llm.md): This document explains what the project evaluates, why the task set is designed this way, and how human and automated judges should interpret the results.
- [Discrete Math Proof Assistance Evals](/proof-assistance-evals.project.llm.md): This project is an empirical study of how well modern AI assistants help a student work through discrete-math proofs.
- [Project Setup](/setup.guide.llm.md): This guide gets the project running on your computer before you implement the harness.
- [Work Area 1 - Conversation Runner](/conversation-runner.guide.llm.md): Your first implementation target is run_conversations.py.
- [Work Area 2 - Judge Scorer](/judge-scorer.guide.llm.md): Your second implementation target is run_judge.py.
- [Work Area 3 - Analysis](/analysis.guide.llm.md): Your third implementation target is analyze.py.
- [Research Workflow and Follow-Up](/research-workflow.guide.llm.md): After the three scripts work, use this guide to run the actual study in the right order.