Based on your feedback, we are clarifying two points:
1) Quantitative (objective) vs. Qualitative (subjective) prizes.
The Quantitative prize will be awarded to the agent that solves all evaluation tasks in the smallest number of simulation steps, but only if it provably demonstrates gradual learning without forgetting. Since the goal of the 1st Round is to build a gradually learning agent, merely being fastest at solving the evaluation tasks is not enough. This diagram shows the evaluation process for the Quantitative prize:
Full resolution diagram available at:
The Qualitative prize is subjective and is awarded by the Jury for the idea, concept, or design that shows the best promise for scalable gradual learning. We would like to emphasize that this submission does not have to be a working AI agent; an idea described in a white paper alone can also win the prize, which is good news for those whose strength is not AI programming.
We have also clarified this in the updated version of the Specifications document (see the chapters Prizes, Timeline of the Round, and especially Evaluation Criteria). We apologize for the mistake in the previous version of the Specifications, where we confused the Qualitative and Quantitative prizes at one point, but we hope that this clarification is timely and that our initial mistake did not affect your agent development. The requirements for your agents have not changed; we have only clarified the evaluation process.
We have also added the titles Quantitative prize and Qualitative prize and explained their nature more clearly in the updated Rules of the Challenge (changes are shown in green).
2) For the convenience of all participants, we have copied the detailed description of what you need to submit from the Specifications document into the Rules of the Challenge (the updates were made in Paragraph 11 and Footnote 1 of the Rules document). Until now, the Rules simply referred to the main Specifications document; for the sake of clarity, both documents now provide the full description.
So here, once again, is a summary of what you need to submit (your “Technical Solution”):
- the source code of your agent (in any programming language),
- the training tasks (and training data) used for training the agent (only if they differ from the training tasks provided by the organizers),
- your pre-trained agent,
- the design (description/explanation, white paper) of your agent. This white paper should:
  - be brief and well-structured (2 pages max.; if the white paper exceeds the 2-page limit, participants must include a one-page summary at the beginning),
  - include instructions on how to run the agent, including whether it should be evaluated on a GPU or a CPU,
  - explain the main principles and motivations behind the agent’s design in a brief, structured manner,
  - include the participant’s/team’s name and contact details,
  - state your preference: whether you want to open-source the agent or just share it with the organizers.
- Note: if you decide to compete for the Qualitative prize only (best idea), you can submit only the white paper as your Technical Solution.
All the changes in the Rules document are visible in green for your convenience.
We hope that these clarifications were helpful and will add to the transparency of the Challenge. Thank you very much again for your feedback!