Deliverables WP2
WP2 related deliverables
- D2.1: Scenarios Specification and Problem Finding
- The goal of this deliverable is to identify two common environments (the HOUSE and the ROOM ENVIRONMENTS) and the three final scenarios, incorporating the main issues raised by the partners during the discussion phase of the project. The identification of the environments and scenarios is guided by the principle that the common scenarios should also serve as a means of achieving integration among the different approaches characterizing the partners. Section 4 presents and discusses the results of the forum-based discussion. To facilitate the presentation and grouping of the different proposals, the original contributions have been simplified and organized in table format; the original proposals, together with the research problems they involve, are attached to this deliverable in the APPENDIX section. Section 5 identifies and discusses the two environments, and Section 6 sets out common constraints on the robots. Finally, Section 7 describes and discusses the three final scenarios (FINDING AND LOOKING FOR, PREDICTING IN A DYNAMIC WORLD, GUARDS AND THIEVES), derived from the six theoretical scenarios. These final scenarios are the main achievement reported in this deliverable: they constitute the common framework shared by the partners that will be used in the next phases of the MindRACES project.
- D2.2: Scenario Design and Implementation
- The objective of this deliverable is to report on the work carried out by the consortium to design and implement the three selected scenarios, together with the tasks and environments that will be used in the next phases of the project. The output of this deliverable is used to develop, evaluate and test the enhanced architectures and robots, the results of which will be reported in D3.2, D4.2 and D5.2.
- D2.3: Evaluation Methodology and Metrics
- In order to generate evaluation metrics appropriate for MindRACES, a questionnaire was created and distributed to the partners. Additional tables were supplied to help systematize the proposed metrics. The contributions of all the partners were processed and a common table of metrics was generated. Based on the presentation of these results and the discussion among the partners during the regular consortium meeting in Würzburg (April 20-21), a small set of metrics common to all partners was selected, while each partner retains the possibility of adding its own evaluation metrics to account for the specificity of the architecture it uses.