Evaluation Criteria
==Objectives==
The following objectives can be accessed:
* Project Time
* Budget
* AFPs realised in the Code as a percentage >> [[chart]]
* Errors per KLOC
* AFPs realised in the Manual as a percentage
* Errors per page in the Manual
Objectives are the most important evaluation criteria. At the end of a simulation run, trainees want to know whether they have managed the project within budget and on time, and whether they have realised the required AFPs for the code and the manual. The evaluation component provides a textual description for every objective.
>> Back to: [[Classification of the Evaluation Criteria]]
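The quantitative objectives above are simple ratios. As an illustration only (the function names and all figures are made-up examples, not values or definitions taken from the simulator), they could be computed like this:

```python
# Illustrative sketch of two quantitative objectives.
# All numbers below are hypothetical example values.

def errors_per_kloc(remaining_errors, lines_of_code):
    """Errors per thousand lines of code (KLOC)."""
    return remaining_errors / (lines_of_code / 1000)

def afp_percentage(realised_afps, required_afps):
    """AFPs realised in a document as a percentage of the requirement."""
    return 100 * realised_afps / required_afps

print(errors_per_kloc(12, 24000))   # 0.5 errors per KLOC
print(afp_percentage(190, 200))     # 95.0 percent of the required AFPs
```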
==Completeness of Documents==
This criterion gives an overview of the completeness of the documents (Specification, System Design, Module Design, Code, Manual) and is presented to trainees in the form of a diagram. >> [[chart]]
==Remaining Errors in Documents==
Another important evaluation criterion gives an overview of how many errors remain in the documents. You can distinguish the following evaluations:
* Remaining errors in Documents >> [[chart]]
* Remaining errors in the Specification
* Remaining errors in the System Design
* Remaining errors in the Module Design
* Remaining errors in the Code
* Remaining errors in the Manual
>> Back to: [[Classification of the Evaluation Criteria]]
==Efficiency of Reviews or Tests==
This criterion compares the costs of reviews or tests with the errors found and the costs of correcting these errors. With the help of these diagrams it can be determined whether a review or test was performed efficiently.
* Efficiency of the Specification Reviews
* Efficiency of the System Design Reviews
* Efficiency of the Module Design Reviews
* Efficiency of Code Reviews
* Efficiency of Manual Reviews >> [[chart]]
* Efficiency of Module Tests
* Efficiency of System Tests
* Efficiency of Integration Tests
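One plausible reading of the comparison described above is errors found per unit of total cost. This formula is an assumption for illustration, not the simulator's actual definition, and all numbers are hypothetical:

```python
# Hypothetical sketch of the efficiency comparison: the cost of a
# review or test is set against the errors it found and the cost of
# correcting them. The ratio is an illustrative assumption only.

def review_efficiency(errors_found, review_cost, correction_cost):
    """Errors found per unit of total cost (review plus correction)."""
    total_cost = review_cost + correction_cost
    return errors_found / total_cost

# Two hypothetical reviews: the second finds fewer errors at higher cost.
print(review_efficiency(errors_found=30, review_cost=10, correction_cost=5))  # 2.0
print(review_efficiency(errors_found=8, review_cost=15, correction_cost=5))   # 0.4
```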
==Effectiveness of Reviews==
This evaluation criterion compares the errors remaining in the documents with the errors located through reviews. Thereby it can be determined whether trainees have deployed the right developers.
* Effectiveness of Specification Reviews
* Effectiveness of System Design Reviews >> [[chart]]
* Effectiveness of Module Design Reviews
* Effectiveness of Code Reviews
* Effectiveness of Manual Reviews
>> Back to: [[Classification of the Evaluation Criteria]]
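The comparison of located against remaining errors can be read as the fraction of a document's errors that a review caught. Again, this ratio and the numbers are illustrative assumptions, not the simulator's own definition:

```python
# Hypothetical sketch of review effectiveness: errors a review located
# set against errors that remained in the document afterwards.

def review_effectiveness(errors_located, errors_remaining):
    """Fraction of the document's errors that the review caught."""
    return errors_located / (errors_located + errors_remaining)

# A review that caught 45 of the 60 errors present in the specification:
print(review_effectiveness(errors_located=45, errors_remaining=15))  # 0.75
```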
==Losses by Reviews or Tests==
This evaluation criterion shows the number of AFPs that are lost through reviews and tests. Such losses can be caused by a bad choice of developers; therefore, trainees should always check the qualification of a developer.
* Losses in Tests
* Losses in Reviews >> [[chart]]
==Expenses==
This evaluation criterion shows the expenses of each single phase (Specification, Design, Code, Test, Manual).
>> [[chart]]
==Authors of Documents==
This evaluation should check whether trainees have used the right developers for the different phases.
The following evaluations are available:
* Author / authors of the Specification
* Author / authors of the System Design >> [[chart]]
* Author / authors of the Module Design
* Author / authors of the Code
* Author / authors of the Manual
[[de:Bewertungskriterien]]

Revision as of 18:36, 12 May 2013