Evaluation Criteria

==Objectives==
The following objectives can be evaluated and retrieved:
* Project duration
* Budget
* AFPs realised in the code as a percentage
* Errors per KLOC
* AFPs realised in the manuals as a percentage
* Errors per page in the manuals




Objectives are the most important evaluation criteria. At the end of a simulation run, trainees want to know whether they have managed the project within budget and on time, and whether they have achieved the AFPs required for the code and the manuals. The evaluation component provides a textual description for every objective.
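The percentage and density objectives above are simple ratios. As an illustration, they could be computed as follows (the function names and figures are invented for this sketch and are not part of the AMEISE interface):

```python
def afp_percentage(afps_realised, afps_required):
    """Share of the required Adjusted Function Points actually realised."""
    return 100.0 * afps_realised / afps_required

def errors_per_kloc(error_count, lines_of_code):
    """Error density: errors per thousand lines of code."""
    return error_count / (lines_of_code / 1000.0)

# Example: 190 of 200 required AFPs realised in the code,
# 18 errors found in 12,000 lines of code.
print(afp_percentage(190, 200))    # 95.0
print(errors_per_kloc(18, 12000))  # 1.5
```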




>> Back to: [[Classification of the Evaluation Criteria]]


   
   
==Completeness of Documents==
This criterion gives an overview of the completeness of the documents (specification, system design, module design, code, manuals) and is presented to trainees in the form of a diagram.


   
   
==Remaining Errors in Documents==
Another important evaluation criterion gives an overview of how many errors are still left in the documents. The following evaluations can be distinguished:
* Remaining errors in Documents
* Remaining errors in the Specification
* Remaining errors in the System Design
* Remaining errors in the Module Design
* Remaining errors in the Code
* Remaining errors in the Manuals


>> Back to: [[Classification of the Evaluation Criteria]]


   
   
==Efficiency of Reviews or Tests==

This criterion relates the effort of reviews or tests to the errors found and to the effort required to correct these errors. These diagrams show at a glance whether a review or test was performed efficiently.


* Efficiency of the Specification Reviews
* Efficiency of the System Design Reviews
* Efficiency of the Module Design Reviews
* Efficiency of Code Reviews
* Efficiency of Manual Reviews
* Efficiency of Module Tests
* Efficiency of System Tests
* Efficiency of Integration Tests
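Read as a single number, efficiency can be thought of as errors removed per unit of total effort. A minimal sketch, assuming a simple ratio (this is an illustration, not the formula AMEISE actually uses):

```python
def review_efficiency(errors_found, review_effort_h, correction_effort_h):
    """Errors removed per hour of total effort (illustrative measure).

    A review or test that finds many errors at low review and
    correction cost scores high."""
    total_effort = review_effort_h + correction_effort_h
    return errors_found / total_effort

# Example code review: 12 errors found,
# 8 hours of review effort, 16 hours of correction effort.
print(review_efficiency(12, 8, 16))  # 0.5 errors per hour
```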


   
   
==Effectiveness of Reviews==

This evaluation criterion compares the errors still remaining in the documents with the errors located during reviews. This makes it possible to determine whether trainees have employed the proper reviewers.


* Effectiveness of Specification Reviews
* Effectiveness of System Design Reviews
* Effectiveness of Module Design Reviews
* Effectiveness of Code Reviews
* Effectiveness of Manual Reviews
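Since effectiveness compares located errors against errors still remaining, one plausible reading is the fraction of all errors in a document that the review caught. A hedged sketch (the definition below is an illustration, not AMEISE's internal formula):

```python
def review_effectiveness(errors_located, errors_remaining):
    """Share of all errors present in the document that the review
    found (illustrative definition): located / (located + remaining)."""
    return errors_located / (errors_located + errors_remaining)

# Example specification review: 30 errors located, 10 still remaining.
print(review_effectiveness(30, 10))  # 0.75
```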




>> Back to: [[Classification of the Evaluation Criteria]]


   
   
==Losses through Reviews and Tests==

This evaluation criterion shows the number of AFPs that were lost during reviews and tests. Such losses can be caused by a poor choice of reviewers; trainees should therefore always consider the qualification of a reviewer.

* Losses in Tests
* Losses in Reviews
   
   
==Expenses==

This evaluation criterion shows the expenses of each individual phase (specification, design, code, test, manual creation).
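The per-phase expenses can also be expressed as percentage shares of the total. A small sketch with invented figures (the phase names follow the list above; the amounts are illustrative):

```python
def expense_shares(expenses_by_phase):
    """Percentage share of each phase in the total expenses."""
    total = sum(expenses_by_phase.values())
    return {phase: 100.0 * cost / total
            for phase, cost in expenses_by_phase.items()}

# Illustrative effort figures (e.g. person-days) per phase.
shares = expense_shares({
    "Specification": 40, "Design": 60, "Code": 80,
    "Test": 50, "Manual creation": 20,
})
print(shares["Code"])  # 32.0
```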


   
   
==Authors of Documents==

This evaluation should check whether trainees have used the proper developers for the different phases.
The following evaluation options are available:


* Author / authors of the Specification
* Author / authors of the System Design
* Author / authors of the Module Design
* Author / authors of the Code
* Author / authors of the Manuals


[[de:Bewertungskriterien]]

Latest revision as of 15:48, 13 August 2013
