Do Institutional Heads Meet Evaluation Standards? An Analysis Based on Stufflebeam's Criteria

Summary:


This article evaluates whether institutional heads adhere to established evaluation standards, referencing Stufflebeam's criteria for trust and confidence in evaluation data. The article highlights key principles like relevance, importance, scope, timeliness, and pervasiveness, alongside scientific standards such as internal validity, external validity, reliability, and objectivity. It also discusses challenges such as staff roles, costs, and institutional goals that affect the practical implementation of these standards. The analysis emphasizes the importance of adopting these criteria to ensure accurate, reliable, and actionable information for effective decision-making.


Do Institutional Heads Meet Evaluation Standards? An Analysis Based on Stufflebeam's Criteria:

Criteria of Evaluation:


A commitment to greater accountability through an evaluation program does not begin simply with the creation of a central office. Rather, the school district develops a system that coordinates a variety of program evaluation activities across all schools, each of which is staffed for program evaluation on a hierarchical basis.


Thus, the evaluator has complete freedom to select, organize, and report useful information. Because of the direct relationship between the Superintendent and the Director of Evaluation, design problems are minimized.


Philosophically and financially, it is important that the information provided to the Evaluation Services Office and the Superintendent of Schools can be trusted, since district-wide decisions are made on its basis.


Whether the information obtained under the evaluation system is actually used, or is relegated to red tape, depends on the close relationship between the two offices. In addition to trust in the evaluator, the measurement standards prescribed by the Superintendent are also adopted for this purpose.


Review criteria presented by Stufflebeam:


Stufflebeam has defined five practical principles and criteria for trust and confidence in the information provided by the evaluator, which are as follows.

i- Relevance:


The data collected by the evaluator should be relevant to specific purposes; information that is not relevant to those purposes is useless.


ii- Importance:


Information related to specific purposes can pile up quickly, and it is easy to become lost in it; it is therefore important to consider only the most important information.


iii- Scope:


Among the relevant and most important information, only data that has depth and broad scope should be taken into consideration.


iv- Timeliness:


No matter how good information is in terms of relevance, importance, and scope, it is worthless if it is not provided on time. Conversely, information of relatively lower quality is more useful if it is delivered in a timely manner.


v- Pervasiveness:


The design of the evaluation should be such that the information obtained is available to all persons who need to know it.


Functional quality is very important to a decision maker in judging the usefulness of the information provided by the evaluator. Another criterion school administrators consider is whether the information obtained is practically commensurate with what is required of the evaluation staff.


Practical standards are necessary not only for school administrators; the same standards should also be applied to the information received from the evaluator.


The following are the scientific standards defined by Stufflebeam.

i- Internal Validity:


Information must be accurate and truthful. The best way to ensure truth and accuracy is to obtain the information directly; where that is not possible, a statement of the means by which the information was obtained is necessary.


ii- External Validity:


This refers to the generalizability of the information, i.e., whether the information applies only to one sample group or can be applied to other groups as well.


iii- Reliability:


This refers to the consistency of the information, i.e., if the data were collected again, the results would remain almost the same. Reliability depends on the nature of the sources used to collect the information.


iv- Objectivity:


This refers to the information being comprehensible to everyone, so that all concerned can agree on the meaning of the data.


Adopting these practical and scientific standards ensures that the information obtained from the evaluation process is accurate, sound, and useful for decision-making in the program concerned. Verifying the accuracy of information is important not only for the evaluator but also for the decision maker.


Even if the superintendent is convinced that the information obtained from evaluation can improve decisions, making use of evaluation services is not easy, because complexities arise from staff roles, implementation costs, and the department's own goals and objectives for the evaluation. All of these factors must therefore be taken into account to address the problem effectively.

