Fourth Generation Evaluation

Source: Guba, E.G. and Lincoln, Y.S. (1989) Fourth Generation Evaluation. Sage.

The paradigm is shifting.
Guba and Lincoln have pointed the way.
The following six one-pagers distil the essence.
 
Evaluation - the first three generations g960314c
Flaws in the first three generations of evaluation g960314d
Fourth Generation Evaluation - the process g960327a
Stakeholder claims, concerns and issues - why pay attention? g960326a
Some principles of fourth generation evaluation g960229e
New meanings in the constructivist methodology g960314b

 

g960314c

Evaluation - the First Three Generations

The First Generation - Measurement

The major purpose of the school was to teach the children what was known to be true; children demonstrated mastery of those "facts" by regurgitating them on what were essentially tests of memory.

The role of the evaluator was technical; he or she was expected to know about and be able to use the vast number of "psychometric measurement instruments" that were available by the mid-1940s.

The Second Generation - Description

The first generation targeted students as the objects of evaluation, but this gave no useful information about school curricula. Such information became required in America in the 1930s, and thus the second generation of evaluation was born.

Around this time Ralph Tyler was developing tests to measure whether or not students had learned what their teachers had intended them to learn. These desired learning outcomes were labelled as objectives.

He was given the task of modifying his procedures to refine the new curricula which were being designed at the time and to make sure that they were working. He measured the extent to which the new curricula were achieving their objectives and was thus able to note the strengths and weaknesses in the curriculum programme design.

The role of the evaluator was thus to describe patterns of strength and weakness with respect to stated objectives in the use of a given curriculum.

The Third Generation - Judgement

"Something not worth doing is not worth doing well". The key question in the late 1950s in America became not so much "are the objectives being achieved" but rather "are the objectives worthwhile".

This third generation thus called for judgements of "worth". Various models for achieving this were devised in the 1960s and 1970s; their essential feature was that the evaluator had, to some extent or other, to act as judge.


g960314d

Flaws in the First Three Generations of Evaluation

The first three generations of evaluation can be seen as flawed to the point of needing to be replaced. Three major categories of flaw can be identified:

A tendency towards managerialism

The manager who hires the evaluator tends to stand outside the evaluation - his or her managerial qualities are not called into question.

The manager/evaluator relationship is disempowering and unfair, since between them they decide which questions should be asked, how answers will be collected and interpreted, and who will see the results. Other stakeholders are not represented and may be disenfranchised.

A failure to accommodate value pluralism

When evaluation is seen to be about valuing, there is the question of whose values. The claim of value-freedom within the scientific mode of inquiry is not tenable and, that being the case, the value pluralism within societies and between cultures is a crucial matter to be attended to in an evaluation. None of the evaluation approaches of the first three generations accommodates value differences in the slightest.

An overcommitment to the scientific paradigm of inquiry

Virtually all of the first three generations' evaluation models use the scientific paradigm to guide their methodological work. But this extreme dependence on the methods of science has had unfortunate results:

- context stripping - assessing the evaluand as if it were not embedded in a highly specific context. This leads to generalisation in questions and results and fatally reduces the contextual relevance and usability of the findings.
- overdependence on quantitative measurement - leading to the modus operandi that what cannot be measured cannot be real.
- the "coerciveness of truth" - science claims to tell us about "the way things really are" and, given managerialism and commitment to the scientific paradigm, this locks thinking into the positivist mode and lends illegitimate support to the status quo.
- scientific truth is non-negotiable - if science discloses the truth about things, then any alternative explanation must be in error.
- the evaluator bears no moral responsibility for his or her conclusions if they are "scientific truth".


g960327a

Fourth Generation Evaluation - the Process

Fourth Generation evaluation is organised by the claims, concerns, and issues of stakeholding audiences, and it utilises the methodology of the constructivist paradigm. It includes the following processes, but not necessarily in the order indicated.

  1. Identify the full array of stakeholders who are at risk in the projected evaluation.
  2. Elicit from each stakeholder group their constructions about the issues at hand and the range of claims, concerns and issues they wish to raise in relation to them.
  3. Provide a context and a methodology through which different constructions, and different claims, concerns and issues, can be understood, critiqued, and taken into account:
    a. carry out this methodology within each stakeholder group, so that the group construction (or several, if there are within-group conflicts) can emerge and decisions can be reached about which claims, concerns, and issues should be pursued;
    b. cross-fertilise each group with the constructions, claims, concerns, and issues arising from other groups so that those items must be confronted and dealt with. This cross-fertilisation may also include constructions drawn from the literature, from other sites, or from the experience of the evaluator. Any construction, claim, concern, or issue may properly be introduced so long as it is open to critique and criticism.
  4. Generate consensus with respect to as many constructions, and their related claims, concerns and issues, as possible.
  5. Prepare an agenda for negotiation on items about which there is no consensus, or only incomplete consensus.
  6. Collect and provide the information called for in the agenda for negotiation.
  7. Establish and mediate a forum of stakeholder representatives in which negotiation can take place.
  8. Develop a report, probably several reports, that communicate to each stakeholder group any consensus on constructions and any resolutions regarding the claims, concerns, and issues that they have raised (as well as those raised by other groups that appear relevant to that group).
  9. Recycle the evaluation once again to take up still unresolved constructions and their attendant claims, concerns and issues.

Fourth Generation evaluations are never completed; they pause until a further need and opportunity arise.


g960326a

Stakeholder claims, concerns and issues - why pay attention?

Guba and Lincoln (1989) give five reasons for insisting upon the use of stakeholder claims, concerns and issues as the basis for deciding what information is needed in an evaluation. These are as follows:

- stakeholders are placed at risk by an evaluation; thus, in the interests of fairness, they deserve to have input into the process
- evaluation exposes stakeholders to exploitation, disempowerment and disenfranchisement; thus, in the interests of self-defence, they are entitled to some control over the process
- stakeholders represent a virtually untapped market for the use of evaluation findings that are responsive to self-defined needs and interests
- the inclusion of stakeholder inputs greatly broadens the scope and meaningfulness of an inquiry and contributes immeasurably to the dialectic so necessary if evaluation is to have a positive outcome
- all parties can be mutually educated to more informed and sophisticated personal constructions, as well as an enhanced appreciation of the constructions of others.

The authors also state that when these arguments are laid alongside other arguments which they have developed, i.e. the need to:

- escape from a managerial ideology
- take account of pluralistic values
- rethink the ontological bases of evaluative interpretations

they believe that an overwhelming case is formed that mandates serious consideration of fourth generation evaluation.


g960229e

Some Principles of Fourth Generation Evaluation

Evaluation is:

  1. a process whereby evaluators and stakeholders jointly and collaboratively create (or move towards) a consensual valuing construction of some evaluand. It does not necessarily yield irrefutable (i.e. empirically confirmable) information (although that may be a side product).
  2. a process that subsumes data collection and data valuing (interpretation) into one inseparable and simultaneous whole.
  3. a local process. Its outcomes depend on local contexts, local stakeholders, and local values and cannot be generalised to other settings.
  4. a sociopolitical process. Social, cultural and political aspects, far from being merely distracting or distorting nuisances, are integral to the process, at least as important as considerations of technical adequacy.
  5. a teaching/learning process. Evaluators, clients, sponsors, and all stakeholders both teach and learn from one another; indeed, such teaching/learning is an absolute prerequisite to the meaningful reconstruction of emic views.
  6. a continuous, recursive, and divergent process, because its "findings" are created social constructions that are subject to reconstruction. Evaluations must be continuously recycled and updated.
  7. an emergent process. It cannot be fully designed in advance, for its focus (or foci) depends on inputs from stakeholders and its activities are serially contingent.
  8. a process for sharing accountability rather than assigning it.
  9. a process that involves evaluators and stakeholders in a hermeneutic dialectic relationship.

Evaluators play many conventional (but reinterpreted) and unconventional roles in carrying out fourth generation evaluation. They must possess not only technical expertise but also relevant interpersonal qualities; perhaps chief among these are patience, humility, openness, adaptability, and a sense of humour.


g960314b

New meanings in the constructivist methodology

- Truth is a matter of consensus among informed and sophisticated constructors, not of correspondence with an objective reality.
- Facts have no meaning except within some value framework; hence there cannot be an objective assessment of any proposition.
- Causes and effects do not exist except by imputation; hence accountability is a relative matter and implicates all interacting parties (entities) equally.
- Phenomena can be understood only within the context in which they are studied; findings from one context cannot be generalised to another; neither problems nor their solutions can be generalised from one setting to another.
- Interventions are not stable; when they are introduced into a particular context they will be at least as much affected (changed) by that context as they are likely to affect the context.
- Change cannot be engineered; it is a non-linear process that involves the introduction of new information, and increased sophistication in its use, into the constructions of the involved humans.
- Evaluation produces data in which facts and values are inextricably linked. Valuing is an essential part of the evaluation process, providing the basis for an attributed meaning.
- Accountability is a characteristic of a conglomerate of mutual and simultaneous shapers, no one of which, nor one subset of which, can be uniquely singled out for praise or blame.

 

Evaluators are:

- subjective partners with stakeholders in the literal creation of data;
- orchestrators of a negotiation process that attempts to culminate in consensus on better informed and more sophisticated constructions.

 

Evaluation data derived from constructivist inquiry have neither special status nor legitimation; they simply represent another construction to be taken into account in the move towards consensus.
