By Heather King - April 2011
PAPER CITATION
Pekarik, A. J. (2010). From knowing to not knowing: Moving beyond “outcomes.” Curator: The Museum Journal, 53(1), 105–115.
In this paper, Pekarik challenges the conventional approaches that institutions use to monitor success. He argues that outcome-based evaluations simply record impact in a set of predetermined categories and do not document the many and varied effects that participants may experience.
Pekarik grounds his contention in a discussion of museums and the evaluation of exhibitions, although his argument applies more broadly to endeavours across the whole educational field. He accepts that schemes measuring the extent to which specific outcomes are met are relatively straightforward to implement and that their findings are easy to understand. However, he asserts that outcome-based evaluations neglect the significance of unintended outcomes and, by prompting changes in response to feedback on a limited set of objectives, further narrow what the institution or programme offers. He also notes that any given experience is likely to be one of many interrelated activities in which an individual participates; as a result, isolating the impact of a single experience is extremely difficult.
Instead, Pekarik argues for participant-based evaluation. By focusing on the participant’s experience and asking about processes, settings, needs, values, barriers, and so on, such an approach aims to discover new dimensions that had not previously been considered. Pekarik describes participant-based evaluation as “an open-ended inquiry into meaning making that aims to make understanding more complex—rather than to simplify it” (p. 111). Moreover, it provides a way of finding out not simply what happened but why it happened. Pekarik also calls for greater “design experimentation” in which programmes (or, in this case, exhibitions) are studied and modified to determine how changes affect participant responses, so that improved versions result.
Pekarik’s thesis is controversial and indeed goes against the requirements of many funding bodies that demand measurable demonstrations of impact. Its importance, however, lies in reminding us to reflect on our purpose in developing programmes: Are we really trying to understand and respond to the learning needs of individuals, or are we merely measuring learning within a narrow set of predetermined parameters?