How do we plan and improve museum programs?

My last post, on the role of education research in planning museum programs, sparked debate in person and on LinkedIn about whether museums emphasize content knowledge or an understanding of pedagogy when hiring, training, and supporting museum education staff. For better or worse, it’s essential that museum educators know both. They need to understand the content they are teaching, and they need to know how people engage and learn in a museum setting (and in any other setting where they offer programs). But there is another, vitally important element museum educators need when planning programs: they need strategies for program design.

I’m an advocate for a program design and management approach that (for now) I’m calling “reflective practice” (I would love to find another term, as the phrase “reflective practice” is now used in many different ways). Reflective Practice is a cycle that involves defining goals or outcomes, aligning the program with these goals, implementing and evaluating the program, and then pausing to reflect on and improve it.

This iterative cycle has a few advantages. First, it offers structure when approaching program design: it helps frame the many decisions program developers need to make (Where should we hold this? How many people can participate? Who should teach this? What should participants do first? How should the program end? What might we send people home with?). Second, it ensures that programs do what we say they will do, or at least get closer and closer to achieving their stated (and promoted, and hopefully funded) goals. And third, it creates a departmental culture that is collaborative, experimental, and reflective. By examining programs with the assumption that they can and should be improved, and reflecting on them as a team, museum education leaders create a space in which everyone can share ideas and engage in an experimental approach to program design.

Here is a little more about the different steps.

Identify goals

Why do you want people to attend this program? What need are you addressing? What will they experience? The easiest way to think about goals is to consider what people may know, understand, or be able to do as a result of attending the program. You can also look at Bloom’s taxonomy, which in its 2001 revision covers remembering, understanding, applying, analyzing, evaluating, and creating. Taxonomies exist for the psychomotor and affective domains as well.

Sometimes people are hesitant to set goals because they don’t want to shut the door on unintended impact. Museum educators know that one person might leave a program having learned to use a new tool; another might leave having made a new connection with a fellow participant; a third might have learned something new about themselves. When you clarify your goals, you are not limiting participants to those goals – you are not shutting down their freedom to feel and experience in their own ways. You are, however, ensuring that the program has some specified, intended impact. You are allowing yourself to craft a program that supports the type of learning your department or museum promotes. And, pragmatically, you are creating a framework that will help you share intended impact with funders and garner support for the program.

Align program with goals

You are not stating your goals for the sake of it; you are using them to guide decisions that will shape your program. If you want participants to know about an artist, you will need to include information about that artist, as well as some way for them to demonstrate that they are leaving with this knowledge. If you want participants to feel empathy for a group during a historical period, you probably want to activate their imaginations, while couching this carefully in an accurate understanding of history. And you will need to ask questions that invite them to share their feelings, so they can demonstrate the empathy you designed your program to activate. Any goal you craft for your program leads to structural and content choices in program design.

Implement and evaluate

These two processes happen simultaneously – you offer the program, and you collect data to let you know whether it was effective at achieving your goals. In order to do this, you need to create a program aligned with goals, and you need to create an evaluation strategy to measure your successes and shortcomings in the context of those goals. The strategy can be simple, and might include a few of the following:

  • Time built into the program for visitor dialogue and reflection, during which participants are asked questions that invite them to demonstrate their personal progress toward your goals
  • A one-page evaluation tool identifying what goal achievement looks like and sounds like. The program leader and a colleague or critical friend can use this tool to observe the program and take notes on what’s working and what can be improved
  • A short survey completed by all participants immediately after the program

You will also want to look at both program implementation – whether the instructor is taking the actions needed to achieve the goals – and outcomes – whether participants are actually experiencing those goals. Developing an evaluation tool is a very useful exercise for program designers, as it forces them to identify and articulate what effective program implementation looks like, and what goal achievement among participants looks like. Ideally, the program instructors are among those designing these tools as well.

It’s helpful to include Likert scales or some other way to quantify your data, which allows you to compare the relative ratings of program elements and see where you may have fallen short.
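As a rough illustration, here is a minimal sketch of that kind of comparison – the program elements and ratings below are hypothetical, not drawn from any real evaluation:

```python
from statistics import mean

# Hypothetical 7-point Likert ratings from a post-program participant survey.
# Each key is a program element; each list holds one rating per respondent.
ratings = {
    "Welcome and orientation": [6, 7, 6, 5, 7],
    "Gallery discussion": [5, 6, 4, 5, 5],
    "Hands-on activity": [7, 6, 7, 7, 6],
    "Closing reflection": [4, 3, 5, 4, 4],
}

# Average each element and sort lowest first, so the elements with the
# most room for improvement rise to the top of the list.
for avg, element in sorted((mean(r), e) for e, r in ratings.items()):
    print(f"{element}: {avg:.1f} / 7")
```

In this toy example, the closing reflection scores lowest, so that is where the redesign conversation would start – which connects directly to the reflection tips below.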

Here is an example adapted from an evaluation created for a school program at the Guggenheim Museum.

Reflect and Improve

This step is often skipped, which undermines the value of the entire process. Evaluation is a learning tool, so using the results is essential to the reflective process. Don’t stuff your evaluations in a drawer – use them. If they are not providing you with useful information, then you need to change them. And if they are telling you that you are doing everything perfectly, you may want to be a little more ambitious!

A few tips for reflection:

  • Reflect as a team. Include instructors, peers, supervisors, colleagues from other museums – whoever you need to start a good conversation and creative brainstorming.
  • If you are using an evaluation with a Likert scale (say, a 7-point scale), and all the outcomes are scoring 5 and higher, refrain from saying “this is perfect” – people tend to inflate their ratings (this is called courtesy bias). Instead, look at whatever is scoring lowest, because this is your space for creativity, experimentation, and improvement. How could the program be improved to come closer to fully achieving your goals?
  • Consider the goals as you review the data. Are they the right goals? Do they need to be changed? Added to? Made more nuanced? It’s ok to realize that some of your goals may not be the right ones for this program. What can and should this program be doing for participants?
  • Develop a list of program changes with concrete action steps, including deadlines and point people. And make sure there is someone to hold everyone accountable for making these steps happen (see the tracker sketch after this list).
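For that last tip, one lightweight way to keep accountability concrete is a simple tracker. Here is a minimal sketch – the changes, names, and deadlines are all hypothetical, and a shared spreadsheet would serve the same purpose:

```python
from datetime import date

# Hypothetical action steps from a reflection meeting: each pairs a
# concrete change with a point person and a deadline.
action_steps = [
    {"change": "Shorten the opening talk to ten minutes",
     "point_person": "Maria", "deadline": date(2026, 5, 1)},
    {"change": "Draft two new closing reflection questions",
     "point_person": "James", "deadline": date(2026, 6, 15)},
]

# Flag anything past its deadline so whoever holds the group accountable
# knows where to follow up.
today = date.today()
for step in action_steps:
    status = "OVERDUE" if step["deadline"] < today else "on track"
    print(f"{step['deadline']}  {step['point_person']:<8} {status}: {step['change']}")
```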

Repeat. Maybe not every time the program is offered, but every quarter or every year. Keep improving. Keep experimenting. Keep generating new and better ideas and programs. And keep learning.

If you have a different approach to program design, I’d love to hear from you! And if you want more information about Reflective Practice, or to work with me for team training or support in this process, please contact me at rebeccashulman[at]museumquestions[dot]org.


Viewing all articles
Browse latest Browse all 18

Trending Articles