Program Planning & Evaluation
As I began to round out my degree program, the choices became more precious, as I had only three courses left. I chose EAD877 - Program Planning and Evaluation in Post-secondary Education because I am frequently exposed to the creation of new degree and certificate programs. I'm often asked to advise administrators and core faculty driving the creation of new programs and the design of a curriculum and course sequence. We try to encourage 'best practices', but it is sometimes unclear which interventions have a reliable, or even positive, effect.
Something that became apparent to me through our study and discussion of this topic in class was how little assessment is commonly done on campus. During the course my interest in assessment and accreditation grew, and I left with a strong desire to continue learning about this field and to develop my evaluation skills. I found this practical and timely form of inquiry made effective use of my basic research skills. Moreover, because evaluation is concerned with the merit and worth of programs, it is especially relevant to an active practitioner like me who must choose interventions that are well-directed and effective.
In this course I formed a team with Melissa Buffenn. We wrote a fictional program plan based on the scenario of Antioch College's struggle to keep its main campus open. Our peers chose not to approve our project plan; they judged it too ambitious and wanted more evidence that it would succeed. However, given our limited access to Antioch materials and knowledge of the College, I think we made a strong attempt at what is a very difficult situation for that school. I've attached the program plan to the bottom of this page.
We also worked together to conduct a formative evaluation of University of Michigan-Dearborn's (UMD) online bachelor's program. We initially cast our scope too wide and, with encouragement from Dr. Minor, narrowed it to focus specifically on faculty satisfaction with support resources. We were invited by the Associate Dean to interview faculty, staff, and administrators so that we could understand multiple viewpoints and concerns before writing our report. We conducted multiple site visits and worked to gauge faculty satisfaction with the various supporting technologies, services, and staff made available to those teaching online.
Most of the evaluation was conducted through personal interviews, which we weighed against comparable supports at other organizations and against my own expertise as an experienced instructional technology support person and distance education administrator. We delivered the report to UMD early the next term, and Melissa answered questions for a committee. Our goal was to give them an independent assessment of their faculty support structure and an indication of satisfaction levels, to inform decisions and enable them to take steps to improve their system.
My greatest takeaways from the experience were:
- Access is critical; without administrative support and consent, evaluation is impractical.
- Choosing the right evaluation methods and scoping the evaluation appropriately is critical, and must reflect what is being assessed, the resources available, and how quickly findings are needed.
- The evaluation must hold value for the stakeholders and sponsors, but the evaluator must retain enough autonomy to design a valid assessment.
One thing I really appreciated about Dr. Minor's style was his supportive nature and willingness to help us succeed with our plans while still leaving us plenty of room to make mistakes and learn from them. This made many of the course's lessons more powerful, because we came to see the inherent value of certain approaches only after they resolved uncertainties or aspects of evaluation we had struggled with.