Building Institutional Research Capacity in our Naturopathic Schools and Programs, Part 2 of 3


David Schleich, PhD

Last month we discussed the value to our schools of having an institutional research (IR) capacity. Such a strong IR resource, built on increasingly sophisticated organizational intelligence, can help us build the profession from the schools outward into the larger terrain of professional formation. Terenzini (1993), a scholar who advocates for IR in higher education, zeroes in on categories of such intelligence, but it is to open systems theory (Katz and Kahn, 1978) that we can initially turn to get a handle on how IR can help institutions survive and thrive over time.

Schmidtlein (1977) outlines how theoretical assumptions like open systems theory affect organizational behavior, leading routinely to such processes as program planning and budgeting (PPB), program evaluation and review technique (PERT), quality circles, strategic planning and total quality management (TQM). Such processes can be immeasurably helpful to rapidly growing institutions such as our colleges, where bold and rapid decisions need to be taken frequently, provided those processes in turn reflect an institutional sensitivity to “various types and sources of information” (Schmidtlein, 1977, p. 65). In such a world of quickly emerging data and techniques, an incrementalist strategy – one that does not make use of aggressively and consistently gathered data about the institution – would not serve a naturopathic college’s IR model very well (Lindblom, 1959; Wildavsky, 1964). Some institutions and organizations call for agility and speed in decision-making, and incremental change does not cut it in the health education sector these days. Institutional pushback to such change, though, can be countered with hard facts.

Such agility has allowed our colleges, for the most part housed in nonprofit, private college contexts and reliant totally on their own resources, to maximize the expenditures on and effectiveness of classroom and clinical education. These operations hugely affect their main customers, the ND students and their patients. As well, longstanding approaches to delivering this educational service are under review, not only in terms of how rapidly increasing demand can be met successfully, but also in terms of how our colleges can produce graduates with highly reliable clinical skills and knowledge. Add into this mix an increasing confusion about to whom our colleges are accountable for their operations, and the need for an IR capacity has never been greater.

It is not sufficient for the usual highly visible measures (new student applications, graduation rates, loan default rates, student flow modeling, workload analysis, resource allocation, faculty evaluation, program evaluation, assessment, institutional self-study, budget development and analysis, academic program planning and institutional strategic planning) to define, separately or collectively, institutional viability and performance. Rather, as Volkwein explains, “there is now a renewed interest in process measures, rooted in the theory that good outcomes will not result from flawed educational processes” (1999, p. 14). For all of these reasons, then, the time has come to build an effective IR capacity at our schools.

Key Elements for Design of IR

Volkwein (1999) outlines the main policy concerns that drive IR in higher education: tuition and related costs; universal need for management efficiency and increased productivity; effectiveness and quality; access; and accountability. His main point is that these concerns often “collide” with strategic planning, operational efficiency and customer service. An effective IR framework can significantly assist in controlling the most important intersections. IR will tend to “measure everything that moves,” but more particularly, it will want to measure and improve inputs, critical processes, outputs and outcomes (Volkwein, 1999, p. 14).
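To make this concrete, here is a minimal sketch (in Python) of how an IR office might group its core measures along those four dimensions; the indicator names and figures are invented purely for illustration and are not drawn from Volkwein.

```python
# Hypothetical illustration: grouping IR indicators by Volkwein's four
# measurement targets (inputs, critical processes, outputs, outcomes).
# Indicator names and values are invented for this sketch.
ir_indicators = {
    "inputs": {
        "new_student_applications": 412,
        "entering_gpa_mean": 3.4,
    },
    "critical_processes": {
        "clinic_shifts_per_student": 36,
        "course_evaluation_response_rate": 0.71,
    },
    "outputs": {
        "four_year_graduation_rate": 0.82,
        "loan_default_rate": 0.03,
    },
    "outcomes": {
        "licensing_exam_pass_rate": 0.88,
        "graduates_in_practice_after_two_years": 0.76,
    },
}

def summarize(indicators):
    """Print each measurement category and the indicators filed under it."""
    for category, measures in indicators.items():
        print(category)
        for name, value in measures.items():
            print(f"  {name}: {value}")

summarize(ir_indicators)
```

Even a simple grouping of this kind lets an IR report say which part of the educational process a given number actually describes.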

Peters warns us, though, that such a program and spirit of assessment “requires a diligent search for bad news, but accountability encourages the opposite” (1994, p. 17). The framework and implementation strategy recommended here are grounded in the conviction that IR at our schools can move as needed along Volkwein’s continuum – that is, from formative and internal for improvement purposes and audiences, to summative and external for accountability purposes and audiences (Volkwein, 1999, p. 17). IR may well emerge exhibiting different purposes and roles in the earliest stages of implementation. For example, the IR department could move along a continuum ranging from “information authority” to “scholar and researcher.”

In the role of information authority, our naturopathic colleges could create IR departments whose mandate would be to educate “the campus community about itself” (Volkwein, 1999, p. 17). Terenzini would call this “technical/analytical intelligence” (1993). As needed, though, the IR team might also act as a policy analyst group, “providing support for planning and budget allocation decisions, policy revision, administrative restructuring or other needed change” (Volkwein, 1999, p. 18). Terenzini would call this “issues intelligence.” At times the IR group might have to act as a “spin doctor,” assembling data and descriptive statistics that “reflect favorably upon the institution” (p. 18).

It is, though, when the proposed IR department operates as scholar and researcher that the most sophisticated role and purpose can be served. Terenzini’s characterization of this role as “contextual intelligence” (1993, p. 25) may prove to be the new IR department’s most enduring contribution. This “intelligence” includes “an understanding of the institution’s historical and philosophical evolution, faculty and organizational cultures, informal as well as formal campus political structures and codes, governance, decision-making processes and customs” (Terenzini, 1993, p. 25). It is the category that, in Terenzini’s words, “reflects organizational savvy and wisdom.”

IR Implementation and Evaluation

Kirkpatrick (1994) provides us with key ingredients to consider in the design of a model helpful to the implementation of an IR operation. Specifically, he outlines the uses to which data collected by an energized IR department can be put. His four levels of evaluation are participant impression, program effectiveness, impact on participants and return on investment for the organization, and he suggests that IR designers keep these levels in mind when shaping the unit’s operational structure and purposes. Although Kirkpatrick’s model is specifically applied to programs, its hierarchy can be very useful to us in formulating our overall IR model, which will examine many core indicators of institutional effectiveness across all key divisions, departments and functions.

His first level focuses on the clients of the program, identifying benefits and other outcomes and providing the “deliverers” of the service (in this case, classroom and clinic teaching) with information about “how their efforts are being perceived and used” (Boulmetis & Dutwin, 2000, p. 9). Level two focuses on actual outcomes based on “some comparison between a set of standards and what actually resulted” (p. 10). Level three undertakes to look at the overall and long-term impact the program has had on its clients; in particular, it would review how sustainable the outcomes of level two were over time. Finally, level four of Kirkpatrick’s model would ensure that IR examines “the extent to which the parent institution [sponsor] benefited from the program” (p. 10).
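A minimal sketch of how an IR office might tag the evidence it gathers by Kirkpatrick level follows; the record fields, sources and findings are hypothetical illustrations, not examples taken from Kirkpatrick or from Boulmetis and Dutwin.

```python
from dataclasses import dataclass

# Hypothetical sketch: tagging evaluation evidence with the Kirkpatrick
# level it speaks to, so an IR office can see at a glance which levels
# its data actually cover. Sources and findings are invented.
KIRKPATRICK_LEVELS = {
    1: "participant impression (how the service is perceived)",
    2: "program effectiveness (results compared against standards)",
    3: "impact on participants (sustainability of outcomes over time)",
    4: "return on investment (benefit to the parent institution)",
}

@dataclass
class EvaluationRecord:
    source: str    # e.g., a survey, exam results, an alumni follow-up
    level: int     # 1-4, per Kirkpatrick
    finding: str

records = [
    EvaluationRecord("end-of-term clinic shift survey", 1,
                     "students rate supervision clarity 4.2/5"),
    EvaluationRecord("board exam results vs. target pass rate", 2,
                     "pass rate 88% against a 90% standard"),
    EvaluationRecord("two-year alumni follow-up", 3,
                     "76% report using clinic protocols in practice"),
    EvaluationRecord("cost-per-graduate trend", 4,
                     "cost per graduate down 4% year over year"),
]

for r in records:
    print(f"Level {r.level} - {KIRKPATRICK_LEVELS[r.level]}: {r.finding}")
```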

Other IR experts have made available in the literature a variety of evaluation methods that can also help formulate a model for our colleges’ potential IR departments. Provus (1971), for example, in outlining his discrepancy evaluation model, provides a framework for reviewing a program (or service) through its developmental stages. He explains that each stage must have a set of standards of performance. The stages are defined as design, installation, process, product and cost-benefit analysis. His model provides intelligence that helps managers “make decisions based on the difference between preset standards and what actually exists” (Provus, 1971, pp. 23-36).
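The discrepancy logic itself is simple enough to sketch. In the hypothetical example below, four of Provus’s stage names are borrowed, but the standards and observed values are invented for illustration only.

```python
# Hypothetical sketch of Provus-style discrepancy evaluation: for each
# developmental stage, compare a preset standard with what actually
# exists and report the gap. Standards and observations are invented.
standards = {
    "design": {"clinic_hours_planned": 850},
    "installation": {"faculty_hired": 12},
    "process": {"patient_visits_per_student": 350},
    "product": {"graduation_rate": 0.85},
}

observed = {
    "design": {"clinic_hours_planned": 850},
    "installation": {"faculty_hired": 10},
    "process": {"patient_visits_per_student": 320},
    "product": {"graduation_rate": 0.82},
}

def discrepancies(standards, observed):
    """Yield (stage, measure, standard, actual) wherever actual falls short."""
    for stage, measures in standards.items():
        for measure, standard in measures.items():
            actual = observed.get(stage, {}).get(measure)
            if actual is not None and actual < standard:
                yield stage, measure, standard, actual

for stage, measure, standard, actual in discrepancies(standards, observed):
    print(f"{stage}: {measure} is {actual}, standard is {standard}")
```

The point of the model is precisely this report of gaps: managers act on the difference between what was set as the standard and what was found.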

Michael Scriven proposes a “goal-free evaluation model” (Popham, 1974) which, instead of basing an evaluation on a program’s or service’s goals, “examines how and what the program is doing to address needs in the client population” (p. 73). In such a model, the evaluator has “no preconceived notions regarding the outcome of the program (that is, goals)” (Popham, 1974, p. 73). Although difficult to utilize, it is a popular approach, especially where “a program (or service) has many different projects occurring simultaneously.”

R.E. Stake developed yet another approach, called the transaction model (Madaus et al., 1983). Because this approach combines monitoring with process evaluation, it lends itself well to the “constant back and forth” between classroom and clinical education in a setting such as our schools. Not only are naturopathic medical education staff constantly on the move between the two areas of the curriculum, but the evaluators in such a model would also be active participants, “giving constant feedback” (Boulmetis & Dutwin, 2000, p. 75). This is a highly subjective model, and its main strength is that it directly benefits the clients and practitioners with “real-time data and feedback.”

Daniel Stufflebeam (Madaus et al., 1983) designed a “decision-making model” grounded in a focus on “decisions that need to be made in the future” (Boulmetis & Dutwin, 2000, p. 76). This model is concerned with the long-range effects of a program or service, and is less interested in immediate processes or outcomes. It invites both qualitative and quantitative research methodologies, but its greatest value to our schools may lie in its entirely “summative” orientation, particularly important in a healthcare landscape replete with evidence-based medicine, where outcomes are not always consistent even though public image and reputation suggest otherwise.

A goal-based model (also referred to in the literature as the objective attainment model) is principally concerned with “stated objectives.” While it appears to be the most research-like, it is not as flexible as our colleges’ circumstances might require. The quickly transforming landscape of primary healthcare regulation, coupled with the rapid proliferation of treatment modalities across traditional and non-traditional practitioners, makes the delivery of curriculum and the provision of services to patients and students hard to quantify, because that curriculum and those services are quickly moving targets.

Rivlin’s systems analysis model, though, may well be the most appropriate (and acceptable) approach for our colleges’ IR audiences (their boards; the regulatory and accrediting agencies of the profession; and our colleges’ students, patients and staff). In this approach, the evaluator “looks at the program (or service) in a systematic manner, studying the input, throughput and output” (Boulmetis & Dutwin, 2000, p. 77). External audiences for IR reports would also like this approach because it not only determines whether or not the ND program moves students (and their patients) through their four-year program efficiently, but also assesses whether or not the goals of the program were achieved.
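A minimal sketch of what such an input-throughput-output summary might look like for a single entering cohort follows; the cohort figures and goal thresholds are invented for illustration and are not drawn from any of our colleges.

```python
# Hypothetical sketch of a systems-analysis view of one entering cohort:
# input (who came in), throughput (how they moved through the program),
# output (what came out), plus a simple check against stated program goals.
# All numbers are invented for illustration.
cohort = {
    "input":      {"entering_students": 120},
    "throughput": {"completed_year_2_on_time": 108,
                   "completed_clinic_requirements_on_time": 101},
    "output":     {"graduated_in_four_years": 98,
                   "passed_licensing_exam": 90},
}

program_goals = {
    "four_year_completion_rate": 0.80,
    "licensing_pass_rate_of_graduates": 0.90,
}

entering = cohort["input"]["entering_students"]
completion_rate = cohort["output"]["graduated_in_four_years"] / entering
pass_rate = (cohort["output"]["passed_licensing_exam"]
             / cohort["output"]["graduated_in_four_years"])

print(f"Four-year completion: {completion_rate:.0%} "
      f"(goal {program_goals['four_year_completion_rate']:.0%})")
print(f"Licensing pass rate among graduates: {pass_rate:.0%} "
      f"(goal {program_goals['licensing_pass_rate_of_graduates']:.0%})")
```

A summary of this kind answers both of the questions external audiences ask: did students move through the program efficiently, and were the program’s stated goals achieved?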

Next month we will have a look at the role of data in the design of an effective IR function in our colleges.


David Schleich, PhD, is president and CEO of NCNM, former president of Truestar Health, and former CEO and president of CCNM, where he served from 1996 to 2003. Other previous posts have included appointments as vice president academic of Niagara College, and administrative and teaching positions at St. Lawrence College, Swinburne University (Australia) and the University of Alberta. His academic credentials have been earned from the University of Western Ontario (BA), the University of Alberta (MA), Queen’s University (BEd) and the University of Toronto (PhD).

References

Terenzini PT: On the nature of institutional research and the knowledge and skills it requires. In JF Volkwein (ed), What Is Institutional Research All About? A Critical and Comprehensive Assessment of the Profession. New Directions for Institutional Research, No. 104, San Francisco, 1993, Jossey-Bass.

Katz D and Kahn RL: The Social Psychology of Organizations. New York, 1978, Wiley.

Schmidtlein FA: Information systems and concepts of higher education governance. In CR Adams (ed), Appraising Information Needs of Decision Makers. New Directions for Institutional Research, No. 15, San Francisco, 1977, Jossey-Bass.

Lindblom CE: The science of muddling through, Public Administration Review 19(2):79-88, 1959.

Wildavsky A: The Politics of the Budgetary Process. Boston, 1964, Little, Brown.

Volkwein JF: What Is Institutional Research All About? A Critical and Comprehensive Assessment of the Profession. New Directions for Institutional Research, No. 104, San Francisco, Winter 1999, Jossey-Bass.

Boulmetis J and Dutwin P: The ABCs of Evaluation: Timeless Techniques for Program and Project Managers. San Francisco, 2000, Jossey-Bass.

Provus M: Discrepancy Evaluation. Berkeley, 1971, McCutchan.

Popham WJ (ed): Evaluation in Education: Current Applications. Berkeley, 1974, McCutchan.

Madaus GF et al (eds): Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston, 1983, Kluwer-Nijhoff.

