Institutional Research in our Colleges: The Role of Data in the Design of an Effective IR Department, Part 3

David Schleich, PhD

In the past two columns we have looked at Institutional Research (IR) models that presume the use of credible data, which, in turn, is invaluable in strategic planning, both short and long term. Our understanding of data types is key to the design and deployment of a reliable IR model for our schools. The historic impulse for designers of intermittent assessment and evaluation instruments at our colleges has been to rely on anecdotal and qualitative data more often than on quantitative input. Nevertheless, there is a role for both qualitative and quantitative data in any IR unit. For example, it is common practice in many programs to document the anecdotal comments in fourth-year student exit surveys to get some understanding of program effectiveness and student satisfaction. Some colleges routinely include interval and ratio data, which can change the “impression” of the research outcomes for a particular graduating class. The organization and nature of the data strongly influence its value at the reporting stage.

When the approach being used by an IR team examining a program or service is qualitative, the data gathered is nominal and ordinal. Quantitative approaches look at interval and ratio data, and accreditors have often recommended greater use of these to compensate for a long history of reliance on qualitative feedback. It has not been the case in our colleges that a model could quickly or easily include all four types; however, rolling them out at different points along an IR implementation strategy is quite doable.

Nominal data (or “naming” information) is principally “categorical” (e.g., gender, ethnicity, religion, marital status, presenting condition, nationality). Although no order or value is assigned to the categories of such data, it is very valuable in assembling a picture of larger patterns, such as the numerical superiority of females as students and patients at a particular school, and the long-term implications of this gender imbalance.

Ordinal data, on the other hand, does convey a rank order. We can envision many instances where “scales of agreement,” for example (e.g., “from strongly agree to strongly disagree”), are among the instruments used by IR practitioners in their intelligence gathering for classroom and clinical education, and for patient care outcomes. It will be important for a model, though, to note that while ordinal data does convey order, it “only measures the order, not the degree of separation” among items. Interval data, by contrast, does involve equal intervals between points on a scale; patient and student satisfaction surveys, for example, frequently use interval data. Ratio data simply adds the dimension of an absolute zero point to an interval scale.
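For colleges experimenting with even modest in-house analysis, one practical way to keep these distinctions straight is to tag each survey field with its measurement scale and let that tag determine which statistics get reported. The sketch below is a minimal, hypothetical illustration in Python; the field names and figures are invented for the example and are not drawn from any actual instrument.

```python
# Minimal sketch: tagging hypothetical exit-survey fields by measurement scale
# so that only statistics appropriate to each scale are reported.
from statistics import mean, median
from collections import Counter

# Scale assigned to each (hypothetical) survey field
FIELD_SCALES = {
    "marital_status": "nominal",   # categories only; no order
    "satisfaction": "ordinal",     # 1 = strongly disagree ... 5 = strongly agree
    "exam_score": "interval",      # equal intervals, no true zero
    "clinic_visits": "ratio",      # equal intervals plus an absolute zero
}

def summarize(field, values):
    """Report only the statistics that the field's scale supports."""
    scale = FIELD_SCALES[field]
    if scale == "nominal":
        return {"mode": Counter(values).most_common(1)[0][0]}
    if scale == "ordinal":
        # Order is meaningful, but distances between ranks are not.
        return {"median": median(values)}
    # Interval and ratio data support means; ratio data also supports ratios.
    summary = {"mean": mean(values)}
    if scale == "ratio":
        summary["max_to_min_ratio"] = max(values) / min(values)
    return summary

print(summarize("satisfaction", [4, 5, 3, 4, 2]))  # {'median': 4}
print(summarize("clinic_visits", [2, 8, 4, 6]))    # mean plus max/min ratio
```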

Designing Impact Potential Into an IR Unit for Strategic Planning Capabilities

The literature provides us with yet another important characteristic to build into an IR function and capacity in naturopathic medical education. Boulmetis and Dutwin advise designers to integrate into any proposed new department the “habit of working first to understand the purposes (expected outcomes) of a program (or service), and then to understand to what extent those purposes have been accomplished” (p. 127). The approach, in any case, will want to strike a balance, as it were, along a research-evaluation continuum, with a clear understanding of its intended purposes and audience. As stated earlier, historically our colleges were inclined to cater to external audiences, notwithstanding the frequent preference of our staff to prioritize internal enhancement as a goal for IR. Gradually, the IR department will no doubt accumulate experience, data and documentation, which incrementally and cumulatively will expand knowledge about our colleges for groups well beyond the immediate and historical stakeholders.

Sonnichsen’s work (1994) can also help us to build success into any proposed IR model. He emphasizes the importance of designing an IR unit so that its output will consistently have an impact on the organization. His recommendation is that developers of an IR operation develop and structure it “to complement a change-oriented approach” (Wholey et al., 1994, p. 534). He emphasizes how important it is that an IR unit view itself as a “change agent” and goes on to explain how “to convert evaluation findings to organization actions through effective development, framing, writing and placement of recommendations” (p. 535).

Significantly, he recommends that an IR operation focus initially on the decision-making process of the institution. Sonnichsen is mainly cautioning here about what Weiss (1988) has described as “the ubiquitous lament in the evaluation literature over the lack of use of evaluation reports” (p. 535). As well, Sonnichsen suggests strategies for “converting evaluation and assessment findings to organizational action.” To be sure, we would want such skills to be part of an IR capacity at our schools. Particularly in America, accreditors relish abundant data of this kind. Reginald Carter (1994) also emphasizes how important it is that any IR function be complemented by a capacity to “increase the likelihood that the results will ultimately be used” (p. 576).

Finally, and particularly relevant to our schools, given their non-profit status, Sandra Trice Gray (1998) introduces into the literature the concept of “co-evaluation” as a means of organizational learning and a way “for the organization to assess its progress and change in ways that lead to greater achievement of its mission in the context of its vision” (p. 4). Seeing traditional IR and evaluation as a “report card process, an after-the-fact rating that came too late to permit any improvement” (p. 4), Gray proposes an approach that can overcome this potential negativity and ineffectiveness.

She sees co-evaluation as the responsibility of everyone in the organization and as a process that “addresses the total system of the organization, its internal effectiveness and external results.” Aiming at improvement rather than judgment, it is a proposed “model that invites collaborative relationships within the organization, and with external parties such as clients, community members, businesses, government, donors, funders and other nonprofit associations” (p. 5). Essentially, Gray’s system of co-evaluation constitutes an “umbrella for all other forms of institutional research.” Gray’s work includes practical suggestions for implementing a co-evaluation agenda from the board right on through the organization to its most basic client and customer. She outlines a variety of “learning moments” for organizations keen to collect and share “the right information.” This more holistic approach to creating an IR capacity may well be suited to our colleges’ short- and long-term agenda.

All of these models and approaches, in any case, can make use of existing data at the outset, data that can be assembled during routine, annual audits and as part of normal, five-year accreditation processes that require assessment of many areas of the college’s mission, operations and resources.

Accreditation, Annual Financial Audits and Government Audits

Our colleges routinely prepare reports for purposes of accreditation with the Council on Naturopathic Medical Education (CNME) and various regional accreditors, all of which are, in turn, affiliated with the U.S. Department of Education in Washington, D.C. There is no equivalent Canadian accrediting body. However, the Ministry of Training, Colleges and Universities in Ontario and its counterparts in other Canadian provinces, through their respective student assistance departments (e.g., the Ontario Student Assistance Plan/OSAP), do require an annual audit related to student loans, attrition levels, employment success and student profile. Further, annual financial audits conducted by external accounting firms routinely examine our colleges’ finance function. These assessments generate data that is very helpful to an IR department’s work, and the capacity already in place to produce them would be very useful in the early work of the unit.

Institutional and programmatic accreditation are largely formulaic processes, criticized often for being “too broad in scope to delve deeply into real deficiencies” (Ewell, 1999). Wergin’s (1998) study of program review concluded that “such exercises are ‘one-shot’ affairs, poorly integrated into the life of the institution or with the complementary processes of assessment … and rarely seem to stimulate faculty to act to improve teaching and learning.” Meanwhile, David Dill (2000) reports that there is emerging evidence from other countries that assessment of programs and the systems supporting them “offer a number of possible improvements over our existing processes of accreditation and program review” (p. 36). Dill refers specifically to “the first full cycle of academic audits in the United Kingdom, New Zealand, Sweden and Hong Kong,” which have been independently evaluated in each country by bona fide organizations such as Coopers & Lybrand, Meade and Woodhouse, and Nilsson and Wahlen. These audits, apparently, have demonstrable success in such dimensions of higher education policy and practice as:

  • Making improving teaching and student learning an institutional priority
  • Clarifying responsibility for improving teaching and learning at the academic unit, faculty, and institutional level
  • Providing information on best practices within and across institutions (Dill, p. 36).

Dill goes on to explain that “academic audits” (strongly related in design and intention to key work of an IR unit in a college) “have predictable characteristics that differentiate them from existing quality-assurance mechanisms” (Dill, p. 38). Such institutional research has a “sharp focus on quality-assurance processes,” but apparently less so on resources or outcomes (Dill, p. 36). As well, there is attention paid to “auditor selection and training,” often coordinated and “screened” by personnel in effective IR units.

One important proviso from scholars and practitioners of IR and program auditing that Dill reports is that the traditional “self-study” within an accreditation process may be “a misdirected exercise, addressing issues and generating documents that may be of limited value … and may lead to frustration rather than insight” (Dill, p. 39). Thus, in the design of an IR department for our colleges, the pitfalls of a formulaic accreditation “model” should be avoided, but the rigor and value of a well-designed assessment encouraged.

Let us now turn to a modest proposal for the design and implementation of an IR operation in our colleges.

Elements of an Ideal Design

The establishment of a well-designed IR department can effectively assist in answering the key questions posed by Middaugh and his colleagues (1994):

  • Where is our college (or program) at this moment?
  • Where are we going?
  • How can we best arrive at our desired end?

As Seybert pointed out, the “environment for IR continues to emphasize efficiency and constricting resources along with the added pressures of external mandates for assessment of institutional effectiveness, independent measurement of student learning and outcomes, benchmarking and general accountability” (Seybert, 2000). IR is institution- and system-specific, Seybert reported. Its primary emphasis is on data gathering and reporting activities “designed to support various management functions of the institution” (2000).

Thus, an IR department or unit must play a focal role in addressing the questions outlined above. It must consistently describe the fit among the college’s institutional mission, the programs and services it currently has in place, and the institution’s position within the educational marketplace. As well, IR can contribute to understanding and identifying the changes needed in any one college’s programs and services, all the while making sure that such changes are “consistent with the institutional mission” and “reflective of changing environmental conditions” (Seybert, 2000). A naturopathic college IR department should provide data and information to help inform decisions about alternative courses of action and their associated costs. Finally, that same IR department will provide, wherever needed, data and information that “acknowledges that the external organizational environment also has a very profound impact upon the institution’s ability to perform its functions” (Middaugh, 1994, p. 6).

All of these contributions of an IR department are very valuable to the institution. Middaugh et al.’s (1994) Conceptual Framework for Analysis of University Functions is an ideal foundation for a naturopathic college IR model. Called the “IPO” model, it includes the following components:

  • Inputs: students, faculty, staff, facilities, indicators of quality, financial resources
  • Process: institutional mission, academic programs and services, support programs, teaching and research, completion/attrition, indicators of quality, measures of productivity and general planning analyses
  • Outputs: graduates, value-added outcomes, cognitive outcomes, indicators of quality, advancement of knowledge

Further, this framework must include methodologies and instruments to gather information and data about the external environment affecting the college, such as fiscal/economic considerations, marketplace considerations, and government/regulatory concerns.
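As a thought experiment, a small IR unit might render the IPO framework as a simple data dictionary, so that every indicator it tracks is explicitly assigned to an input, process, output or environmental category. The sketch below is one hypothetical way to organize that in Python; the categories follow Middaugh et al., but the specific indicators and figures are illustrative assumptions only.

```python
# Hypothetical sketch of the IPO framework as a simple data dictionary.
# Categories follow Middaugh et al. (1994); indicator names are illustrative.
from dataclasses import dataclass, field

@dataclass
class IPOFramework:
    inputs: dict = field(default_factory=dict)       # students, faculty, finances ...
    process: dict = field(default_factory=dict)      # mission, programs, attrition ...
    outputs: dict = field(default_factory=dict)      # graduates, outcomes ...
    environment: dict = field(default_factory=dict)  # fiscal, market, regulatory

    def report(self):
        """Flatten the framework into (category, indicator, value) rows."""
        rows = []
        for category in ("inputs", "process", "outputs", "environment"):
            for indicator, value in getattr(self, category).items():
                rows.append((category, indicator, value))
        return rows

# Illustrative use with made-up figures
framework = IPOFramework(
    inputs={"entering_students": 96, "core_faculty_fte": 24.5},
    process={"first_year_attrition_rate": 0.08},
    outputs={"graduates": 78, "licensing_exam_pass_rate": 0.91},
    environment={"regulated_provinces": 5},
)
for row in framework.report():
    print(row)
```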

Another aspect of a proposed framework is the adoption of guidelines to inform the initiation of any IR project. Middaugh et al. provide a set of questions that should accompany any well-choreographed IR plan or strategy; a simple sketch of how these questions might be captured as a working checklist follows the list.

  • What are the purposes of any assessment?
  • How does a particular project fit with other assessment projects that have been completed recently or are contemplated (that is, is there an overall plan for assessing students, staff and programs)?
  • Should qualitative or quantitative measures (or both) be incorporated?
  • To whom will the results be reported?
  • How will the results be used?
  • What will happen if the results are bad news for the institution? (Middaugh et al., 1994, p. 45)
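One hypothetical way to put these questions to work is as a standard intake checklist that must be completed before any assessment project is launched, with unanswered items flagging a project that is not yet ready. The Python sketch below illustrates the idea; the question labels and the sample proposal are invented for the example and are not prescribed by Middaugh et al.

```python
# Hypothetical intake checklist modeled on Middaugh et al.'s questions.
# A project is "ready" only when every question has a documented answer.
INTAKE_QUESTIONS = [
    "purpose",         # What are the purposes of the assessment?
    "fit_with_plan",   # How does it fit the overall assessment plan?
    "measures",        # Qualitative, quantitative, or both?
    "audience",        # To whom will the results be reported?
    "intended_use",    # How will the results be used?
    "bad_news_plan",   # What happens if the results are bad news?
]

def project_ready(answers: dict) -> list:
    """Return the questions still unanswered for a proposed IR project."""
    return [q for q in INTAKE_QUESTIONS if not answers.get(q)]

# Illustrative proposal with two of the six questions answered
proposal = {
    "purpose": "Assess fourth-year exit-survey satisfaction trends",
    "audience": "Academic dean and program chairs",
}
print("Missing before launch:", project_ready(proposal))
```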

Next month we shall have a close look at the nuts and bolts of an ideal IR model for our colleges.

David Schleich, PhD is president and CEO of NCNM, former president of Truestar Health, and former CEO and president of CCNM, where he served from 1996 to 2003. Other previous posts have included appointments as vice president academic of Niagara College, and administrative and teaching positions at St. Lawrence College, Swinburne University (Australia) and the University of Alberta. His academic credentials have been earned from the University of Western Ontario (BA), the University of Alberta (MA), Queen’s University (BEd) and the University of Toronto (PhD).

References

Boulmetis J and Dutwin P: The ABCs of Evaluation: Timeless Techniques for Program and Project Managers. San Francisco, 2000, Jossey-Bass.

Sonnichsen RC: Evaluators as change agents. In Wholey JS et al. (eds), Handbook of Practical Program Evaluation. San Francisco, 1994, Jossey-Bass.

Wholey JS et al. (eds): Handbook of Practical Program Evaluation. San Francisco, 1994, Jossey-Bass.

Weiss CH: If program decisions hinged only on information: a response to Patton, Evaluation Practice 9(3):15-28, 1988.

Carter R: Maximizing the use of evaluation results. In Wholey JS et al. (eds), Handbook of Practical Program Evaluation. San Francisco, 1994, Jossey-Bass.

Gray ST: Evaluation with Power: A New Approach to Organizational Effectiveness, Empowerment, and Excellence. San Francisco, 1998, Jossey-Bass.

Ewell PT: A delicate balance: the role of evaluation in management. Paper presented at the International Network for Quality Assurance Agencies in Higher Education (INQAAHE) meeting, Santiago, May 2-5, 1999.

Dill D: Is there an academic audit in your future? Reforming quality assurance in U.S. higher education, Change July/August:35-41, 2000.

Wergin J: Assessment of programs and units: program review and specialized accreditation. Presented to the AAHE Assessment Conference, Cincinnati, June 1998.

Middaugh MF et al: Strategies for the Practice of Institutional Research: Concepts, Resources and Applications, Resources in Institutional Research, No. 9. Tallahassee, 1994, The Association for Institutional Research, Florida State University.

Seybert JA: The role of institutional research in college management. Seminar presented at the summer institute of the Community College Leadership Program, Ontario Institute for Studies in Education, University of Toronto, July 6, 2000.
