Guidelines for Selecting and Using Assessment Instruments, Tools, and Services

The earlier edition of Measuring Quality sought to promote high standards for campus assessment through a series of guidelines for selecting and using assessment instruments, primarily for improvement purposes. These recommendations were organized around a set of questions, which are summarized below along with the guidance offered in response to each.

Selection and Preparation for Use

Do these assessments live up to their promises?

  • The appropriateness of the tool for the specific task – Does it address relevant questions or help you identify better questions to ask?
  • The skills and experience of administrators – Do those implementing, interpreting, and deciding how to use the results have the necessary background, skills, and abilities?
  • The availability of sufficient financial, personnel, and material resources – Do you have, or can you obtain, the resources (human, fiscal, and physical) to effect changes based on results?

How do we determine which survey is best suited to our purposes?

  • Defining a shared purpose among those most likely to use the results.
  • Involving faculty and staff with relevant experience and expertise.
  • Considering total demand on student time and attention, especially if implementing multiple assessments.

Who at my institution needs to be involved?

  • Those who are most likely to use the results to monitor and improve practices and processes
  • Faculty and staff with expertise in institutional research, assessment, public polling, program evaluation and quality improvement methods
  • Coordination among units engaging in these efforts is also important, to manage the demand on students’ time and attention.

What is generally involved in participating?

  • Participation requirements vary but are relatively predictable. However, time and resource commitments only begin with participation.
  • Preparation for participation takes significant time and effort, especially when appropriate steps are taken to select instruments and methods carefully.
  • The largest commitments for effective use are the time and resources devoted to analyzing, interpreting, and most importantly using the results.

Using Results Responsibly and Effectively

How well do assessments reflect experiences on our campus?

  • Representativeness: Evaluating whether participants reflect the institutional target population – are there any systematic biases, such as by gender, race/ethnicity, course load (for students), or rank (for faculty)? A minimal check along these lines is sketched after this list.
  • Reliability: Were the data collection conditions influenced by any atypical factors that might have affected responses (e.g., a survey administered during exam week, extreme weather, or an athletic team championship)?
  • Validity: Do the results support the conclusions and inferences that you (or the assessment results provider) derive from the data? Determining validity is supported by various analyses of the results, such as benchmarking against other institutions or within-institution subgroup comparisons, as well as by corroborating evidence from other data sources (triangulation), such as post-assessment focus groups, extant data sources (e.g., student records), and related assessment instruments (cf. Borden & Young, 2008).
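To make the representativeness check above concrete, here is a minimal sketch in Python. The population shares, respondent counts, and the choice of a chi-square goodness-of-fit test are illustrative assumptions for this sketch, not procedures prescribed in the text:

```python
# Minimal sketch of a representativeness check (illustrative data only).
# Compares the composition of survey respondents against known population
# shares using a chi-square goodness-of-fit test.
from scipy.stats import chisquare

# Hypothetical population shares for one demographic variable
# (e.g., class standing), taken from institutional records.
population_shares = {"first-year": 0.30, "sophomore": 0.25,
                     "junior": 0.23, "senior": 0.22}

# Hypothetical counts of survey respondents in each group.
respondent_counts = {"first-year": 180, "sophomore": 160,
                     "junior": 150, "senior": 110}

total = sum(respondent_counts.values())
groups = list(population_shares)
observed = [respondent_counts[g] for g in groups]
expected = [population_shares[g] * total for g in groups]

# A small p-value flags a respondent pool whose composition departs
# systematically from the target population; weighting or targeted
# follow-up may be warranted before interpreting results.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```

The same logic extends to any dimension along which systematic bias is plausible, such as gender, race/ethnicity, course load, or faculty rank.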

How can we use the data for assessment and improvement?

  • Bringing together, for a constructive dialogue, individuals who can:
    • Effect changes to relevant programs and services;
    • Allocate resources and determine priorities for change; and
    • Understand the technical and contextual limitations of such assessments.
  • Seeking feedback from the target population regarding the meaning and interpretation of the questions asked and the findings.
  • Considering carefully to whom the results or conclusions apply; interpreting results often requires disaggregation into meaningful subgroups, which should be part of the sampling design to ensure appropriate representation (see the sketch following this list).
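As a minimal illustration of the disaggregation point above, the sketch below (in Python, with hypothetical column names and scores) reports each subgroup’s estimate alongside the number of respondents that supports it, so that thinly represented groups are not over-interpreted:

```python
# Minimal sketch of disaggregated reporting (illustrative data only).
import pandas as pd

# Hypothetical survey results: one score per respondent, plus a
# subgroup attribute that the sampling design should represent.
df = pd.DataFrame({
    "engagement_score": [3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.7, 2.5],
    "enrollment": ["full-time", "part-time", "full-time", "full-time",
                   "part-time", "full-time", "part-time", "full-time"],
})

# Report each subgroup's mean alongside its n; a mean resting on only
# a handful of respondents should be flagged, not headlined.
summary = df.groupby("enrollment")["engagement_score"].agg(["mean", "count"])
print(summary)
```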

How can campuses, governing boards, policy makers, or other constituents use the results for public accountability?

  • Making information public increases the pressure to be responsive to and accountable for results. This is a “double-edged sword”: accountability can be a strong catalyst for action, but it can also negatively affect the use of such information for internally motivated improvements.
  • Using assessments for internal planning and improvement may be the best support for external accountability. Colleges and universities that aggressively evaluate their programs and services—and act on that information to improve those programs and services—will gather a rich body of evidence to support their claims of institutional effectiveness.

What are the advantages and disadvantages of using national or commercially available assessment instruments and resources, compared to locally developed ones?

  • National or commercial products typically:
    • Are cost-effective for participation;
    • Are designed by experts with significant resources;
    • Are tested in a broad range of settings;
    • Offer comparative information from other users (although the appropriateness of comparisons is not always clear); and
    • Provide access to the participation experience of other institutions, which can guide effective practice.
  • Locally developed instruments typically:
    • Allow greater control and attention to local circumstances;
    • Are easier to integrate across populations and samples (e.g., student and faculty surveys with common items related to local priorities and initiatives);
    • Cost more to develop and implement, but provide results that are more directly applicable to change, which can make using the results for improvement more cost-effective; and
    • Accommodate collaborations among “like institutions” to develop relevant assessments with comparative benchmarks.

Do we need to use the human subjects review process when administering one of these assessments?

  • If used exclusively for internal evaluation and improvement, human subjects review is technically not required, but many institutional review boards recommend review to ensure overall institutional compliance.
  • If results are going to be disseminated publicly (on websites or at conferences) or reported in research articles, human subjects review is generally required. However, reviews can often be expedited or exempted under the minimal-risk and “normal educational practice” provisions of most review guidelines. (Note: qualification for exemption from review must be determined by an official designee of the institutional review board and cannot be the judgment of the investigator.)
  • Assessment data collected without human subjects review can subsequently be considered for public dissemination if submitted for review as a use of archival data.
  • Regardless of human subjects review, other federal (e.g., FERPA, HIPAA) and state laws apply to the confidential collection, secure storage, and responsible use of assessment data.

How do national assessments compare with other ways to assess institutional quality?

  • Effective assessment generally requires multiple methods, including localized inquiries as well as those supported by the national and commercial instruments, services, and resources included in the Measuring Quality inventory.
  • Each instrument, service or resource has associated benefits and limitations, which is why multiple, convergent methods are recommended.
  • National and commercially available instruments, services, and resources are often good catalysts and starting points for developing effective assessment practices.

Do these assessments encompass the concerns of all major student and other constituent groups?

  • Because of the diversity of U.S. postsecondary institutions and of the students, faculty, and staff who enroll and work at them, it is important to consider how an instrument, resource, or service relates to local circumstances. Some instruments are best suited to traditional-aged students attending four-year institutions, although an increasing number are tailored to other populations and sectors.
  • Effective use requires attention to potential sources of cultural bias and to the appropriateness of instruments developed for relatively homogeneous populations (regardless of the designers’ intentions).

The New Assessment Landscape

The rapid expansion of available assessment instruments, systems, and support services since the publication of the previous volume indicates the degree to which assessment has become an integral activity within the academy. Hutchings (2009) observes that this growth, driven especially by for-profit assessment services and motivated in large part by accreditation requirements, indicates that assessment “has really ‘arrived’” (p. 33). However, she also notes that these developments may push assessment further toward accountability and away from improvement. This double-edged sword makes it all the more important for institutional leaders to manage the tension effectively when using externally available instruments, software platforms, and data resources. As Hutchings notes:

In the right hands, for the right purposes, these services and tools can be a boon. They can move campuses off the assessment dime and give them a way to get started. (p. 32)

However, she also asks:

Might it be that more automated, push-the-button reporting gives greater visibility and importance to “the data” than it should and short-circuits the deliberative tasks that should be central to assessment? (p. 32)

Despite tremendous advances in assessment instruments, resources, and services, faculty and administrative program staff still have to engage in the difficult tasks of developing consensus on learning and other student development outcomes, relating those outcomes to the curriculum and to support services, and collecting, reviewing, and acting upon evidence of the outcomes of courses, programs, and processes. In addition, institutional leaders are under continuous pressure to address issues related to access, affordability, research, scholarly productivity, and the institution’s contributions to the social and economic needs of a range of external communities. Moreover, all of this must be done in a climate of increased cost containment and public scrutiny.

It is therefore more important than ever for institutions to develop a strategic and balanced approach to assessment that manifests authentic professional responsibility for student learning as well as other mission-critical activities (e.g., workforce development for vocational programs; the impact of research, scholarship, and advanced professional practice for research universities; and outreach/engagement as appropriate to mission). Although it is possible for a few campus leaders (e.g., senior administrators, faculty leaders, and professional staff in assessment and institutional research) to lead and drive this process, effective practice will occur only if and when assessment for improvement is woven into the broader policies and practices of academic administration and faculty governance. At the same time, the academy must come to terms with what it looks like from the outside in; that is, sufficient attention must be paid to how authentic engagement with assessment for improvement is translated into a narrative that can be understood and appreciated by external constituents. As Lee Shulman, former president of the Carnegie Foundation for the Advancement of Teaching, notes, “Accountability requires that we take responsibility for the story we commit ourselves to telling” (2007, p. 22).

References

Hutchings, P. (2009). The new guys in assessment town. Change, 41(3), 26–33.

Shulman, L. (2007). Counting and recounting: Assessment and the quest for accountability. Change, 39(1), 20–25.