Obstacles That Contribute to the Controversy and Strategies to Work Toward Consensus


Occasional Paper #4 | learningoutcomesassessment.org

National Institute for Learning Outcomes Assessment

Opening Doors to Faculty Involvement in Assessment

Pat Hutchings


About the Author

Pat Hutchings

Pat Hutchings joined the Carnegie Foundation for the Advancement of Teaching in 1998, serving as a senior scholar and then as vice president, working closely with a wide range of programs and research initiatives, including the Carnegie Academy for the Scholarship of Teaching and Learning. She has written widely on the investigation and documentation of teaching and learning, the peer collaboration and review of teaching, and the scholarship of teaching and learning. Recent publications, drawing from Carnegie’s work, include Ethics of Inquiry: Issues in the Scholarship of Teaching and Learning (2002), Opening Lines: Approaches to the Scholarship of Teaching and Learning (2000) and, co-authored with Mary Taylor Huber, The Advancement of Learning: Building the Teaching Commons (2005). She left her full-time position in December 2009 but continues to work part-time with the Foundation on a broad range of higher education issues. She was chair of the English department at Alverno College from 1978 to 1987 and a senior staff member at the American Association for Higher Education from 1987 to 1997. Her doctorate in English is from the University of Iowa.

Contents

Abstract

Foreword

Opening Doors to Faculty Involvement in Assessment

Why Faculty Involvement Matters

Obstacles to Greater Involvement

Developments to Build On

Six Recommendations

Many Doors to Faculty Involvement

References

NILOA National Advisory Panel | Mission | Occasional Paper Series | About NILOA | Staff | Sponsors

The ideas and information contained in this publication are those of the authors and do not necessarily reflect the views of Carnegie Corporation of New York, Lumina Foundation for Education, or The Teagle Foundation.


Abstract

Opening Doors to Faculty Involvement in Assessment

The assessment literature is replete with admonitions about the importance of faculty involvement, a kind of gold standard widely understood to be the key to assessment’s impact “on the ground,” in classrooms where teachers and students meet. Unfortunately, much of what has been done in the name of assessment has failed to engage large numbers of faculty in significant ways.

In this paper, I examine the dynamics behind this reality, including the mixed origins of assessment, coming both from within and outside academe, and a number of obstacles that stem from the culture and organization of higher education itself. I then identify more recent developments that promise to alter those dynamics, including and especially the rising level of interest in teaching and learning as scholarly, intellectual work. I close by proposing six ways to bring the purposes of assessment and the regular work of faculty closer together: 1) Build assessment around the regular, ongoing work of teaching and learning; 2) Make a place for assessment in faculty development; 3) Integrate assessment into the preparation of graduate students; 4) Reframe assessment as scholarship; 5) Create campus spaces and occasions for constructive assessment conversation and action; and 6) Involve students in assessment. Together, these strategies can make faculty involvement more likely and assessment more useful.



Foreword

Since the emergence of assessment as a widespread phenomenon at American colleges and universities in the mid-1980s, “faculty involvement” has been repeatedly identified as essential. I repeated this admonition freely at that time, as did Pat Hutchings, the author of this latest NILOA Occasional Paper. But admonishment did not make it so. NILOA’s most recent survey of provosts, for example, reveals that gaining faculty involvement and support is among their top concerns, and I always get similar answers when I pose this question to audiences at conferences and workshops.

In the first portion of her paper, Pat effectively enumerates some challenges to achieving greater faculty involvement. One of the most important is the fact that, from the outset, most faculty perceived assessment as being principally about external accountability. As a result, many continue to see little connection between such activities and their day-to-day life in the classroom. To amplify Pat’s point, moreover, the entire premise of “assessment to improve instruction”—especially if it is offered by outsiders—is that there is something wrong with instruction to begin with. This posture is not a happy one from which to begin a productive conversation. Another salient challenge that Pat nails is the fact that there is currently little payoff to faculty for undertaking this work. Simply telling them that “it is part of the job of teaching,” as too many academic leaders currently do, doesn’t work very well because the connection between assessment and teaching isn’t obvious to faculty. And things may be even worse: widespread perceptions that assessment is essentially an administrative activity—the stuff of “strategic planning” and “program review”—mean that faculty will shun it if only for that reason. Pat also notes that faculty value expertise, and assessment is something that they typically do not know much about. In a similar vein, assessment is prosecuted in the alien language of business and education—not usually the most respected disciplines on any campus.

These are formidable obstacles. But Pat also gives us reasons to hope by reviewing several areas in which we have made progress. First, as she points out, the entire discourse about instruction has acquired a new tone and heightened respectability. And insofar as the connection between assessment and teaching and learning can be clarified, this new rhetoric can only benefit assessment. Related to this is the rising prominence of Teaching and Learning Centers at many institutions. At their best, they can help faculty discover the integral connection between assessment and instruction and show them how to do assessment better. Finally, Pat observes that assessment methods have come a very long way over the last twenty years. When all this began (with the salient exception of Alverno College where Pat once taught), most institutions doing assessment had to be content with re-administering the ACT Assessment or giving GREs in various fields. Now we have creative and authentic standardized general skills tests like the Collegiate Learning Assessment (CLA) and the Critical-Thinking Assessment Test (CAT), as well as a range of solid techniques like curriculum mapping, rubric-based grading, and electronic portfolios. These technical developments have yielded valid mechanisms for gathering evidence of student performance that look a lot more like how faculty do this than ScanTron forms and bubble sheets. At least as important, they have made the job of assessment easier. Given that lack of time is one of the greatest objections that faculty raise about assessment, this also helps engagement. Finally, I’d like to add an item to Pat’s reassuring list of “hopefuls.” While I have no concrete evidence to back up this assertion, I am becoming convinced through sustained interaction that younger faculty members are more positive about and engaged in assessment than their “Boomer generation” colleagues. This may be because they are more collectivist and team oriented—eroding the “isolation in the classroom” syndrome that Pat so accurately describes. But wherever it comes from, it is bound to be good for assessment’s future.

The meat of Pat’s paper is offered in six recommendations for “opening doors to faculty involvement in assessment.” The first—embedding assessment directly into the regular curriculum through mapped and targeted assignments, graded validly and reliably through carefully designed and piloted rubrics—has always been a personal favorite of mine, and I argued for it in my Occasional Paper a year ago. The second and third—more emphasis on faculty development offered through campus Teaching and Learning Centers, and greater emphasis on instructional training (and assessment) in preparing future faculty in graduate training—are familiar, though this by no means diminishes their appropriateness. The fourth—making assessment technique and evidence an integral part of the Scholarship of Teaching and Learning—reflects Pat’s own successful history of doing this over many years at the Carnegie Foundation for the Advancement of Teaching and as AAHE’s founding Assessment Forum Director.

The last two recommendations, though, are not only sound but fresh as well. The fifth cogently notes the fact that colleges and universities lack spaces and opportunities for faculty to discuss and make meaning of assessment results through sustained engagement. Time-constrained committee discussions are no place for serious collective reflection, and there is simply no “room” for this activity (figuratively or literally) in current campus discourse. This is a serious limitation, and it ought to be addressed. Sixth, Pat urges us to involve students directly in assessment. Now there’s an idea! Not only do students have the greatest stake in improving teaching and learning, they also are closer to the data than we are. This means that they can frequently offer much better interpretations of assessment results, and I have seen more than one campus assessment committee learn this to its members’ benefit.

In short, Pat’s paper effectively synthesizes her dozens of years of experience as a faculty member, consultant, and colleague. To her admirable observations and recommendations about engaging faculty in assessment, I would only add one: remember that you don’t need everybody on board to move forward.

Peter T. Ewell Vice-President, National Center for Higher Education Management Systems (NCHEMS) Senior Scholar, NILOA


Opening Doors to Faculty Involvement in Assessment

Pat Hutchings

Since the institutional assessment of student learning outcomes arrived on the higher education scene some 25 years ago, no issue has generated more attention than the role of faculty in such work. The research and practice literature on the topic is packed with admonitions about the importance of faculty involvement, which has come to be seen as a kind of gold standard, the key to assessment’s impact “on the ground”—in classrooms where teachers and students meet. This view, it seems safe to say, is shared by just about everyone who works in, writes about, worries about, or champions assessment.

What is also widely shared is a sense that the real promise of assessment depends on significantly growing and deepening faculty involvement—and, in short, that there has not been enough of it. In truth, the extent to which faculty have been involved in assessment is difficult to know—and the danger here of self-fulfilling prophecy should be kept in mind—but in a recent national survey of campus assessment practice, 66 percent of chief academic officers name “more faculty engagement” as the highest priority in making further progress (Kuh & Ikenberry, 2009, p. 9). Similarly, “a strong faculty leadership role” tops the list of criteria for the Council for Higher Education Accreditation’s annual award to campuses with exemplary assessment programs (see Eaton, 2008), and the assessment framework put forward by the Association of American Colleges & Universities (AAC&U, 2008) urges a focus on “our students’ best work,” in which faculty must clearly play a—perhaps the—central role. Looking back to earlier days, a set of principles developed under the sponsorship of the American Association for Higher Education (AAHE) (Astin et al., 1993) points in the same direction, urging that assessment be firmly connected to the classroom and the values of educators.

On the one hand, such urgings reflect the fact that on hundreds of campuses faculty have played critically important roles in assessment; their efforts have produced exciting accounts by and about those who become engaged in assessment and discover in it (sometimes to their considerable surprise) a route to more powerful approaches to student learning. At the same time, these urgings reflect a concern that much of what has been done in the name of assessment has failed to engage large numbers of faculty in significant ways.

In this paper I examine the dynamics behind this reality, identify recent developments that may alter those dynamics by creating a more positive climate for serious work on learning and teaching, and propose six approaches that promise to bring the purposes of assessment and the regular work of faculty closer together—making faculty involvement more likely and assessment more useful. While building on the observations of many who have written about these matters, I also draw on my experience as an English professor at Alverno College (where assessment was fully integrated into faculty work), on my role as inaugural director of the AAHE Assessment Forum,1 and on my subsequent work (much of it with The Carnegie Foundation for the Advancement of Teaching) with faculty from a wide range of disciplines and institutional types seeking ways to make teaching more visible, valued, and effective in meeting the needs of today’s learners.

Why Faculty Involvement Matters

For starters it’s worth looking at what happens when faculty are significant participants in the assessment process—not just token members of a committee cobbled together for an accreditation visit or an after-the-fact audience for assessment results they had no part in shaping but central voices and shapers of activity. Such significant roles have not been the norm. As Peter Ewell (2009) points out in another NILOA paper, from its early days in higher education, assessment was “consciously separated from what went on in the classroom,” and especially from grading, as part of an effort to promote “objective” data gathering (p. 19). In response, many campuses felt they had no choice but to employ external tests and instruments that kept assessment distinct from the regular work of faculty as facilitators and judges of student learning. In fact, the real promise of assessment—and the area in which faculty involvement matters first and most—lies precisely in the questions that faculty, both individually and collectively, must ask about their students’ learning in their regular instructional work: what purposes and goals are most important, whether those goals are met, and how to do better. As one faculty member once told me, “assessment is asking whether my students are learning what I am teaching.”

Such questions are not new, they are not easy, and most of all they are not questions that can be answered by “someone else.” They are faculty questions. Ironically, however, they have not been questions that naturally arise in the daily work of the professoriate or, say, in department meetings, which are more likely to deal with parking and schedules than with student learning. Literary scholar Gerald Graff (2006) has written about the skill with which academics in his field manage to sidestep such conversations—which, admittedly, can become difficult, take a wrong turn, or bog down, generating a good deal of proverbial heat and not much light.

But listening to the voices of faculty who have taken on assessment’s questions with colleagues, the power of the process is clear. In interviews conducted as part of the work of the AAHE Assessment Forum, for instance, my colleague Ted Marchese and I heard over and over about assessment’s power to prompt collective faculty conversation about purposes, often for the first time; about discovering the need to be more explicit about goals for student learning; about finding better ways to know whether those goals are being met; and about shaping and sharing feedback that can strengthen student learning. As a professor of English at the University of Virginia told us, although he did not wholly endorse its work, the university’s assessment steering committee was worth sticking with because “it’s the only place on campus I can find an important conversation about what students are learning” (Hutchings & Marchese, 1990, p. 23). Such conversations are important in and of themselves, but they matter, too, because they set the stage for the larger cycle of assessment work: designing and selecting instruments and approaches, grappling with evidence, and using results to make changes that actually help students achieve the goals and purposes faculty believe are most important.

1 In preparing this paper, I returned to a 1990 Change magazine article I co-authored with Ted Marchese, my colleague at the AAHE and its vice president. Ted’s view of assessment has deeply influenced my own, and I am grateful to him for thinking with me over the years about many of the issues I deal with here.

All of this is by way of saying that assessment has deep-seated educational roots. A number of forces propelled assessment’s arrival on the higher education landscape, certainly, but among the most notable was the 1984 report, Involvement in Learning, by the National Institute of Education’s (NIE) Study Group on the Conditions of Excellence in American Higher Education, which called on undergraduate education to 1) set high expectations, 2) involve students in their learning, and 3) assess and give feedback for improvement. Assessment was seen first and foremost as an educational practice, and its champions—like Alexander Astin, who served on the NIE study group—held up a vision of educational quality based not on reputation and resources but on the institution’s contribution to learning—and, therefore, on the work of students and faculty.

Obstacles to Greater Involvement

While the role of assessment in learning has had and continues to have eloquent and prestigious proponents, it has also attracted other, perhaps louder, patrons from its earliest days in higher education. In 1986 the National Governors’ Association (NGA) embraced the idea in a report tellingly entitled Time for Results. A key figure in this initiative was Governor John Ashcroft of Missouri, whose state motto, “Show Me,” captured the tone of policy makers tired of what they saw as higher education’s sense of entitlement and asking for proof and accountability. In fairness, it should be said that external calls for assessment took a range of forms in the early years, and many of them were sensible and well intentioned. Alverno College’s much-touted model of assessment in the service of individual student learning (which was prominently featured in the NGA report) captured the imagination of some policy makers, and the general trend as mandates began to emerge at the state level was toward guidelines that invited, or at least permitted, campus engagement and invention connected to local curriculum and teaching. Nevertheless, the bottom line was that assessment, from its earliest days, became identified with a group of actors outside academe whose patronage cast a pall over its possibilities within the academy. From the faculty point of view, this looked a lot like someone else’s agenda—and not an altogether friendly someone else, at that.

But governors and external mandates have only been part of the scenario. Obstacles to fuller faculty involvement in assessment have been encountered in several directions, including that of higher education itself.

First, for many faculty the language of assessment has been less than welcoming. While some observers—attempting to make a virtue of necessity—have pointed out that the word’s etymology comes from “sitting down beside” (in acts of coaching and feedback between teacher and student), louder to most ears have been echoes of less congenial activities: accounting, testing, evaluation, measurement, benchmarking, and so forth—language from business and education, not the most respected fields on most campuses. As a group assembled for a Teagle Foundation “listening” on assessment observed, “If one endorsed the idea that, say, a truly successful liberal arts education is transformative or inspires wonder, the language of inputs and outputs and ‘value added’ leaves one cold” (Struck, 2007, p. 2). In short, it is striking how quickly assessment can come to be seen as part of “the management culture” (Walvoord, 2004, p. 7) rather than as a process at the heart of faculty’s work and interactions with students.

A second obstacle to fuller faculty involvement has been that faculty are not trained in assessment. Put simply, graduate education aims to develop scholarly expertise in one’s field. While forward-looking doctoral programs are now beginning to treat teaching as a more prominent part of professional formation, it remains true that reflecting on educational purposes, formulating learning goals, designing assignments and exams, and using data for improvement are activities that live, if at all, only on the far margins of most Ph.D. students’ experience. Nor has assessment had a central place in professional development experiences for faculty. Early in the higher education assessment movement most campus teaching centers kept student outcomes assessment at arm’s length, wary of mixing their (almost completely voluntary) services with an enterprise associated with mandates and evaluation. This has begun to change (as noted below) but, meanwhile, faculty who might have been interested in assessment have had no ready place or opportunity to learn about it. Especially as assessment conversations turn technical—as they do, perhaps prematurely—faculty, for whom expertise is a premier value, have bowed out, not wanting to be seen as amateurs and dilettantes. This dynamic has likely been exacerbated when the campus, for good reasons, establishes an assessment office and specialized staff to manage it, almost by definition marginalizing “regular” faculty.

A third obstacle to faculty involvement has been that the work of assessment is an uneasy match with institutional reward systems. It is important not to overgeneralize here. On some campuses, particularly those where teaching is the central mission, assessment has been recognized and valued as part of the faculty role, either as an aspect of teaching or (as in the case of faculty sitting on an assessment planning or advisory committee) as valued institutional service. In many higher education settings, however, assessment, like teaching more generally, has often been undervalued or invisible in promotion and tenure deliberations, a circumstance that has certainly not encouraged faculty to see assessment as their work.

Fourth, and finally, it may be that faculty have not yet seen sufficient evidence that assessment makes a difference. There’s a chicken-and-egg dynamic at work here; more faculty involvement would presumably make a bigger difference. But the fact remains that the benefits of assessment are uncertain and that faculty facing rising demands on their time and energy must make choices. Not choosing assessment, after all, may be a rational decision. Indeed, assessment is seen as “redundant” on many campuses, duplicating already existing processes and not yielding additional benefits (Kuh & Ikenberry, 2009, p. 9). Similarly, as evidenced by numerous reports over the years, many campuses have succeeded in “doing assessment” but have fallen short in using the results to make changes in the educational experience of their students (Carey, 2007; Hutchings & Marchese, 1990; Lopez, 1998). Faculty perceptions reflect this shortfall, as shown in data from the 2009 Faculty Survey of Student Engagement; while 75 percent of respondents indicated their campuses were involved in assessment “quite a bit” or “very much,” only about a third had positive views of the dissemination and usefulness of assessment findings (National Survey of Student Engagement, 2009, pp. 21–22). Indeed, there is now a growing awareness that the neat logic of “data-driven improvement” is much easier to invoke than to enact (Bond, 2009); a recently announced initiative of the Spencer Foundation, for instance, “questions the assumption that the simple presence of data invariably leads to improved outcomes and performance, and that those who are presented information under data-driven improvement schemes will know how best to make sense of it and transform their practice” (see www.spencer.org). Faculty who are already and increasingly pressed in too many directions would be readier to join the assessment process, one might surmise, if its benefits were easier to see.

Developments to Build On

The four obstacles to faculty involvement in assessment noted above were in place for the most part when assessment first appeared on the higher education scene in the mid-1980s, and they are still in force today. But it is not true, despite metaphors of graveyards and slow-turning ships, that there is nothing new under the higher education sun. A number of recent developments may be creating a more hospitable climate for a faculty role in assessment.

At the most general level is the growth of attention to teaching and learning. Traditionally less visible and valued than other aspects of academic life in higher education, the profile of pedagogy has clearly risen over the last two decades. In 1999, for instance, my Carnegie Foundation colleague Mary Huber set out to map the various forms and forums for exchange about matters pedagogical. “What has been surprising to us,” she reported,

is not only how many forums there are right now for this exchange, but how surprised people seem to be to find this out. In other words, what we are finding appears to be at odds with the prevailing stereotype that there has been little investment of intellectual interest and energy in teaching and learning in higher education. Perhaps in comparison to traditional research this is so, but the field of teaching and learning in higher education is far more active (if not very evenly distributed) than many might think. (Huber, 1999, p. 3)

In short, higher education, here in the U.S. and internationally, has seen a huge rise in the number of campus events, conferences, special initiatives, funded projects, journals, online forums, and multimedia resources shining a light on faculty’s work as teachers. Assessment, in its broadest and most important sense of making judgments about student learning, has been a part of this expanding “teaching commons” (Huber & Hutchings, 2005), creating a more generous space for faculty engagement with campus assessment activities.


Within this general phenomenon one also finds the growth of more focused communities around specific pedagogies (service learning, problem-based learning, undergraduate research, and so forth) and the teaching and learning of particular fields (chemical education, for example, or the teaching of writing). As champions of their chosen approach, these communities have naturally turned to assessment-like activities for evidence of impact and to shape next steps. External funding for these efforts has, increasingly, mandated such data gathering, and the notion that educational reform should be informed by evidence has become a commonplace—so much so, in fact, that talking about teaching without invoking learning has become a sort of anathema.

At the same time, and running hand in hand with these developments, has been the rise of the scholarship of teaching and learning, a movement that has gained significant momentum over the past decade. Over 250 campuses have been involved in the Carnegie Academy for the Scholarship of Teaching and Learning (CASTL, running from 1998–2009), and many more campuses in the U.S. and beyond have embraced this agenda. Today, growing numbers of faculty from a full range of fields and all institutional types are posing and investigating questions about their students’ learning, using what they discover to improve their own classrooms and to contribute to a body of knowledge others can build on. Such work has become an entrée for those who perhaps would not be drawn to assessment but feel welcomed by the idea of seeing their teaching and their students’ learning as sites for scholarly inquiry—particularly in a community of like-minded educators interested in learning from their findings. A 2009 survey of CASTL campuses indicates that such work, even when involving relatively small numbers of faculty, brings energy and openness to institutional assessment activities:

The scholarship of teaching and learning is often mentioned [in the Carnegie survey] as having had an effect on assessment. Departments where faculty have been engaged in inquiry into the students’ experience understand learning outcomes better because “they have assessed student learning in their classrooms,” and are “noticeably less hostile to institutional assessment.” Respondents also noted specific programs (the first-year experience, general education) and majors (biology) where scholarship of teaching and learning work has been woven into assessment approaches. (Ciccone, Huber, Hutchings, & Cambridge, 2009, p. 9)

Clearly there are productive bridge-building possibilities here, as the scholarship of teaching and learning and assessment share overlapping agendas, practices, and institutional constituencies and as growing faculty involvement in the former shifts understandings of the latter to more clearly align assessment with what faculty actually do as teachers.

Moreover, this kind of serious, intellectual work on teaching and learning is making its way—albeit slowly—into campus practices and policies related to faculty roles and rewards. In a 2002 AAHE national survey, two thirds of chief academic officers reported changes “to encourage and reward a broader definition of scholarship” (O’Meara, 2005, p. 261). It is no accident that for more than a decade the assessment conversation in this country ran in parallel with an energetic national conversation about faculty roles and rewards. That conversation had waned somewhat by the time the AAHE Forum on Faculty Roles and Rewards concluded a number of years ago, but the Association of American Colleges & Universities (AAC&U) has since stepped in with new leadership—organizing conferences on the topic and recommending in its much-circulated Framework for Accountability that “campus reward systems should incorporate the importance of faculty members’ intellectual and professional leadership in both assessment and educational improvement” (p. 12). One route to this end is the work of the Peer Review of Teaching Project (PRTP)—a national initiative promoting the use of course portfolios—a tool “that combines inquiry into the intellectual work of a course, careful investigation of student understanding and performance, and faculty reflection on teaching effectiveness” (not a bad definition of assessment at its best). The PRTP has engaged hundreds of faculty members from numerous universities, many of whose course portfolios can be found at http://www.courseportfolio.org. These artifacts and the review processes they make possible are raising the profile of inquiry into learning and teaching, by whatever name, and setting the conditions in which such work can be rewarded, as other forms of scholarship are.

Finally, the climate for faculty involvement in assessment is becoming more hospitable with the emergence of new tools and technologies. A wider range of instruments is now available—beyond the small set of standardized tests most visible in assessment’s first decade—and some of these are clearly more related to the tasks and assignments that faculty require of students in their own classrooms. The Collegiate Learning Assessment (CLA), for instance, forgoes reductive multiple-choice formats in favor of authentic tasks that would be at home in the best classrooms; CLA leaders now offer workshops to help faculty design similar tasks for their own classrooms, the idea being that these activities are precisely what students need to build and improve their critical-thinking and problem-solving skills. The widely used National Survey of Student Engagement, and its cousin, the Community College Survey of Student Engagement, document the extent to which students engage in educational practices associated with high levels of learning and development—practices like frequent writing, service learning and discussing ideas with faculty outside of class. Electronic student portfolios, which over the last decade have become widespread on all kinds of campuses, now provide a vehicle for bringing the regular work of the classroom under the assessment umbrella in manageable ways (see, for example, Miller & Morgaine, 2009). Some campuses are now employing online data management systems, like E-Lumen and TracDat, that invite faculty input into and access to assessment data (Hutchings, 2009). With developments like these facilitating faculty interest and engagement in ways impossible (or impossibly time consuming or technical) in assessment’s early days, new opportunities are on the rise.

Obstacles, it’s true, are also on the rise. On campuses across the nation, the picture is hardly rosy. Cutbacks are everywhere; faculty are increasingly stressed and pressed, with many more in part-time, contingent positions; and higher education is seen by some as “underachieving” (Bok, 2006), failing many of the students who need it most. The point here is not that faculty involvement in assessment will now be easy but that there have been developments to build on going forward.


Opening Doors to Faculty Involvement: Six Recommendations

In this spirit, now may well be a good time for campuses to survey their full range of assessment activities, recognizing that not all of them use the language of assessment and that they come in a wide variety of shapes and sizes. Having the fullest possible picture in view may suggest new ways to encourage faculty activity where it already exists, to support it where it is emergent, and to think harder about where and exactly how the scarce resource of faculty time and talent can be best deployed. The following six recommendations may serve as keys—opening doors to faculty involvement in assessment.

1. Build Assessment Around the Regular, Ongoing Work of Teaching and Learning

Assessment should grow out of faculty’s questions about their students’ learning and the regular, ongoing work of teaching: syllabus and curriculum design, the development of assignments and classroom activities, the construction of exams, and the provision of feedback to students. These kinds of closer-to-the-classroom connections can help to move assessment “away from the center, and out to the capillary level,” as one group of practitioners suggested, making it more “centrifugal” (Struck, 2007, p. 2).

This injunction to build assessment around faculty’s regular work in the classroom has been part of assessment’s gospel from the beginning, but doing so has often gone against the grain, as campus assessment practices were consciously separated from what went on in the classroom (Ewell, 2009). In the face of this disconnect, campuses could hardly find a better place to begin (or to resuscitate) assessment than by building on (rather than dismissing) the practice of grading—an approach advocated by Barbara Walvoord (2004). Starting, as it were, at “ground level”—with a practice in which every faculty member is engaged every semester in every class for every student—can bring to the fore important questions about course design, assignments and exams, and feedback to students, which is arguably an aspect of assessment that would benefit from much more attention—and where faculty interests and talents would be particularly to the point. A focus on grading and feedback would also address the long-standing problem of student motivation by assuring that assessment does indeed “count” in ways that elicit students’ best work.

Embedding assessment in the classroom then sets the stage for work at the next level of the department or program, contexts which draw on what most members of the professoriate know and care most about: their discipline or field. Those seeking to engage more faculty more fully in assessment would do well to invite and explore questions about how students “decode the disciplines” (Pace & Middendorf, 2004) and learn “disciplinary habits of mind” (Gurung, Chick, & Haynie, 2008)—to quote from the titles of two recent volumes that map this terrain. When assessment reflects and respects disciplinary interests—recognizing, for example, that learning history is not the same as learning music or chemistry—it is more likely to lead to consequential faculty engagement. Assessment, one might say, must live where faculty live, in the classrooms where they teach the field they love.


2. Make a Place for Assessment in Faculty Development

Over the last several decades many campuses (research universities, first, but now a much broader swath) have established teaching and learning centers that offer a broad array of instructional improvement opportunities—and assessment can be an integral part of their work.

Signs of movement in this direction are increasingly evident. Nancy Chism, a national leader in the faculty development community, argues that teaching improves through “naturally occurring cycles of inquiry” in which faculty plan, act, observe, and reflect. Teaching center staff support this process, she says, by assisting faculty with data collection and by suggesting instruments and methods for obtaining “good information on the impact of teaching” (Chism, 2008, n.p.). Bringing faculty together around such evidence, facilitating constructive conversations about its meaning and implications, setting local efforts in the context of a larger body of research—these are important roles that many teaching centers are now taking up, roles that strengthen the growing sense of community around pedagogy and a shared commitment to evidence.

In this same spirit, many centers offer small grants to faculty trying out a new classroom approach, and some now require them to assess the impact of their innovation on student learning and to share what they have learned in campus events, seminars, and conferences or in online representations of their work. While there’s a danger in linking such work too closely to the machinery of institutional assessment (turning an intellectual impulse into a bureaucratic requirement), most faculty are eager to see their work contribute to something larger, and teaching centers can play an important brokering role in this regard, developing faculty habits of inquiry and evidence use that are the sine qua non of assessment—and essential to good teaching, as well. In short, assessment should be central to professional development.

3. Build Assessment into the Preparation of Graduate Students

This recommendation is part and parcel of the previous one (teaching centers often serve graduate students as well as faculty), but it bears highlighting separately as well, especially since signs of progress in this area are beginning to appear.

The chemistry department at the University of Michigan, for instance, offers a program of study for graduate students interested in a more sustained experience in teaching, curriculum design, and assessment. The multicampus, NSF-funded Center for the Integration of Research, Teaching, and Learning (CIRTL) (see www.cirtl.net), coordinated by the University of Wisconsin–Madison, trains STEM graduate students and postdocs to bring their investigative skills as researchers to their work as teachers. The Teagle Foundation has recently funded a number of similar efforts, some on individual campuses and one—through the Council of Graduate Schools—tellingly entitled “Preparing Future Faculty to Assess Student Learning Outcomes” (see www.teaglefoundation.org/grantmaking/grantees/gradschool.aspx).


These examples are still the exception, admittedly, but they show what is possible. Weaving assessment into courses and experiences designed to prepare beginning scholars for their future work as educators is a promising step forward, with long-term benefits as today’s graduate students become tomorrow’s faculty members and campus leaders.

4. Reframe the Work of Assessment as Scholarship

As scholars, faculty study all manner of artifacts and phenomena; their students’ learning should be seen as an important site for investigation, as well. Creating a place (and incentives) for greater faculty involvement in assessment means seeing such work not simply as service or as good campus citizenship but as an important intellectual enterprise—a form of scholarship reflecting faculty’s professional judgment about the nature of deep understanding of their field and about how such understanding is developed.

In this sense, assessment would do well to find common cause with the scholarship of teaching and learning. This must be done carefully, given the different impulses and motivations behind each, but as noted above the two movements can strengthen each other. Thus, for starters, campus leaders of assessment and those charged with advancing the scholarship of teaching and learning should explore shared agendas and practices. A parallel discussion between these two communities would be beneficial at the national level as well—for example, by including leaders from the scholarship of teaching and learning community at assessment conferences, and vice versa.

Also needed is continued attention to the development and use of new forms, formats, and genres for capturing the scholarly work of teaching, learning, and assessment. The course portfolio model mentioned above is perhaps pre-eminent in this regard, with a growing community of users trading artifacts, reviewing one another’s evidence and reflections, and putting their materials forward in both formative and summative decision-making settings (Bernstein, Burnett, Goodburn, & Savory, 2006). But portfolios are only one possibility, and inventing other ways for faculty engaged in assessment—be it in their own classroom or beyond—to document and share their work in ways that can be reviewed, built on, and rewarded is a critical step forward that can help propel and reenergize the larger conversation about faculty roles and rewards.

5. Create Campus Spaces and Occasions for Constructive Assessment Conversation and Action

Behind many of the long-standing challenges of assessment is a more fundamental reality: that teaching and learning have traditionally been seen and undertaken as private activities, occurring behind classroom doors both literally and metaphorically closed. As noted above, this reality has shifted significantly in recent years, as teaching and learning have become topics of widespread interest, debate, and inquiry. Campuses seeking to engage more faculty more deeply with assessment must find ways to create such opportunities—and there are now many possibilities and models.

Behind many of the long-standing challenges of assessment is a more fundamental reality: that teaching and learning have traditionally been seen and undertaken as private activities, occurring behind classroom doors both literally and metaphorically closed.


Some readers will recall, as an example of such opportunities, the Harvard Assessment Seminars from the 1990s, sponsored by Derek Bok, led and reported on by Richard Light, and involving a large group of notable educators from across the university (and a few from nearby institutions as well) in gathering and acting on evidence about a range of widely relevant questions about undergraduate learning (Light, 1990, 1991). On the more modest side, departments can set aside time in their regular meetings to examine issues of teaching and learning or set up teaching circles specifically dedicated to such work. Other possibilities include multidisciplinary reading and study groups (perhaps facilitated by a teaching center), faculty learning communities and inquiry groups (Cox & Richlin, 2004; Huber, 2008), and, importantly, opportunities to interact and share findings with peers beyond the institution, as faculty expect to do in other types of scholarly work.

6. Involve Students in Assessment

If faculty have been less than enthusiastic about assessment, it is not for lack of caring about their students’ learning. Indeed, bringing students more actively into the processes of assessment may well be the most powerful route to greater faculty engagement.

One relevant line of work in this vein is student self-assessment—providing the tools and frameworks that allow learners to monitor and direct their own development. Alverno College is arguably the pioneer in this arena, but there are many recent efforts as well, including AAC&U’s push for “intentional learning” (AAC&U, 2002); the widespread use of e-portfolios as a vehicle for students to reflect on and to direct their own progress (Yancey, 2009); the creation of rubrics that can serve as frameworks for students to assess their own learning (Rhodes, 2010; Walvoord, 2004); and the interest in approaches that develop students’ metacognitive abilities (see, for instance, Strategic Literacy Initiative, 2007). Similarly, working under the banner of the scholarship of teaching and learning, a number of campuses have invented vehicles for involving students in campus conversations about and studies of teaching and learning, arguing that they should be collaborators and co-inquirers (not simply objects of study) and that they can make distinctive contributions to classroom research projects, curricular evaluation and revision, and institutional ethnography (Werder & Otis, 2010).

Efforts like these speak to the role of students as agents of their own learning, but in a larger sense they are also steps toward making the campus an organization in which all members, top to bottom and across the institution, are focused on improvement—and where evidence and reflection are part of the routines of daily life. These routines must be developed across the campus at multiple levels—from the institution, to the program, to the course and classroom where they manifest themselves in the relationship between faculty and students and in cycles of learning, assessment, feedback, and further learning. Situating assessment within those cycles is the key to faculty involvement and to making assessment—at all levels—a more positive and consequential process.

Many Doors to Faculty Involvement

Behind all of the above recommendations is a broader one: that there is no single best way to support greater faculty engagement with assessment. Significant numbers of faculty have been involved, and more will enter into the work if opportunities present themselves in appealing, doable forms aligned with faculty’s interests, talents, time, and values. For some faculty, assessment will be done primarily in the context of their own teaching—by gathering evidence, for instance, about the impact of a classroom innovation or a new application of technology and using what is discovered to improve students’ learning; this work matters, and it should be acknowledged and shared more broadly in ways that are appropriate. Other faculty will be engaged by efforts at the department or program level, perhaps through a curricular reform effort in which assessment will play a part; again, this work should be seen and acknowledged as contributing to the campus’s efforts to use evidence to prompt reflection, innovation, and improvement. Some faculty will find through their assessment activities new scholarly interests and communities that will change their career directions in major ways; others will discover more bounded ways to contribute. Whatever the focus or commitment, the need for significant investments of faculty time is likely to be higher in assessment’s early stages, declining as experience is gained and as processes become more integrated into regular work.

Making all contributions—large or small, sustained or episodic, early or later in the process—more visible and valued, and opening a variety of doors to assessment, is a critical step forward. In this spirit, campus leaders may need to think more broadly and more creatively about where and how faculty can be involved most productively in the work of assessment—matching tasks to talents, needs to interests, and remembering, above all, that assessment is only a part of the larger enterprise of improvement in higher education.



References

Association of American Colleges & Universities (AAC&U). (2002). Greater expectations: A new vision for learning as a nation goes to college. Washington, DC: Author.

Association of American Colleges & Universities (AAC&U). (2008). Our students’ best work: A framework for accountability worthy of our mission (2nd ed.). Washington, DC: Author.

Astin, A. W., Banta, T. W., Cross, K. P., El-Khawas, E., Ewell, P. T., Hutchings, P., . . . & Wright, B. D. (1993, April). Principles of good practice for assessing student learning. Leadership Abstracts, 6(4). Retrieved from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/15/56/76.pdf

Bernstein, D., Burnett, A. N., Goodburn, A., & Savory, P. (2006). Making teaching and learning visible: Course portfolios and the peer review of teaching. Bolton, MA: Anker.

Bok, D. C. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more. Princeton, NJ: Princeton University Press.

Bond, L. (2009). Toward informative assessment and a culture of evidence. Stanford, CA: The Carnegie Foundation for the Advancement of Teaching.

Carey, K. (2007, September/October). Truth without action: The myth of higher-education accountability. Change, 39(5), 24–29.

Chism, N. (2008, April). The scholarship of teaching and learning: Implications for professional development. Keynote presentation at the Thai Professional and Organizational Development (POD) Network 2-Day Workshop, Bangkok, Thailand.

Ciccone, A., Huber, M. T., Hutchings, P., & Cambridge, B. (2009). Exploring impact: A survey of participants in the CASTL Institutional Leadership and Affiliates Program. Unpublished paper, The Carnegie Foundation for the Advancement of Teaching, Stanford, CA.

Cox, M. D., & Richlin, L. (Eds.). (2004, May). Building faculty learning communities (New Directions for Teaching and Learning No. 97). San Francisco, CA: Jossey-Bass.

Eaton, J. (2008, July/August). Attending to student learning. Change, 40(4), 22–27.

Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension (NILOA Occasional Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf

Gurung, R., Chick, N., & Haynie, A. (Eds.). (2008). Exploring signature pedagogies: Approaches to teaching disciplinary habits of mind. Sterling, VA: Stylus.

Graff, G. (2006, January). Toward a new consensus: The Ph.D. in English. In C. M. Golde & G. E. Walker (Eds.), Envisioning the future of doctoral education: Preparing stewards of the discipline (pp. 370-389). San Francisco, CA: Jossey-Bass.

Huber, M. T. (1999, March). Developing discourse communities around the scholarship of teaching. Paper presented at the Colloquium on Campus Conversations, American Association for Higher Education, Washington, DC.

Huber, M. T. (2008). The promise of faculty inquiry for teaching and learning basic skills. Stanford, CA: The Carnegie Foundation for the Advancement of Teaching.

Huber, M. T., & Hutchings, P. (2005). The advancement of learning: Building the teaching commons. San Francisco, CA: Jossey-Bass.

Hutchings, P. (2009, May/June). The new guys in assessment town. Change, 41(3), 26–33.

Hutchings, P., & Marchese, T. (1990, July/August). Watching assessment: Questions, stories, prospects. Change, 22(5), 12–38.


Kuh, G., & Ikenberry, S. (2009, October). More than you think, less than we need: Learning outcomes assessment in American higher education (Abridged version). Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/documents/niloaabridgedreport.pdf

Light, R. J. (1990). The Harvard Assessment Seminars: Explorations with students and faculty about teaching, learning, and student life (First report). Cambridge, MA: Harvard University Graduate School of Education.

Light, R. J. (1991). The Harvard Assessment Seminars: Explorations with students and faculty about teaching, learning, and student life (Second report). Cambridge, MA: Harvard University Graduate School of Education.

Lopez, C. L. (1998, March). The commission’s assessment initiative: A progress report. Paper presented at the 103rd Annual Meeting of the North Central Association of Colleges and Schools, Chicago, IL.

Miller, R., & Morgaine, W. (2009, winter). The benefits of e-portfolios for students and faculty in their own words. Peer Review, 11(1), 8–12.

National Governors Association. (1986). Time for results: The governors’ 1991 report on education. Washington, DC: Author.

National Survey of Student Engagement. (2009). Assessment for improvement: Tracking student engagement over time--Annual results. Bloomington, IN: Indiana University Center for Postsecondary Research.

O’Meara, K. (2005). Effects of encouraging multiple forms of scholarship nationwide and across institutional types. In K. O’Meara & R. E. Rice (Eds.), Faculty priorities reconsidered: Rewarding multiple forms of scholarship (pp. 255-289). San Francisco, CA: Jossey-Bass.

Pace, D., & Middendorf, J. (Eds.). (2004, summer). Decoding the disciplines: Helping students learn disciplinary ways of thinking (New Directions for Teaching and Learning No. 98). San Francisco, CA: Jossey-Bass.

Rhodes, T. L. (Ed.). (2010). Assessing outcomes and improving achievement: Tips and tools for using rubrics. Washington, DC: Association of American Colleges & Universities.

Spencer Foundation. (2010). Strategic initiatives: Data use and educational improvement [Web page]. Retrieved from http://www.spencer.org/content.cfm/data-use-and-educational-improvement.

Strategic Literacy Initiative. (2007, July). The impact of metacognition on college students. Material from WestEd Leadership Institute in Reading Apprenticeship, Oakland, CA.

Struck, P. T. (2007, February). Report to The Teagle Foundation on a listening on assessment. Report on The Teagle Foundation listening held on Jan. 18, 2007, at the American Academy of Arts and Sciences in Cambridge, MA. Retrieved from http://www.teagle.org/learning/pdf/20070201_struck.pdf

Study Group on the Conditions of Excellence in American Higher Education. (1984, October). Involvement in learning: Realizing the potential of American higher education. Washington, DC: National Institute of Education.

Walvoord, B. E. (2004, April). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass.

Werder, C., & Otis, M. M. (Eds.). (2010). Engaging student voices in the study of teaching and learning. Sterling, VA: Stylus.

Yancey, K. B. (2009, winter). Electronic portfolios a decade into the twenty-first century: What we know, what we need to know. Peer Review, 11(1), 28–32.


NILOA National Advisory Panel

Trudy W. Banta, Professor, Indiana University-Purdue University Indianapolis

Douglas C. Bennett, President, Earlham College

Robert M. Berdahl, President, Association of American Universities

Molly Corbett Broad, President, American Council on Education

Judith Eaton, President, Council for Higher Education Accreditation

Richard Ekman, President, Council of Independent Colleges

Joni Finney, Practice Professor, University of Pennsylvania; Vice President, National Center for Public Policy and Higher Education

Susan Johnston, Executive Vice President, Association of Governing Boards

Paul Lingenfelter, President, State Higher Education Executive Officers

George Mehaffy, Vice President, Academic Leadership and Change, American Association of State Colleges and Universities

Margaret Miller, Professor, University of Virginia

Charlene Nunley, Program Director, Doctoral Program in Community College Policy and Administration, University of Maryland University College

Randy Swing, Executive Director, Association for Institutional Research

Carol Geary Schneider, President, Association of American Colleges and Universities

David Shulenburger, Vice President, Association of Public and Land-Grant Universities

Belle Wheelan, President, Southern Association of Colleges and Schools

George Wright, President, Prairie View A&M University

Ex-Officio Members

Peter Ewell, Vice President, National Center for Higher Education Management Systems

Stanley Ikenberry, Interim President, University of Illinois

George Kuh, Chancellor’s Professor, Indiana University

NILOA Mission

NILOA’s primary objective is to discover and disseminate ways that academic programs and institutions can productively use assessment data internally to inform and strengthen undergraduate education, and externally to communicate with policy makers, families, and other stakeholders.

NILOA Occasional Paper Series

NILOA Occasional Papers are commissioned to examine contemporary issues that will inform the academic community of the current state of the art of assessing learning outcomes in American higher education. The authors are asked to write for a general audience in order to provide comprehensive, accurate information about how institutions and other organizations can become more proficient at assessing and reporting student learning outcomes for the purposes of improving student learning and responsibly fulfilling expectations for transparency and accountability to policy makers and other external audiences.

Comments and questions about this paper should be sent to [email protected].


About NILOA

• The National Institute for Learning Outcomes Assessment (NILOA) was established in December 2008.

• NILOA is co-located at the University of Illinois and Indiana University.

• The NILOA web site went live on February 11, 2009. www.learningoutcomesassessment.org

• The NILOA research team reviewed 725 institution web sites for learning outcomes assessment transparency from March 2009 to August 2009.

• One of the co-principal NILOA investigators, George Kuh, founded the National Survey of Student Engagement (NSSE).

• The other co-principal investigator for NILOA, Stanley Ikenberry, was president of the University of Illinois from 1979 to 1995 and of the American Council on Education from 1996 to 2001. He is currently serving as Interim President of the University of Illinois.

• Peter Ewell joined NILOA as a senior scholar in November 2009.

NILOA Staff


Stanley Ikenberry, Co-Principal Investigator

George Kuh, Co-Principal Investigator and Director

Peter Ewell, Senior Scholar

Staci Provezis, Project Manager

Jillian Kinzie, Associate Research Scientist

Jason Goldfarb, Research Analyst

Natasha Jankowski, Research Analyst

Gloria Jea, Research Analyst

Julia Makela, Research Analyst

NILOA Sponsors

Carnegie Corporation of New York

Lumina Foundation for Education

The Teagle Foundation

Produced by Creative Services | Public Affairs at the University of Illinois for NILOA. 10.032


National Institute for Learning Outcomes Assessment

For more information, please contact:

National Institute for Learning Outcomes Assessment (NILOA), University of Illinois at Urbana-Champaign, 340 Education Building, Champaign, IL 61820

learningoutcomesassessment.org
[email protected]
Phone: 217.244.2155
Fax: 217.244.3378