MULTI-SITE ASSESSMENT IN ENGINEERING EDUCATION

Gloria Rogers, Ph.D.
Dean for Institutional Research and Assessment
Rose-Hulman Institute of Technology
5500 Wabash Avenue
Terre Haute, Indiana 47803-3999 USA
ph. (812)877.8451; fax (812)877.8931; email: gloria.rogers@rose-hulman.edu


ABSTRACT

The National Science Foundation has funded a number of major initiatives aimed at systemic change in engineering education. One of the biggest challenges in documenting the effectiveness of this funding effort is the development of comprehensive assessment and evaluation strategies. This paper focuses on the experience of one such initiative, the Foundation Coalition, and the challenges of assessing the effectiveness of the project goals. The Foundation Coalition has seven partner institutions representing very diverse student populations. This paper provides a background on the role of assessment in curriculum innovation, an example of goal development, and a summary of the challenges of developing a multi-site assessment process.


INTRODUCTION

Assessment and evaluation are crucial to the feedback and improvement process for innovative educational programs. Unfortunately, these tasks are too often delayed in favor of the immediately pressing issues that demand attention as organizations undertake change. During the summer of 1995, representatives from forty-six engineering colleges met to discuss strategies for the assessment of innovative engineering programs. These colleges represented six Engineering Education Coalitions funded by the National Science Foundation and designed to create systemic reform in undergraduate engineering education.

WHY ASSESS?

There are three basic reasons to assess educational innovation: to improve, to prove, and to inform. To improve a project, the assessment process is generally short-term and driven by the faculty/project director for the purpose of making adjustments while the innovation is in progress; this is generally referred to as "formative" assessment. Summative assessment takes a longer-term design, and its primary focus is to answer the question, "Did the innovation achieve its goals?" Finally, assessment is also designed to inform, providing faculty and administration with information to guide the decision-making process.

For innovation to become institutionalized and/or disseminated beyond a pilot project, it is often not enough to know that the project has been improved or that it has achieved its goals. Examples can be given where successful, innovative curricula did not survive after external funding ended, for reasons that had nothing to do with the efficacy of the curricula. Decision-makers need to be involved in the development of the overall evaluation design so that important data are gathered which can guide the decision-making process and increase the likelihood of institutionalization and dissemination. Necessary information might include such factors as cost/benefit analysis, departmental buy-in, student attitudes, alumni support, etc.

WHAT ARE THE COALITIONS DOING?

During the summer meeting of the assessment teams of the six coalitions, it was found that the coalitions are using multiple assessment methods, both quantitative and qualitative. Quantitative techniques in use included standardized testing; analysis of course grades; compilation of data on retention, demographics, and graduation rates; and pre- and post-test differences on various assessments. Qualitative techniques included the analysis of student work portfolios, focus groups, interviews, faculty and student journals, and video analysis of student performance. However, even though all of these techniques were being employed at various schools within the coalitions, there were few coalition-wide plans for assessment of coalition goals. There also seemed to be a general lack of understanding of the distinction between program evaluation and student assessment, and of the process of forming performance indicators from project goals. This understanding is critical to the evaluation and assessment of the coalition program.

The process of program evaluation determines how well a program is doing in light of its objectives. The Coalition program has broad goals and objectives, which include reforming engineering curricula and identifying what graduates should be able to do as a result of participation. Student assessment, then, addresses what the students participating in the individual coalition projects have learned and whether or not they perform as expected upon graduation. In addition to assessing student performance, evaluation must be done to determine the quality of the new curricula (and/or courseware, delivery system, etc.) as well as the climate for institutionalization of the innovation and its dissemination beyond the Coalition.

In engineering education generally, there seems to be more interest in the "tools" of assessment than in the process of developing an appropriate assessment plan. That is, there is more interest in what methods are being used and how they are implemented than in whether the assessment method is appropriate and linked to the goals of the program. The choice of "tools" should be one consideration within the broader assessment planning strategy.

Although the assessment process itself is a "closed-loop" system, the assessment planning process is linear and unidirectional. It is first necessary to identify the goal(s) of the innovation; these are usually stated in broad, general terms. Once the goals are identified, objectives should be developed which identify the conditions, behavior, and minimum criteria that must be met. There may be several objectives for each goal. Once the objectives have been identified, one or more performance indicators need to be developed for each objective. A performance indicator is a specific statement identifying the measurable behavior or condition a person or group would exhibit; performance that meets or exceeds the expected level indicates that the objective has been achieved. Only after the above steps have been taken should the methods for collecting the data (the "tools") be identified. Because the methods and the data collected need to be linked to the criteria, the process of method selection should be integrated into the broader goal development process.
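
As a minimal sketch of this hierarchy (the Python class and field names below are our own illustrative choices, not part of the Coalition's plan), the planning chain from goals to tools can be represented as a simple nested data structure:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PerformanceIndicator:
        # A specific, measurable behavior or condition, the level that
        # counts as success, and the data-collection "tools" chosen last.
        statement: str
        criterion: str
        methods: List[str] = field(default_factory=list)

    @dataclass
    class Objective:
        # Conditions, behavior, and minimum criteria derived from a goal.
        description: str
        indicators: List[PerformanceIndicator] = field(default_factory=list)

    @dataclass
    class Goal:
        # A broad, general statement of intended outcome; one goal may
        # carry several objectives.
        statement: str
        objectives: List[Objective] = field(default_factory=list)

The nesting mirrors the unidirectional planning order: goals are stated first, and the data-collection methods appear only at the innermost level, after the criteria they must support.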

An Example From the Foundation Coalition:

The Foundation Coalition (FC) consists of seven institutions with unique experiences and resources: the University of Alabama, Arizona State University, Maricopa Community College District, Rose-Hulman Institute of Technology, Texas A&M University, Texas A&M University at Kingsville, and Texas Woman's University. These institutions represent a variety of settings: large research institutions; a majority-minority institution; a selective, private college of science and engineering; a college with a 3+2 program in engineering; and a large, multi-campus community college district.

The Foundation Coalition has as its central driving theme the creation of an enduring foundation for student development and life-long learning. The member institutions have made a commitment to fundamentally restructure the undergraduate curricula to produce graduates who are able to define problems, explore and synthesize diverse knowledge bases as they develop and evaluate alternative solutions, and select and implement the best solutions. In addition, there is a commitment to ensure that FC graduates know how to work and communicate effectively in teams and to apply appropriate technology for data collection and reduction, analysis, design, and communication.

Goals for Student Learning:

The Foundation Coalition has established six primary outcome goals for student learning. They are:

  1. Increased appreciation and motivation for life-long learning;
  2. Increased ability to participate in effective teams;
  3. Effective oral, written, graphical, and visual communication skills;
  4. Improved ability to appropriately apply the fundamentals of mathematics and the sciences;
  5. Increased capability to integrate knowledge from different disciplines to define problems, develop and evaluate alternative solutions, and specify appropriate solutions;
  6. Increased flexibility and competence in using modern technology effectively for analysis, design, and communication.

Each of these student outcome goals has been defined by performance criteria, with appropriate tools identified. An example is shown below:

Goal: Increased ability to be an effective team member
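
As a purely hypothetical sketch of how such a goal decomposes, using the data classes sketched earlier (the objective, indicator, criterion, and methods below are illustrative assumptions, not the Coalition's published criteria; only the goal statement comes from the plan):

    # Hypothetical illustration only: the objective, indicator, criterion,
    # and methods below are assumed for the sake of example, not taken from
    # the Foundation Coalition's actual assessment plan.
    teaming = Goal(
        statement="Increased ability to be an effective team member",
        objectives=[
            Objective(
                description="Students contribute constructively to team projects",
                indicators=[
                    PerformanceIndicator(
                        statement="Peers rate the student's contribution to the team",
                        criterion="Mean peer rating of at least 4 on a 5-point scale",
                        methods=["peer evaluation forms", "focus groups"],
                    ),
                ],
            ),
        ],
    )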

Complexity of Assessment Plan:

Assessment of multi-site projects presents a number of challenges and opportunities. The differences in the size and type of the institutions make it possible to determine the effects of innovation in a variety of contexts. However, the diversity of institutions also makes interpretation of Coalition-wide results difficult. Although there are common goals and assessment methods, the curriculum development and implementation stages and processes vary from campus to campus. Determining cause-and-effect relationships is difficult at a given campus; this challenge is compounded among diverse campuses.

Among the institutions there are a variety of programs to be assessed, and, by the end of the funding period, a multiplicity of curricular programs (first-year, sophomore, junior, and senior) will be taught on each campus. In addition, the commitment to formative or "closed-loop" assessment implies a changing, evolving assessment model: each program is being modified on a continuous basis as the project evolves. Faced with the complex task of assessing the activities of the Foundation Coalition, the Assessment and Evaluation (A&E) team has chosen to generate its assessment strategy guided by two principles: 1) recognition of the diversity and autonomy of the institutions comprising the Foundation Coalition; and 2) recognition of the need for shared assessment goals and criteria, and comparable methods, across the Coalition.

The overall assessment plan is enhanced by input from assessment team members across a spectrum of institutions. Much learning has taken place within the team as assessment plans have been developed. In addition, working in sub-teams to develop assessment processes has lessened the overall workload of the individual campuses. A&E team members are seen as resources by others on the team.

The Foundation Coalition is coordinated using continuous improvement and teaming principles. Team members have participated in team and facilitator training and have adopted management styles consistent with quality principles and continuous improvement. Each campus has an interactive video system which allows the teams to hold face-to-face meetings weekly, thereby improving the quality of ongoing communication among the geographically dispersed institutions.

During year five of the Foundation Coalition, common summative assessment processes will be used across all campuses involved in the Coalition. The A&E team has been working to develop these processes to maximize the ability to report on the impact of the FC efforts on a diverse group of campuses.

Summary:

Assessment and evaluation of Coalition activities are central to systemic reform. They provide information for project developers to improve the project, evidence of the efficacy of the project, and needed information enabling key stakeholders to make determinations about institutionalization and dissemination of the project.

Systemic reform will only happen when faculty believe that the proposed changes will promote learning and will not be adverse to their academic agenda; when administrators believe that institutionalization of the innovation will have a cost/benefit ratio which is acceptable to the campus community; when other institutions believe that they can adopt the innovation and it will add value to their program with benefits that outweigh the costs; and when funding agencies find that seed money for reform has taken root and is producing the desired results, thereby building confidence in continued investment in innovation.

There is much work to be done within and among the Coalitions. The evaluation and assessment of complex entities such as Coalitions is not easy. However, it is vital to the process of systemic reform. To quote H.L. Mencken, "For every complex question there is a simple answer... and it's wrong!" Working together, we believe we can get it right!
