
Interpreting the Data

In advising faculty clients about how best to interpret ESCI results, Instructional Consultation frequently makes the following points. These are intended as general suggestions, and should not be taken as rigid or universal guidelines. For assistance in interpreting particular data, please contact a consultant at extension 2972.

1. ESCI results represent what your students say about the overall quality of your teaching and your course. This is an important and valuable source of information, but it should not be the only source or kind of information (see point 3 below).

2. Questions A and B are overall "summary" questions. There are two ways to look at the data:

  • How well students think you are doing in "absolute" terms, as defined by the "anchor points" of the scale: "Excellent," "Very Good," "Good," "Fair," "Poor."
  • How well students in your course think you are doing relative to what students say about other courses in the department and campus-wide.

3. Results of Questions A and B should be corroborated (or questioned) against other evidence about the course before you reach important conclusions. For ideas and discussion of other kinds of information, see a consultant.

4. Results of Questions A and B should not be over-interpreted. In general, we suggest that by themselves these results are accurate enough only to place instructors and courses into three broad categories: the truly outstanding, those with problems, and the vast majority in the middle.

5. To get the most useful information from the results, we suggest you look at the entire distributions, not just the means or medians. Results are always reported as percentage distributions, so it is easy to look at the results and see that, for example, "46% of my students rated my teaching as Excellent or Very Good." Looking at the entire distribution lets you see whether the responses are clustered in a couple of categories ("They loved it!"), spread across the categories ("Some loved it, some thought it was OK, and some hated it"), or perhaps even bimodal ("Some loved it and some hated it, but nobody was indifferent"). A brief sketch of examining a full distribution appears after this list. NOTE: It is not good practice to add together the means of different items within a survey, or of different courses, to arrive at an "average" rating of an instructor's teaching for a given course or across the instructor's courses.

6. When you use the comparison norms, your results will differ to some extent from those of your department during the current quarter, from your department over time, and from the campus over time. It is important to understand when those differences are educationally meaningful and when they are not. It is possible to use statistical procedures to compare two distributions, but the number of student respondents is often so large that any difference is statistically significant, even though it may not be educationally meaningful. As a rough rule of thumb for deciding when the differences between your results and the "norms" may be meaningful, we suggest the following:

  • If the percentage of students who rate your course in any particular category differs by about 10 percentage points from the percentage in a norm group, then there MAY be something educationally meaningful going on, and it's worth examining.

For example, suppose your department this quarter has 54% of students rating "the overall quality of the instructor's teaching" as "Excellent," and 64% of your students rate your teaching as "Excellent," a difference of 10 percentage points. Then you may be doing a better job in this course than the departmental average, and you could look at responses to other questions on the survey and to other sources to understand the differences, and to decide for yourself whether they are meaningful.

  • If for any response category the percentage differs by about 20 percentage points, then there's probably something meaningful going on, and it's definitely worth seeking further understanding.
  • If the percentage differs by about 30 percentage points, then there IS something meaningful going on.

These are not meant as hard-and-fast cutoff points; the overall message is that the larger the differences between two distributions, the greater the likelihood that the differences are meaningful, and therefore the more important it is to understand what's going on. We would definitely caution against reaching conclusions based on differences as small as 5 percentage points. A brief sketch of this rule of thumb also appears after this list.

7. Overall, teaching at UCSB is rated highly by our students: from Fall 2001 through Spring 2006, in 82% of courses, students have rated the overall quality of the instructor's teaching as "Excellent" or "Very Good", and the figure increases to 95% if one includes the "Good" category. This suggests, among other things, that with such a capable group of instructors it will only be the truly exceptional who stand out, and that those who receive average or typical ratings are competent and effective instructors who are doing well for students and for the University.
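
To illustrate point 5, here is a minimal sketch, in Python, of tabulating one item's full percentage distribution. A simple text bar makes clustering, spread, or bimodality visible at a glance. The category labels follow the ESCI anchor points, but the response counts are invented for illustration, not actual ESCI data.

    # A minimal sketch of inspecting a full response distribution rather
    # than collapsing it to a single mean or median. Counts are hypothetical.
    from collections import OrderedDict

    responses = OrderedDict([
        ("Excellent", 18),
        ("Very Good", 9),
        ("Good", 4),
        ("Fair", 3),
        ("Poor", 11),   # a second cluster at the low end suggests bimodality
    ])

    total = sum(responses.values())
    for category, count in responses.items():
        pct = 100.0 * count / total
        # A crude text bar: each '#' represents about 2% of respondents.
        print(f"{category:<10} {pct:5.1f}%  {'#' * round(pct / 2)}")

Run on these hypothetical counts, the bars show two clusters, at "Excellent" and at "Poor," which a single mean of about "Good" would hide entirely.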
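
And here is a minimal sketch of the rough 10/20/30 percentage-point rule of thumb in point 6, comparing a course's percentage distribution against a norm distribution category by category. All of the figures, including the "norm" percentages, are hypothetical; only the 64% versus 54% "Excellent" pair echoes the worked example above.

    # A minimal sketch of the rule of thumb described in point 6.
    # All percentages below are hypothetical, not actual ESCI norms.
    CATEGORIES = ["Excellent", "Very Good", "Good", "Fair", "Poor"]

    def flag_differences(course_pct, norm_pct):
        """Compare two percentage distributions category by category."""
        for cat in CATEGORIES:
            diff = course_pct[cat] - norm_pct[cat]
            if abs(diff) >= 30:
                verdict = "there IS something meaningful going on"
            elif abs(diff) >= 20:
                verdict = "probably meaningful; seek further understanding"
            elif abs(diff) >= 10:
                verdict = "MAY be meaningful; worth examining"
            else:
                verdict = "too small to interpret on its own"
            print(f"{cat:<10} course {course_pct[cat]:3d}% vs norm "
                  f"{norm_pct[cat]:3d}% ({diff:+d} points): {verdict}")

    # Hypothetical distributions; "Excellent" matches the worked example.
    course = {"Excellent": 64, "Very Good": 20, "Good": 10, "Fair": 4, "Poor": 2}
    norm = {"Excellent": 54, "Very Good": 25, "Good": 13, "Fair": 5, "Poor": 3}
    flag_differences(course, norm)

As the caveat above stresses, such flags only mark categories worth examining; they are a prompt to look at other questions and other evidence, not a verdict in themselves.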


Communicating ESCI Results to Others

If you would like to share information from your ESCI surveys with others, we suggest that you consider the following:

The most complete presentation would, of course, be to share copies of all printouts. If that is not feasible or appropriate, then it is possible to summarize or condense the information in many different ways. Because of the complexity of such options, we suggest consulting with OIC staff, who will be happy to help you.

Please be aware that we have a bias toward providing readers of summarized results with entire percentage distributions rather than just means and/or medians.

Consultation Contacts

George Michaels, Executive Director, 2130 Kerr Hall, 805-893-2378
Lisa Berry, Senior Instructional Consultant, 1130 Kerr Hall, 805-893-8395
Mindy Colin, Instructional Consultant, 1130 Kerr Hall, 805-893-2828
Mary Lou Ramos, Database and ESCI Administrator, 1130 Kerr Hall, 805-893-3523
Aisha Wedlaw, ESCI Assistant, 1124 Kerr Hall, 805-893-4278
Sarah Koepke, Office Manager, 1130 Kerr Hall, 805-893-2972
Fax: 805-893-5915