The road to hell is paved with good intentions, or so the saying goes. The National Student Survey, or NSS, is certainly an instrument born of the best intentions. Its primary aims are:
– enabling students (and their parents) to compare universities and make an informed choice of both course and institution;
– giving feedback to universities, leading to an overall improvement of the student learning experience within institutions and across the higher education sector as a whole.
How is this done?
The NSS is an independent national survey of around 22 questions, answered mostly by final-year undergraduate students at Higher Education Institutions (HEIs), Further Education Institutions/Colleges (FEIs/FECs) and similar bodies across the UK. Collating these responses produces the annual NSS results, which are eagerly awaited by students, parents, institutions and the media.
Both students and institutions want information on performance relating to student satisfaction: the former to make choices about where to study, the latter to improve performance, strategise and market themselves effectively. On one hand, the survey influences the lives of thousands of students as they base their future study decisions on it. On the other, the standing and financial health of educational institutions (and the sector) can be heavily influenced by it, especially after the rise in tuition fees that followed the Browne report.
It’s fair to say that the NSS plays a vital role in the higher education market. For this reason we can’t afford for it to be less than perfect. Metrics and feedback are an extremely important way to gauge the performance of institutions. Therefore, in principle, the NSS should be an ideal way to meet the needs set out above and listed on the NSS website.
This leads to two questions:
1. Can students use the NSS to reliably compare very different institutions offering similar courses?
2. Does the NSS, as currently implemented, adequately measure student satisfaction?
The first concern is more fundamental than the second, but both require a considerable overhaul of the way that we think about course, programme and university evaluation.
Let’s take them in turn.
Can students use the NSS to reliably compare very different institutions offering similar courses?
Student satisfaction is an important metric for evaluating a course, and it should be taken seriously. It is also an important metric for evaluating the teaching performance of an institution.
However, it’s not at all obvious that very different institutions (for example, a former polytechnic or small college and a Russell Group university) can be compared on quality for the same course. In our example, the former may have staff focused on teaching and offer more student-teacher contact time, while the latter may have internationally leading researchers who can impart knowledge at the cutting edge. The former may be based in a remote part of the UK, offering excellent access to the local student population and adding value to the local community, while the latter may attract an international student base, giving students a very cosmopolitan experience. Both institutions and both courses have value, and environmental factors may interact with course delivery in different ways. Can they really be compared with the NSS?
Let’s look at some data from the NSS 2013 results.
For the BSc in Economics, the scores for overall satisfaction with the course are:
This is factual NSS data. Can it help students compare the teaching of Economics or Physics at these institutions?
Another critical and basic factor is this: what constitutes “value” may be different, or have different weight, in different programmes: for instance, a course in BA (Hons) Natural Philosophy may have different aims than would a course in BSc (Hons) Forensic Investigation and Biosciences. Again, can the NSS be used to compare them and can it tell prospective students which course is better value?
We need to understand what these objectives are, before we can determine how best to measure a course’s success in meeting them, or even when best to measure it. It may well be the case that the true value of a course will only emerge when a student in some way puts the course to use at the workplace, or goes on to graduate school. What is the appropriate time-frame for such a question? Is the student in the best position to determine this while the course is going on, or just immediately after?
There are other data available that can go some of the way towards answering these other questions: placement rates of universities, the research profile of an institution and so on. Using student satisfaction to evaluate a course and compare this across institutions is akin to using one temperature reading to determine the climate across a continent.
Near-contemporaneous feedback is great, because it helps course administrators to identify and fix problems in the way that a course is managed and delivered to the next cohort. But this assumes that the survey delivers consistent, reliable results that can be used to inform policy. That may be a strong assumption.
Let’s look at the second question:
Does the NSS, as currently implemented, adequately measure student satisfaction?
The NSS asks a series of standard, multiple-choice questions on aspects of course delivery such as teacher engagement, assignment assessment and intellectual content of a course. For it to perform as intended, we require (at least) the following to be true:
The NSS questionnaire should cover all significant aspects of course content;
– Without a clear policy on what constitutes a “significant aspect” of course content, this is hard to judge. Does the statement “I find this course intellectually stimulating” cover “This course has acquainted me with the research frontier in this topic”, or “This course has given me a fresh perspective on this topic” or “This course has shown me the policy implications of this movement/discovery/technology”?
– Without the option for open-ended questions, or for students to add comments, can we build in systematic revisions to the questionnaire process?
– Most universities undertake internal surveys to gather student feedback on individual modules and courses, and also have face-to-face feedback mechanisms. In meetings, students raise their concerns and give valuable feedback: discussion allows staff to understand the issues properly, engage with students, reach agreements and make positive changes. The NSS allows for neither suggestions nor this deeper understanding.
If two institutions differ on NSS scores for the same course, is this entirely due to differences in course delivery between the two?
There are several factors (historical, environmental, cultural and economic) that are external to a course and interact in unpredictable ways with the way a course is delivered. Consider a course on the history of diplomatic relations with the Middle East: the identical course, in one case taught in a multiethnic environment, and in the other taught in a more homogeneous classroom. Are the two experiences in any way comparable?
If the scores for a course in an institution are different between years, is this entirely due to differences in course delivery?
The results are presented year by year. A small responding cohort produces far noisier scores than a large one, so an institution's NSS score can fluctuate significantly from one year to the next simply because of the number of respondents, rather than because the underlying teaching quality has changed. This is only partially explained in the fine print and may not be obvious to people using the survey. Although results for each year are available, a more meaningful measure, such as a multi-year average or trend, could be reported.
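A rough back-of-the-envelope sketch of this effect, treating each student's overall-satisfaction answer as an independent yes/no response (a deliberate simplification; the 85% "true" satisfaction figure and the cohort sizes are hypothetical):

```python
import math

def standard_error_pct(p, n):
    """Standard error, in percentage points, of an observed satisfaction
    rate when n students each answer yes/no independently, with true
    probability p of being satisfied."""
    return 100 * math.sqrt(p * (1 - p) / n)

# Hypothetical course whose "true" satisfaction is 85%:
for n in (20, 50, 400):
    se = standard_error_pct(0.85, n)
    # Roughly 95% of cohorts fall within about two standard errors.
    print(f"cohort of {n:>3}: published score typically within "
          f"+/-{2 * se:.0f} points of 85%")
```

On these assumptions, a 20-student cohort can swing by around 16 percentage points between years through sampling noise alone, while a 400-student cohort stays within about 4, which is one reason averaging over several years would give a steadier picture.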
Feedback is essential for improving any system, and the NSS could be a hugely important tool for that. In its current form, however, it may be misleading us as much as leading us.