The framework was introduced by the Conservative government to provide students with information about the teaching and learning experience on offer at university. This is the first time the performance of universities has been assessed in this way.
The awards cover undergraduate teaching, and universities in England that choose to participate in the TEF can also increase their fees in line with inflation; universities in the devolved nations can participate, but without the funding incentive. Judgements are based on three broad criteria: teaching quality, learning environment and student outcomes. Assessment used a core set of quantitative metrics, including student satisfaction and labour market destinations, which were then 'benchmarked' against similar institutions based on factors such as student intake and subject mix. Universities then submitted their own evidence, which was reviewed by an independent panel of sector leaders and experts. The TEF also sits on top of rigorous baseline quality standards that are required of all universities.
What do the results tell us?
The first notable result is that, of the 134 universities that took part, 32% achieved gold (43) versus 18% bronze (24). This is a broadly positive result for the sector, particularly when you bear in mind that this exercise sits above the quality requirements that all universities need to meet. There is also a mix of institutions in the different categories, with no pattern immediately jumping out from an initial reading of the results. Institutions receiving gold range from research-intensive universities all the way through to some of the very newest teaching-focused universities, a pattern that is replicated in silver and bronze.
There are some patterns, however. The first is a slight correlation between a higher entry tariff and a gold rating. This is also likely to have a bearing on any relationship between TEF scores and the social and ethnic background of a university's student population, and on other factors such as part-time study, living at home or recruitment from the local area. The second is that London universities have had mixed fortunes, with the highest proportion of Bronze ratings (11 out of 33), well above the general trend. On the flip side, the East Midlands has 'won' the regional competition with 8 golds and 1 silver.
Is this what was expected?
One of the stated intentions of the TEF was to provide a different perspective on university performance, one that focuses on teaching. The three TEF categories are a less granular way of differentiating performance than linear rankings, but there are clearly some outliers in comparison to some of the league tables. There is, however, a correlation between the TEF and the Guardian newspaper's league table, for example, which uses much of the same data but does not use benchmarking, provider submissions or panel judgements.
What this top-line analysis doesn't yet tell us is what lies behind these judgements. For example, London universities often have poorer NSS results – though Londoners generally are less satisfied with life, as are people from Wolverhampton, so it isn't just students – but this hasn't applied to all London universities. Another potential dimension is the impact of regional labour markets on performance. Analysis of the panel judgements and the provider submissions themselves will be an important opportunity to understand the factors that have led to universities doing well.
This also highlights the question of how you define 'excellence' – is it comparison against 'similar' universities, the 'impact' a university has on its students, or an absolute measure across the whole sector? The correlation between tariff and TEF awards, as with the Longitudinal Education Outcomes (LEO) earnings data, highlights the interrelationship between institutional outcomes and the students that a university attracts. How these factors are balanced in a performance tool like the TEF, and what students need to know from it when making decisions, are all highly sensitive questions.
There are notable developments still in the pipeline, including pilots for subject-level assessments that will increase the size and cost of the TEF, and new metrics, such as the incorporation of the LEO data. The appointment of an independent reviewer of the TEF is an important opportunity to assess its effectiveness, given its potential impact on universities and students. The role of the TEF in the Office for Students' regulatory framework, and the role that the sector itself has in its future development, are major questions that will need to be addressed.
The TEF is meant to have consequences. It is intended to have reputational impacts for individual institutions that may affect the whole UK sector, to inform students, and to incentivise a competitive focus on teaching. Universities UK surveyed the teams involved in compiling university submissions: nearly 50% reported that the TEF had had some impact on decision-making, and 72% believed the TEF would raise the profile of teaching and learning. However, participating in the TEF represented a significant investment of staff time, and there was very little confidence that the TEF would accurately assess teaching and learning. This is clearly a concern for confidence in the system.
This is not to say that the TEF has no merit. Those institutions that have achieved gold ratings will be pleased with doing well and should be congratulated. Similarly, while there is debate about whether the TEF assesses teaching, student satisfaction and labour market destinations are important measures in their own right. As the first full iteration of the assessment, it is important that we build on this experience to ensure the TEF can make a positive contribution to teaching and learning in the sector and to informing students. Otherwise we risk unfairly undermining the reputation of the sector and diverting the energy and creativity of universities down blind alleys.