UAL designed a process to help secure buy-in from across our staff in the development of a range of non-financial metrics and KPIs to help better measure, value and improve KE performance. Examples include gathering qualitative feedback from partners (through surveys) and quantitative data on non-income-generating KE. These, like income, are only proxy data for the diverse types of value and impact that our KE work creates, but they better speak to the variety of activities that KE includes.
1. Please provide a brief description of the KE project/ case study and why you believe it is considered good practice or innovative (and for whom).
We’re still very early on our journey in gathering more diverse data on KE. UAL has a high level of KE activity, reflected in our HEIF allocation, and social purpose sits at the core of our institutional strategy, which makes us feel accountable for demonstrating the public and societal benefit of the work we do.
We consider the way we have adapted elements of our current system to be relatively innovative. When we developed our institutional KE strategy, we conducted a thorough bottom-up process and found that our staff were generally not motivated by the current income measures, as they didn’t really reflect their motivations for doing KE, or their sense of the value it creates.
We started a conversation and, through that process, we looked at theory of change models for the five different areas of our institutional strategy. We developed a set of measures that could be applied across all different strands of our KE activity and adapted our CRIS system (research database) to enable our academics to capture their KE activity. We also developed training programmes to support colleagues with this, resulting in increased uptake.
We are aiming to get to the point where we are able to consistently gather a robust set of non-financial data across the institution that is readily understood and can usefully supplement our reporting within the frameworks for UKRI and others.
2. Where did the idea for the project/ programme come from? Was this related to a strategic objective? How did you secure senior buy-in?
When we were developing our KE strategy, there was an appetite amongst our academic staff to develop more meaningful non-financial measures, coupled with a perceived interest from UKRI in developing alternative ways of measuring KE, so we decided to address this issue head on within our KE continuous improvement programme.
Since submitting the KEC Action Plan, UAL has published a 10-year institutional strategy, in which our new Vice-Chancellor has foregrounded the University’s social purpose. As a consequence, the next stage for our development of non-financial metrics for KE is likely to be incorporated into a broader pan-UAL effort to develop an effective social purpose measurement framework, which we hope will have wider sector application and transferability. This means this agenda is fully owned at the executive level, giving the whole programme of work more visibility and urgency.
3. What impact/ outcome has this project/ activity had on your university? Students? Local economy? Staff? Other external parties, e.g. businesses.
Creative practice is a big part of our institutional mission and is seen as a natural part of our academic practice, not as an add on. As such, it has been important for us to give KE equal standing with teaching and research within our career development pathways, enabling academics to progress to reader or professor in KE.
Reframing KE within our university frameworks and processes as a form of social enterprise, and aligning it to our institutional mission as a social purpose organisation, has also helped to make our measurements and KPIs more meaningful in relation to our academic staff’s motivations. As we start to develop a better evidence base, we are getting better at designing activities that have more impact, and we are gaining a clearer idea of what works.
5. How did you measure impact?
We commissioned a bespoke training session in theory of change-based impact evaluation methods, with a particular focus on non-economic impact indicators. Through this session, we developed a set of good practices and encouraged our staff to capture that information on our CRIS system. In addition, we commissioned the Social Design Institute to help our academics develop evaluation methods that considered broader types of value creation. Keeping up the momentum of all of that is the real challenge.
6. What types of resources were required to implement this project?
We have needed little additional spend to get us to where we are now, as it was important that this work become embedded within business-as-usual, rather than becoming a new additional activity or resource. We already had a number of academic leads who were relatively high profile in this space, and they have been able to lend their time and thinking towards this. We did spend some additional money on developing software to underpin our new model. But mostly, it has been about people’s time in reviewing the frameworks, testing them and feeding back, and that is an ongoing process. We also spent resources on commissioning an external consultant to develop a two-day training programme, and there is now a repeat fee for running it periodically.
With the development of a more institution-wide approach to measuring the impact of our social purpose, it is likely we will now also invest in more dedicated additional resources to support implementation beyond KE.
7. What are the governance structures in place to oversee it?
This whole programme is owned by the Deputy Vice-Chancellor for Research, KE and Enterprise, who also works closely with our Chief Social Purpose Officer. There will be a social purpose advisory group that drives this work, which will include academics, professional services staff and external members.
8. What sorts of partners have you engaged in the impact assessment process that were not obvious at the outset? And how?
A lot of our project activities are co-designed with our external partners, which helps us to design our project evaluations inclusively. Increasingly, we look at how we use equality impact assessments at the start of our projects. This is embedded in some places and a work in progress in others.
In approaching the development of new metrics for KE at UAL, one of our main areas of focus was the involvement of our students and graduates in KE. This is one of our institutional strengths and a very distinctive characteristic of our activity, but it is not currently represented in any of the HE-BCI/KEF data.
9. Describe any challenges that you have had to overcome either before, during or after implementing this project?
There is always the challenge of producing a metrics model that is administratively low burden but which gets you some way towards more meaningful measurement. It’s inevitable that you will disappoint people who are experts in different evaluative methods, but it’s important to come up with solutions that are absorbable within the institution’s operations and culture.
We also struggle to ensure that measurement indicators have a relevance and applicability that is at least at programme level (not just project level), so they can be meaningfully and consistently used for all activity directed towards specific thematic objectives. This is so that our measurements have a durability and transferability that will enable us to build a longitudinal and cumulative picture of performance and value creation, not just fragmented project-level data.
This would have been a lot harder without senior-level buy-in, and convincing staff to capture all their interactions within our CRIS remains a challenge.
10. Next steps?
We’d like to be doing more of the same thing, but more comprehensively and consistently. The key learning was that, to get something meaningful that works at institutional level, you need common measures. Otherwise, you can run excellent bespoke project-based evaluations, but they create no cumulative evidence base. You need to be able to identify measures that work at an institutional level. We started by developing a set of impacts we were trying to achieve at the institutional level and then worked backwards, and this should enable us to make better project development, data gathering and investment decisions in the future.