“Practical value” of a research project: what does it really mean?

Alex A. Renoire
9 min read · Mar 22, 2018
“Give me a grant, please”

Many scientists prefer to believe that they have nothing to do with politics. In reality, the grant application review process goes hand in hand with politics. The commission’s objective is to evaluate which of the proposed projects would benefit society the most. The choice is often difficult, but luckily there is a set of criteria that we can and should rely upon.

Daniil Granin’s novel “Going Inside a Storm” describes the life of scientists in an electrical field physics research laboratory. Although the book was written half a century ago, the story is oddly modern: science and pseudoscience compete for a research grant. One of the protagonists, the physicist Dankevich, director of the research institute, is slowly, steadily and carefully conducting research on thunderstorms. But at some point his experiments fall behind schedule and yield only negative results. Meanwhile, academician Denisov, a social climber, presents a research plan that would allegedly allow him to gain control over thunderstorms and thus increase agricultural yields. The two men of science stand before the commission’s judgement. A man in the audience asks Dankevich:

— Here, Professor Denisov promises to deliver results critical to the nation’s economy in the shortest possible time. Apart from theorizing, what results can you present?

— Just negative results, but they are important to science, too.

— And tell us, when are you going to achieve any positive results? After all, we have a planned economy, we cannot let people’s money go down the drain!

— Science cannot be rushed. It is impossible to foresee the outcome of a research project.

After further discussion, the social climber gets the money and blows it on various “conferences and delegations”, while Dankevich’s institute is forced to seek other funding sources. Part of the team quits. “Real science” loses. The moral of the book: science is something from beyond the clouds, obscure to mere mortals; the crowd should admire the scientists, and scientific issues cannot be resolved by a vote, or no good will come of it.

A scene from the film based on Granin’s novel “Going Inside a Storm”.

But, contrary to this moral, “Western” science is governed by exactly these principles: the vote and practical value. The difference between the exaggerated picture in the film and the actual situation is that in real life, members of the commission usually realize that for many crucial studies it is virtually impossible to state unequivocally that “now we are going to invent the wheel, use it to build carts on which you may carry goods, and thereby make the trading cycle three times faster.” The real practical value of research is difficult to ascertain; it requires an accurate and adaptable definition of “practical value”. The good news is that the academic community has already proposed several techniques to better evaluate the potential benefits of research.

But why on earth would the ordinary reader need to know how grant funds are allocated? The fact is that it is the taxpayers’ money that governments spend on science, and governments want science to help people live better lives. That this money is taken out of our wages makes this a matter of self-interest for us, and a reason to ask ourselves: “Where has my money gone? What was the benefit?”

Where did the requirement for practical value come from?

Previously it was believed that any scientific progress was good by default. This opinion gradually evolved: over the course of the twentieth century, the number of universities increased, and all of them needed government funding. The national budget is not infinite, so a need arose to select which research projects (and universities) to allocate funds to. For example, if we have an agriculture-based economy, it makes sense to invest money in “thunderstorm control” in order to increase agricultural output. If there is a war going on, we need to build better tanks. If we live in a post-industrial economy, we should focus on information technology. In a perfect world, when a government benefits from research, the citizens benefit too, since it is the people’s money that funds the research, and the benefits of that research should therefore redound to them.

Interestingly, in the old days practical value was often evaluated on a hunch; an explicit definition of “practical value” is a relatively recent innovation. In Britain before the 1990s, for example, no requirements for practical value had been established; applicants simply said that “the research will provide general good” or “social benefit”. In 1993 there was a political and rhetorical shift, accompanied by a growing sense that science should have a direct social impact. It seems quite natural that years of recession in the country’s economy, along with rising inflation and unemployment, preceded that shift. Given those realities, it became necessary to make decisions on allocating public funds to science more stringently.

In Russia, according to the historian of science Mikhail Sokolov, the thesis section outlining “practical value” first appeared as a requirement in the 1930s, when PhD programs were reintroduced after being cancelled in 1918 (the Decree of October 1st, 1918, had abolished academic degrees and titles; they were not reinstated until 1934). And we can see why: the official ideology placed great emphasis on the national economy, so everything was supposed to benefit it. These days the “practical value” section in graduates’ theses looks like an homage to good old Soviet-era science, but it has not always been a ridiculous formality; it made a lot of sense back in the day.

What are the criteria for practical value?

Many countries have their own frameworks for evaluating the quality of scientific projects and allocating grants. In Britain, for example, it is the Research Excellence Framework, adopted in 2014. There was an earlier framework, but its grant allocation procedure had a flaw that created many non-scientific incentives for university-based work, so it was replaced. The Netherlands uses Standard Evaluation Protocols, which are replaced every six years; the procedure currently in use is called SEP 2015–2021.

These and other evaluation frameworks are based on certain quality criteria, including SIAMPI (Social Impact Assessment Methods) and REPP (Research Embedment and Performance Profile). Such criteria try to capture benefit along many dimensions (of which the number is potentially unlimited). One criterion might be the economic value of a project’s results, that is, how the project might benefit national economic development. Another might be: how readily can theory be turned into practice? The two are interrelated: if we come up with a supercool technology but there is no production base for its implementation, its economic value is close to zero. There is also environmental value (we care about our planet and want it to retain the features thanks to which humanity still exists here); social value (impact on the social environment), i.e. how much a project contributes to the dissemination of humanitarian values; and cultural value, i.e. what a project adds to a country’s cultural wealth. For the evaluators, all these values exist within an individual nation state: benefit to the national ecology, national culture, national economy.
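To make this concrete, here is a minimal sketch of what such a multi-dimensional “value profile” might look like in code. The dimension names and the numeric scale are illustrative assumptions of mine; neither SIAMPI nor REPP prescribes this exact structure.

```python
from dataclasses import dataclass, fields

# A minimal sketch of a multi-dimensional "value profile" for one project.
# The dimensions mirror the criteria discussed above; the scores are made up.
@dataclass
class ValueProfile:
    economic: float       # benefit to national economic development
    applicability: float  # how readily theory turns into practice
    environmental: float  # benefit to the national ecology
    social: float         # contribution to humanitarian values
    cultural: float       # addition to the country's cultural wealth

    def as_dict(self) -> dict[str, float]:
        return {f.name: getattr(self, f.name) for f in fields(self)}

# A "supercool technology" with no production base: high economic promise
# on paper, but near-zero applicability drags the real value down.
project = ValueProfile(economic=7, applicability=1, environmental=5, social=6, cultural=4)
print(project.as_dict())
```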

What happens after we have defined our evaluation criteria? We have to establish the relationships between them. What if ten points on one criterion correlate with minus ten on another? What if the choice is between equal scores on different scales? The answer is to rely on common sense, or, in the worst case, to pick randomly. Let’s look at medical research projects and place ourselves in the shoes of a jury judging their merits. There are two projects; both are written brilliantly, both are immaculate. One aims to find a cure for a very rare children’s disease, the other deals with a more commonly diagnosed illness. Which one to pick? The second seems to provide more practical value. But if every institute follows that principle, a cure might never be found for the less common disease, and that is no good. Another dilemma: a study might conclude that a polluting factory must be shut down immediately, or else people in its vicinity will develop health problems within fifty years. In the short term, shutting down the factory is economically disadvantageous; but if, in fifty years, people get sick and are unable to work, that will harm the economy as well.
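One common way to relate the criteria is a weighted sum, and the sketch below shows why it does not dissolve these dilemmas. The weights and scores are hypothetical numbers of my own, not taken from any real framework: two very different projects can land on exactly the same total, and the tie still has to be broken by common sense.

```python
# A hypothetical aggregation step: collapse per-criterion scores into one
# number with a weighted sum. The weights are assumptions a real commission
# would have to argue over; no framework prescribes these values.
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[name] * value for name, value in scores.items())

weights = {"economic": 0.4, "social": 0.3, "environmental": 0.3}

# Ten points on one scale against minus ten on another, as in the factory
# dilemma above: the totals come out identical, yet the profiles could
# hardly differ more.
shut_factory = {"economic": -10, "social": 5, "environmental": 10}
keep_factory = {"economic": 5, "social": -5, "environmental": 0}

print(weighted_score(shut_factory, weights))  # -4.0 + 1.5 + 3.0 = 0.5
print(weighted_score(keep_factory, weights))  #  2.0 - 1.5 + 0.0 = 0.5
```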

It is often the case that the long-term practical value of a study is hard to define. For example, no one ever thought that the study of apoptosis (the process of programmed cell death) might have any practical value; it was believed to belong purely to theoretical medicine. But now we may search for cures for ageing and cancer that work by harnessing apoptosis. So if we cut off funding for a field of science whose long-term benefit we cannot predict, we risk slowing down progress.

A visualization of a research evaluation under the Research Embedment and Performance Profile (REPP) framework. The axes represent various parameters of benefit.

How can we evaluate the practical value of a research project that has not yet begun?

It is more or less clear how to calculate the benefit of a completed project. It is much less clear how to evaluate the potential benefits of a project still at the draft stage. Still, we have to choose somehow what to fund and what not to fund.

There are two methods, which often complement each other: the case study and peer review. The first is when we take a successfully completed project and measure how closely the approach used in it matches the proposed one. More specifically, we look at the completed study: what kind of people took part in it, from what fields, and how they collaborated. Then, among the proposed projects, we choose the one most similar to it. This approach is useful when it is hard to foresee the potential benefits of a study directly, for example when doing so would require time and resources that we lack. Here is an example of the case study method at work: many projects that delivered real benefit had a wide network of people involved. Conferences were organized, panel discussions took place, work was coordinated with industry representatives. One may conclude that collaboration of this kind is one of the keys to a study’s success. Many commissions share this view, which is why some researchers tend to organize more conferences and round tables, in order to show that “we’ve got collaboration here, which means we’ll produce practical value, which means you should extend the funding.” Sadly, no two projects are alike, and it is often by chance or coincidence that a project succeeds, so comparison is a rather poor indicator of value.

The other approach is for the commission to review a research proposal against the various value criteria. That is, appointed individuals write reviews of the project, trying to pinpoint its strengths and weaknesses. Several reviewers are assigned to each project, and each reviewer reviews several projects. This means that for 50 projects applying for a grant, 25 reviewers should be employed, each writing 10 reviews, in order to get five reviews per project. This winds up being quite costly.
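The arithmetic behind that last claim is easy to sketch. Below is a toy calculation using the article’s example numbers (50 projects, five reviews per project, ten reviews per reviewer); the round-robin assignment is an illustrative assumption of mine, not a description of any commission’s actual procedure.

```python
import math
from itertools import cycle

# How many reviewers do we need to give every project its quota of reviews
# without overloading anyone?
def reviewers_needed(projects: int, reviews_per_project: int, reviews_per_reviewer: int) -> int:
    return math.ceil(projects * reviews_per_project / reviews_per_reviewer)

# Round-robin assignment: each project receives `reviews_per_project`
# consecutive reviewers from a repeating pool (distinct reviewers per
# project as long as reviews_per_project <= len(reviewers)).
def assign(projects: list[str], reviewers: list[str], reviews_per_project: int) -> dict[str, list[str]]:
    pool = cycle(reviewers)
    return {p: [next(pool) for _ in range(reviews_per_project)] for p in projects}

print(reviewers_needed(projects=50, reviews_per_project=5, reviews_per_reviewer=10))  # 25

projects = [f"P{i}" for i in range(1, 51)]
reviewers = [f"R{i}" for i in range(1, 26)]
plan = assign(projects, reviewers, reviews_per_project=5)
print(plan["P1"])  # ['R1', 'R2', 'R3', 'R4', 'R5']
```

Each of the 25 reviewers ends up with exactly 50 × 5 / 25 = 10 reviews, which is precisely why the cost of peer review grows so quickly with the number of applications.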

And what is the bottom line?

Professor Dankevich, the honest scientist from Granin’s novel, would be horrified to see today’s universities: topics are decided by a vote, and centrally planned management is everywhere (see, for example, Horizon 2020, the European Commission’s seven-year program for research and technological development, or the Dutch six-year protocol mentioned earlier in this article). I suspect some readers are not thrilled, either, that science has been put under this pressure, because in these circumstances science is inevitably linked to politics and serves the needs of governments (though many scientists prefer to believe that their work has nothing to do with politics). We see it in practice: the Large Hadron Collider was built in Switzerland, not in Texas, owing to the difference in national policies towards “practical value”.

We might even ask: is it good or bad that science depends on politics? But a question like that would be disconnected from reality. Better to ask ourselves: can we make science more self-reliant now, and if so, how? And what would the consequences be? The academic community is certainly in a tough situation that needs relieving, and such relief can only come once these questions have been answered.
