Academic absurd cases? Looking for candidates with a minimum h-index

Although the h-index is well known, I didn't know it could actually be put to use as a minimum requirement for certain academic positions; something I had never seen before, and I have reviewed academic job posts on the internet from universities all over the world for years!

To illustrate this, I recently received the following academic job offer:

  • “From the Department of International Relations we share with you this call for professors of sociology, economics and history interested in pursuing an academic stay at Saint Petersburg State Polytechnical University (Russian Federation). Candidates should have at least 2 points in the Hirsch index”.

The h-index is, for a given researcher, the number h of publications that are each cited at least h times in academic journals. For example, my h-index is 2 (Finance) because I have 2 papers cited 2 or more times each; the rest of my publications have at most 1 citation each. To reach an h-index of 3, some of those other papers would have to gain citations, so that I had 3 articles cited at least 3 times each.
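As a quick sketch, the definition above translates into a few lines of code (a minimal illustration only; the function name and the sample citation counts are my own invention):

```python
def h_index(citations):
    """Return the largest h such that h publications have >= h citations each."""
    # Sort citation counts from highest to lowest.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# A researcher with papers cited 4, 2, 1 and 0 times has h-index 2,
# matching the example in the text.
print(h_index([4, 2, 1, 0]))  # 2
print(h_index([3, 3, 3]))     # 3
```

Note that adding more papers with few citations never raises the index; only accumulating citations on the top papers does.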

But this convoluted metric seems to work, at least within the same field of knowledge. So the pirates who wrote the announcement knew what they were doing: I guess it spares them from reviewing and/or analyzing (with love and care) a lot of applications, such as those from candidates who don't understand the h-index, who don't maintain it, or who have no citations, among others. Why waste time browsing hundreds of academic CVs when they have the miraculous h-index?

Moving on, Google Scholar calculates your h-index for you, though you can also work it out yourself; in that case you have to document and demonstrate it, indicating journals, dates, articles, authors, etc. The weak point is that Google Scholar also counts self-citations (should they be included in the h-index?), as well as citations from non-indexed journals, books, and other publications. But it's what we have.

Therefore, apart from the overwhelming logic of using the h-index to select researchers, don't you think it is outrageously absurd? You can imagine the working environment and pressure at a place that demands an h-index of its professors/researchers … but it's up to you and your needs.

Poll results on quality of research: Journals 3-2 H-Index

What a surprise! There have been fewer responses than in other polls; I thought there was more interest in this topic, but the results are clear on quality of research: Journals 3-2 H-Index.

Is Google Scholar a good indicator of your quality of research activity and influence?


* The poll was posted in October 2013 in many academic discussion groups. Around 470 answers were collected.

Although Google Scholar is open and reliable because it treats scholars equally, it is not considered a good indicator of the quality of research activity and influence. That is hard to digest, because I had hopes for it. So I can guess what you think of Altmetric, which is based on an even broader idea of impact, not just academic output…

The reasons for these results are implicit in the survey: if, on the one hand, these metrics provide useful public information, on the other hand I understand that, in general, h-index metrics face the following barriers:

  1. They add pressure on researchers.
  2. No organization seems to be taking them seriously.
  3. Not many people use them, because their citations look poor compared with those of some champions in the sciences, and because it is yet another annoying tool to take care of.
  4. And mainly because journals are still considered a better indicator of quality of research.

Well okay, I get it: we'll look at individual metrics, but focus on publishing in indexed journals. I don't think this will change much in the medium term; peer review will remain the king of research quality assessment, and that makes perfect sense.

For journals, on the other hand, these results are a triumph and a shot of adrenaline. Their road ahead is to get indexed in well-known databases and to embrace open access, so that authors can be cited and have an impact, isn't it?

I’m not stupid when publishing in journals

Last week I gave a lecture (Professors and their publications: war techniques in the web 3.0 environment) at a private university on scientific publications, within a seminar series on research. The lecture itself is not the purpose of this post, but one of the topics discussed raised particular interest.

It was the attitude of the most senior and more experienced professors towards everything related to the requirement of academic publications: they were quite critical of the current system of publications in indexed journals for assessing the quality of research, and by extension of academic accreditations and their impact on teaching skills.

But by now you know my position on the demand for publications from professors and scientists, which can be summarized in the Media Markt slogan “I'm not stupid”, applied here to publishing in academic journals and improving the research part of one's curriculum:

  • What matters is research activity, as a way to improve as a professor; of course you can be a good teacher without having a PhD or publishing your research in journals, although it's more difficult.
  • The quality of scientific research is currently assessed almost exclusively by the publication record in academic journals.
  • This system of publications in indexed journals (mainly ISI Web of Knowledge and Scopus) is the one we have, and it's pretty good, by the way; you just have to get to know it a little, without obsessing.
  • You also have to know how editors and journals work, their needs and objectives.
  • Finally, social networks, used wisely, can also improve the chances of publishing in journals and of making our papers known, and thus of obtaining citations.

The other related topic was the h-index, which I barely remembered because nobody seems to demand it at my university or in the academic job posts I see on the web. I will write about it soon: not so much about how it is calculated, which is well known, but about who calculates it, how I can obtain it, and its advantages and disadvantages for professors as an alternative way of assessing the quality of our research and our prestige.

And what about your academic career? Do you care about your research activity?

The future of scientific research dissemination: Liberalism back again

Last week was the presentation of my book ‘Publish in Journals 3.0’. The speakers included one of the foremost authorities in Spain on accreditation, the President of ACAP; the Director of the Corporate Finance Department at Universidad Complutense de Madrid; and the Library Director of the Faculty of Business and Economics, who brought their different views on the future of scientific research dissemination.

In the discussion that followed, two particularly relevant issues were addressed, which I found interesting to comment on here.

1. We wondered whether it makes sense for a centralized agency to evaluate professors and, in a way, tell universities which of them they may recruit.

  • It would be something similar to university admissions: there is now a centralized evaluation, to be replaced in the near future by one specific to each college, American style.
  • Accreditation agencies would then focus on certifying study programs rather than professors, whose evaluation would become a private matter for each university, part of its own quality standards and vision of teaching.
  • Many professors present at the event, myself included, were slightly perplexed, since we are working very hard on our accreditations, and because this new scenario would make things much harder given its foreseeable lack of transparency and uniform criteria.
  • But this change in evaluations doesn't mean we should stop taking care of our academic curriculum; on the contrary, the requirements will not be any lower.

2. The other interesting point is that the future of research-quality assessment probably lies not in the platform (journals, repositories, or even peer-reviewed books and conferences) but in the number of citations.

  • Although the results of the last poll I conducted on this topic reflected the opposite (it captured researchers' opinions about their current situation), in the future more emphasis will be placed on the citations obtained than on the relevance of the journals in which the research is published, though the two are closely interrelated.
  • Moreover, publishing papers in journals is not the only thing that measures the impact or quality of research; there are other important activities, such as patents and the transfer of knowledge to society through the creation of start-ups.

That is, the conclusion I draw is that the important thing is to do research; publishing is its result, not the goal, or the system becomes perverted. Either way, we professors can expect troubled times (you know: life is change, change is life), but not necessarily for the better, academically or for the future of society. It is the vision of radical liberalism that now prevails, I guess.

POLL: Do you bet your future as a researcher on Google Scholar metrics?

The journals' world is boiling: the Internet and Open Access are questioning the indexed-journal model, from anonymity in peer review to the selection criteria of directories and databases, and the calculation of journal impact indicators.

An alternative model for measuring the quality of research is the personal citation index (h-index), which open, accessible, public tools such as Google Scholar provide; there are also private metrics, available at a cost, such as Altmetric.

  • The change is driven by the fact that the indexed-journal model doesn't fully convince scientists, because of the power that some journals and private directories hold. But it is producing an undesirable effect: both models are now used to evaluate the quality of research, with the danger of drowning professors and scientists even more.
  • And paradoxically, journals are also under pressure in a way, because now they have to implicitly ‘promise’ that articles published with them will be cited more.

Google Scholar metrics are here to stay. What do you think? Do you bet on your citation appeal?

* You can choose 1 or 2 answers.
** Comments are highly encouraged.

Impact factor (I). The “Golden calf”

There is little more to write that has not already been written about impact factors and journal indexation systems.

Anyway, we still have some questions: why is scientific research evaluated this way? Is there only one kind of impact factor? Who is calculating them? What is the relationship between impact factors and journal databases? Could I do something to enhance the impact of my publications?

Let us make a list of four bullets about this discussion:

1. You have to publish your research work, but in indexed journals.

The inclusion of a journal in a relevant citation index or database, such as Web of Knowledge (Thomson Reuters), Scopus (Elsevier), EBSCOhost (EBSCO Publishing) or DOAJ (SPARC), ensures that your research activity, if published there, meets certain minimum quality requirements and has some impact on society.

And this is significant because universities use it to select candidates or to allocate research funds, though it is not the only criterion used, of course; they also take into account the other aspects of your scholarly curriculum, such as your teaching experience or your previous activity in research centers, among others.

2. There are many indicators to measure the impact of your research work.

The impact factor is a concept that emerged in the mid-twentieth century to help librarians categorize, through citations, the relevance of publications and manuscripts. It is calculated by taking the citations a journal's articles from the previous two years received in a given year, and dividing by the total number of articles the journal published in those two years. The h-index, on the other hand, measures the impact of the published work of an individual scientist, based on the number of citations their papers have received in other publications.
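The arithmetic is simple enough to show in a couple of lines (the journal and its figures below are hypothetical, invented purely for illustration):

```python
def impact_factor(citations, articles_prev_two_years):
    """Impact factor for year Y: citations received during year Y by the
    articles published in years Y-1 and Y-2, divided by the number of
    articles published in those two years."""
    return citations / articles_prev_two_years

# Hypothetical journal: 50 articles in year Y-2 and 40 in year Y-1,
# which together received 270 citations during year Y.
print(impact_factor(270, 50 + 40))  # 3.0
```

So an impact factor of 3.0 means that, on average, each article the journal published in the two preceding years was cited three times in the measurement year.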

The best known ones are the Impact Factor (for journals contained in the Web of Knowledge / Thomson Reuters) and the SJR (SCImago Journal Rank, for journals contained in Scopus / Elsevier). It is important to know that they belong to publishing companies, and that they measure the impact of the manuscripts published in the journals indexed in their own databases (a small percentage of all the journals available in the world), over a given period of time (two to three years).

Now search engines such as Google Scholar Metrics or CiteSeer (financed by Microsoft Research) provide another measure, freely available online, for authors to gauge the visibility and influence of recent articles in scholarly publications, covering all kinds of journals (indexed or not) and books.


* In the image, Ahmad Hariri, professor of Neuroscience (Duke University), a ‘Google Scholar’ champion who will be mentioned in the second part of this article.

** In the next article, we will discuss whether impact indicators are the right system for measuring the impact of research work, and how to use them to your advantage.
