A Critique of Rick Hess’ Edu-Scholar Public Presence Rankings

Rick Hess just released the 2013 version of his Edu-scholar Public Presence Rankings. He claims that these rankings are “…designed to recognize those university-based academics who are contributing most substantially to public debates about K-12 and higher education. The rankings offer a useful, if imperfect, gauge of the public impact edu-scholars had in 2012.”

I’m not fond of “rankings,” generally, but I do think it’s important to gauge the “impact” that education scholars have on the field. I’ve written about the limits of traditional “impact factor” metrics and I’ve also written about how scholars ought to consider more modern means of knowledge dissemination. Currently, I’m working on a more comprehensive manuscript about modern scholarly communication specifically in the field of education.

In that context, I’ve long applauded Rick and his colleagues for the idea of public presence rankings; I value public presence for education scholars. That said, I think the implementation stinks. In other words, the approach Rick and his colleagues use is highly flawed. He is very careful and consistent about owning some level of error, but I can’t continue to watch my colleagues tout these rankings and pat themselves on the back when I see serious flaws in the methodology.

Hess does a decent job of describing the way the scores are computed (oh, and Rick, please ixnay the use of the word “rubric;” that ain’t no rubric, sir). My critique is as follows.

Coverage/Inclusion: Rick writes that “…this list is not intended to be exhaustive. There are many other faculty tackling education or education policy. Wednesday’s scores are for a prominent cross-section of faculty, from various disciplines, institutions, generations, and areas of inquiry, but they are not comprehensive.” I get that and appreciate how much work goes into generating this list. But, why do it half-assed? There are some very serious scholars not on the list who should be, and, for that, I think the rankings suffer. Where, for example, is Aaron Pallas? Dr. Pallas is a tremendous scholar who has REALLY taken advantage of various forms of new media to contribute to the educational policy arena. He has his own blog, and has written for Gotham Schools and contributed to the National Education Policy Center. Also, my colleague, Dr. Charol Shakeshaft, is widely viewed as a pioneer in the field of educational leadership and is very publicly active around the policy issue of educator sexual misconduct. Whenever a case of educator sexual misconduct is written up in mainstream media, it’s highly likely to include quotes or insight from Dr. Shakeshaft.

Also, including people like Tony Wagner doesn’t make a whole lot of sense. He may have a university affiliation, but as far as I know, he doesn’t teach there (or anywhere) regularly. He is freed up to write more. In fact, it would be interesting to note the teaching loads of those on the list. Folks who have a higher teaching load simply have less time to devote to scholarly expression.

Frankly, though, the coverage/inclusion issue is the least of my concerns. As for the metrics included in the formula…

Google Scholar Score: The first three metrics (Google Scholar Score, Book Points, and Highest Amazon Ranking) clearly privilege more senior scholars. I’d bet a decent amount of money that the Google Scholar score correlates pretty highly (and positively) with years in academia. The more time one spends in academia, the more articles one writes (for the most part) and the more time there is for folks to cite those articles. A number of folks on the list hit the cap of 50 on this score, and given that the highest total score is 172.9, we see how much weight it ultimately carries. If this is supposed to be a ranking of how present scholars are, I would urge Hess and his colleagues to consider weighting for currency. Consider, for example, the case of Nel Noddings. I am as much a fan of her work as anyone. But, 90 of her 97 points (placing her at #14 on the list) come from the first three categories. She is a prolific writer. Though she’s still writing and churning out books, she’s been retired from the professoriate since 1998 and is much better known as a philosopher than as someone directly involved in educational policy. I’d hardly consider her someone who is publicly present in educational policy debates these days.
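To make the “weighting for currency” suggestion a bit more concrete, here is a minimal sketch (in Python) of one way a currency-weighted citation score could work. The function name, the half-life decay scheme, and the example numbers are all my own inventions for illustration; this is not the formula Hess and his colleagues actually use.

```python
from datetime import date

def currency_weighted_citations(articles, half_life=5.0):
    """Hypothetical currency-weighted citation total.

    `articles` is a list of (publication_year, citation_count) pairs;
    `half_life` is the number of years over which an article's weight halves.
    The decay scheme is an illustration, not Hess' actual method.
    """
    current_year = date.today().year
    total = 0.0
    for year, citations in articles:
        age = max(current_year - year, 0)
        weight = 0.5 ** (age / half_life)  # older work counts for less
        total += citations * weight
    return total

# Two invented scholars: one whose citations attach to older work,
# one whose citations attach to recent work.
veteran = [(1995, 800), (2001, 450), (2005, 300)]
newcomer = [(2010, 120), (2011, 90), (2012, 60)]
print(round(currency_weighted_citations(veteran), 1))
print(round(currency_weighted_citations(newcomer), 1))
```

Under a scheme like this, a scholar whose citations are attached to recent work keeps something close to their raw count, while a scholar whose citations mostly attach to decades-old work sees that total discounted.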

The other major problem with the Google Scholar Score relates to the idea that these rankings are supposed to be an indication of “public” presence. “Public” is an important concept here. A scholar may have a very high Google Scholar score because they prolifically publish oft-cited articles in peer-reviewed journals. However, if those articles are in journals that are distributed by publishers and in databases that are behind paywalls, are they really, truly “public?” To whom are they really accessible? If I were developing a formula to measure “public” presence, I would give significant weight to those who make their scholarship as accessible to the public as possible. I would give greater weight to those who publish in open-access, peer-reviewed journals, place journal articles in open access scholarly repositories, post articles to a personal/professional website, etc.
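Again, just to illustrate the point, here is a minimal sketch of how accessibility could be folded into a score. The categories and multipliers below are made up for illustration; nothing like them appears in the actual formula.

```python
# Hypothetical accessibility multipliers; the categories and numbers are
# illustrative only, not part of Hess' rankings.
ACCESS_WEIGHTS = {
    "open_access_journal": 1.5,   # peer-reviewed and freely readable
    "open_repository": 1.25,      # preprint/postprint in an open repository
    "personal_website": 1.1,      # self-archived on the author's own site
    "paywalled_journal": 0.75,    # behind a publisher paywall
}

def accessibility_adjusted_score(publications):
    """`publications` is a list of (access_type, citation_count) pairs."""
    return sum(ACCESS_WEIGHTS.get(access, 1.0) * citations
               for access, citations in publications)

print(accessibility_adjusted_score([
    ("open_access_journal", 40),
    ("paywalled_journal", 100),
]))
```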

Book Points / Highest Amazon Ranking: I like that Amazon’s ranking privileges more recent books because, again, I think currency/recency should matter. But, again, these metrics favor more seasoned academics. They also favor historians, philosophers, and others more closely aligned with the “foundations of education” than those in ed. policy and/or ed. leadership.

Education Press Mentions / Blog Mentions / Newspaper Mentions: At first, I thought these shouldn’t be separate categories, largely because the lines between journalism and blogging are blurrier than ever. And, if we’re truly concerned with “public” presence and not just presence in the field of education, we should think about what really matters and weight accordingly. Mostly, though, I think these are reasonably good indicators to include in the formula. It’s noteworthy that the top four on the list all scored the maximum of 30 on Blog Mentions and Newspaper Mentions.

Congressional Record Rankings: I understand the idea here, but given how few of these folks got the 5 points on this indicator, I’m not sure it’s worth tracking. It’s also pretty unfortunate that so few prominent edu-scholars show up in the Congressional Record.

Klout scores: I’ve been fairly critical of Klout and other services attempting to measure “influence” through social media. They are fairly opaque, VERY easily gamed, and don’t take into account network theory and metrics from social network analysis that do a better job of gauging “influence.”  Also, I’d have the 4th highest Klout score of anyone on the list (I’m not on the list at all, BTW… my Klout score alone would put me in the top… 158!). But, I spend a lot of time using social media for developing and maintaining “personal” relationships, many of which are also professional. Any “influence” I’ve developed through social media by chatting about Duke basketball (#GoDuke) should probably be discounted. I think it’s important to include social media presence and activity in the public presence rankings formula, but I’m certain that Klout scores are not the best way to do that.
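For what it’s worth, here is a minimal sketch of the kind of network-analysis metric I have in mind, using the networkx library on a toy “who pays attention to whom” graph. The accounts, edges, and numbers are invented purely to illustrate measuring influence structurally rather than with an opaque proprietary score.

```python
import networkx as nx

# Toy directed attention graph (A -> B means A follows/retweets B);
# the accounts are entirely made up.
G = nx.DiGraph()
G.add_edges_from([
    ("reporter_a", "scholar_x"), ("reporter_b", "scholar_x"),
    ("scholar_y", "scholar_x"), ("scholar_x", "scholar_y"),
    ("parent_blog", "scholar_y"), ("reporter_a", "scholar_y"),
])

# PageRank-style centrality rewards attention from accounts that themselves
# receive attention -- a transparent, inspectable notion of "influence."
centrality = nx.pagerank(G)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(account, round(score, 3))
```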

Relatedly, Hess et al. do not include the maintenance of a scholarly blog in the formula. That’s a huge oversight. “Blog mentions” is included, but not blog posts. I’m a huge fan of the way folks like Bruce Baker and Sara Goldrick-Rab use their blogs to disseminate their knowledge and ideas. Bruce, in particular, is doing an INCREDIBLE job of informing the public about very complicated educational finance issues through his blog. He makes complicated matters very accessible through narrative and slick data visualization. Both are also very active on Twitter (Bruce and Sara), sharing their work and the work of others and engaging in lively debates on key educational policy matters. This MUST count if we are considering scholars’ public presence. Some of this may be reflected in their Klout scores, but I’ve already pointed out the weakness of that metric, and Hess et al. give it very little weight in the overall score. Bruce and Sara rank #40 and #60 respectively, but they should rank MUCH higher on this list, in my opinion.

Before I sum up, I also want to point out a problem inherent in relying on the search(es) necessary to generate these data. I don’t envy whoever did all of the grunt work of running these searches, but search is not as simple as plugging a name into Google. I, for example, am a particularly problematic case. Different results are generated for “Jon Becker” than for “Jonathan Becker” than for “Jonathan D. Becker.” Also, a search for @jonbecker yields some interesting results. There is some attention to this, as Hess writes that “[o]n a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name.” But, for each scholar, it should be made clear that all possible variants of the name were searched.
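As a simple illustration of what I mean, a search for a given scholar could be built from every known variant of the name rather than a single form. The snippet below is just a sketch using my own name; the query structure and the added “education” term are illustrative, not a description of how Hess’ team actually searched.

```python
# Illustrative only: build one query covering all known name variants.
variants = ['"Jon Becker"', '"Jonathan Becker"', '"Jonathan D. Becker"']
query = "(" + " OR ".join(variants) + ") education"
print(query)
# ("Jon Becker" OR "Jonathan Becker" OR "Jonathan D. Becker") education
```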

In sum, I like the idea of the edu-scholar public presence rankings. I don’t, however, like how they are done. Mainly, I think the rankings are not as inclusive as they could be in some cases and too inclusive in others; they privilege senior scholars (especially those who’ve written lots of books); they don’t privilege accessible scholarship; and they don’t give enough weight to scholarly blogging and Twitter activity.

There’s more to be said, and I hope you’ll weigh in in the comments and that Rick and his crew will consider this critique as they approach the 2014 edu-scholar public presence rankings.
