Reddit Science and the impact factor of doom
So Reddit is pretty awesome for science communication, in my experience. It’s an enormous network of potential audiences to reach with new research, and things like Reddit AMAs allow really cool engagement directly with research and researchers. Some researchers have had pretty poor experiences with Reddit, as we might expect. After all, Reddit is just a condensed form of the Web, and we should expect a range of experiences as a result.
But today something new was revealed to me about Reddit as a science communication platform. The Science subreddit doesn’t seem to let you post research articles, or blog posts about them, unless the journal in which the original research was published has an Impact Factor of 1.5 or more.
Er, what now? Yeah, it’s as silly as it sounds. We all know the problems with the impact factor (if not, see here for a nice place to start), and now it seems that even the Science subreddit has adopted the poor practice of evaluating research articles based on it.
So how do I know this? Well, I wrote a piece about some of Tony Martin’s cool research on dinosaur footprints in Australia, which was published as part of a special volume. The article was peer reviewed as normal (see the Acknowledgements), and forms part of a highly specialised series for palaeontologists, and anyone else interested, I guess.
As I do with many blog posts, I posted this to the Science subreddit, as this is usually a nice way of generating traffic and discussion within the Reddit community. But then I got the following message:
Hi protohedgehog, your submission has been removed for the following reason(s)
It does not include references to new, peer-reviewed research. Please feel free to post it in our sister subreddit /r/EverythingScience.
If you feel this was done in error, or would like further clarification, please don’t hesitate to message the mods.
So I messaged the mods to let them know that this was peer reviewed research.
Hey there! My post did contain references to peer reviewed research. It is fully referenced at the bottom of the article itself, and the new paper it is referring to can be found here: https://museumvictoria.com.au/pages/381426/063-071_MMV74_Martin_2_WEB.pdf Cheers!
And the response.
Sorry “Memoirs of Museum Victoria” is a publication that’s too obscure to count. We have a cut-off at 1.5 impact factor.
*twitch*
You’re kidding, right? That is completely arbitrary, and you’re simply cutting off an enormous amount of awesome research for no reason.
Response.
It’s arbitrary, but lets us escape most predatory journals that publish whatever, for a range of different reasons.
Etc.
But that is clearly not what has happened in this case. I’m an experienced Redditor, scientist, and science writer, so here the policy fails. You also exclude fields in which attaining that sort of impact factor is impossible, even for the highest-ranked journals. Can I ask that this policy be reconsidered, and in this case waived, please?
The next response gnawed at me a little…
Sorry. But our policy stands firm here. As an experienced scientist, then you should be very well-acquainted with the idea of standards for communication among scientific forums and publications. We have them too.
Feel free to share this with /r/EverythingScience though.
My response to this mod was fairly straightforward.
Of course I understand what standards are. That doesn’t mean you can’t be flexible though. I understand that the impact factor is a useless metric in determining the quality of individual publications, and you should be able to evaluate this case based on the merit of the article itself and my subsequent work on it.
Another mod was a little more helpful, and offered this.
I’m taking a look to give a second opinion. I can’t seem to find any information about their peer review process. Most journals have this on their website. But all I see is that the museum undertakes its own research and publishes it in this journal. I found the information for authors but all it has is formatting instructions.
Right now I can’t find anything on their site that even says the journal is peer reviewed. Maybe I’m missing it, though. Can you point me to their peer review information?
This is fine. If the journal is lacking this information, then that’s a good sign that it perhaps needs to work on its website a bit, to make sure this sort of necessary information is publicly available and clear.
Hm, thanks for looking into it. I don’t know the website too well, but if you look at the manuscript itself, it states that the article was peer reviewed in the acknowledgements (https://museumvictoria.com.au/pages/381426/063-071_MMV74_Martin_2_WEB.pdf): “I thank Erich Fitzgerald (Museum Victoria) for asking me to write this manuscript, as well as Lisa Buckley and Anthony Romilio for their insightful and helpful reviews.”
And that’s it for now. Irrespective of how this is resolved, setting an arbitrary impact-factor-based policy like this is the exact opposite of the direction in which science communication, and especially social platforms, should be moving. The best journals in some fields have low impact factors, often because those research fields are small and specialised. In Palaeontology, for example, a ‘good’ impact factor is considered to be around 2. In fields like Archaeology it’s even lower, with even the top-ranking journals struggling to get above 1.5.
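For context, the two-year Journal Impact Factor is, roughly speaking, just the average citation rate of a journal’s recent papers:

\[
\mathrm{JIF}_{Y} = \frac{\text{citations in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{citable items published in } Y-1 \text{ and } Y-2}
\]

In a small, specialised field the numerator is limited by the handful of researchers who could ever cite the work, however good that work is, which is exactly why a blanket cut-off of 1.5 ends up penalising entire disciplines.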
This issue shows that we still have much to do as a research community to dissociate impact factors from any semblance of individual article quality. By putting policies like this into place, we create a cycle where only research that is perceived to be the most important is given the chance to be made more public, while research that might be equally important is excluded, simply based on a number associated with the venue in which it was published. I fail to see how this helps to progress any form of science communication.
As Justin Kiggins pointed out on Twitter, this whole thing reeks of the elitism that is rife in the scholarly publishing world too: “This link is better suited for a more specialized subreddit.” Sound familiar?
Anyway, not to sound too moany without doing anything about it: I would love the moderators of the Science subreddit to reconsider their posting/submission policy and remove this arbitrary and potentially harmful criterion.
All research deserves the opportunity to shine, and the Reddit community could really help in leading the way in establishing better communication and evaluation criteria for scientific research.
—-UPDATE 1—-
“Lisa Buckley and Anthony Romilio for their insightful and helpful reviews”
This doesn’t mean that it went through a standard rigorous peer review. We hold journals to a standard. Every full mod is a full time working scientist and our contributions to reddit are completely unpaid (we are volunteers). We have hundreds of posts each day. It is not possible (nor do we have the expertise) to read and evaluate the quality of every post. Instead we have rules which ensure that the quality of the sub remains high. Sometimes this means we lose some good content, but that is the cost we pay to operate efficiently and keep a basic standard for the quality of science we allow. You may disagree, and if so, you are welcome to start your own science sub and carefully evaluate every post. If you don’t want to do that, you may post your content to our sister sub, /r/everythingscience, which has lower standards for posts. I’m sorry, but there’s nothing else we can offer you here.
I would love to know what people think about this.
—-UPDATE 2—-
My response to update 1.
Thanks for this comment. I understand that you are all extremely busy and have a lot to go through as volunteers (I’ve read this: https://drive.google.com/file/d/0B3fzgHAW-mVZVWM3NEh6eGJlYjA/view). I also understand that strict quantitative standards are good, and so is the concept of rejection here to avoid sharing of low quality content. But the criterion employed here is nonsensical, especially for a platform supposedly in the vanguard of online communication. It is well known among the scientific community that the impact factor has no bearing on any article-level factors, including quality or reliability.
Why not simply have an optional or additional filter based on expert opinion, which I just provided for you by writing about the research? Then the criterion switches to filtering who is credible enough to listen to. That’s something that’s actually useful, and would avoid cases like this. It doesn’t have to be anything as time-consuming as a whitelist; simply evaluate situations where challenges like this are made.
Furthermore, I do not understand the point of having mods with a science background if decisions are going to be practically automated, and if, when decisions are challenged, they are rejected on clearly non-scientific grounds. This seems far too much like reflecting the status quo and aligning with the inherent biases in our field, rather than challenging them. Given the enormous scope that this subreddit has, I truly believe this is a missed opportunity for change at the moment.
Finally, regarding this case, a cursory glance at the website would show you that the journal dates back to 1906 (https://museumvictoria.com.au/about/books-and-journals/journals/memoirs-of-museum-victoria/1900-1939/1906—vol-1/). Whatever arbitrary standards of quality you employ, this should probably be included in them to some degree.
—-UPDATE 3—-
Response from Mod 1:
Thanks for the suggestions. We realize IF is not by any means a perfect measure of a journal’s quality. We do, unfortunately, need some baseline for fairly applying standards that keep out predatory and poor journals.
We are willing to take suggestions for how to improve our system, however. But I would add the caveat that having the author of the piece, or of the coverage of the piece in question, provide evidence for the quality of the journal/paper is probably not a good criterion. No offense to you, but we all tend to be biased about our own work.
Age of a journal is also likely not a good measure since some journals improve but others get worse over time. And some are just poor all around. For example, Mankind Quarterly, which was started by a former Nazi and the science advisor to Mussolini, began in 1961 and still publishes regularly. It is very unscientific and often flat out racist. It has an impact factor of 1.2, though, so submissions from there would not be allowed to /r/science. This is helpful since moderators without a background in anthropology or psychology might not immediately realize the problems with this journal.
If you can help us develop better standards that we can apply fairly and easily (i.e. so that the moderators who are handling issues at 3 AM don’t need to wait for the resident paleontology expert to wake up) we’d love to take a look. We are always happy to improve our system to ensure quality content is posted. But it also has to be realistic given time and volunteer staffing constraints.
and Mod 2 (my original text in quotes, their response in normal text):
But the criterion employed here is nonsensical, especially for a platform supposedly in the vanguard of online communication. It is well known among the scientific community that the impact factor has no bearing on any article-level factors, including quality or reliability.
You will probably get different opinions on this, but the viewpoint that the impact factor is worthless and has no bearing on any article-level factors is not the consensus for the entire community. Many people, us included, agree that it does have issues, and because of those issues we set the cutoff so low. It is far from nonsensical, and we have spent a huge amount of time discussing and debating the merits of this rule. On a paper-by-paper level, yes, stuff may fall through the cracks. There could be a good paper that isn’t cited often and is in a low-impact journal, or conversely, there could be a paper that is cited a ton… but is full of garbage. We see both. That said, on average, the lower-impact journals have lower-quality papers. When you start getting down to a threshold of 1-1.5, the quality of these papers drops off dramatically.
Why not simply have an optional or additional filter based on expert opinion, which I just provided for you by writing about the research?
How do we prove somebody is an expert in their field? Should we require a PhD? Should we do it by H-index? How about a faculty position? Again, this is a very arbitrary type of cut-off, except it is even less based on pure metrics. Would this not be the same as whitelisting journals? How do we vet experts for all of the hundreds of submissions that we get all of the time? You may be an expert in your field, it’s hard for me to say since I’m unfamiliar with that area, so then who would vet our “expert” opinions?
This seems far too much like reflecting the status quo and aligning with the inherent biases in our field, rather than challenging them. Given the enormous scope that this subreddit has, I truly believe this is a missed opportunity for change at the moment.
You might think this is clearly non-scientific reasoning, but we respectfully disagree. Further, none of this is actually a solution to the “status quo” you speak of. Your only possible solution was to let experts be contributors and make that the criterion on which we base allowed submissions. Or rather, to somehow base it on the date of inception of the journal, which I fail to see has any bearing on anything. Together, that doesn’t address any of the problems of the predatory journals and low-quality submissions we receive. Further, we cannot apply this sort of system fairly or systematically. So ultimately, we need something that would be an alternative to the impact factor.
And so now I pose to you the question, what is the legitimate, systematic, consistent way in which we can quickly gauge content? What is the proposed alternative to the impact factor? How can we address the problem of predatory publishers and poor quality submissions that run rampant in far too many journals (and are often published by “experts”)? We would love to have answers to these questions, but unfortunately there has not yet been any meaningful solution proposed that we could adopt.
—-UPDATE 4—-
My response to the above (mainly Mod 1):
Hey, thanks for taking the time again to respond to this.
So regarding the IF again, the only thing it correlates with is the least reliable research: http://bjoern.brembs.net/2016/01/even-without-retractions-top-journals-publish-the-least-reliable-science/ and http://journal.frontiersin.org/article/10.3389/fnhum.2013.00291/full
I know it would be difficult, but can I ask that you engage with the wider scientific and reddit communities on how to find a better standard for this?
And yes, I totally understand that simply me saying ‘this is legit’ as the author/expert could come across as suspect. I imagine, though, that if this exchange has told you anything, it’s that I’m not simply in this for self-promotion or to promote whacko science! And I imagine that these sorts of interactions are rare enough that, if a decision were challenged, it could be evaluated based on a simple chat like this? 🙂 If you want another palaeo expert as a mod though, I’d be more than happy to help out!
I hope you don’t mind, but I have been blogging this conversation (anonymously) too, and as you can see from the response here (https://fossilsandshit.com/2016/08/10/reddit-science-and-the-impact-factor-of-doom/), the journal is clearly peer reviewed.
Strict quantitative standards are good, and so is the concept of rejection. But their criterion is stupid, especially for a platform supposedly in the vanguard of online communication. RCR (relative citation ratio) would be a sensible alternative, but does not help with new research. For new stuff, the only thing that exists for filtering is expert opinion, which you just provided by writing about the research. Then the criterion switches to filtering who is credible enough to listen to.
Yep, I agree with this entirely. Just because you have a standard, it does not excuse laziness and a lack of ability to think.
With regard to the question about whether Memoirs of Museum Victoria has “standard rigorous peer review”, I co-authored three papers in that volume; all three received “full” peer review by reviewers who were (based on their comments and/or their signed reviews) clearly experts in the areas being discussed. I also reviewed the paper by Leah Schwartz, and held it to the same standards I use when reviewing for any other journal.
While I agree that Impact Factor is, of course, a relatively arbitrary and useless metric, the problem with expert evaluation is that it still costs far more time than simply looking at the IF, which makes it unsuited for official policy. A better policy might be to keep a whitelist of journals, with all currently allowed journals (i.e. IF > 1.5) automatically on it, and new ones added by expert evaluation; a rough sketch of what I mean is below.
That said, since they’re taking the time to engage with you directly already, they could at least admit this one if they’ve seen it has thorough peer review.
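To make that concrete, here is a minimal sketch of how such a check might work. The journal name, the cut-off behaviour, and the challenge workflow are purely illustrative; this is not anything /r/science actually runs.

```python
# Illustrative sketch only: the whitelist contents, cut-off value, and
# challenge workflow are hypothetical, not /r/science's real system.

IMPACT_FACTOR_CUTOFF = 1.5

# Seeded by expert evaluation and grown over time as challenges are upheld.
approved_journals = set()


def submission_allowed(journal, impact_factor):
    """Allow a post if the journal is whitelisted or clears the IF cut-off."""
    if journal in approved_journals:
        return True
    return impact_factor is not None and impact_factor >= IMPACT_FACTOR_CUTOFF


def uphold_challenge(journal):
    """Run once, after a moderator with relevant expertise has confirmed the
    journal's peer review; all later submissions from it then pass instantly."""
    approved_journals.add(journal)


# A low-IF (or unranked) but peer-reviewed journal is rejected at first,
# whitelisted after a single expert check, and accepted ever after.
print(submission_allowed("Memoirs of Museum Victoria", None))  # False
uphold_challenge("Memoirs of Museum Victoria")
print(submission_allowed("Memoirs of Museum Victoria", None))  # True
```

The expensive part, an actual expert looking at the journal, only happens once per journal and only when a removal is challenged, rather than for every one of the hundreds of daily submissions the mods say they handle.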
Looks like you failed to teach the moderators there a lesson in standards of quality science: the higher the IF, the lower the quality:
http://bjoern.brembs.net/2016/01/even-without-retractions-top-journals-publish-the-least-reliable-science/
and the link to an above 1.5 journal:
http://journal.frontiersin.org/article/10.3389/fnhum.2013.00291/full
Why would you choose to defend your journal without much data when you can attack their journals with peer-reviewed evidence? Sounds like a strange choice of strategy to me…
Yeah, I did consider sending them these. I have to respond to the latest salvo still, so will send them along and update here.
Sorry, I meant to say that you missed an opportunity to teach the moderators a lesson…
I think in terms of framing this debate, it is important to say that the article was not finally rejected on the basis of IF (from your report above). There are two issues here that should be dealt with individually:
1) They use IF as their first (and primary) filter of high quality science (This is a policy issue)
2) Having done a review of this specific case they have rejected its inclusion based on looking at the journal in question. (This is a moderator decision issue).
I think conflating the two is muddying the waters. This piece of sci-comm has NOT been denied inclusion because of IF; it has ultimately been denied inclusion because the journal itself is opaque about what its peer review processes are (that’s not to say they are not up to standard, but the available info on it is opaque). I understand the disappointment in the decision, but procedurally it seems they gave right of appeal and looked into the case in more detail with a second opinion when requested. I’m not sure *procedurally* there’s a lot more they can have in place.

So to use an example from my field: if I were to write something about a paper in Zeitschrift für Geomorphologie (a previously well thought of journal which has become less popular in recent decades, IF 1.08), it would have been rejected on IF. If I’d appealed (based on the precedent here), the mods would have been able to see the info about the number of reviewers and the guidelines for editors on the publisher’s website, and based on what they have said above it may well have then been approved.
Personally I sympathise with both parties here. From the mods’ point of view they have two subs, an ‘everything goes’ one, and a more ‘prestigious’ one. Assuming we accept that there should be a filtered one (and I’m guessing you do, because you are fighting to be included on it!!), then it follows there has to be a filter of some kind. IF is a crude and poor way of doing it, but it is very simple and easy to implement. Effectively what it is doing in this instance is making type II errors: incorrectly rejecting properly peer-reviewed and interesting science. On balance that is probably better than making type I errors, and I think that is what they are arguing. This is open for debate.
I think it is unrealistic to suggest the mods at r/science should do a lot more work (e.g. conducting detailed article-level screening for every submission), so I think any potential solution which involves a lot more work isn’t feasible. If the screening policy needs to change, then it would need to be replaced by something equally ‘quick and dirty’ that provides a simple screen. I think constructively the best we can do to help improve the sub is to suggest alternatives. The only one I can think of is to have a list of “approved” journals which meet whatever standards of rigour they deem sufficient (and, for transparency, to publish this list and the criteria behind it, likely something around clear peer review standards). This would require a lot of front-end work to set up the list, but once done it would likely function fairly smoothly. I’m not sure, though, that in this specific case the journal in question (based on what the mods have said above) would have met those hypothetical standards, because of the lack of info about peer review on its website.
So I’m just saying I think it is more productive to focus on what their front end screening system is (and/or process more widely), rather than the individual decision.
After reading through all this, I have still never seen a reason to go and read Reddit, despite its frequent referencing on Slashdot. I don’t have time in my life to waste on another news aggregator.
Not that the (unpaid) mods at Reddit would (probably) be in the least bit bothered by this.
I agree with the sentiment that Reddit, and particularly /r/science, offers an opportunity to broaden engagement with a wider community. I used to post regularly there, on /r/geology and some of the other subreddits.
Reddit is often accused of being too racist, too misogynistic, too whatever, and the counter claim was always to tailor your subreddits to find your niche, but at a certain point I decided enough was enough. The value of broadening discussion is important, but by participating in the platform you are also implicitly supporting its values & the values of the community. In the end I just couldn’t anymore. Even a board like /r/science suffers from this values creep, and it wasn’t worth supporting science communication there if it meant implicitly supporting the creeps.
I’ve just been waiting for an opportunity to vent 🙂
I can see a massive flaw in their logic just by reading their responses to your comments. They say that Mankind Quarterly, that Nazi science journal, has an impact factor of 1.2, which makes it unqualified to be posted on the subreddit by their standards. So by that logic, if the Nazi science journal somehow managed to get its impact factor up to 2.0, they would allow articles from it to be posted on the subreddit, whereas a legitimate scientific article that just happens to be published in a low-ranked journal (say Memoirs of Museum Victoria, or, even better, something like Australian Mammalogy, which has an impact factor of less than 1 but is rigorously peer-reviewed) would be rejected.
Additionally, since the Impact Factor is seemingly defined by the average number of citations an article in that journal gets in a single year, wouldn’t a journal that is published annually, like the Memoirs of Museum Victoria, as opposed to quarterly or bi-monthly, have a lower impact factor simply because fewer articles are published in it each year?
In fact, I would say that /r/science’s response to your queries is even worse than when it comes from high-prestige journals. When high-IF journals say that your work is too specialized for publication, it is generally because they have received a large number of manuscripts and are trying to choose the ones they feel will have the widest appeal to their readers, given the limited space they have available. However, the way that /r/science words it makes it sound like they are rejecting this article because it is not scientific enough.
It really does sound like they are also not accounting for the fact that impact factor scales differently for different subfields of science. Just looking at the impact factors for Paleontology, the top ten highest-impact-factor journals in the field only barely cross the 1.5 threshold (according to Science Watch, http://archive.sciencewatch.com/dr/sci/10/jun6-10_2/). In fact, on this list the Journal of Vertebrate Paleontology, often seen as the flagship publication for vertebrate paleontology in general (or at least the highest-ranked journal devoted solely to vert. paleo.), has an impact factor of 1.54 and would only just scrape past the subreddit’s cut-off (admittedly this is 2008 data; data from later years are somewhat higher).
Finally, what made things really weird is that it seemed like they thought you were self-promoting your own article, as opposed to sharing the research of someone else. I would suggest the response in this case is to get a petition of a bunch of researchers saying that the article being recommended is a legitimate scientific article, but I’m betting their response would be basically to say you can always round up a bunch of people to throw their support behind any issue, no matter how nonsensical.