So Reddit is pretty awesome for science communication, in my experience. It’s an enormous network of potential audiences to reach with new research, and things like Reddit AMAs allow really cool engagement directly with research and researchers. Some researchers have had pretty poor experiences with Reddit too, as we might expect. After all, Reddit is essentially a condensed form of the Web, and we should expect a range of experiences as a result.
But today something new was revealed to me about Reddit as a science communication platform. The Science subreddit doesn’t seem to let you post research from a journal article, or blog posts about it, unless the journal in which the original research was published has an Impact Factor of 1.5 or more.
Er, what now? Yeah, it’s as silly as it sounds. We all know the problems with the impact factor (if not, see here for a nice place to start), and now it seems that even the Science subreddit has adopted the poor practice of evaluating research articles based on it.
So how do I know this? Well, I wrote a piece about some of Tony Martin’s cool research on dinosaur footprints in Australia, which was published as part of a special volume. The article was peer reviewed as normal (see the Acknowledgements), and forms part of a highly specialised series for palaeontologists, and anyone else interested, I guess.
As I do with many blog posts, I posted this to the Science subreddit, as this is usually a nice way of generating traffic and discussion within the Reddit community. But then I got the following message:
Hi protohedgehog, your submission has been removed for the following reason(s):
It does not include references to new, peer-reviewed research. Please feel free to post it in our sister subreddit /r/EverythingScience.
If you feel this was done in error, or would like further clarification, please don’t hesitate to message the mods.
So I messaged the mods to let them know that this was peer reviewed research.
Hey there! My post did contain references to peer reviewed research. It is fully referenced at the bottom of the article itself, and the new paper it is referring to can be found here: https://museumvictoria.com.au/pages/381426/063-071_MMV74_Martin_2_WEB.pdf Cheers!
And the response.
Sorry “Memoirs of Museum Victoria” is a publication that’s too obscure to count. We have a cut-off at 1.5 impact factor.
You’re kidding, right? That is completely arbitrary, and you’re simply cutting off an enormous amount of awesome research for no reason.
It’s arbitrary, but lets us escape most predatory journals that publish whatever, for a range of different reasons.
But that clearly doesn’t apply here. I’m an experienced Redditor, scientist, and science writer, so in this case the policy fails. You also exclude fields in which attaining that sort of impact factor is impossible even for the highest ranked journals. Can I ask that this policy be reconsidered, and in this case waived, please?
The next response gnawed at me a little.
Sorry. But our policy stands firm here. As an experienced scientist, then you should be very well-acquainted with the idea of standards for communication among scientific forums and publications. We have them too.
Feel free to share this with /r/EverythingScience though.
My response to this mod was fairly straightforward.
Of course I understand what standards are. That doesn’t mean you can’t be flexible though. I understand that the impact factor is a useless metric in determining the quality of individual publications, and you should be able to evaluate this case based on the merit of the article itself and my subsequent work on it.
Another mod was a little more helpful, and offered this.
I’m taking a look to give a second opinion. I can’t seem to find any information about their peer review process. Most journals have this on their website. But all I see is that the museum undertakes its own research and publishes it in this journal. I found the information for authors but all it has is formatting instructions.
Right now I can’t find anything on their site that even says the journal is peer reviewed. Maybe I’m missing it, though. Can you point me to their peer review information?
This is fine. If the journal is lacking information, then that’s a good sign that it needs to work on its website a bit to make sure this sort of necessary information is publicly available and clear.
Hm, thanks for looking into it. I don’t know the website too well, but if you look at the manuscript itself, it states that the article was peer reviewed in the acknowledgements (https://museumvictoria.com.au/pages/381426/063-071_MMV74_Martin_2_WEB.pdf): “I thank Erich Fitzgerald (Museum Victoria) for asking me to write this manuscript, as well as Lisa Buckley and Anthony Romilio for their insightful and helpful reviews.”
And that’s it for now. Irrespective of how this is resolved, setting an arbitrary impact-factor-based policy like this is the exact opposite of the direction that science communication, and especially social platforms, should be moving in. The best journals in some fields have low impact factors, often because those research fields are small and specialised. In Palaeontology, for example, a ‘good’ impact factor is considered to be around 2. In fields like Archaeology it’s even lower, with even the top-ranking journals struggling to get above 1.5.
This issue shows that we still have much to do as a research community to dissociate impact factors from any semblance of individual article quality. By putting policies like this in place, we create a cycle in which only research perceived to be important is given the chance to be made more public, while research that might be equally important is excluded simply because of a number associated with the venue in which it was published. I fail to see how this helps to progress any form of science communication.
As Justin Kiggins pointed out on Twitter, this whole thing reeks of the elitism that is rife in the scholarly publishing world too: “This link is better suited for a more specialized subreddit.” Sound familiar?
Anyway, not to sound too moany without doing anything about it: I would love for the moderators of the Science subreddit to reconsider their posting/submission policy and remove this arbitrary and potentially harmful criterion.
All research deserves the opportunity to shine, and the Reddit community could really help in leading the way in establishing better communication and evaluation criteria for scientific research.
“Lisa Buckley and Anthony Romilio for their insightful and helpful reviews”
This doesn’t mean that it went through a standard rigorous peer review. We hold journals to a standard. Every full mod is a full time working scientist and our contributions to reddit are completely unpaid (we are volunteers). We have hundreds of posts each day. It is not possible (nor do we have the expertise) to read and evaluate the quality of every post. Instead we have rules which ensure that the quality of the sub remains high. Sometimes this means we lose some good content, but that is the cost we pay to operate efficiently and keep a basic standard for the quality of science we allow. You may disagree, and if so, you are welcome to start your own science sub and carefully evaluate every post. If you don’t want to do that, you may post your content to our sister sub, /r/everythingscience, which has lower standards for posts. I’m sorry, but there’s nothing else we can offer you here.
I would love to know what people think about this.
My response to update 1.
Thanks for this comment. I understand that you are all extremely busy and have a lot to go through as volunteers (I’ve read this: https://drive.google.com/file/d/0B3fzgHAW-mVZVWM3NEh6eGJlYjA/view). I also understand that strict quantitative standards are good, and so is the concept of rejection here to avoid sharing of low quality content. But the criterion employed here is nonsensical, especially for a platform supposedly in the vanguard of online communication. It is well known among the scientific community that the impact factor has no bearing on any article-level factors, including quality or reliability.
Why not simply have an optional or additional filter based on expert opinion, which I just provided for you by writing about the research? Then the criterion switches to filtering by who is credible enough to listen to. That’s something that’s actually useful, and would avoid cases like this. It doesn’t have to be something as time-consuming as a whitelist; just evaluate situations when challenges like this are made.
Furthermore, I do not understand the point of having mods with a science background if decisions are going to be practically automated, and if, when decisions are challenged, they are rejected on clearly non-scientific grounds. This seems far too much like reflecting the status quo and aligning with the inherent biases in our field, rather than challenging them. Given the enormous scope this subreddit has, I truly believe this is a missed opportunity for change.
Finally, regarding this case, a cursory glance at the website would show you that the journal dates back to 1906 (https://museumvictoria.com.au/about/books-and-journals/journals/memoirs-of-museum-victoria/1900-1939/1906—vol-1/). Whatever arbitrary standards of quality you employ, this should probably be included in them to some degree.
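To show I’m not just being all talk: the expert-opinion filter I suggested could be as lightweight as a manual override list sitting on top of the existing cutoff. Here’s a minimal sketch of that idea in Python; this is not the mods’ actual tooling, and all the names and values in it are hypothetical:

```python
# A sketch of the hybrid policy suggested above: keep the impact-factor
# cutoff as a fast default, but let a challenged removal be overridden
# once a moderator has manually vetted the journal in question.
# Hypothetical illustration only, not r/science's real system.
from typing import Optional

IMPACT_FACTOR_CUTOFF = 1.5

# Journals a moderator has vetted after a challenge, e.g. once the
# peer review of "Memoirs of Museum Victoria" had been confirmed.
VETTED_JOURNALS = {"memoirs of museum victoria"}

def allow_submission(journal: str, impact_factor: Optional[float]) -> bool:
    """Allow a post if its journal clears the IF cutoff, or if the
    journal has been manually vetted following a challenge."""
    if journal.lower() in VETTED_JOURNALS:
        return True
    return impact_factor is not None and impact_factor >= IMPACT_FACTOR_CUTOFF
```

The point of the sketch is that the override list only grows when someone actually challenges a removal, so the day-to-day moderation cost stays essentially the same as the current impact factor check.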
Response from Mod 1:
Thanks for the suggestions. We realize IF is not by any means a perfect measure of a journal’s quality. We do, unfortunately, need some baseline for fairly applying standards that keep out predatory and poor journals.
We are willing to take suggestions for how to improve our system, however. But I would add the caveat that having the author of the piece or coverage of the piece in question providing evidence for the quality of the journal/paper is probably not a good criteria. No offense to you but we all tend to be biased about our own work.
Age of a journal is also likely not a good measure since some journals improve but others get worse over time. And some are just poor all around. For example, Mankind Quarterly, which was started by a former Nazi and the science advisor to Mussolini, began in 1961 and still publishes regularly. It is very unscientific and often flat out racist. It has an impact factor of 1.2, though, so submissions from there would not be allowed to /r/science. This is helpful since moderators without a background in anthropology or psychology might not immediately realize the problems with this journal.
If you can help us develop better standards that we can apply fairly and easily (i.e. so that the moderators who are handling issues at 3 AM don’t need to wait for the resident paleontology expert to wake up) we’d love to take a look. We are always happy to improve our system to ensure quality content is posted. But it also has to be realistic given time and volunteer staffing constraints.
and Mod 2 (my original text in quotes, their response in normal text):
“But the criterion employed here is nonsensical, especially for a platform supposedly in the vanguard of online communication. It is well known among the scientific community that the impact factor has no bearing on any article-level factors, including quality or reliability.”
You will probably get different opinions on this, but the viewpoint that the impact factor is worthless and has no bearing on any article-level factors is not the consensus for the entire community. Many people, us included, agree that it does have issues and because of the issues with the impact factor we set the cutoff so low. It is far from nonsensical, and we have spent a huge amount of time discussing and debating the merits of this rule. On a singular paper to paper level, yes stuff may fall through the cracks. There could be a good paper that isn’t cited often and is in a low impact journal, or conversely, there could be a great paper that is cited a ton…but is full of garbage. We see both. That said, on average, the lower impact journals have low quality papers. When you start getting to a threshold of 1-1.5, the quality of these papers drops off dramatically.
“Why not simply have an optional or additional filter based on expert opinion, which I just provided for you by writing about the research?”
How do we prove somebody is an expert in their field? Should we require a PhD? Should we do it by H-index? How about a faculty position? Again, this is a very arbitrary type of cut-off, except it is even less based on pure metrics. Would this not be the same as whitelisting journals? How do we vet experts for all of the hundreds of submissions that we get all of the time? You may be an expert in your field, it’s hard for me to say since I’m unfamiliar with that area, so then who would vet our “expert” opinions?
“This seems far too much like reflecting the status quo and aligning with the inherent biases in our field, rather than challenging them. Given the enormous scope that this subreddit has, I truly believe this is a missed opportunity for change at the moment.”
You might think this is clearly non-scientific reasoning, but we respectfully disagree. Further, none of this is actually a solution to the “status quo” you speak of. Your only possible solution was to let experts be contributors and that is the criteria we should base allowed submissions on. Or rather, somehow base it on the date of inception of the journal which I fail to see how that has any bearing on anything. Together, that doesn’t address any of the problems of the predatory journals and low quality submissions we receive. Further, we cannot apply this sort of system fairly or systematically. So ultimately, we need something that would be an alternative to the impact factor.
And so now I pose to you the question, what is the legitimate, systematic, consistent way in which we can quickly gauge content? What is the proposed alternative to the impact factor? How can we address the problem of predatory publishers and poor quality submissions that run rampant in far too many journals (and are often published by “experts”)? We would love to have answers to these questions, but unfortunately there has not yet been any meaningful solution proposed that we could adopt.
---- UPDATE 4 ----
My response to the above (mainly Mod 1):
Hey, thanks for taking the time again to respond to this.
So regarding the IF again, the only thing it correlates with is the least reliable research: http://bjoern.brembs.net/2016/01/even-without-retractions-top-journals-publish-the-least-reliable-science/ and http://journal.frontiersin.org/article/10.3389/fnhum.2013.00291/full
I know it would be difficult, but can I ask that you engage with the wider scientific and reddit communities on how to find a better standard for this?
And yes, I totally understand that simply me saying ‘this is legit’ as the author/expert could come across as suspect. I imagine, though, that if this exchange has told you anything, it’s that I’m not simply in this for self-promotion or to promote whacko science! And I imagine that these sorts of interactions are rare enough that, if a decision is challenged, it could be evaluated based on a simple chat like this one? 🙂 If you want another palaeo expert as a mod, though, I’d be more than happy to help out!
I hope you don’t mind, but I have been blogging this conversation (anonymously) too, and as you can see from the response here (https://fossilsandshit.com/2016/08/10/reddit-science-and-the-impact-factor-of-doom/), the journal is clearly peer reviewed.