Peer review sucks?
Article from a working scientist on why peer review sucks and should be done away with. Any comments?
https://www.experimental-history.com/p/ ... eer-review
Re: Peer review sucks?
Peer review is worth exactly what the journals pay for it.
Re: Peer review sucks?
I see that was published DEC 13, 2022. Has there been much change since? Not to my knowledge, but I'm out of the loop these days.
- Allo V Psycho
- Catbabel
- Posts: 792
- Joined: Sat Nov 16, 2019 8:18 am
Re: Peer review sucks?
What a witty and wise article! Unfortunately, it doesn't match my experience as author, reviewer and Editor-in-Chief over nearly fifty years. As a researcher I usually found peer review of my work helpful, even if only because it provided the first couple of external readers. As an author, it's really easy to read your draft as what you're thinking rather than what you've actually written. Peer reviewers often raised a question which made me think, 'yes, I didn't put that very clearly' - kind of like a fancy and very well-informed proofreader. And I used to recommend that my PhD students and postdocs actively engage in peer review: it makes you think about the scientific writing process rather than just the science (as well as sometimes giving you an advance heads-up on something in very competitive and quickly moving fields). Since I felt I benefitted from peer reviews, I also felt I had to pay that back by writing thoughtful reviews, sometimes spending a day or several days on a review, and being fair. Of course, our major subject journal also paid reviewers, which helped: £15 in 1980 would be about £80 now, so worth having. I know of one case where the authors of a paper found a review so helpful that they asked if the reviewer could be an author on the revised version (the reviewer declined).
But that was then. Now I rarely agree to review papers, nor recommend others to do so. I still do it as conscientiously as ever if I do agree, but it is an exception. The process is not as useful as it once was. There are a number of factors behind this, not mentioned in the (excellent) link. One is open access (which I think is the real disaster here). It seemed so sensible, but it has also brought about a marked increase in the number of journals, because now there is a new market. They may not ALL be predatory, but they all have a financial interest in accepting work. And since researchers who have the money can now pretty much guarantee getting published no matter what the standard of their work, they don't have to try so hard to make their papers good in advance. Another factor is the rise of managerialism in science and universities, where output is counted rather than rated on quality. If your promotion (or even your job) depends on the number of papers you publish, you will have to publish lots of papers to feed the kids and pay the mortgage. In my view the average standard of papers has dropped sharply over the years. And finally, electronic journals don't have the page costs of print, so they can publish many more articles.
I know I will sound like a traditionalist/old fogey. But when I was EiC of a print journal (paid by library subscriptions), we received three times as many papers as we could publish. We were in competition with another (American) journal to be the lead journal in the field, so we wanted to choose the best possible papers (and improve them through review if we possibly could). Then we would get cited more often and beat the rivals - and we would be a desirable place to publish so we would get the best papers, in what felt like a virtuous circle. Honestly, it was rare to even receive a terrible paper - it would become an office joke. It wasn't generally worth sending us dross, so people didn't.
Now I reckon the majority of papers published in my field are dross. I don't always even trust them for honesty (and Steamy here hasn't encouraged me)! And there are too many being published to keep track of anyway. I don't write as many papers as I used to (because my career position/age means I can get away with not publishing), even though I have some interesting results - it's not worth the effort of submitting and getting really poor-quality reviews. Instead I share results informally through fora and social media. So I've ended up in something of the same place as the article author, but I assign different reasons for it.
- wilsontown
- Clardic Fug
- Posts: 219
- Joined: Tue Feb 23, 2021 11:51 am
Re: Peer review sucks?
I tend to agree with a lot of what Allo V Psycho writes above.
What do we actually want peer review to achieve? If we want it to spot fraudulent results, it doesn't really work for that. And if we want it to be quality assurance, then it doesn't really do that either. Perhaps the best thing I could say is that a peer reviewed paper is almost always going to be better than one that was published without being peer reviewed. Certainly in my experience reviews can be useless or unhelpful, but my papers have always been improved by the overall peer review process. Still, I'm no longer convinced that's worth the time and resources spent on it, mostly on a voluntary basis.
I'm currently a deputy editor at a reasonably well-known journal in my field, and I definitely recognise the bit about poor-quality submissions. I have recently received some absolute garbage, and I don't hesitate to send it back without review, as it would just be a waste of everyone's time. No doubt not all of my contributions have been outstanding additions to the literature, but I would have been embarrassed to submit some of the dross we are receiving.
"All models are wrong but some are useful" - George Box
Re: Peer review sucks?
I am part of a loose coalition of researchers and open research advocates at our institution trying to encourage the adoption of Open Peer Review. In theory it should make peer review a more discursive and collaborative process, and remove many of the common criticisms of the current peer review process. But whether it does so in practice remains to be seen.
- Woodchopper
- Princess POW
- Posts: 7410
- Joined: Sat Oct 12, 2019 9:05 am
Re: Peer review sucks?
I agree with some of the other comments. Overall, I think that peer review has improved the papers I've written. Sometimes the criticism can be painful, but on reflection the reviewer usually had a point.
So long as peer review is unpaid I don't think that we should expect more from reviewers than 'does the paper make a meaningful contribution to the field' and 'could the paper better present its findings'. That doesn't take a long time. I don't think that unpaid reviewers should be expected to uncover scientific fraud etc. If they were then they'd need to use far more time than most can afford. In practice I do my peer reviews over evenings or weekends, and I'm not going to spend more than an hour or two.
Preventing fraud should primarily be the role of the institution where the author is based (obviously there are problems with some places). If we want reviewers to routinely uncover fraud then we need to pay them the time it takes to do that.
Based upon my own experience and conversations with editors, one recent change is a rise in the number of fake papers. Of course this is an old problem, but LLMs have made it much easier. The last paper I reviewed was a paragraph-by-paragraph rewrite of an already published article. In the paper I reviewed before that, almost all of the cited publications either didn't exist or were irrelevant (which is what I'd expect from an LLM).
According to my editor friends, the problem is that they don't have time to filter out the fake papers from the real ones. An issue here is that the move from funding via library subscriptions to funding via open access, with payment by the author (or their institution), has left journals with lower revenue, which means less time for editors. I assume automated reference checking would be possible, at least to ensure that the cited publications exist, but it doesn't seem to be widely used.
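For what it's worth, here is a minimal sketch of what I mean, assuming the references have already been extracted as a bare list of DOIs (a real tool would need to parse them out of the manuscript first). It uses the public Crossref REST API just to confirm that each DOI resolves to a real record; it says nothing about whether the citation is relevant.

```python
# Minimal sketch: check that each cited DOI resolves to a real record via the
# public Crossref REST API (https://api.crossref.org). Existence only, not relevance.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

references = [
    "10.1038/s41586-020-2649-2",   # a real DOI (the 2020 NumPy paper), should be found
    "10.1234/obviously.fake.doi",  # made up, should be flagged
]

for doi in references:
    status = "found" if doi_exists(doi) else "NOT FOUND - flag for the editor"
    print(f"{doi}: {status}")
```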
- Allo V Psycho
- Catbabel
- Posts: 792
- Joined: Sat Nov 16, 2019 8:18 am
Re: Peer review sucks?
I'm increasingly concerned about the current publishing models. Yesterday I had 10 invitations from predatory journals to publish with them. I imagine they would have accepted just about anything, so for a young researcher it might well be a good investment to get AI to write plausible papers and boost their CV to the point where they became more employable/promotable. As it is, I accessed a paper in a seemingly respectable journal through Google Scholar yesterday, and it hadn't even benefitted from AI. It was an illiterate rehash of previously published work that obviously hadn't been proofread or refereed. The problem with this is that we are just about to embark on another systematic review of a literature topic. Our last one had to screen 50,000 articles to get to the 50 or so that were relevant. If this kind of junk increases the number of initial hits to, say, 100,000, our ability to do systematic reviews and perhaps meta-analyses is compromised.
Even with yesterday's 10 predatory invites, I still need to scan them, because I occasionally do get a genuine invite from a reputable journal waiving the fees. It isn't a large time investment, but multiply it by every day and every researcher, and it adds up.
Re: Peer review sucks?
You won't be the first to find problems doing a systematic review: https://www.science.org/content/article ... -are-peril.
I'm not an academic, but it seems to me the problem isn't peer review or even the publishing model, but that the whole academic model is fundamentally broken. It's always been pretty bad (the toxic career structure, even decades ago, is one reason I'm not an academic), but the parasites are taking over the host. What started as measures to try to screen out poor quality, such as peer review and using publications to evaluate researchers' performance, no longer work: metrics are gamed, incentives are perverted, and Goodhart's law rules. Likewise the perversion of open access publishing.
I have no idea how to fix this. The difficulty is that academia is decentralised and international, yet takes huge sums of public money with the responsibilities that carries. Tinkering with peer review might help a bit but it's pissing in the ocean.
Please tell me I'm wrong and there is hope...
Re: Peer review sucks?
As a non-academic who has worked professionally in collaboration with academics, it does seem to me to be a systemic issue, as Scolius suggests, with Goodhart's Law at the root of it.
- Allo V Psycho
- Catbabel
- Posts: 792
- Joined: Sat Nov 16, 2019 8:18 am
Re: Peer review sucks?
This may be a bit of a ramble, because I'm thinking aloud (and typing....)
We have many more applicants to medical school than we can admit, so prior educational achievement and interview scores are used as filters. The interviews are MMIs - Multiple Mini Interviews, because these have better predictive validity than other interview methods. Candidates move through 10 stations, about 5 minutes each, and there is a scored task at each, with a tight scoring rubric, running over 10-15 points. Interviewers are also required to give a 'Global Judgement' on a 4 category scale, which is pretty much just their general impression, unrelated to the scored task. The receptionist who they meet on arrival also gives them a Global Judgement (they are advised that the interview process starts as soon as they arrive). Of these two measures, the total Global Judgement score has higher predictive validity for their subsequent performance than the apparently detailed station score total. The same is true for our Objective Structured Clinical Exams, on which, indeed MMIs were modelled: the apparently imprecise and subjective Global Judgements are better predictors than the numerical components (which are the bits which are meant to be 'objective').
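To make 'predictive validity' concrete, here is a rough sketch with entirely made-up numbers (the real analyses are done properly on real cohort data; this only illustrates the comparison): treat validity as the correlation between each selection measure and later performance, and compare the Global Judgement total with the station score total.

```python
# Illustrative only: fabricated data, with "predictive validity" taken as the
# Pearson correlation between a selection measure and later performance.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # hypothetical applicant cohort

later_performance = rng.normal(size=n)   # e.g. standardised assessment results years later
# Assumed relationships, chosen to mirror the pattern described above:
global_judgement_total = 0.6 * later_performance + rng.normal(size=n)  # 11 global ratings summed
station_score_total = 0.3 * later_performance + rng.normal(size=n)     # 10 rubric scores summed

def predictive_validity(measure, outcome):
    """Correlation between a selection measure and the later outcome."""
    return np.corrcoef(measure, outcome)[0, 1]

print("Global Judgement total:", round(predictive_validity(global_judgement_total, later_performance), 2))
print("Station score total:   ", round(predictive_validity(station_score_total, later_performance), 2))
```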
OK, when I started my PhD (50 years ago this year...) there weren't any metrics, but there was a clear hierarchy of labs based on the science they did. Informal, unquantified judgements were made on the quality of work, of people, and of journals. It may have helped that the kind of lab based work we did was fairly easy to repeat. If you published something startling, your competitors could repeat the experiment in about a couple of months, so you had a very strong incentive to stay accurate and honest.
It was really the subsequent attempts to quantify success that has proved to be the problem, a la Goodhart. I suppose my point, if I have one, is that multiple subjective judgements may be better than quantified, apparently objective, ones.
- shpalman
- Princess POW
- Posts: 8504
- Joined: Mon Nov 11, 2019 12:53 pm
- Location: One step beyond
- Contact:
Re: Peer review sucks?
There's an Evaluation of the Quality of Research (VQR) exercise coming up for us, and I'm involved in the selection of research products (it will be on average 2.5 works per researcher/professor) and in making sure the members of my research line have correctly uploaded their work to the relevant database.
There's a way in which articles can be scored objectively, according to how many citations they have relative to other works in the field, but we also did a sort of peer review in which everyone got sent about six articles from other people in the department. Except of course they were generally works from completely different research lines, and we were told to spend about 10-15 minutes on each one, because that's what the external referees will eventually do with the works we submit. So it was nothing like properly done peer review.
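I don't know exactly which algorithm is used, but as a rough sketch of the general idea (field normalisation, with purely illustrative numbers): each article's citation count is divided by the average for comparable works in the same field and year, so that 1.0 means 'field average'.

```python
# Sketch of a field-normalised citation score (illustrative data only):
# citations divided by the mean citation count of works in the same field and year.
from collections import defaultdict

articles = [
    {"id": "A", "field": "physics", "year": 2020, "citations": 40},
    {"id": "B", "field": "physics", "year": 2020, "citations": 10},
    {"id": "C", "field": "history", "year": 2020, "citations": 5},
    {"id": "D", "field": "history", "year": 2020, "citations": 1},
]

# Mean citations per (field, year) group
sums = defaultdict(lambda: [0, 0])
for a in articles:
    key = (a["field"], a["year"])
    sums[key][0] += a["citations"]
    sums[key][1] += 1

for a in articles:
    total, count = sums[(a["field"], a["year"])]
    baseline = total / count
    score = a["citations"] / baseline if baseline else 0.0
    print(f"{a['id']}: {score:.2f}  (1.0 = field average)")
```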
So yeah, all the fancy algorithms and stuff, but it will eventually come down to what some other person reckons in 15 minutes.
So I am generally in favour of accepting the in-house peer review score above anything "objective" because that's how it's going to work in the end, and at least this way we've selected the papers to submit based on what we reckon instead of trying to AlgMaxx.
having that swing is a necessary but not sufficient condition for it meaning a thing
@shpalman@mastodon.me.uk
@shpalman.bsky.social / bsky.app/profile/chrastina.net
threads.net/@dannychrastina
Re: Peer review sucks?
Allo V Psycho wrote: ↑Sun Jan 12, 2025 9:44 am
OK, when I started my PhD (50 years ago this year...) there weren't any metrics, but there was a clear hierarchy of labs based on the science they did. Informal, unquantified judgements were made on the quality of work, of people, and of journals. It may have helped that the kind of lab based work we did was fairly easy to repeat. If you published something startling, your competitors could repeat the experiment in about a couple of months, so you had a very strong incentive to stay accurate and honest.
It was really the subsequent attempts to quantify success that has proved to be the problem, a la Goodhart. I suppose my point, if I have one, is that multiple subjective judgements may be better than quantified, apparently objective, ones.

I'm also old enough to remember the invention of the Science Citation Index, with the aim of introducing objectivity into evaluations. The old system certainly had serious issues. It was very much who-knows-who, old-boy-network, and riddled with both explicit and structural sexism, racism and classism. Glad to be rid of that.
I briefly mentioned public funding, but that's a really important factor in all this. Decision-makers have to justify to taxpayers (and newspaper editors) where the money goes. Mostly that doesn't seem to have gone too badly in recent decades, but there is an appreciable vein of anti-intellectualism which seems to be growing into a "defund academia" movement, exemplified by the recent palaver over the Olfactory Ethics PhD (which probably deserves its own thread, but here are a couple of links: an intro to the affair with a copy of the PhD abstract on, of all places, Know Your Meme; and a video where I don't actually understand the position of the woman in it, but which has a lot of good comments BTL).
- Allo V Psycho
- Catbabel
- Posts: 792
- Joined: Sat Nov 16, 2019 8:18 am
Re: Peer review sucks?
My field had a good number of very successful women (and a number of wife-husband couples, when I thought about it). As a bloke, I wasn't conscious of overt sexism. But one of the successful women I was thinking of was Anne McLaren, https://en.wikipedia.org/wiki/Anne_McLaren who was Chair of AWISE. In terms of who you knew, the key steps were which undergraduate university you went to (Glasgow was fine) and then where you could get a PhD place. I was lucky, in that my Head of School was of past eminence but still had sufficient contacts with the current movers and shakers. Of course, he must also have thought that a rough-edged kid from a council estate, often in trouble at Uni, deserved the chance of working in a top lab, so it wasn't about social background intrinsically.
What I did find difficult, though, was the golden triangle of Oxford, Cambridge and London. After a PhD in London and a postdoc in Oxford, I took a lectureship much further north (family with two young babies). Suddenly I no longer had access to all the unofficial networks I hadn't even realised I was in. In the end, I had to change fields completely, for one with different networks I could access.