I've served on and off as an Associate Editor for a number of journals in my career. Last year, I received a paper from the Editor-in-Chief of an IEEE journal for which I served in this position. IEEE stands for the Institute of Electrical and Electronics Engineers.1 The world's largest professional society, with over 440,000 members, IEEE publishes 180 transactions, journals, and magazines. As an Associate Editor of one of these publications, it has been my duty to locate reviewers for submitted papers that are referred to me by the journal's Editor-in-Chief.
I looked over the most recent paper I was assigned and saw it was only peripherally in my area. That's okay. It's the reviewers' job to inform me. And it's my job to find reviewers.
How Are Reviewers Chosen?
Peer Review = Sausage Factory: unqualified refs, doing time.

I'm an old hand at recruiting reviewers (which is not to say that I'm good at it). I send requests to 15 potential reviewers. How does one choose 15 reviewers for a paper that isn't in one's specialty? One helpful resource is the authors themselves. On the electronic submission page, authors can list suggested reviewers for their papers. When I submit papers, I list all my buddies as potential reviewers. These are colleagues who, at worst, will paint my contribution in the most positive light possible.2 And my cronies expect the same of me when they list my name.3 (So much for peer review by disinterested professionals.) So I go to the list of reviewers suggested by the authors and put them on my list.
In a vain attempt to avoid cronyism and with the goal of letting a paper stand on its own merit, some favor double-blind reviews. When the reviewer receives a paper, the authors' names have been stripped from the copy. Double-blind review is motivated by the double-blind testing method required by the FDA when drugs and placebos are administered to human subjects. The practice doesn't translate well to the review of papers. As a repeat participant in this process, I estimate that, as a reviewer, I can identify the author or authors of a double-blind paper about 90% of the time. Although the identity of the authors is stripped from the paper, their prose and the references remain. The easiest cases of identification come from reading something like “Our previous work focused on…” At the end of the sentence there are references that immediately identify the author. If one's goal is to have a paper reviewed without revealing the identity of the authors, double-blind review is probably the best approach, in spite of the fact that it doesn't work very well.
For the paper assigned to me, I so far have only the names of reviewers suggested by the authors. To assure outside reviews, I Google the names and email addresses connected to some of the references listed in the paper. All in all, I build up a list of 15 names and send out the invitations. This is done for me on the journal's website. The email content is a form letter making the review invitation in a polite albeit slightly forceful fashion.
Have you ever tried to get kindergarteners to rank the taste of boiled vegetables? Recruiting top tier peer reviewers to assess the quality of a submitted journal paper is like that. Only one of the 15 I invite to review responds positively.
And the positive response was not from someone on the authors' list of buddies. It may seem curious that the authors' cronies didn't respond. But because reviewers are guaranteed anonymity from the authors, the authors might never know for certain. Perhaps the authors in question would benefit from more loyal friends.
Some of the reviewers I invite give me the courtesy of declining the invitation. The rest don't even bother to respond. So one week later, I invite 15 more potential reviewers to take a look at the paper. At this point I'm getting desperate. I use Google Scholar to identify the authors of papers cited in the references. Of the 15 new invitations to review, I get three positive responses. But two of these responses turn out to be lies. After they say yes, I never hear from them again. To be safe, I send out five more invitations one week later. As the list gets longer, my attention to the quality of the reviewer diminishes. Like I said, I'm getting desperate.
So Much For Consensus
My 35 invitations eventually yield three reviewers. That's a pathetic .086 batting average. The final reviewers are (1) a postdoc from California, (2) a Professor from Croatia and (3) a Professor from Malaysia. I have never heard of any of them. After a couple of months, I get their reviews and recommendations. The three reviews are across the spectrum:
- accept the paper for publication,
- revise the paper according to comments made in the review and resubmit for additional reviewing, and
- reject the paper.
So much for consensus.
An Associate Editor like me hopes for some uniformity of recommendations. Uniformity not only constitutes a consensus on the quality of the paper but, more importantly, makes the job of the Associate Editor easier. We don't need to spend a lot of time looking at the paper. Being an Associate Editor is like being a judge waiting for a jury's verdict. For this paper, the jury was hung.
So I'd like to tell you that I took a day out of my life to sit down and read the paper carefully and make my own independent assessment. But I didn't. I scanned the reviews, began reading the paper and dozed off. It was so boring! I find that subjects outside of my field often are. When I awoke, I decided to do the right thing and follow the Golden Rule. If I were an author of the paper, how would I like the Associate Editor to respond? After the paper was revised according to the reviews, I accepted it for publication.
Recall Physicist Frank Tipler's claim that those that decide the fate of a paper are “quite often not as intellectually able as the author whose work he judges.”4 For the paper I was assigned, I can't vouch for the academic ranking of the authors in their field. However, I must confess that in terms of expertise needed for this paper, I was one of Tipler's less able judges.
Today, I try to make a policy of declining invitations to review papers that I don't find interesting. So yes, I am normally in the group of the 32 of 35 invitees who decline to review a paper. Last year I resigned all my Associate Editorships.5
How Can Peer Review Be Improved?
All of this raises the question of how things can be improved. A sausage lover can't rightfully complain about the blood on the floor in the sausage factory without offering a better way to make sausage. I just accepted the Editor-in-Chief responsibilities for the journal BIO-Complexity, which publishes papers supporting intelligent design.6 I have published a number of papers there myself and am sympathetic to the journal's mission and goals.
Stay tuned and I'll let you know if I find any new recipes for bloodless sausage.
Robert J. Marks II, Ph.D. is a Distinguished Professor of Electrical & Computer Engineering at Baylor University in Waco, Texas. The material in this column, though, does not necessarily represent the views of and has not been reviewed or approved by Baylor University.
2. In a feeble attempt at assuring disinterested reviews, the journal will ask if you have a conflict of interest with any of those you list. I suspect claimed relationships are not checked by the journals. As an Associate Editor, I never checked.
3. The padding of anonymity often results in less-than-friendly reviews, however. I've gotten zinged by my buddies more than once. At least that's what I think.
5. I still remain on the Editorial Boards of some journals, but these are largely honorary titles and require no time or effort on my part. For the bean counters, such inexpensive beans are seen as very large.