Journal Acceptance Rates: Everything You Need to Know
Every journal has a role: to publish and disseminate research in its field. Within that role is a sub-role, if you will: that of a "gatekeeper," selecting which research deserves to be published within the journal's pages. Obviously, not all unsolicited papers can be accepted, so the journal's editorial team will reject articles, either before or after peer review.
In this article, we'll discuss what a journal acceptance rate is and what it measures. We'll also touch on how to find a journal's acceptance rate.
The acceptance rate (the proportion of manuscripts selected for publication from the pool of all submitted manuscripts) is an apparently straightforward measure that an author may take into consideration when deciding where to submit a manuscript. Does an acceptance rate have any meaning as an evaluative metric, though?
What Do Acceptance Rates Measure?
The acceptance rate of a journal is a measurement of how many manuscripts are accepted for publication, compared to how many are submitted. Even though it may seem to be a straightforward measurement, like most things in the research journal world, it's a little more complicated than that. But don't worry, we'll sort it all out.
To determine a journal's acceptance rate, the number of accepted manuscripts is simply divided by the number of submitted manuscripts. For example, if in one year a journal accepts 60 manuscripts out of 500 submitted that same year, the journal's acceptance rate is:

60 / 500 = 0.12, or a 12% acceptance rate
Seems simple enough, right? But what does that number really mean? If the journal is relatively selective, as this acceptance rate indicates, what does it mean when a manuscript is rejected? It could be because the manuscript was poorly written, or it could be an excellent manuscript that was simply out of the scope of the journal's focus. So is a journal's acceptance rate really an accurate measurement of its rigor in selecting manuscripts for publication?
Additionally, some journals calculate their acceptance rate differently: for example, by dividing the number of accepted manuscripts by the sum of accepted and rejected manuscripts. If the 60 accepted manuscripts are measured against 500 rejections, the denominator grows to 560, and the publisher reports a lower acceptance rate than one that divides by submissions alone:

60 / 560 = 0.107, or roughly an 11% acceptance rate
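The two calculations can be sketched in a few lines of Python. This is an illustrative helper, not anything from the article; note the assumption that the article's 560 denominator implies 500 rejected manuscripts alongside the 60 accepted ones.

```python
def acceptance_rate(accepted: int, denominator: int) -> float:
    """Acceptance rate as a fraction of whichever denominator a journal uses."""
    return accepted / denominator

accepted, submitted, rejected = 60, 500, 500  # figures mirroring the example above

# Method 1: accepted divided by all submissions
rate_vs_submitted = acceptance_rate(accepted, submitted)          # 0.12 -> 12%

# Method 2: accepted divided by (accepted + rejected), the 60/560 example
rate_vs_decided = acceptance_rate(accepted, accepted + rejected)  # ~0.107 -> ~11%
```

Which denominator a journal uses can shift the reported rate by a percentage point or more, which is one reason it helps to ask how the figure was computed.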
So, in addition to knowing a journal's acceptance rate, it helps to know how they're calculating that rate. Journals with lower acceptance rates are generally thought to be more "prestigious," but is that true? For instance, some journals let their editor select which manuscripts are even sent to the editorial team, and calculate their acceptance rate on those manuscripts alone, a number much smaller than the total of received manuscripts. Other editors don't keep an accurate count and submit an estimate of their acceptance rate. Also, if a journal is highly specialized, and only a few scientists and researchers can write manuscripts within its scope, that scarcity can artificially increase the journal's acceptance rate.
The bigger question might be, though: "Does it matter?" Does a journal's acceptance rate really have any meaning as you're evaluating which journal to submit your paper to?
What Our Research Shows
We looked at over 2,300 journals (more than 80% of them published by Elsevier) and calculated an average acceptance rate of 32%. Acceptance rates ranged from just over 1% to 93.2%.
However, if we look at the different aspects of the group of journals, we can draw some general conclusions.
- Larger journals have lower acceptance rates than smaller journals (roughly 10–60%)
- Older journals have lower acceptance rates than newer journals, but not by much
- High-impact journals have relatively low acceptance rates, but thereâs much variation still (5-50% acceptance)
- We saw no relationship between the share of published papers that were reviews and the corresponding acceptance rate
- Gold open access journals had higher acceptance rates than other models of open access journals. Take note that newer journals tend to follow the Gold open access model.
- No relationship was found between the breadth of a journal's scope and its acceptance rate. However, journals in the formal sciences (mathematics, economics, computer science) had lower acceptance rates than journals focused on medicine and the life sciences.
For yet another take on this topic, check out our article on Journal Impact Factors.
How to Find Journal Acceptance Rates
While there's no comprehensive journal acceptance rate list per se, this information can often be found in journal editor reports, journal finder tools, and metric pages on journal websites. You can find these rates by using the tips below:
- Contact the journal: Many times, if you contact the editor of the journal, they will share their acceptance rate with you.
- Industry/field publishing resources: Check with library databases within your field. Sometimes you can find acceptance rates there.
- Google: Some journals publish their acceptance rate on their home page. Alternatively, if you Google a specific society, they may also publish the acceptance rates of associated journals.
- Elsevier Journal Acceptance Rate: We keep track of our journals' acceptance rates by dividing the total of accepted articles by the total of submitted articles.
Reference: Herbert, Rachel. "Accept Me, Accept Me Not: What Do Journal Acceptance Rates Really Mean?" (February 15, 2020). International Center for the Study of Research Paper (forthcoming). Available at SSRN: https://ssrn.com/abstract=3526365 or http://dx.doi.org/10.2139/ssrn.3526365
Open Access | Peer-reviewed | Research Article
Time to publish? Turnaround times, acceptance rates, and impact factors of journals in fisheries science
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing
* E-mail: [email protected]
Current address: Southeast Fisheries Science Center, National Marine Fisheries Service, Beaufort, North Carolina, United States of America
Affiliation Department of Applied Ecology, North Carolina State University, Morehead City, North Carolina, United States of America
- Brendan J. Runde
- Published: September 23, 2021
- https://doi.org/10.1371/journal.pone.0257841
Selecting a target journal is a universal decision faced by authors of scientific papers. Components of the decision, including expected turnaround time, journal acceptance rate, and journal impact factor, vary in terms of accessibility. In this study, I collated recent turnaround times and impact factors for 82 journals that publish papers in the field of fisheries sciences. In addition, I gathered acceptance rates for the same journals when possible. Findings indicated clear among-journal differences in turnaround time, with median times-to-publication ranging from 79 to 323 days. There was no clear correlation between turnaround time and acceptance rate nor between turnaround time and impact factor; however, acceptance rate and impact factor were negatively correlated. I found no field-wide differences in turnaround time since the beginning of the COVID-19 pandemic, though some individual journals took significantly longer or significantly shorter to publish during the pandemic. Depending on their priorities, authors choosing a target journal should use the results of this study as guidance toward a more informed decision.
Citation: Runde BJ (2021) Time to publish? Turnaround times, acceptance rates, and impact factors of journals in fisheries science. PLoS ONE 16(9): e0257841. https://doi.org/10.1371/journal.pone.0257841
Editor: Charles William Martin, University of Florida, UNITED STATES
Received: July 6, 2021; Accepted: September 10, 2021; Published: September 23, 2021
Copyright: © 2021 Brendan J. Runde. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Settling on a target journal for a completed scientific manuscript can be a non-scientific process. Some critical elements of the decision are intangible, e.g., attempting to reach a certain target audience or how well the paper "fits" within the scope of the journal [1–3]. Others, such as turnaround time, acceptance rate, and journal impact, can be measured, but (other than impact) these metrics are often challenging to locate, leading authors to make decisions without full information [3, 4].
Timeliness of publication has been reported as among the most important factors in the decision of target journal [4–8]. Prolonged peer review and/or production can be a major hindrance to authors [9]. Aarssen et al. [4] surveyed authors of ecological papers and found that 72.2% considered likelihood of a rapid decision a "very important" or "important" factor in choosing a journal. In some fields, research outcomes may be time-sensitive, so lengthy review can render results obsolete even before publication [10]. Desires and expectations for turnaround time are often not met: Mulligan et al. [11] found that 43% of survey respondents rated "time-to-first-decision" of their most recent article as "slow" or "very slow." Allen et al. [12] found that authors in the life sciences expect peer review to take less than 30 days (although this may be unrealistic). Moreover, Nguyen et al. [7] conducted a survey of authors in conservation biology in which the vast majority (86%) of respondents reported that their perceived optimal duration for peer review was eight weeks or under, though their experienced peer review time was on average 14.4 weeks. Over half of the respondents in Nguyen et al. [7] believed that lengthy peer reviews can have a detrimental impact on their career, including individuals who reported that the lack of timely publication obstructed their acceptance into educational institutions and caused delays to degree conferral.
Despite the obvious and documented importance of journal turnaround time, published per-journal values are almost non-existent (BR, personal observation). Some journals do publicize "time-to-first-decision" on their (or their publisher's) webpages (e.g., ICES Journal of Marine Science), but summary statistics of times to acceptance and publication remain generally unavailable to the public. Lewallen and Crane [13] recognized the importance of turnaround time and recommended authors contact potential target journals and request information directly. However, this approach is time-consuming and unlikely to result in universal acquiescence from potential target journals. Moreover, because the duration of the review process is unpredictable, journals are more likely to give an average or a range as an indicator, rather than guarantee a specific turnaround time (H. Browman, Ed. in Chief, ICES J. Mar. Sci., personal communication).
In many biological journals, individual papers contain metadata that can be used to generate turnaround times. Specifically, a majority of journals in the sciences report "Date Received," "Date Accepted," and at least one of "Date Published," "Date Available," or similar on the webpage or in the downloadable PDF of each paper (BR, personal observation). Aggregating these dates on a per-journal basis allows for the calculation of turnaround time statistics, which would be extremely valuable to authors seeking to identify an ideal target journal.
In this study, I present summary data on turnaround times for over 80 journals that regularly publish papers in fisheries science and the surrounding disciplines. I restrict my analyses to this field out of personal interest and because cross-discipline comparisons may not be apt. Moreover, my goal in this study is to provide field-specific information, and data on journals in other disciplines were beyond that scope. In addition, I provide per-journal information on impact factor and acceptance rate (where available), which are also key factors in deciding on a target journal [4]. The information presented herein is intended to be used in concert with other factors, including authors' notions of their paper's "fit," to refine the process of selecting a target journal.
Materials and methods

Literature review and journal selection
I began by developing a list of journals that regularly publish papers in fisheries science. On 20 March 2021, I searched the Web of Science Core Collection (Clarivate Analytics; v.5.35) for published articles with "fisheries or fishermen or fishes or fish or fishing" as the topic. These terms were used by Branch and Linnell [14] for a similar purpose. I refined this search by selecting only "Articles" and "Proceedings Papers," thereby excluding reviews, meeting abstracts, brief communications, et cetera. Finally, I truncated the search to include only documents published during 2010–2020. This search resulted in 242,280 published works. Using Web of Science's "Analyze Results" tool, I compiled a list of source titles (i.e., journals) that had published >400 papers meeting the specifics of my query. This threshold was used because it emerged as a natural break in the list of journals. A total of 85 journals met these requirements. I removed from this list journals that publish strictly in the field of food sciences (e.g., Food Chemistry) as well as hyper-regional journals that may not be of broad interest to authors in the field (though their exclusion is not indicative of their quality). Finally, I added several journals ad hoc that had not met the 400-paper minimum. These additions were included either because of my personal interest (e.g., Marine and Coastal Fisheries and Global Change Biology) or because of their relevance and value in among-journal comparisons (e.g., Science and Nature). After removals and additions, the list included 82 total journals.
Turnaround time.
In the spring of 2021, I accessed webpages of each of the 82 journals selected for inclusion. For each journal, I located publication history information (i.e., dates received, accepted, and published) on the webpages or in the PDFs of individual papers. I tabulated these dates for each paper. Generally, I aspired to gather dates for all papers published from present day back to at least the beginning of 2018. It was my explicit goal to compare timeliness of publication only for original research papers. For all journals where possible, I excluded papers if they were not original research articles. Some journals publish a higher proportion of reviews, brief communications, errata, or editorials, all of which likely have a shorter turnaround time than original research. Most journals list the paper type on each document, allowing for easy exclusion of papers that were not original research.
I examined distributions of time-to-acceptance (calculated as date accepted − date received) and time-to-publication (calculated as date published − date received). For date published, I used the earliest date after acceptance, i.e., if "date published online" and "date published in an issue" were both provided, I used only "date published online." Some articles reported acceptance times that are inconsistent with the usual paradigm of peer review (for instance, progressing from received to accepted in 0 days). It is highly unlikely (perhaps impossible) that an unsolicited original research article could be accepted or published within 30 days of submission. I assumed that any implausibly short publication histories either were typographical errors, artifacts of that journal's methods for tracking papers, or belonged to papers that were simply not unsolicited original research articles. I therefore excluded from further analysis any papers with a time-to-acceptance or time-to-publication of fewer than 30 days; by-journal proportions of such papers ranged from zero to 0.06 (Table 1). Similarly, some papers reported publication times on the order of several years or more since receipt. While extreme delays in publication are certainly possible, I assumed that any paper with a time-to-publication of over 600 days was either a typographical error or a result of extenuating circumstances for which the journal staff and reviewers likely played no role. I therefore excluded papers with a time-to-acceptance or a time-to-publication of over 600 days from further analysis; by-journal proportions of such papers ranged from zero to 0.08 (Table 1). Paper-by-paper information on the duration from receipt until reviews are received is generally not available. However, this so-called "time-to-first-decision" is often available on journal websites. Where available, I obtained time-to-first-decision for each journal.
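As a sketch of the date arithmetic and plausibility filter described above (the paper's actual analysis was done in R; the data and names here are invented for illustration):

```python
from datetime import date

# Hypothetical publication histories; the second paper has a 0-day
# time-to-acceptance and is excluded by the < 30-day rule.
papers = [
    {"received": date(2019, 1, 10), "accepted": date(2019, 5, 2),
     "published": date(2019, 6, 1)},
    {"received": date(2019, 3, 1), "accepted": date(2019, 3, 1),
     "published": date(2019, 3, 15)},
]

def turnaround(paper):
    tta = (paper["accepted"] - paper["received"]).days   # time-to-acceptance
    ttp = (paper["published"] - paper["received"]).days  # time-to-publication
    return tta, ttp

def plausible(tta, ttp, lo=30, hi=600):
    """Keep only histories within the paper's 30-600 day window."""
    return lo <= tta <= hi and lo <= ttp <= hi

kept = [turnaround(p) for p in papers if plausible(*turnaround(p))]
# kept == [(112, 142)]
```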
https://doi.org/10.1371/journal.pone.0257841.t001
I generated summary data for each journal in this study in R [15]. Specifically, I examined median time-to-acceptance, median time-to-publication, median time between acceptance and publication, proportion of papers published in under six months, and proportion of papers published in over one year. For the latter two metrics, I selected six months and one year because, though arbitrary, these durations may be representative of many authors' notions of short versus long turnaround times. Medians were used because distributions of time-to-acceptance and time-to-publication were usually skewed right (see Results).
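Those per-journal summaries amount to a median and two threshold proportions; a minimal Python sketch with invented times-to-publication (the paper used R, and six months is taken as 183 days here):

```python
import statistics

# Hypothetical times-to-publication (days) for one journal
ttp_days = [95, 120, 150, 170, 200, 250, 290, 380, 410, 590]

median_ttp = statistics.median(ttp_days)                              # 225.0
prop_under_6_months = sum(d < 183 for d in ttp_days) / len(ttp_days)  # 0.4
prop_over_1_year = sum(d > 365 for d in ttp_days) / len(ttp_days)     # 0.3
```

The median is robust to the right-skew noted above: the single 590-day paper barely moves it, whereas it would pull a mean up substantially.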
Some journals included in this study have an extremely broad scope. Specifically, Nature, PeerJ, PLOS ONE, Proceedings of the National Academy of Sciences, and Science publish papers on topics reaching far beyond fisheries or ecology. I hypothesized that turnaround times of fisheries papers published in these journals may be dissimilar to turnaround times for these journals overall, since internal editorial structure at the journals may differ among disciplines. I queried Web of Science for "fisheries or fishermen or fishes or fish or fishing" for each of these five journals individually, obtained turnaround times for the resulting papers, and compared median times to publication for fisheries papers and for all papers in each journal.
COVID-19 pandemic effects
During the COVID-19 pandemic, some journals offered leniency to authors and reviewers when setting deadlines to account for the increased probability of extenuating personal or professional circumstances (B. Runde, personal observation). Because of this phenomenon, I hypothesized that turnaround times for each journal may differ before and after the start of the COVID-19 pandemic. Hobday et al. [16] showed that for seven leading journals in marine science, times in review were shorter in February–June 2020 as compared to the previous year. For each journal in my study, I compared times-to-publication of all papers published during the year prior to the pandemic (1 March 2019–29 February 2020) and the year following the beginning of the pandemic (1 March 2020–28 February 2021). As above, papers were excluded from this analysis if their time-to-publication was extremely short (<30 days) or extremely long (>600 days). I conducted two-sample Wilcoxon tests to examine for differences in publication times between these two periods. Significance was evaluated at the α = 0.05 level. Analyses were performed in R [15].
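The two-sample Wilcoxon test in R (wilcox.test) is the Mann–Whitney rank-sum test. A dependency-free Python sketch of the same statistic, using the large-sample normal approximation and invented per-period data (not the paper's code or data):

```python
import math

def mann_whitney_u(x, y):
    """U statistic: count of (xi, yj) pairs with xi > yj; ties count 0.5."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

def rank_sum_pvalue(x, y):
    """Two-sided p-value via the normal approximation (no tie correction)."""
    n1, n2 = len(x), len(y)
    u = mann_whitney_u(x, y)
    mu = n1 * n2 / 2                                 # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # SD of U under H0
    z = (u - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical times-to-publication (days) for one journal, pre- vs. mid-pandemic
pre = [140, 155, 160, 180, 190, 210, 230, 250]
mid = [150, 175, 195, 220, 240, 260, 280, 300]
p = rank_sum_pvalue(pre, mid)
significant = p < 0.05  # evaluated at alpha = 0.05, as in the paper
```

For the small samples in this sketch an exact test would be preferable; the normal approximation is shown only to make the calculation transparent.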
Impact factors
The most widely used metric of impact, the impact factor, is considered flawed by some scientists due to the disproportionate influence of review articles and its propensity for manipulation [17–19]. Nonetheless, impact factor is still listed on many journal webpages and is relied on by many authors [20–22]. I obtained impact factors for 2018 (the most recent year for which they were available for all journals) from https://www.resurchify.com/impact-factor.php . Impact factor is calculated as the number of citations received in a given year by all articles published in that journal during the previous two years, divided by the number of articles published in that journal during those two years.
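The two-year calculation just described is a single division; with invented figures for a hypothetical journal:

```python
# Citations received in 2018 by this journal's 2016-2017 articles
citations_2018_to_2016_2017_items = 450
# Articles the journal published in 2016-2017
articles_published_2016_2017 = 300

impact_factor_2018 = citations_2018_to_2016_2017_items / articles_published_2016_2017
# 450 / 300 = 1.5
```

The "citable items" denominator is where the classification ambiguities discussed later in the paper (errata, letters, book reviews) enter the calculation.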
Acceptance rates
I searched the web for reliable (i.e., not anecdotal) information on per-journal acceptance rates, which was generally limited. Most journals reject a percentage of submissions at the editorial stage prior to peer review (so-called "desk rejections") due to a lack of fit within the journal's scope, deficiencies in writing quality, and/or insufficient scientific merit [23]. Of course, rejections after peer review also occur, and overall rejection rates are increasingly made available on journals' or publishers' websites or in compendium papers [e.g., 20]. Unfortunately, rates of desk rejection are still rarely available online [23]. However, many journals' overall acceptance rates are reported either on their own page or on the publisher's website. For instance, Elsevier and Springer both offer acceptance rates for some (but not all) of their journals on their JournalFinder ( https://journalfinder.elsevier.com/ ) and Journal Suggester ( https://journalsuggester.springer.com/ ), respectively. I extracted reported acceptance rates wherever available and tabulated them per journal. In addition, I sent email correspondence to Editors-in-Chief and/or publishers of each of the journals included in this study asking for their journal's desk rejection rate and overall acceptance rate. When information was provided, it was tabulated on a per-journal basis. In some cases, acceptance rates provided via email were not equal to the rate provided on the journal's webpage. In these cases, the value provided by the editor or publisher was used, as it is likely more recent and thus more valid. In such cases, the two figures did not differ by more than 10%. It is possible that there are discrepancies in the calculation of acceptance rates, e.g., resubmissions may be tabulated differently among journals. I made no attempt to account for these potential differences in the present study.
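The two rejection stages described above multiply together, which is the decomposition applied later to the 27 journals with both figures. A hypothetical sketch (the rates are invented for illustration):

```python
desk_rejection_rate = 0.40      # fraction rejected before peer review (assumed)
acceptance_given_review = 0.50  # fraction of peer-reviewed papers accepted (assumed)

# Overall acceptance = P(sent to review) * P(accepted | reviewed)
overall_acceptance = (1 - desk_rejection_rate) * acceptance_given_review  # 0.30

# Conversely, knowing the overall acceptance rate and the desk rejection
# rate recovers the conditional, post-review acceptance rate
recovered = overall_acceptance / (1 - desk_rejection_rate)  # 0.50
```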
Data analysis
I examined summary data for each journal and calculated correlations between median time-to-publication, difference in median publication time during COVID-19 as compared to the prior year, impact factor, and acceptance rate (where available). I plotted correlations using the R package "corrplot" [24]. In addition, I plotted relationships between median time-to-publication and impact factor.
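The paper computed these correlations in R; a dependency-free Python sketch of the underlying Pearson coefficient, with invented per-journal values shaped like the negative impact-factor/acceptance-rate association reported below:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical journals: higher impact factor, lower acceptance rate
impact_factor = [1.2, 2.5, 3.8, 5.1, 9.0]
acceptance_rate = [0.55, 0.40, 0.35, 0.25, 0.10]
r = pearson(impact_factor, acceptance_rate)  # negative
```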
Results

From the 82 journals in this study, I extracted publication information for 83,797 individual papers. Median times to acceptance ranged from 64 to 269 days and median times-to-publication ranged from 79 to 323 days (Fig 1). Turnaround times did not differ substantially for fisheries papers in any of the five broad-scope journals in this study (Fig 2); therefore, for the other analyses in this study data from these journals were not restricted to fish-only papers. The ranges of times-to-publication for each journal were generally broad (Fig 3); the middle 50% often spanned a range of 100 days or more. Distributions were typically skewed right. Virtually every journal in the study published one or more papers that took close to 600 days to publish (the maximum timespan retained in the analysis). Percentages of papers published in over one year ranged from 0 to 28%; percentages of papers published in under 6 months ranged from 2 to 99% (Table 1). Of 82 journals examined, 28 had significantly different (Wilcoxon p < 0.05) times-to-publication in the year following the start of the COVID-19 pandemic as compared to the previous year. Of these 28, 12 were significantly faster and 16 were significantly slower during the pandemic (Table 1).
https://doi.org/10.1371/journal.pone.0257841.g001
PNAS is Proceedings of the National Academy of Sciences .
https://doi.org/10.1371/journal.pone.0257841.g002
Central vertical lines represent medians, hinges represent the 25th and 75th percentiles, and lower and upper whiskers extend to either the lowest and highest values, respectively, or 1.5 × the inter-quartile range. Black dots represent papers that were outside 1.5 × the inter-quartile range. Boxes are shaded to correspond with 2018 Impact Factor, where darker green represents higher impact.
https://doi.org/10.1371/journal.pone.0257841.g003
I was able to obtain overall acceptance rate information for 60 journals in this study. Of these 60, I gathered desk rejection rates for 27 journals. For each of these 27, I calculated acceptance rates for papers that were peer-reviewed (i.e., not desk rejected). There was a weak positive correlation between this value and the proportion of articles that were peer-reviewed, implying that rates of the two types of rejections are not independent ( Fig 4A ). Higher impact journals tended to have higher desk rejection rates and lower percentages of acceptance given that peer review occurred. Of the 60 journals with overall acceptance rate information, I obtained time-to-first-decision for 48 journals; I plotted overall acceptance rate against these values ( Fig 4B ). There was no clear relationship between these variables; however, journals with higher impact tended to have lower acceptance rates and shorter times-to-first-decision.
A) The proportion of submissions that are peer-reviewed (i.e., 1 minus the desk rejection rate) versus the acceptance rate of submissions given that they are peer-reviewed for 27 journals that publish in fisheries and related topics. B) Time-to-first-decision (d) versus overall acceptance rate for 48 journals that publish in fisheries and aquatic sciences. Points in both panels are shaded to reflect 2018 Impact Factor of each journal, where darker green means higher impact.
https://doi.org/10.1371/journal.pone.0257841.g004
There was no strong correlation between any pairwise combination of median time-to-publication, difference in median publication time during COVID-19 as compared to the prior year, impact factor, and acceptance rate ( Fig 5 ). A moderate correlation (Pearson correlation = -0.43) was found between impact factor and overall acceptance rate, a phenomenon that has been documented previously [ 4 ]. The relationship between a journalâs median time-to-publication and impact factor was broadly scattered ( Fig 6 ).
Correlation bubbles are colored and shaded based on the calculated Pearson correlation coefficient, where negative correlations are pink, positive correlations are green, and darker shades and larger sizes represent stronger correlations.
https://doi.org/10.1371/journal.pone.0257841.g005
Inset panel shows a broader view to include Science and Nature, which have high impact factors.
https://doi.org/10.1371/journal.pone.0257841.g006
Discussion

There are clearly intrinsic differences in turnaround time among journals that publish in fisheries science (Fig 3). The causes for these differences are varied, and some are artifacts of the journal's specific publishing paradigm. For instance, some journals publish uncorrected, non-typeset versions of accepted manuscripts very shortly after acceptance; for the purposes of this study, such papers were considered published even if they were not yet in their final form. I elected to consider any post-acceptance online version "published" because such versions can be shared and cited, thereby fulfilling the desires of many authors [7] and meeting one of the overall goals of science: disseminating research results. However, some journals do not publish any manuscript version other than the finalized document. Such journals have inherently longer turnaround times than those hosting unpolished versions online, and I made no attempt to specify or account for those differences in this study.
In addition to differences in which versions are published online first, differences in journal production formats can influence turnaround time. Some journals publish monthly, some publish quarterly, and some publish on a rolling basis (particularly those that are online only). Strictly periodical journals may choose to allow accepted papers to accumulate prior to publishing several in an issue all at once. Such journals, especially those with page limitations, may have a backlog of papers that are accepted but not yet published. I made no attempt to differentiate between journals based on these format differences, which certainly influence time-to-publication.
Similarly, some journals (or publishers) may enter revised manuscripts into their system as new submissions. This practice artificially deflates turnaround times and may also artificially deflate acceptance rates. Unfortunately, to my knowledge no journal states publicly whether this is its modus operandi, precluding the possibility of applying any correction factor or per-journal caveat herein.
Beyond these differences in production time that stem from journal structure, the time it takes to publish a paper can be divided into time the paper is with editorial staff, with reviewers, and with authors after review. Differences may exist in author revision time among journals; it is possible that reviews of manuscripts submitted to higher-impact journals are more thorough and therefore require longer response times. However, I found no association between impact factor and turnaround time (Fig 6), so it may be that no such differences exist. Further, extenuating circumstances on the part of the author(s) of a paper may result in extremely lengthy revision times. No data are available on per-journal rates of extension requests, but presumably such rates are low and approximately equivalent across journals. In any case, I removed from my dataset any papers that took longer than 600 days to publish, and the median turnaround times presented in this study are robust to such outliers.
In contrast to time with the authors, it seems likely that among-journal differences in time with editorial staff and reviewers are responsible for a large portion of differences in overall turnaround time. Delays at the editorial and reviewer level may be inherent to each journal, and could be a result of editorial workload (i.e., number of submissions per editor), level of strictness of the editor-in-chief when communicating with the associate editors, or differences in persistence on the part of the editors when asking reviewers to be expeditious. In addition, some journals may have a more difficult time finding a suitable number of agreeable reviewers; this may be especially true for lower-impact journals although no association between IF and turnaround time was found. A majority of authors surveyed by Mulligan et al. [ 11 ] had declined to review at least one paper in the preceding 12 months, mainly due to the paper being outside the reviewerâs area of expertise or the reviewer being too busy with work and/or prior reviewing commitments. If among-journal differences do exist in acceptance rates of review requests, this could possibly alter turnaround times.
In this study, I treated impact factor as a proxy for the quality of individual journals. While impact factor is often still used in this way [ 22 ], its limitations are well-documented by authors across many disciplines [e.g., 25–27]. For instance, the calculation of how many "citable" documents a single journal has produced is often dubious, as this may or may not include errata, letters, and book reviews depending on the publisher [ 28 ]; misclassification can inflate or deflate a given journal's impact factor, and the rate of misclassification may depend on the individual journal's publishing paradigm [ 29 ]. Alternatives to impact factor, such as SCImago Journal Rank (SJR) and H-index, have been proposed and may in some cases be more valid metrics of journal prestige or quality [ 30 , 31 ]. Comparison of these bibliometrics among journals in fisheries was beyond the scope of this paper, and I elected to use only impact factor given its ubiquity and despite its known disadvantages.
The COVID-19 pandemic had no discernible field-wide effect on turnaround time, and differences in turnaround time during the pandemic were not correlated with acceptance rate or impact factor ( Fig 5 ). Hobday et al. [ 16 ] found minor changes in turnaround time during COVID-19 (through June 2020) for seven marine science journals; they reported only slight disruptions to scientific productivity in this field. Overall, my results corroborate those of Hobday et al. [ 16 ], although some journals took significantly more or less time to publish during COVID-19. It is unclear whether these changes were caused by the pandemic, as non-pandemic factors may have affected turnaround times at these individual journals.
The turnaround times, acceptance rates, and impact factors presented in this paper are snapshots and may change over time. The degree to which these metrics change is likely variable among journals. However, barring major changes in journal formats or editorial regimes, the data presented here are probably applicable for at least the next several years. Indeed, median monthly turnaround times for most journals in this study were approximately static from January 2018 to April 2021 ( Fig 7 ). Similarly, acceptance rates and impact factors [ 32 ] are generally strongly auto-correlated from one year to the next. I therefore suggest that the metrics presented here can be used by authors as a baseline, but if more than several years have passed, readers may be better served by obtaining updated information (particularly on impact factor and acceptance rate, which are generally more accessible than turnaround time). In addition, it is theoretically possible that this paper itself may alter turnaround times and/or acceptance rates for some journals. Enlightened readers may elect to change their submission habits in favor of certain journals that are more expeditious or that otherwise meet their priorities for a given paper. Authors without a preconceived notion of a specific target journal should still consider the paper's "fit" to be the most important factor in their decision [ 1 ]. I suggest that after assembling a shortlist based on fit, authors should use the results of this paper to select a journal that best aligns with their priorities.
The dashed horizontal line at 1.0 represents the baseline proportion.
https://doi.org/10.1371/journal.pone.0257841.g007
Supporting information
https://doi.org/10.1371/journal.pone.0257841.s001
Acknowledgments
This manuscript benefited greatly from discussions with H. I. Browman, D. D. Aday, W. L. Smith, R. C. Chambers, N. M. Bacheler, K. W. Shertzer, S. R. Midway, S. M. Lombardo, and C. A. Harms. My thanks to K. W. Shertzer and H. I. Browman for reviewing early drafts of this paper. I am grateful to my advisor, J. A. Buckel, for allowing me the time to pursue this side project while I worked on my dissertation. Thanks to the numerous editors, publishers, and other journal staff who replied to my requests for journal information.
- 15. R Core Team. R: a language and environment for statistical computing. Vienna, Austria. URL http://www.R-project.org/ . 2021.
- 24. Wei T, Simko V. R package "corrplot": Visualization of a Correlation Matrix (Version 0.84). https://github.com/taiyun/corrplot . 2017.
- 28. Rossner M, Van Epps H, Hill E. Irreproducible results: a response to Thomson Scientific. Rockefeller University Press; 2008.
A guide to journal acceptance rates
Journals are responsible for publishing and disseminating research relevant to particular fields. An integral part of that role is gatekeeping: selecting the research that should be published within the pages of the journal. Because not every unsolicited submission can be accepted, the journal’s editorial review board rejects manuscripts, either before or after peer review.
There is no fixed annual rejection rate; it is a range that naturally develops over time. Journals sometimes set monthly targets, but these are used only to note when rates fluctuate significantly, which occurs regularly. Our focus in this article will be on the concept of journal acceptance rates, as well as how they are calculated.
What do journal acceptance rates measure?
A journal typically uses its acceptance (or rejection) rate to track how often papers are accepted or rejected and to observe any noteworthy patterns. A journal’s acceptance rate reflects the standard of its submissions. Unlike impact factors, these rates are internally assessed quality-control measures. (Learn more about Journal impact factor .)
A journal’s acceptance rate indicates what percentage of all submissions are published, and authors may weigh this simple indicator when deciding where to submit a manuscript. To assist submitters, ICSR strongly recommends that journal acceptance rates be made easily accessible to the public.
Is it possible to find the acceptance rate of a journal?
Acceptance rates for journals in specialized fields can sometimes be challenging to find. They are nevertheless useful for gauging a publication’s credibility: journals with lower acceptance rates are commonly perceived as more prestigious, and low acceptance rates tend to be seen in large, established, high-impact journals.
In most cases, journals do not publicly report acceptance rates, partly because they believe low rates might deter authors from submitting their work. It is also important to note that journal editors decline manuscripts for several reasons, often providing researchers with helpful feedback; some niche journals decline papers simply because they fall outside the journal’s scope.
Just as fraudulent journals target academics seeking publication, fake websites abound on the web, so rely on well-known journal ranking resources when determining where to publish. In many fields or disciplines, acceptance rates are provided by several sources. These rates can be found using the following approaches:
- You can often find out the acceptance rate of a journal by contacting its editor.
- There may be databases within your field that contain information about acceptance rates for field publications.
- A Google search may turn up a journal’s acceptance rate on its own webpage.
- In addition, you can search for acceptance rates of associated journals on the website of an association.
- For the MLA International Bibliography, the “Directory of Periodicals” provides acceptance data for literature, linguistics, and folklore journals.
How are acceptance rates calculated?
There is no single standard method for calculating acceptance rates. Some journals calculate the rate from all manuscripts received. Others count only the manuscripts that the editor forwards to reviewers, so the rate is calculated from reviewed manuscripts rather than from total submissions. Furthermore, many editors keep no precise records and report only approximate estimates.
Each journal accepts a different number of manuscripts based on several practical factors, such as the quality of the manuscripts, the significance of the papers, and author affiliations; each factor affects the acceptance rate.
To calculate the acceptance rate of a journal, simply divide the number of accepted submissions by the number of received submissions. For example, a journal’s acceptance rate is 5% if it accepts 50 manuscripts in a year in which 1,000 are submitted:
50/1000 = 0.05, or a 5% acceptance rate
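The arithmetic above can be sketched in a few lines of Python. This is an illustration only: the count of 400 reviewed manuscripts is a hypothetical figure, used to show how counting only peer-reviewed submissions (as some journals do) inflates the headline rate.

```python
def acceptance_rate(accepted: int, denominator: int) -> float:
    """Acceptance rate as a fraction of the chosen denominator."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return accepted / denominator

# Worked example from the text: 50 accepted out of 1,000 submissions.
overall = acceptance_rate(50, 1000)       # 0.05, i.e. 5%

# Hypothetical: the same 50 acceptances counted against only the
# 400 manuscripts sent out for peer review.
reviewed_only = acceptance_rate(50, 400)  # 0.125, i.e. 12.5%

print(f"{overall:.1%} of all submissions")        # 5.0% of all submissions
print(f"{reviewed_only:.1%} of reviewed papers")  # 12.5% of reviewed papers
```

The same counts yield two very different headline numbers, which is why the denominator a journal uses matters when comparing rates.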
Generally, acceptance rates range from about 10% to 50% between larger and smaller journals. There is only a modest difference in acceptance rates between older and newer journals, while considerable variation remains even among high-impact journals (3-50% acceptance).
About Aayushi Zaveri
Aayushi Zaveri majored in biotechnology engineering. She is currently pursuing a master's degree in Bioentrepreneurship from Karolinska Institute. She is interested in health and diseases, global health, socioeconomic development, and women's health. As a science enthusiast, she is keen on learning more about the scientific world and wants to play a part in making a difference.
Don't worry about journal acceptance rates – and here's why
Deciding where to submit a manuscript? A journal acceptance rate is a useful signal to prospective authors of the probability of acceptance of their manuscript – no more and no less than that
Rachel Herbert
"Gate-keeping" is the selection of research that is deemed worthy of and relevant for publication in a journal. Choosing from unsolicited manuscripts submitted to the journal, the editorial team accepts some and rejects others, often after peer review. A way to quantify this process is the "journal acceptance rate".
The acceptance rate appears regularly on journal home pages and via journal finder tools. But what does this seemingly straightforward measure signal to an author considering where to submit a manuscript?
What do acceptance rates measure?
Several practical factors influence the number of manuscripts that each journal accepts; these include the quality, interest in or importance of submitted manuscripts, the number of and relationships to other journals in the same field, and any manuscript backlogs or page limitations. Each factor will have a varying impact on acceptance rate.
The drivers of submission rates might include: the size of the field, the number of and relationships among journals, journal âbrandâ awareness or perceived prestige, and the potential impact of successful publication for the author.
The Metrics Toolkit suggests that the rate can be used as a "proxy for perceived prestige and demand as compared to availability". Yet overlapping drivers for the two factors determine an acceptance rate.
And the concept of separating the "wheat from the chaff" is pushed to the limit when journals such as Nature and Science have acceptance rates of 10 per cent or less. Being rejected from extremely selective journals surely can't tell us much about that manuscript.
Comparing acceptance rates with other journal attributes
In 2020, the International Center for the Study of Research at Elsevier explored a set of 2,371 journals â the majority of which (82 per cent) are published by Elsevier â and their acceptance rates in 2017. The journals represent a broad set of subject areas, journal types and ages, with all but the social sciences and arts and humanities well represented, a limitation of the findings presented here. The journals in the dataset had acceptance rates ranging from 1.1 per cent to 93.2 per cent in 2017, with an average of 32 per cent: overall, journals tend to accept fewer articles than they reject (Figure 1).
We then studied various attributes of the journals to see what aspects correlated with high or low acceptance rates.
Low acceptance rates are typically associated with very large, very old and very high-impact journals, as well as those that are not gold open access. That's a mixed bag of attributes. The relationship to impact is nuanced and not strong enough to be a clear signal. Importantly, even where relationships between journal attributes and acceptance rates could be identified, the variance in the acceptance rate is still so high that the findings are unlikely to be useful in the real world.
So where does that leave authors considering which journals they should submit their manuscript to?
We believe that journal acceptance rates do hold meaning; they indicate to prospective authors the probability of acceptance of their manuscript, based on historical success rates at the same journal. As such, we believe that journal acceptance rates have a place in the array of journal metrics. However, acceptance rate is not a signal of other attributes, and so it should be considered alongside other metrics and indicators but not conflated with them.
Rachel Herbert is a senior research evaluation manager, working within the International Center for the Study of Research , at Elsevier.
If you found this interesting and want advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the THE Campus newsletter .
For more about journal acceptances and their interplay with other journal attributes, read the full report from the International Center for the Study of Research at Elsevier: â Accept me, accept me not: What do journal acceptance rates really mean? â
Kathryn A. Martin Library
Article Processing Charges (APCs)
An Article Processing Charge (APC) is a publishing fee charged to authors who publish in an open access journal. The fee shifts the journal's production cost to the author and replaces the subscription charge that libraries and researchers would otherwise pay to access an article behind a paywall.
Acceptance Rates
The method for computing acceptance rates may vary among journals.
Typically it refers to the number of manuscripts accepted for publication relative to the number of manuscripts submitted within the last year.
- Browse the 'information for authors' section of a journal's website
- Do a Google search for: Journal Title "acceptance rate"
More online and print sources from various fields can be found at these library FAQ pages:
- St. John's University: Journal Rankings and Acceptance Rate
- University of North Texas Library: Journal Acceptance Rates
- The American Psychological Association (APA) provides historical data on its affiliated journals, which can be found in the Journal Statistics and Operations Data .
Peer Review Status & Distribution
Additional Acceptance Rate Figures
Where to Publish: The Right Journal for Your Article
Our scholarly communications blog lists ten sources:
https://lib.d.umn.edu/scholarly-communications/right-journal-your-article
- Last Updated: Nov 7, 2023 3:55 PM
- URL: https://libguides.d.umn.edu/scholarly_publishing
Measuring Research Impact and Quality
Journal acceptance rates: basics
Definition: The number of manuscripts accepted for publication compared to the total number of manuscripts submitted in one year. The exact method of calculation varies depending on the journal. Journals with lower article acceptance rates are regarded as more prestigious.
More information
- Accept me, accept me not: What do journal acceptance rates really mean? "Study considers what journal acceptance rates can tell a submitting author about a journal"
- How to interpret acceptance rates.
Check the publisher's website for the journal to see if the acceptance rates are listed. Currently, only a few publishers list their journals' acceptance rates online, but the numbers are increasing. Following are a few examples.
- American Association for the Advancement of Science (AAAS) Journals
- Journal Insights (Elsevier) Provides acceptance rates for some of the journals published by Elsevier.
- Taylor and Francis journals Click on Journal Metrics in the left column of the informational page for a specific journal title.
E-mail the editor of the journal to request the acceptance rates.
- The name of the editor should be listed on the journal website.
- In the e-mail include the reason why you are requesting acceptance rates and the years needed.
- Most but not all editors will provide acceptance rates.
Business journals
- Cabell's Directory of Publishing Opportunities in Accounting
- Cabell's Directory of Publishing Opportunities in Computer Science and Business Information Systems
- Cabell's Directory of Publishing Opportunities in Economics and Finance
- Cabell's Directory of Publishing Opportunities in Management
- Cabell's Directory of Publishing Opportunities in Marketing
Education/psychology journals
- Cabell's Directory of Publishing Opportunities in Educational Curriculum and Methods
- Cabell's Directory of Publishing Opportunities in Educational Psychology and Administration
- Cabell's Directory of Publishing Opportunities in Educational Technology and Library Science
- American Psychological Association Journal Statistics and Operations Data Annual report on American Psychological Association journals. Includes information about rejection rates, publication lag time and related statistics.
- Cabell's Directory of Publishing Opportunities in Psychology Print copy shelved in Ellis Library Reference Call Number: BF76.7 .C33 Latest year available: 2006
Health Science/medical journals
The following link is to a page from the Health Sciences Library pertaining to journal acceptance rates in health science journals.
- Promotion & Tenure Resources for Research/Scholarship Assessment
Humanities journals
- MLA Directory of Periodicals Lists the acceptance rate for some selected periodicals in literature, language, linguistics, folklore.
- Last Updated: Apr 2, 2024 11:01 AM
- URL: https://libraryguides.missouri.edu/impact
Acceptance rates of peer-reviewed journals
by Esther van de Vosse | Jan 26, 2021 | Publication | 3 comments
Did you write an excellent manuscript with groundbreaking data that will have a great impact on research in your field? In that case you should definitely submit your manuscript to an extremely high-ranking journal, such as Nature or Science, or the top-ranking journal in your field. If your data are good but not expected to result in widely shared press releases and interviews in major newspapers, you may want to submit your manuscript to a good or excellent journal where you have a realistic chance of getting it accepted for publication. To select a relevant journal, start by reading my tips ; doing so will greatly improve your chances of getting your manuscript accepted and reduce the frustration caused by rejected articles.
Acceptance rates of high-ranking journals are low. For several journals, studies have reported the number of submitted manuscripts, the number sent out for peer review, and the final number accepted for publication. One of these studies analysed the Nature journals . From March 2015 to February 2017, 128,454 manuscripts were submitted, of which 79% were rejected outright. After peer review, another 56% were rejected, leading to an overall acceptance rate of about 12%. For the journal Nature itself this rate was less than 7% of the 20,406 manuscripts submitted.
Many journals publish their acceptance rates on their websites, although it is sometimes not clear whether these refer to percentage of peer-reviewed manuscripts or percentage of total manuscripts submitted. Below I have indicated the acceptance rates of a number of life sciences and medical journals, with a link to the source of the information.
Acceptance rates according to journal websites (data from February 2023)
Science rejects about 84% of submitted manuscripts during the initial screening stage, and accepts 6.1% of the original research papers submitted (data of 2022).
Nature has acceptance rates for various years on its website; the most recent year with data is 2017, when the acceptance rate was 7.6%.
The British Medical Journal (BMJ) accepts about 7% of all the 7000-8000 manuscripts submitted each year, but many of these are not research articles. Only 4% of research articles are accepted.
Other journals in the BMJ group: Gut 12% acceptance in 2022, BMJ Open 47% in 2022, BMJ Case Reports 38% in 2022, BMJ Global Health 14% in 2022.
The New England Journal of Medicine (NEJM) receives more than 16,000 research and other submissions for publication each year. About 5% of the original research submissions are accepted for publication.
PLOS ONE is a respected journal with a very broad scope that has an acceptance rate of over 48% (in 2020).
PLOS Medicine has an acceptance rate of about 10%.
The Journal of the American Medical Association (JAMA)’s acceptance rate is 11% of the more than 7,000 major manuscripts it receives annually, and only 4% of the more than 4,400 research papers received.
The acceptance rates of some other Elsevier journals: The Lancet , about 5%; The Journal of Pediatrics , 14.7% in 2022; Biomaterials , 14.2% in 2022; Gastroenterology , 10-12%.
The Journal of Adolescent Health reported an acceptance rate of 15% in 2010; I could not find more recent data.
Acceptance rates of journals (previously) known as predatory
Be aware of predatory journals that will accept anything. These are very low quality journals.
The average acceptance rate of Dove Press journals was 32% in 2022. Dove Press was also originally on the list of predatory publishers but has since been removed.
The acceptance rates of all Hindawi journals can be found on one page and range from 8% to 62%. Note: some Hindawi journals have been suspected to be predatory journals, but are now considered borderline.
Presubmission inquiries
Authors who would like to know whether their manuscript would be appropriate for publication in a specific journal can sometimes send a presubmission inquiry to the editors. The minimum requirements for presubmission inquiries are often an abstract and a cover letter. Find out on the website of the journal whether this is an option.
Of the published medical papers how many are scholarly articles?
Of the peer reviewed medical research studies published, what percentage are later classified as a Scholarly Article.
Dear Mike, I am sorry I did not see your comment on my blog until now. Somehow the notifications were turned off.
Can you let me know what you mean by ‘Scholarly Article’? I did not hear of this expression being an official designation, it is just a general opinion I think. Esther
Editorial criteria and processes
This document provides an outline of the editorial process involved in publishing a scientific paper (Article) in Nature , and describes how manuscripts are handled by editors between submission and publication.
Editorial processes are described for the following stages: At submission | After submission | After acceptance
At submission
Criteria for publication
The criteria for publication of scientific papers (Articles) in Nature are that they:
- report original scientific research (the main results and conclusions must not have been published or submitted elsewhere)
- are of outstanding scientific importance
- reach a conclusion of interest to an interdisciplinary readership.
Further editorial criteria may be applicable for different kinds of papers, as follows:
- large dataset papers : should aim to either report a fully comprehensive data set, defined by complete and extensive validation, or provide significant technical advance or scientific insight.
- technical papers: papers that make solely technical advances will be considered in cases where the technique reported will have significant impacts on communities of fellow researchers.
- therapeutic papers: in the absence of novel mechanistic insight, therapeutic papers will be considered if the therapeutic effect reported will provide significant impact on an important disease.
Articles published in Nature have an exceptionally wide impact, both among scientists and, frequently, among the general public.
Who decides which papers to publish?
Nature 's aim is to publish the best research across a wide range of scientific fields, which means it has to be highly selective. As a result, only about 8% of submitted manuscripts will be accepted for publication. Most submissions are declined without being sent out for peer review.
Nature does not employ an editorial board of senior scientists, nor is it affiliated to a scientific society or institution, thus its decisions are independent, unbiased by scientific or national prejudices of particular individuals. Decisions are quicker, and editorial criteria can be made uniform across disciplines. The judgement about which papers will interest a broad readership is made by Nature 's editors, not its referees. One reason is because each referee sees only a tiny fraction of the papers submitted and is deeply knowledgeable about one field, whereas the editors, who see all the papers submitted, can have a broader perspective and a wider context from which to view the paper.
How to submit an Article
Authors should use the formatting guide section to ensure that the level, length and format (particularly the layout of figures and tables and any Supplementary Information) conforms with Nature 's requirements, at submission and each revision stage. This will reduce delays. Manuscripts should be submitted via our online manuscript submission system . Although optional, the cover letter is an excellent opportunity to briefly discuss the importance of the submitted work and why it is appropriate for the journal. Please avoid repeating information that is already present in the abstract and introduction. The cover letter is not shared with the referees, and should be used to provide confidential information such as conflicts of interest and to declare any related work that is in press or submitted elsewhere. All Nature editors report to the Editor of Nature , who sets Nature 's publication policies. Authors submitting to Nature do so on the understanding that they agree to these policies .
After submission
What happens to a submitted Article?
The first stage for a newly submitted Article is that the editorial staff consider whether to send it for peer review. On submission, the manuscript is assigned to an editor covering the subject area, who seeks informal advice from scientific advisors and editorial colleagues, and who makes this initial decision. The criteria for a paper to be sent for peer review are that the results seem novel, arresting (illuminating, unexpected or surprising), and that the work described has both immediate and far-reaching implications. The initial judgement is not a reflection on the technical validity of the work described, or on its importance to people in the same field.

Special attention is paid by the editors to the readability of submitted material. Editors encourage authors in highly technical disciplines to provide a slightly longer summary paragraph that describes clearly the basic background to the work and how the new results have affected the field, in a way that enables nonspecialist readers to understand what is being described. Editors also strongly encourage authors in appropriate disciplines to include a simple schematic summarizing the main conclusion of the paper, which can be published with the paper as Supplementary Information . Such figures can be particularly helpful to nonspecialist readers of cell, molecular and structural biology papers.

Once the decision has been made to peer-review the paper, the choice of referees is made by the editor who has been assigned the manuscript, who will be handling other papers in the same field, in consultation with editors handling submissions in related fields when necessary. Most papers are sent to two or three referees, but some are sent to more or, occasionally, just to one. Referees are chosen for the following reasons:
- independence from the authors and their institutions
- ability to evaluate the technical aspects of the paper fully and fairly
- currently or recently assessing related submissions
- availability to assess the manuscript within the requested time.
Referees' reports
The ideal referee's report indicates
- who will be interested in the new results and why
- any technical failings that need to be addressed before the authors' case is established.
Although Nature's editors themselves judge whether a paper is likely to interest readers outside its own immediate field, referees often give helpful advice, for example if the work described is not as significant as the editors thought, or if its significance has been undersold. Although Nature's editors regard it as essential that any technical failings noted by referees are addressed, they are not so strictly bound by referees' editorial opinions as to whether the work belongs in Nature.
Competitors
Some potential referees may be engaged in competing work that could influence their opinion. To avoid such conflicts of interest, Nature requires potential referees to disclose any professional and commercial competing interests before undertaking to review a paper, and requires referees not to copy papers or to circulate them to unnamed colleagues. Although Nature editors make every effort to ensure manuscripts are assessed fairly, Nature is not responsible for the conduct of its referees. Nature welcomes authors' suggestions for suitable independent referees (with their contact details), but editors are free to decide themselves whom to use as referees. Nature editors will normally honour requests that a paper not be sent to one or two (but no more) competing groups for review.
Transparent peer review
Nature uses a transparent peer review system: for manuscripts submitted from February 2020, we can publish the reviewer comments to the authors and the author rebuttal letters of published original research Articles. Authors are given the opportunity to opt out of this scheme at the completion of the peer review process, before the paper is accepted. If the manuscript was transferred to us from another Nature Research journal, we will not publish reviewer reports or author rebuttals for versions of the manuscript considered by the originating Nature Research journal. The peer review file is published online as a supplementary peer review file. Although we hope that the peer review files will provide a detailed and useful view into our peer review process, it is important to note that these files will not contain all the information considered in the editorial decision-making process, such as the discussions between editors, editorial decision letters, or any confidential comments made by reviewers or authors to the editors.
This scheme only applies to original research Articles, and not to Review articles or to other published content. For more information, please refer to our FAQ page .
Reviewer information
In recognition of the time and expertise our reviewers provide to Nature's editorial process, we formally acknowledge their contribution to the external peer review of articles published in the journal. All peer-reviewed content will carry an anonymous statement of peer reviewer acknowledgement, and for those reviewers who give their consent, we will publish their names alongside the published article. We will continue to publish peer reviewer reports where authors opt in to our separate transparent peer review scheme. In cases where authors opt in to publication of peer reviewer comments and reviewers opt in to being named, we will not link a reviewer's name to their report unless they choose to sign their comments to the author with their name. For more information, please refer to our FAQ page.
If reviewers wish to be named, their names will appear in alphabetical order at the end of the paper in a statement such as the one below:
- Nature thanks [Name], [Name] and [Name] for their contribution to the peer review of this work.
Any reviewers who wish to remain anonymous will be acknowledged using a slightly modified statement:
- Nature thanks [Name], [Name] and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
If no reviewers agree to be named, we will still acknowledge their valuable service using the statement below:
- Nature thanks the anonymous reviewers for their contribution to the peer review of this work.
Nature makes decisions about submitted papers as rapidly as possible. All manuscripts are handled electronically throughout the consideration process. Authors are usually informed within a week if the paper is not being considered. Most referees honour their prior agreement with Nature to deliver a report within two weeks or another agreed time limit, and send their reports online. Decisions by editors are routinely made very rapidly after receipt of reports, and Nature offers an advance online publication (AOP) service for an increasing number of manuscripts.
What the decision letter means
All Articles published in Nature go through at least one round of review, usually two or three, sometimes more. At each stage, the editor will discuss the manuscript with editorial colleagues in the light of referees' reports, and send a letter to the author offering one of the following options:
- The paper is accepted for publication without any further changes required from the authors.
- The paper is accepted for publication in principle once the authors have made some revisions in response to the referees' comments. Under these circumstances, revised papers are not usually sent back to the referees because further technical work has not been required, but are accepted for publication once the editors have checked that the referees' suggestions have been implemented and the paper is in the required format (the formatting guide section is helpful to this end).
- A final decision on publication is deferred, pending the authors' response to the referees' comments. Under these circumstances, further experiments or technical work are usually required to address some or all of the referees' concerns, and revised papers are sent back to some or all of the referees for a second opinion. Revised papers should be accompanied by a point-by-point response to all the comments made by all the referees.
- The paper is rejected because the referees have raised considerable technical objections and/or the authors' claim has not been adequately established. Under these circumstances, the editor's letter will state explicitly whether or not a resubmitted version would be considered. If the editor has invited the authors to resubmit, authors must ensure that all the referees' technical comments have been satisfactorily addressed (not just some of them), unless specifically advised otherwise by the editor in the letter, and must accompany the resubmitted version with a point-by-point response to the referees' comments. Editors will not send resubmitted papers to the reviewers if it seems that the authors have not made a serious attempt to address all the referees' criticisms.
- The paper is rejected with no offer to reconsider a resubmitted version. Under these circumstances, authors are strongly advised not to resubmit a revised version as it will be declined without further review. If the authors feel that they have a strong scientific case for reconsideration (if the referees have missed the point of the paper, for example) they can appeal the decision in writing. But in view of Nature's extreme space constraints and the large number of papers under active consideration at any one time, editors cannot assign a high priority to consideration of such appeals. The main grounds for a successful appeal for reconsideration are if the author can identify a specific technical or other point of interest which had been missed by the referees and editors previously. Appeals written in general or vague terms, or that contain arguments not relevant to the content of the particular manuscript, are not likely to be successful. Manuscripts cannot be submitted elsewhere while an appeal is being considered.
Editors' letters also contain detailed guidance about the paper's format and style where appropriate (see below), which should be read in conjunction with the manuscript formatting guide when revising and resubmitting. In replying to the referees' comments, authors are advised to use language that would not cause offence when their paper is shown again to the referees, and to bear in mind that if a point was not clear to the referees and/or editors, it is unlikely that it would be clear to the nonspecialist readers of Nature.
If Nature declines to publish a paper and does not suggest resubmission, authors are strongly advised to submit their paper for publication elsewhere. If an author wishes to appeal against Nature's decision, the appeal must be made in writing, not by telephone, and should be confined to the scientific case for publication. Nature's editors are unable to assign high priority to consideration of appeals.

Authors often ask for a new referee to be consulted, particularly in cases where two referees have been used and one is negative, the other positive. Nature is reluctant to consult new referees unless there is a particular, relevant area of scientific expertise that was lacking in the referees already used. Authors should note that as Nature is an interdisciplinary journal, referees for a paper are chosen for different reasons; for example, a technical expert and a person who has a general overview of a field might both referee the same paper. A referee might be selected for expertise in only one area, for example to judge if a statistical analysis is appropriate, or if a particular technique that is essential to underpin the conclusion has been undertaken properly. This referee's opinion must be satisfied for the manuscript to be published, but as this referee may not know about the field concerned, an endorsement in isolation from the other referee(s) would not constitute grounds for publication.

Editors' decisions are weighted according to the expertise of the referees, and not by a "voting" procedure. Hence, Nature prefers to stick with the original referees of a particular paper rather than to call in new referees to arbitrate, unless there is some specific way in which the referee can be shown to be technically lacking or biased in judgement. If Nature's editors agree to reconsider a paper, the other original referee(s) will have the chance to see and comment on the report of the referee who is the subject of the complaint.
New referees can often raise new sets of points, which complicates and lengthens the consideration process instead of simplifying it. If an author remains unsatisfied, he or she can write to the Editor , citing the manuscript reference number. In all these cases, it is likely that some time will elapse before Nature can respond, and the paper must not be submitted for publication elsewhere during this time.
After acceptance
See this document for a full description of what happens after acceptance and before publication.
Formats and lengths of papers
Space in Nature is extremely limited, and so format requirements must be strictly observed, as advised by the editor handling the submission, and detailed in the manuscript formatting guide.
Subediting of accepted papers
After a paper is accepted, it is subedited (copyedited) to ensure maximum clarity and reach, a process that enhances the value of papers in various ways. Nature's subeditors are happy to advise authors about the format of their Articles after acceptance for publication. Their role is to:
- edit the language for maximum clarity and precision for those in other disciplines. Special care is given to papers whose authors' native language is not English, and special attention is given to summary paragraphs.
- ensure that the paper is at the length specified by the manuscript editor (including number of figures).
- ensure that the terminology and notation conform to Nature's house style.
- ensure that the figures and tables are clear and will fit in the space available.
Proofs and reprints
Our subeditors send authors the edited text for approval before it is typeset. This enables most queries to be resolved before proof stage. Authors subsequently receive an e-proof, including the figures, and can also download a PDF proof of the layout. We suggest that authors send proofs to co-authors for them to check, but request that changes among the co-authors are coordinated so that only one author communicates with Nature and only one set of corrections is sent. The corresponding (or other single designated) author is responsible on behalf of all co-authors for the accuracy of all content, including spelling of names and current affiliations of all co-authors, so please ensure these are checked carefully. Proofs are cycled between Nature's production staff, our typesetter and the author by means of an online production-tracking system. Authors are provided with an encrypted link to this system after their paper has been accepted. Instructions for ordering reprints are provided after the paper is scheduled for publication.
Publication and the media
Nature papers are assigned to a print issue two weeks before publication, at which time authors will receive an e-mail notifying them of their scheduled print publication date. Many papers are published online ahead of the print publication date: corresponding authors of these papers will be informed by e-mail when the online publication date is confirmed, one week ahead of publication.
The Journals and Editorial press office distributes embargoed mailings highlighting upcoming content to registered journalists six days in advance of publication. The titles of forthcoming papers will be listed on the mailing, along with the corresponding authors' contact details, which means authors may receive media enquiries relating to their paper during this embargo period.
Authors are free to discuss their paper with the media from six days before the publication date, and are asked to ensure that Nature Research's embargo conditions are understood by journalists and others. Journalists have embargoed access to papers via the Nature Research press site, and are permitted to show papers to independent specialists a few days in advance of publication, under embargo conditions, solely for the purpose of eliciting comment on the work described.
Before publication, the Journals and Editorial press office also informs the press / public information officers of authors' institutions to allow them to prepare their own publicity. Authors and their institutions are advised to coordinate any of their own publicity with the Journals and Editorial press office by e-mail after their paper is accepted for publication.
The content of the press release mailing and the papers described therein is embargoed until 1600 London time/1100 US Eastern Time on the day of publication, which is the time at which the research is published and becomes publicly available. In all cases, authors are expected to abide by the Nature Research press and embargo policies.
Paper acceptance prediction at the institutional level based on the combination of individual and network features
- Published: 05 January 2021
- Volume 126, pages 1581–1597 (2021)
- Wenyan Wang 1,3,
- Jun Zhang 2,
- Fang Zhou 3,
- Peng Chen 2 &
- Bing Wang (ORCID: orcid.org/0000-0003-4945-7725) 1,2,3
The number of papers published in top conferences or journals is an important measure of the innovation ability of institutions, and ranking paper acceptance rates can be helpful for evaluating an affiliation's potential in academic research. Most studies focus only on the quality of the paper itself, and apply simple statistics to estimate the contribution of institutions. In this work, a novel method is proposed that combines different types of affiliation and author features to predict paper acceptance at the institutional level. Based on the publication history, this work first calculates the affiliation scores, constructs an institutional collaboration network and analyzes the importance of the institutions using network centrality measures. Four measures of the authors' influence and capability are then extracted to take the contributions of authors into consideration. Finally, a random forest algorithm is adopted to solve the prediction problem of paper acceptance. As a result, the method improves the ranking of paper acceptance rates to an NDCG@20 of 0.865, which is superior to other state-of-the-art approaches. The experimental results show the effectiveness of the proposed method, and that the information in different types of features can be complementary for predicting paper acceptance rates.
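The NDCG@20 figure reported above can be unpacked with a short sketch. NDCG@k compares the discounted cumulative gain of a predicted ranking against the ideal (perfectly sorted) ranking; the function names and acceptance counts below are illustrative placeholders, not the paper's actual implementation or data.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: item at rank i is discounted by log2(i + 1)
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances_in_predicted_order, k):
    # NDCG@k = DCG of the predicted ordering / DCG of the ideal ordering
    ideal = sorted(relevances_in_predicted_order, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(relevances_in_predicted_order, k) / idcg if idcg > 0 else 0.0

# Hypothetical true acceptance counts of five institutions, listed in the
# order a model ranked them (a perfect model would list them sorted).
print(round(ndcg_at_k([30, 25, 28, 10, 5], k=3), 3))  # → 0.993
```

A perfectly ordered ranking scores 1.0, so an NDCG@20 of 0.865 means the predicted institutional ranking is close to, but not exactly, the true ordering of the top 20.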
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Nos. 61472282, and 61672035) and Anhui Provincial Department of Education (No. KJ2019ZD05), Open Fund from Key Laboratory of Metallurgical Emission Reduction & Resources Recycling (No. KF 2017-02), the fund of Co-Innovation Center for Information Supply & Assurance Technology in AHU (No. ADXXBZ201705), and Anhui Scientific Research Foundation for Returned Scholars.
Author information
Authors and affiliations.
School of Metallurgical Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
Wenyan Wang & Bing Wang
Co-Innovation Center for Information Supply and Assurance Technology, Anhui University, Hefei, 230032, Anhui, China
Jun Zhang, Peng Chen & Bing Wang
School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
Wenyan Wang, Fang Zhou & Bing Wang
Corresponding authors
Correspondence to Peng Chen or Bing Wang .
About this article
Wang, W., Zhang, J., Zhou, F. et al. Paper acceptance prediction at the institutional level based on the combination of individual and network features. Scientometrics 126, 1581–1597 (2021). https://doi.org/10.1007/s11192-020-03813-x
Received : 11 July 2020
Published : 05 January 2021
Issue Date : February 2021
DOI : https://doi.org/10.1007/s11192-020-03813-x
- Academic research institutions
- Paper acceptance
- Feature combination
- Random forest
Journal Rejections: How Common Are They?
Journal rejections are common, irrespective of the researcher's career stage. The acceptance rate of scholarly journals is an important selection criterion for authors when choosing where to submit their manuscripts. However, information about the acceptance (or rejection) rates of individual journals is seldom available.
There are significant differences between fields of science, with biomedicine having higher acceptance rates than, for instance, the social sciences. Open access journals usually have higher acceptance rates than subscription journals. This is particularly true for so-called OA mega-journals, whose peer review criteria focus on sound science only. Let us find out some more about journal acceptance rates.
Can Rejection Rates/Acceptance Rates of Journals Be Calculated?
Unfortunately, the journal industry does not have a set standard for calculating rejection/acceptance rates. A few factors that can influence a journal's acceptance/rejection rate are:
- The reputation of the journal (the acceptance rate is lower for reputed international journals than for local journals)
- The total number of papers submitted to the journal
- Whether submissions fall within its aims and scope (one can look at the website to understand the aims and scope of the journal, and read previous issues to get a sense of the kinds of papers that have been published successfully)
- The number of papers sent out for peer review
Studies Related to Finding Journal Acceptance/Rejection Rates
One such study was done by Thomson Reuters (2012), leveraging the massive amounts of data collected in their ScholarOne submission and publishing system. This system, used by over 4,000 journals from over 300 different publishers, includes commercial publishers, scholarly societies, and university presses. The study provides global averages across manuscripts, and it included data on over three million manuscripts submitted in 2005-2010. From 2005 to 2010, the overall acceptance rate decreased slightly from 40.6% to 37.1%. The major reason is probably the increased share of submissions from countries like China and India, which typically have a lower chance of acceptance.
The acceptance rates among corresponding authors from different countries are in fact particularly interesting. The 2010 rates for the best performing countries (USA, UK, Canada, Australia, Germany, Switzerland and Sweden) were in the narrow range of 46.8–51.9%, whereas the worst performers (China, Taiwan, India, Brazil and Turkey) were in the range of 18.7–26.8%.
With respect to OA journals, one study finds that acceptance rates are significantly higher in OA journals, reporting differences of 4-16% across five different fields. If the number of journals in the fields covered is used to weight the acceptance rates, the overall rate for OA journals would be 41% and for non-OA journals 33%. It is problematic to compare the overall rates of all traditional and all OA journals, however: OA journals are on average much younger, and they have spread more rapidly in biomedicine, where acceptance rates are generally higher, than in the social sciences.
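The weighting step described above can be sketched in a few lines. The field counts and per-field rates below are hypothetical placeholders, not the study's actual data:

```python
def weighted_acceptance_rate(fields):
    """Overall acceptance rate, weighting each field's rate by its journal count."""
    total_journals = sum(count for count, _ in fields)
    return sum(count * rate for count, rate in fields) / total_journals

# Hypothetical (journal_count, acceptance_rate) pairs for three fields
oa_fields = [(120, 0.50), (80, 0.35), (50, 0.30)]
print(round(weighted_acceptance_rate(oa_fields), 3))  # → 0.412
```

Fields with many journals dominate the weighted figure, which is why field composition (e.g. the concentration of OA journals in biomedicine) can shift the overall rate even when per-field rates are unchanged.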
How to Locate Rejection Rates/Acceptance Rates?
Determining acceptance rates for individual journals or for specific disciplines can be difficult, yet this information is necessary for promotion and tenure activities. Journals with lower article acceptance rates are frequently considered to be more prestigious and more "meritorious".
The method of calculating acceptance rates varies among journals. Some journals use all manuscripts received as the base for computing this rate. Other journals allow the editor to choose which papers are sent to reviewers and calculate the acceptance rate only over those that are reviewed, a smaller base than the total number of manuscripts received. Also, many editors do not maintain accurate records of this data and provide only a rough estimate. Furthermore, the number of people working in a particular area of specialization influences the acceptance rate: if only a few people can write papers in an area, the journal's acceptance rate tends to increase.
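The difference between those two bases is easy to see with made-up numbers (the figures below are hypothetical, echoing the 60-of-500 example earlier in this article):

```python
def acceptance_rate(accepted, base):
    # Acceptance rate as a percentage of whichever denominator a journal chooses
    return 100 * accepted / base

# Hypothetical yearly figures for one journal
submitted = 500       # all manuscripts received
sent_to_review = 200  # only the papers the editor forwarded to reviewers
accepted = 60

print(acceptance_rate(accepted, submitted))       # → 12.0 (all submissions as base)
print(acceptance_rate(accepted, sent_to_review))  # → 30.0 (reviewed papers as base)
```

Both numbers describe the same journal, which is why acceptance rates from different journals are not directly comparable unless the denominator is known.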
Resources or Ways to Find Journal Acceptance Rates
Here are a few useful tips and resources for finding journal acceptance rates:
- Contact the Editor of the Journal
Try contacting the editor of the journal to see if they will share the acceptance rate.
- Google the Journal Name
Some societies also publish acceptance rates for their journals on their home pages. A Google search using the name of the journal or the name of the society will usually take you to the journal home page.
- Cabell’s Directory of Publishing Opportunities
Acceptance rates for some journals can be found in this database. Currently, over 1,200 journals, primarily in Educational Technology, Educational Curriculum and Educational Psychology, are available within this database, but journals from other disciplines may also be included.
- American Psychological Association (APA) Journal Acceptance Rates
APA provides statistics that include the number of manuscripts received and accepted, and the percentage of rejected manuscripts, annually from 2004 to 2013.
Should Your Decision to Publish Depend on Acceptance & Rejection Rates of Journals?
Along with the rejection/acceptance rate, you might want to consider the impact factor when choosing a journal. The impact factor reflects the average number of citations that articles published in the journal receive over a recent period. High-impact journals frequently have high rejection rates, but rejection rates can still be high for lower-impact journals. If you publish your work in a low-impact journal, it might affect your job or research grant applications, so it's important to find a balance between rejection rate and impact. Therefore, choose wisely and make sure you don't regret your choice of journal.
For more help in journal selection, you can check out Enago's Journal Selection services. Our team of experts has extensive experience in peer reviewing and editing for international journals in your field of study. Simply send us your criteria, and we will present you with the top 3-5 journals best suited to your manuscript and preferences. What's more: by using Enago's Journal Selection Service, you will get vital feedback on your manuscript, including suggestions on how to improve your paper.
A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine
Thomas Stamm
1 Poliklinik für Kieferorthopädie, Universitätsklinikum, Westfälische Wilhelms-Universität, Münster, Germany
Ulrich Meyer
2 Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Heinrich Heine Universität, Düsseldorf, Germany
Hans-Peter Wiesmann
3 Klinik und Poliklinik für Mund-, Kiefer- und Gesichtschirurgie, Universitätsklinikum, Westfälische Wilhelms-Universität, Münster, Germany
Johannes Kleinheinz
Murat Cehreli
4 CosmORAL Oral and Dental Health Polyclinics, Cinnah 7/5 Kavaklıdere, Ankara, Turkey
Zafer C Cehreli
5 Department of Pediatric Dentistry, Faculty of Dentistry, Hacettepe University, Ankara, Turkey
This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Associated Data
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts.
A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression.
HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions have been increasing, but not significantly over time. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of which 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. The article type had no influence on rejection or acceptance. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor for rejection of manuscripts.
The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's peer review time comes in under the 6-week turnaround time the Editors set themselves as the maximum. Rejection of manuscripts was associated with the number of invited reviewers. None of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on Open Access, Open Peer, and transparency of journal operations, is free of 'editorial bias' in accepting manuscripts.
Original data
Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary research with a state-of-the-art publication platform [1-3]. Being clinicians, we realized that the ongoing fragmentation of medical specialties may increase specialist medical knowledge, but that this knowledge is slow to affect traditional and established therapy strategies. We also realized that, with the emergence of new specialties, the borders between fields had become increasingly blurred. Much important clinical research takes place between different fields, which in turn necessitates a multidisciplinary platform to disseminate the results of research to the relevant audience. After a period of one year, however, it remains an open question whether HFM will be an ideal platform for disseminating multidisciplinary knowledge in the area of head and face disorders. HFM is still developmental in character, and the journal's ethical policy, based on open access and open peer review, entails a commitment to regular self-analysis of HFM's maturation. The aim of the present paper was therefore to evaluate the characteristics of submissions, the effectiveness of the open peer reviewing process, and factors biasing acceptance or rejection of manuscripts. This analysis attempts to generate information to assess the journal's development and was also conducted for the sake of transparency and objectivity in all journal operations of Head & Face Medicine.
Manuscripts submitted to Head & Face Medicine undergo a strictly uniform editorial process. Based on this workflow, the following journal operations were extracted from HFM 's Content Management System for submissions between August 2005 and August 2006.
• Number of complete/incomplete submissions
• Date of submission
• Number of reviewers invited
• Number of agreements to review
• Number of reports returned
• Date of report
• Number of acceptances/rejections
• Date of acceptance/rejection
• Date of publishing (provisional)
• Date of PubMed record
The following times were calculated based on the obtained data.
• Peer review time (PRT): The time between date of submission and date when reports are returned to the authors. PRT is at any time greater than the time used for processing the review because of the time differential between invitation to review and agreement of peer reviewers. PRT does not include revision time and re-review time.
• Acceptance/rejection time (A/RT): The time between date of submission and "editorial" acceptance or rejection. A/RT includes revision time and re-review time. Editorial acceptance is different from full acceptance and concerns the content of the paper and positive reports only. Full acceptance is declared when the paper complies with the formatting requirements laid out in the instructions for authors. Full acceptance is, in general, equal to the provisional publication of the article.
• Publishing time (PT): The time between date of submission and date of provisional publication of the paper. With its provisional publication on the HFM website, the paper is immediately accessible via the Internet and searchable by any web browser.
• PubMed availability time (PAT): The time between date of submission and date of inclusion into PubMed of the final title and abstract. The PubMed entry was obtained from the EDAT tag.
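The four intervals defined above are simple differences between milestone dates. A minimal sketch of how they can be computed, using hypothetical dates for a single manuscript (the dates below are illustrative, not from the study's data set):

```python
from datetime import date

# Hypothetical milestone dates for one manuscript (illustrative only).
submitted = date(2005, 9, 1)
reports_returned = date(2005, 10, 9)    # both reviewer reports back
editorial_decision = date(2005, 12, 6)  # acceptance or rejection
provisional_pub = date(2005, 12, 9)     # provisional PDF on the HFM website
pubmed_entry = date(2005, 12, 11)       # EDAT date in PubMed

prt = (reports_returned - submitted).days      # peer review time
art = (editorial_decision - submitted).days    # acceptance/rejection time
pt = (provisional_pub - submitted).days        # publishing time
pat = (pubmed_entry - submitted).days          # PubMed availability time

print(prt, art, pt, pat)  # → 38 96 99 101
```

Note that, as the definitions state, PRT excludes revision and re-review time, while A/RT includes both.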
Additionally, the following data were evaluated.
• Submission and acceptance rates
• Type of submission
• Editorial workload. Editorial work is difficult to measure. The only quantifiable data are the number of submissions and the number of e-mails generated through communication between authors, reviewers, and editors.
The Mann-Whitney U test was chosen to assess differences in journal operations between accepted and rejected papers. Crosstabs with Chi-square test was used to evaluate differences between various types of articles.
Binary logistic regression analysis was performed to identify variables most responsible for the prediction of acceptance or rejection (editorial bias). For this purpose, the observed event 'editorial decision' was dichotomized to two values, which represent the occurrence (acceptance) or non-occurrence (rejection) of the event.
Submission and acceptance rates
In total, 126 manuscripts were submitted between August 2005 and July 2006. A further 40 manuscripts were submitted incomplete and were therefore not yet under review. Figure 1 shows a slightly increasing submission rate over the last 12 months, with a distinct peak in February, the month before the article processing charge (APC) was introduced. On average, 10.5 articles were submitted per month. Excluding the February peak (assuming the same number of 12 submissions as in January), the rate would decrease to 8.2 articles per month.
Submissions to HFM between August '05 and August '06. The second call for papers to prospective authors was e-mailed in September (CFP 2) and a further call in January (CFP 3). PubMed inclusion started on 2 December. The article processing charge (APC) was introduced in March '06.
Between August 2005 and August 2006, peer reviewing was completed for 82 articles. Of those, 40 manuscripts were rejected and two were withdrawn, which is equal to an acceptance rate of 48.8%.
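The reported acceptance rate follows directly from these counts: of the 82 manuscripts with completed peer review, 40 were rejected and 2 withdrawn, leaving 40 accepted. A one-line check:

```python
# Reproducing HFM's first-year acceptance rate from the reported counts.
completed_reviews = 82
rejected = 40
withdrawn = 2

accepted = completed_reviews - rejected - withdrawn  # 40 accepted papers
acceptance_rate = accepted / completed_reviews
print(f"{acceptance_rate:.1%}")  # → 48.8%
```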
Peer review process
Prospective reviewers for all manuscripts were selected from the Editorial Board and from PubMed only. In total, 431 experts were invited to review 82 manuscripts. 174 peer reviewers agreed to review and 52 of them reviewed more than one paper. 199 invited experts declined to review, while six experts agreed but did not provide any report. The maximum number of invitations sent before two reports were finally received was 18. On average, 5.3 experts were invited per manuscript.
HFM's peer review process is based on a minimum of two reports per manuscript. The peer review times for the first and second reports were 33.8 and 41.9 days, respectively. In total, the mean PRT was 37.8 days, which comes in under the 6-week turnaround time the Editors set themselves as the maximum. The PRT of rejected manuscripts was shorter (35.3 days) than that of accepted papers (40.3 days), but not to a significant extent (p > 0.05).
The mean acceptance time was 95.9 days. Subtracting the mean PRT of accepted papers leaves approximately 95.9 - 40.3 = 55.6 days for revision and re-review. The mean rejection time was 49 days; subtracting the PRT of rejected manuscripts, the Editors-in-Chief needed approximately 49 - 35.3 = 13.7 days to assess the reports and manuscripts and reach a final decision.
The mean publishing time (PT) was 99.3 days. After this time, the authors' work was first made available to the scientific community because title, abstract, and a provisional PDF of the manuscript were published on the HFM website and thus, became both accessible and retrievable via the Internet. PubMed availability time, the time between submission and inclusion into PubMed of the title and abstract, was on average 101.3 days.
Type of submission
The main article types submitted to HFM were original research articles, reviews, and case reports (Figure 2). All other types represented less than 8% of the submitted manuscripts. Research papers were accepted most frequently, whereas case reports were rejected more often than other types of articles. Although accepted and rejected papers differed by article type (χ2, p < 0.05), there is no increased probability for case reports to be rejected (p > 0.05). In general, the variable 'article type' is not a predictor of rejection or acceptance. There is also no significant relation between article type and peer review time (Figure 3). Case reports had a shorter publishing time (p < 0.05) than research papers and reviews.
Article types showed significant differences (χ 2 , p < 0.05) when comparing accepted with rejected manuscripts. Case reports were rejected most frequently whereas research papers were accepted more often than other types. In general, 'article type' is not a predictor for rejection or acceptance (p > 0.05).
The peer review time (PRT) was not significantly different between article types.
Editorial workload
In general, e-mail was the means of communication between all parties concerned. Only two manuscripts were sent to reviewers by mail or fax. Between August 2005 and August 2006, the HFM e-mail account held 2,521 e-mails. The correspondence with authors and reviewers comprised 1,607 e-mails; 501 were exchanged between BMC and HFM, and 413 were sent to the editorial co-workers. On average, approximately 6.9 e-mails were written per day to ensure the daily editorial workflow. This sums to 19.7 e-mails per submitted article. Considering the current acceptance rate of 48.8%, an average of 73 e-mails were exchanged for each published article.
The e-mail rate in Figure 4 shows a pattern similar to the submission rate (Figure 1). In contrast to the submission rate, however, the correspondence decreased slightly over time. This was due to functionality improvements (FI) of the peer reviewing system. Because there was no handbook and the editorial team was unaware of the full functionality of the BMC online peer reviewing system, a considerable amount of correspondence was exchanged offline (manually) at the beginning of HFM. The first significant improvement (FI 1) of the online system was e-mail archiving in November 2005. Any e-mails sent by the editors were then automatically added to a history page, resulting in a chronological overview that facilitates evaluation of the whole peer reviewing process. A further functionality improvement (FI 2) in March 2006 eliminated multiple notifications to authors and reviewers. From that point on, only the following was performed independently of the editorial management tools: a) informing authors if revisions were required, b) requesting re-review of a manuscript if required after revisions, and c) accepting a manuscript. This resulted in a significant reduction of e-mails from March to April '06 (Figure 4). Two further improvements were introduced in April and June '06: accepting articles online (FI 3) and requesting revisions online (FI 4).
E-mail rate over the last 12 months. Out of 2521 e-mails, 1607 were exchanged between the editors and authors and reviewers. On average, 73 e-mails for each published article were exchanged. Several functionality improvements (FI) facilitated the online peer reviewing process. FI 1 = e-mail archiving, FI 2 = elimination of multiple notifications, FI 3 = accepting articles online, FI 4 = requesting revisions online.
Decision bias
Except for one parameter, none of the obtained variables had an effect on the decision to accept or reject papers. Binary logistic regression revealed a significant relationship (p < 0.05) between rejection of a paper and the number of invited reviewers. The probability of rejection P R could be computed by the logistic equation
P_R = 1 / (1 + e^(-z)), where z = constant + (regression coefficient × x).
Based on our data, z = -0.519 + 0.148x, where x is the number of invited reviewers. The graphical representation is shown in Figure 5. The computed model corresponds to our rejection rate (51.2%). As a mean of 5.3 reviewers were invited, there is a probability of P_R = 0.57 that a paper will be rejected.
The logistic function produces a sigmoid curve, where y represents the probability of rejection (P_R) and x the number of invited reviewers. Inviting a minimum of 2 reviewers corresponds to a probability of rejection P_R = 0.44. Inviting 15 reviewers increases P_R to 0.85.
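The fitted model can be evaluated directly; the coefficients (intercept -0.519, slope 0.148 per invited reviewer) are those reported by the regression above, and the three probabilities quoted in the text fall out of the same formula:

```python
import math

# Logistic model of rejection probability, using the paper's fitted
# coefficients: z = -0.519 + 0.148 * (number of invited reviewers).
def p_reject(invited_reviewers):
    z = -0.519 + 0.148 * invited_reviewers
    return 1.0 / (1.0 + math.exp(-z))

print(round(p_reject(2), 2))    # → 0.44 (minimum of two reviewers)
print(round(p_reject(5.3), 2))  # → 0.57 (HFM's mean invitation rate)
print(round(p_reject(15), 2))   # → 0.85
```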
Medical journals have to assume a high level of ethical responsibility because, by disseminating scientific findings, they cause far-reaching consequences for patients. Due to the global availability of the Internet, the volume and speed of dissemination of medical data have grown exponentially. However, such a fortunate consequence for medical science also puts a strain on control schemes (such as peer reviewing) that are supposed to ensure the quality of the published outcomes.
An important step related to process quality is to reduce pre-publication bias through transparent journal operations. New journals, which cannot rely on a tradition of experience and reputation, therefore have the obligation to demonstrate their process quality and objectivity throughout the publication process. The obtained data can furthermore be useful in assessing the profiles of other journals. The aim of this paper was therefore to evaluate the characteristics of submissions, the effectiveness of the open peer reviewing process, and factors biasing acceptance or rejection of manuscripts.
Data on first-year submission rates to a medical journal are not available. Just as with trans-discipline comparisons, it is uncertain whether this kind of comparison makes sense at all. Despite the difficulties of interpretation, we consider that the slightly increasing submission rate, at a mean of 10.5 papers/month in the first year, validates the multidisciplinary approach of HFM. The acceptance rate was 48.8%. The HFM Editorial Team does not consider the number of rejected manuscripts to be a quality criterion for a journal; thus, no comparison of rejection rates was made.
The APC introduction in March 2006 seems to have adversely affected the submission rate, which would also explain the high number of submissions in February 2006 (Figure 1). It is interesting to note, though, that the same trend cannot be observed for waiver applications. Although the APC can be waived for 35% of all papers (for authors who genuinely have no access to funding), the corresponding application was received for less than 10% of the papers submitted in the first year. This may indicate a lack of information, a combined effect of available funds and the geographic origin of submissions, or that the APC is of secondary importance.
Timely peer reviewing is an exceptionally important factor for new journals. There seems to be a minimum time for requesting a review from an unpaid, well-renowned reviewer that cannot be shortened any further. Other journals have also recognized that a shortened peer review time could only be achieved at the expense of the review process itself [1]. A mean PRT of 37.8 days could be achieved only by inviting more than two reviewers (5.3 on average). There was no PRT difference between accepted and rejected papers.
A further important point, besides timely peer reviewing, is fast publication. This time depends on cumulated times of revision, re-reviewing, and the final decision made by the editorial team. The mean acceptance time was 95.9 days, and provisional publication occurred after a mean 99.3 days. This timeframe is the critical item, since it reflects the duration after which the paper becomes retrievable via the Internet for the first time and starts to exist within the scientific community. Another important marker related to this process is the PubMed inclusion. PAT amounted to a mean 101.3 days from submission and depended also on e-publication workflow.
HFM mainly received standard types of manuscripts, such as original research articles, reviews, and case reports. Case studies, database articles, hypotheses, methodology articles, short reports, software articles, and study protocols are underrepresented, indicating an increased need to inform contributors about the availability of publishing different types of manuscripts in HFM. Although case reports represent the majority of rejected papers (52%), there is no increased probability for this type of article to be rejected according to the logistic regression model. Compared with other article types, the shorter publishing time associated with case reports can be explained by the shorter length of these papers, which also means less revision time. No difference in PRT was found.
The editorial workload is difficult to measure and was presented herein with the e-mail data to describe the amount of editorial time as a basis for comparison. Time is a major factor in the quality of a journal and has to be reasonably supported by staff. Currently, editorial workflow (except peer review) is handled by a core team of 3 editors-in-chief, 1 deputy editor, 1 executive editor, 2 section editors, 2 peer review coordinators, and one statistical advisor.
A hidden decision bias may compromise the objectivity of a journal, and regular analysis is therefore required. With the exception of one parameter, none of the recorded journal operations of HFM significantly affects the decision to accept or reject. Only the number of invited peer reviewers was associated with a higher probability of rejection. Inviting a minimum of 2 reviewers corresponds to a probability of rejection P_R = 0.44. HFM's reviewer rate of 5.3/manuscript corresponds to a probability of rejection P_R = 0.57, which is close to the current rejection rate of 51.2%. Inviting 15 reviewers would increase P_R to 0.85. The advantage of keeping the peer review time below 6 weeks is achieved at the expense of inviting more than two reviewers, which in turn increases the probability of rejection.
In our opinion, however, a P R amounting to 0.57 does not indicate a negative effect insofar as one has to take into account that Open Peer generally results in higher acceptance rates [ 4 ]. This corresponds to a balance that is indeed confirmed by the lower rejection rate, which amounts to 51.2%.
The positive trend in submissions to HFM confirms the need for publication platforms for multidisciplinary science. HFM's peer review time comes in under the 6-week turnaround time the Editors set themselves as the maximum. Rejection of manuscripts was associated with the number of invited reviewers but had no negative effect on the overall rejection rate. None of the other parameters had any effect on the final decision. Thus, HFM's ethical policy, which is based on Open Access, Open Peer, and transparency of journal operations, was found to be free of 'editorial bias' in accepting manuscripts.
Competing interests
The authors declare that they have no competing interests other than being Editors-in-Chief of the journal.
Authors' contributions
TS suggested the original idea for the study, initiated the investigations leading to these results, participated in discussions on the undertaking of the study, did the statistical analysis and interpreted the data, reviewed all iterations of the paper, and wrote the first draft and the final version of the paper. UM interpreted the data, and reviewed and contributed to the writing of all iterations of the paper, including the final version of the manuscript. HPW and JK participated in discussions on the undertaking of the study, interpreted the data, reviewed the paper for content and contributed to the writing of all iterations of the paper, including the final version of the manuscript. MC and ZCC participated in discussions on the undertaking of the study, interpreted the data, reviewed all iterations of the paper and contributed to the writing of the manuscript. MC and ZCC revised the English grammar of the final version of the manuscript. All authors approved the final manuscript.
Supplementary Material
Tab-delimited file containing original data. Variables: n = number of article; rev = number of reviewers; peer1 = PRT of reviewer 1; peer2 = PRT of reviewer 2; accep = accepted (0 = yes; 1 = no); print = PT; pubmed = PAT; a_p = days between acceptance and print; a_pub = days between acceptance and PubMed inclusion; peer = average peer review time; a_type = article type.
Acknowledgements
We are grateful to Sonja Holz and Jutta Wohlgemuth for their assistance in data mining.
- Stamm T. Head & Face Medicine – a new journal for intra-interdisciplinary science. Why? When? Where? Head Face Med. 2005;1:1. doi: 10.1186/1746-160X-1-1. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
- Meyer U, Wiesmann HP. Tissue engineering: a challenge of today's medicine. Head Face Med. 2005;1:2. doi: 10.1186/1746-160X-1-2. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
- Joos U, Meyer U. New paradigm in implant osseointegration. Head Face Med. 2006;2:19. doi: 10.1186/1746-160X-2-19. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
- Walsh E, Rooney M, Appleby L, Wilkinson G. Open peer review: a randomised controlled trial. Br J Psychiatry. 2000;176:47-51. doi: 10.1192/bjp.176.1.47. [PubMed] [CrossRef] [Google Scholar]
More From Forbes
Harvard Accepts 3.59% of Applicants, Highest Rate in 4 Years
This year's Ivy League acceptance rates continue to reflect today's competitive landscape of higher education, but there are some signs of easing. Harvard University accepted 3.59% of applicants for the Class of 2028, its highest acceptance rate in four years, according to the student-run Harvard Crimson.
This compares with last year's 3.41% for the Class of 2027 and a 3.19% acceptance rate the year before, which was a record low.
Despite the slight widening of Harvard's acceptance rate, student interest in highly ranked universities remained robust. Yale University accepted 3.7% of applicants, its lowest rate ever, according to the Yale Daily News.
"There's never been more demand for the top-50 most recognizable universities, universities where people feel a life-changing education is at hand," says Hafeez Lakhani, founder of Lakhani Coaching. "The demand for that is really off the charts."
The acceptance rates at Harvard University and other Ivy League schools remained low, reflecting continued student interest in attending these highly ranked institutions.
Elsewhere, Brown University reported an acceptance rate of 5.2% for the Class of 2028, its third-lowest acceptance rate, according to the student-run Brown Daily Herald. Columbia University accepted 3.85% of applicants, narrowing slightly from last year's acceptance rate of 3.9%, according to the Columbia Daily Spectator.
Additionally, Dartmouth drew a record-high number of applicants and accepted a record-low 5.3% of them, a rate nearly a percentage point lower than last year's.
"Students are still very interested in applying to the Ivies as well as other top-20 and top-50 colleges," says Aaron Andrikopoulos, co-founder of AJ Tutoring.
"They still see a lot of value in the name brand of those colleges and the doors that might open in the future, as well as the educational opportunities and the chance to be surrounded by a cohort of similarly motivated students," he adds.
The University of Pennsylvania said it had received the largest applicant pool in the school's history but did not immediately share its acceptance rate. Cornell, too, did not immediately share details.
While Ivy League admission rates remain very competitive, experts note that some are wider than they've been in recent years. They say a number of factors are likely contributing; for example, this is the first admissions cycle since the end of affirmative action.
In addition, there may be a dampening effect from the so-called "enrollment cliff" that is expected to become increasingly apparent in 2025, when the number of high school graduates declines. This fall stems from a drop in birthrates following the 2008 global recession, driven by concerns over finances.
Earlier, Brown had reported that a total of 48,881 students applied to its Class of 2028, marking a 5% decline from last year, according to the Brown Daily Herald.
"The population in the high school seniors' age bracket applying to colleges is going to decline, so that might increase admission rates slightly," says Connie Livingston, head of college counseling at edtech company Empowerly and a former admissions officer at Brown University. However, she adds that "we may see a one or two percent increase, nothing very dramatic. I think the numbers of highly qualified students will remain constant."
The number of applicants to highly-ranked universities may decline, too, due to some schools reinstating SAT/ACT submission requirements. Among them are Dartmouth College , Brown University and the University of Texas at Austin . Yale University has a new flexible testing policy that requires students to submit scores with their applications, although the university will accept Advanced Placement or International Baccalaureate exam scores in lieu of the ACT or SAT.
Some universities cited Opportunity Insights research that found SAT and ACT scores have substantive predictive power for academic success in college.
In making its decision to reinstate test requirements, Brown University referred to a report that stated, "Standardized tests provide an important piece of information about the applicant's performance in the context of the opportunities that were available to them." For example, a student with scores below the Brown median but at the top of the range of their own school may be a promising candidate for admission, but the lack of test scores "removes a piece of evidence that could make a positive difference in the admissions decision," the report noted.
Harvard has retained its test-optional policies through to the admitted Class of 2030.
Despite this year's low acceptance rates, it's important to note that these schools are generally outliers, as most colleges admit many of their applicants. In fact, the average acceptance rate among all ranked colleges that reported their admissions information to U.S. News & World Report in 2022 was 71.4%, and 37 schools said they accepted all applicants.
However, for high school students aiming at the country's most highly ranked universities, competition may continue to ease, at least a little, in the future.
"The demographics are speaking loud and clear," says Hafeez Lakhani of Lakhani Coaching. "I don't think we're going to get to where it's going to be easy to get into Yale. But I do think we've peaked, so to speak, at record low acceptance rates."
The range of acceptance rates ran from just over 1% to 93.2%. Looking at different attributes of this group of journals, however, some general conclusions can be drawn: larger journals have lower acceptance rates than smaller journals (roughly in the 10-60% range), and older journals have slightly lower acceptance rates than newer journals.
The data below refer to the full year 2023 and include the article types "research articles," "reports," and "research resources"; they do not include "reviews." "Accept Rate" is the percentage of manuscripts accepted by the journal, and the table also reports submission-to-acceptance and accept-to-publish times in days. For Science, for example, the accept rate is 8%, submission-to-acceptance time is 174 days, and accept-to-publish time is 39 days.
A journal's acceptance rate is the percentage of submitted papers that are ultimately chosen for publication. It's not as simple as it seems; like most things in the research journal world, there's more to it than meets the eye. To calculate a journal's acceptance rate, divide the number of accepted manuscripts by the number of submitted manuscripts.
Settling on a target journal for a completed scientific manuscript can be a non-scientific process. Some critical elements of the decision are intangible, e.g., attempting to reach a certain target audience or how well the paper "fits" within the scope of the journal [1-3]. Others, such as turnaround time, acceptance rate, and journal impact, can be measured.
Selecting a target journal is a universal decision faced by authors of scientific papers. Components of the decision, including expected turnaround time, journal acceptance rate, and journal impact factor, vary in terms of accessibility. In this study, I collated recent turnaround times and impact factors for 82 journals that publish papers in the field of fisheries sciences.
A journal's acceptance rate, for illustration, is 5% if it accepts 50 manuscripts in one year and 1,000 are submitted in that year: 50/1000 = 0.05, a 5% acceptance rate. Generally, acceptance rates for larger journals run lower than for smaller journals, roughly in the 10-50% range. There is a modest difference in acceptance rates between older and newer journals.
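The calculation is easy to script. A minimal Python sketch, using the figures from the worked example (the helper name `acceptance_rate` is illustrative, not a standard API):

```python
def acceptance_rate(accepted: int, submitted: int) -> float:
    """Acceptance rate as a percentage of submitted manuscripts."""
    if submitted <= 0:
        raise ValueError("submitted must be a positive count")
    return 100 * accepted / submitted

# The worked example: 50 accepted out of 1,000 submitted.
print(acceptance_rate(50, 1000))  # 5.0
```

One caveat when comparing published rates: journals differ on how they count "submitted" (for example, whether it covers the same calendar year as "accepted"), so two journals reporting the same percentage may be measuring slightly different things.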
Most journals use acceptance or rejection rates to analyse whether the number of papers that they are rejecting or accepting is too high and to monitor any unusual trends. The rates depend on the quality of papers submitted to a journal. These rates are an internal quality control benchmark, whereas the impact factor is an external one.
The journals in the dataset had acceptance rates ranging from 1.1 per cent to 93.2 per cent in 2017, with an average of 32 per cent: overall, journals tend to accept fewer articles than they reject (Figure 1). We then studied various attributes of the journals to see what aspects correlated with high or low acceptance rates.
The acceptance rate of research papers depends on several factors. For example, the acceptance rate of The Lancet is roughly 5 per cent, and the acceptance rate for research articles in JAMA is 4 per cent.
The method for computing acceptance rates may vary among journals. Typically it refers to the number of manuscripts accepted for publication relative to the number of manuscripts submitted within the last year. To find a journal's rate, browse the "information for authors" section of the journal's website, or do a Google search for the journal title together with "acceptance rate."
Journal acceptance rates: basics. Definition: the number of manuscripts accepted for publication compared to the total number of manuscripts submitted in one year. The exact method of calculation varies depending on the journal. Journals with lower article acceptance rates are regarded as more prestigious.
The Journal of the American Medical Association (JAMA) accepts 11% of the more than 7,000 major manuscripts it receives annually, and only 4% of the more than 4,400 research papers received. Acceptance rates of some Elsevier journals: The Lancet has an acceptance rate of about 5%, and The Journal of Pediatrics accepted 14.7% in 2022.
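Those percentages imply rough counts of accepted papers. A quick back-of-the-envelope check in Python; the counts are derived estimates from the figures quoted above, not numbers the journal publishes directly:

```python
# Figures quoted above for JAMA: 11% of >7,000 major manuscripts,
# 4% of >4,400 research papers. Derived estimates only.
major_submitted = 7000
research_submitted = 4400

major_accepted = round(major_submitted * 0.11)        # roughly 770 accepted
research_accepted = round(research_submitted * 0.04)  # roughly 176 accepted
print(major_accepted, research_accepted)  # 770 176
```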
Parameters of acceptance: the basic or standard criteria determining acceptance are, on the author's side, the novelty of the study and the quality of the paper (the writing, i.e., how you have presented your research), and on the journal's side, a match with their focus/scope and adherence to their submission guidelines.
The Review of Educational Research (RER), published on behalf of the American Educational Research Association, reports an acceptance rate of 5.3%, an average review time of 8.8 weeks, and an average time from acceptance to online publication of 35 days.
It is difficult to recommend any journal without going through the entire manuscript. However, I can give you some generic guidance on choosing a journal for submitting your paper. I would advise you not to aim for a journal simply because it is easy to get accepted in; usually, journals that guarantee acceptance are not reputable.
Papers published in top conferences or journals are an important measure of the innovation ability of institutions, and ranking paper acceptance rates can be helpful for evaluating an affiliation's potential in academic research. Most studies focus only on paper quality itself and apply simple statistical data to estimate the contribution of institutions. In this work, a novel method is proposed.
In CS conferences, reviews often use the following levels: strong accept (3), accept (2), weak accept (1), borderline (0), weak reject (-1), reject (-2) and strong reject (-3), though this can vary from one conference to another. Besides the score, a reviewer may often be able to indicate their confidence level on a scale, for example of 1 to 5.
By Enago Academy, Apr 29, 2022. Journal rejections are common, irrespective of the researcher's career stage. The acceptance rate of scholarly journals is an important selection criterion for authors when choosing where to submit their manuscripts. However, information about journals' acceptance (or rejection) rates is not always readily available.
Research papers were accepted by reviewers most frequently, whereas case reports were rejected more often than other types of articles. Considering the current acceptance rate of 48.8%, an average of 73 e-mails were exchanged for each published article.
Having a 100% acceptance rate would mean that all the papers submitted to the journal are accepted. This is rather unusual, but it might be possible if the journal accepts articles by invitation only. In that case, the journal would solicit articles from established authors, and all of these articles would be accepted.
This year's Ivy League acceptance rates are low, but show signs of easing. Harvard University accepted 3.59 percent of applicants for the Class of 2028, its highest acceptance rate in four years.
Yale University received a record 57,465 applications, for instance, dwarfing its previous record of 52,250 by nearly 10 percent, and had an acceptance rate of 3.7 percent, its lowest ever, down from 4.5 percent last year. Some highly selective non-Ivy institutions also boasted record-low acceptance rates, including Williams College and Rice University.