Authorship and citation manipulation in academic research

Eric A. Fong, Allen W. Wilhite (these authors contributed equally to this work)

Roles: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Writing – original draft, Writing – review & editing

Affiliations: Department of Management, University of Alabama in Huntsville, Huntsville, Alabama, United States of America; Department of Economics, University of Alabama in Huntsville, Huntsville, Alabama, United States of America

* E-mail: [email protected]

Published: December 6, 2017

https://doi.org/10.1371/journal.pone.0187394

Abstract

Some scholars add authors to their research papers or grant proposals even when those individuals contribute nothing to the research effort. Some journal editors coerce authors to add citations that are not pertinent to their work and some authors pad their reference lists with superfluous citations. How prevalent are these types of manipulation, why do scholars stoop to such practices, and who among us is most susceptible to such ethical lapses? This study builds a framework around how intense competition for limited journal space and research funding can encourage manipulation and then uses that framework to develop hypotheses about who manipulates and why they do so. We test those hypotheses using data from over 12,000 responses to a series of surveys sent to more than 110,000 scholars from eighteen different disciplines spread across science, engineering, social science, business, and health care. We find widespread misattribution in publications and in research proposals with significant variation by academic rank, discipline, sex, publication history, co-authors, etc. Even though the majority of scholars disapprove of such tactics, many feel pressured to make such additions while others suggest that it is just the way the game is played. The findings suggest that certain changes in the review process might help to stem this ethical decline, but progress could be slow.

Citation: Fong EA, Wilhite AW (2017) Authorship and citation manipulation in academic research. PLoS ONE 12(12): e0187394. https://doi.org/10.1371/journal.pone.0187394

Editor: Lutz Bornmann, Max Planck Society, GERMANY

Received: February 28, 2017; Accepted: September 20, 2017; Published: December 6, 2017

Copyright: © 2017 Fong, Wilhite. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files. The pertinent appendices are: S2 Appendix: Honorary author data; S3 Appendix: Coercive citation data; and S4 Appendix: Journal data. In addition, the survey questions and counts of the raw responses to those questions appear in S1 Appendix: Statistical methods, surveys, and additional results.

Funding: This publication was made possible by a grant from the Office of Research Integrity through the Department of Health and Human Services: Grant Number ORIIR130003. Contents are solely the responsibility of the authors and do not necessarily represent the official views of the Department of Health and Human Services or the Office of Research Integrity. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The pressure to publish and to obtain grant funding continues to build [ 1 – 3 ]. In a recent survey of scholars, the number of publications was identified as the single most influential component of their performance review while the journal impact factor of their publications and order of authorship came in second and third, respectively [ 3 ]. Simultaneously, rejection rates are on the rise [ 4 ]. This combination, the pressure to increase publications coupled with the increased difficulty of publishing, can motivate academics to violate research norms [ 5 ]. Similar struggles have been identified in some disciplines in the competition for research funding [ 6 ]. For journals and the editors and publishers of those journals, impact factors have become a mark of prestige and are used by academics to determine where to submit their work, who earns tenure, and who may be awarded grants [ 7 ]. Thus, the pressure to increase a journal’s impact factor score is also increasing. With these incentives it is not surprising that academia is seeing authors and editors engaged in questionable behaviors in an attempt to increase their publication success.

There are many forms of academic misconduct that can increase an author’s chance for publication and some of the most severe cases include falsifying data, falsifying results, opportunistically interpreting statistics, and fake peer-review [ 5 , 8 – 12 ]. For the most part, these extreme examples seem to be relatively uncommon; for example, only 1.97% of surveyed academics admit to falsifying data, although this probably understates the actual practice as these respondents report higher numbers of their colleagues misbehaving [ 10 ].

Misbehavior regarding attribution, on the other hand, seems to be widespread [ 13 – 18 ]; for example, in one academic study, roughly 20% of survey respondents have experienced coercive citation (when editors direct authors to add citations to articles from the editors’ journals even though there is no indicated lack of attribution and no specific articles or topics are suggested by the editor) and over 50% said they would add superfluous citations to a paper being submitted to a coercive journal in an attempt to increase its chance for publication [ 18 ]. Honorary authorship (the addition of individuals to manuscripts as authors, even though those individuals contribute little, if anything, to the actual research) is a common behavior in several disciplines [ 16 , 17 ]. Some scholars pad their references in an attempt to influence journal referees or grant reviewers by citing prestigious publications or articles from the editor’s journal (or the editor’s vita) even if those citations are not pertinent to the research. While there is little systematic evidence that such a strategy influences editors, the perception of its effectiveness is enough to persuade some scholars to pad [ 19 , 20 ]. Overall, it seems that many scholars consider authorship and citation to be fungible attributes, components of a project one can alter to improve their publication and funding record or to increase journal impact factors (JIFs).

Most studies examining attribution manipulation focus on the existence and extent of misconduct and typically address a narrow section of the academic universe; for example, there are numerous studies measuring the amount of honorary authorship in medicine, but few in engineering, business, or the social sciences [ 21 – 25 ]. And while coercive citation has been exposed in some business fields, less is known about its prevalence in medicine, science, or engineering. In addition, the pressure to acquire research funding is nearly as intense as publication pressures and in some disciplines funding is a major component of performance reviews. Thus, grant proposals are also viable targets of manipulation, but research into that behavior is sparse [ 2 , 6 ]. However, if grant distributions are swayed by manipulation then resources are misdirected and promising areas of research could be neglected.

There is little disagreement with the sentiment that this manipulation is unethical, but there is less agreement about how to slow its use. Ultimately, to reverse this decline of ethics we need to better understand the factors that drive attribution manipulation, and that is the focus of this manuscript. Using more than 12,000 responses to surveys sent to more than 110,000 academics from disciplines across the academic universe, this study aims to examine the prevalence and systematic nature of honorary authorship, coercive citation, and padded citations in eighteen different disciplines in science, engineering, medicine, business, and the social sciences. In essence, we do not just want to know how common these behaviors are, but whether there are certain types of academics who add authors or citations or are coerced more often than others. Specifically, we ask what the prevailing attributes are of scholars who manipulate, whether willingly (e.g., padded citations) or not (e.g., coercive citation), considering attributes such as academic rank, gender, discipline, and level of co-authorship. We also look into the reasons scholars manipulate and ask their opinions on the ethics of this behavior. In our opinion, a deeper understanding of manipulation can shed light on potential ways to reduce this type of academic misconduct.

As noted in the introduction, the primary component of performance reviews, and thus of individual research productivity, is the number of published articles by an academic [ 3 ]. This number depends on two things: (i) the number of manuscripts on which a scholar is listed as an author and (ii) the likelihood that each of those manuscripts will be published. The pressure to increase publications puts pressure on both of these components. In a general sense, this can be beneficial for society as it creates incentives for individuals to work harder (to increase the quantity of research projects) and to work better (to increase the quality of those projects) [ 6 ]. There are similar pressures and incentives in the application for, and distribution of, research grants as many disciplines in science, engineering, and medicine view the acquisition of funding as both a performance measure and a precursor to publication given the high expense of the equipment and supplies needed to conduct research [ 2 , 6 ]. But this publication and funding pressure can also create perverse incentives.

Honorary authorship

Working harder is not the only means of increasing an academic’s number of publications. An alternative approach is known as “honorary authorship” and it specifically refers to the inclusion of individuals as authors on manuscripts, or grant proposals, even though they did not contribute to the research effort. Numerous studies have explored the extent of honorary authorship in a variety of disciplines [ 17 , 20 , 21 – 25 ]. The motivation to add authors can come from many sources; for instance, an author may be directed to add an individual who is a department chair, lab director, or some other administrator with power, or they might voluntarily add such an individual to curry favor. Additionally, an author might create a reciprocal relationship where they add an honorary author to their own paper with the understanding that the beneficiary will return the favor on another paper in the future, or an author may just do a friend a favor and include their name on a manuscript [ 23 , 24 ]. In addition, if the added author has a prestigious reputation, this can also increase the chances of the manuscript receiving a favorable review. Through these means, individuals can raise the expected value of their measured research productivity (publications) even though their actual intellectual output is unchanged.

Similar incentives apply to grant funding. Scholars who have a history of repeated funding, especially funding from the more prestigious funding agencies, are viewed favorably by their institutions [ 2 ]. Of course, grants provide resources, which increase an academic's research output, but there are also direct benefits from funded research accruing to the university: overhead charges, equipment purchases that can be used for future projects, graduate student support, etc. Consequently, "rainmakers" (scholars with a record of acquiring significant levels of research funding) are valued for that skill.

As with publications, the amount of research funding received by an individual depends on the number and size of proposals put forth and the probability of each getting funded. This metric creates incentives for individuals to get their names on more proposals, on bigger proposals, and to increase the likelihood that those proposals will be successful. That pressure opens the door to the same sorts of misattribution behavior found in manuscripts because honorary authorship can increase the number of grant proposals that include an author's name, and adding a scholar with a prestigious reputation as an author may increase the chances of being funded. As we investigate the use of honorary authorship we do not focus solely on its prevalence; we also question whether there is a systematic nature to its use. First, for example, it makes sense that academics who are early in their career have less funding and lack the protection of tenure and thus need more publications than someone with an established reputation. To begin to understand if systematic differences exist in the use of honorary authorship, the first set of empirical questions to be investigated here is: who is likely to add honorary authors to manuscripts or grant proposals? Scholars of lower rank and without tenure may be more likely to add authors, whether under pressure from senior colleagues or in their own attempt to sway reviewers. Tenure and promotion depend critically on a young scholar's ability to establish a publication record, secure research funding, and engender support from their senior faculty. Because they lack the protection of rank and tenure, refusing to add someone could be risky. Of course, senior faculty members also have goals and aspirations that can be challenging, but junior faculty have far more on the line in terms of their career.

Second, we expect research faculty to be more likely to add honorary authors, especially to grant proposals, because they often occupy positions that are heavily dependent on a continued stream of research success, particularly regarding research funding. Third, we expect that female researchers may be less able to resist pressure to add honorary authors because women are underrepresented in faculty leadership and administrative positions in academia and lack political power [ 26 , 27 ]. It is not just their own lack of position that matters; the dearth of other females as senior faculty or in leadership positions leave women with fewer mentors, senior colleagues, and administrators with similar experiences to help them navigate these political minefields [ 28 , 29 ]. Fourth, because adding an author waters down the credit received by each existing author, we expect manuscripts that already have several authors to be less resistant to additional "credit sharing." Simply put, if credit is equally distributed across authors then adding a second author would cut your perceived contribution in half, but adding a sixth author reduces your contribution by only about 3 percentage points (from 20% to roughly 17%).

Fifth, because academia is so competitive, the decisions of some scholars have an impact on others in the same research population. If your research interests are in an area in which honorary authorship is common and considered to be effective, then a promising counter-policy to the manipulation undertaken by others is to practice honorary authorship yourself. This leads us to predict that the obligation to add honorary authors to grant proposals and/or manuscripts is likely to concentrate more heavily in some disciplines. In other words, we do not expect it to be practiced uniformly or randomly across fields; instead, there will be some disciplines that are heavily engaged in adding authors and others that are less engaged. In general, we have no firm predictions as to which disciplines are more likely to practice honorary authorship; we predict only that its practice will be lumpy. However, there may be reasons to suspect some patterns to emerge; for example, some disciplines, such as science, engineering, and medicine, are much more heavily dependent on research funding than other disciplines, such as the social sciences, mathematics, and business [ 2 ]. For example, over 70% of the NSF budget goes to science and engineering and about 4% to the social sciences. Similarly, most of the NIH budget goes to doctors and a smaller share to other disciplines [ 30 ]. Consequently, we suspect that the disciplines that most prominently add false investigators to grant proposals are more likely to be in science, engineering, and the medical fields. We do not expect to see that division as prominent in the addition of authors to manuscripts submitted for publication.

There are several ways scholars may internalize the pressure to perform, which can lead to different reasons why a scholar might add an honorary author to a paper. A second goal of this paper is to study who might employ these different strategies. Thus, we asked authors for the reasons they added honorary authors to their manuscripts and grants; for example, was this person in a position of authority or a mentor, or did they have a reputation that increased the chances for publication or funding? Using these responses as dependent variables, we then examine whether they were related to the professional characteristics of the scholars in our study. The hypotheses to be tested mirror the questions posed for honorary authors. We expect junior faculty, research faculty, female faculty, and projects with more co-authors to be more likely to add additional coauthors to manuscripts and grants than professors, male faculty, and projects with fewer co-authors. Moreover, we expect the practice to differ across disciplines. Focusing specifically on honorary authorship in grant proposals, we also explore the possibility that the use of honorary authorship differs between funding opportunities and agencies.

Coercive citation

Journal rankings matter to editors, editorial boards, and publishers because rankings affect subscriptions and prestige. In spite of their shortcomings, impact factors have become the dominant measure of journal quality. These measures include self-citation, which creates an incentive for editors to direct authors to add citations even if those citations are irrelevant, a practice called “coercive citation” [ 18 , 27 ]. This behavior has been systematically measured in business and social science disciplines [ 18 ]. Additionally, researchers have found that coercion sometimes involves more than one journal; editors have gone as far as organizing “citation cartels” where a small set of editors recommend that authors cite articles from each other’s journal [ 31 ].

When editors decide to coerce, whom might they target? Who is most likely to be coerced? Assuming editors balance the costs and benefits of their decisions, a parallel set of empirical hypotheses emerge. Returning to the various scholar attributes, we expect editors to target lower-ranked faculty members because they may have a greater incentive to cooperate as additional publications have a direct effect on their future cases for promotion, and for assistant professors on their chances of tenure as well. In addition, because they have less political clout and are less likely to openly complain about coercive treatment, lower ranked faculty members are more likely to acquiesce to the editor's request. We predict that editors are more likely to target female scholars because female scholars hold fewer positions of authority in academia and may lack the institutional support of their male counterparts. We also expect the number of coauthors to play a role, but contrary to our honorary authorship prediction, we predict editors will target manuscripts with fewer authors rather than more authors. The rationale is simple: authors do not like to be coerced, and when an editor requires additional citations on a manuscript with many authors, the editor makes a larger number of individuals aware of that coercive behavior, whereas coercing a sole-authored paper upsets a single individual. Notice that we are hypothesizing the opposite sign in this model than in the honorary authorship model; if authors are making a decision to add honorary authors then they prefer to add people to articles that already have many co-authors, but if editors are making the decision then they prefer to target manuscripts with few authors to minimize the potential pushback.

As was true in the model of honorary authorship, we expect the practice of coercion to be more prevalent in some disciplines than others. If one editor decides to coerce authors and if that strategy is effective, or is perceived to be effective, then there is increased pressure for other editors in the same discipline to also coerce just to maintain their ranking—if one journal climbs up in the rankings, others, who do nothing, fall. Consequently, coercion begets additional coercion and the practice can spread. But, a journal climbing up in the rankings in one discipline has little impact on other disciplines and thus we expect to find coercion practiced unevenly; prevalent in some disciplines, less so in others. Finally, as a sub-conjecture to this hypothesis, we expect coercive citation to be more prevalent in disciplines for which journal publication is the dominant measure for promotion and tenure; that is, disciplines that rely less heavily on grant funding. This means we expect the practice to be scattered, and lumpy, but we also expect relatively more coercion in the business and social sciences disciplines.

We are also interested in the types of journals that have been reported to coerce; to explore those issues, we gather data using the journal as the unit of observation. As above, we expect differences between disciplines and we expect those discipline differences to mirror the discipline differences found in the author-based data set. We also expect a relationship between journal ranking and coercion because the costs and benefits of coercion differ for more or less prestigious journals. Consider the benefits of coercion. The very highest ranked journals have high impact factors; consequently, to rise another position in the rankings requires a significant increase in citations, which would require a lot of coercion. Lower-ranked journals, however, might move up several positions with relatively few coerced citations. Furthermore, consider the cost of coercion. Elite journals possess valuable reputations and risking them by coercing might be foolhardy; journals deep down in the rankings have less at stake. Given this logic, it seems likely that lower ranked journals are more likely to have practiced coercion.

We also look to see if publishers might influence the coercive decision. Journals are owned and published by many different types of organizations; the most common being commercial publishers, academic associations, and universities. A priori, commercial publishers, being motivated by profits, are expected to be more interested in subscriptions and sales, so the return to coercion might be higher for that group. On the other hand, the integrity of a journal might be of greater concern to non-profit academic associations and university publishers, but we don't see a compelling reason to suppose that universities or academic associations will behave differently from one another. Finally, we control for some structural difference across journals by including each journal's average number of cites per document and the total number of documents they publish per year.

Padded citations

The third and final type of attribution manipulation explored here is padded reference lists. Because some editors coerce scholars to add citations to boost their journals' impact factor scores, and because this practice is known by many scholars, there is an incentive for scholars to add superfluous citations to their manuscripts prior to submission [ 18 ]. Given that there is an incentive for scholars to pad their reference lists in manuscripts, we wondered whether grant writers would be willing to pad reference lists in grant proposals in an attempt to influence grant reviewers.

As with honorary authorship, we suspect there may be a systematic element to padding citations. In fact, we expect the behavior of padding citations to parallel the honorary author behavior. Thus we predict that scholars of lower rank (and therefore without tenure) and female scholars are more likely to pad citations to assuage an editor or sway grant reviewers. Because the practice also encompasses a feedback loop (one way to compete with scholars who pad their citations is to pad your citations) we expect the practice to proliferate in some disciplines. The number of coauthors is not expected to play a role, but we also expect knowledge of other types of manipulation to be important. That is, we hypothesize that individuals who are aware of coercion, or who have been coerced, are more likely to pad citations. With grants, we similarly expect individuals who add honorary authors to grant proposals to also be likely to pad citations in grant proposals. Essentially, the willingness to misbehave in one area is likely related to misbehavior in other areas.

Data

The data collection method of choice for this study is the survey because it would be difficult to determine whether someone added honorary authors or padded citations prior to submission without asking that individual. As explained below, we distributed surveys in four waves over five years. Each survey, its cover email, and its distribution strategy were reviewed and approved by the University of Alabama in Huntsville's Institutional Review Board. Copies of these approvals are available on request. We purposely did not collect data that would allow us to identify individual respondents. We test our hypotheses using these survey data and journal data. Given the complexity of the data collection, involving both survey and archival journal data, we begin by discussing our survey data and the variables developed from the surveys. We then discuss our journal data and the variables developed there. Over the course of a five-year period and using four waves of survey collection, we sent surveys, via email, to more than 110,000 scholars in total from eighteen different disciplines (medicine, nursing, biology, chemistry, computer science, mathematics, physics, engineering, ecology, accounting, economics, finance, marketing, management, information systems, sociology, psychology, and political science) from universities across the U.S. See Table 1 for details regarding the timing of survey collection. Survey questions and raw counts of the responses to those questions are given in S1 Appendix: Statistical methods, surveys, and additional results. Complete files of all of the data used in our estimates are in the S2, S3 and S4 Appendices.

Table 1. https://doi.org/10.1371/journal.pone.0187394.t001

Potential survey recipients and their contact information (email addresses) were identified in three different ways. First, we were able to get contact information for management scholars through the Academy of Management, using the annual meeting catalog. Second, for economists and physicians we used the membership services provided by the American Economic Association and the American Medical Association. Third, for the remaining disciplines we identified the top 200 universities in the United States using U.S. News and World Report's "National University Rankings" and hand-collected email addresses by visiting those university websites and copying contact information for individual faculty members from each of the disciplines. We also augmented the physician contact list by visiting the websites of the medical schools in these top 200 schools as well. With each wave of surveys, we sent at least one reminder to participate. The approximately 110,000 surveys yielded about 12,000 responses for an overall response rate of about 10.5%. Response rates by discipline can be found in Table A in S1 Appendix.

Few studies have examined the systematic nature of honorary authorship and padded citation and thus we developed our own survey items to address our hypotheses. Our survey items for coercive citation were taken from prior research on coercion [ 18 ]. All survey items and the response alternatives with raw data counts are given in S1 Appendix . The complete data are made available in S2 – S4 Appendices.

Our first set of tests relates to honorary authorship in manuscripts and grants and is made up of several dependent variables, each related to the research question being addressed. We begin with the existence of honorary authorship in manuscripts. This dependent variable is composed of the answers to the survey question: "Have YOU felt obligated to add the name of another individual as a coauthor to your manuscript even though that individual's contribution was minimal?" Responses were in the form of yes and no, where "yes" was coded as a 1 and "no" coded as a 0. The next dependent variable addresses the frequency of this behavior, asking: "In the last five years HOW MANY TIMES have you added or had coauthors added to your manuscripts even though they contributed little to the study?" The final honorary authorship dependent variables deal with the reason for including an honorary author in manuscripts: "Even though this individual added little to this manuscript he (or she) was included as an author. The main reason for this inclusion was:" and the choices regarding this answer were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or they had funding we could apply to the research. Responses were coded as 1 for the main reason given (only one reason could be selected as the "main" reason) and 0 otherwise.

Regarding honorary authorship in grant proposals, our first dependent variable addresses its existence: "Have you ever felt obligated to add a scholar's name to a grant proposal even though you knew that individual would not make a significant contribution to the research effort?" Again, responses were in the form of yes and no, where "yes" was coded as a 1 and "no" coded as a 0. The remaining dependent variables regarding honorary authorship in grant proposals address the reasons for adding honorary authors to proposals: "The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was:" and the provided potential responses were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or was a person suggested by the grant reviewers. Responses were coded as 1 for the main reason given (only one reason could be selected as the "main" reason) and 0 otherwise.

Our next major set of dependent variables deal with coercive citation. The first coercive citation dependent variable was measured using the survey question: “Have YOU received a request from an editor to add citations from the editor’s journal for reasons that were not based on content?” Responses were in the form of yes (coded as a 1) and no (coded as 0). The next question deals with the frequency: “In the last five years, approximately HOW MANY TIMES have you received a request from the editor to add more citations from the editor’s journal for reasons that were not based on content?”

Our final set of dependent variables from our survey data investigates padding citations in manuscripts and grants. The dependent variable that addresses an author’s willingness to pad citations for manuscripts comes from the following question: “If I were submitting an article to a journal with a reputation of asking for citations to itself even if those citations are not critical to the content of the article, I would probably add such citations BEFORE SUBMISSION.” Answers to this question were in the form of a Likert scale with five potential responses (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree) where Strongly Disagree was coded as a 1 and Strongly Agree coded as a 5. The dependent variable for padding citations in grant proposals uses responses to the statement: “When developing a grant proposal I tend to skew my citations toward high impact factor journals, even if those citations are of marginal import to my proposal.” Answers were in the form of a Likert scale with five potential responses (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree) where Strongly Disagree was coded as a 1 and Strongly Agree coded as a 5.

To test our research questions, several independent variables were developed. We begin by addressing the independent variables that cut across honorary authorship, coercive citation, and padding citations. The first is academic rank. We asked respondents their current rank: Assistant Professor, Associate Professor, Professor, Research Faculty, Clinical Faculty, and other. Dummy variables were created for each category with Professor being the omitted category in our tests of the hypotheses. The second general independent variable is discipline: Medicine, Nursing, Accounting, Economics, Finance, Information Systems, Management, Marketing, Political Science, Psychology, Sociology, Biology, Chemistry, Computer Science, Ecology, and Engineering. Again, dummy variables were created for each discipline, but instead of omitting a reference category we include all disciplines and then constrain the sum of their coefficients to equal zero. With this approach, the estimated coefficients then tell us how each discipline differs from the average level of honorary authorship, coercive citation, or padded citation across the academic spectrum [ 32 ]. We can conveniently identify three categories: (i) disciplines that are significantly more likely to engage in honorary authorship, coercive citation, or padded citation than the average across all disciplines, (ii) disciplines that do not differ significantly from the average level of honorary authorship, coercive citation, or padded citation across all of these disciplines, and (iii) disciplines that are significantly less likely to engage in honorary authorship, coercive citation, or padded citation than the average. We test for potential gender differences with a dummy variable (male = 1, female = 0).
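
To make this coding scheme concrete, the sketch below shows one way such a model could be set up in Python with statsmodels; it is a minimal illustration, not the authors' code, and the file and column names (survey_responses.csv, honorary, rank, discipline, male, n_coauthors, n_publications) are hypothetical stand-ins. The Sum contrast implements the sum-to-zero constraint on the discipline dummies, so each discipline coefficient reads as a deviation from the cross-discipline average.

```python
# Minimal sketch (hypothetical file and column names): a logit with
# treatment-coded rank (Professor as the reference category) and
# sum-to-zero (effect) coded discipline dummies.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical survey extract

model = smf.logit(
    "honorary ~ C(rank, Treatment(reference='Professor'))"
    " + C(discipline, Sum)"  # effect coding: discipline coefficients sum to zero
    " + male + n_coauthors + n_publications",
    data=df,
).fit()

# With Sum coding, each displayed discipline coefficient is a deviation from
# the grand mean; the omitted level's coefficient is minus the sum of the others.
print(model.summary())
```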

Additional independent variables were developed for specific research questions. In our tests of honorary authorship, there is an independent variable addressing the number of co-authors on a respondent’s most recent manuscript. If the respondent stated that they have added an honorary author then they were asked “Please focus on the most recent incidence in which an individual was added as a coauthor to one of your manuscripts even though his or her contribution was minimal. Including yourself, how many authors were on this manuscript?” Respondents who had not added an honorary author were asked to report the number of authors on their most recently accepted manuscript. We also include an independent variable regarding funding agencies: “To which agency, organization, or foundation was this proposal directed?” Again, for those who have added authors, we request they focus on the most recent proposal where they used honorary authorship and for those who responded that they have not practiced honorary authorship, we asked where they sent their most recent proposal. Their responses include NSF, HHS, Corporations, Private nonprofit, State funding, Other Federal grants, and Other grants. Regarding coercive citation, we included an independent variable regarding number of co-authors on their most recent coercive experience and thus if a respondent indicated they’ve been coerced we asked: “Please focus on the most recent incident in which an editor asked you to add citations not based on content. Including yourself, how many authors were on this manuscript?” If a respondent indicated they’ve never been coerced, we asked them to state the number of authors on their most recently accepted manuscript.

Finally, we included control variables. In our tests, we controlled for the respondent's research output, which proxies for exposure to these behaviors. For those analyses focusing on manuscripts we used acceptances: "Within the last five years, approximately how many publications, including acceptances, do you have?" The more someone publishes, the more opportunities they have to be coerced, add authors, or add citations; thus, scholars who have published more articles are more likely to have experienced coercion, ceteris paribus. And in our tests of grants we used two performance indicators: 1) "In the last five years approximately how many grant proposals have you submitted for funding?" and 2) "Approximately how much grant money have you received in the last five years? Please write your estimated dollars in box; enter 0 if zero."

We also investigate coercion using a journal-based dataset, Scopus, which contains information on more than 16,000 journals from these 18 disciplines [ 33 ]. It includes information on the number of articles published each year, the average number of citations per manuscript, the rank of the journal, disciplines that most frequently publish in the journal, the publisher, and so forth. These data were used to help develop our dependent variable as well as our independent and control variables for the journal analysis. Our raw journal data are provided in S4 Appendix: Journal data.

The dependent variables in our journal analysis measure whether a specific journal was identified as a journal in which coercion occurred, or not, and the frequency of that identification. Survey respondents were asked: "To track the possible spread of this practice we need to know specific journals. Would you please provide the names of journals you know engage in this practice?" Respondents were given a blank space to write in journal names. The majority of our respondents declined to identify journals where coercion has occurred; however, more than 1200 respondents provided journal names and in some instances, respondents provided more than one journal name. Among the population of journals in the Scopus database, 612 were identified by our survey respondents as journals that have coerced; some of these journals were identified several times. The first dependent variable is binary, coded as 1 if a journal was identified as a journal that has coerced, and coded as 0 otherwise. The frequency estimates use the count of how many times each journal was named as the dependent variable.
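
As an illustration of how such journal-level outcome variables can be assembled, the sketch below merges respondent write-ins with a Scopus journal list. The file names, column names, and the exact-title matching are our assumptions for illustration, not the authors' procedure; in practice, write-in journal names would need cleaning before matching.

```python
# Sketch (hypothetical file and column names): build the binary and count
# dependent variables for the journal-level analysis.
import pandas as pd

journals = pd.read_csv("scopus_journals.csv")       # one row per Scopus journal
write_ins = pd.read_csv("coercion_write_ins.csv")   # one row per journal named by a respondent

# Count how many times each journal title was named as coercive.
name_counts = write_ins["journal_title"].value_counts()

journals["times_named"] = journals["title"].map(name_counts).fillna(0).astype(int)
journals["named_coercive"] = (journals["times_named"] > 0).astype(int)

# named_coercive feeds the binary (logit) model; times_named feeds the count model.
```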

The independent variables measure various journal attributes, the first being discipline. The Scopus database identifies the discipline that most frequently publishes in any given journal, and that information was used to classify journals by discipline. Thus, if physics is the most common discipline to publish in a journal, it was classified as a physics journal. We look to see if there is a publisher effect using the publisher information in Scopus to create four categories: commercial publishers, academic associations, universities, and others (the omitted reference category).

We also control for differing editorial norms across disciplines. First, we include the number of documents published annually by each journal. All else equal, a journal that publishes more articles has more opportunities to engage in coercion, and/or it interacts with more authors and is more likely to be reported in our sample. Second, we control for the average number of citations per article. The average number of citations per document controls for some of the overall differences in citation practices across disciplines.

Given the large number of hypotheses to be tested, we present a compiled list of the dependent variables in Table 2 . This table names the dependent variables, describes how they were constructed, and lists the tables that present the estimated coefficients pertinent to those dependent variables. Table 2 is intended to give readers an outline of the arc of the remainder of the manuscript.

Table 2. https://doi.org/10.1371/journal.pone.0187394.t002

Honorary authorship in research manuscripts

Looking across all disciplines, 35.5% of our survey respondents report that they have added an author to a manuscript even though the contribution of those authors was minimal. Fig 1 displays tallies of some raw responses to show how the use of honorary authorship, for both manuscripts and grants, differs across science, engineering, medicine, business, and the social sciences.

Fig 1. Percentage of respondents who report that honorary authors have been added to their research projects, that they have been coerced by an editor to add citations, or that they have padded their citations, sorted by field of study and type of manipulation.

https://doi.org/10.1371/journal.pone.0187394.g001

To begin the empirical study of the systematic use of honorary authorship, we start with the addition of honorary authors to research manuscripts. We estimate a logit model in which the dependent variable equals one if the respondent felt obligated to add an author to their manuscript, "even though that individual's contribution was minimal." The estimates appear in Table 3. In brief, all of our conjectures are observed in these data. As we hypothesized above, the pressure on scholars to add authors "who do not add substantially to the research project" is more likely to be felt by assistant professors and associate professors relative to professors (the reference category). To understand the size of the effect, we calculate odds ratios (e^β) for each variable, also reported in Table 3. Relative to a full professor, being an assistant professor increases the odds of honorary authorship in manuscripts by 90%, being an associate professor increases those odds by 40%, and research faculty have roughly twice the odds of a professor of adding an honorary author.
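
For readers who want to reproduce this kind of effect-size calculation, the fragment below (continuing the hypothetical `model` from the earlier sketch) converts logit coefficients into odds ratios with confidence intervals; the variable names remain illustrative and are not taken from the authors' files.

```python
# Sketch: odds ratios (e^beta) and their confidence intervals from a fitted logit.
import numpy as np
import pandas as pd

odds_ratios = np.exp(model.params)
or_conf_int = np.exp(model.conf_int())  # exponentiate the CI bounds of beta

print(pd.concat([odds_ratios.rename("odds_ratio"), or_conf_int], axis=1))

# Reading the output: an odds ratio of about 1.9 on an assistant-professor dummy
# corresponds to roughly 90% higher odds than the reference category (full professor).
```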

Table 3. https://doi.org/10.1371/journal.pone.0187394.t003

Consistent with our hypothesis, we found that females were more likely to add honorary authors, as the estimated coefficient on males was negative and statistically significant. The odds that a male feels obligated to add an author to a manuscript are 38% lower than for females. As hypothesized, authors who already have several co-authors on a manuscript seem more willing to add another, consistent with our hypothesis that the decrement in individual credit diminishes as the number of authors rises. Overall, these results align with our fundamental thesis that authors are purposively deciding to deceive, adding authors when the benefits are higher and the costs lower.

Considering the addition of honorary authors to manuscripts, Table 3 shows that four disciplines are statistically more likely to add honorary authors than the average across all disciplines. Listing those disciplines in order of their odds ratios and starting with the greatest odds, they are: marketing, management, ecology, and medicine (physicians). There are five disciplines in which honorary authorship is statistically below the average and starting with the lowest odds ratio they are: political science, accounting, mathematics, chemistry, and economics. Finally, the remaining disciplines, statistically indistinguishable from the average, are: physics, psychology, sociology, computer science, finance, engineering, biology, information systems, and nursing. At the extremes, scholars in marketing are 75% more likely to feel an obligation to add authors to a manuscript than the average across all disciplines while political scientists are 44% less likely than the average to add an honorary author to a manuscript.

To bolster these results, we also asked individuals to tell us how many times they felt obligated to add honorary authors to manuscripts in the last five years. Using these responses as our dependent variable, we estimated a negative binomial regression equation with the same independent variables used in Table 3. The estimated coefficients and their transformation into incidence rate ratios are given in Table 4. Most of the estimated coefficients in Tables 3 and 4 have the same sign and, with minor differences, similar significance levels, which suggests the attributes associated with a higher likelihood of adding authors are also related to the frequency of that activity. Looking at the incidence rate ratios in Table 4, scholars occupying the lower academic ranks, research professors, females, and manuscripts that already have many authors more frequently add authors. Table 4 also suggests that three additional disciplines, nursing, biology, and engineering, have more incidents of adding honorary authors to manuscripts than the average of all disciplines and, consequently, the disciplines that most frequently engage in honorary authorship are, by effect size, management, marketing, ecology, engineering, nursing, biology, and medicine.
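
A negative binomial counterpart, again with the hypothetical DataFrame and column names from the earlier sketch, could look like the following; incidence rate ratios are obtained by exponentiating the estimated coefficients.

```python
# Sketch (hypothetical column names): frequency model for the count of
# honorary-author incidents reported over the last five years.
import numpy as np
import statsmodels.formula.api as smf

nb_model = smf.negativebinomial(
    "n_honorary_added ~ C(rank, Treatment(reference='Professor'))"
    " + C(discipline, Sum) + male + n_coauthors + n_publications",
    data=df,
).fit()

# Incidence rate ratios, analogous to those reported in Table 4.
# (The dispersion parameter alpha also appears in params and should be
#  ignored when reading the rate ratios.)
irr = np.exp(nb_model.params)
print(irr)
```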

Table 4. https://doi.org/10.1371/journal.pone.0187394.t004

Another way to measure effect sizes is to standardize the variables so that the changes in the odds ratios or incidence rate ratios measure the impact of a one standard deviation change of the independent variable on the dependent variable. In Tables 3 and 4, the continuous variables are the number of coauthors on the particular manuscripts of interest and the number of publications of each respondent. Tables C and D (in S1 Appendix) show the estimated coefficients and odds ratios with standardized coefficients. Comparing the two sets of results is instructive. In Table 3, the odds ratio for the number of coauthors is 1.035: each additional author increases the odds of the manuscript having an honorary author by 3.5%. The estimated odds ratio for the standardized coefficient (Table C in S1 Appendix) is 1.10, meaning an increase in the number of coauthors of one standard deviation increases the odds that this manuscript has an honorary author by 10%. Meanwhile, the standard deviation of the number of coauthors in this sample is 2.78, so 3.5% × 2.78 = 9.73%; the two estimates are very similar. This similarity repeats itself when we consider the number of publications and when we compare the incidence rate ratios across Table 4 and Table D in S1 Appendix. Standardization also tells us something about the relative effect size of different independent variables and in both models a standard deviation increase in the number of coauthors has a larger impact on the likelihood of adding another author than a standard deviation increase in additional publications.
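
The standardization check described above can be reproduced in a few lines. The 1.035 odds ratio and the 2.78 standard deviation come from the text; the column name in the refit line is again hypothetical.

```python
# Sketch: comparing the per-author odds ratio with the per-standard-deviation one.
import numpy as np

or_per_author = 1.035   # odds ratio per additional coauthor (from Table 3)
sd_coauthors = 2.78     # sample standard deviation of coauthor counts (from the text)

# Exact conversion: scale the log-odds coefficient by the standard deviation.
print(np.exp(np.log(or_per_author) * sd_coauthors))   # ~1.10, matching Table C

# To refit with a standardized predictor instead (hypothetical column name):
# df["n_coauthors_std"] = (df["n_coauthors"] - df["n_coauthors"].mean()) / df["n_coauthors"].std()
```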

Honorary authorship in grant proposals

Our next set of results focuses on honorary authorship in grant proposals. Looking across all disciplines, 20.8% of the respondents reported that they had added an investigator to a grant proposal even though the contribution of that individual was minimal (see Fig 1 for differences across disciplines). To probe more deeply into that behavior, we begin with a logit model in which the dependent variable is binary: whether or not a respondent has added an honorary author to a grant proposal. With some modifications, the independent variables include the same variables as the manuscript models in Tables 3 and 4. We remove a control variable relevant to manuscripts (total number of publications) and add two control variables to measure the level of exposure a particular scholar has to the funding process: the number of grants funded in the last five years and the total amount of grant funding (dollars) in that same period.

The results appear in Table 5 and, again, we see significant participation in honorary authorship. The estimates largely follow our predictions and mirror the results of the models in Tables 3 and 4. Academic rank has a smaller effect: being an assistant professor increases the odds of adding an honorary author to a grant by 68%, and being an associate professor increases those odds by 52%. On the other hand, the impact of being a research professor is larger in the grant proposal models than in the manuscripts model of Table 3, while the impact of sex is smaller. As was true in the manuscripts models, the obligation to add honorary authors is also lumpy, some disciplines being much more likely to engage in the practice than others. We find five disciplines in the "more likely than average" category: medicine, nursing, management, engineering, and psychology. The disciplines that tend to add fewer honorary authors to grants are political science, biology, chemistry, and physics. Those that are indistinguishable from the average are accounting, economics, finance, information systems, sociology, ecology, marketing, computer science, and mathematics.

Table 5. https://doi.org/10.1371/journal.pone.0187394.t005

We speculated that science, engineering, and medicine were more likely to practice honorary authorship in grant proposals because those disciplines are more dependent on research funding and more likely to consider funding as a requirement for tenure and promotion. The results in Tables 3 and 5 are somewhat consistent with this conjecture. Of the five disciplines in the “above average” category for adding honorary authors to grant proposals, four (medicine, nursing, engineering, and psychology) are dependent on labs and funding to build and maintain such labs for their research.

Reasons for adding honorary authors

Our next set of results looks more deeply into the reasons scholars give for adding honorary authors to manuscripts and to grants. When considering honorary authors added to manuscripts, we focus on a set of responses to the question: “what was the major reason you felt you needed to add those co-author(s)?” When we look at grant proposals, we use responses to the survey question: “The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was…” Starting with manuscripts, although nine different reasons for adding authors were cited (see survey in S1 Appendix ), only three were cited more than 10% of the time. The most common reason our respondents added honorary authors (28.4% of these responses) was because the added individual was the director of the lab. The second most common reason (21.4% of these responses), and the most disturbing, was that the added individual was in a position of authority and could affect the scholar’s career. Third among the reasons for honorary authorship (13.2%) were mentors. “Other” was selected by about 13% of respondents. The percentage of raw responses for each reason is shown in Fig 2 .

Fig 2. Each pair of columns presents the percentage of respondents who selected a particular reason for adding an honorary author to a manuscript or a grant proposal. Director refers to responses stating, "this individual was the director of the lab or facility used in the research." Authority refers to responses stating, "this individual occupies a position of authority and can influence my career." Mentor, "this is my mentor"; colleague, "this is a colleague I wanted to help"; reciprocity, "I was included or expect to be included as a co-author on their work"; data, "they had data I needed"; reputation, "their reputation increases the chances of the work being published (or funded)"; funding, "they had funding we could apply to the research"; and reviewers, "the grant reviewers suggested we add co-authors."

https://doi.org/10.1371/journal.pone.0187394.g002

To find out if the three most common responses were related to the professional characteristics of the scholars in our study, we re-estimate the model in Table 3 after replacing the dependent variable with the reasons for adding an author. In other words, the first model displayed in Table 6 , under the heading “Director of Laboratory,” estimates a regression in which the dependent variable equals one if the respondent added the director of the research lab in which they worked as an honorary author and equals zero if this was not the reason. The second model indicates those who added an author because he or she was in a position of authority and so forth. The estimated coefficients appear in Table 6 and the odds ratios are reported in S1 Appendix , Table E. Note the sample size is smaller for these regressions because we include only those respondents who say they have added a superfluous author to a manuscript.

Table 6. https://doi.org/10.1371/journal.pone.0187394.t006

The results are as expected. The individuals who are more likely to add a director of a laboratory are research faculty (they mostly work in research labs and centers), and scholars in fields in which laboratory work is a primary method of conducting research (medicine, nursing, psychology, biology, chemistry, ecology, and engineering). The second model suggests that the scholars who add an author because they feel pressure from individuals in a position of authority are junior faculty (assistant and associate professors, and research faculty) and individuals in medicine, nursing, and management. The third model suggests assistant professors, lecturers, research faculty, and clinical faculty are more likely to add their mentors as an honorary author. Since many mentorships are established in graduate school or through post-docs, it is sensible that scholars who are early in their career still feel an obligation to their mentors and are more likely to add them to manuscripts. Finally, the disciplines most likely to add mentors to manuscripts seem to be the “professional” disciplines: medicine, nursing, and business (economics, information systems, management, and marketing). We do not report the results for the other five reasons for adding honorary authors because few respondent characteristics were statistically significant. One explanation for this lack of significance may be the smaller sample size (less than 10% of the respondents indicated one of these remaining reasons as being the primary reason they added an author) or it may be that even if these rationales are relatively common, they might be distributed randomly across ranks and disciplines.

Turning to grant proposals, the dominant reason for adding authors to grant proposals even though they are not actually involved in the research was reputation. Of the more than 2100 individuals who gave a specific answer to this question, 60.8% selected “this individual had a reputation that increases the chances of the work being funded.” The second most frequently reported reason for grants was that the added individual was the director of the lab (13.5%), and third was people holding a position of authority (13%). All other reasons garnered a small number of responses.

We estimate a set of regressions similar to Table 6 using the reasons for honorary grant proposal authorship as the dependent variable and the independent variables from the grant proposal models of Table 5. Before estimating those models we also add six dummy variables reflecting different sources of research funding to see if the reason for adding honorary authors differs by type of funding. These dummy variables indicate funding from NSF, HHS (which includes the NIH), research grants from private corporations, grants from private, non-profit organizations, state research grants, and then a variable capturing all other federally funded grants. The omitted category is all other grants. The estimated coefficients appear in Table 7 and the odds ratios are reported in Table F in S1 Appendix.

Table 7. https://doi.org/10.1371/journal.pone.0187394.t007

The first column of results in Table 7 replicates and adds to the model in Table 5, in which the dependent variable is: "have you added honorary authors to grant proposals." The reason we replicate that model is to add the six funding sources to the regression to see if some agencies see more honorary authors in their proposals than other agencies. The results in Table 7 suggest they do. Proposals for federally funded grants are more likely to include honorary authors than proposals to other sources of grant funding, as the coefficients on NSF, NIH, and other federal funding are all positive and significant at the 0.01 level. Corporate research grants also tend to have honorary authors included.

The remaining columns in Table 7 suggest that scholars in medicine and management are more likely to add honorary authors to grant proposals because of the added scholar’s reputation, but there is little statistical difference across the other characteristics of our respondents. Exploring the different sources of funds, adding an individual because of his or her reputation is more likely with proposals to the Department of Health and Human Services (probably because HHS receives a heavy share of medical proposals and honorary authorship is common in medicine), and it is statistically less likely in proposals directed towards corporate research funding.

Table 7 also shows that lab directors tend to appear as honorary authors on grant proposals submitted by assistant professors and on proposals directed to private corporations. While position of authority (i.e., political power) was the third most frequently cited reason to add someone to a proposal, its practice seems to be dispersed across the academic universe: the regression results in Table 7 do not show much variation across rank, discipline, respondents’ past experience with research funding, or the funding source to which the proposal was directed. The remaining reasons for adding authors garnered a small portion of the total responses and showed little significant variation across the characteristics measured here. For these reasons, their regression results are not reported.

Coercive citations

There is widespread distaste among academics concerning the use of coercive citation. Over 90% of our respondents view coercion as inappropriate, 85.3% think its practice reduces the prestige of the journal, and 73.9% are less likely to submit work to a journal that coerces. These opinions are shared across the academic spectrum as shown in Fig 3 , which breaks out these responses by the major fields, medicine, science, engineering, business, and the social sciences. Despite this disapproval, 14.1% of the overall respondents report being coerced. Similar to the analyses above, our task is to see if there is a systematic set of attributes of scholars who are coerced or if there are attributes of journals that are related to coercion.

Fig 3. The first column in each cluster presents the percentage of respondents from each major academic group who strongly agree or agree that coercive citation “is inappropriate.” The second column is the percentage agreeing that it “reduces the prestige of the journal.” The third column reflects agreement with the statement that they “are less likely to submit work to a journal that coerces.”

https://doi.org/10.1371/journal.pone.0187394.g003

Two dependent variables are used to measure the existence and the frequency of coercive citation. The first is a binary variable indicating whether respondents were coerced or not, and the second counts the frequency of coercion, asking our respondents how many times they have been coerced in the last five years. Table 8 presents estimates of the logit model (coerced or not) and their odds ratios, and Table 9 presents estimates of the negative binomial model (measuring the frequency of coercion) and the accompanying incidence rate ratios. With a single exception (the estimated coefficient on female scholars was opposite our expectation) our hypotheses are supported. In this sample it is males who are more likely to be coerced; being male raises the odds of being coerced by an estimated 18%. In the frequency estimates in Table 9 , however, there was no statistical difference between male and female scholars.

Table 8. https://doi.org/10.1371/journal.pone.0187394.t008

Table 9. https://doi.org/10.1371/journal.pone.0187394.t009

Consistent with our hypotheses, assistant professors and associate professors were more likely to be coerced than full professors, and the effect was larger for assistant professors. Being an assistant professor increases the odds of being coerced by 42% relative to a full professor, while associate professors see about half of that, a 21% increase in their odds. Table 9 shows assistant professors are also coerced more frequently than professors. Co-authors had a negative and significant coefficient, as predicted, in both sets of results. Thus, comparing Tables 3 and 8 , we see that manuscripts with many co-authors are more likely to add honorary authors but are less likely to be targeted for coercion. Finally, we find significant variation across disciplines. Eight disciplines are significantly more likely to be coerced than the average across all disciplines; ordered by their odds ratios (largest to smallest) they are: marketing, information systems, finance, management, ecology, engineering, accounting, and economics. Nine disciplines are less likely to be coerced; ordered by their odds ratios (smallest to largest) they are: mathematics, physics, political science, chemistry, psychology, nursing, medicine, computer science, and sociology. Again, there is support for our speculation that disciplines in which grant funding is less critical (and therefore publication is relatively more critical) experience more coercion. In the top coercion category, six of the eight disciplines are business disciplines, where research funding is less common, and among the “less than average” coercion disciplines, six of the nine rely heavily on grant funding. The anomaly (and one that deserves greater study) is that the social sciences see less than average coercion even though publication is their primary measure of academic success. While they are prime targets for coercion, the editors in their disciplines have largely resisted the temptation. Again, this same pattern emerges in the frequency model. In the S1 Appendix , these models are re-estimated after standardizing the continuous variables. Results appear in Table G (existence of coercion) and Table H (frequency of coercion).
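As a point of reference for reading Tables 8 and 9, the reported odds ratios (and incidence rate ratios) are simply the exponentiated regression coefficients; using the percentages quoted above as a worked example, in LaTeX notation:

    \mathrm{OR} = e^{\hat{\beta}}, \qquad
    e^{\hat{\beta}_{\mathrm{male}}} \approx 1.18 \;\Rightarrow\; \hat{\beta}_{\mathrm{male}} \approx \ln(1.18) \approx 0.17, \qquad
    e^{\hat{\beta}_{\mathrm{assistant}}} \approx 1.42 \;\Rightarrow\; \hat{\beta}_{\mathrm{assistant}} \approx \ln(1.42) \approx 0.35

The incidence rate ratios in Table 9 are interpreted analogously, as exponentiated negative binomial coefficients.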

Coercive citations: Journal data

To achieve a deeper understanding of coercive citation, we reexamine this behavior using academic journals as our unit of observation. We analyze these journal-based data in two ways: 1) a logit model in which the dependent variable equals 1 if that journal was named as having coerced and 0 if not, and 2) a negative binomial model in which the dependent variable is the number of times a journal was identified as one where coercion occurred. As before, the variance of these data substantially exceeds the mean and thus Poisson regression is inappropriate. To test our hypotheses, the independent variables are dummy variables for discipline, journal rank, and dummy variables for different types of publishers. We control for some of the different editorial practices across journals by including the number of documents published annually by each journal and the average number of citations per article.
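A minimal sketch of this journal-level count model, again with hypothetical file and column names, illustrates the overdispersion check and the negative binomial fit; the check is what rules out the simpler Poisson alternative.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    journals = pd.read_csv("journal_data.csv")       # hypothetical file and column names
    y = journals["coercion_count"]                   # times a journal was named as coercing

    # Overdispersion check: a variance far above the mean rules out the Poisson model.
    print("mean:", y.mean(), "variance:", y.var())

    X = sm.add_constant(
        journals[["h_index", "docs_per_year", "cites_per_doc",
                  "business", "for_profit_publisher", "association_publisher"]]
    ).astype(float)

    poisson = sm.Poisson(y, X).fit()
    negbin = sm.NegativeBinomial(y, X).fit()         # estimates the dispersion parameter alpha
    print(negbin.summary())
    print(np.exp(negbin.params))                     # incidence rate ratios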

The results of the journal-based analysis appear in Table 10 . Once again, and consistent with our hypothesis, differences across disciplines emerge and closely follow the previous results. The journals most likely to have coerced authors for citations are in the business disciplines. The effect of a journal’s rank on its use of coercion is perhaps the most startling finding. Measuring journal rank with the h-index suggests that more highly rated journals are more likely to have coerced, and to have coerced more frequently, which is the opposite of our hypothesis that lower-ranked journals are more likely to coerce. Perhaps the chance to move from being a “good” journal to a “very good” journal is just too tempting to pass up. Some anecdotal evidence is consistent with this result. Browsing journal websites, many simply do not mention their rank or impact factor; those that do tend to be more highly ranked (a low-ranked journal typically does not advertise that fact). But the very presence of the impact factor on a website suggests that the journal, or more importantly the publisher, places some value on it, and given that pressure it is not surprising that it may influence editorial decisions. On the other hand, we might be observing the results of established behavior: if some journals have practiced coercion for an extended time, their citation counts might be high enough to have inflated their h-index. We cannot discern the direction of causality, but either way our results suggest that more highly ranked journals end up using coercion more aggressively, all else equal.

Table 10. https://doi.org/10.1371/journal.pone.0187394.t010

There seem to be publisher effects as well. As predicted, journals published by private, profit-oriented companies are more likely to have coerced, but coercion also appears to be more common among journals published by academic associations than by university publishers. Finally, we note that the total number of documents published per year is positively related to a journal having coerced, while the average number of citations per document had no significant effect.

The result that higher-ranked journals seem to be more inclined than lower-ranked journals to have practiced coercion warrants caution. These data contain many obscure journals; for example, there are more than 4000 publications categorized as medical journals and this long tail could create a misleading result. For instance, suppose some medical journals ranked between 1000–1200 most aggressively use the practice of coercion. In relative terms these are “high” ranked journals because 65% of the journals are ranked even lower than these clearly obscure publications. To account for this possibility, a second set of estimates was calculated after eliminating all but the “top-30” journals in each discipline. The results appear in Table 11 and generally mirror the results in Table 10 . Journals in the business disciplines are more likely to have used coercion and used it more frequently than the other disciplines. Medicine, biology, and computer science journals used coercion less. However, even concentrating on the top 30 journals in each field, the h-index remains positive and significant; higher ranked journals in those disciplines are more likely to have coerced.

Table 11. https://doi.org/10.1371/journal.pone.0187394.t011

Padded reference lists

Our final empirical tests focus on padded citations. We asked respondents whether, if they were submitting an article to a journal with a reputation for asking for citations even when those citations are not critical to the content of the article, they would “add such citations BEFORE SUBMISSION.” More than 40% of the respondents agreed with that sentiment. Regarding grant proposals, 15% admitted to adding citations to their proposals’ reference lists “even if those citations are of marginal import to my proposal.”

To see if reference padding is as systematic as the other types of manipulation studied here, we use the categorical responses to the above questions as dependent variables and estimate ordered logit models using the same descriptive independent variables as before. The results for padding references in manuscripts and grant proposals appear in Tables 12 and 13 , respectively. Once more, with minor deviation, our hypotheses are strongly supported.
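A minimal sketch of such an ordered logit, with hypothetical column names standing in for the categorical response and the descriptive covariates, could look as follows (Python, statsmodels):

    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    responses = pd.read_csv("responses.csv")        # hypothetical file and column names

    # Likert-style answer (1 = strongly disagree ... 5 = strongly agree) to the
    # question about adding requested-but-unneeded citations before submission.
    y = pd.Categorical(responses["pad_before_submission"],
                       categories=[1, 2, 3, 4, 5], ordered=True)
    X = responses[["assistant", "associate", "untenured", "male",
                   "aware_of_coercion", "been_coerced"]].astype(float)

    ordered_logit = OrderedModel(y, X, distr="logit").fit(method="bfgs")
    print(ordered_logit.summary())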

Table 12. https://doi.org/10.1371/journal.pone.0187394.t012

Table 13. https://doi.org/10.1371/journal.pone.0187394.t013

Tables 12 and 13 show that scholars of lesser rank and those without tenure are more likely than full professors to pad citations in manuscripts and to skew citations in grant proposals. The gender results are mixed: males are less likely to pad their citations in manuscripts but more likely to pad references in grant proposals. The business disciplines and the social sciences are more likely to pad references in manuscripts, while business and medicine pad citations on grant proposals. In both situations, familiarity with other types of manipulation has a strong, positive correlation with the likelihood that individuals pad their reference list. That is, respondents who are aware of coercive citation and those who have been coerced in the past are much more likely to pad citations before submitting a manuscript to a journal. And scholars who have added honorary authors to grant proposals are also more likely to skew their citations to high-impact journals. While we cannot determine the direction of causation, we show evidence that those who manipulate in one dimension are willing to manipulate in another.

Our results are clear: academic misconduct, specifically misattribution, spans the academic universe. While there are different levels of abuse across disciplines, we found evidence of honorary authorship, coercive citation, and padded citation in every discipline we sampled. We also suggest that a useful construct for approaching misattribution is to assume individual scholars make deliberate decisions to cheat after weighing the costs and benefits of that action. We cannot claim that our construct is universally true because other explanations may be possible, nor do we claim it explains all misattribution behavior because other factors can play a role. However, the systematic pattern of superfluous authors, coerced citations, and padded references documented here is consistent with scholars making deliberate decisions to cheat after evaluating the costs and benefits of their behavior.

Consider the use of honorary authorship in grant proposals. Of the more than 2100 individuals who gave a specific reason for adding a superfluous author to a grant proposal, one rationale outweighed the others: over 60% said they added the individual because they thought the added scholar’s reputation increased their chances of a positive review. That behavior, adding someone with a reputation even though that individual is not expected to contribute to the work, was reported across disciplines, academic ranks, and individuals’ experience in grant work. Apparently, adding authors with highly recognized names to grant proposals has become part of the game and is practiced across disciplines and ranks.

Focusing on manuscripts, there is more variation in the stated reasons for honorary authorship. Lab directors are added to papers in disciplines that are heavy lab users and junior faculty members are more likely to add individuals in positions of authority or mentors. Unlike grant proposals, few scholars add authors to manuscripts because of their reputation. A potential explanation for this difference is that many grant proposals are not blind reviewed, so grant reviewers know the research team and can be influenced by its members. Journals, however, often have blind referees, so while the reputation of a particular author might influence an editor it should not influence referees. Furthermore, this might reflect the different review process of journals versus funding agencies. Funding agencies specifically consider the likelihood that a research team can complete a project and the project’s probability of making a significant contribution. Reputation can play a role in setting that perception. Such considerations are less prevalent in manuscript review because a submitted work is complete—the refereeing question is whether it is done well and whether it makes a significant contribution.

Turning to coercive citations, our results in Tables 8 and 9 are also consistent with a model of coercion that assumes editors who engage in coercive citation do so mindfully; they are influenced by what others in their field are doing, and if they coerce they take care to minimize the potential cost that their actions might trigger. Parallel analyses using a journal database are also consistent with that view. In addition, the distinctive characteristics of each dataset illuminate different parts of the story. The author-based data suggest editors target their requests to minimize the potential cost of their activity by coercing less powerful authors and targeting manuscripts with fewer authors. However, contrary to the honorary authorship results, females are less likely to be coerced than males, ceteris paribus . The journal-based data add that higher-ranked journals seem more inclined to take the risk than lower-ranked journals and that the type of publisher matters as well. Furthermore, both approaches suggest that certain fields, largely located in the business professions, are more likely to engage in coercive activities. This study did not investigate why business might be more actively engaged in academic misconduct because there was little theoretical reason to hypothesize this relationship. There is, however, some literature suggesting that ethics education in business schools has declined [ 34 ]. For the last 20–30 years business schools have embraced the mantra that shareholder value is the only pertinent concern of the firm. It is a small step to imagine that citation counts could be viewed as the only thing that matters for journals, but additional research is needed to flesh out such a claim.

Again, we cannot claim that our cost-benefit model of editors who try to inflate their journal impact factor score is the only possible explanation of coercion. Even if editors are following such a strategy, that does not rule out additional considerations that might also influence their behavior. Hopefully future research will help us understand the more complex motivations behind the decision to manipulate and the subsequent behavior of scholars.

Finally, it is clear that academics see value in padding citations, as it is a relatively common behavior for both manuscripts and grants. Our results in Tables 12 and 13 also suggest that honorary authorship and citation padding in grant proposals are correlated, as are coercive citation and citation padding in manuscripts. Scholars who have been coerced are more likely to pad citations before submitting their work, and individuals who add authors to manuscripts also skew the references on their grant proposals. It seems that once scholars are willing to misrepresent authorship and/or citations, their misconduct is not limited to a single form of misattribution.

It is difficult to examine these data without concluding that there is a significant level of deception in authorship and citation in academic research. It would be naïve to suppose that academics are above such scheming to enhance their position, and the results bear that out. The overwhelming consensus is that such behavior is inappropriate, yet its practice is common. It seems that academics are trapped: compelled to participate in activities they find distasteful. We suggest that the fuel driving this cultural norm is the competition for research funding and high-quality journal space, coupled with the intense focus on a single measure of performance, the number of publications or grants. That competition cuts both ways. On the one hand, it focuses creativity, hones research contributions, and distinguishes between significant contributions and incremental advances. On the other hand, it creates incentives to take shortcuts and to inflate one’s research metrics by strategically manipulating attribution. This puts academics at odds with their core ethical beliefs.

The competition for research resources is getting tighter, and if there is an advantage to be gained by misbehaving, then the odds that academics will misbehave increase; left unchecked, the manipulation of authorship and citation will continue to grow. Different types of attribution manipulation continue to emerge: citation cartels (where editors at multiple journals agree to pad each other’s impact factors) and journals that publish anything for a fee while falsely claiming peer review are two examples [ 30 , 35 ].

It will be difficult to eliminate such activities, but some steps can probably help. Policy actions aimed at attribution manipulation need to reduce the benefits of manipulation and/or increase its cost. One of the driving incentives of honorary authorship is that the number of publications has become a focal point of evaluation, and that number is not sufficiently discounted by the number of authors [ 36 ]. So, if a publication with x authors counted as 1/x publications for each of the authors, the ability to inflate one’s vita would be reduced. There are problems of course, such as who would implement such a policy, but some of these problems can be addressed. For example, if the online, automated citation calculators (e.g., the h-index and impact factors computed by Scopus and Google Scholar) automatically discounted their statistics by the number of authors, it could eventually influence the entire academe. Another shortcoming of this policy is that simple discounting does not allow differential credit to be given where it may be warranted, nor does it remove the power disparity across academic ranks. However, it does stiffen the resistance to adding authors, and that is a crucial step.
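A minimal, purely illustrative sketch of this author-discounted (fractional) counting:

    def fractional_publication_count(author_counts):
        """Author-discounted publication count: a paper with x authors
        contributes 1/x to each author's total."""
        return sum(1.0 / x for x in author_counts)

    # Example: five papers with 1, 2, 2, 4, and 10 listed authors.
    print(fractional_publication_count([1, 2, 2, 4, 10]))   # 2.35 instead of 5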

An increasing number of journals, especially in medicine, are adopting authorship guidelines developed by independent groups, the most common being set forth by the International Committee of Medical Journal Editors (ICMJE) [ 37 ]. To date, however, there is little evidence that those standards have significantly altered behavior; although it is not clear if that is because authors are manipulating in spite of the rules, if the rules are poorly enforced, or if they are poorly designed from an implementation perspective [ 21 ]. Some journals require authors to specifically enumerate each author’s contribution and require all of the authors to sign off on that division of labor. Such delineation would be even more effective if authorship credit was weighted by that division of labor. Additional research is warranted.

There may be greater opportunities to reduce the practice of coercive citation. A fundamental difference between coercion and honorary authorship is the paper trail: editors write down such “requests” to authors, so violations are easier to document and enforcement is more straightforward. First, it is clear that impact factors should no longer include journal self-citations. This simple step removes the incentive to coerce authors. Thomson Reuters already makes such calculations and publishes impact factors both including and excluding self-citations. However, the existence of multiple impact factors gives journals the opportunity to adopt and advertise the factor that puts them in the best light, which means that journals with editors who practice coercion can continue to use impact factors that can be manipulated. Thus, self-citations should be removed from all impact factor calculations. This does not eliminate other forms of impact factor manipulation, such as posting accepted articles on the web and accumulating citations prior to official publication, but it removes the benefit of editorial coercion and other strategies based on inflating self-citation [ 38 ]. Second, journals should explicitly ban their editors from coercing. Some journals are taking these steps, and while words do not ensure practice, a code of ethics reinforces appropriate behavior because it more closely ties a journal’s reputation to the practices of its editors and should increase the oversight of editorial boards. Some progress is being made on the adoption of editorial guidelines, but whether they have any impact is currently unknown [ 39 , 40 ].
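To make the self-citation point concrete, here is a small, purely illustrative sketch of a two-year impact factor computed with and without journal self-citations; the numbers are made up.

    def two_year_impact_factor(citations, citable_items, journal, exclude_self=False):
        """citations: iterable of citing-journal names for citations received this
        year to items the journal published in the previous two years;
        citable_items: number of articles/reviews published in those two years."""
        counted = [c for c in citations if not (exclude_self and c == journal)]
        return len(counted) / citable_items

    # Made-up numbers: 200 citations, 60 of them journal self-citations, 100 citable items.
    cites = ["Journal X"] * 60 + ["Other journals"] * 140
    print(two_year_impact_factor(cites, 100, "Journal X"))                      # 2.0
    print(two_year_impact_factor(cites, 100, "Journal X", exclude_self=True))   # 1.4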

These results also reinforce the idea that grant proposals should be double-blind reviewed. Blind review shifts the decision calculus towards the merit of a proposal and reduces the incentive for honorary authorship. The current system can inadvertently encourage misattribution. For example, scholars are often encouraged to visit granting agencies to meet with reviewers and program directors to talk about high-interest research areas. Such visits make sense, but it is easy for those scholars to treat the visit as a name-collecting exercise: finding people to add to proposals and collecting references to cite. In addition, academic administrators, provosts, deans, and chairs need clear rules concerning authorship. Far too many of our respondents said they added a name to their work because that individual could have an impact on their career. Administrators also need guidelines that address the inclusion of mentors and lab directors in author lists. Proposals that include name-recognizable scholars for only a small proportion of the grant should be viewed with suspicion; this is a consideration in some grant opportunities, but that linkage can be strengthened. Finally, there is some evidence that mentoring can be effective, but there is a real question as to whether mentors are teaching compliance or how to cheat [ 41 ].

There are limitations to this study. Although surveys have shortcomings such as self-reporting bias and self-selection issues, there are some questions for which surveys remain the data collection method of choice, and manipulation is one of them. It would be difficult to determine whether someone added honorary authors or padded citations prior to submission without asking that individual. Similarly, coercion is most directly addressed by asking authors whether editors coerced them for citations. Other approaches, such as examining archival data, running experiments, or building simulations, will not work. Thus, despite its shortcomings, the survey is the method of choice.

Our survey was sent via email and the overall response rate was 10.5%, which by traditional survey standards may be considered low. We have no data on how many surveys were filtered as spam or otherwise ended up in junk mail folders, or how many addresses were obsolete. We recognize, however, that there is a rising hesitancy among individuals to click on an emailed link, and that is exactly what we were asking our recipients to do. For these reasons, we anticipated that our response rate might be low and compensated by increasing the number of surveys sent out. In the end, we have over 12,000 responses and found thousands of scholars who have participated in manipulation. In the S1 Appendix , Table A presents response rates by discipline; while there is variation across disciplines, that variation does not correlate with any of the fundamental results, that is, there does not seem to be a discipline bias arising from differential response rates.

A major concern when conducting survey research is that the sample may not represent the population. To address this possible issue, we performed several statistical analyses to check for sampling bias. First, we compared two population demographics (sex and academic rank) to the demographics of our respondents (see Table B in S1 Appendix ). The percentage of males and females in each discipline was very close to the reported sex of the respondents. There was greater variation in academic rank, with full professors being over-represented in our sample. One should keep this in mind when interpreting our findings. However, our hypotheses and results suggest that professors are the least likely to be coerced, to pad citations, and to use honorary authorship; consequently our results may actually under-estimate the incidence of manipulation. Perhaps the greatest concern of potential bias innate in surveys comes from the intuition that individuals who are more intimately affected by a particular issue are more likely to respond. In the current study, it is plausible that scholars who have been coerced, or felt obligated to add authors to manuscripts, or have added investigators to grant proposals, are upset by that experience and more likely to respond. However, if that motivation biased our responses, it should show up in the response rates across disciplines, i.e., disciplines reporting a greater incidence of manipulation should have a higher percentage of their population experiencing manipulation and thus higher response rates. The rank correlation coefficient between discipline response rates and the proportion of scholars reporting manipulation is r_s = -0.181, suggesting virtually no relationship between the two measures.
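This check amounts to a Spearman rank correlation between two discipline-level series; a minimal sketch with placeholder values (not the study's data) is:

    from scipy.stats import spearmanr

    # Illustrative values only: per-discipline response rates (%) and the share of
    # respondents in that discipline reporting any manipulation (%).
    response_rate = [8.5, 12.1, 9.7, 11.0, 10.4, 13.2]
    manipulation_share = [22.0, 15.5, 30.1, 18.7, 25.3, 12.9]

    rho, p_value = spearmanr(response_rate, manipulation_share)
    print(rho, p_value)   # a value near zero indicates little monotone association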

In the end, we cannot rule out the existence of bias, but we find no evidence that it affects our results. We are left with the conclusion that scholars manipulate attribution, adding honorary authors to their manuscripts and false investigators to their grant proposals, and that some editors coerce scholars to add citations that are not pertinent to their work. It is unlikely that this unethical behavior can be totally eliminated because academics are a competitive, intelligent, and creative group of individuals. However, most of our respondents say they want to play it straight; therefore, by reducing the incentives for misbehavior and raising the costs of inappropriate attribution, we can expect a substantial portion of the community to go along. With this inherent support and some changes to the way we measure scientific contributions, we may reduce attribution misbehavior in academia [ 42 ].

Supporting information

S1 Appendix. Statistical methods, surveys, and additional results.

https://doi.org/10.1371/journal.pone.0187394.s001

S2 Appendix. Honorary authors data.

https://doi.org/10.1371/journal.pone.0187394.s002

S3 Appendix. Coercive citation data.

https://doi.org/10.1371/journal.pone.0187394.s003

S4 Appendix. Journal data.

https://doi.org/10.1371/journal.pone.0187394.s004

  • 26. Ward K, Eddy PL. Women and academic leadership: ‘Leaning out.’ Chronicle of Higher Education. 2013 Dec 3.
  • 29. Dominici F, Fried LP, Zeger SL. So few women leaders. Academe. 2009 Jul–Aug.
  • 33. Scopus. 2014. http://www.elsevier.com/online-tools/scopus.
  • 34. McDonald D. The Golden Passport: Harvard Business School, the Limits of Capitalism, and the Moral Failure of the MBA Elite. New York: HarperCollins; 2017.
  • 37. ICMJE. Defining the Role of Authors and Contributors, Section 2. Who is an Author? 2014. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html.
  • 39. Editors’ Joint Policy Statement Regarding ‘Coercive Citations’. http://www.jfqa.org/EditorsJointPolicy.html.

A Model of Behavioral Manipulation

We build a model of online behavioral manipulation driven by AI advances. A platform dynamically offers one of n products to a user who slowly learns product quality. User learning depends on a product’s “glossiness,” which captures attributes that make products appear more attractive than they are. AI tools enable platforms to learn glossiness and engage in behavioral manipulation. We establish that AI benefits consumers when glossiness is short-lived. In contrast, when glossiness is long-lived, users suffer because of behavioral manipulation. Finally, as the number of products increases, the platform can intensify behavioral manipulation by presenting more low-quality, glossy products.

We thank participants at various seminars and conferences for comments and feedback. We are particularly grateful to our discussant Liyan Yang. We also gratefully acknowledge financial support from the Hewlett Foundation, Smith Richardson Foundation, and the NSF. This paper was prepared in part for and presented at the 2023 AI Authors’ Conference at the Center for Regulation and Markets (CRM) of the Brookings Institution, and we thank the CRM for financial support as well. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.



Physiology News Magazine


  • Spring (May) 2020, Issue 118

The science and art of detecting data manipulation and fraud: An interview with Elisabeth Bik

News and Views

Julia Turan, Managing Editor, Physiology News

https://doi.org/10.36866/pn.118.10

How big of a problem is data manipulation and fraud in biomedical science?

It is hard to make a good estimate about the percentage of papers with manipulated data. In my search of 20,000 biomedical papers that contained western blots (photos of protein gels stained with an antibody to analyse that protein’s expression) I detected image duplication in about 4% of the papers. 1 Some of those duplications could be simple errors, but about half of those papers contained shifted, rotated, mirrored, or manipulated duplicates, which are more suggestive of an intention to mislead. So based on that study, we might conclude that 2% of those papers might contain intentionally duplicated photos. But the percentage of manipulated data, so not just looking at photos but also considering tables and line graphs, might be much higher. It is very hard to detect falsified or fabricated data in a table unless you compare the original lab book notes to the published data. Therefore the true percentage of data manipulation is probably much higher than 2%.

Which data are prone to the most manipulation/fraud?

All data. For me, it is easiest to detect duplications in photos, but I sometimes find unrealistic data in tables as well. For example, I found tables in which the standard deviation of dozens of values was always around 10% of the mean value they represent. That is not realistic for biological data, which is usually much more variable.

How, if at all, are journals/institutes/governments dealing with it?

Most journals that are part of the large scientific publishing houses scan for plagiarism, which is a form of research misconduct. Several journals, such as Nature , PLOS One , and Journal of Cell Biology , have recently implemented more strict guidelines for photographic figures, such as specifically prohibiting cloning, stamping, splicing, etc. And some journals are starting to better scrutinise images in manuscripts sent to them for peer review, as well as asking authors to provide raw data.

Institutes have not been very responsive to allegations of misconduct. Most institutes in the US will provide some classes on misconduct, but when it comes to actually responding and acting upon whistleblowers’ reports, they tend to underperform. Most of the misconduct cases are swept under the rug and, very often, the whistleblower is the one who is fired, not the person accused of the misconduct. Kansas State University ecologist, Joseph Craine, and Johns Hopkins University statistician, Daniel Yuan, were both fired for being whistleblowers, 2,3 while Eleni Liapi of Maastricht University lost access to her lab and servers before being asked to quit, 4 and Karl-Henrik Grinnemo’s career was severely damaged. 5

What is an image duplication detective and how did you get into it?

I started this work around 2014, when I investigated a PhD thesis with plagiarised text in which, coincidentally, I spotted a duplicated western blot. I realised that this might also happen in published science papers, so I started scanning papers that very evening. Immediately, I found some other cases, and I was fascinated and shocked at the same time. Since then, I have scanned 20,000 papers in a structured way, so from different journals, different publishers, and different years. After the publication of our 2016 mBio paper together with Ferric Fang and Arturo Casadevall, 1 I kept on doing this work. About a year ago, I left my paid job to do this work full time.

There are several ways I scan papers, but I mostly follow up on leads that other people send me (“Can you please check the papers by Prof. X because we all suspect misconduct?”) or on groups of papers from the same authors that I found earlier. Image misconduct appears to cluster around certain persons or even institutes.


What has been the reception to these activities in terms of resistance or support from the community and powers that be?

The reception has not been very warm, as you might imagine. Journal editors were possibly embarrassed and perhaps even overwhelmed when I started to send them dozens of cases of papers with duplicated images. Over half of the cases I sent to them in 2014 and 2015 have not been addressed at all, which has been frustrating. Some editors refused to respond to me and others have told me that they did not see any problems with those papers.

Institutes to which I reported sets of papers by the same author(s) have mostly been silent as well. But in the last couple of years, the tide has been changing, and I am starting to see more and more journal editors who are supportive and are actively trying to reject manuscripts with image duplications, before they are published.

How do you detect image manipulation, and are there resources to help automate detection or learn how to do it?

I scan purely by eye. Having scanned probably over 50,000 papers by now, I have some experience on which types of duplications or manipulations to look out for. For complicated figures with lots of microscopy panels I use Forensically 6 but that can only detect direct copies, not anything that has been rotated or zoomed in/out. There is no good software on the market yet to screen for these duplications, but there are several groups working on automated approaches, with promising results. 7,8,9

What are some of the most common types of manipulations? How are these done?

The most common ones are overlapping microscopy images. These are two photos that represent two different experiments, but that actually show an area of overlap, suggesting they are the same tissue. Another very common type is western blots that are shifted or rotated to represent two different experiments. These two examples are not photoshopped, but manipulated in the sense that the photos are somewhat changed (shifted, mirrored, rotated) to mislead the reader. True photoshopped images, where parts of photos are cloned or copy/pasted into other photos are quite common among flow cytometry images.

How have social media and crowdsourcing helped in this endeavour (i.e. Twitter, or your blogs Microbiome digest and Science Integrity Digest, etc.)?

They are helping in making people better peer reviewers, and more critical readers of scientific papers. I use Twitter to show examples of duplications, so people are more aware of them, and can find these cases in the future. PubPeer.com is a website where individual papers can be discussed and flagged for all kinds of concerns. ScienceIntegrityDigest.com is meant for more reflective blog posts, or to describe patterns among scientific papers that cannot be spotted by looking at individual papers, such as the paper mill of over 400 papers that we recently discovered. 10

What are your hopes for the future of image duplication, manipulation and fraud detection and handling as a community?

There will always be dishonest people, and photoshopping techniques are getting better and better, so it is unrealistic to think we can catch all of these cases during peer review, even with detection software. But I hope we can take some of the pressure off scientists that feel driven to publish at any cost. Scientific papers are the foundation of science, but it is unrealistic to ask graduate students, postdocs and assistant professors to publish X number of papers with a combined impact factor of Y before they can graduate or get tenure. Good science takes time, often fails, and never keeps to imposed deadlines. If we measure good science by the wrong output parameters, we put too much temptation onto people to cheat.

What is next for you and how do people keep track of your fascinating activities?

I am not sure yet! I have so many interesting leads to follow that I will probably be busy uncovering “clusters” of misconduct for the next couple of years. But I also hope I will be less regarded as a pair of extraordinary eyes with a Twitter account, and more as a real scientist who wants to improve science. I hope there will be a place at the table for me with institutes and publishers to talk about better ways to detect and decrease science misconduct. You can always follow me on Twitter at @MicrobiomDigest .

  • Bik EM et al. (2016). mBio 7(3), e00809-16. DOI: 10.1128/mBio.00809-16
  • Han AP (2017a). Ecologist loses appeal for whistleblower protection. [Online] Retraction Watch. Available at: retractionwatch.com/2017/05/05/ecologist-loses-appeal-whistleblower-protection/ [Accessed 17 May 2020].
  • Han AP (2017b). Would-be Johns Hopkins whistleblower loses appeal in case involving Nature retraction. [Online] Retraction Watch. Available at: retractionwatch.com/2017/05/25/johns-hopkins-whistleblower-loses-appeal-case-involving-nature-retraction/ [Accessed 17 May 2020].
  • Degens W (2020). Maastricht professor of Cardiology accused of academic fraud. [Online] Observant. Available at: www.observantonline.nl/Home/Artikelen/articleType/ArticleView/articleId/17817/Maastricht-professor-of-Cardiology-accused-of-academic-fraud [Accessed 17 March 2020].
  • Herold E (2018). A star surgeon left a trail of dead patients–and his whistleblowers were punished. [Online] leapsmag. Available at: leapsmag.com/a-star-surgeon-left-a-trail-of-dead-patients-and-his-whistleblowers-were-punished/ [Accessed 17 March 2020].
  • Wagner J (2015). Forensically, Photo Forensics for the Web. [Online] 29a.ch. Available at: 29a.ch/2015/08/16/forensically-photo-forensics-for-the-web [Accessed 17 March 2020].
  • Acuna DE et al. (2018). Bioscience-scale automated detection of figure element reuse. [Preprint] DOI: 10.1101/269415
  • Bucci EM (2018). Automatic detection of image manipulations in the biomedical literature. Cell Death & Disease 9, 400. DOI: 10.1038/s41419-018-0430-3
  • Cicconet M et al. (2018). Image Forensics: Detecting duplication of scientific images with manipulation-invariant image similarity. [Preprint] arXiv:1802.06515
  • Bik EM (2020). The Tadpole Paper Mill. [Online] Science Integrity Digest. Available at: scienceintegritydigest.com/2020/02/21/the-tadpole-paper-mill/ [Accessed 6 May 2020].


A Survey on Learning-Based Robotic Grasping

  • Robotics in Manufacturing (JN Pires, Section Editor)
  • Open access
  • Published: 20 September 2020
  • Volume 1 , pages 239–249, ( 2020 )


  • Kilian Kleeberger   ORCID: orcid.org/0000-0002-0711-0785 1 ,
  • Richard Bormann 1 ,
  • Werner Kraus 1 &
  • Marco F. Huber 1 , 2  


Purpose of Review

This review provides a comprehensive overview of machine learning approaches for vision-based robotic grasping and manipulation. Current trends and developments as well as various criteria for categorization of approaches are provided.

Recent Findings

Model-free approaches are attractive due to their generalization capabilities to novel objects, but are mostly limited to top-down grasps and do not allow a precise object placement which can limit their applicability. In contrast, model-based methods allow a precise placement and aim for an automatic configuration without any human intervention to enable a fast and easy deployment.

Both approaches to robotic grasping and manipulation, with and without object-specific knowledge, are discussed. Due to the large amount of data required to train AI-based approaches, simulations are an attractive choice for robot learning. This article also gives an overview of techniques and achievements in transferring from simulation to the real world.


Introduction

Humans see novel objects and can almost immediately determine how to pick them up. The capabilities of robots lag far behind. Robotic grasping and manipulation is a critical challenge [ 1 ]. Creating cognitive robots that can operate at the same level of dexterity as humans has been pursued for many decades. Despite the interest in research and industry, it remains an unsolved problem [ 2 , 3 ].

Shorter product lifecycles and the steadily rising demand for customization require more flexible and changeable production systems, leading to the need for an automatic configuration (Plug & Produce) of robot systems [ 4 ]. Developing robots that can operate in dynamic and unstructured environments (i.e., bin picking, household or everyday environments, professional services) is of great interest. Approaches to robotic grasping increasingly utilize learning-based methods that configure automatically for the given task without any human intervention, which significantly reduces programming effort [ 5 ]. Machine learning in particular is a promising approach to robotic grasping because of its ability to generalize to novel objects.

This article aims to provide a comprehensive overview of different approaches to robotic grasping. A categorization of the methods is proposed, and various techniques for grasping and for sim-to-real transfer, motivated by the lack of real-world data, are introduced.

Categorization of Methods

Approaches to vision-based robotic grasping can be categorized along multiple criteria. Generally speaking, approaches can be divided into analytic or data-driven methods [ 6 , 7 ]. Analytic (sometimes called geometric) approaches typically analyze the shape of a target object to identify a suitable grasp pose. Data-driven (sometimes called empirical) approaches are based on machine learning and have gained popularity in recent years. They have made significant progress due to increased data availability, better computational resources, and algorithmic improvements. This review article focuses on learning-based approaches to robotic grasping and manipulation. For analytic grasping approaches, we refer readers to [ 7 , 8 , 9 ].

Furthermore, approaches can be categorized as model-based or model-free, depending on whether or not specific knowledge about the object (e.g., CAD model or previously scanned model [ 10 ]) is used to solve the considered task. They can further be differentiated on whether they are focused on grasping and manipulating rigid, articulated, or flexible/deformable objects and whether the method is able to handle known, familiar, or unknown objects [ 6 ]. Figure 1 gives an overview of typical pipelines to robotic grasping. Model-based approaches for known rigid objects typically include a pose estimation step and allow a precise placement of the object. Model-free approaches directly propose grasp candidates and typically aim for a generalization to novel objects.

Fig. 1 Typical pipelines to robotic grasping: Model-based approaches (top row) typically estimate the object pose, determine a suitable grasp pose on the object, plan a path, and finally execute the grasp. Model-free approaches (bottom row) directly determine grasp poses based on the observations given from the sensor. When being trained in simulation, sim-to-real techniques are needed for a robust transfer. This review article discusses the green elements of the figure

An additional criterion is the type of machine learning, i.e., whether the system is trained using supervised learning (SL) or reinforcement learning (RL) [ 11 ]. Annotations can be provided by humans or obtained in a self-supervised manner, i.e., the labels are generated automatically. Approaches typically either sample grasp candidates and rank them using a neural network (discriminative approaches) [ 12 , 13 ] or directly generate suitable grasp poses (generative approaches) [ 14 , 15 ]. Furthermore, approaches differ in whether they are trained in a simulation environment, in the real world, or both, and in the kind of sensor data they use (RGB image, depth image, RGB-D image, point cloud, potentially multiple sensors, …). Moreover, methods operate either in an open-loop (i.e., without any feedback) or closed-loop fashion [ 3 , 16 , 17 ]. Using continuous feedback based on visual features is commonly referred to as visual servoing [ 17 ]. Besides the robot hardware, the gripper type (two-finger gripper, suction gripper, …) and the gripper degrees of freedom (4D, 6D, …) also differentiate approaches. Moreover, some approaches focus on grasping single separated objects only, while others target grasping in dense clutter. Furthermore, some methods are able to perform pre-grasp manipulations in order to move the object into a better configuration for grasping. Table 1 provides an overview of the discussed approaches and shows a small, exemplary selection from the variety of methods available in the literature. In addition to the abovementioned criteria, the reported grasp success rate is indicated, although it is determined on different benchmarks.

Object Pose Estimation for Robotic Grasping

Model-based robotic grasping can be considered a three-stage process: first, object poses are estimated; then a grasp pose is determined; and finally a collision-free and kinematically feasible path is planned towards the object to pick it [ 34 , 35 ]. This section focuses on the first stage, whose goal is to estimate the translation and rotation of potentially multiple objects in the scene relative to a given reference frame (usually the camera). This task is challenging because of sensor noise, varying lighting conditions, clutter and occlusions, and the variety of objects in the real world. Furthermore, object symmetries result in pose ambiguities that have to be addressed, because under symmetry different pose annotations correspond to identical observations [ 36 , 37 , 38 , 39 ]. For learning-based approaches to the second stage, we refer readers to [ 40 ].
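One common way to handle such ambiguities, sketched below with illustrative code, is to evaluate the rotation error against every symmetry-equivalent ground-truth rotation and keep the minimum; the metric of Brégier et al. is more elaborate, so this is only a simplified illustration.

    import numpy as np

    def geodesic_rotation_error(R_est, R_gt):
        """Angular distance (radians) between two 3x3 rotation matrices."""
        cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def symmetry_aware_error(R_est, R_gt, symmetries):
        """Minimum rotation error over all symmetry transforms of the object.
        `symmetries` is a list of rotation matrices mapping the object onto itself."""
        return min(geodesic_rotation_error(R_est, R_gt @ S) for S in symmetries)

    # Two-fold symmetry about the z-axis: identity plus a 180-degree rotation.
    Rz180 = np.diag([-1.0, -1.0, 1.0])
    symmetries = [np.eye(3), Rz180]
    R_est = Rz180                     # estimate flipped relative to the ground truth
    print(symmetry_aware_error(R_est, np.eye(3), symmetries))   # ~0.0, as it should be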

When utilizing object-specific knowledge, approaches typically require an object-specific configuration (a high amount of manual tuning) until a satisfactory system performance is reached, which limits scalability to novel objects [ 5 ]. More specifically, parameters for the template or feature matching method used for pose estimation [ 41 , 42 ], or the definition of robust grasp poses together with (static) priorities [ 35 ], are required and have to be tuned in real-world experiments. Therefore, model-based approaches aim for an automatic configuration with minimal user input, and without any expert tuning, to allow a fast and easy transfer to novel objects.

Utilizing the strength of supervised learning for 6D object pose estimation requires large amounts of labeled data for training. Creating and annotating datasets with 6D poses is very tedious, time-consuming, and does not scale [ 43 ]. Thus, it is a trend to train models on synthetic data because simulations are an abundant source of data and flawless ground truth annotations are automatically available (see also “Simulations” section). Transfer techniques are used for deployment to the real world (see also “Techniques for Sim-to-Real Transfer” section). [ 18 , 20 •]

In recent years, research in 6D object pose estimation has been dominated by approaches based on convolutional neural networks (CNNs). Approaches typically either discretize the pose space in bins and predict a class [ 44 , 45 ] or solve pose estimation in terms of a regression task [ 19 , 20 •, 46 ]. DOPE [ 18 ] uses a deep neural network to process an RGB image, outputs the 2D image coordinates of the 3D bounding box of the objects, and uses a PnP algorithm [ 47 ] to estimate the 6D pose of each instance. The model is trained entirely on synthetic data while for the transfer from simulation to the real world, DOPE employs a combination of domain randomization [ 48 ••] and photorealistic rendering. The authors further demonstrate that the pose estimator trained on synthetic data can operate in real-world grasping systems with sufficient accuracy.

Pose estimation challenges [ 49 , 50 ] and standard benchmarking systems [ 51 ] allow advancing the state of the art and enable a transparent and fair comparison of different approaches. In particular, the robust pose estimation of multiple objects in bulk is a great challenge and of major importance. Such scenes, which are common in industrial bin picking, are challenging due to a high amount of clutter and occlusion, as visualized in Fig.  2 . A challenge focusing on 6D object pose estimation for bin-picking [ 49 ] was organized at IROS 2019 and utilized a large-scale dataset [ 43 ] comprising fully 6D pose-annotated synthetic and real-world scenes. For evaluation, the metric of Brégier et al. [ 36 , 37 ] was used, which properly accounts for object symmetries and considers objects with visibility of more than 50%.

Fig. 2 Cluttered scene for bin-picking

In general, learning-based approaches have proven to be robust to occlusions because they learn plausible object pose configurations [ 49 ]. PPR-Net [ 19 ], the winning method of the aforementioned challenge, operates on point clouds: it utilizes PointNet++ [ 52 ] to estimate a 6D pose for each point of the point cloud and then applies clustering in 6D space, averaging each identified cluster to compute the final pose hypotheses. The approach is outperformed by OP-Net [ 20 •] in terms of average precision on the noisy Siléane dataset [ 36 ]. Furthermore, OP-Net is much faster than PPR-Net because it provides a much more compact parameterization of the output and does not require post-processing. The approach discretizes the 3D space of the scene and regresses a pose and confidence for each resulting volume element.

A major advantage of learning-based object pose estimators is that they do not require a manual parameter tuning for the configuration of new objects [ 41 , 42 ]. Furthermore, they can be entirely trained on synthetic data, which can easily be obtained using a physics simulation by dropping objects in a random position and orientation above a bin in the case of bin-picking [ 43 ] or by placing (household) objects in virtual scenes [ 18 ].

Model-Free Robotic Grasping

Model-free approaches are attractive due to their ability to generalize to unseen objects [ 53 ] and represent a dominant direction in robotic grasping research. They do not use prior knowledge about the objects and therefore work without a pose estimation step, in contrast to the approaches discussed in the “Object Pose Estimation for Robotic Grasping” section. These approaches often show promising generalization to novel objects, and models are usually trained in an end-to-end fashion. Placement of the objects after picking is usually not considered, and the type of object being picked is unknown.

Supervised Learning for Robotic Grasping

Supervised learning is concerned with learning a (non-linear) mapping based on labeled training data. In this section, we categorize the approaches as discriminative or generative depending on whether the grasp configuration is the input or the output of the model.

Discriminative Approaches

Discriminative approaches sample grasp candidates (e.g., using the cross-entropy method, CEM [ 54 ]) and rank them using a neural network. For grasp execution, the robot chooses the grasp with the highest score. These approaches typically have a high runtime because they require multiple forward passes of the neural network to obtain high-quality grasps. Nonetheless, they come with the advantage that arbitrarily many grasp poses can be evaluated and that they are not limited by a discretization of the grasping primitives or output space. Furthermore, a gradient-based refinement process can be applied to improve the grasp success rate [ 32 •].
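The sample-and-rank loop common to discriminative approaches can be sketched as follows; "score_grasp" stands in for a trained grasp-quality network and is not a real API, and the workspace bounds are arbitrary.

    import numpy as np

    def sample_grasp_candidates(n, workspace=((0.0, 0.5), (0.0, 0.5))):
        """Uniformly sample n planar grasp candidates (x, y, theta)."""
        (x_lo, x_hi), (y_lo, y_hi) = workspace
        x = np.random.uniform(x_lo, x_hi, n)
        y = np.random.uniform(y_lo, y_hi, n)
        theta = np.random.uniform(-np.pi / 2, np.pi / 2, n)
        return np.stack([x, y, theta], axis=1)

    def score_grasp(depth_image, grasp):
        """Placeholder for a trained grasp-quality network; it would return a
        success probability for the candidate given the depth image."""
        return np.random.rand()   # stand-in only

    def best_grasp(depth_image, n_candidates=256):
        candidates = sample_grasp_candidates(n_candidates)
        scores = [score_grasp(depth_image, g) for g in candidates]
        return candidates[int(np.argmax(scores))]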

Levine et al. [ 24 ] proposed a learning-based approach to hand-eye coordination for robotic grasping based on RGB images. In their work, they used up to 14 robots to collect success labels for 800,000 grasps in 2 months. The trained convolutional neural network can predict the grasp success for a given candidate based on an RGB image of the bin and is used to servo the gripper towards successful grasps. While this approach demonstrates the potential of learning-based approaches to robotic grasping, changes in the hardware setup require the collection of new data for retraining the system.

Dex-Net [12, 26] uses a physics simulation to grasp objects placed in randomized poses on a plane. The outcome of each grasp is logged together with an aligned crop of the depth image at the grasp location, forming one sample of the dataset. Their Grasp Quality Convolutional Neural Network (GQ-CNN) is trained on this dataset; the trained model predicts the grasp success for given grasp candidates and depth images and generalizes to rigid, articulated, and flexible objects unseen during training. The Dex-Net framework has been extended to suction grippers [13] and to a dual-arm robot [27] whose policy infers whether to use a parallel-jaw or a suction gripper for emptying a cluttered bin. Furthermore, a fully convolutional architecture that generates grasps has been proposed to avoid the expensive sampling and ranking of grasp candidates [28].
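As an illustration of the kind of input such a grasp-quality network consumes, the sketch below builds a grasp-aligned depth crop; the crop size and alignment conventions are assumptions and not the Dex-Net specification.

```python
# Minimal sketch of building a grasp-aligned depth crop for training or querying
# a grasp-quality network on (depth image, grasp) pairs.
import numpy as np
from scipy.ndimage import rotate

def aligned_crop(depth, center, angle_deg, size=32):
    """Rotate the depth image so the grasp axis is horizontal, then crop around it."""
    cy, cx = center
    half = size  # take a generous window first so the rotation does not cut the crop
    window = depth[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    window = rotate(window, angle_deg, reshape=False, order=1, mode="nearest")
    h, w = window.shape
    return window[h // 2 - size // 2:h // 2 + size // 2,
                  w // 2 - size // 2:w // 2 + size // 2]

depth_image = np.random.rand(480, 640).astype(np.float32)   # stand-in depth map
crop = aligned_crop(depth_image, center=(120, 200), angle_deg=35.0)
```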

Generative Approaches

Generative approaches output a grasp configuration directly. One such approach, called robotic grasp detection, detects oriented rectangles [55] in the image plane that represent promising grasp candidates for parallel-jaw grippers. This parameterization comprises the position, orientation, and opening width of the gripper as visualized in Fig. 3. Robotic grasp detection is analogous to object detection [56, 57, 58] in computer vision, with the only difference being an added term for the gripper orientation.

Figure 3: Parameterization for robotic grasp detection: two values for the position, two for the size, and one for the orientation of the oriented rectangle. Red sides indicate the jaws of the gripper and blue the opening width.
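A minimal sketch of this five-parameter rectangle representation is given below; the field names and the pixel-space convention are our own choices for illustration.

```python
# Minimal sketch of the five-parameter oriented-rectangle grasp representation
# (position, size, orientation) illustrated in Fig. 3.
from dataclasses import dataclass
import math

@dataclass
class GraspRectangle:
    x: float        # center position in image coordinates (pixels)
    y: float
    width: float    # gripper opening width (pixels)
    height: float   # jaw size (pixels)
    theta: float    # orientation of the rectangle (radians)

    def corners(self):
        """Return the four rectangle corners, e.g. for drawing or IoU evaluation."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        pts = []
        for dx, dy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            lx, ly = dx * self.width / 2, dy * self.height / 2
            pts.append((self.x + c * lx - s * ly, self.y + s * lx + c * ly))
        return pts

grasp = GraspRectangle(x=120.0, y=200.0, width=40.0, height=15.0, theta=0.6)
```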

For the scenario where a single object is placed on a planar surface, Redmon et al. [14] proposed a system called SingleGrasp, which predicts an oriented rectangle and simultaneously classifies the object in a given RGB-D image using a neural network. Since an object can be grasped in multiple different ways, they also introduced MultiGrasp, which predicts multiple grasp poses per image; this approach led to the You Only Look Once (YOLO) [56, 57] approach for object detection. Lenz et al. [21] proposed a learning-based two-stage system that samples candidates and ranks them using a second neural network and demonstrated that it can be used for real-world robotic grasping tasks. Performance can be increased further by using more sophisticated network architectures [3].

A public dataset for robotic grasp detection is the Cornell grasping dataset [59], which comprises 1035 images of 280 objects with human-annotated grasps. Due to the low number of samples, heavy data augmentation is required for good performance [14]. The Jacquard dataset [60] comprises over 50,000 synthetic samples of more than 11,000 objects with grasps obtained from grasping trials in simulation and enables better generalization due to its increased diversity.

Building on these public datasets, GG-CNN [15, 22] outputs a grasp configuration together with a quality estimate for every pixel of the image using a small fully convolutional architecture. Due to its low computational demands, the approach can be used for closed-loop grasping in dynamic, non-static environments. It can also grasp in clutter even though the model is trained only on images of single, isolated objects, because convolution is a local operation.
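The sketch below shows a GG-CNN-style fully convolutional network with per-pixel quality, orientation (encoded as sin/cos of twice the angle, to handle the gripper's symmetry), and width outputs; the layer sizes are illustrative and do not reproduce the published architecture.

```python
# Minimal sketch of a fully convolutional network predicting a grasp quality,
# orientation encoding, and gripper width for every pixel of a depth image.
import torch
import torch.nn as nn

class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        self.quality = nn.Conv2d(16, 1, 1)   # grasp success estimate per pixel
        self.cos2a = nn.Conv2d(16, 1, 1)     # cos(2*angle) per pixel
        self.sin2a = nn.Conv2d(16, 1, 1)     # sin(2*angle) per pixel
        self.width = nn.Conv2d(16, 1, 1)     # gripper opening width per pixel

    def forward(self, depth):
        f = self.features(depth)
        return self.quality(f), self.cos2a(f), self.sin2a(f), self.width(f)

# The executed grasp is simply the pixel with the highest predicted quality:
net = PixelwiseGraspNet()
q, c, s, w = net(torch.rand(1, 1, 96, 96))
best_pixel = torch.argmax(q.flatten())
```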

TossingBot [30] learns to throw arbitrary objects to given target locations, which extends the physical reach of a robot arm. The authors propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing from images of objects in a bin by trial and error. As a result, the system learns through self-supervision to select grasps that lead to predictable throws. The throwing problem is simplified to predicting only the release velocity: a physics-based controller provides an estimate, which is then adjusted by the residual predicted by the neural network.
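The residual-physics idea can be sketched as follows, assuming for simplicity a ballistic throw with equal release and landing heights; `residual_model` is a placeholder for the trained network and not part of any cited API.

```python
# Minimal sketch of "residual physics": an analytic controller supplies a nominal
# release velocity for the target distance and a learned model adds a correction.
import math

G = 9.81  # gravitational acceleration (m/s^2)

def ballistic_release_velocity(distance, release_angle_rad=math.pi / 4):
    """Nominal release speed for a projectile to travel `distance` (equal heights assumed)."""
    return math.sqrt(distance * G / math.sin(2 * release_angle_rad))

def release_velocity(distance, residual_model=lambda d: 0.0):
    """Physics-based estimate plus a learned residual correction."""
    return ballistic_release_velocity(distance) + residual_model(distance)

# e.g. throw to a target 1.5 m away with a (dummy) learned residual of +0.1 m/s:
v = release_velocity(1.5, residual_model=lambda d: 0.1)
```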

Generative approaches are fast because they require only a single forward pass. They usually provide multiple grasp candidates simultaneously, and the robot executes the grasp with the highest quality.

Reinforcement Learning for Robotic Grasping and Manipulation

Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies by trial and error. Policies operating on raw sensory inputs, such as images, can learn complex behaviors.

Pre-grasp manipulations such as pushing or shifting [61, 62] are also important for rearranging cluttered objects so that they can be grasped at all, or grasped more robustly. Policies trained with reinforcement learning for these tasks also demonstrate generalization to novel objects [61, 62].

A comparison of several deep reinforcement learning methods on grasping tasks is provided in [63]. QT-Opt [29••] demonstrates a rich set of manipulation strategies and responds dynamically to disturbances and perturbations. The robot observes a reward of 1 for successfully lifting an object and 0 for a failed grasp. The closed-loop, vision-based control framework operates in a setup similar to [24, 25•, 64•] and reports a grasp success rate of 96% on unseen objects by optimizing long-horizon grasp success, using about 800 robot hours of data collected over 4 months across 7 robots.
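For orientation, the sketch below spells out the sparse grasp reward and a one-step temporal-difference target of the kind such methods optimize; the real QT-Opt system additionally optimizes actions with CEM and trains from large-scale off-policy data, which is not shown here.

```python
# Minimal sketch of the sparse grasp reward and a one-step Q-learning target used
# in vision-based grasping RL. `max_q_next` stands in for the best Q-value of the
# next state as estimated by a (hypothetical) Q-network.
def grasp_reward(lift_successful: bool) -> float:
    """Reward of 1 for successfully lifting an object, 0 for a failed grasp."""
    return 1.0 if lift_successful else 0.0

def td_target(reward: float, done: bool, max_q_next: float, gamma: float = 0.9) -> float:
    """Bellman target: reward now, plus discounted best value of the next state."""
    return reward + (0.0 if done else gamma * max_q_next)

# Example: a failed grasp mid-episode with an estimated best next-state value of 0.6
target = td_target(grasp_reward(False), done=False, max_q_next=0.6)
```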

“Grasping in the Wild” [33••] enables closed-loop 6D grasping of novel objects based on human demonstrations and can operate in dynamic scenes with moving objects, up to some speed constraint.

Simulations and Sim-to-Real Transfer

Despite all its advantages with respect to performance and robustness, deep learning has the disadvantage of requiring large amounts of training data. This is especially problematic in robotics, where generating training data on real-world systems is expensive and time-consuming. For instance, Pinto et al. [23] trained a robot to grasp novel objects by collecting 50,000 trials in more than 700 h, Levine et al. [24, 25•] required 800,000 grasps parallelized over 14 robots in 2 months for robust grasping performance, and QT-Opt [29••] collected over 560,000 grasps over the course of several weeks across 7 robots. Additionally, these systems are not invariant to changes in the hardware setup such as changing the gripper, altering the table height, or moving the camera. To avoid the need to set up “arm farms” for learning robust robotic grasping and manipulation policies, simulation is an attractive alternative.

Simulations

Simulations can be employed to overcome these limitations because they provide an abundant source of data with flawless annotations. Commonly used physics simulations include V-REP/CoppeliaSim [65], PyRep [66], MuJoCo [67], Blender [68], and Gazebo [69], to name only a few. Simulations are fast and can be parallelized across multiple machines for rapid learning or data generation, and they allow training robots without wear and tear on the components and without interrupting production in the field. On the other hand, simulations require explicit programming of the desired application, potentially incur license costs, and do not perfectly capture the properties of the real world.

Techniques for Sim-to-Real Transfer

In general, models trained in simulation do not transfer directly to the real world due to the “reality gap” [64•, 70, 71]. This section discusses different approaches for bridging the simulation-to-reality gap. Models can be transferred to the real world by building better simulations, by domain randomization [48••], or by domain adaptation [64•, 70, 72, 73].

Domain Randomization

Domain randomization [48••] applies various randomizations to the observations (vision randomization) or to the system dynamics (dynamics randomization) so that the real world appears to the model as just another variation. Vision randomization varies visual aspects of the simulator, such as the textures and colors of the objects and the background, the lighting, object and camera placement, and the type and amount of noise added to the image, which forces the network to focus on the essential features of the image. Dynamics randomization varies properties of the system or environment [71], including gravity, the mass of each link of the robot’s body, the damping of each joint, the pose of the robot base, as well as the mass, friction, and damping of the manipulated objects, to obtain a robust transfer from simulation to the real world.
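A minimal sketch of per-episode randomization is shown below; the parameter names and ranges are arbitrary assumptions and would be tuned to the simulator and task at hand.

```python
# Minimal sketch of sampling a fresh set of visual and dynamics randomizations
# for every simulated episode.
import random

def sample_randomization():
    return {
        # vision randomization
        "light_intensity": random.uniform(0.3, 1.5),
        "object_rgb": [random.random() for _ in range(3)],
        "camera_jitter_m": [random.uniform(-0.02, 0.02) for _ in range(3)],
        "image_noise_std": random.uniform(0.0, 0.05),
        # dynamics randomization
        "object_mass_scale": random.uniform(0.5, 1.5),
        "friction": random.uniform(0.4, 1.2),
        "joint_damping_scale": random.uniform(0.8, 1.2),
    }

# One new randomization per episode keeps the policy from overfitting to a
# single simulated appearance or dynamics setting.
for episode in range(3):
    params = sample_randomization()
    # apply `params` to the simulator, then collect the episode...
```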

This technique has been successfully used for object localization [48••], segmentation [74], robot control for pick-and-place [75], swing-peg-in-hole [76], opening a cabinet drawer [76], in-hand manipulation [77], one-handed Rubik’s Cube solving [78], and precise 6D pose regression in highly cluttered environments [20•]. Extensions automatically schedule the intensity of the randomization based on the current performance of the system [78] or adapt the simulation randomizations using real-world data to identify distributions that are particularly suited for a successful transfer [76]. Synthesizing millions of random object shapes for training [79] indicates further potential of this technique for robotic grasping.

Domain Adaptation

Domain adaptation allows a machine learning model trained on samples from a source domain to generalize to a target domain, typically by exploiting unlabeled data from the target domain. In sim-to-real transfer, the source domain is (usually) the simulation and the target domain is the real world. Prior work can be grouped into feature-level domain adaptation [80, 81], which learns domain-invariant features, and pixel-level domain adaptation [70], which restyles images to bridge the domain gap [16, 64•].
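As an illustration of the feature-level variant, the sketch below uses a gradient reversal layer in the style of domain-adversarial training (DANN); it is a generic construction with made-up layer sizes, not the pipeline of any specific cited work.

```python
# Minimal sketch of feature-level domain adaptation with a gradient reversal
# layer: the feature extractor is trained to fool a domain classifier so that
# simulated and real features become indistinguishable.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flip the gradient for the extractor

features = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
task_head = nn.Linear(128, 1)          # e.g. grasp success prediction (labeled sim data)
domain_head = nn.Linear(128, 2)        # sim-vs-real classifier (uses unlabeled real data)

x_sim = torch.rand(8, 1, 64, 64)       # stand-in batch of simulated observations
f = features(x_sim)
task_out = task_head(f)                              # supervised loss uses this
domain_out = domain_head(GradReverse.apply(f, 1.0))  # adversarial loss uses this
```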

Domain adaptation techniques are usually based on generative adversarial networks (GANs) [82]. With some unlabeled real-world data, these approaches allow a drastic reduction in the number of real-world samples needed. Using a hand-eye coordination system similar to [24, 25•], GraspGAN [64•] reduces the number of real-world samples required to approximately 2% for similar system performance, which allows a faster deployment of the solution in different setups.

Still, these approaches require data from the target domain (i.e., some samples from the real world), which limits scalability. Moreover, GANs are often hard to train and yield fragile results, and the output images of the generator (refiner) network are not perfectly realistic and may contain inaccuracies and artifacts.

RCAN [16] translates randomized simulation images into a canonical simulation version, which is then used for policy training. The trained system can also translate real-world images into canonical images and consequently enables a sim-to-real transfer of the grasping policy, as demonstrated with QT-Opt [29••].

Benchmarking

Because many new pose estimation approaches are evaluated on only a small number of datasets, the Benchmark for 6D Object Pose Estimation (BOP) [51] aims to standardize datasets and evaluation to improve comparability. In addition to challenges such as the “Occluded Object Challenge” [83], SIXD [50], and the “Object Pose Estimation Challenge for Bin-Picking” [49], BOP also organizes challenges for pose estimation.

Challenges focusing on robotic grasping and manipulation [84, 85] are of great value to the research community because they capture and advance the current state of the art in the field. The Amazon Picking/Robotics Challenge [2, 86, 87, 88, 89, 90, 91] focused on autonomous picking in warehouse scenarios. Still, participation can be difficult because of the required on-site presence and the hardware costs. Detailed instructions on how to place the objects for picking [92] allow a comparison of different approaches. In particular, simulation environments enable benchmarking of grasping and manipulation approaches under reproducible conditions without hardware costs and are highly important for measuring scientific progress [93].

Conclusions

Learning-based approaches to robotic grasping enable the picking of diverse sets of objects and demonstrate high grasp success rates even in cluttered scenes and non-static environments. Machine learning and simulation enable fast and easy deployment: model-based solutions can be configured automatically, and model-free approaches generalize to novel objects.

Despite impressive results, robotic grasping and manipulation is not solved. All discussed model-free approaches execute top-down grasps and offer only limited flexibility in the gripper orientation. Only a small number of works address learning-based grasping in 6D for single objects [32•, 94, 95, 96] or in clutter [31, 33••, 34, 97]. While receiving increasing attention in research, model-free grasping in 6D is especially relevant for picking objects from a cluttered bin [35], from a shelf [10], or for more robust grasps in general.

Usually, the task of the robot is simply to “grasp anything,” although some works address directed grasping to pick a specific object from a cluttered scene [63, 73, 98]. Model-free approaches do not allow a precise placement of the objects. Instead of simply dropping the picked object, many practical applications require an at least semi-precise or gentle placement of the components, which has received less attention. While solutions for avoiding the entanglement of objects exist [99, 100], no general solution has been proposed for unhooking complex object geometries.

Change history

29 October 2020

Springer Nature’s version of this paper was updated to include the Funding note: Open Access funding enabled and organized by Projekt DEAL.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance; •• Of major importance

Hodson R. A gripping problem: designing machines that can grasp and manipulate objects with anything approaching human levels of dexterity is first on the to-do list for robotics. In: Nature; 2018.

Zeng A, Song S, Yu K-T, Donlon E, Hogan FR, Bauza M, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Kumra S, Kanan C. Robotic grasp detection using deep convolutional neural networks. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); September 24–28, 2017; Vancouver: IEEE; 2017.

Reinhart G, Hüttner S, Krug S. Automatic configuration of robot systems – upward and downward integration. In: Jeschke S, Liu H, Schilberg D, editors. Berlin, Heidelberg: Springer Berlin Heidelberg; 2011.

El-Shamouty M, Kleeberger K, Lämmle A, Huber M. Simulation-driven machine learning for robotics and automation. tm - Technisches Messen. 2019;86:673–84.


Bohg J, Morales A, Asfour T, Kragic D. Data-driven grasp synthesis—a survey. In: IEEE Transactions on Robotics (T-RO); 2014.

Sahbani A, El-Khoury S, Bidaud P. An overview of 3D object grasp synthesis algorithms. In: Robotics and Autonomous Systems; 2012.

Bicchi A, Kumar V. Robotic grasping and contact: a review. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); April 24–28, 2000; San Francisco, CA, USA; 2000.

Shimoga KB. Robot grasp synthesis algorithms: a survey. In: The International Journal of Robotics Research (IJRR); 1996.

Bormann R, Brito BF de, Lindermayr J, Omainska M, Patel M. Towards automated order picking robots for warehouses and retail. In: Tzovaras D, Giakoumis D, Vincze M, Argyros A, editors. Computer Vision Systems; September 23–25, 2019; Thessaloniki, Greece. Cham: Springer International Publishing; 2019.

Sutton RS, Barto AG. Reinforcement learning: an introduction. Cambridge Massachusetts: The MIT Press; 2018.


Mahler J, Liang J, Niyaz S, Laskey M, Doan R, Liu X, et al. Dex-Net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp Metrics. In: Amato N, Srinivasa S, Ayanian N, Kuindersma S, editors. Robotics: Science and Systems (RSS); July 12–16, 2017; Cambridge, Massachusetts, USA: Robotics Science and Systems Foundation; 2017.

Mahler J, Matl M, Liu X, Li A, Gealy D, Goldberg K. Dex-Net 3.0: computing robust vacuum suction grasp targets in point clouds using a new analytic model and deep learning. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Redmon J, Angelova A. Real-time grasp detection using convolutional neural networks. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 26–30, 2015; Seattle, WA, USA; 2015.

Morrison D, Leitner J, Corke P. Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach. In: Kress-Gazit H, Srinivasa S, Atanasov N, editors. Robotics: Science and Systems (RSS); June 26–30, 2018. Pittsburgh: Robotics Science and Systems Foundation; 2018.


James S, Wohlhart P, Kalakrishnan M, Kalashnikov D, Irpan A, Ibarz J, et al. Sim-to-real via sim-to-sim: data-efficient robotic grasping via randomized-to-canonical adaptation networks. In: IEEE, editor. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 16–20, 2019; Long Beach, CA; 2019.

Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Berlin: Springer Science+Business Media; 2008.

Tremblay J, To T, Sundaralingam B, Xiang Y, Fox D, Birchfield S. Deep object pose estimation for semantic robotic grasping of household objects. In: Conference on Robot Learning (CoRL); October 29–31, 2018. Zürich: PMLR; 2018.

Dong Z, Liu S, Zhou T, Cheng H, Zeng L, Yu X, Liu H. PPR-Net: point-wise pose regression network for instance segmentation and 6D pose estimation in bin-picking scenarios. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); November 4–8, 2019; The Venetian Macao, Macau, China: IEEE; 2019.

• Kleeberger K, Huber MF. Single shot 6D object pose estimation. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 31 – June 4, 2020; Palais des Congrès de Paris, France; 2020. Provides state-of-the-art results for 6D object pose estimation in highly cluttered scenes.

Lenz I, Lee H, Saxena A. Deep learning for detecting robotic grasps. In: The International Journal of Robotics Research (IJRR); 2015.

Morrison D, Corke P, Leitner J. Learning robust, real-time, reactive robotic grasping. In: The International Journal of Robotics Research (IJRR); 2019.

Pinto L, Gupta A. Supersizing self-supervision: learning to grasp from 50K tries and 700 robot hours. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 16–21, 2016; Stockholm, Sweden; 2016.

Levine S, Pastor P, Krizhevsky A, Quillen D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. In: International Symposium on Experimental Robotics (ISER); 2016.

• Levine S, Pastor P, Krizhevsky A, Ibarz J, Quillen D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. In: The International Journal of Robotics Research (IJRR); 2018. Highly influential work demonstrating the potential of deep learning for robotic grasping.

Mahler J, Pokorny FT, Hou B, Roderick M, Laskey M, Aubry M, et al. Dex-Net 1.0: a cloud-based network of 3D objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 16–21, 2016; Stockholm, Sweden; 2016.

Mahler J, Matl M, Satish V, Danielczuk M, DeRose B, McKinley S, Goldberg K. Learning ambidextrous robot grasping policies. SCIENCE ROBOTICS. 2019.

Satish V, Mahler J, Goldberg K. On-policy dataset synthesis for learning robot grasping policies using fully convolutional deep networks. In: IEEE Robotics and Automation Letters; 2019.

•• Kalashnikov D, Irpan A, Pastor P, Ibarz J, Herzog A, Jang E, et al. QT-Opt: scalable deep reinforcement learning for vision-based robotic manipulation. In: Conference on Robot Learning (CoRL); October 29–31, 2018; Zürich, Switzerland: PMLR; 2018. Setting a milestone in robotic grasping and manipulation.

Zeng A, Song S, Lee J, Rodriguez A, Funkhouser TA. TossingBot: learning to throw arbitrary objects with residual physics. In: Bicchi A, Kress-Gazit H, Hutchinson S, editors. Robotics: Science and Systems (RSS); June 22–26, 2019; Messe Freiburg, Germany; 2019.

Qin Y, Chen R, Zhu H, Song M, Xu J. S4G: amodal single-view single-shot SE(3) grasp detection in cluttered scenes. In: Conference on Robot Learning (CoRL); October 30 – November 1, 2019; Osaka, Japan; 2019.

• Mousavian A, Eppner C, Fox D. 6-DOF GraspNet: variational grasp generation for object manipulation. In: IEEE, editor. IEEE International Conference on Computer Vision (ICCV); October 27 – November 2, 2019; Seoul, Korea; 2019. Addresses model-free grasping in 6D.

•• Song S, Zeng A, Lee J, Funkhouser T. Grasping in the Wild: learning 6DoF closed-loop grasping from low-cost demonstrations. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 31 – June 4, 2020; Palais des Congrès de Paris, France; 2020. Addresses closed-loop model-free grasping in 6D and in cluttered scenes.

ten Pas A, Gualtieri M, Saenko K, Platt R. Grasp pose detection in point clouds. In: The International Journal of Robotics Research (IJRR); 2017.

Spenrath F, Pott A. Gripping point determination for bin picking using heuristic search. In: CIRP Conference on Intelligent Computation in Manufacturing Engineering (CIRP ICME); July 20–22, 2016; Ischia, Italy; 2016.

Brégier R, Devernay F, Leyrit L, Crowley JL. Symmetry aware evaluation of 3D object detection and pose estimation in scenes of many parts in bulk. In: IEEE, editor. IEEE International Conference on Computer Vision (ICCV); October 22–29, 2017; Venice, Italy; 2017.

Brégier R, Devernay F, Leyrit L, Crowley JL. Defining the pose of any 3D rigid object and an associated distance. In: International Journal of Computer Vision (IJCV); 2018.

Hodaň T, Matas J, Obdržálek Š. On evaluation of 6D object pose estimation. In: European Conference on Computer Vision (ECCV); 2016.

Hinterstoisser S, Lepetit V, Ilic S, Holzer S, Bradski G, Konolige K, Navab N. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes. In: Asian Conference on Computer Vision (ACCV); 2012.

Spenrath F, Pott A. Using neural networks for heuristic grasp planning in random bin picking. In: IEEE International Conference on Automation Science and Engineering (CASE); August 20–24, 2018; Munich, Germany; 2018.

Ledermann T. Partikel-Schwarm-Optimierung zur Objektlageerkennung in Tiefendaten [Dissertation]. Stuttgart: University of Stuttgart; 2012.

Palzkill M. Heuristisches Suchverfahren zur Objektlageerkennung aus Punktewolken für industrielle Zuführsysteme [Dissertation]. Stuttgart: University of Stuttgart; 2014.

Kleeberger K, Landgraf C, Huber MF. Large-scale 6D object pose estimation dataset for industrial bin-picking. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); November 4–8, 2019; The Venetian Macao, Macau, China: IEEE; 2019.


Kehl W, Manhardt F, Tombari F, Ilic S, Navab N. SSD-6D: making RGB-based 3D detection and 6D pose estimation great again. In: IEEE, editor. IEEE International Conference on Computer Vision (ICCV); October 22–29, 2017; Venice, Italy; 2017.

Sundermeyer M, Marton Z, Durner M, Triebel R. Implicit 3D orientation learning for 6D object detection from RGB images. In: European Conference on Computer Vision (ECCV); 2018.

Tekin B, Sinha SN, Fua P. Real-time seamless single shot 6D object pose prediction. In: IEEE, editor. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 18–22, 2018; Salt Lake City, Utah; 2018.

Lepetit V, Moreno-Noguer F, Fua P. EPnP: an accurate O(n) solution to the PnP problem. In: International Journal of Computer Vision (IJCV); 2009.

•• Tobin J, Fong R, Ray A, Schneider J, Zaremba W, Abbeel P. Domain randomization for transferring deep neural networks from simulation to the real world. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); September 24–28, 2017; Vancouver, BC, Canada: IEEE; 2017. Highly influential work regarding sim-to-real transfer.

Kleeberger K, Huber MF. Object pose estimation challenge for bin-picking. 2019. https://www.bin-picking.ai/en/competition.html . Accessed 1 June 2020.

Hodaň T, Michel F, Sahin C, Kim T-K, Matas J, Rother C. SIXD Challenge 2017. 2017. http://cmp.felk.cvut.cz/sixd/challenge2017/ . Accessed 1 June 2020.

Hodaň T, Michel F, Brachmann E, Kehl W, Glent Buch A, Kraft D, et al. BOP: benchmark for 6D object pose estimation. In: European Conference on Computer Vision (ECCV); 2018.

Qi CR, Yi L, Su H, Guibas LJ. PointNet++: deep hierarchical feature learning on point sets in a metric space. In: I. Guyon, U.V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett, editors. Advances in Neural Information Processing Systems 30 (NIPS 2017); December 04–09, 2017. Long Beach, California; 2017.

Saxena A, Driemeyer J, Ng AY. Robotic grasping of novel objects using vision. In: The International Journal of Robotics Research (IJRR); 2008.

Rubinstein RY, Kroese DP. The cross-entropy method: a unified approach to combinatorial optimization, Monte-Carlo Simulation and Machine Learning. Berlin: Springer-Verlag; 2004.

Jiang Y, Moseson S, Saxena A. Efficient grasping from RGBD images: learning using a new rectangle representation. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 9–13, 2011; Shanghai, China. Piscataway, NJ: IEEE; 2011.

Redmon J, Divvala S, Girshick R, Farhadi A. You Only Look Once: unified, real-time object detection. In: IEEE, editor. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 26 – July 1, 2016; Las Vegas, Nevada; 2016.

Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: IEEE, editor. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 21–26, 2017; Honolulu, Hawaii; 2017. 7263–7271.

Szegedy C, Toshev A, Erhan D. Deep neural networks for object detection. In: C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, K. Q. Weinberger, editors. Advances in Neural Information Processing Systems 26 (NIPS 2013): Curran Associates, Inc; 2013.

Cornell University. Cornell Grasping Dataset. http://pr.cs.cornell.edu/grasping/rectdata/data.php . Accessed 1 June 2020.

Depierre A, Dellandréa E, Chen L. Jacquard: a large scale dataset for robotic grasp detection. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 1–5, 2018; Madrid, Spain: IEEE; 2018.

Zeng A, Song S, Welker S, Lee J, Rodriguez A, Funkhouser TA. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 1–5, 2018; Madrid, Spain: IEEE; 2018.

Berscheid L, Meißner P, Kroeger T. Robot learning of shifting objects for grasping in cluttered environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); November 4–8, 2019; The Venetian Macao, Macau, China: IEEE; 2019.

Quillen D, Jang E, Nachum O, Finn C, Ibarz J, Levine S. Deep reinforcement learning for vision-based robotic grasping: a simulated comparative evaluation of off-policy methods. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

• Bousmalis K, Irpan A, Wohlhart P, Bai Y, Kelcey M, Kalakrishnan M, et al. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018. Highly influential work regarding sim-to-real transfer for robotic grasping.

Rohmer E, Singh SPN, Freese M. V-REP: a versatile and scalable robot simulation framework. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); November 3–7, 2013; Tokyo, Japan: IEEE; 2013.

James S, Freese M, Davison AJ. PyRep: bringing V-REP to deep robot learning; 26.06.2019.

Todorov E, Erez T, Tassa Y. MuJoCo: a physics engine for model-based control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 7–12, 2012; Vilamoura, Algarve, Portugal: IEEE; 2012.

Blender. https://www.blender.org/ . Accessed 1 June 2020.

Koenig N, Howard A. Design and use paradigms for gazebo, an open-source multi-robot simulator. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 28 September – 2 October, 2004; Sendai, Japan: IEEE; 2004. p. 2149–2154. https://doi.org/10.1109/IROS.2004.1389727 . Accessed 1 June 2020.

Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In: IEEE, editor. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 21–26, 2017; Honolulu, Hawaii; 2017.

Peng XB, Andrychowicz M, Zaremba W, Abbeel P. Sim-to-real transfer of robotic control with dynamics randomization. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Shrivastava A, Pfister T, Tuzel O, Susskind J, Wang W, Webb R. Learning from simulated and unsupervised images through adversarial training. In: IEEE, editor. IEEE International Conference on Computer Vision (ICCV); October 22–29, 2017; Venice, Italy; 2017.

Fang K, Bai Y, Hinterstoisser S, Savarese S, Kalakrishnan M. Multi-task domain adaptation for deep learning of instance grasping from simulation. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Danielczuk M, Matl M, Gupta S, Li A, Lee A, Mahler J, Goldberg K. Segmenting unknown 3D objects from real depth images using Mask R-CNN trained on synthetic data. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 20–24, 2019; Montreal, Canada; 2019.

James S, Davison AJ, Johns E. Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. In: Conference on Robot Learning (CoRL); November 13–15, 2017; Mountain View, California: PMLR; 2017.

Chebotar Y, Handa A, Makoviychuk V, Macklin M, Issac J, Ratliff N, Fox D. Closing the sim-to-real loop: adapting simulation randomization with real world experience. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 20–24, 2019; Montreal, Canada; 2019.

OpenAI, Andrychowicz M, Baker B, Chociej M, Jozefowicz R, Mc Grew B, et al. Learning dexterous in-hand manipulation; 01.08.2018.

OpenAI, Akkaya I, Andrychowicz M, Chociej M, Litwin M, McGrew B, et al. Solving Rubik’s Cube with a robot hand; 16.10.2019.

Tobin J, Biewald L, Duan R, Andrychowicz M, Handa A, Kumar V, et al. Domain randomization and generative models for robotic grasping. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 1–5, 2018; Madrid, Spain: IEEE; 2018.

Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. In: Journal of Machine Learning Research 17; 2016.

Bousmalis K, Trigeorgis G, Silberman N, Krishnan D, Erhan D. Domain separation networks. In: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett, editors. Advances in Neural Information Processing Systems 29 (NIPS 2016): Curran Associates, Inc; 2016.

Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger, editors. Advances in Neural Information Processing Systems 27 (NIPS 2014); December 08–13, 2014. Palais des Congrès de Montréal, Montréal Canada: Curran Associates, Inc; 2014.

Visual Learning Lab Heidelberg. Occluded Object Challenge. 2015. https://hci.iwr.uni-heidelberg.de/vislearn/iccv2015-occlusion-challenge/ . Accessed 1 June 2020.

Sun Y, Falco J, editors. Robotic grasping and manipulation: first robotic grasping and manipulation challenge, RGMC 2016, Held in Conjunction with IROS 2016, Daejeon, South Korea, October 10–12, 2016, Revised Papers. Cham: Springer; 2018.

Sun Y, Calli B, Falco J, Leitner J, Roa M, Xiong R, Yokokohji Y. Robotic grasping and manipulation competition. 2019. https://rpal.cse.usf.edu/competitioniros2019/ . Accessed 1 June 2020.

Eppner C, Höfer S, Jonschkowski R, Martín-Martín R, Sieverling A, Wall V, Brock O. Lessons from the Amazon Picking Challenge: four aspects of building robotic systems. In: Hsu D, Amato N, Berman S, Jacobs S, editors. Robotics: Science and Systems (RSS); June 18–22, 2016; Ann Arbor, Michigan, USA; 2016.

Zeng A, Yu K-T, Song S, Suo D, Walker E Jr, Rodriguez A, Xiao J. Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 29 – June 3, 2017; Singapore: IEEE; 2017.

Morrison D, Tow AW, McTaggart M, Smith R, Kelly-Boxall N, Wade-McCue S, et al. Cartman: the low-cost cartesian manipulator that won the Amazon Robotics Challenge. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Hernandez C, Bharatheesha M, Ko W, Gaiser H, Tan J, van Deurzen K, et al. Team Delft’s robot winner of the Amazon Picking Challenge 2016; 18.10.2016.

Jonschkowski R, Eppner C, Hofer S, Martin-Martin R, Brock O. Probabilistic multi-class segmentation for the Amazon Picking Challenge. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); October 9–14, 2016; Daejeon, South Korea: IEEE; 2016.

Correll N, Bekris KE, Berenson D, Brock O, Causo A, Hauser K, et al. Analysis and observations from the first Amazon Picking Challenge. In: IEEE Transactions on Automation Science and Engineering. p. 172–188.

Leitner J, Tow AW, Dean JE, Suenderhauf N, Durham JW, Cooper M, et al. The ACRV Picking Benchmark (APB): a robotic shelf picking benchmark to foster reproducible research. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 29 – June 3, 2017; Singapore, Singapore: IEEE; 2017.

Ulbrich S, Kappler D, Asfour T, Vahrenkamp N, Bierbaum A, Przybylski M, Dillmann R. The OpenGRASP benchmarking suite: an environment for the comparative analysis of grasping and dexterous manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); September 25–30, 2011; San Francisco, CA, USA: IEEE; 2011.

Yan X, Hsu J, Khansari M, Bai Y, Pathak A, Gupta A, et al. Learning 6-DOF grasping interaction via deep geometry-aware 3D representations. In: IEEE, editor. IEEE International Conference on Robotics and Automation (ICRA); May 21–25, 2018; Brisbane, QLD, Australia. Piscataway, NJ: IEEE; 2018.

Zhou Y, Hauser K. 6DOF grasp planning by optimizing a deep learning scoring function. In: Amato N, Srinivasa S, Ayanian N, Kuindersma S, editors. Robotics: Science and Systems (RSS); July 12–16, 2017. Cambridge: Robotics Science and Systems Foundation; 2017.

Riedlinger MA, Völk M, Kleeberger K, Khalid MU, Bormann R. Model-free grasp learning framework based on physical simulation. In: International Symposium on Robotics (ISR). Munich, Germany; 2020.

Gualtieri M, Platt R. Learning 6-DoF grasping and pick-place using attention focus. In: Conference on Robot Learning (CoRL); October 29–31, 2018; Zürich, Switzerland: PMLR; 2018.

Jang E, Vijayanarasimhan S, Pastor P, Ibarz J, Levine S. End-to-end learning of semantic grasping. In: Conference on Robot Learning (CoRL); November 13–15, 2017; Mountain View, California: PMLR; 2017.

Matsumura R, Domae Y, Wan W, Harada K. Learning based robotic bin-picking for potentially tangled objects. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); November 4–8, 2019; The Venetian Macao, Macau, China: IEEE; 2019.

Moosmann M, Spenrath F, Kleeberger K, Khalid MU, Mönnig M, Rosport J, Bormann R. Increasing the robustness of random bin picking by avoiding grasps of entangled workpieces. In: CIRP Conference on Manufacturing Systems (CIRP CMS); July 1–3, 2020; Chicago, IL, US; 2020.


Open Access funding enabled and organized by Projekt DEAL. This work was partially supported by the Ministry of Economic Affairs of the state Baden-Württemberg (Zentrum für Kognitive Robotik Grant No. 017-180004 and Zentrum für Cyber Cognitive Intelligence (CCI) Grant No. 017-192996).

Author information

Authors and Affiliations

Fraunhofer IPA, Stuttgart, Germany

Kilian Kleeberger, Richard Bormann, Werner Kraus & Marco F. Huber

IFF, University of Stuttgart, Stuttgart, Germany

Marco F. Huber


Corresponding author

Correspondence to Kilian Kleeberger .

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Robotics in Manufacturing

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Kleeberger, K., Bormann, R., Kraus, W. et al. A Survey on Learning-Based Robotic Grasping. Curr Robot Rep 1 , 239–249 (2020). https://doi.org/10.1007/s43154-020-00021-6


Published : 20 September 2020

Issue Date : December 2020

DOI : https://doi.org/10.1007/s43154-020-00021-6


Keywords

  • Robotic grasping and manipulation
  • Artificial intelligence
  • Deep learning
  • Sim-to-real transfer


Neural Response During a Mechanically Assisted Spinal Manipulation in an Animal Model: A Pilot Study

William r. reed.

1 Palmer Center for Chiropractic Research, Davenport, IA, USA

Michael A.K. Liebschner

2 Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA

3 Research Service Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA

4 Exponent Failure Analysis, Houston, TX, USA

Randall S. Sozio

Joel G. Pickar

Maruti R. Gudavalli

Introduction

Mechanoreceptor stimulation is theorized to contribute to the therapeutic efficacy of spinal manipulation. Use of mechanically-assisted spinal manipulation (MA-SM) devices is increasing among manual therapy clinicians worldwide. The purpose of this pilot study is to determine the feasibility of recording in vivo muscle spindle responses during a MA-SM in an animal model with intervertebral fixation.

Intervertebral fixation was created by inserting facet screws through the left L 5-6 and L 6-7 facet joints of a cat spine. Three L 6 muscle spindle afferents with receptive fields in back muscles were isolated. Recordings were made during MA-SM thrusts delivered to the L 7 spinous process using an instrumented Activator IV clinical device.

Nine MA-SM thrusts were delivered with peak forces ranging from 68 to 122N and thrust durations of less than 5ms. High frequency muscle spindle discharge occurred during MA-SM. Following the MA-SM, muscle spindle responses included returning to pre-manipulation levels, slightly decreasing for a short window of time, and greatly decreasing for more than 40s.

This study demonstrates that recording in vivo muscle spindle responses using clinical MA-SM devices in an animal model is feasible. Extremely short duration MA-SM thrusts (<5ms) can have an immediate and/or a prolonged (>40s) effect on muscle spindle discharge. Greater peak forces during MA-SM thrusts may not necessarily yield greater muscle spindle responses. Determining the peripheral response during and following spinal manipulation may be an important step in optimizing its clinical efficacy. Future studies may investigate the effect of thrust dosage and magnitude.

Spinal manipulation is a form of manual therapy commonly used by clinicians and therapists for conservative treatment of musculoskeletal complaints. Spinal manipulation is typically distinguished from spinal mobilization by the presence of a short duration mechanical thrust applied to the spinal column using either direct hand contact (≤150ms) or one of several commercially available mechanical devices (≤10ms) [ 1 - 4 ]. Among chiropractic clinicians, use of mechanically-assisted spinal manipulation (MA-SM) is growing rapidly with reports that 40-60% of practitioners in the United States, Britain, Belgium, Canada, Australia, and New Zealand use MA-SM in some capacity of patient care [ 5 - 10 ].

Spinal manipulation has been shown to be effective in the treatment of neck and low back pain and is recommended by clinical guidelines and evidence reports [ 11 - 16 ]. Several reviews regarding the clinical efficacy, safety, usage, and mechanical effects of MA-SM have recently been published [ 17 - 20 ]. A majority of the MA-SM reviews have noted study weaknesses such as small sample size, non-randomization, and/or lack of a placebo or control group. Despite these limitations, great strides have recently been made in determining the mechanical characteristics and/or biological effects of MA-SM [ 1 - 4 , 21 - 31 ]. These studies may provide a foundation for larger randomized controlled trials of MA-SM therapy. One distinct advantage MA-SM offers over manually delivered manipulative thrusts in a research setting is that the thrust velocity and thrust magnitude can be standardized. This feature is of particular importance in efficacy and mechanistic studies investigating the biomechanical and/or neurophysiological effects of spinal manipulation. In addition, MA-SM devices can be mechanically altered to provide an adequate sham spinal manipulation (no force delivered), which is more difficult to accomplish with manually delivered manipulative thrusts.

Spinal manipulation by its very nature is a mechanical stimulus typically applied at clinically identified sites of intervertebral joint fixation or joint hypomobility. Theorized mechanisms for its therapeutic effects include breaking of joint adhesions and/or alteration of sensory input from primary afferents of paraspinal tissues, which subsequently acts to influence spinal cord reflexes and/or other central neural mechanisms [ 32 , 33 ]. MA-SM has been shown to result in oscillatory intervertebral movements [ 4 , 24 , 29 , 34 , 35 ] and neurophysiological responses in the form of bilateral compound action potentials in both in vivo animal [ 24 , 36 ] and human [ 21 , 23 , 29 ] studies. The compound action potentials from spinal nerve roots have been attributed to the simultaneous activation of mechano-sensitive afferents innervating viscoelastic spinal tissues such as muscles, ligaments, facet joints, and discs, but the exact sources of neural activity were not identified [ 23 , 29 , 37 ]. Muscle spindles are likely among the mechanoreceptors stimulated by MA-SM. They provide the central nervous system with sensory information regarding both changes in muscle length and the velocity at which those length changes occur. Using a feedback motor control system, we have previously shown that manipulative thrust durations between 25 and 150ms elicit high frequency discharge from paraspinal muscle spindles [ 38 - 40 ]. However, to our knowledge, muscle spindle responses to the short manipulative thrust durations (≤10ms) generated by clinical MA-SM devices have never been recorded. It is unclear whether the noise artifact or high frequency mechanical perturbation associated with use of short thrust duration MA-SM devices would prohibit, obscure, or otherwise interfere with dorsal root recordings in a cat preparation. Therefore, the primary goal of this pilot study was to determine the feasibility of recording primary afferent muscle spindle responses in dorsal rootlets using a commercially available MA-SM device in an in vivo feline model of intervertebral joint fixation.

Materials and Methods

The experimental preparation and procedures used in this study have been described in greater detail elsewhere [ 39 - 42 ] and are therefore presented here only briefly. Electrophysiological recordings were made from 3 back muscle spindle afferents traveling in the dorsal roots of a single Nembutal-anesthetized (35 mg/kg, iv; Oak Pharmaceuticals, Lake Forest, IL) adult male cat (4.5 kg). All experimental procedures were approved by the Institutional Animal Care and Use Committee (#20120601). This pilot data using a MA-SM device was collected from an experimental preparation associated with a separate study investigating the relationship between intervertebral fixation and L 6 spinal manipulation delivered by a computer controlled feedback motor.

Catheters were placed in the common carotid artery and external jugular vein to monitor blood pressure, introduce fluids and/or supplemental anesthesia if the arterial pressure rose above 120mm Hg or if a withdrawal reflex became present. The trachea was intubated and the cat was artificially ventilated. Since our focus was on back afferents, the right sciatic nerve was cut to reduce afferent input from the hindlimb. An L 5 laminectomy was performed exposing the right L 6 dorsal rootlets which were cut close to their entrance to the spinal cord and placed on a platform. Thin filaments were teased with fine forceps until action potentials from a single neuron were identified that responded to both mechanical pressure applied directly to the paraspinal back muscles (multifidus or longissimus) and a fast vibratory stimulus (~70 Hz; mini-therapeutic massage vibrator; North Coast Medical, Morgan Hill CA, USA). Afferent fibers remained positioned on the recording electrode while facet screws (10mm titanium endosteally anchored miniscrews; Dentaurum, Ispringen, Germany) were inserted through the left articular pillars of L 5-6 and L 6-7 vertebra in similar fashion to that previously described [ 40 ]. An x-ray of the L 5-6 and L 6-7 facet fixation is shown in Figure 1 . Neural activity was passed through a high-impedance probe (HIP511, Grass, West Warwick, RI), amplified (P511 K, Grass) and recorded using a CED 1401 interface and Spike 2 data acquisition software (Cambridge Electronic Design, Cambridge, England).

Figure 1: An x-ray of the unilateral L5-6 and L6-7 facet joint fixation and a photograph depicting the modified Activator IV device with attached dynamic load cell and tri-axial accelerometer.

MA-SM Device

The Activator IV (Activator Methods Int. Ltd., Phoenix, AZ) is a hand-held clinical device comprised of a rubber-tipped, spring-loaded hammer with 4 device settings that produce relative increases in thrust magnitude. Its thrust duration is <10ms, and it can deliver a maximum force of 212N when tested directly on a load cell [ 1 ]. For the current study, the device was modified by attaching an impedance head under the rubber tip ( Figure 1 ). The impedance head included a dynamic load cell (Model 208C04; PCB, NY) and a tri-axial accelerometer (Model 356A01, PCB, NY).

Once a single back afferent had been isolated, the Activator IV device was placed by hand directly onto the exposed fascia overlying the cat’s L 7 spinous process (one segment caudal to the level of afferent recording) and a small preload was applied. The L 7 vertebra was chosen to receive the MA-SM thrust due to the increased risk of tearing the L 6 afferent fiber off the recording electrode during an L 6 manipulation. The Activator IV device requires that a preload force be applied in order to completely retract the instrument tip prior to triggering the manipulative thrust. We used the two lowest device settings (1 and 2) which can deliver a force of 123N when tested directly on a load cell [ 1 ] but substantially less force (79N) when tested on polymer spinal tissue analog blocks [ 3 ]. MA-SM thrusts were applied in a dorsal-ventral direction and separated by a minimum period of 5 minutes. Electronic signals obtained from the force transducer and accelerometer were each sampled at 12,800 Hz and recorded in a binary file format on a computer using Lab View (National Instruments, Austin, TX).

Results

Three muscle spindle afferents with receptive fields located in the longissimus back muscle were recorded during 9 L 7 MA-SMs in a single cat preparation with intervertebral joint fixation. All afferents responded to mechanical movement of the spine and had sustained responses to fast vibratory stimuli (~70 Hz). All 3 afferents received MA-SM thrusts at a device setting of 1, whereas 2 afferents also received MA-SM thrusts at a device setting of 2. Individual MA-SM thrust profiles are reported in Table 1 . All thrusts were <5ms in duration, and applied MA-SM peak forces ranged from 78.2 to 121.8N.

Table 1: Thrust profiles of the mechanically-assisted spinal manipulations delivered with the instrumented Activator IV device for the 3 muscle spindle afferents in this study. Total peak force includes preload, which can be influenced by the device operator.

Examples of spindle responses to MA-SM thrusts from Afferents 1 and 2 are shown in Figure 2 . For Afferent 1 at a device setting of 1, the combined preload and MA-SM peak thrust force was 116.5N and the thrust duration was 2.0ms. The MA-SM thrust resulted in a high frequency spindle discharge during preload and thrust. Immediately following the thrust there was a 2.89s cessation of spindle discharge, followed by the resumption of resting discharge at a mean frequency slightly less than that prior to the thrust, which lasted for the remaining 20s of recording ( Figure 2A ). For Afferent 2 at a device setting of 1, the combined preload and peak MA-SM thrust force was 121.8N and the thrust duration was 2.0ms ( Figure 2B , Table 1 ). Unlike Afferent 1, Afferent 2 exhibited no cessation of discharge following the MA-SM thrust and rapidly resumed resting discharge.

Figure 2: Recordings from 2 muscle spindle afferents in response to mechanically-assisted spinal manipulation (setting 1) with applied peak forces of 116.6N (A) and 121.8N (B). In Afferent 1, there was a 2.89s cessation of spindle discharge immediately following the manipulative thrust and slightly reduced resting discharge for at least 20s after the thrust. In Afferent 2, there was no cessation of discharge following the thrust and a near-immediate return of resting spindle discharge frequency despite similar peak thrust forces being delivered to the two afferents.

The four MA-SM thrusts using device setting 1 delivered to Afferent 3 had a mean peak force of 109N and a mean thrust duration of 3.0ms ( Table 1 ). Similar to Afferents 1 and 2, there was an increase of spindle discharge as a result of preload and MA-SM thrust at the L 7 spinous process ( Figure 3A ). Following the thrust there was a decrease (but not a cessation) in spindle discharge lasting approximately 2.47s before the resumption of pre-thrust resting discharge frequency. For Afferent 3, the mean peak force for the two MA-SM thrusts at device setting 2 was 81N and the mean thrust duration was 3.0ms. Afferent 3’s response to one of the thrusts at device setting 2 is shown in Figure 3B . There was an increase in spindle discharge with preload and thrust similar to that when the device was set at 1. However, unlike with setting 1, post-thrust activity was further reduced and more prolonged (~4.13s) at device setting 2. Despite the lower peak force (78.2N) delivered on device setting 2 compared to device setting 1 (107.9N), there was a prolonged period (>40s) during which resting discharge did not return to pre-thrust levels ( Figure 3A, 3B ). It should be noted that the mean Afferent 3 resting discharge frequency prior to the MA-SM thrust was similar whether the thrust was delivered at device setting 1 or 2 ( Figure 3A, 3B ). Although the precise time is not known, Afferent 3 returned to its resting discharge frequency at some point within 5 min after the setting 2 thrust delivery depicted in Figure 3B . Afferent 3 also exhibited increased afferent discharge to a fast vibratory stimulus (70 Hz) after the thrust, suggesting that no fiber damage had occurred as a result of this MA-SM.

Figure 3: Recordings from a third muscle spindle afferent during mechanically-assisted spinal manipulations at device settings of 1 (A) and 2 (B). Greater peak forces were applied with setting 1 (107.9N) than with setting 2 (78.2N); however, the lower total peak force produced an immediate and prolonged decrease in muscle spindle response following the manipulative thrust.

Discussion

To our knowledge, this study is the first to record muscle spindle response evoked by a mechanically-assisted spinal manipulation device that is used in clinical practice. Because spinal manipulation is typically delivered at sites of clinically determined biomechanical joint dysfunction and/or pain provocation, the relationship between intervertebral joint fixation/hypomobility and sensory signaling elicited from paraspinal mechanoreceptors during spinal manipulation is of particular interest to manual therapy researchers and clinicians alike. The purpose of the facet fixation model was to produce a moderate degree of segmental dysfunction that might be similar to that encountered by manual therapy clinicians in practice. It will likely be through a combination of both basic and clinical research that the underlying physiological mechanisms of manual therapy will be elucidated and its clinical efficacy optimized.

Although this pilot study contained a limited number of afferents, it demonstrated some important findings and will help to inform future studies. First, with regard to the preparation, we demonstrated the feasibility of recording muscle spindle responses in an in vivo animal model using a clinical MA-SM device. The afferent fiber was wrapped around the recording electrode and withstood the perturbation associated with the mechanical delivery of 78 to 122N forces over extremely short durations (<5ms). Evidence for a lack of damage to the afferent fiber is provided in part by the return of pre-thrust resting spindle discharge following MA-SM. The risk of afferent fiber damage during MA-SM delivery in this preparation is real, but it can be minimized by using longer dorsal rootlets. Although noise artifacts were encountered during the experiments, these appeared to be due largely to movement of the device as the operator delivered the thrust. This issue can be remedied by triggering the MA-SM device non-manually while it is attached to a rigid frame, or perhaps by using newer electrically powered (non-spring-loaded) MA-SM devices [ 3 ].

We found that the extremely short MA-SM thrust durations elicited high frequency discharge from paraspinal muscle spindle afferents. This response appears similar to that which occurs during 25-150ms thrust durations delivered by a computer-controlled feedback motor [ 38 - 40 ] ( Figure 4 ), but direct comparisons are difficult due to the presence of preload forces and the lack of controlled preload durations in the current study. This pilot study clearly demonstrated that muscle spindle afferents can respond differently to similar MA-SM thrust forces ( Figures 1 - 3 ). Afferents 1-3 exhibited post-thrust responses ranging from little diminution of discharge (Afferent 2), to a mild decrease (Afferent 3), to a complete cessation of discharge for nearly 3s (Afferent 1). It is not known whether these differences in post-MA-SM thrust response are due to inherent differences related to muscle spindle intrafusal fiber types (e.g. bag vs chain fibers; for greater discussion in this regard see [ 43 , 44 ]), the anatomical proximity of the afferent's receptive field to the L 7 spinous process thrust site, and/or other biological factors. In a previous study investigating the effects of L 6 and L 7 anatomical thrust delivery sites on L 6 muscle spindle discharge, we found that segmental contact sites distant from the muscle spindle's receptive field were just as effective at increasing spindle discharge as contact sites close to the receptive field [ 45 ].

Figure 4.

Recordings from two muscle spindle afferents in separate but similar cat experiments in which a mechanical feedback motor was used to deliver L 6 manipulative thrusts of 25ms (A) and 50ms (B) duration without a tissue preload. In (A) there was a cessation of discharge (0.3s) following a 24.5N thrust, while in (B) there was a decrease in discharge (3.47s) following a 19.6N thrust. Cat body weight in (A) was 5.1kg and in (B) 3.2kg. The similarity in muscle spindle response characteristics between the less forceful thrusts delivered by a feedback motor and the greater forces delivered by the Activator IV device suggests a possible plateau effect of thrust magnitude on muscle spindle response.

We found it interesting that the lower force delivered at setting 2 (78.2N), versus the higher peak force delivered at setting 1 (107.9N), had a greater impact on Afferent 3's post-thrust discharge ( Figure 3 ). It is reasonable to think that greater forces delivered into the spine over the same duration would create greater vertebral displacement and thereby evoke a greater response from paraspinal muscle spindle afferents. However, several variables and conditions in the current experiment may affect this rationale, including the use of extremely short thrust durations (<5ms), a thrust site one segment caudal to the afferent recording level, the presence of intervertebral fixation, and/or the greater inherent flexibility of the cat spine. Colloca and colleagues, working in a sheep model, found that as the applied force increased, vertebral displacements also increased [ 24 , 31 ]. However, they also found that a constant thrust force of 80N at L 3 produced larger adjacent vertebral motions at shorter thrust durations (10ms) compared to longer thrust durations (100 and 200ms) [ 24 , 31 ]. It is thought that the mechanical principles of resonant frequency may apply to the human spine. If so, lower manipulative forces applied at resonant frequencies of the spine may accomplish vertebral motions similar to those of greater forces applied at nonresonant frequencies [ 17 ]. However, since the setting 1 and setting 2 thrust durations were nearly equivalent, this particular explanation for the differences in muscle spindle response is unlikely.

Limitations

Preload forces and preload durations were not standardized in the current study because the Activator IV device was operated by hand, as it is clinically. Applied preload forces are required to retract the tip of the Activator IV device, but we consciously attempted to limit the magnitude of the applied preload forces since the preload duration was not standardized. We used thrust force magnitudes in our animal model that were the same as, or similar to, those used in studies of the human cervical spine. In humans, mean peak forces during manually applied cervical manipulation have been reported to be 118N [ 46 ]. Although direct circumference measurements were not performed in this study, the trunk size of adult male cats appears to be similar to the anatomical size of the human neck. While we acknowledge that the thrust forces used in the current study were up to 2.7× the cat's body weight, we must also be mindful that whole lumbar spine stiffness in the cat has been shown to be 2-7× less than that of the human spine. Species differences in spinal stiffness have been clearly demonstrated: unlike human cadaveric specimens, cadaveric cat spines did not undergo structural failure during flexion/extension biomechanical testing [ 47 ]. Figure 4 demonstrates that much smaller forces (24.5N and 19.6N) have similar effects on paraspinal muscle spindle response, suggesting a plateau effect of thrust magnitude. In addition, previous studies have indicated that Activator devices produce a maximum of 0.3 J of kinetic energy, which is far below the energies required to produce tissue injury [ 36 , 48 ]. Clinically, the Activator IV device is also commonly used on human body parts much smaller than the neck, such as the wrists, elbows, or ankles [ 49 ].

This pilot study demonstrates the feasibility of recording in vivo muscle spindle responses during spinal manipulation using a clinical mechanically-assisted spinal manipulation device. It also demonstrates that extremely short duration manipulative thrusts (<5ms), with forces equivalent to those delivered to the human cervical spine, can have an immediate and/or perhaps a prolonged effect (>40s) on paraspinal muscle spindle discharge. While the clinical relevance of how mechanoreceptor stimulation or inhibition related to spinal manipulation modulates central nervous system activity remains to be clarified, determining how various mechanoreceptors respond during and following spinal manipulative thrusts in a clinically relevant fashion is an important step toward achieving this goal.

Acknowledgments

This work was supported by the NIH National Center for Complementary and Alternative Medicine (K01AT005935) to WRR and was conducted in a facility with support from the NIH National Center for Research Resources under Research Facilities Improvement Grant Number C06RR15433. The authors would like to thank Activator Methods International for providing the Activator IV instrumented device, Dr. Robert Vining for radiology assistance, and Dr. Robert Cooperstein for his helpful editorial comments.

Open Access

Published: 14 March 2018

Automatic detection of image manipulations in the biomedical literature

Enrico M. Bucci (ORCID: orcid.org/0000-0002-3317-8003)

Cell Death & Disease, volume 9, Article number: 400 (2018)


Images in scientific papers have been used to support the experimental description and the discussion of the findings for several centuries. In the field of biomedical sciences, in particular, the use of images to depict laboratory results is so widespread that one would not err in saying that barely any experimental paper is devoid of images documenting the attained results. With the advent of software for digital image manipulation, however, even photographic reproductions of experimental results may be easily altered by researchers, leading to an increasingly high rate of scientific papers containing unreliable images. In this paper I introduce a software pipeline to detect some of the most widespread misbehaviours, running two independent tests: one on a random set of papers and one on the full publishing record of a single journal. The results obtained by these two tests support the feasibility of the software approach and imply an alarming level of image manipulation in the published record.


Introduction

In a set of drawings dated 13 March 1610 and published in the "Sidereus Nuncius", Galileo represented the uneven curve of the sun's light across the lunar disc, as first seen in January of the same year through his telescope 1 . The intent was to prove that the moon's surface was rough, with several differences in elevation, in contrast to the then-prevalent idea of a smooth, perfect sphere. This is a good example of the conscious use of a series of images to document a scientific observation and to prove a scientific hypothesis, a common practice in several domains of science. Given the complexity of the subjects to be represented, however, in the field of life sciences only a few scientists with excellent drawing skills (or with access to gifted artists) could successfully and widely propagate their findings using images; think, for example, of Haeckel's embryos or of Darwin's orchids. It was only after "objective" photographic reproduction of experimental outcomes became routinely available that using images to represent the outcome of a biological experiment became a method accessible to anyone; a method perceived to be as objective as any other experimental set-up, so that in many cases images produced by dedicated apparatuses became the results to be analyzed, qualitatively and quantitatively, to prove a given hypothesis. This led to a proliferation of images published in the biomedical literature, where photographs are used to document experimental results, as opposed to abstract graphs and graphical art used mostly to summarize mathematical quantities or to represent an experimental set-up or a theoretical model. The status of relative "objectivity" attributed to photographic documents was, however, severely challenged in the transition from classical photography to digital imaging, because the same software used for producing and analyzing digital images was soon also used to retouch the images to be published. While this can be acceptable in principle (for example, intensity calibration of a digital image can be required for a quantitative analysis), it is also true that image manipulation aiming to deceive the readers of a scientific paper became extremely easy. The once difficult photographic retouching is today technically available to anyone; thus, an easy prediction would be that illicit manipulation of scientific images should be highly prevalent. In particular, once the original obstacle (i.e., technical feasibility) has been lifted, there are certain conditions that would lead to a higher number of misconduct cases connected to image manipulation, namely:

the manipulation confers a strong advantage on the person committing it;

the probability of being discovered is low;

even after an actual fraud is discovered, the consequences for the offender are mild, if any.

Indirect evidence for the hypothesis that fraudulent image manipulations are indeed increasingly common comes from the US Office of Research Integrity (ORI) database. In fact, since the introduction of Photoshop in 1988, the number of ORI cases with questioned images has been growing exponentially 2 . However, the image manipulations that surfaced in ORI cases by definition originate from a tiny selection of research groups (only cases involving US Federal funding are reported to and considered by ORI) and, even for the population considered, ORI cases are suspected to be only the tip of the iceberg 3 . In recognition of this problem, we therefore decided to measure the actual extent of suspect image manipulation in the biomedical literature by performing an unbiased, automated analysis of a large image sample obtained from recent scientific publications, supplemented by expert analysis for verification of the findings. To this aim, we combined some home-made software with available open-source and commercial tools to obtain an efficient pipeline for extracting and processing images from the scientific literature on a bulk scale.

Types of image manipulations considered and instruments selected for the analysis

One of the most debated questions in the field of scientific misconduct involving images is the necessarily arbitrary definition of what is acceptable and what is not. Besides the general idea that manipulations aiming to deceive the reader, conceal data features, or fabricate an image in whole or in part are all examples of misconduct, there are few, if any, clear-cut guidelines. We started from the ORI guidelines as reported on the ORI website at the time of writing this manuscript 4 .

In particular, we considered the following evidence of potential misconduct:

Cloning objects into an image, to add features that were not present in the first place, taking the cloned object from the same or a different image;

Reusing an image or a "slightly modified" version of the same image in the same paper without an explicit mention of it. Two or more images are considered "slightly modified" versions of a single image if their difference is restricted to a small, discrete region (not larger than 5% of the total area expressed in pixels), or if they differ only in scale, rotation, linear stretching, cropping, contrast, or brightness (in any combination); a minimal sketch of this area criterion appears after this list;

Reusing an image or part of it from a previous paper, including reusing a “slightly modified” version of a previously published image (in the same sense as for point 2).
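
As a concrete illustration of the 5% area criterion in point 2, the following minimal sketch (not the author's code; the function name and the pixel tolerance are our own illustrative assumptions) flags two aligned, same-size grayscale panels as "slightly modified" copies when the differing pixels cover at most 5% of the image area:

```python
# Minimal sketch of criterion 2 (not the author's code): two aligned,
# same-size grayscale panels count as "slightly modified" copies if the
# pixels that differ are confined to at most 5% of the total area.
# The 10-intensity-level pixel tolerance is our own illustrative choice.
import numpy as np

def is_slightly_modified(img_a: np.ndarray, img_b: np.ndarray,
                         pixel_tol: int = 10, area_frac: float = 0.05) -> bool:
    """Flag img_b as a slightly modified copy of img_a."""
    if img_a.shape != img_b.shape:
        return False  # rescaled/cropped variants would need alignment first
    diff = np.abs(img_a.astype(int) - img_b.astype(int)) > pixel_tol
    return 0.0 < diff.mean() <= area_frac  # differing area as a fraction of pixels
```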

This list is intentionally restricted to a fraction of what is technically possible to detect, because proving any of the above-mentioned image manipulations strongly implies a scientific misconduct case.

Specifically, point 1 corresponds to data fabrication: no original experiment exists for the published image.

Points 2 and 3 include cases ranging from image plagiarism, if the involved images are presented in the same way (e.g., they are labelled in the same way, refer to the same experiments, and are discussed in the same way), to falsification, if they are presented as referring to completely different experiments (e.g., they are labelled differently and refer to physical objects which are not the same).

Investigation of a random set of Open Access papers

In a first experiment, we considered the open access papers released by PubMed Central 5 (PMC) in January 2014. Assuming a global population of 30,000,000 papers, to ensure that the results were representative (margin of error ±5% at 99% confidence), we included in this sample a number of papers equal to more than twice the minimum required sample size (which would be 664). In this way, we could balance for the presence of up to 50% irrelevant papers (reviews, letters, image-free papers, etc.). The sample thus included 1364 papers randomly selected from PMC, from 451 journals. After automated extraction and filtering, this set gave 4778 images annotated by the software pipeline. The processing time on a Xeon E5 hexa-core machine equipped with 30 GB of RAM was about 30 min.
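
For readers who want to check the quoted minimum, a quick back-of-the-envelope verification is possible using the standard sample-size formula for estimating a proportion; the paper does not state which formula was used, so this is our assumption:

```python
# Back-of-the-envelope check of the stated minimum sample size (assumption:
# the standard formula for estimating a proportion with a +/-5% margin of
# error at 99% confidence, with p = 0.5 as the worst case).
from math import ceil

z = 2.576   # z-score for 99% confidence
p = 0.5     # worst-case proportion (maximizes the required sample)
e = 0.05    # +/-5% margin of error

n = z**2 * p * (1 - p) / e**2
print(ceil(n))  # -> 664, matching the minimum quoted in the text
```

With a population of 30,000,000 papers, a finite-population correction changes this number only negligibly.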

Out of the 1364 examined papers, we discovered 78 papers (5.7% of the total) from 46 different journals (10.2% of the total, average IF = 4.00, ranging from 0.11 to 9.13) containing at least one instance of suspected image manipulation. To see whether any of the retrieved papers was already known to be problematic, we checked the anonymous post-publication peer review site PubPeer ( www.pubpeer.com ) twice: once at the time of the first analysis (March 2014) and once at the time of preparation of this manuscript. None of the identified papers was found among those discussed on PubPeer. It remains to be seen whether the site's community will detect problems in the identified papers in the near future.

As for the type of manipulated images, the vast majority of the identified papers contain manipulations of gel electrophoresis images ( n = 65, i.e., 83% of flagged papers contain at least one manipulated gel image). Given that part of our pipeline was specifically designed to identify cloning of bands and lanes in gel images, this result is hardly surprising. However, if we refer this number to the papers containing at least one image of a gel electrophoresis experiment ( n = 299), we find that 21.7% of this subset contain a potential ORI-policy-violating manipulation involving gel images, which appears to be a high incidence per se. This particular finding provides an experimental verification of suspicions about the extensive manipulation of gel-electrophoresis images raised by Marcus and Oransky 6 , among others, and it is consistent with the ease with which such manipulations can be produced and can escape human visual inspection.

The affected journals that yielded more than one paper for the analysis are reported in Table 1 , sorted by the number of papers included in the sample (examined papers). The absolute number of potentially manipulated papers and the corresponding ratio over the total are reported for each journal.

Of note, we checked whether there is any correlation between the ratio of manipulated papers and the IF of the affected journal (2012 values), but we could find no evidence for it. In this respect, at least in the examined sample, we could neither find that a higher IF guarantees more stringent checking procedures, nor that journals with higher IFs are the target of more manipulations.

We then tested whether the amount of image manipulation found in each journal correlates with the number of retractions already published by that journal. This possibility follows from assuming that image manipulation is highly prevalent among scientific misconduct cases (which is indeed true for claims examined by ORI 2 ) and that, when discovered, it results in a retraction, so that journals where image manipulations are highly prevalent should also retract more papers than others. To test whether this correlation exists, at least in the limited sample examined in this paper, we isolated from our set those journals which:

were represented by at least 10 papers included in our initial set;

were found to have at least 1 manipulated paper included in our set;

had at least 1 paper marked as retracted in PubMed.

Seven journals satisfied all the above-mentioned conditions. We thus compared the ratio of manipulated images found in the sample to the ratio of retracted papers (number of retracted over total published papers). The result is exemplified in Fig.  1 .

Figure 1. Linear correlation between the retraction rate and the rate of manipulated images found in published manuscripts, as obtained in the random sample of journals examined in this paper.
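
The comparison behind Fig. 1 can be sketched as a simple journal-level correlation. The snippet below is illustrative only: the ratios are placeholders rather than the paper's data, and in the real analysis the outlier journal discussed next would be excluded:

```python
# Illustrative only: correlate the per-journal image-manipulation ratio with
# the retraction ratio. The numbers below are placeholders, not the paper's
# data; the real comparison covers the seven journals isolated above.
from scipy.stats import pearsonr

ratios = {  # journal: (manipulation_ratio, retraction_ratio) -- hypothetical
    "Journal A": (0.02, 0.001),
    "Journal B": (0.05, 0.002),
    "Journal C": (0.07, 0.003),
    "Journal D": (0.10, 0.005),
    "Journal E": (0.12, 0.006),
    "Journal F": (0.15, 0.008),
}

manipulation, retraction = zip(*ratios.values())
r, p_value = pearsonr(manipulation, retraction)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```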

A strong linear correlation is observed between the image manipulation ratio and the retraction ratio for all journals except the Yonsei Medical Journal, which appears to have far more retractions than expected. An examination of the 6 retractions retrieved for this journal (at the time of preparation of this paper) may clarify why: 4 retractions were due to text plagiarism, 1 to intellectual property issues, and 1 to undisclosed reasons. It appears that, for this journal, retractions do not correlate with image manipulations; whether this is due to a lack of detection or to the dismissal of the corresponding manipulation claims by the editorial board remains to be ascertained. However, it holds true that, for 6 out of the 7 journals examined, the image manipulation rate appears to correlate with the retraction rate. As shown in the previous graph, for the journals considered, retractions totaled a mere 0.38% of papers containing potentially manipulated images. However, for the same journals, an examination of retractions or corrections reveals that, for those cases where enough information is disclosed, a substantial share of retractions is indeed due to image manipulations of the kind discussed in this paper. Table 2 shows the number of retractions which were at least in part caused by image manipulations (as of May 2015).

On average, image problems are reported in about 40% of the retraction notes detailing the reasons for paper withdrawal. Therefore, it appears that the discrepancy between retraction rates and manipulation rates is mainly due to a detection problem, not to the dismissal of claims by the editorial boards.

We next examined the distribution of manipulated papers by country. We assigned each paper to a country according to the location of the corresponding author’s institutions. The original sample contained papers from 69 countries, with an average of 21 papers per country (standard deviation = 49, range = 1–256). Figure  2 reports the number of problematic papers as a function of the total number of examined papers for each country.

Figure 2. A paper is attributed to a given nation according to the affiliation of the corresponding author's institution.

Considering the first three countries sorted by number of examined papers, groups from China (and, to a lesser extent, the USA) produced more manipulated papers than expected, while UK groups produced fewer.
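
One way to make "more than expected" precise, sketched here with made-up counts rather than the paper's per-country data, is an exact binomial test against the overall 5.7% flag rate:

```python
# Hypothetical illustration (the counts are made up, not the paper's
# per-country data): test whether a country yields significantly more
# flagged papers than the overall flag rate would predict.
from scipy.stats import binomtest

overall_rate = 78 / 1364  # share of flagged papers in the whole sample (~5.7%)

def over_represented(flagged: int, examined: int, alpha: float = 0.05) -> bool:
    """True if a country's flagged-paper count is significantly above expectation."""
    result = binomtest(flagged, examined, overall_rate, alternative="greater")
    return result.pvalue < alpha

print(over_represented(20, 200))  # e.g. 20 flagged out of 200 examined (made up)
```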

Finally, in an attempt to evaluate the potential economic impact of the 78 papers containing problematic images, we retrieved the funding information provided by PubMed for each of the included papers. While this information is only partial (obligations to disclose funding sources vary with national legislation), we could nonetheless assess the minimal economic impact by examining the disclosed information. The number of problematic papers per funding source according to PubMed is reported in the upper pie graph in Fig. 3 . The bottom pie graph represents the distribution of funding sources for the overall test sample (information available for 926 papers out of 1364).

Figure 3. Disclosed sources of funding for the papers containing manipulated images (upper pie) and for the overall examined random sample (bottom pie).

First, by comparing the funding source distribution in the two pie charts, we may notice that there is no specific enrichment in the set of manipulated papers. This means that there is not a “preferred” funding source for manipulated papers.

As for the 78 papers containing manipulated images, 53 disclosed some funding information, with some of the papers having more than one funding source (69 grants reported in total). While the identified manipulations are not necessarily connected to misconduct or fraud in a scientific project, the corresponding grant values allow an estimate of a lower bound on the money potentially lost to bad science. For example, if we consider that the average value of extramural NIH research projects is above $400,000 7 , then the overall value of the 12 NIH projects which produced 12 problematic papers (red portion of the pie in the preceding graph) is greater than $4,800,000, in agreement with an independent estimate that was recently published 8 . It should be noted that, while detecting manipulations in papers cannot prevent the loss of money invested in the corresponding projects (since it has already happened), it can prevent these papers from being used in further grant requests and, if used to screen at that level, can help assess the quality of all data, including unpublished data.

Uncovering time trends: investigation of the journal Cell Death and Disease

To further test our pipeline, we next approached the analysis of a single journal, examining all of its published papers instead of a sample drawn from many different journals, as in the previous example.

This allows us to follow the temporal spread of the manipulations, to see whether there is a growing trend or a nearly constant rate of manipulation. Moreover, if the yearly acceptance rate for a journal is known, it is possible to see whether a direct correlation exists between published manipulated data and the ease with which a paper gets published. Finally, by looking at the entire set of images published by a journal, image reuse may be easily spotted, adding a further layer of detectable misconduct.

We selected as a representative target the journal Cell Death and Disease (CDDIS), published by the Nature Publishing Group (NPG), and focused on the 1546 papers published in the period 2010–2014. Overall, we found 8.6% of papers to contain manipulated images, which is well within the range found for the PubMed Central sample.

However, if one looks at the temporal evolution of the yearly percentage of manipulated papers, a growing trend is immediately evident, with the percentage of manipulations exceeding the PMC range in the last 2 years examined (Fig. 4 ).

Figure 4. Yearly percentage of manipulated papers published by CDDIS; the yearly number of retrieved manipulated images and the number of papers containing them are also reported (blue and red bars, respectively).

While for CDDIS we observed a growing trend in image manipulation, from published data it appears that during the same period the acceptance rate slightly decreased (from above 50% to about 40%). 9

However, the overall number of papers submitted increased substantially, from about 100 papers in 2010 to more than 1000 in 2014. The IF of the journal during the same period remained stable or slightly increased; however, the submission quality was apparently affected by an increase in potentially manipulated images, which again confirms that these two parameters are not correlated.

While, from a publisher's perspective, IF and submission growth are the hallmarks of a successful journal, our data point to the fact that, without some extra checking of manuscript quality, it is not possible to ensure that a highly successful journal (in terms of its reception by the public) is free of a substantial number of manipulated images.

Another interesting result, which is not captured by the preceding figures and tables, is that of image reuse across different papers published by the same journal. Looking at papers published by CDDIS in 2014, we discovered that 20 contained images previously published by the same group of authors. While such self-plagiarism could easily have been spotted by an automated procedure relying on a database of all images published by the journal, referees of a paper submitted in 2014 might never have seen the reused images before, or might have dedicated less time to the review process, owing to the aforementioned increase in the number of submitted manuscripts. To draw firm conclusions, one should compare this analysis with that of a journal of similar standing that did not increase its number of articles over the same period.

Conclusions

We ran a scientific literature analysis to check for image manipulations. While our approach suffers from a few limitations in scope, being restricted to the detection of only a small fraction of possible data manipulations, we discovered that about 6% of published papers contain manipulated images, and that about 22% of papers reporting gel electrophoresis experiments are published with unacceptable images. This last figure can be compared with the result of a recently published independent study 10 , which examined a set of randomly selected papers in basic oncology and found that 25% of them contained manipulated gel images.

On a larger dataset, using flagging criteria that appear to be quite similar to those assumed in this paper, Bik and coworkers found 3.8% of manually examined papers to include at least one figure containing an inappropriate image manipulation. Looking at single journals in the same sample, this percentage ranged from 0.3% in the Journal of Cell Biology to 12.4% in the International Journal of Oncology , although the examined samples for different journals differed greatly in size 11 . Again, the manipulation rate found independently on a different sample of papers appears compatible with what was automatically detected using our procedure.

Moreover, our study allowed us for the first time, albeit in a limited sample, to establish a correlation between the number of manipulated images in a journal and the number of retractions issued by that journal: with some notable exceptions, the measured manipulation ratio is a proxy for the retraction rate. This is also supported by the fact that retraction notes refer, in a substantial number of cases, to image manipulations; however, the number of published papers containing manipulated images exceeds the number of retracted papers by orders of magnitude, pointing to a detection problem on the journal side.

As for the details of our analysis, in contrast with previously proposed hypotheses and some published results 11 , we could not find any correlation between a journal's impact factor and the ratio of manipulations detected in our sample, but we did identify China as a country producing more problematic papers than average (in agreement with statistics on other kinds of misconduct, such as plagiarism) and the UK as a country producing fewer. This last result might be related to the limited period considered by our analysis (a single month in 2014) and needs to be confirmed before being trusted; both the higher prevalence measured for China and the lower prevalence measured for the UK do, however, agree with recently published data 11 .

Finally, by focusing on a single journal from the Nature Publishing Group, we could unearth a growing temporal trend of potential misconduct connected to image manipulation, which should sound an alarm bell for any journal.

In conclusion, we want to stress that, image manipulation being such a prevalent form of misconduct, it can and should be confronted by journals; no further delay can be allowed, nor can delay be justified by pretending that the analysis is too complex or too long to run.

At the same time, academic and scientific institutions should implement procedures to properly handle allegations of image manipulation, using software tools as a source of unbiased flagging and screening before human assessment leads to conclusions about any potential misconduct; this process has indeed been initiated in some large-scale research institutions 2 .

Materials and methods

We designed an automated pipeline able to extract all images from a set of papers and perform several tests on them, aiming to detect any evidence of the types described in the previous section. This pipeline includes the following open-source, commercial, and in-house pieces of software:

A pdf converter, to extract single pages from papers and save them as jpg files (we used home-made software, but there are several open-source tools that can be used for the same purpose).

An in-house developed software, dubbed ImageCutter, to perform the extraction of image panels from each page.

A specific gel-checking routine, named ImageCheck, to uncover cloning of image portions elsewhere in the same or in different image panels.

An image duplication tracking software, to check for image panel duplication in the same or in different papers.

The first step in the procedure starts from a set of pdf files corresponding to a collection of papers to be analyzed (pdf conversion step). From each pdf file, an ordered set of jpg files is generated, each jpg file corresponding to an entire page of the original pdf file.
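
As one example of the open-source route mentioned above (this is not the author's in-house converter), the pdf2image package can render each page of a PDF as a JPEG:

```python
# One open-source way to do the pdf-conversion step (not the author's
# in-house tool): render each page as a JPEG with pdf2image (needs poppler).
from pathlib import Path
from pdf2image import convert_from_path

def pdf_to_page_jpgs(pdf_path: str, out_dir: str, dpi: int = 300) -> list:
    """Save one JPEG per page and return the ordered list of file paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, page in enumerate(convert_from_path(pdf_path, dpi=dpi), start=1):
        jpg = out / f"{Path(pdf_path).stem}_page{i:03d}.jpg"
        page.save(jpg, "JPEG")
        paths.append(str(jpg))
    return paths
```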

The second step consists in the extraction of image panels from each page of every paper included in the original collection (panel segmentation step). Starting from the jpg files representing all the pages of the target papers, image panels are automatically identified and cropped using our in-house software ImageCutter. Ideally, an image panel is any portion of a page that corresponds to a single graphical element, e.g., the photo of a single Petri dish or a single western blotting membrane. Small graphical elements (e.g., mathematical formulas, logos, or other small graphical objects) are filtered out based on the size of the generated image (images not larger than 10 kB are eliminated). The automated workflow corresponding to this step is schematically represented in Fig. 5 .

Figure 5. JPEG versions of each page in the pdf file are converted to 8-bit, gray-level images. Assuming a white background, page images are then inverted, smoothed, and used for a segmentation step adopting the rolling ball procedure. To avoid over-segmenting, a region-growing step enlarging each segmented region is performed; if two segmented regions are joined after moderate area growing, they are considered a single object to segment. After that, the borders of the segmented object are refined by identifying abrupt changes in gray-level intensity, assuming all objects to segment are rectangular. After segmentation, those objects with predefined properties (e.g., uniform color and small size) are discarded, while all the others are passed as image panels to the following routines.
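
A rough, simplified approximation of this workflow, written with scikit-image rather than the authors' ImageCutter and omitting the region-growing and border-refinement steps, might look as follows:

```python
# Sketch of the panel-segmentation step (scikit-image, not ImageCutter):
# invert the white-background page, smooth it, subtract a rolling-ball
# background estimate, threshold, and keep sufficiently large regions.
from skimage import filters, io, measure, restoration, util

def segment_panels(page_jpg: str, min_area: int = 5000):
    """Return bounding boxes of candidate image panels on one page."""
    gray = util.img_as_ubyte(io.imread(page_jpg, as_gray=True))
    inverted = 255 - gray                          # white page background becomes dark
    smoothed = filters.gaussian(inverted, sigma=2)
    background = restoration.rolling_ball(smoothed, radius=50)
    foreground = smoothed - background
    binary = foreground > filters.threshold_otsu(foreground)
    labels = measure.label(binary)
    # keep sufficiently large connected regions as panel candidates
    return [r.bbox for r in measure.regionprops(labels) if r.area >= min_area]
```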

At this point, we have two sets of images:

The page set (jpg files, each corresponding to a single page of a paper).

The image panel set (jpg files corresponding to single graphical elements incorporated into the figures of a paper).

The first set is used to check for the cloning of specific image portions from their original location to elsewhere in the image, as previously described, i.e., to detect a specific type of data fabrication. The second set is used to detect image panel reuse, i.e., to find a type of plagiarism and a specific type of falsification.

The third step consists in looking for cloned image portions. While changes in background, intensity discontinuities, or other subtle evidence can reveal the splicing of objects into images, to reach absolute certainty about a cloning event one must find the source of the cloned object. This is an easy task when the object is present two or more times in the same or in different image panels, i.e., when one or more copies of the same graphical feature are detected in a figure that is not expected to contain self-similar regions.

What kinds of images are most often manipulated in this way? To address this question, we performed an overview of the biomedical publications retracted for image manipulation by looking at all open access retracted publications in the PMC collection. In agreement with previous findings 6 , we found that illicit cloning of image portions very often happens in figures depicting fictitious western blotting experiments (or other sorts of gel-electrophoresis experiments). An image documenting the result of a gel-electrophoresis experiment (including western blots) consists of a rectangular area, which should present a noisy background (either dark or light) and some prominent elliptical or rectangular spots (called bands, dark or light in contrast to the background), arranged in several columns (the gel lanes). The relative dimensions and the intensity of some bands in specific positions represent the expected outcome of an enormous variety of biomedical experiments, which is one of the reasons why the technique is so popular among researchers (the other being its relative inexpensiveness). Fabrication of gel-electrophoresis images, on the other hand, is quite a simple process, and usually consists of adding some bands to a realistic background to simulate the outcome of a given experiment. Given the wide diffusion of the technique, as well as its prevalence in alleged cases of fabrication, we decided to tailor our routine toward the detection of fabricated gel-electrophoresis images. In particular, images corresponding to the jpg versions of each page from the pdf files (the above-mentioned set 1) were fed to a routine aimed at detecting gel features cloned within a single gel or among different gels reported on a paper page. To achieve this, our software ImageCheck was set up to look for typical gel features in image panels, i.e., rectangular images, preferably in grey scale (or with a relatively low number of colored pixels), containing either a dark or a light background and several internal spots. After a segmentation step, the software compared each possible pair of spots whose areas did not differ by more than 10%, aligning them reciprocally (using their respective centers of mass and allowing a minimal shift in every direction). A pair of spots was flagged if, after alignment (also checking for a 180° rotation and/or a mirror transformation), a pixel-by-pixel intensity subtraction resulted in a difference lower than 2% of the normalized average area of the two spots (i.e., the sum of their areas divided by 2). The 2% threshold was selected according to the ROC analysis described in the supplementary section.
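
A much-simplified sketch of this pairwise spot comparison (not ImageCheck itself; it omits the center-of-mass alignment and small-shift search, and the intensity normalization is our own illustrative choice) could look like this:

```python
# Simplified sketch of the band-cloning check (not ImageCheck): compare two
# segmented spots of similar area directly, also trying a 180-degree rotation
# and a mirror flip, and flag near-identical pixel content.
import numpy as np

def _common_crop(a: np.ndarray, b: np.ndarray):
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    return a[:h, :w].astype(float), b[:h, :w].astype(float)

def spots_match(spot_a: np.ndarray, spot_b: np.ndarray,
                area_tol: float = 0.10, diff_tol: float = 0.02) -> bool:
    """Flag two grayscale spot crops that look like clones of each other."""
    if abs(spot_a.size - spot_b.size) > area_tol * max(spot_a.size, spot_b.size):
        return False                                 # areas differ by more than 10%
    # try the second spot as-is, rotated by 180 degrees, and mirrored
    for candidate in (spot_b, np.rot90(spot_b, 2), np.fliplr(spot_b)):
        a, b = _common_crop(spot_a, candidate)
        if np.abs(a - b).mean() / 255.0 < diff_tol:  # normalized mean difference
            return True
    return False
```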

Once all the gel images contained in the papers under investigation had been checked for cloned features, all image panels contained in set 2 (representing a gel or any other type of scientific image) were used to investigate image reuse. To this aim, we originally used the commercial software "Visual Similarity Duplicate Image Finder Pro" (MindGems Inc), setting a threshold of 95% for similarity (the software producer's default value) and allowing for intensity differences, vertical and horizontal stretching, mirroring, and 180° rotation. Of note, during the preparation of this paper, several free alternatives to the commercial solution we used have emerged; however, we have not (yet) found anything as quick or effective as the originally selected tool, which was apparently developed to help professional photographers find duplicates in their large image collections.
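
As an illustration of what such a free alternative might look like (this is not the tool used in the paper), perceptual hashing with the open-source imagehash library can flag near-duplicate panels:

```python
# Illustrative free alternative (not the commercial tool used in the paper):
# perceptual hashing with imagehash; panel pairs whose hashes lie within a
# small Hamming distance of each other are reported as near-duplicates.
from itertools import combinations

import imagehash
from PIL import Image

def find_near_duplicates(panel_paths, max_distance: int = 4):
    """Return pairs of panel files whose perceptual hashes nearly coincide."""
    hashes = {p: imagehash.phash(Image.open(p)) for p in panel_paths}
    return [(p1, p2) for p1, p2 in combinations(panel_paths, 2)
            if hashes[p1] - hashes[p2] <= max_distance]
```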

Box 1: ORI and its investigations

The Office of Research Integrity (ORI) oversees and directs Public Health Service (PHS) research integrity activities in the US on behalf of the Secretary of Health and Human Services, except for the regulatory research integrity activities of the Food and Drug Administration.

It handles about 30 cases per year, based on allegations that (a) involve US-based laboratories and (b) involve work funded with public US money. Allegations lead to a finding of misconduct in about one-third of cases (on average, 10 reports of misconduct findings per year) and may result in various administrative or even penal sanctions. (Source: Zoe Hammatt, presentation at the 4th World Conference on Research Integrity, Rio de Janeiro, 1 June 2015.)

Around 70% of ORI cases involve image manipulations. 1

References

1. Galilei, G. & Van Helden, A. Sidereus Nuncius, or, The Sidereal Messenger (University of Chicago Press, Chicago & London, 1989).

2. Bucci, E. M., Adamo, G., Frandi, A. & Caporale, C. Introducing an unbiased software procedure for image checking in a large research institution. In Extended Abstracts from the 5th World Conference on Research Integrity (Elsevier, Amsterdam, 2017).

3. Titus, S. L., Wells, J. A. & Rhoades, L. J. Repairing research integrity. Nature 453, 980–982 (2008).

4. Questionable Practices. Available at http://ori.hhs.gov/education/products/RIandImages/practices/default.html (Accessed 18 May 2014).

5. Home - PMC - NCBI. Available at http://www.ncbi.nlm.nih.gov/pmc/ (Accessed 18 May 2014).

6. Marcus, A. & Oransky, I. Can we trust western blots? Lab Times 2, 41 (2012).

7. FY2013 By The Numbers: Research Applications, Funding, and Awards. NIH Extramural Nexus. Available at http://nexus.od.nih.gov/all/2014/01/10/fy2013-by-the-numbers/ (Accessed 13 July 2014).

8. Stern, A. M., Casadevall, A., Steen, R. G. & Fang, F. C. Financial costs and personal consequences of research misconduct resulting in retracted publications. eLife 3, e02956 (2014).

9. Melino, G. 1000 successes as CDDIS reaches 1000 published papers! Cell Death Dis. 5, e1041 (2014).

10. Oksvold, M. P. Incidence of data duplications in a randomly selected pool of life science publications. Sci. Eng. Ethics 22, 487–496 (2015).

11. Bik, E. M., Casadevall, A. & Fang, F. C. The prevalence of inappropriate image duplication in biomedical research publications. mBio 7, e00809–e00816 (2016).


Acknowledgements

I wish to thank here Professor G. Melino for his critical reading of this manuscript and Professor D. Vaux for the fruitful discussions on the problems of scientific image manipulation.

Author information

Authors and affiliations

Enrico M. Bucci: Temple University, Philadelphia, PA, USA; Sbarro Health Research Organization, Philadelphia, PA, USA

Corresponding author

Correspondence to Enrico M. Bucci.

Ethics declarations

Conflict of interest

E.M.B. is the founder and owner of Resis Srl ( www.resis-srl.com ), a company dedicated to improving scientific publishing, promoting research integrity, and fighting academic misconduct.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Edited by G. Melino.

Electronic supplementary material

Supplementary figure legend (DOCX 11 kB)

Supplementary figure 1 (TIF 324 kB)

Supplementary material (DOCX 19 kB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Bucci, E.M. Automatic detection of image manipulations in the biomedical literature. Cell Death Dis 9 , 400 (2018). https://doi.org/10.1038/s41419-018-0430-3


Received : 11 January 2018

Revised : 17 February 2018

Accepted : 19 February 2018

Published : 14 March 2018

DOI : https://doi.org/10.1038/s41419-018-0430-3


