Springer Nature - PMC COVID-19 Collection
Misinformation, manipulation, and abuse on social media in the era of COVID-19

Emilio Ferrara

1 University of Southern California, Los Angeles, CA 90007 USA

Stefano Cresci

2 Institute of Informatics and Telematics, National Research Council (IIT-CNR), 56124 Pisa, Italy

Luca Luceri

3 University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Manno, Switzerland

The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment at the onset of the coronavirus outbreak. Articles in this collection adopt a diverse range of methods and techniques, and focus on the study of the narratives that fueled conspiracy theories, on the diffusion patterns of COVID-19 misinformation, on global news sentiment, on hate speech and social bot interference, and on multimodal Chinese propaganda. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues at hand. In turn, these crucial endeavors might anticipate a growing trend of studies where diverse theories, models, and techniques will be combined to tackle the different aspects of online misinformation, manipulation, and abuse.

Introduction

Malicious and abusive behaviors on social media have elicited massive concerns for the negative repercussions that online activity can have on personal and collective life. The spread of false information [8, 14, 19] and propaganda [10], the rise of AI-manipulated multimedia [3], the presence of AI-powered automated accounts [9, 12], and the emergence of various forms of harmful content are just a few of the several perils that social media users can—even unconsciously—encounter in the online ecosystem. In times of crisis, these issues only become more pressing, with increased threats for everyday social media users [20]. The ongoing COVID-19 pandemic is no exception and, due to dramatically increased information needs, represents the ideal setting for the emergence of infodemics—situations characterized by the undisciplined spread of information, including a multitude of low-credibility, fake, misleading, and unverified content [24]. In addition, malicious actors thrive in these chaotic situations and aim to take advantage of the resulting confusion. In such high-stakes scenarios, the downstream effects of misinformation exposure or information landscape manipulation can manifest in attitudes and behaviors with potentially dramatic public health consequences [4, 21].

By affecting the very fabric of our socio-technical systems, these problems are intrinsically interdisciplinary and require joint efforts to investigate and address both the technical aspects (e.g., how to thwart automated accounts and the spread of low-quality information, how to develop algorithms for detecting deception, automation, and manipulation) and the socio-cultural ones (e.g., why people believe in and share false news, how interference campaigns evolve over time) [7, 15]. Fortunately, in the case of COVID-19, several open datasets were promptly made available to foster research on these matters [1, 2, 6, 16]. Such assets bootstrapped the first wave of studies on the interplay between a global pandemic and online deception, manipulation, and automation.

Contributions

In light of the previous considerations, the purpose of this special issue was to collect contributions proposing models, methods, empirical findings, and intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and conspiracy theories about the COVID-19 outbreak. In particular, to protect the integrity of online discussions on social media, we aimed to stimulate contributions along two interlaced lines. On the one hand, we solicited contributions that enhance our understanding of how health misinformation spreads, of the social media actors that play a pivotal part in the diffusion of inaccurate information, and of the impact of their interactions with organic users. On the other hand, we sought to stimulate research on the downstream effects of misinformation and manipulation on users' perception of, and reaction to, the wave of questionable information they are exposed to, and on possible strategies to curb the spread of false narratives. From ten submissions, we selected seven high-quality articles that provide important contributions toward curbing the spread of misinformation, manipulation, and abuse on social media. In the following, we briefly summarize each of the accepted articles.

The COVID-19 pandemic has been plagued by the pervasive spread of a large number of rumors and conspiracy theories, which have even led to dramatic real-world consequences. “Conspiracy in the Time of Corona: Automatic Detection of Emerging COVID-19 Conspiracy Theories in Social Media and the News” by Shahsavari, Holur, Wang, Tangherlini, and Roychowdhury uses a machine learning approach to automatically discover and investigate the narrative frameworks supporting such rumors and conspiracy theories [17]. The authors uncover how the various narrative frameworks rely on the alignment of otherwise disparate domains of knowledge, and how they attach to the broader reporting on the pandemic. These alignments and attachments are useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Moreover, identifying the narrative frameworks that provide the generative basis for these stories may also help devise methods for disrupting their spread.

The widespread diffusion of rumors and conspiracy theories during the outbreak is also analyzed in “Partisan Public Health: How Does Political Ideology Influence Support for COVID-19 Related Misinformation?” by Nicholas Havey. The author investigates how political leaning influences participation in the discourse around six COVID-19 misinformation narratives: 5G activating the virus, Bill Gates using the virus to implement a global surveillance project, the “Deep State” causing the virus, bleach and other disinfectants being ingestible protection against the virus, hydroxychloroquine being a valid treatment for the virus, and the Chinese Communist Party intentionally creating the virus [13]. Results show that conservative users dominated most of these discussions and pushed diverse conspiracy theories. The study further highlights how political and informational polarization might affect adherence to health recommendations and can, thus, have dire consequences for public health.

“Understanding High and Low Quality URL Sharing on COVID-19 Twitter Streams” by Singh, Bode, Budak, Kawintiranon, Padden, and Vraga investigates URL sharing patterns during the pandemic for different categories of websites [18]. Specifically, the authors categorize URLs as related to traditional news outlets, authoritative health sources, or low-quality and misinformation news sources. Then, they build networks of shared URLs (see Fig. 1). They find that both authoritative health sources and low-quality/misinformation ones are shared much less than traditional news sources. However, COVID-19 misinformation is shared at a higher rate than news from authoritative health sources. Moreover, the COVID-19 misinformation network appears to be dense (i.e., tightly connected) and disassortative. These results can pave the way for future intervention strategies aimed at fragmenting the networks responsible for the spread of misinformation.


Fig. 1 Network of web-page URLs shared on Twitter from January 16, 2020 to April 15, 2020 [18]. Each node represents a web-page URL, and edges indicate links among web-pages. Purple nodes represent traditional news sources, orange nodes low-quality and misinformation news sources, and green nodes authoritative health sources. Edges take the color of their source node, and node size is proportional to degree.

The relationship between news sentiment and real-world events is a long-studied matter with serious repercussions for agenda setting and (mis-)information spreading. In “Around the world in 60 days: An exploratory study of impact of COVID-19 on online global news sentiment”, Chakraborty and Bose explore this relationship for a large set of worldwide news articles published during the COVID-19 pandemic [5]. They apply unsupervised and transfer learning-based sentiment analysis techniques and explore correlations between news sentiment scores and the global and local numbers of infections and deaths. Specific case studies are also conducted for countries such as China, the US, Italy, and India. The results help identify the key drivers of negative news sentiment during an infodemic, as well as the communication strategies that were used to curb negative sentiment.

Farrell, Gorrell, and Bontcheva investigate one of the most damaging sides of online malicious content: online abuse and hate speech. In “Vindication, Virtue and Vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 Pandemic”, they adopt a mixed-methods approach to analyze citizen engagement with British MPs' online communications during the pandemic [11]. Among their findings is that certain pressing topics, such as financial concerns, attract the highest levels of engagement, though not necessarily negative engagement. Other topics, such as criticism of authorities and subjects like racism and inequality, instead tend to attract higher levels of abuse, depending on factors such as ideology, authority, and affect.

Yet another aspect of online manipulation—automation and social bot interference—is tackled by Uyheng and Carley in their article “Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines” [22]. Using a combination of machine learning and network science, the authors investigate the interplay between the use of social media automation and the spread of hateful messages. They find that social bots are most effective when targeting dense and isolated communities. While the majority of extant literature frames hate speech as a linguistic phenomenon and, similarly, social bots as an algorithmic one, Uyheng and Carley adopt a more holistic approach, proposing a unified framework that accounts for disinformation, automation, and hate speech as interlinked processes and generating insights by examining their interplay. The study also reflects on the value of taking a global approach to computational social science, particularly in the context of a worldwide pandemic and infodemic, with its universal yet distinct and unequal impacts on societies.

It has now become clear that text is not the only way to convey online misinformation and propaganda [10]. Instead, images such as those used for memes are being increasingly weaponized for this purpose. Based on this evidence, Wang, Lee, Wu, and Shen investigate US-targeted Chinese COVID-19 propaganda, which relies heavily on text images [23]. In their article “Influencing Overseas Chinese by Tweets: Text-Images as the Key Tactic of Chinese Propaganda”, they tracked thousands of Twitter accounts involved in the #USAVirus propaganda campaign. A large percentage (≃38%) of those accounts was later suspended by Twitter as part of its efforts to counter information operations.1 The authors studied the behavior and content production of the suspended accounts. They also experimented with different statistical and machine learning models to understand which account characteristics most strongly determined suspension by Twitter, finding that the repeated use of text images played a crucial part.
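One simple statistical check consistent with the kind of finding above is to compare the rate of text-image use between suspended and surviving accounts. The counts below are invented, and this two-proportion z-test is only a sketch of one possible analysis, not the authors' actual models.

```python
from math import sqrt, erf

# Hypothetical counts: (accounts that used text images, total accounts).
suspended = (310, 380)   # ~81.6% of suspended accounts used text images
active    = (190, 620)   # ~30.6% of surviving accounts did

def two_proportion_z(a, b):
    """z statistic and two-sided p-value for a difference in proportions."""
    (xa, na), (xb, nb) = a, b
    p_pool = (xa + xb) / (na + nb)
    se = sqrt(p_pool * (1 - p_pool) * (1 / na + 1 / nb))
    z = (xa / na - xb / nb) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-tail approximation
    return z, p

z, p = two_proportion_z(suspended, active)
print(f"z = {z:.1f}, p = {p:.3g}")
```

With counts this lopsided, the z statistic is far out in the tail, which is the pattern one would expect if image use strongly predicts suspension.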

Overall, the great interest around the COVID-19 infodemic and, more broadly, around research themes such as online manipulation, automation, and abuse, combined with the growing risks of future infodemics, makes this special issue a timely endeavor that will contribute to the future development of this crucial area. Given the recent advances and breadth of the topic, as well as the level of interest in related events that followed this special issue—such as dedicated panels, webinars, conferences, workshops, and other journal special issues—we are confident that the articles selected in this collection will be both highly informative and thought-provoking for readers. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues at hand, which demand renewed and joint efforts from different computer science fields, as well as from related disciplines such as the social, political, and psychological sciences. In this regard, the articles in this collection testify to, and anticipate, a growing trend of interdisciplinary studies where diverse theories, models, and techniques will be combined to tackle the different aspects at the core of online misinformation, manipulation, and abuse.

1 https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020.html

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Emilio Ferrara, Email: emiliofe@usc.edu.

Stefano Cresci, Email: [email protected] .

Luca Luceri, Email: [email protected] .



Research Article

Authorship and citation manipulation in academic research

Contributed equally to this work with: Eric A. Fong, Allen W. Wilhite

Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Writing – original draft, Writing – review & editing

Affiliation Department of Management, University of Alabama in Huntsville, Huntsville, Alabama, United States of America

* E-mail: [email protected]

Affiliation Department of Economics, University of Alabama in Huntsville, Huntsville, Alabama, United States of America


  • Eric A. Fong, 
  • Allen W. Wilhite


  • Published: December 6, 2017
  • https://doi.org/10.1371/journal.pone.0187394


Some scholars add authors to their research papers or grant proposals even when those individuals contribute nothing to the research effort. Some journal editors coerce authors to add citations that are not pertinent to their work and some authors pad their reference lists with superfluous citations. How prevalent are these types of manipulation, why do scholars stoop to such practices, and who among us is most susceptible to such ethical lapses? This study builds a framework around how intense competition for limited journal space and research funding can encourage manipulation and then uses that framework to develop hypotheses about who manipulates and why they do so. We test those hypotheses using data from over 12,000 responses to a series of surveys sent to more than 110,000 scholars from eighteen different disciplines spread across science, engineering, social science, business, and health care. We find widespread misattribution in publications and in research proposals with significant variation by academic rank, discipline, sex, publication history, co-authors, etc. Even though the majority of scholars disapprove of such tactics, many feel pressured to make such additions while others suggest that it is just the way the game is played. The findings suggest that certain changes in the review process might help to stem this ethical decline, but progress could be slow.

Citation: Fong EA, Wilhite AW (2017) Authorship and citation manipulation in academic research. PLoS ONE 12(12): e0187394. https://doi.org/10.1371/journal.pone.0187394

Editor: Lutz Bornmann, Max Planck Society, GERMANY

Received: February 28, 2017; Accepted: September 20, 2017; Published: December 6, 2017

Copyright: © 2017 Fong, Wilhite. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files. The pertinent appendices are: S2 Appendix: Honorary author data; S3 Appendix: Coercive citation data; and S4 Appendix: Journal data. In addition the survey questions and counts of the raw responses to those questions appear in S1 Appendix: Statistical methods, surveys, and additional results.

Funding: This publication was made possible by a grant from the Office of Research Integrity through the Department of Health and Human Services: Grant Number ORIIR130003. Contents are solely the responsibility of the authors and do not necessarily represent the official views of the Department of Health and Human Services or the Office of Research Integrity. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The pressure to publish and to obtain grant funding continues to build [1–3]. In a recent survey of scholars, the number of publications was identified as the single most influential component of their performance review, while the journal impact factor of their publications and order of authorship came in second and third, respectively [3]. Simultaneously, rejection rates are on the rise [4]. This combination, the pressure to increase publications coupled with the increased difficulty of publishing, can motivate academics to violate research norms [5]. Similar struggles have been identified in some disciplines in the competition for research funding [6]. For journals and their editors and publishers, impact factors have become a mark of prestige and are used by academics to determine where to submit their work, who earns tenure, and who may be awarded grants [7]. Thus, the pressure to increase a journal's impact factor score is also increasing. With these incentives, it is not surprising that academia is seeing authors and editors engage in questionable behaviors in an attempt to increase their publication success.

There are many forms of academic misconduct that can increase an author's chance of publication; some of the most severe include falsifying data, falsifying results, opportunistically interpreting statistics, and faking peer review [5, 8–12]. For the most part, these extreme examples seem to be relatively uncommon; for example, only 1.97% of surveyed academics admit to falsifying data, although this probably understates the actual practice, as respondents report higher numbers of their colleagues misbehaving [10].

Misbehavior regarding attribution, on the other hand, seems to be widespread [13–18]. For example, in one academic study, roughly 20% of survey respondents had experienced coercive citation (editors directing authors to add citations to articles from the editors' journals even though there is no indicated lack of attribution and no specific articles or topics are suggested by the editor), and over 50% said they would add superfluous citations to a paper being submitted to a coercive journal in an attempt to increase its chance of publication [18]. Honorary authorship (the addition of individuals to manuscripts as authors, even though those individuals contribute little, if anything, to the actual research) is a common behavior in several disciplines [16, 17]. Some scholars pad their references in an attempt to influence journal referees or grant reviewers by citing prestigious publications or articles from the editor's journal (or the editor's vita) even if those citations are not pertinent to the research. While there is little systematic evidence that such a strategy influences editors, the perception of its effectiveness is enough to persuade some scholars to pad [19, 20]. Overall, it seems that many scholars consider authorship and citation to be fungible attributes: components of a project one can alter to improve one's publication and funding record or to increase journal impact factors (JIFs).

Most studies examining attribution manipulation focus on the existence and extent of misconduct and typically address a narrow section of the academic universe; for example, there are numerous studies measuring the amount of honorary authorship in medicine, but few in engineering, business, or the social sciences [21–25]. And while coercive citation has been exposed in some business fields, less is known about its prevalence in medicine, science, or engineering. In addition, the pressure to acquire research funding is nearly as intense as publication pressure, and in some disciplines funding is a major component of performance reviews. Thus, grant proposals are also viable targets of manipulation, but research into that behavior is sparse [2, 6]. However, if grant distributions are swayed by manipulation, then resources are misdirected and promising areas of research could be neglected.

There is little disagreement with the sentiment that this manipulation is unethical, but there is less agreement about how to slow its use. Ultimately, to reverse this ethical decline we need to better understand the factors that drive attribution manipulation, and that is the focus of this manuscript. Using more than 12,000 responses to surveys sent to more than 110,000 academics from disciplines across the academic universe, this study examines the prevalence and systematic nature of honorary authorship, coercive citation, and padded citations in eighteen different disciplines in science, engineering, medicine, business, and the social sciences. In essence, we do not just want to know how common these behaviors are, but whether certain types of academics add authors or citations, or are coerced, more often than others. Specifically, we ask what the prevailing attributes are of scholars who manipulate, whether willingly (e.g., citation padding) or not (e.g., coercive citation), considering attributes such as academic rank, gender, discipline, and level of co-authorship. We also look into the reasons scholars manipulate and ask their opinions on the ethics of this behavior. In our opinion, a deeper understanding of manipulation can shed light on potential ways to reduce this type of academic misconduct.

As noted above, the primary component of performance reviews, and thus of individual research productivity, is the number of articles published by an academic [3]. This number depends on two things: (i) the number of manuscripts on which a scholar is listed as an author and (ii) the likelihood that each of those manuscripts will be published. The pressure to increase publications puts pressure on both of these components. In a general sense, this can be beneficial for society, as it creates incentives for individuals to work harder (to increase the quantity of research projects) and to work better (to increase the quality of those projects) [6]. There are similar pressures and incentives in the application for, and distribution of, research grants, as many disciplines in science, engineering, and medicine view the acquisition of funding as both a performance measure and a precursor to publication, given the high expense of the equipment and supplies needed to conduct research [2, 6]. But this publication and funding pressure can also create perverse incentives.
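The two components above combine as a simple expected value, which makes the perverse incentive easy to see. The numbers below are purely illustrative:

```python
# E[publications] = (manuscripts bearing your name) x (acceptance probability).
n_manuscripts = 5
p_accept = 0.20
expected_pubs = n_manuscripts * p_accept

# Honorary authorship inflates the first factor without any added research:
# being added to two colleagues' papers lifts the expectation by 40%.
expected_with_honorary = (n_manuscripts + 2) * p_accept

print(f"{expected_pubs:.1f} vs {expected_with_honorary:.1f} expected publications")
```

The same arithmetic applies to grant proposals: each additional proposal bearing a scholar's name raises expected funded grants without any change in actual contribution.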

Honorary authorship

Working harder is not the only means of increasing an academic's number of publications. An alternative approach is known as “honorary authorship”, which refers to the inclusion of individuals as authors on manuscripts, or grant proposals, even though they did not contribute to the research effort. Numerous studies have explored the extent of honorary authorship in a variety of disciplines [17, 20, 21–25]. The motivation to add authors can come from many sources; for instance, an author may be directed to add an individual who is a department chair, lab director, or some other administrator with power, or they might voluntarily add such an individual to curry favor. Additionally, an author might create a reciprocal relationship, adding an honorary author to their own paper with the understanding that the beneficiary will return the favor on another paper in the future, or an author may simply do a friend a favor and include their name on a manuscript [23, 24]. If the added author has a prestigious reputation, this can also increase the chances of the manuscript receiving a favorable review. Through these means, individuals can raise the expected value of their measured research productivity (publications) even though their actual intellectual output is unchanged.

Similar incentives apply to grant funding. Scholars who have a history of repeated funding, especially from the more prestigious funding agencies, are viewed favorably by their institutions [2]. Of course, grants provide resources, which increase an academic's research output, but there are also direct benefits from funded research accruing to the university: overhead charges, equipment purchases that can be used for future projects, graduate student support, etc. Consequently, “rainmakers” (scholars with a record of acquiring significant levels of research funding) are valued for that skill.

As with publications, the amount of research funding received by an individual depends on the number and size of proposals put forth and the probability of each getting funded. This metric creates incentives for individuals to get their names on more and bigger proposals, and to increase the likelihood that those proposals will be successful. That pressure opens the door to the same sorts of misattribution found in manuscripts: honorary authorship can increase the number of grant proposals that include an author's name, and adding a scholar with a prestigious reputation as an author may increase the chances of being funded. As we investigate the use of honorary authorship, we do not focus solely on its prevalence; we also ask whether there is a systematic nature to its use. For example, it makes sense that academics early in their career have less funding, lack the protection of tenure, and thus need publications more than someone with an established reputation. To begin to understand whether systematic differences exist in the use of honorary authorship, the first empirical question investigated here is: who is likely to add honorary authors to manuscripts or grant proposals? Scholars of lower rank and without tenure may be more likely to add authors, whether under pressure from senior colleagues or in their own attempt to sway reviewers. Tenure and promotion depend critically on a young scholar's ability to establish a publication record, secure research funding, and engender support from their senior faculty. Because they lack the protection of rank and tenure, refusing to add someone could be risky. Of course, senior faculty members also have goals and aspirations that can be challenging, but junior faculty have far more on the line in terms of their career.

Second, we expect research faculty to be more likely to add honorary authors, especially to grant proposals, because they often occupy positions that are heavily dependent on a continued stream of research success, particularly research funding. Third, we expect that female researchers may be less able to resist pressure to add honorary authors because women are underrepresented in faculty leadership and administrative positions in academia and lack political power [26, 27]. It is not just their own lack of position that matters; the dearth of other women among senior faculty and in leadership positions leaves women with fewer mentors, senior colleagues, and administrators with similar experiences to help them navigate these political minefields [28, 29]. Fourth, because adding an author waters down the credit received by each existing author, we expect manuscripts that already have several authors to be less resistant to additional “credit sharing.” Simply put, if credit is equally distributed across authors, then adding a second author cuts your perceived contribution in half, but adding a sixth author reduces your contribution by only about 3 percentage points (from 20% to roughly 17%).
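The credit-dilution arithmetic above can be written as a tiny helper, assuming (as the passage does) that credit is split equally among authors:

```python
def credit_drop(current_authors: int) -> float:
    """Percentage-point loss per existing author when one more author is added."""
    return 100 / current_authors - 100 / (current_authors + 1)

print(f"{credit_drop(1):.0f} points")  # sole author: 100% -> 50%
print(f"{credit_drop(5):.1f} points")  # five authors: 20% -> ~16.7%
```

The marginal cost of an honorary author falls roughly with the square of the author count, which is why manuscripts that already have many authors are hypothesized to be less resistant to adding one more.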

Fifth, because academia is so competitive, the decisions of some scholars have an impact on others in the same research population. If your research interests lie in an area in which honorary authorship is common and considered effective, then a promising counter-policy to the manipulation undertaken by others is to practice honorary authorship yourself. This leads us to predict that the obligation to add honorary authors to grant proposals and/or manuscripts is likely to concentrate more heavily in some disciplines. In other words, we do not expect it to be practiced uniformly or randomly across fields; instead, some disciplines will be heavily engaged in adding authors and others much less so. In general, we have no firm predictions as to which disciplines are more likely to practice honorary authorship; we predict only that its practice will be lumpy. However, there may be reasons to suspect some patterns; for example, some disciplines, such as science, engineering, and medicine, are much more heavily dependent on research funding than others, such as the social sciences, mathematics, and business [2]. Over 70% of the NSF budget goes to science and engineering and about 4% to the social sciences. Similarly, most of the NIH budget goes to doctors, with a smaller share to other disciplines [30]. Consequently, we suspect that the disciplines that most prominently add false investigators to grant proposals are more likely to be in science, engineering, and the medical fields. We do not expect that division to be as prominent in the addition of authors to manuscripts submitted for publication.

There are several ways scholars may internalize the pressure to perform, which can lead to different reasons why a scholar might add an honorary author to a paper. A second goal of this paper is to study who might employ these different strategies. Thus, we asked authors for the reasons they added honorary authors to their manuscripts and grants; for example, was the person in a position of authority, or a mentor, or did they have a reputation that increased the chances of publication or funding? Using these responses as a dependent variable, we then examine whether the stated reasons are related to the professional characteristics of the scholars in our study. The hypotheses to be tested mirror the questions posed for honorary authors. We expect junior faculty, research faculty, female faculty, and projects with more co-authors to be more likely to add honorary coauthors to manuscripts and grants than full professors, male faculty, and projects with fewer co-authors. Moreover, we expect the practice to differ across disciplines. Focusing specifically on honorary authorship in grant proposals, we also explore the possibility that the use of honorary authorship differs between funding opportunities and agencies.

Coercive citation

Journal rankings matter to editors, editorial boards, and publishers because rankings affect subscriptions and prestige. In spite of their shortcomings, impact factors have become the dominant measure of journal quality. These measures include self-citation, which creates an incentive for editors to direct authors to add citations even if those citations are irrelevant, a practice called “coercive citation” [ 18 , 27 ]. This behavior has been systematically measured in business and social science disciplines [ 18 ]. Additionally, researchers have found that coercion sometimes involves more than one journal; editors have gone as far as organizing “citation cartels” where a small set of editors recommend that authors cite articles from each other’s journal [ 31 ].

When editors make decisions to coerce, whom might they target? Who is most likely to be coerced? Assuming editors balance the costs and benefits of their decisions, a parallel set of empirical hypotheses emerges. Returning to the various scholar attributes, we expect editors to target lower-ranked faculty members because they have a greater incentive to cooperate: additional publications have a direct effect on their future cases for promotion and, for assistant professors, on their chances of tenure as well. In addition, because they have less political clout and are less likely to openly complain about coercive treatment, lower-ranked faculty members are more likely to acquiesce to the editor’s request. We predict that editors are more likely to target female scholars because female scholars hold fewer positions of authority in academia and may lack the institutional support of their male counterparts. We also expect the number of coauthors to play a role but, contrary to our honorary authorship prediction, we predict editors will target manuscripts with fewer authors rather than more. The rationale is simple: authors do not like to be coerced, and when an editor requires additional citations on a manuscript with many authors, the editor makes a larger number of individuals aware of the coercive behavior, whereas coercing a sole-authored paper upsets a single individual. Notice that we are hypothesizing the opposite sign in this model than in the honorary authorship model; if authors are making the decision to add honorary authors, they prefer to add people to articles that already have many co-authors, but if editors are making the decision, they prefer to target manuscripts with few authors to minimize the potential pushback.

As was true in the model of honorary authorship, we expect the practice of coercion to be more prevalent in some disciplines than others. If one editor decides to coerce authors and if that strategy is effective, or is perceived to be effective, then there is increased pressure for other editors in the same discipline to also coerce just to maintain their ranking—if one journal climbs up in the rankings, others, who do nothing, fall. Consequently, coercion begets additional coercion and the practice can spread. But, a journal climbing up in the rankings in one discipline has little impact on other disciplines and thus we expect to find coercion practiced unevenly; prevalent in some disciplines, less so in others. Finally, as a sub-conjecture to this hypothesis, we expect coercive citation to be more prevalent in disciplines for which journal publication is the dominant measure for promotion and tenure; that is, disciplines that rely less heavily on grant funding. This means we expect the practice to be scattered, and lumpy, but we also expect relatively more coercion in the business and social sciences disciplines.

We are also interested in the types of journals that have been reported to coerce and to explore those issues we gather data using the journal as the unit of observation. As above, we expect differences between disciplines and we expect those discipline differences to mirror the discipline differences found in the author-based data set. We also expect a relationship between journal ranking and coercion because the costs and benefits of coercion differ for more or less prestigious journals. Consider the benefits of coercion. The very highest ranked journals have high impact factors; consequently, to rise another position in the rankings requires a significant increase in citations, which would require a lot of coercion. Lower-ranked journals, however, might move up several positions with relatively few coerced citations. Furthermore, consider the cost of coercion. Elite journals possess valuable reputations and risking them by coercing might be foolhardy; journals deep down in the rankings have less at stake. Given this logic, it seems likely that lower ranked journals are more likely to have practiced coercion.

We also look to see if publishers might influence the coercive decision. Journals are owned and published by many different types of organizations; the most common being commercial publishers, academic associations, and universities. A priori , commercial publishers, being motivated by profits, are expected to be more interested in subscriptions and sales, so the return to coercion might be higher for that group. On the other hand, the integrity of a journal might be of greater concern to non-profit academic associations and university publishers, but we don’t see a compelling reason to suppose that universities or academic associations will behave differently from one another. Finally, we control for some structural difference across journals by including each journal’s average number of cites per document and the total number of documents they publish per year.

Padded citations

The third and final type of attribution manipulation explored here is padded reference lists. Because some editors coerce scholars into adding citations to boost their journals’ impact factor scores, and because this practice is known to many scholars, there is an incentive for scholars to add superfluous citations to their manuscripts prior to submission [ 18 ]. Given that scholars have an incentive to pad reference lists in manuscripts, we wondered whether grant writers would also be willing to pad reference lists in grant proposals in an attempt to influence grant reviewers.

As with honorary authorship, we suspect there may be a systematic element to padding citations. In fact, we expect the practice of padding citations to parallel honorary authorship. Thus we predict that scholars of lower rank (and therefore without tenure) and female scholars will be more likely to pad citations to assuage an editor or sway grant reviewers. Because the practice also encompasses a feedback loop (one way to compete with scholars who pad their citations is to pad your own), we expect the practice to proliferate in some disciplines. The number of coauthors is not expected to play a role, but we do expect knowledge of other types of manipulation to be important. That is, we hypothesize that individuals who are aware of coercion, or who have been coerced, are more likely to pad citations. With grants, we similarly expect individuals who add honorary authors to grant proposals to also be likely to pad citations in grant proposals. Essentially, the willingness to misbehave in one area is likely related to misbehavior in other areas.

The data collection method of choice for this study is a survey, because it would be difficult to determine whether someone added honorary authors or padded citations prior to submission without asking that individual. As explained below, we distributed surveys in four waves over five years. Each survey, its cover email, and its distribution strategy was reviewed and approved by the University of Alabama in Huntsville’s Institutional Review Board. Copies of these approvals are available on request. We purposely did not collect data that would allow us to identify individual respondents. We test our hypotheses using these survey data and journal data. Given the complexity of the data collection, comprising both survey and archival journal data, we begin by discussing our survey data and the variables developed from the survey. We then discuss our journal data and the variables developed there. Over the course of a five-year period and using four waves of survey collection, we sent surveys, via email, to more than 110,000 scholars in total from eighteen different disciplines (medicine, nursing, biology, chemistry, computer science, mathematics, physics, engineering, ecology, accounting, economics, finance, marketing, management, information systems, sociology, psychology, and political science) at universities across the U.S. See Table 1 for details regarding the timing of survey collection. Survey questions and raw counts of the responses to those questions are given in S1 Appendix : Statistical methods, surveys, and additional results. Complete files of all of the data used in our estimates are in the S2 , S3 and S4 Appendices.


https://doi.org/10.1371/journal.pone.0187394.t001

Potential survey recipients and their contact information (email addresses) were identified in three different ways. First, we obtained contact information for management scholars through the Academy of Management, using the annual meeting catalog. Second, for economists and physicians we used the membership services provided by the American Economic Association and the American Medical Association. Third, for the remaining disciplines we identified the top 200 universities in the United States using U.S. News and World Report’s “National University Rankings” and hand-collected email addresses by visiting those university websites and copying contact information for individual faculty members from each of the disciplines. We also augmented the physician contact list by visiting the websites of the medical schools in these top 200 schools. With each wave of surveys, we sent at least one reminder to participate. The approximately 110,000 surveys yielded about 12,000 responses, for an overall response rate of about 10.5%. Response rates by discipline can be found in Table A in S1 Appendix .

Few studies have examined the systematic nature of honorary authorship and padded citation and thus we developed our own survey items to address our hypotheses. Our survey items for coercive citation were taken from prior research on coercion [ 18 ]. All survey items and the response alternatives with raw data counts are given in S1 Appendix . The complete data are made available in S2 – S4 Appendices.

Our first set of tests relates to honorary authorship in manuscripts and grants and is made up of several dependent variables, each related to the research question being addressed. We begin with the existence of honorary authorship in manuscripts. This dependent variable is composed of the answers to the survey question: “Have YOU felt obligated to add the name of another individual as a coauthor to your manuscript even though that individual’s contribution was minimal?” Responses were in the form of yes and no, where “yes” was coded as 1 and “no” as 0. The next dependent variable addresses the frequency of this behavior, asking: “In the last five years HOW MANY TIMES have you added or had coauthors added to your manuscripts even though they contributed little to the study?” The final honorary authorship dependent variables deal with the reason for including an honorary author in manuscripts: “Even though this individual added little to this manuscript he (or she) was included as an author. The main reason for this inclusion was:” and the choices regarding this answer were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or had funding we could apply to the research. Responses were coded as 1 for the main reason given (only one reason could be selected as the “main” reason) and 0 otherwise.

Regarding honorary authorship in grant proposals, our first dependent variable addresses its existence: “Have you ever felt obligated to add a scholar’s name to a grant proposal even though you knew that individual would not make a significant contribution to the research effort?” Again, responses were in the form of yes and no, where “yes” was coded as 1 and “no” as 0. The remaining dependent variables regarding honorary authorship in grant proposals address the reasons for adding honorary authors to proposals: “The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was:” and the provided potential responses were that the honorary author is the director of the lab or facility used in the research, occupies a position of authority and can influence my career, is my mentor, is a colleague I wanted to help out, was included for reciprocity (I was included or expect to be included as a co-author on their work), has data I needed, has a reputation that increases the chances of the work being published, or was a person suggested by the grant reviewers. Responses were coded as 1 for the main reason given (only one reason could be selected as the “main” reason) and 0 otherwise.
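The coding scheme described in the two paragraphs above (one dummy variable per reason, set to 1 for the single "main" reason and 0 otherwise) can be sketched as follows. The shortened reason labels and the function name are our own, used for illustration only.

```python
# One dummy per reason; exactly one is set to 1 (the selected "main" reason).
# Labels are shortened stand-ins for the survey's response options.
REASONS = [
    "director", "authority", "mentor", "colleague",
    "reciprocity", "data", "reputation", "funding",
]

def code_main_reason(selected: str) -> dict:
    """Return a dict of 0/1 dummies with 1 only for the selected main reason."""
    return {reason: int(reason == selected) for reason in REASONS}

coded = code_main_reason("authority")
print(coded["authority"], coded["mentor"], sum(coded.values()))  # 1 0 1
```

Because respondents could pick only one "main" reason, the dummies are mutually exclusive, which is why each reason can later serve as a separate binary dependent variable.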

Our next major set of dependent variables deal with coercive citation. The first coercive citation dependent variable was measured using the survey question: “Have YOU received a request from an editor to add citations from the editor’s journal for reasons that were not based on content?” Responses were in the form of yes (coded as a 1) and no (coded as 0). The next question deals with the frequency: “In the last five years, approximately HOW MANY TIMES have you received a request from the editor to add more citations from the editor’s journal for reasons that were not based on content?”

Our final set of dependent variables from our survey data investigates padding citations in manuscripts and grants. The dependent variable that addresses an author’s willingness to pad citations for manuscripts comes from the following question: “If I were submitting an article to a journal with a reputation of asking for citations to itself even if those citations are not critical to the content of the article, I would probably add such citations BEFORE SUBMISSION.” Answers to this question were in the form of a Likert scale with five potential responses (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree) where Strongly Disagree was coded as a 1 and Strongly Agree coded as a 5. The dependent variable for padding citations in grant proposals uses responses to the statement: “When developing a grant proposal I tend to skew my citations toward high impact factor journals, even if those citations are of marginal import to my proposal.” Answers were in the form of a Likert scale with five potential responses (Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree) where Strongly Disagree was coded as a 1 and Strongly Agree coded as a 5.

To test our research questions, several independent variables were developed. We begin with the independent variables that cut across honorary authorship, coercive citation, and padded citations. The first is academic rank. We asked respondents their current rank: Assistant Professor, Associate Professor, Professor, Research Faculty, Clinical Faculty, and other. Dummy variables were created for each category, with Professor being the omitted category in our tests of the hypotheses. The second general independent variable is discipline: Medicine, Nursing, Accounting, Economics, Finance, Information Systems, Management, Marketing, Political Science, Psychology, Sociology, Biology, Chemistry, Computer Science, Ecology, Engineering, Mathematics, and Physics. Again, dummy variables were created for each discipline, but instead of omitting a reference category we include all disciplines and then constrain the sum of their coefficients to equal zero. With this approach, the estimated coefficients tell us how each discipline differs from the average level of honorary authorship, coercive citation, or padded citation across the academic spectrum [ 32 ]. We can conveniently identify three categories: (i) disciplines that are significantly more likely to engage in honorary authorship, coercive citation, or padded citation than the average across all disciplines, (ii) disciplines that do not differ significantly from the average level across all of these disciplines, and (iii) disciplines that are significantly less likely to engage in these practices than the average. We test for potential gender differences with a dummy variable coded male = 1, female = 0.

Additional independent variables were developed for specific research questions. In our tests of honorary authorship, there is an independent variable addressing the number of co-authors on a respondent’s most recent manuscript. If the respondent stated that they have added an honorary author then they were asked “Please focus on the most recent incidence in which an individual was added as a coauthor to one of your manuscripts even though his or her contribution was minimal. Including yourself, how many authors were on this manuscript?” Respondents who had not added an honorary author were asked to report the number of authors on their most recently accepted manuscript. We also include an independent variable regarding funding agencies: “To which agency, organization, or foundation was this proposal directed?” Again, for those who have added authors, we request they focus on the most recent proposal where they used honorary authorship and for those who responded that they have not practiced honorary authorship, we asked where they sent their most recent proposal. Their responses include NSF, HHS, Corporations, Private nonprofit, State funding, Other Federal grants, and Other grants. Regarding coercive citation, we included an independent variable regarding number of co-authors on their most recent coercive experience and thus if a respondent indicated they’ve been coerced we asked: “Please focus on the most recent incident in which an editor asked you to add citations not based on content. Including yourself, how many authors were on this manuscript?” If a respondent indicated they’ve never been coerced, we asked them to state the number of authors on their most recently accepted manuscript.

Finally, we included control variables. In our tests, we included the respondent’s performance or exposure to these behaviors. For those analyses focusing on manuscripts we used acceptances: “Within the last five years, approximately how many publications, including acceptances, do you have?” The more someone publishes, the more opportunities they have to be coerced, add authors, or add citations; thus, scholars who have published more articles are more likely to have experienced coercion, ceteris paribus. And in our tests of grants we used two performance indicators: 1) “In the last five years approximately how many grant proposals have you submitted for funding?” and 2) “Approximately how much grant money have you received in the last five years? Please write your estimated dollars in box; enter 0 if zero.”

We also investigate coercion using a journal-based dataset, Scopus, which contains information on more than 16,000 journals from these 18 disciplines [ 33 ]. It includes information on the number of articles published each year, the average number of citations per manuscript, the rank of the journal, disciplines that most frequently publish in the journal, the publisher, and so forth. These data were used to help develop our dependent variable as well as our independent and control variables for the journal analysis. Our raw journal data is provided in S4 Appendix : Journal data.

The dependent variables in our journal analysis measure whether a specific journal was identified as a journal in which coercion occurred, or not, and the frequency of that identification. Survey respondents were asked: “To track the possible spread of this practice we need to know specific journals. Would you please provide the names of journals you know engage in this practice?” Respondents were given a blank space to write in journal names. The majority of our respondents declined to identify journals where coercion has occurred; however, more than 1,200 respondents provided journal names, and in some instances respondents provided more than one journal name. Among the population of journals in the Scopus database, 612 were identified by our survey respondents as journals that have coerced, and some of these journals were identified several times. The first dependent variable is binary, coded as 1 if a journal was identified as a journal that has coerced, and 0 otherwise. The frequency estimates use the count, how many times a journal was named, as the dependent variable.
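The construction of these two dependent variables from the write-in responses can be sketched as follows; the journal names here are invented placeholders, not journals named by respondents.

```python
from collections import Counter

# Write-in responses: each respondent may name zero or more journals.
# Journal names are invented placeholders.
responses = [
    ["Journal A", "Journal B"],
    ["Journal A"],
    [],
    ["Journal C", "Journal A"],
]

# Count dependent variable: how many times each journal was named.
mention_counts = Counter(j for names in responses for j in names)
print(mention_counts["Journal A"])  # 3

# Binary dependent variable: 1 if named at least once, 0 otherwise.
coerced = {j: 1 for j in mention_counts}
print(coerced.get("Journal A", 0), coerced.get("Journal Z", 0))  # 1 0
```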

The independent variables measure various journal attributes, the first being discipline. The Scopus database identifies the discipline that most frequently publishes in any given journal, and that information was used to classify journals by discipline. Thus, if physics is the most common discipline to publish in a journal, it was classified as a physics journal. We look to see if there is a publisher effect using the publisher information in Scopus to create four categories: commercial publishers, academic associations, universities, and others (the omitted reference category).

We also control for differing editorial norms across disciplines. First, we include the number of documents published annually by each journal. All else equal, a journal that publishes more articles has more opportunities to engage in coercion, and/or it interacts with more authors and is more likely to be reported in our sample. Second, we control for the average number of citations per article. The average number of citations per document controls for some of the overall differences in citation practices across disciplines.

Given the large number of hypotheses to be tested, we present a compiled list of the dependent variables in Table 2 . This table names the dependent variables, describes how they were constructed, and lists the tables that present the estimated coefficients pertinent to those dependent variables. Table 2 is intended to give readers an outline of the arc of the remainder of the manuscript.


https://doi.org/10.1371/journal.pone.0187394.t002

Honorary authorship in research manuscripts

Looking across all disciplines, 35.5% of our survey respondents report that they have added an author to a manuscript even though that author’s contribution was minimal. Fig 1 displays tallies of some raw responses to show how the use of honorary authorship, for both manuscripts and grants, differs across science, engineering, medicine, business, and the social sciences.


Percentage of respondents who report that honorary authors have been added to their research projects, they have been coerced by editor to add citations, or who have padded their citations, sorted by field of study and type of manipulation.

https://doi.org/10.1371/journal.pone.0187394.g001

To begin the empirical study of the systematic use of honorary authorship, we start with the addition of honorary authors to research manuscripts. This is a logit model in which the dependent variable equals one if the respondent felt obligated to add an author to their manuscript, “even though that individual’s contribution was minimal.” The estimates appear in Table 3 . In brief, all of our conjectures are observed in these data. As we hypothesized above, the pressure on scholars to add authors “who do not add substantially to the research project,” is more likely to be felt by assistant professors and associate professors relative to professors (the reference category). To understand the size of the effect, we calculate odds ratios (e^β) for each variable, also reported in Table 3 . Relative to a full professor, being an assistant professor increases the odds of honorary authorship in manuscripts by 90%, being an associate professor increases those odds by 40%, and research faculty are twice as likely as a professor to add an honorary author.
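The conversion from a logit coefficient β to an odds ratio e^β, and from there to a percentage change in the odds, can be sketched as follows. The coefficient value is illustrative, chosen to reproduce the roughly 90% assistant-professor effect described above; it is not the paper's actual estimate.

```python
import math

# Odds ratio from a logit coefficient: OR = exp(beta).
# beta here is illustrative, chosen so that OR is about 1.90,
# i.e., a ~90% increase in the odds relative to the reference category.
beta_assistant = 0.642

odds_ratio = math.exp(beta_assistant)
pct_change_in_odds = (odds_ratio - 1) * 100

print(round(odds_ratio, 2))          # 1.9
print(round(pct_change_in_odds))     # 90
```

An odds ratio of 2.0 would correspond to "twice as likely" in the odds sense, which is how the research-faculty effect above should be read.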


https://doi.org/10.1371/journal.pone.0187394.t003

Consistent with our hypothesis, we found that females were more likely to add honorary authors, as the estimated coefficient on males was negative and statistically significant. The odds that a male feels obligated to add an author to a manuscript are 38% lower than for females. As hypothesized, authors who already have several co-authors on a manuscript seem more willing to add another, consistent with our hypothesis that the decrement in individual credit diminishes as the number of authors rises. Overall, these results align with our fundamental thesis that authors are purposively deciding to deceive, adding authors when the benefits are higher and the costs lower.

Considering the addition of honorary authors to manuscripts, Table 3 shows that four disciplines are statistically more likely to add honorary authors than the average across all disciplines. Listing those disciplines in order of their odds ratios and starting with the greatest odds, they are: marketing, management, ecology, and medicine (physicians). There are five disciplines in which honorary authorship is statistically below the average and starting with the lowest odds ratio they are: political science, accounting, mathematics, chemistry, and economics. Finally, the remaining disciplines, statistically indistinguishable from the average, are: physics, psychology, sociology, computer science, finance, engineering, biology, information systems, and nursing. At the extremes, scholars in marketing are 75% more likely to feel an obligation to add authors to a manuscript than the average across all disciplines while political scientists are 44% less likely than the average to add an honorary author to a manuscript.

To bolster these results, we also asked individuals to tell us how many times they felt obligated to add honorary authors to manuscripts in the last five years. Using these responses as our dependent variable we estimated a negative binomial regression equation with the same independent variables used in Table 3 . The estimated coefficients and their transformation into incidence rate ratios are given in Table 4 . Most of the estimated coefficients in Tables 3 and 4 have the same sign and, with minor differences, similar significance levels, which suggests the attributes associated with a higher likelihood of adding authors are also related to the frequency of that activity. Looking at the incidence rate ratios in Table 4 , scholars occupying the lower academic ranks, research professors, females, and manuscripts that already have many authors more frequently add authors. Table 4 also suggests that three additional disciplines, Nursing, Biology, and Engineering, have more incidents of adding honorary authors to manuscripts than the average of all disciplines and, consequently, the disciplines that most frequently engage in honorary authorship are, by effect size, management, marketing, ecology, engineering, nursing, biology, and medicine.
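Incidence rate ratios from the negative binomial model are obtained the same way as odds ratios, by exponentiating the coefficient; here e^β is the multiplicative change in the expected number of honorary-authorship incidents per unit change in the covariate. The values below are invented for illustration and are not the paper's estimates.

```python
import math

# For a negative binomial (count) model, exp(beta) is the incidence rate
# ratio (IRR): the multiplicative change in the expected count per one-unit
# increase in the covariate. beta is hypothetical, for illustration only.
beta = 0.30
irr = math.exp(beta)

# Effect on an (invented) baseline rate of incidents over five years.
baseline_rate = 2.0
print(round(irr, 3))                  # 1.35
print(round(baseline_rate * irr, 2))  # 2.7
```

This is why the same sign pattern across Tables 3 and 4 is informative: the attributes that raise the odds of ever adding an honorary author also multiply the expected number of such incidents.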


https://doi.org/10.1371/journal.pone.0187394.t004

Another way to measure effect sizes is to standardize the variables so that the changes in the odds ratios or incidence rate ratios measure the impact of a one standard deviation change of the independent variable on the dependent variable. In Tables 3 and 4 , the continuous variables are the number of coauthors on the particular manuscripts of interest and the number of publications of each respondent. Tables C and D (in S1 Appendix ) show the estimated coefficients and odds ratios with standardized coefficients. Comparing the two sets of results is instructive. In Table 3 , the odds ratio for the number of coauthors is 1.035, adding each additional author increases the odds of this manuscript having an honorary author by 3.5%. The estimated odds ratio for the standardized coefficient, (Table C in S1 Appendix ) is 1.10, meaning an increase in the number of coauthors of one standard deviation increases the odds that this manuscript has an honorary author by 10%. Meanwhile the standard deviation of the number of coauthors in this sample is 2.78, so 3.5% x 2.78 = 9.73%; the two estimates are very similar. This similarity repeats itself when we consider the number of publications and when we compare the incidence rate ratios across Table 4 and Table D in S1 Appendix . Standardization also tells us something about the relative effect size of different independent variables and in both models a standard deviation increase in the number of coauthors has a larger impact on the likelihood of adding another author than a standard deviation increase in additional publications.
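The back-of-the-envelope comparison above can be made exact: raising the per-coauthor odds ratio to the power of the standard deviation gives the standardized odds ratio directly, while the text's 3.5% x 2.78 is the linear approximation of the same quantity. Both values below are taken from the text.

```python
# Per-unit odds ratio for number of coauthors (Table 3) and the sample
# standard deviation of that variable, as reported in the text.
or_per_coauthor = 1.035
sd_coauthors = 2.78

# Exact standardized odds ratio: OR^sd = exp(beta * sd).
standardized_or = or_per_coauthor ** sd_coauthors
print(round(standardized_or, 2))  # 1.1, matching Table C in S1 Appendix

# Linear approximation used in the text: 3.5% per coauthor times 2.78 SDs.
approx_pct = (or_per_coauthor - 1) * sd_coauthors * 100
print(round(approx_pct, 2))  # 9.73
```

The exact calculation gives about 10.0% versus the approximation's 9.73%, which is why the two estimates in the text are "very similar" rather than identical.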

Honorary authorship in grant proposals

Our next set of results focus on honorary authorship in grant proposals. Looking across all disciplines, 20.8% of the respondents reported that they had added an investigator to a grant proposal even though the contribution of that individual was minimal (see Fig 1 for differences across disciplines). To more deeply probe into that behavior we begin with a model in which the dependent variable is binary, whether a respondent has added an honorary author, or not, to a grant proposal and thus use a logit model. With some modifications, the independent variables include the same variables as the manuscript models in Tables 3 and 4 . We remove a control variable relevant to manuscripts (total number of publications) and add two control variables to measure the level of exposure a particular scholar has to the funding process: the number of grants funded in the last five years and the total amount of grant funding (dollars) in that same period.

The results appear in Table 5 and, again, we see significant participation in honorary authorship. The estimates largely follow our predictions and mirror the results of the models in Tables 3 and 4 . Academic rank has a smaller effect, being an assistant professor increases the odds of adding an honorary author to a grant by 68% and being an associate professor increases those odds by 52%. On the other hand, the impact of being a research professor is larger in the grant proposal models than the manuscripts model of Table 3 while the impact of sex is smaller. As was true in the manuscripts models, the obligation to add honorary authors is also lumpy, some disciplines being much more likely to engage in the practice than others. We find five disciplines in the “more likely than average” category: medicine, nursing, management, engineering, and psychology. The disciplines that tend to add fewer honorary authors to grants are political science, biology, chemistry, and physics. Those that are indistinguishable from the average are accounting, economics, finance, information systems, sociology, ecology, marketing, computer science, and mathematics.


https://doi.org/10.1371/journal.pone.0187394.t005

We speculated that science, engineering, and medicine were more likely to practice honorary authorship in grant proposals because those disciplines are more dependent on research funding and more likely to consider funding as a requirement for tenure and promotion. The results in Tables 3 and 5 are somewhat consistent with this conjecture. Of the five disciplines in the “above average” category for adding honorary authors to grant proposals, four (medicine, nursing, engineering, and psychology) are dependent on labs and funding to build and maintain such labs for their research.

Reasons for adding honorary authors

Our next set of results looks more deeply into the reasons scholars give for adding honorary authors to manuscripts and to grants. When considering honorary authors added to manuscripts, we focus on a set of responses to the question: “what was the major reason you felt you needed to add those co-author(s)?” When we look at grant proposals, we use responses to the survey question: “The main reason you added an individual to this grant proposal even though he (or she) was not expected to make a significant contribution was…” Starting with manuscripts, although nine different reasons for adding authors were cited (see survey in S1 Appendix), only three were cited more than 10% of the time. The most common reason our respondents added honorary authors (28.4% of these responses) was that the added individual was the director of the lab. The second most common reason (21.4% of these responses), and the most disturbing, was that the added individual was in a position of authority and could affect the scholar's career. The third most common reason (13.2%) was that the added individual was the respondent's mentor. “Other” was selected by about 13% of respondents. The percentage of raw responses for each reason is shown in Fig 2.


Each pair of columns presents the percentage of respondents who selected a particular reason for adding an honorary author to a manuscript or a grant proposal. Director refers to responses stating, “this individual was the director of the lab or facility used in the research.” Authority refers to responses stating, “this individual occupies a position of authority and can influence my career.” Mentor, “this is my mentor”; colleague, “this is a colleague I wanted to help”; reciprocity, “I was included or expect to be included as a co-author on their work”; data, “they had data I needed”; reputation, “their reputation increases the chances of the work being published (or funded)”; funding, “they had funding we could apply to the research”; and reviewers, “the grant reviewers suggested we add co-authors.”

https://doi.org/10.1371/journal.pone.0187394.g002

To find out if the three most common responses were related to the professional characteristics of the scholars in our study, we re-estimate the model in Table 3 after replacing the dependent variable with the reasons for adding an author. In other words, the first model displayed in Table 6 , under the heading “Director of Laboratory,” estimates a regression in which the dependent variable equals one if the respondent added the director of the research lab in which they worked as an honorary author and equals zero if this was not the reason. The second model indicates those who added an author because he or she was in a position of authority and so forth. The estimated coefficients appear in Table 6 and the odds ratios are reported in S1 Appendix , Table E. Note the sample size is smaller for these regressions because we include only those respondents who say they have added a superfluous author to a manuscript.


https://doi.org/10.1371/journal.pone.0187394.t006

The results are as expected. The individuals who are more likely to add a director of a laboratory are research faculty (they mostly work in research labs and centers), and scholars in fields in which laboratory work is a primary method of conducting research (medicine, nursing, psychology, biology, chemistry, ecology, and engineering). The second model suggests that the scholars who add an author because they feel pressure from individuals in a position of authority are junior faculty (assistant and associate professors, and research faculty) and individuals in medicine, nursing, and management. The third model suggests assistant professors, lecturers, research faculty, and clinical faculty are more likely to add their mentors as an honorary author. Since many mentorships are established in graduate school or through post-docs, it is sensible that scholars who are early in their career still feel an obligation to their mentors and are more likely to add them to manuscripts. Finally, the disciplines most likely to add mentors to manuscripts seem to be the “professional” disciplines: medicine, nursing, and business (economics, information systems, management, and marketing). We do not report the results for the other five reasons for adding honorary authors because few respondent characteristics were statistically significant. One explanation for this lack of significance may be the smaller sample size (less than 10% of the respondents indicated one of these remaining reasons as being the primary reason they added an author) or it may be that even if these rationales are relatively common, they might be distributed randomly across ranks and disciplines.

Turning to grant proposals, the dominant reason for adding authors to grant proposals even though they are not actually involved in the research was reputation. Of the more than 2100 individuals who gave a specific answer to this question, 60.8% selected “this individual had a reputation that increases the chances of the work being funded.” The second most frequently reported reason for grants was that the added individual was the director of the lab (13.5%), and third was people holding a position of authority (13%). All other reasons garnered a small number of responses.

We estimate a set of regressions similar to Table 6 using the reasons for honorary grant proposal authorship as the dependent variable and the independent variables from the grant proposal models of Table 5. Before estimating those models, we also add six dummy variables reflecting different sources of research funding to see if the reason for adding honorary authors differs by type of funding. These dummy variables indicate funding from NSF, HHS (which includes the NIH), research grants from private corporations, grants from private non-profit organizations, state research grants, and a variable capturing all other federally funded grants. The omitted category is all other grants. The estimated coefficients appear in Table 7 and the odds ratios are reported in Table F in S1 Appendix.


https://doi.org/10.1371/journal.pone.0187394.t007

The first column of results in Table 7 replicates and adds to the model in Table 5 , in which the dependent variable is: “have you added honorary authors to grant proposals.” The reason we replicate that model is to add the six funding sources to the regression to see if some agencies see more honorary authors in their proposals than other agencies. The results in Table 7 suggest they do. Federally funded grants are more likely to have honorary authorships than other sources of grant funding as the coefficients on NSF, NIH, and other federal funding are all positive and significant at the 0.01 level. Corporate research grants also tend to have honorary authors included.

The remaining columns in Table 7 suggest that scholars in medicine and management are more likely to add honorary authors to grant proposals because of the added scholar's reputation, but there is little statistical difference across the other characteristics of our respondents. Exploring the different sources of funds, adding an individual because of his or her reputation is more likely in proposals to the Department of Health and Human Services (probably because medical proposals are heavily represented there and honorary authorship is common in medicine) and statistically less likely in grant proposals directed towards corporate research funding.

Table 7 shows that lab directors tend to be added as honorary authors on grant proposals by assistant professors and on proposals directed to private corporations. While position of authority (i.e., political power) was the third most frequently cited reason to add someone to a proposal, the practice seems to be dispersed across the academic universe, as the regression results in Table 7 show little variation across rank, discipline, past experience with research funding, or the funding source to which the proposal was directed. The remaining reasons for adding authors garnered a small portion of the total responses and showed little significant variation across the characteristics measured here. For these reasons, their regression results are not reported.

Coercive citations

There is widespread distaste among academics concerning the use of coercive citation. Over 90% of our respondents view coercion as inappropriate, 85.3% think its practice reduces the prestige of the journal, and 73.9% are less likely to submit work to a journal that coerces. These opinions are shared across the academic spectrum as shown in Fig 3 , which breaks out these responses by the major fields, medicine, science, engineering, business, and the social sciences. Despite this disapproval, 14.1% of the overall respondents report being coerced. Similar to the analyses above, our task is to see if there is a systematic set of attributes of scholars who are coerced or if there are attributes of journals that are related to coercion.


The first column in each cluster presents the percentage of respondents from each major academic group who either strongly agree or agree with the statement that coercive citation “is inappropriate.” The second column is the percentage that agrees that “[it] reduces the prestige of the journal.” The third column reflects agreement that they “are less likely to submit work to a journal that coerces.”

https://doi.org/10.1371/journal.pone.0187394.g003

Two dependent variables are used to measure the existence and the frequency of coercive citation. The first is binary, whether respondents were coerced or not, and the second counts the frequency of coercion, asking our respondents how many times they have been coerced in the last five years. Table 8 presents estimates of the logit model (coerced or not) and their odds ratios, and Table 9 presents estimates of the negative binomial model (measuring the frequency of coercion) and the accompanying incident rate ratios. With but a single exception (the estimated coefficient on female scholars was opposite our expectation) our hypotheses are supported. In this sample, it is males who are more likely to be coerced: being male raises the odds of being coerced by an estimated 18%. In the frequency estimates in Table 9, however, there was no statistical difference between male and female scholars.


https://doi.org/10.1371/journal.pone.0187394.t008


https://doi.org/10.1371/journal.pone.0187394.t009

Consistent with our hypotheses, assistant professors and associate professors were more likely to be coerced than full professors and the effect was larger for assistant professors. Being an assistant professor increases the odds that you will be coerced by 42% over a professor while associate professors see about half of that, a 21% increase in their odds. Table 9 shows assistant professors are also coerced more frequently than professors. Co-authors had a negative and significant coefficient as predicted in both sets of results. Consequently, comparing Tables 3 and 8 we see that manuscripts with many co-authors are more likely to add honorary authors, but are less likely to be targeted for coercion. Finally, we find significant variation across disciplines. Eight disciplines are significantly more likely to be coerced than the average across all disciplines and ordered by their odds ratios (largest to smallest) they are: marketing, information systems, finance, management, ecology, engineering, accounting, and economics. Nine disciplines are less likely to be coerced and ordered by their odds ratios (smallest to largest) they are: mathematics, physics, political science, chemistry, psychology, nursing, medicine, computer science, and sociology. Again, there is support for our speculation that disciplines in which grant funding is less critical (and therefore publication is relatively more critical) experience more coercion. In the top coercion category, six of the eight disciplines are business disciplines, where research funding is less common, and in “less than average” coercion disciplines, six of the nine disciplines rely heavily on grant funding. The anomaly (and one that deserves greater study) is that the social sciences see less than average coercion even though publication is their primary measure of academic success. While they are prime targets for coercion, the editors in their disciplines have largely resisted the temptation. 
Again, this same pattern emerges in the frequency model. In the S1 Appendix, these models are re-estimated after standardizing the continuous variables. Results appear in Table G (existence of coercion) and Table H (frequency of coercion).

Coercive citations: Journal data

To achieve a deeper understanding of coercive citation, we reexamine this behavior using academic journals as our unit of observation. We analyze these journal-based data in two ways: 1) a logit model in which the dependent variable equals 1 if that journal was named as having coerced and 0 if not, and 2) a negative binomial model in which the dependent variable is the count of the number of times a journal was identified as one where coercion occurred. As before, the variance of these data substantially exceeds the mean and thus Poisson regression is inappropriate. To test our hypotheses, the included independent variables are dummy variables for discipline, journal rank, and dummy variables for different types of publishers. We control for some of the different editorial practices across journals by including the number of documents published annually by each journal and the average number of citations per article.
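
The overdispersion rationale for preferring a negative binomial model over a Poisson model can be checked directly. This sketch uses simulated counts, not the journal data: under a Poisson model the variance equals the mean, so a variance-to-mean ratio well above 1 signals overdispersion.

```python
import numpy as np

# Simulate overdispersed counts as a gamma-Poisson mixture (which is
# exactly a negative binomial): each unit gets its own Poisson rate.
# Parameters here are invented for illustration.
rng = np.random.default_rng(1)
rates = rng.gamma(shape=0.5, scale=2.0, size=10_000)  # heterogeneous rates
counts = rng.poisson(rates)

mean, var = counts.mean(), counts.var(ddof=1)
dispersion = var / mean   # ~1 under Poisson; > 1 indicates overdispersion
print(f"mean={mean:.2f}  variance={var:.2f}  ratio={dispersion:.2f}")
```

When this ratio is substantially greater than 1, as the paper reports for both the author- and journal-based counts, Poisson standard errors are too small and the negative binomial is the appropriate choice.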

The results of the journal-based analysis appear in Table 10. Once again, and consistent with our hypothesis, differences across disciplines emerge and closely follow the previous results. The discipline journals most likely to have coerced authors for citations are in business. The effect of a journal's rank on its use of coercion is perhaps the most startling finding. Measuring journal rank using the h-index suggests that more highly rated journals are more likely to have coerced, and to have coerced more frequently, which is the opposite of our hypothesis that lower-ranked journals are more likely to coerce. Perhaps the chance to move from being a “good” journal to a “very good” journal is just too tempting to pass up. There is some anecdotal evidence consistent with this result. If one surfs through the websites of journals, many simply do not mention their rank or impact factor. Those that do tend to be more highly ranked (a low-ranked journal typically doesn't advertise that fact), but the very presence of the impact factor on a website suggests that the journal, or more importantly the publisher, places some value on it; given that pressure, it is not surprising that it may influence editorial decisions. On the other hand, we might be observing the results of established behavior: if some journals have practiced coercion for an extended time, their citation counts might be high enough to have inflated their h-indices. We cannot discern a direction of causality, but either way our results suggest that more highly ranked journals end up using coercion more aggressively, all else equal.


https://doi.org/10.1371/journal.pone.0187394.t010

There seem to be publisher effects as well. As predicted, journals published by private, profit-oriented companies are more likely to have coerced, but coercion also seems to be more common among academic associations than university publishers. Finally, we note that the total number of documents published per year is positively related to a journal having coerced, and the impact of the average number of citations per document was not significantly different from zero.

The result that higher-ranked journals seem more inclined than lower-ranked journals to have practiced coercion warrants caution. These data contain many obscure journals; for example, there are more than 4000 publications categorized as medical journals, and this long tail could create a misleading result. For instance, suppose the medical journals ranked between 1000 and 1200 most aggressively use the practice of coercion. In relative terms these are “high”-ranked journals, because 65% of the journals are ranked even lower than these clearly obscure publications. To account for this possibility, a second set of estimates was calculated after eliminating all but the “top-30” journals in each discipline. The results appear in Table 11 and generally mirror the results in Table 10. Journals in the business disciplines are more likely to have used coercion, and used it more frequently, than those in other disciplines. Medicine, biology, and computer science journals used coercion less. However, even concentrating on the top 30 journals in each field, the h-index remains positive and significant; higher-ranked journals in those disciplines are more likely to have coerced.


https://doi.org/10.1371/journal.pone.0187394.t011

Padded reference lists

Our final empirical tests focus on padded citations. We asked our respondents whether, if they were submitting an article to a journal with a reputation for asking for citations even when those citations are not critical to the content of the article, they would “add such citations BEFORE SUBMISSION.” More than 40% of the respondents agreed with that sentiment. Regarding grant proposals, 15% admitted to adding citations to their reference lists in grant proposals “even if those citations are of marginal import to my proposal.”

To see if reference padding is as systematic as the other types of manipulation studied here, we use the categorical responses to the above questions as dependent variables and estimate ordered logit models using the same descriptive independent variables as before. The results for padding references in manuscripts and grant proposals appear in Tables 12 and 13 , respectively. Once more, with minor deviation, our hypotheses are strongly supported.


https://doi.org/10.1371/journal.pone.0187394.t012


https://doi.org/10.1371/journal.pone.0187394.t013

Tables 12 and 13 show that scholars of lesser rank and those without tenure are more likely to pad citations in manuscripts and skew citations in grant proposals than are full professors. The gender results are mixed: males are less likely to pad their citations in manuscripts, but more likely to pad references in grant proposals. The business disciplines and the social sciences are more likely to pad references in manuscripts, while business and medicine pad citations on grant proposals. In both situations, familiarity with other types of manipulation has a strong, positive correlation with the likelihood that individuals pad their reference lists. That is, respondents who are aware of coercive citation, and those who have been coerced in the past, are much more likely to pad citations before submitting a manuscript to a journal. Scholars who have added honorary authors to grant proposals are also more likely to skew their citations towards high-impact journals. While we cannot intuit the direction of causation, we show evidence that those who manipulate in one dimension are willing to manipulate in another.

Our results are clear: academic misconduct, specifically misattribution, spans the academic universe. While there are different levels of abuse across disciplines, we found evidence of honorary authorship, coercive citation, and padded citation in every discipline we sampled. We also suggest that a useful construct for approaching misattribution is to assume individual scholars make deliberate decisions to cheat after weighing the costs and benefits of that action. We cannot claim that our construct is universally true, because other explanations may be possible, nor do we claim it explains all misattribution behavior, because other factors can play a role. However, the systematic pattern of superfluous authors, coerced citations, and padded references documented here is consistent with scholars making deliberate decisions to cheat after evaluating the costs and benefits of their behavior.

Consider the use of honorary authorship in grant proposals. Of the more than 2100 individuals who gave a specific reason as to why they added a superfluous author to a grant proposal, one rationale outweighed the others: over 60% said they added the individual because they thought the added scholar's reputation increased their chances of a positive review. That behavior, adding someone with a reputation even though that individual isn't expected to contribute to the work, was reported across disciplines, academic ranks, and levels of experience in grant work. Apparently, adding authors with highly recognized names to grant proposals has become part of the game and is practiced across disciplines and ranks.

Focusing on manuscripts, there is more variation in the stated reasons for honorary authorship. Lab directors are added to papers in disciplines that are heavy lab users and junior faculty members are more likely to add individuals in positions of authority or mentors. Unlike grant proposals, few scholars add authors to manuscripts because of their reputation. A potential explanation for this difference is that many grant proposals are not blind reviewed, so grant reviewers know the research team and can be influenced by its members. Journals, however, often have blind referees, so while the reputation of a particular author might influence an editor it should not influence referees. Furthermore, this might reflect the different review process of journals versus funding agencies. Funding agencies specifically consider the likelihood that a research team can complete a project and the project’s probability of making a significant contribution. Reputation can play a role in setting that perception. Such considerations are less prevalent in manuscript review because a submitted work is complete—the refereeing question is whether it is done well and whether it makes a significant contribution.

Turning to coercive citations, our results in Tables 8 and 9 are also consistent with a model of coercion that assumes editors who engage in coercive citation do so mindfully: they are influenced by what others in their field are doing, and if they coerce they take care to minimize the potential cost their actions might trigger. Parallel analyses using a journal database are also consistent with that view. In addition, the distinctive characteristics of each dataset illuminate different parts of the story. The author-based data suggest editors target their requests to minimize the potential cost of their activity by coercing less powerful authors and targeting manuscripts with fewer authors. However, contrary to the honorary authorship results, females are less likely to be coerced than males, ceteris paribus. The journal-based data add that higher-ranked journals seem more inclined to take the risk than lower-ranked journals, and that the type of publisher matters as well. Furthermore, both approaches suggest that certain fields, largely located in the business professions, are more likely to engage in coercive activities. This study did not investigate why business might be more actively engaged in academic misconduct because there was little theoretical reason to hypothesize this relationship. There is, however, some literature suggesting that ethics education in business schools has declined [34]. For the last 20–30 years business schools have embraced the mantra that stockholder value is the only pertinent concern of the firm. It is a small step to imagine that citation counts could be viewed as the only thing that matters for journals, but additional research is needed to flesh out such a claim.

Again, we cannot claim that our cost-benefit model of editors who try to inflate their journal impact factor score is the only possible explanation of coercion. Even if editors are following such a strategy, that does not rule out additional considerations that might also influence their behavior. Hopefully future research will help us understand the more complex motivations behind the decision to manipulate and the subsequent behavior of scholars.

Finally, it is clear that academics see value in padding citations as it is a relatively common behavior for both manuscripts and grants. Our results in Tables 12 and 13 also suggest that the use of honorary authorship and padding citations in grant proposals and coercive citation and padding citations in manuscripts is correlated. Scholars who have been coerced are more likely to pad citations before submitting their work and individuals who add authors to manuscripts also skew their references on their grant proposals. It seems that once scholars are willing to misrepresent authorship and/or citations, their misconduct is not limited to a single form of misattribution.

It is difficult to examine these data without concluding that there is a significant level of deception in authorship and citation in academic research; while one might hope that academics are above such scheming to enhance their position, the results suggest otherwise. The overwhelming consensus is that such behavior is inappropriate, but its practice is common. It seems that academics are trapped, compelled to participate in activities they find distasteful. We suggest that the fuel driving this cultural norm is the competition for research funding and high-quality journal space, coupled with an intense focus on a single measure of performance: the number of publications or grants. That competition cuts both ways. On the one hand, it focuses creativity, hones research contributions, and distinguishes between significant contributions and incremental advances. On the other hand, it creates incentives to take shortcuts and inflate one's research metrics by strategically manipulating attribution. This puts academics at odds with their core ethical beliefs.

The competition for research resources is getting tighter, and if there is an advantage to be gained by misbehaving then the odds that academics will misbehave increase; left unchecked, the manipulation of authorship and citation will continue to grow. Different types of attribution manipulation continue to emerge; citation cartels (where editors at multiple journals agree to pad one another's impact factors) and journals that publish anything for a fee while falsely claiming peer review are two examples [30, 35].

It will be difficult to eliminate such activities, but some steps can probably help. Policy actions aimed at attribution manipulation need to reduce the benefits of manipulation and/or increase its cost. One of the driving incentives of honorary authorship is that the number of publications has become a focal point of evaluation, and that number is not sufficiently discounted by the number of authors [36]. If a publication with x authors counted as 1/x of a publication for each author, the ability to inflate one's vita would be reduced. There are problems, of course, such as who would implement such a policy, but some of these problems can be addressed. For example, if the online, automated citation calculators (e.g., the h-index and impact factors computed by SCOPUS and Google Scholar) automatically discounted their statistics by the number of authors, the practice could eventually influence the entire academe. Another shortcoming of this policy is that simple discounting does not allow for differential credit where it may be warranted, nor does it remove the power disparity across academic ranks. However, it does stiffen the resistance to adding authors, and that is a crucial step.
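
The 1/x discounting described above can be made concrete with a small sketch (the publication records here are invented for illustration):

```python
# Hypothetical sketch of fractional authorship credit: a paper with x
# authors counts as 1/x of a publication for each author, so adding
# honorary authors dilutes everyone's credit instead of inflating vitas.
papers = [
    {"title": "A", "n_authors": 1},
    {"title": "B", "n_authors": 4},
    {"title": "C", "n_authors": 10},
]

raw_count = len(papers)                               # conventional tally
fractional = sum(1 / p["n_authors"] for p in papers)  # 1 + 0.25 + 0.1
print(raw_count, round(fractional, 2))
```

Under the conventional tally all three papers count fully; under fractional credit the same record is worth 1.35 publications, so each honorary author added to a paper directly reduces the credit accruing to the legitimate authors.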

An increasing number of journals, especially in medicine, are adopting authorship guidelines developed by independent groups, the most common being set forth by the International Committee of Medical Journal Editors (ICMJE) [ 37 ]. To date, however, there is little evidence that those standards have significantly altered behavior; although it is not clear if that is because authors are manipulating in spite of the rules, if the rules are poorly enforced, or if they are poorly designed from an implementation perspective [ 21 ]. Some journals require authors to specifically enumerate each author’s contribution and require all of the authors to sign off on that division of labor. Such delineation would be even more effective if authorship credit was weighted by that division of labor. Additional research is warranted.

There may be greater opportunities to reduce the practice of coercive citation. A fundamental difference between coercion and honorary authorship is the paper trail: editors write down such “requests” to authors, so violations are easier to document and enforcement is more straightforward. First, it is clear that impact factors should no longer include self-citations. This simple act removes the incentive to coerce authors. Reuters already makes such calculations and publishes impact factors both including and excluding self-citations. However, the existence of multiple impact factors gives journals the opportunity to adopt and advertise the factor that puts them in the best light, which means that journals with editors who practice coercion can continue to use impact factors that can be manipulated. Thus, self-citations should be removed from all impact factor calculations. This does not eliminate other forms of impact factor manipulation, such as posting accepted articles on the web and accumulating citations prior to official publication, but it removes the benefit of editorial coercion and other strategies based on inflating self-citation [38]. Second, journals should explicitly ban their editors from coercing. Some journals are taking these steps, and while words do not ensure practice, a code of ethics reinforces appropriate behavior because it more closely ties a journal's reputation to the practices of its editors and should increase the oversight of editorial boards. Some progress is being made on the adoption of editorial guidelines, but whether they have any impact is currently unknown [39, 40].

Third, these results reinforce the idea that grant proposals should be double-blind reviewed. Blind review shifts the decision calculus towards the merit of a proposal and reduces the incentives for honorary authorship. The current system can inadvertently encourage misattribution. For example, scholars are often encouraged to visit granting agencies to meet with reviewers and directors of programs to talk about high-interest research areas. Such visits make sense, but it is easy for those scholars to interpret their visit as a name-collecting exercise: finding people to add to proposals and collecting references to cite. Fourth, academic administrators (Provosts, Deans, and Chairs) need to have clear rules concerning authorship. Far too many of our respondents said they added a name to their work because that individual could have an impact on their career. Administrators also need guidelines that address the inclusion of mentors and lab directors in author lists. Proposals that include name-recognizable scholars for only a small proportion of the grant should be viewed with suspicion. This is a consideration in some grant opportunities, but that linkage can be strengthened. Finally, there is some evidence that mentoring can be effective, but there is a real question as to whether mentors are teaching compliance or how to cheat [41].

There are limitations in this study. Although surveys have shortcomings such as self-reporting bias and self-selection issues, there are some issues for which surveys remain the data collection method of choice. Manipulation is one of them. It would be difficult to determine whether someone added honorary authors or padded citations prior to submission without asking that individual. Similarly, coercion is most directly addressed by asking authors whether editors coerced them for citations. Other approaches, such as examining archival data, running experiments, or building simulations, will not work. Thus, despite its shortcomings, the survey is the method of choice.

Our survey was sent via email, and the overall response rate was 10.5%, which by traditional survey standards may be considered low. We have no data on how many surveys were filtered as spam, ended up in junk mail folders, or were sent to obsolete addresses. We recognize, however, that there is rising hesitancy among individuals to click on an emailed link, which is what we were asking our recipients to do. For these reasons, we anticipated that our response rate might be low and compensated by increasing the number of surveys sent out. In the end, we have over 12,000 responses and found thousands of scholars who have participated in manipulation. In the S1 Appendix, Table A presents response rates by discipline; while there is variation across disciplines, that variation does not correlate with any of the fundamental results. That is, there does not seem to be a discipline bias arising from differential response rates.
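
For a sense of scale, the stated response rate and response count imply roughly how many surveys were emailed. This is a back-of-the-envelope estimate; the paper does not state the exact number sent:

```python
responses = 12_000       # "over 12,000 responses" (a lower bound)
response_rate = 0.105    # overall response rate of 10.5%

# Invert the response rate to estimate the number of surveys emailed
surveys_sent = responses / response_rate
print(round(surveys_sent))  # roughly 114,286 surveys emailed
```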

A major concern when conducting survey research is that the sample may not represent the population. To address this possible issue in our study, we performed various statistical analyses to determine whether we encountered sampling bias. First, we compared two population demographics (sex and academic rank) to the demographics of our respondents (see Table B in S1 Appendix). The percentage of males and females in each discipline was very close to the reported sex of the respondents. There was greater variation in academic ranks, with the rank of full professor being over-represented in our sample. One should keep this in mind when interpreting our findings. However, our hypotheses and results suggest that full professors are the least likely to be coerced, use padded citations, and use honorary authorship; consequently, our results may actually under-estimate the incidence of manipulation. Perhaps the greatest concern of potential bias innate in surveys comes from the intuition that individuals who are more intimately affected by a particular issue are more likely to respond. In the current study, it is plausible that scholars who have been coerced, or felt obligated to add authors to manuscripts, or have added investigators to grant proposals, are upset by that consequence and more likely to respond. However, if that motivation biased our responses, it should show up in the response rates across disciplines; that is, disciplines reporting a greater incidence of manipulation should have a higher percentage of their population experiencing manipulation and thus higher response rates. The rank correlation coefficient between discipline response rates and the proportion of scholars reporting manipulation is r_s = -0.181, suggesting virtually no relationship between the two measures.
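
The bias check above uses Spearman's rank correlation between per-discipline response rates and reported manipulation rates. As an illustrative sketch (using the standard tie-aware ranking; the data below is not the paper's), the coefficient can be computed as follows:

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # find the run of tied values starting at position i
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's r_s: the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A coefficient near zero, like the reported r_s = -0.181, is consistent with the authors' conclusion that disciplines with higher response rates did not systematically report more manipulation.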

In the end, we cannot rule out the existence of bias, but we find no evidence suggesting that it affects our results. We are left with the conclusion that scholars manipulate attribution by adding honorary authors to their manuscripts and false investigators to their grant proposals, and that some editors coerce scholars into adding citations that are not pertinent to their work. It is unlikely that this unethical behavior can be totally eliminated, because academics are a competitive, intelligent, and creative group of individuals. However, most of our respondents say they want to play it straight; therefore, by reducing the incentives for misbehavior and raising the costs of inappropriate attribution, we can expect a substantial portion of the community to go along. With this inherent support and some changes to the way we measure scientific contributions, we may reduce attribution misbehavior in academia [ 42 ].

Supporting information

S1 Appendix. Statistical methods, surveys, and additional results.

https://doi.org/10.1371/journal.pone.0187394.s001

S2 Appendix. Honorary authors data.

https://doi.org/10.1371/journal.pone.0187394.s002

S3 Appendix. Coercive citation data.

https://doi.org/10.1371/journal.pone.0187394.s003

S4 Appendix. Journal data.

https://doi.org/10.1371/journal.pone.0187394.s004


  • NEWS FEATURE
  • 13 May 2020

Meet this super-spotter of duplicated images in science papers

Helen Shen

Helen Shen is a science journalist based in Sunnyvale, California.


Credit: Gabriela Hasbun for Nature

February the fourteenth starts like most other days for Elisabeth Bik: checking her phone in bed, she scrolls through a slew of Twitter notifications and private messages from scientists seeking her detective services. Today’s first request is from a researcher in Belgium: “Hi! I know you have a lot of people asking you to use your magic powers to analyse figures, blots and others but I just wanted to ask your opinion…”


Nature 581 , 132-136 (2020)

doi: https://doi.org/10.1038/d41586-020-01363-z


Ethics of generative AI and manipulation: a design-oriented research agenda

  • Original Paper
  • Open access
  • Published: 03 February 2024
  • Volume 26, article number 9 (2024)


  • Michael Klenk, ORCID: orcid.org/0000-0002-1483-0799


Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.


Introduction

Research on generative AI is growing at scale, and the results achieved by recent applications are nothing short of astonishing (though see Floridi, 2023 ). These developments create “enormous promise and peril” (The Economist, 2023 ), especially by enabling effective, automated influence at scale.

On the one hand, such ability is promising because many good things depend on effective influence. For example, effective influence is required to facilitate better lifestyle interventions to improve health outcomes (see e.g. Tremblay et al., 2010 ). It could also improve public policy, helping governments to communicate with citizens amidst the noise of propaganda, filter bubbles, and fake news (European Commission, forthcoming).

On the other hand, effective influence invites manipulation, a morally dubious form of influence. Generative AI could, for instance, “make email scams more effective by generating personalised and compelling text at scale” (Weidinger et al., 2022 ) or learn to generate outputs that effectively exploit users’ cognitive biases to influence their behaviour (Kenton et al., 2021 ). More generally, whenever effective influence is rewarded—which is the case in almost any area of human interaction, such as social life, marketing, or politics—there is a strong incentive to turn from legitimate forms of influence like rational persuasion to more effective but morally dubious forms of influence like manipulation. Hence, generative AI “aggravates” (Klenk & Jongepier, 2022b ) existing ethical concerns about online manipulation.

However, there is no clear view of how the (dis-)value of manipulation should play a role in designing new technologies based on generative AI. How, in other words, can generative AI (or, more precisely, applications that use it) be designed so that its application avoids illegitimate forms of manipulation? Existing work in AI ethics barely touches on design questions and focuses more on the important but still preliminary step of drawing attention to pertinent ethical risks (e.g. Weidinger et al., 2022 ). Moreover, some technical work on AI alignment already addresses in general terms how to make generative AI applications “helpful, honest, and harmless” (Askell et al., 2021 ), but there is insufficient attention paid to an appropriate conceptualisation of manipulation that can guide design, which is unsurprising given that manipulation is a difficult concept to grasp.

The lack of attention to manipulation in the debate about generative AI is a significant omission. Manipulation is identified as a disvalue and thus an explicit target of AI regulation, e.g. in the EU’s forthcoming AI Act (European Commission, 2021 ; European Commission et al., 2022 ). More generally, manipulation is considered a threat to democracy and trustworthiness, which means that it is a fundamental threat to the critical aim of responsible, trustworthy AI (Faraoni, 2023 ). In addition, a large body of literature documents worries about manipulation in other contexts, notably nudging and advertising (cf. Sunstein, 2016 ). There is, thus, a compelling legal and moral case for paying attention to manipulation. Given these goals, it is imperative to understand clearly what manipulation is and to devise appropriate requirement specifications.

Therefore, this article discusses a research agenda studying manipulation in generative AI. I argue that good research on manipulation and generative AI—which everyone concerned with trustworthy AI and the value of democracy is or should be interested in—depends significantly on our conceptualisation of manipulation. It matters because different phenomena will come into view depending on our conception of manipulation. It also matters pragmatically because different conceptions of manipulation will imply different design and regulatory requirements.

I proceed as follows. The section “Design for values and conceptual engineering” introduces the design for values approach in general. The section “Design for non-manipulation” then discusses pertinent research questions about manipulation that relate to the conceptual, empirical, and implementation phases of a design for value project, with a focus on the conceptual phase.

Design for values and conceptual engineering

I take a design perspective that aims to help designers and engineers put values at the heart of the design of new technologies (van de Poel, 2020 ; van den Hoven et al., 2015 ). Central to the design perspective—whose importance is stressed by the IEEE, the WHO, UNESCO, the EU, and many others—is that human values should inform and shape appropriate design requirements. Footnote 1 Consequently, several key questions for any design for value project concern the nature of the values that should be designed for. Footnote 2

Central to the idea of design for values is then that the target values can be specified in a way that allows for a systematic and reliable deduction of concrete design requirements from a general, abstract conception of target values such as ‘trust,’ ‘democracy,’ or ‘non-manipulation’ (van de Poel, 2013 , 2020 ; Veluwenkamp & van den Hoven, 2023 ). It is generally acknowledged that there are often different, (prima facie) plausible conceptualisations of target values, and quite some attention has been devoted to different ways of settling on a value conceptualisation (cf. Friedman & Hendry, 2019 ). Footnote 3

However, what has only recently been emphasised is the important question of how we can adjudicate between different, perhaps conflicting conceptualisations of a target value (Himmelreich & Köhler, 2022 ; Veluwenkamp & van den Hoven, 2023 ). Footnote 4 As Veluwenkamp and van den Hoven ( 2023 , p. 2) put it, “it is not always obvious which concepts invoked in the decomposition of requirements is the most appropriate in the relevant context of use.” In answering the question of how we can decide which conceptualisations to use, we must be aware that conceptualisations have consequences; they matter a great deal. For one, different conceptualisations matter for our understanding because they will bring different phenomena into view. For example, conceiving of manipulation as an influence hidden from the user will prompt researchers and designers to look at completely different phenomena than conceiving of manipulation as a kind of social pressure that need not be hidden from the user at all. In that sense, different conceptions of manipulation function like searchlights. Once they are adopted for a given target value, they bring some phenomena into scope and blind us to others, which may be equally if not more important (see also Barnhill, 2022 ). Therefore, it is relevant for good research on manipulation and generative AI that the chosen conceptualisation reflects or covers the phenomena that make people worried about manipulation in the first place.

Furthermore, conceptualisations also influence the concrete technological interventions and innovations developed to solve the design challenge. For example, thinking of trust as epistemic reliability will imply very different design requirements, and result in different technical solutions toward the goal of trustworthy AI than conceptualising trust in moral terms such as benevolence (cf. Veluwenkamp & van den Hoven, 2023 ). Picking a conceptualisation is thus far from being ‘just about words.’ It is a consequential, material choice. When we start with two different conceptualisations of manipulation, we will likely get two different technical artefacts or systems when we design for non-manipulation. Moreover, if our conceptualisation is bad or inappropriate, the design challenge addresses a faux problem. So, when we aim to design for values, our success depends on the kinds of conceptualisations we pick.

Therefore, good research on manipulation and generative AI depends on an appropriate conceptualisation of ‘manipulation.’ Footnote 5 Existing discussions of manipulation and generative AI leave much to be desired in this dimension. Weidinger et al. ( 2022 ) are concerned with a taxonomy of generative AI risks. When they discuss manipulation, they fail to adequately distinguish it from deception. This omission raises questions that they do not answer. Is design for non-manipulation just design for non-deception? Or is there more? If there is more, what would that conception look like? Kenton et al. ( 2021 ) provide a more elaborate discussion, and they end up with a broad and encompassing conceptualisation of manipulation, arguing from a safety perspective: the more phenomena covered, the safer the resulting design. But, as they acknowledge themselves, their conceptualisation may be “too wide-ranging” (Kenton et al., 2021 , p. 11). Too many phenomena will come into view as instances of manipulation, clouding our sense of what manipulation really is, and designs targeted at those phenomena may be overburdened with requirements. Going forward, research on manipulation in generative AI should focus on sharper, more appropriate conceptions of the target phenomenon.

The obvious yet fundamental question concerns the appropriate criteria for choosing a conceptualisation. What makes one conception of, for example, ‘manipulation’ better than another? Traditionally, conceptualisations seem appropriate insofar as they match the target phenomenon. In that view, a conceptualisation of manipulation is appropriate insofar as it captures all and only cases of manipulation. Let this be the narrow criterion of appropriateness . Footnote 6 Importantly, a conceptualisation of manipulation is appropriate according to the narrow criterion quite independently of whether it ‘works’ in practice, such as in design or policy work. The narrow criterion chiefly aims at understanding by clarifying the constituent parts of a concept with little to no regard for whether or not the conceptualisation is helpful in design projects.

However, recently, the debate on ‘conceptual engineering’ in philosophy and the ethics of technology suggested that there may also be moral and pragmatic reasons that have a legitimate influence on our choice of conceptualisation, and tentative proposals have been made about how to systematically assess those reasons (cf. Veluwenkamp & van den Hoven, 2023 ). From this perspective, moral and pragmatic considerations about the causal effects of using a particular conceptualisation or its practicality also play a role in determining whether it is an appropriate conceptualisation (in addition to considerations about whether the conceptualisation captures all and only cases of the target phenomenon, in line with the narrow criterion). Let this be the broad criterion of appropriateness for conceptualisation choice. The broad criterion of appropriateness may especially be relevant from a design perspective, given that a conceptualisation of manipulation in the context of generative AI ultimately ought to inform design choices. However, to what extent broad considerations ought to outweigh narrow considerations is a challenging and unresolved metaphilosophical question.

My aim here is not to weigh in on the metaphilosophical question of whether and why we should prefer the narrow or broad criterion of appropriateness. Footnote 7 Instead, in what follows, I will point out the open questions that still stand in the way of contributing to either approach: What is manipulation (as a folk concept), and what should it be, provided we are prepared to deviate from the folk concept, for reasons of accuracy or other pragmatic, and moral reasons?

Design for non-manipulation

Design for value approaches typically involve the following stages: a phase of considering the appropriate conceptualisation of a value using conceptual means (e.g. reasoning), an empirical stage where stakeholder input is solicited to contribute to the conceptualisation, and a design or implementation stage (Buijsman et al., forthcoming; Friedman & Hendry, 2019 ).

The three stages of a design for value project—conceptual, empirical, and design—are meant to be repeated at different stages of specification of the target value (i.e. from identification of the value to conceptualisation, association with norms, etc.), until concrete design requirements are reached (cf. Veluwenkamp & van den Hoven, 2023 ). I restrict my focus primarily to the conceptualisation stage. As our understanding of manipulation grows and questions about the appropriate conceptualisation get resolved, we should expect the debate to turn to the subsequent steps of operationalising toward concrete design requirements. Footnote 8

Since manipulation is generally seen as a dis-value, I focus on non-manipulation , viz. the absence of manipulation, as a target value. It is clear, then, that even a successful non-manipulative design will probably still leave many other ethically significant issues untouched. A generative AI application that does not manipulate may be ethically legitimate from a manipulation perspective , but overall it may still have other ethical issues (such as issues to do with explainability, privacy, etc.). As such, design for non-manipulation may need to be combined with, or form a part of, broader design aspirations, such as design for trustworthy AI or design for democracy (EGE, 2023 ).

Conceptual stage

To design for non-manipulative generative AI, at least the following questions need to be answered:

What are reliable criteria to identify manipulation and to distinguish it from other (often less morally suspect) forms of influence?

How can generative AI applications be aligned with criteria for non-manipulation?

When and why is manipulation morally bad?

The first question is quintessentially connected to an appropriate conceptualisation of manipulation. Answering it will give us a way to tell whether a given influence—such as an output produced by a generative AI application—is manipulation. For example, suppose that a personal digital health assistant driven by generative AI outputs ‘You should be ashamed of yourself for ordering that meal’ to the user after drawing on their recent purchase history. To decide whether that prompt—or any other output generated by the system—qualifies as manipulation, we need reliable criteria to identify manipulation. In this section, I will briefly review the most pertinent criteria for manipulation. After considering and rejecting several potential criteria, I will suggest—in Sect. “ The indifference criterion ”—that the indifference criterion is most appropriate to conceptualise manipulation.

The continuum model of influence

Manipulation is a form of influence (Coons & Weber, 2014b ). As social animals, humans influence each other in countless ways. Some influences are intentional, such as a speech act, while others are unintentional, such as the intimidating effect a very tall person may have on others. However, not all intentional influences are ethically problematic. For example, if you are the passenger in a car and you yell out to the driver to warn them about an accident, you are not doing anything wrong (cf. Sunstein, 2016 ). Therefore, the first question requires us to determine how manipulation, as a morally suspect influence, is set apart from other types of influence that are generally deemed legitimate.

Criteria for identifying manipulation implied by a chosen conceptualisation may be derived by contrasting it with other forms of influence. Indeed, it has been suggested that manipulation sits on a continuum of influence, situated between rational persuasion and coercion (Beauchamp, 1984 ; Beauchamp & Childress, 2019 ). This continuum model helps draw basic distinctions and conceptualise the idea that there are some benign types of influence, like rational persuasion, and other types of influence that are clearly problematic, like coercion.

However, the continuum model does not yet provide us with reliable criteria for manipulation. There seem to be forms of non-persuasive and non-coercive influence that are not manipulation (Noggle, 1996 ). For example, dressing up for a job interview is neither rational persuasion nor coercion, but it does not look like manipulation either (Noggle, 1996 ). Depending on how we define the reference points of ‘persuasion’ and ‘coercion,’ the continuum model might give us criteria for manipulation that are much too broad, resulting in overly stringent design requirements for generative AI.

Therefore, it is more promising to turn to philosophical theories of manipulation that offer more specific criteria for identifying manipulation. There are several influential ideas about manipulation that are simple, intuitive, and seemingly easy to apply in practice.

The hidden influence criterion

Perhaps the most influential idea is that manipulation is necessarily a form of hidden influence (cf. Faraoni, 2023 , and its uptake and reflection in policy documents). According to Susser et al. ( 2019a , 2019b ), manipulation is an influence that the victim is not or could not easily be aware of. For this conception to be useful in generative AI, it is crucial to specify exactly what remains hidden from the manipulation victim. For example, must the intended outcome of the influence be hidden from the user? Or the precise psychological mechanism through which the influence is intended to work? Or how the influence was generated? The latter, for example, would suggest that any influence generated by generative AI but not declared as such would count as manipulative on the hidden influence conception. In any case, the hidden influence conception helps distinguish manipulation from persuasion and coercion on the continuum model because these forms of influence are necessarily overt (cf. Klenk, 2021c ). Footnote 9

However, the hidden influence conceptualisation of manipulation is unlikely to provide reliable criteria to capture the phenomenon of manipulation accurately, let alone entirely.

On the one hand, many hidden influences do not fall under manipulation. For instance, the heuristics and biases research programme in psychology suggests that many of our decisions arise out of hidden processes that are not the result of conscious deliberation (Kahneman, 2012 ). Still, such processes often seem legitimate and non-manipulative (cf. Sunstein, 2016 ). Therefore, the criterion of hidden influence risks being over-inclusive: it classifies too many cases as manipulation, thus generating false positives. It would require further work to explain how hidden influence is to be understood in a way that makes it a credible criterion for manipulation. Footnote 10

On the other hand, some important forms of manipulation are not covered by the hidden influence conception (cf. Klenk, 2021c ). For example, a manipulative real-estate agent may use the homely scent of freshly baked cookies at a house viewing to lure in potential buyers who are, nonetheless, fully aware that they are being manipulated (Barnhill, 2014 ). Similarly, the dark pattern known as a ‘roach motel’ often prevents users from cancelling a service by making cancellation cumbersome and tiring (Brignull, 2023 ). Victims of a roach motel are being manipulated even though they are often fully aware of the influence. Therefore, the hidden influence criterion also risks being under-inclusive: it misses genuine cases of manipulation, thus generating false negatives.

As a result, the hidden influence conception fails given the narrow criterion of appropriateness I discussed in Sect. “ Design for values and conceptual engineering ” (recall that the narrow criterion says that a conceptualisation is appropriate only if it captures all and only cases of manipulation).

It is also questionable whether the hidden influence conception fares well on a broad criterion of appropriateness. Setting aside the important questions raised at the beginning of this section, the criterion seems easy enough to apply, which may count in its favour given a broad criterion (though see Klenk, 2023 ). However, it may have the morally problematic implication that it shifts some of the burden for combating manipulation from the perpetrator to the victim (cf. Klenk, 2021c ). After all, if manipulation is defined as hidden, then drawing it out into the open means that manipulation ceases to exist. This invites a simple but inappropriate approach to combating manipulation: calling on potential victims to sharpen their ability to uncover manipulation, when a more appropriate approach would focus on regulating the perpetrator’s behaviour instead. Thus, even if the over- and under-inclusiveness of the hidden influence conception could be addressed, there are moral reasons to think differently about the conceptualisation of manipulation, according to the broad criterion of appropriateness.

The bypassing rationality criterion

Another influential idea is that manipulation can be identified by influences that bypass rationality (Sunstein, 2016 ; Wilkinson, 2013 ). Again, the notion of bypassing rationality must be specified further for the criterion to be useful (see Gorin, 2014a for discussion). Like the hidden influence conception, the bypassing rationality conception should help to distinguish manipulation from coercion and persuasion, and it correlates with many paradigmatic cases of manipulation. For example, it is manipulative to prompt a generative AI to guilt-trip a target into donating money to a charity because the influence targets the victim’s emotions and bypasses rational deliberation. Footnote 11

However, important questions about the ‘bypassing rationality’ conceptualisation remain. While it seems accurate enough—it accounts for many paradigmatic cases of manipulation—it has been subject to severe criticism for generating false negatives (Gorin, 2014a , 2014b ). Some forms of manipulation—such as peer pressure or charm—do not seem to involve bypassed rationality (Baron, 2003 ; Noggle, 2022 ). Hence, the bypassing conceptualisation of manipulation does not reliably identify all manipulation cases.

Moreover, many forms of tremendously important influences, such as testimony or influences that ‘activate heuristics’, bypass rationality but are not examples of manipulation. Hence, the bypassing criterion is also over-inclusive and generates false positives. For example, testimony bypasses rationality because it is often accepted at face value, given a positive evaluation of the source’s credibility. This is not a rational process in the sense of being conscious, yet testimony is unlikely to be a form of manipulation. Similarly, the availability or recognition heuristic allows people to make frugal decisions without conscious deliberation. It is rational to rely on the heuristic when there is a correlation between the criterion and recognition (Gigerenzer & Goldstein, 1996 ). This suggests that ‘activating’ the availability heuristic need not be manipulative, even though it bypasses rationality in the sense of bypassing conscious deliberation.

In summary, the bypassing rationality criterion suffers from over- and under-inclusivity. Since it also lacks the advantage of being relatively simple—insofar as bypassing rationality is more difficult to operationalise than hidden influence—it is of questionable relevance for the aim to design for non-manipulation.

Disjunctive conceptions of manipulation

The hidden influence and bypassing conceptions fail because manipulation is a varied and diverse phenomenon. Neither the hidden influence conception nor the bypassing rationality conception offers a way to capture all cases of manipulation and only cases of manipulation.

This led some to wonder whether there is any conceptualisation of manipulation at all that is satisfactory given a narrow criterion for appropriateness (cf. Coons & Weber, 2014a ; Klenk & Jongepier, 2022b ).

However, disjunctive conceptions for identifying manipulation may be a solution. For example, in their discussion of the ethical alignment of language agents, Kenton et al. ( 2021 ) reflect on the diversity of philosophical accounts of manipulation and opt for a disjunctive conception that combines several criteria that are discussed in the philosophical literature. Accordingly, they suggest that manipulation occurs by bypassing rationality, trickery, or pressure. Footnote 12 Recent work on manipulation in AI ethics reflects a similar broad-strokes approach by throwing together different criteria like ‘being hidden’, which correlate with many cases of manipulation, hoping to capture the phenomenon in the wide net of a disjunctive conceptualisation.

However, disjunctive conceptualisations of manipulation are problematic from a narrow criterion of appropriateness (see Noggle, 2020 , 2022 ). If a disjunctive conception incorporates criteria for manipulation that are over-inclusive on their own, then the resulting disjunctive conception risks being over-inclusive, too, viz. it wrongly classifies cases as manipulative. For example, including ‘hidden influence’ in a disjunctive conception risks inheriting the hidden influence criterion’s problems with false positives. The worry that stems from a narrow conception of appropriateness may be addressed by interpreting the disjunction as tracking a family resemblance, such that individual disjuncts are not taken as sufficient for classification. Footnote 13

However, disjunctive criteria still come with significant theoretical, practical, and ethical costs even if they address the problem of over-inclusivity. Footnote 14 Theoretically, they prevent us from identifying what the varied forms of manipulation have in common, for it is possible that there are simply different types of manipulation (cf. Coons & Weber, 2014a ; Noggle, 2022 ). This is particularly worrisome given a narrow criterion of appropriateness. From a design perspective, we would need to specify what type of manipulation we are designing against each time. This is a practical problem independent of our criterion of appropriateness. A measure that may work against manipulation understood as hidden influence (e.g. disclaimers) may fail to address manipulation tracked by other disjuncts, like bypassing reason, yet all forms will register as ‘manipulation’ on a disjunctive conception. As a result, ‘design for non-manipulation’ could be misleading given a disjunctive criterion, and it would always have to specify exactly what kind of manipulation is in scope. This illustrates that there are definite practical advantages to identifying a common factor behind all forms of manipulation because it would make ‘design for non-manipulation’ clear and informative.

Ethically, disjunctive criteria make a common, unified ethical and regulatory response to manipulative influence more difficult (see Coons & Weber, 2014a for discussion). If there are different reasons why a given influence qualifies as manipulation, there have to be different ethical responses to it (a phenomenon known as supervenience). This is more complicated and stands in stark contrast to the current way regulators and ethicists propose to deal with manipulation—namely, in a uniform fashion. Therefore, insofar as an appropriate conceptualisation helps us understand and grasp the phenomenon in question, a disjunctive criterion merely dilutes the picture. This is clearly a problem given the narrow criterion for appropriateness.

On a broad criterion of appropriateness, disjunctive conceptualisations of manipulation fare better. There are already practicable disjunctive conceptualisations of concepts other than manipulation in the AI ethics domain. For example, text classifiers of hate speech can be understood as ‘tracking’ a disjunctive criterion for hate speech, and a similar criterion may be envisioned for manipulation. Footnote 15 Ultimately, however, there are serious obstacles to a disjunctive conceptualisation. A text classifier of manipulation would likely have to take into account a host of contextual factors that are hard to identify and represent. More so, there is likely no inherent connection between objectively identifiable features of the influence, such as the kind of words used in a text output, and its manipulativeness. Manipulative influence does not, as it were, ‘wear its manipulativeness on the sleeve.’ For example, the sentence ‘you promised to give it to me!’ may be part of a manipulative guilt trip or part of a perfectly benign and non-manipulative conversation. It seems unlikely that we can reliably classify the influence without considering the motivation or genesis of the influence, such as the intention of the manipulator. This is why it is misleading to suggest, as Eliot ( 2023 ) does, that there are objectively identifiable manipulative patterns in texts that generative AI reproduces and that we could identify by looking at the generative AI output. Footnote 16 A text classifier as a practicable way to implement a disjunctive conceptualisation of manipulation would thus need to look at several currently unknown factors whose complexity needs to be considered in the evaluation of the approach.
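The point can be made concrete with a deliberately naive sketch of such a text classifier. The cue lists below are invented for illustration; the example shows how a classifier tracking surface features of the text both fires on a benign utterance and misses a cue-free manipulative one:

```python
# A deliberately naive disjunctive classifier for 'manipulative' text.
# The cue lists are hypothetical illustrations, not validated criteria.
GUILT_CUES = ("you promised", "after all i did")
PRESSURE_CUES = ("last chance", "act now")

def flags_as_manipulative(text: str) -> bool:
    """Return True if any disjunct's surface cue appears in the text."""
    t = text.lower()
    return any(cue in t for cue in GUILT_CUES + PRESSURE_CUES)

# The same sentence fires the classifier even in a benign context...
assert flags_as_manipulative("You promised to give it to me!")
# ...while a subtle guilt trip with no cue words passes unnoticed.
assert not flags_as_manipulative("I only mention how tired your mother looks lately.")
```

The discrepancy between surface cues and manipulativeness is precisely the obstacle noted above: without access to the motivation or genesis of the influence, classification based on the output alone is unreliable.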

In summary, disjunctive conceptualisations of manipulation are interesting but ultimately problematic on both narrow and broad criteria of appropriateness.

The trickery criterion

A more promising approach is to understand manipulation in terms of the influencer’s intentions rather than the features of the influence itself. One very influential account suggests that we can identify manipulation by the intention to trick the recipient by causing them to violate a norm of belief, desire, or emotion (Noggle, 2020 ). Typical cases of fraud, for example, are classified as manipulation in this model because they involve the attempt to trick the target into adopting a false belief or an inappropriate desire. For example, when a scammer uses text messages to pose as a relative and asks for money, they try to induce a mistaken belief in the target.

The trickery conceptualisation seems helpful in addressing many intentionally manipulative uses of generative AI. In particular, the trickery conception works well in cases where generative AI is used as a tool to facilitate manipulative influence. In their critical assessment of AI-driven influence operations, Goldstein et al. ( 2023 ) describe how generative AI can be used to scale up fraud and make it more economical. For example, phishing and other attempts to solicit information or resources from people can be aggravated by using generative AI to create persuasive phishing material, such as text messages or emails. The intent to trick the victim is clearly recognisable in such cases.

However, it is important to distinguish a different type of manipulation enabled by generative AI where the trickery criterion seems less appropriate. Footnote 17 In particular, the trickery conceptualisation produces false negatives in at least two relevant, though still less prevalent, use cases. Footnote 18

First, someone may unwittingly use generative AI to generate manipulative influences, although they cannot be said to intend to trick anyone. For example, Brignull ( 2023 ) describes how automated A/B testing allows users to run the test and automatically implement the ‘winning’ design. Someone using this feature may simply be interested in creating an effective design that drives sales or engagement on their website. Still, since the ‘winning’ design may include paradigmatic dark patterns, the user may be said to act manipulatively on account of their indifference or carelessness about the actual quality of their influence. The trickery account does not readily account for cases of unintended manipulation like this. Footnote 19
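The mechanism Brignull describes can be illustrated with a minimal simulation (the variant names and conversion rates are invented for illustration). The pipeline selects and deploys whichever design converts best, and is structurally blind to why a variant wins:

```python
import random

# Hypothetical variants: a neutral design and one using a dark pattern
# (e.g. a guilt-inducing opt-out), with invented true conversion rates.
VARIANTS = {
    "neutral": 0.05,
    "dark_pattern": 0.08,
}

def run_ab_test(n_visitors: int = 10_000, seed: int = 0) -> str:
    """Simulate an A/B test and auto-select the best-converting variant."""
    rng = random.Random(seed)
    conversions = {}
    for name, rate in VARIANTS.items():
        conversions[name] = sum(rng.random() < rate for _ in range(n_visitors))
    # The pipeline is indifferent to *why* a variant converts better.
    return max(conversions, key=conversions.get)

assert run_ab_test() == "dark_pattern"
```

No one in this loop intends to trick anyone; the dark pattern is deployed automatically because it is effective, which is exactly the kind of case the trickery account struggles to classify.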

Second, the trickery account’s focus on intentions leads to problems insofar as generative AI—rather than being used as a tool for manipulation—may itself be manipulative. AI systems are generally thought to lack intention, even if the debate about this has been renewed in light of advances in generative AI. We might speak as if generative AI manipulates (Nyholm, 2022 ), and we might apply the criterion to analyse whether the deployers or designers of a generative AI system wanted to manipulate. But when the system itself is thought to be the source of manipulation, with no intention or only opaque (quasi-)intention, then the trickery account will yield a false negative: it will not classify such cases as manipulation even though they seem like cases of manipulation. Footnote 20 Cappuccio et al. ( 2022 ) argue that new forms of manipulation driven by AI may be “emergent” and not reducible to e.g. the intentions of a human user. Pham et al. ( 2022 ) also stress the importance of considering emergent, non-intentional forms of manipulation that have their source in the automated behaviour of AI-driven applications. An account of manipulation that emphasises the intention to trick or lead astray will not allow us to identify unwitting manipulation that emerges out of the automated behaviour of the system. Although the immediate risk of manipulation by AI is most clearly seen in its use as a tool, the threat of emergent, non-intentional manipulation is clearly relevant and may even be much greater than the risk posed by humans that intentionally use generative AI for manipulative purposes. Hence, the trickery criterion needs to be critically examined. Footnote 21

In summary, the trickery conception faces the biggest challenge in contexts where generative AI threatens to aggravate existing concerns about manipulation by amplifying the scale of manipulative influence. In lieu of intentions and in lieu of overt features of manipulation, we cannot readily classify emergent forms of manipulation as manipulation on the trickery account.

The indifference criterion

A proposal that promises to overcome these problems is to identify manipulation with indifference to some ideal state rather than some malicious intention to do harm or induce a mistake (Klenk, 2020 , 2021c ). According to the indifference criterion for manipulation, manipulation is an influence that aims to be effective but is not explained by the aim to reveal reasons to the interlocutor (Klenk, 2021c , 2023 ). Footnote 22

For example, when a fraudster uses a generative AI application to produce a text message that seems to come from a child in distress to solicit money from a concerned parent, the fraudster’s concern will likely be an effective influence (i.e. successful fraud). At the same time, they are indifferent as to how they achieve their desired goal. In contrast to the trickery conceptualisation, which interprets the fraudster as intending to trick the victim, the indifference account instead emphasises the fraudster’s motivation to use whichever method works to reach their goal. Footnote 23 Similarly, when generative AI is used to create a political campaign ad that evokes the image of ‘foreign’ looking people and those images are chosen because they are thought to optimise some desired effect of the campaign (e.g. to ignite people’s xenophobia and racial hatred), then that use of the system counts as manipulative (cf. Mills, 1995 ).

Relatedly, the indifference view can be used to describe manipulation in the behaviour of automated systems. For example, when a recommender system is set to display content that effectively engages people’s attention, and it displays that content for that purpose rather than to reveal reasons to users e.g. about whom to vote for, what to buy, or what to believe, then the recommender system is used manipulatively. Moreover, it might be said that the system itself functions manipulatively (Klenk, 2020 , 2022b ). This has ramifications for possible future uses of generative AI applications. While current generative AI applications like ChatGPT are not yet capable of fine-tuning their output in pursuit of goals other than text-sequence prediction, attempts to fine-tune generative AI applications with objectives aimed at effective influence are possible future use cases (and already discussed e.g. by Matz et al., 2023 ). When such future generative AI applications optimise for effective influence on the user (e.g. to increase sales through a customer service application), then their manipulativeness may not come down to anyone’s intention (as discussed further below). Footnote 24

The indifference view thus identifies manipulation based on two criteria. First, it only looks at influence that is aimed at a particular goal. In that sense, and in line with most, if not all, of the literature on manipulation, the view excludes influence that is purely accidental from counting as manipulation (see Noggle, 2018 ). Footnote 25 Second, the indifference view then asks why a particular means of influence was chosen to achieve the relevant goal. Manipulative influence is characterised negatively in terms of a choice of a means of influence that is not explained by the aim to reveal reasons to the target of the influence. The manipulator is, in that sense, “careless” (Klenk, 2021c ) or indifferent to revealing reasons to their victims in their choice of the means of influence that they employ. Importantly, the indifference view can be interpreted non-intentionally by thinking about the function of a chosen means of influence. For example, the 'watch next video' choice that a recommender system offers to a user has a particular function, say to induce a target behavior in the user. The indifference view would classify this as manipulation insofar as 'revealing reasons' is not the function of that means of influence.
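The two-step structure of the indifference view can be summarised in a schematic decision procedure. This is only a sketch: the two boolean fields stand in for the substantive analyses of goal-directedness and reason-revealing that the view requires, and their names are invented here:

```python
from dataclasses import dataclass

@dataclass
class Influence:
    """Schematic description of an influence attempt (fields are placeholders)."""
    goal_directed: bool                    # step 1: aimed at a goal, not accidental
    explained_by_revealing_reasons: bool   # step 2: why this means was chosen

def is_manipulative(i: Influence) -> bool:
    # Purely accidental influence does not count as manipulation.
    if not i.goal_directed:
        return False
    # Manipulation: the choice of means is NOT explained by the aim
    # to reveal reasons to the target of the influence.
    return not i.explained_by_revealing_reasons

# A 'watch next video' prompt chosen purely to induce engagement:
assert is_manipulative(Influence(True, False))
# Advice whose means was chosen to reveal reasons to the interlocutor:
assert not is_manipulative(Influence(True, True))
# A purely accidental influence:
assert not is_manipulative(Influence(False, False))
```

The sketch makes vivid that the view needs no reference to intention: the second field can be read as a fact about the function of the chosen means, not about anyone's mental states.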

Notably, the indifference criterion can capture the emergent, unwitting manipulation suggested by the observation that generative AI systems act as “stochastic parrots” (Bender et al., 2021 ). This is one of the chief advantages of the indifference view over the trickery conceptualisation of manipulation. Generative AI systems can be understood as ‘bullshitters’ in Frankfurt’s sense of bullshitting as a type of speech act indifferent to truth (Frankfurt, 2005 ). Manipulation as a super-category of bullshit (Klenk, 2022a ) may not be restricted to malicious intent but more broadly connected to indifference to truth and inquiry. Footnote 26 This neatly characterises the ‘behaviour’ of generative AI systems. They are like “a trickster: they gobble data in astronomical quantities and regurgitate (what looks to us as) information. If we need the “tape” of their information, it is good to pay close attention to how it was produced, why and with what impact” (Floridi, 2023 ).

However, despite its advantages, the indifference criterion also raises some critical questions. For one thing, the ‘ideal state’ that manipulators are indifferent to ultimately needs to be specified in more detail to yield a more informative operationalisation. A promising route forward is to investigate what it takes to reveal reasons to interlocutors, as suggested by Klenk ( 2021b ). The vast literature on evidential relations and good deliberation in the philosophical debate promises a suitable starting point. Relatedly, the indifference criterion must be further specified and operationalised to identify manipulation in practice reliably. In particular, what is a reliable sign that indifference explains the choice of the given method of influence? An initial idea is to consider counterfactuals about what method or type of influence would have been chosen if the aim would have been to reveal reasons in a particular situation to a particular user, and to compare the counterfactual output with the system’s actual output. A discrepancy could be interpreted as an indicator of indifference and, thus, manipulation. Finally, there is a risk that generative AI systems come away as necessarily manipulative (cf. Klenk, 2020 ), which would not be at all helpful as it blurs the boundary between legitimate and illegitimate uses of those systems (a boundary that plausibly exists).
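The counterfactual idea could, very roughly, be operationalised by comparing a system's actual output with the output it would produce under an explicit reason-revealing objective. The similarity measure (crude lexical overlap) and the threshold below are hypothetical placeholders for whatever comparison an actual operationalisation would use:

```python
def jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two outputs (word-set overlap)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def indifference_indicator(actual: str, counterfactual: str,
                           threshold: float = 0.5) -> bool:
    """Flag possible manipulation when the actual output diverges strongly
    from what a reason-revealing system would have produced."""
    return jaccard(actual, counterfactual) < threshold

actual = "Act now or lose everything you have worked for!"
counterfactual = "Here are the relevant costs and benefits of each option."
assert indifference_indicator(actual, counterfactual)
```

On this sketch, identical actual and counterfactual outputs would never be flagged, which mirrors the intuition that an influence explained by the aim to reveal reasons is not manipulative; everything of substance, of course, hinges on how the counterfactual output is generated.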

In summary, the indifference view offers some notable advantages over alternative conceptualisations of manipulation, notably by allowing us to recognise manipulation in situations where intentions are hard, if not impossible, to detect and by avoiding the problems with false positives and false negatives that plague the hidden influence- and bypassing rationality criteria. Unlike disjunctive criteria, it also fares well on the narrow criterion of appropriateness. Like all current conceptualisations of manipulation, however, the indifference criterion relies on further clarification and operationalisation of key terms.

Hence, answering the question of how we can identify manipulation will still require us to answer how we can identify those criteria in practice . This is a relevant question because the most plausible criteria for manipulation (the trickery criterion and the indifference criterion) are linked to intentions, purposes or aims, which are not directly observable. This means that designers and regulators need to figure out ways to operationalise the criteria and develop methods to detect them in practice. The design for value approach makes room for that in recommending an iterative process where considerations in the conceptual stage are informed by the empirical- and design stages. Keeping in mind the open question about how to choose between a narrow and broad conception of appropriateness, there is room to let design considerations about which conception of manipulation is implementable have weight in the choice of conceptualisation.

The question about picking suitable conceptualisations of manipulation is closely linked to the question of how to ensure that generative AI systems are aligned with the criteria in such a way that they do not generate manipulative content. Following Gabriel ( 2020 ), the identification of reliable criteria for manipulation would answer one part of the alignment question. The question about implementation, however, would remain open. Plausibly, as van de Poel ( 2020 ) suggests, there would need to be prolonged attention to the system after the initial design stage.

Finally, the ethics of manipulation needs to be evaluated. Though it is generally agreed that manipulation is a morally dubious form of influence, there are open questions about whether or not it is always and categorically morally wrong, or whether manipulation could sometimes be permissible in light of other considerations (cf. Noggle, 2022 ). Support for the latter position comes from the observation that manipulation is a pervasive part of everyday life, and often not considered to be deeply problematic, as in some marketing or advertising tactics. This would support conceptualising manipulation as pro tanto wrong rather than categorically wrong.

Related to this, there is a question about whether and how situational and personal factors may moderate the ethical status of manipulation. Designers and regulators would need to consider if and how those factors have an impact on the moral status of manipulative influence. For example, it may be that the positive impact of some public health communication driven by manipulative generative AI may outweigh the negative value that accrues from the manipulative nature of the influence. Relatedly, some users may willingly adopt personal health assistants that use effective but manipulative influence tactics. In both cases, it is an open question whether these trade-offs are reasonable and ethically legitimate.

Empirical stage

One of the core commitments of design for value approaches is the commitment to involve stakeholder perspectives in the design process (Buijsman et al., forthcoming). This usually involves a process of weighing up the conceptualisation of a value developed during the conceptual stage with input gleaned from stakeholders. Looking beyond stakeholders’ input toward empirical input more generally, we should consider empirical data that bears on the question of an appropriate conceptualisation of manipulation. At least the following questions specifically related to manipulation need to be addressed in the empirical part of a design for value project:

What do relevant stakeholders consider as criteria for manipulation?

How should those empirical findings impact conceptual findings?

How do stakeholders view the ethical status of manipulation?

The design of non-manipulative generative AI should be informed by empirical findings about criteria for manipulation. But how do people, in fact, distinguish between different forms of influence? The empirical investigation of manipulation is still in its infancy. The studies by Osman and Bechlivanidis are the only ones that explicitly address folk-conceptions of manipulation (Osman, 2020 ; Osman & Bechlivanidis, 2021 , 2022 , 2023 ). An important finding is that judgements about the impact of manipulation and its ethical seriousness differ by context. These findings are valuable starting points. Going forward, it would be interesting to see how the users of a given generative AI application think about manipulation, in line with the aim of design for values approaches to consider especially the views of stakeholders in the design process. An important question is whether and how the views of different groups of stakeholders differ regarding criteria for manipulation. For instance, are there political or personal factors that moderate how people distinguish manipulative from non-manipulative influence? Next to quantitative research paradigms familiar from the social sciences, researchers and regulators can draw on established methods for design for values approaches, such as focus groups or participatory design, to address these questions.

More generally, there is a need for further studies of the folk concept of manipulation. While such findings do not settle which conceptualisations of manipulation are appropriate, they will serve as valuable reference points. Are there (aspects of) folk conceptualisations not covered by any of the current theoretical conceptualisations? Which theoretical conceptualisations (most closely) match the ordinary conceptualisation of manipulation? What factors influence how people think about manipulation? Are there personal or situational factors that influence whether or not people make reliable judgements about manipulation and its (dis-)value? Answering these questions is relevant both from a narrow and broad criterion for appropriateness, since the answers may bear on the accuracy of the conceptualisation or its moral appropriateness.

Gathering empirical insights into judgements about manipulation will raise the question of how those findings should be combined with the conceptual findings of the previous step. Should empirical findings lead researchers to revise reliable criteria for manipulation? If so, to what extent? Presumably, manipulation is a phenomenon that is socially constructed in the limited sense that it depends on people and social structures to exist (Hacking, 1999 ), but it is an open question whether its criteria are entirely dependent on what people think of it. There are, however, strong reasons to suspect that the criteria for manipulation are not entirely up for grabs: they are not entirely determined by what people think of manipulation. To illustrate, consider the case of a generative AI system that manages to influence users’ views on what counts as manipulation. Users may then judge that bypassing their reason, influencing them covertly, and trying to induce mistakes in them is a legitimate form of persuasion, rather than manipulation. Designers and regulators should not take that result to revise their theory of manipulation entirely. How, precisely, the revision should work, however, is an intricate, and open question. The literature on the significance of empirical, experimental philosophy may offer relevant pointers on this question (Knobe & Nichols, 2017 ), as well as the literature on applying theories in the context of bioethics (Beauchamp & Childress, 2019 ).

Quite independently of findings about conceptualisations of manipulation, empirical findings can help us understand more about the value of different conceptualisations. For example, empirical findings are clearly relevant in determining the impact of manipulation. Though manipulation seems like a prima facie problematic type of influence, as discussed in the previous section, it matters for the ethical assessment whether it has particularly pernicious consequences. So far, however, we know very little about the impact of manipulation. There is a widespread assumption that manipulation is antithetical to autonomy (Susser et al., 2019b ), but that view has yet to be corroborated from an empirical perspective (Klenk & Hancock, 2019 ), and we already know that people’s judgements about manipulation’s impact are more nuanced (Osman & Bechlivanidis, 2021 ). We also know that generative AI can have an impact on people’s moral judgements (Krügel et al., 2023 ), but it is unclear whether and why the influence in question qualifies as manipulation or not.

Finally, considerations about the need to consult stakeholders and to reflect on how their views should impact criteria for manipulation will also apply to the empirical investigation of the ethics of manipulation. Can manipulation be ‘made’ permissible insofar as people consent to it? Do people consent to manipulation? If so, under which circumstances and in what contexts? An important question here is how to align empirical findings which suggest more lenient takes on the ethics of manipulation with the strong regulatory aversion against manipulative influence that already applies to generative AI.

Design stage

Insights from the conceptual and the empirical stage will eventually have to be translated and transformed into concrete design requirements for generative AI applications. Given the present focus on appropriate conceptualisations of manipulation, these questions are out of scope for this paper. Nonetheless, at least two broad questions that crop up at the design stage have a bearing on appropriate conceptualisations.

For one, it might be thought that the conceptualisation question could be settled by design. Specifically, there may be ways to address the conceptualisation question through different alignment approaches in AI, which ultimately bottom-out in a machine learning approach. Footnote 27 Kenton et al. ( 2021 ) discuss the option to include human preferences in the training of the generative AI system (Christiano, 2017). Roughly, this means that the output of the system is fine-tuned in light of user feedback. For example, human labellers classify sample outputs of the system as (more or less) manipulative to fine-tune the system with this feedback. More precisely, Ouyang et al. ( 2022 ) describe a process for alignment that fine-tunes the output of a Large Language Model in light of human-generated output (which is used in a supervised learning model) and human rankings of system-generated output (to train a reward model, which then fine-tunes the supervised baseline using reinforcement learning). Ouyang et al. ( 2022 ) demonstrate that the resulting model shows improvements over the outputs of GPT-3 (which are not fine-tuned by the described process) in several ethically significant domains like the toxicity and truthfulness of the output.
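The ranking step of such a pipeline can be sketched in miniature. Below, pairwise human preferences train a Bradley-Terry-style reward model over a single toy feature (a hypothetical 'pressure-word' count, via crude substring matching), which is then used to rerank candidate outputs. This compresses the supervised/reward/reinforcement pipeline described by Ouyang et al. into one illustrative step and is in no way a faithful reproduction of it:

```python
import math

PRESSURE_WORDS = ("must", "now", "guilt", "last")  # hypothetical feature

def feature(text: str) -> float:
    # Crude substring-based count of pressure cues in the text.
    return float(sum(w in text.lower() for w in PRESSURE_WORDS))

# Human rankings as (preferred, dispreferred) pairs; labellers prefer
# the less pressuring phrasing in each pair.
pairs = [
    ("Consider donating if you can.", "You must donate now!"),
    ("Here are the facts.", "This is your last chance, act now."),
]

# Fit a one-parameter Bradley-Terry reward r(x) = w * feature(x)
# by gradient ascent on the preference log-likelihood.
w = 0.0
for _ in range(200):
    grad = 0.0
    for good, bad in pairs:
        diff = feature(good) - feature(bad)
        p = 1.0 / (1.0 + math.exp(-w * diff))  # P(preferred ranked first)
        grad += (1.0 - p) * diff
    w += 0.5 * grad

def rerank(candidates):
    """Return candidates ordered by learned reward, best first."""
    return sorted(candidates, key=lambda c: w * feature(c), reverse=True)

best = rerank(["Act now, you must!", "Here is the information you asked for."])[0]
assert best == "Here is the information you asked for."
```

Even this toy version makes the dependence on labellers visible: the learned reward is only as good as the human rankings, which is exactly the concern about labeller reliability raised below.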

Naturally, such an approach depends on the ability of human labellers to spot manipulation, and it raises questions about combining users’ perspectives with theoretical insights discussed in the previous section. Kenton et al. ( 2021 ) do not discuss that users have a questionable track-record of discovering manipulation. Ouyang et al. ( 2022 ) are sensitive to the issue that the success of their alignment approach depends on the quality of the feedback provided by human labellers, and they suggest that the quality of human feedback may depend on a variety of personal and situational factors. In this context, the philosophical and ‘ folk-psychological ’ disagreement about conceptualisations of manipulation must be stressed more. At least, the current empirical findings on manipulation suggest that human judgements about manipulation differ, sometimes quite strikingly, from the conceptualisations defended in the philosophical literature. Therefore, an important question in the design stage concerns not just the technical implementation of human feedback for fine-tuning, but the empirical- and theoretical investigation of the reliability of user judgements in learning from human feedback.

There is an important sense in which design requirements can only be judged for their appropriateness after testing and experimenting. An early chatbot, Tay by Microsoft, illustrated how an initially well-functioning system went off the rails over time by updating its behaviour in response to user feedback. So, manipulation in generative AI may only arise after some time, after deployment, and design needs to take measures to deal with that risk. For that reason, van de Poel ( 2020 ) advocates for prolonged monitoring, in addition to considerations about appropriately aligning the model.

Moreover, the broad criterion of appropriateness may allow us to consider pragmatic considerations as relevant for choosing a conceptualisation of manipulation. On the one hand, considerations about the technology’s capability may prompt us to adjust the conceptualisation of manipulation. Consider that most current cases of manipulation with generative AI involve a human in the loop that uses generative AI to generate manipulative content (see Goldstein et al., 2023 ). In such cases, design against manipulation may rely on a conceptualisation of manipulation that refers to intentions, since we can ask about the intentions of the human in the loop. But this may change. While the leading current generative AI applications produce output that essentially predicts the next text token on a webpage from the internet (Ouyang et al., 2022 ), future applications may fine-tune that output with e.g. the aim to improve persuasiveness, possibly by incorporating information about personal attributes of the human user. Matz et al. ( 2023 ) already demonstrate that GPT-3 can be prompted to produce personalised and more persuasive outputs when the ‘human prompter’ is able to match prompt and target. Footnote 28 Future applications will likely attempt to automate the process of obtaining persuasion profiles of targets and producing persuasive prompts using generative AI, thus removing the human from the loop. Given the aim to design for non-manipulation also in cases like this, and doubts about the intentionality of generative AI, there is reason to favour a conceptualisation that makes no reference to intention. Pepp et al. ( 2022 ) already discuss this option in some detail. Footnote 29 In this way, concrete considerations about the practical use of generative AI applications that crop up at the design stage might have a bearing on the conceptual stage of the design process.
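The prompt-matching idea can be illustrated schematically. The trait labels and templates below are invented for illustration and do not reproduce the materials of Matz et al.; the sketch merely shows how matching a prompt to an inferred persuasion profile can be automated, removing the human from the loop:

```python
# Hypothetical persuasion-profile matching: the prompt given to a
# generative AI system is adapted to an inferred trait of the target.
TEMPLATES = {
    "extravert": "Write an upbeat, social ad for {product}.",
    "introvert": "Write a calm, reflective ad for {product}.",
}

def build_prompt(profile: str, product: str) -> str:
    """Select a prompt template matched to the target's inferred trait."""
    return TEMPLATES[profile].format(product=product)

assert "calm" in build_prompt("introvert", "a reading lamp")
assert "upbeat" in build_prompt("extravert", "a reading lamp")
```

Once the profile inference and the template selection are both automated, no human intention stands behind any particular influence, which is precisely why a non-intentional conceptualisation becomes attractive.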

On the other hand, there may be moral reasons to pick a conceptualisation that is applicable to the technology if and insofar as such a conceptualisation serves a worthy moral goal. Calling some applications that use generative AI manipulative may, for instance, lead to desirable consequences because those applications come under public scrutiny or fall within the scope of regulation. All this depends, of course, on the appropriateness of the broad criterion in the first place.

In summary, even though questions at the design stage are not of primary concern for picking a conceptualisation of manipulation, technical and moral aspects may yet make the design stage relevant, given a broad criterion of appropriateness for choosing conceptualisations.

Generative AI brings enormous promise and peril. It may enable effective, automated influence at scale. This can be used for good, for instance in meaningful and ethical communication or in the design of digital health assistants. But it also harbours the risks of manipulation. This article introduced a research agenda focused on designing generative AI systems for non-manipulation to make good on its promise and avoid the peril. It demonstrated that if we want to design for non-manipulation, which everyone interested in responsible and trustworthy AI should be concerned with, we must begin with an appropriate conceptualisation of the phenomenon.

Apart from drawing attention to pertinent research questions concerning manipulation and generative AI, the main upshot of the article is a reasonable, brief overview of not only the importance of choosing the right conceptualisation of manipulation but also some of the key considerations for doing so. Clearly, both the general point about how to choose appropriate conceptualisations, and the specific points about different possible ways to conceptualise manipulation are mere beginnings. Key questions such as ‘how should we pick conceptualisations?’ and intricate points from the debate about manipulation remain beyond the scope of this paper. In particular, questions about the value of a given conceptualisation vis-à-vis its practical implementability in current alignment approaches are crucial, and difficult to answer.

Each dimension—conceptual, empirical, and design—should, in future work, be further elaborated on to outline the research questions in more detail and to critically consider different attempts at answering them. Given the aim to direct the debate in a fruitful direction, however, these omissions are deemed justified. In light of existing legal and regulatory measures against manipulation, and moral concerns that are aggravated by generative AI, the questions outlined in this article should contribute to some progress toward the responsible innovation of generative AI.

Data and materials availability

All data is available in the manuscript.

See, for example, European Parliamentary Research Services ( 2020 ), IEEE ( 2019 ). In line with the relevant literature, I am using the term ‘value’ quite broadly here to mean something like ‘a phenomenon of positive normative significance.’ In that sense manipulation is not a value but a dis-value, a phenomenon of negative normative significance. This loose way of talking seems appropriate in this context, and it should not be interpreted as leaning on more nuanced, axiological discussions.

A preliminary question for any design for value project concerns the kind of values that should be designed for, i.e. an enumeration of the target values. The answer to this question is not—in general—trivial or obvious. Which values matter in which context is a complex ethical and societal question. But in our case, this question has in part been answered by the moral and legal case for attending to manipulation, to which I already pointed in the introduction. Naturally, this does not mean, however, that manipulation should be the exclusive or even dominant focus in pursuing responsible generative AI. See Weidinger et al. ( 2022 ) for a taxonomy of other risks, many of which are perfectly general worries about AI, such as worries about privacy, fairness, and explainability.

In what follows, I use ‘conceptualisation’ and ‘conception’ interchangeably. I use ‘conceptualisation’ rather than ‘concept’ to emphasise the sense in which we (artificially) construct conceptualisations, e.g. for scientific use, and to demarcate the discussion from concepts as the building blocks of thought.

A conceptualisation or conception of a concept can be thought of as a specification or description of the concept’s content. Concepts have a content and an extension. A concept’s content is a description of what the concept is about, while its extension refers to the things the concept applies to. For example, the content of the concept ‘bachelor’ is something like ‘an unmarried man’ (the concept is ‘about’ unmarried men), while its extension contains all unmarried men. Some questions are about content (what the concept is about), while others concern extension.

Implicitly, this point seems to be acknowledged, for example, in much of the (applied) ethical debate on manipulation in current online technologies Klenk and Jongepier ( 2022a ) or nudging Wilkinson ( 2013 ), where researchers often first aim to arrive at an appropriate conceptualisation of manipulation before commencing to analyse specific cases through that lens. This mirrors a kind of mid-level principle approach to bioethics, cf. Flynn ( 2022 ).

This view is closely linked to the method of conceptual analysis in philosophy, see Klenk and Jongepier ( 2022b , pp. 16–19) for discussion.

This is still a controversially debated issue in the debate about conceptual engineering and conceptual ethics. Though tentative proposals have been made, I am sceptical that anything approaching a theory of conceptualisation choice is currently available. A critical open question is when we ought to consider a given conceptualisation defective.

A design focus implies at least one important assumption and limitation. It assumes that there are people motivated to design for non-manipulation. I need not assume that people are morally motivated, however. Existing and forthcoming regulation on manipulation should provide some purely pragmatic impetus to seek ways to design non-manipulative generative AI. What this leaves out is the question of how to regulate or control for non-manipulative AI (both of which can be approached from a design perspective, of course). Note also that I will not discuss how to balance the goal of non-manipulation with other values. An important aspect of design approaches to values in technology is that they will have to deal with conflicting values, van de Poel ( 2015 ). For instance, the design of an engine strikes a balance between cost-effectiveness and sustainability, and content moderation at a social media platform realises the values of its decision makers. Applications based on generative AI will likewise have to strike a legitimate balance between the promise of effective influence and the peril of manipulation. There will be many other trade-offs and conflicts that a full-blown design for value approach to generative AI will also have to consider (e.g. concerning sustainability and resource-use). The focus of this research agenda, however, will be firmly on questions about manipulation, thus leaving open the further question of balancing concerns about manipulation appropriately with other goals and values.

The hidden influence criterion may also be attractive because it lays a connection to the burgeoning debate about AI deception. However, it is crucial to recognise that manipulation and deception are not the same thing; see Cohen ( 2023 ) for a recent assessment.

Hidden influence may not even provide a necessary criterion for manipulation, as many researchers have pointed out, see Klenk ( 2021c ).

For simplicity, I interpret ‘bypassing rationality’ in the psychological sense of ‘bypassing conscious deliberation’ following common understanding. Appealing to emotions is a paradigmatic example of doing this. There are more elaborate interpretations, discussed by Gorin ( 2014a ), that suffer from similar problems to the ones I discuss here.

Pressure is another criterion of manipulation that is sometimes discussed in the philosophical literature, cf. Noggle ( 2022 ) .

Thanks to an anonymous referee for suggesting this point.

Some of these problems arise given a narrow criterion of appropriateness. Though they do not undermine a positive evaluation of disjunctive criteria in general, or text-classifiers as a practicable way to implement disjunctive criteria, they matter for the appropriateness of a conceptualisation. Thanks to an anonymous referee for prompting me to clarify this point.

Thanks to an anonymous referee for suggesting this helpful example.

This is explained, for example, by the failure of the bypassing rationality criterion. Since bypassing rationality understood as appealing to emotion is neither necessary nor sufficient for manipulation, it is unlikely that ‘emotional’ words or text patterns are reliable indicators of manipulative influence.
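The failure mode described here can be made concrete with a deliberately naive sketch. The word list and the example sentences below are invented for illustration: a detector keyed to emotional vocabulary both misses manipulation couched in neutral language (a false negative) and flags sincere emotional speech (a false positive).

```python
# Naive 'bypassing rationality' detector: flags text containing emotion-laden
# words. The word list and both examples are illustrative assumptions only.

EMOTION_WORDS = {"fear", "afraid", "outrage", "shame", "love", "terrified"}

def flags_as_manipulative(text: str) -> bool:
    """Return True if the text contains any emotion-laden word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not EMOTION_WORDS.isdisjoint(words)

# False negative: manipulative framing in perfectly neutral language.
neutral_manipulation = "Nine out of ten people like you have already switched."
# False positive: sincere, non-manipulative emotional speech.
sincere_emotion = "I love you and I was terrified when you were ill."

print(flags_as_manipulative(neutral_manipulation))  # manipulation missed
print(flags_as_manipulative(sincere_emotion))       # sincerity flagged
```

The sketch only dramatises the footnote's point: because appealing to emotion is neither necessary nor sufficient for manipulation, surface-level emotional vocabulary cannot serve as a reliable classification signal.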

Thanks to an anonymous referee for prompting me to clarify the non-intentional use-case.

Noggle ( 2020 ) revises the account to focus on the intention to induce a mistake in the victim, mainly to accommodate a problem with false negatives in the trickery account regarding cases of pressure manipulation. I’d like to draw attention to two different types of cases that might lead to false negatives, and in these cases the observations about the trickery account apply to the revised mistake account, too. Moreover, there are more general considerations about the mistake criterion in terms of false negatives; the criterion might thus be under-inclusive, as discussed in Klenk ( 2021b ).

Importantly, the influence is not accidental, since the manipulator did aim to have a particular effect on the target audience. How that effect was achieved, however, was unintended.

The result of being manipulated may also come apart from the intention to manipulate, which could be identified independently Klenk ( 2022b ).

The problem is related to but wider than the problem of AI ‘hallucinations’ where the AI system presents false information as facts. The problem is wider because, as stated above, manipulation cannot be reduced to misleading or false communication. I thank an anonymous referee for pointing out this connection.

Ideas pertinent to the indifference view have also been defended by Gorin ( 2014b ), Mills ( 1995 ), and Baron ( 2014 ). The account is more systematically developed by Klenk, who first uses the term ‘carelessness’ (2021), whereas Klenk ( 2022a , 2022b ) introduces the more appropriate term ‘indifference’ to avoid the misleading impression that manipulation is, overall, lazy or not planned out. Indeed, manipulation is often carefully crafted influence in its aim to be effective, but careless or indifferent only to the aim of revealing reasons to others.

Which may, counterfactually, be a different method than to trick the victim.

Thanks to an anonymous referee for prompting me to clarify the relevance of future use cases.

Importantly, a goal can but need not be understood in intentional terms. Animals can be said to have goals, as do automated systems, or even simple artefacts based on their use plan, van de Poel ( 2020 ), or affordances, Klenk ( 2021a ). In short, goals can be understood in functional terms.

See Klenk ( 2020 ) for a discussion of manipulation in relation to bullshit.

Thanks to an anonymous referee for suggesting this formulation, and for prompting me to clarify this point.

To wit, the human prompter is able to assess the persuasion profile of the target, subsequently prompt GPT-3 to produce an output that matches that profile, and then present the target with that output.

The emerging work on conceptual engineering in the ethics of technology offers further examples concerning, e.g., the notion of responsibility Veluwenkamp and van den Hoven ( 2023 ), Himmelreich and Köhler ( 2022 ).

Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., et al. (2021). A general language assistant as a laboratory for alignment . Retrieved from http://arxiv.org/pdf/2112.00861.pdf

Barnhill, A. (2014). What is manipulation? In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 51–72). Oxford University Press.

Barnhill, A. (2022). How philosophy might contribute to the practical ethics of online manipulation. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 49–71). Routledge.

Baron, M. (2003). Manipulativeness. Proceedings and Addresses of the American Philosophical Association, 77 , 37. https://doi.org/10.2307/3219740

Baron, M. (2014). The mens rea and moral status of manipulation. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 98–109). Oxford University Press.

Beauchamp, T. L. (1984). Manipulative advertising. Business and Professional Ethics Journal, 3 , 1–22.

Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics . Oxford University Press.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Brignull, H. (2023). Deceptive patterns: Exposing the tricks tech companies use to control you . Harry Brignull.

Buijsman, S., Klenk, M., & van den Hoven, J. (forthcoming). Ethics of AI. In N. Smuha (Ed.), Cambridge handbook on the law, ethics and policy of artificial intelligence . Cambridge University Press.

Cappuccio, M. L., Sandis, C., & Wyatt, A. (2022). Online manipulation and agential risk. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 72–90). Routledge.

Cohen, S. (2023). Are all deceptions manipulative or all manipulations deceptive? Journal of Ethics and Social Philosophy . https://doi.org/10.26556/jesp.v25i2.1998

European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (ARTIFICIAL INTELLIGENCE ACT) and amending certain Union legislative acts . European Commission.

European Commission. (forthcoming). Meaningful and ethical communications . European Commission.

Coons, C., & Weber, M. (2014a). Introduction. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice. Oxford University Press.

Coons, C., & Weber, M. (Eds.). (2014b). Manipulation: Theory and practice . Oxford University Press.

The Economist (2023, April 22). The Generation Game. The Economist , pp. 65–66. Retrieved from https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work

EGE. (2023). Democracy in the digital age . EGE.

Eliot, L. (2023, March 1). Generative AI ChatGPT as masterful manipulator of humans, worrying AI ethics and AI law. Forbes. Retrieved December 20, 2023, from https://www.forbes.com/sites/lanceeliot/2023/03/01/generative-ai-chatgpt-as-masterful-manipulator-of-humans-worrying-ai-ethics-and-ai-law/

European Parliamentary Research Services. (2020). European framework on ethical aspects of artificial intelligence, robotics and related technologies: European added value assessment . European Parliamentary Research Services.

European Commission, Directorate-General for Justice and Consumers, Lupiáñez-Villanueva, F., Boluda, A., Bogliacino, F., Liva, G., Lechardoy, L., & Rodríguez de las Heras Ballell, T. (2022). Behavioural study on unfair commercial practices in the digital environment: Dark patterns and manipulative personalisation . Final report.

Faraoni, S. (2023). Persuasive technology and computational manipulation: Hypernudging out of mental self-determination. Frontiers in Artificial Intelligence, 6 , 1216340. https://doi.org/10.3389/frai.2023.1216340

Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. Philosophy and Technology, 36 , 1–7. https://doi.org/10.1007/s13347-023-00621-y

Flynn, J. (2022). Theory and bioethics. In E. N. Zalta & U. Nodelman (Eds.), Stanford encyclopedia of philosophy: Winter 2022. Stanford University.

Frankfurt, H. G. (2005). On bullshit . Princeton University Press.

Friedman, B., & Hendry, D. (2019). Value sensitive design: Shaping technology with moral imagination/Batya Friedman and David G. Hendry . The MIT Press.

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30 (3), 411–437.

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103 , 650–669. https://doi.org/10.1037/0033-295x.103.4.650

Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations . https://doi.org/10.48550/arXiv.2301.04246

Gorin, M. (2014a). Do manipulators always threaten rationality? American Philosophical Quarterly, 51 (1), 51–61.

Gorin, M. (2014b). Towards a theory of interpersonal manipulation. In C. Coons & M. Weber (Eds.), Manipulation: Theory and practice (pp. 73–97). Oxford University Press.

Hacking, I. (1999). The social construction of what? (8th ed.). Harvard University Press.

Himmelreich, J., & Köhler, S. (2022). Responsible AI through conceptual engineering. Philosophy and Technology, 35 , 1–30. https://doi.org/10.1007/s13347-022-00542-2

IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems . Retrieved from https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf

Kahneman, D. (2012). Thinking, fast and slow Penguin psychology (1st ed.). Penguin.

Kenton, Z., Everitt, T., Weidinger, L., Gabriel, I., Mikulik, V., & Irving, G. (2021). Alignment of language agents . Retrieved from https://arxiv.org/pdf/2103.14659

Klenk, M., & Hancock, J. (2019). Autonomy and online manipulation. Internet Policy Review .

Klenk, M. (2020). Digital well-being and manipulation online. In C. Burr & L. Floridi (Eds.), Ethics of digital well-being: A multidisciplinary perspective (pp. 81–100). Springer.

Klenk, M. (2021a). How do technological artefacts embody moral values? Philosophy and Technology, 34 , 525–544. https://doi.org/10.1007/s13347-020-00401-y

Klenk, M. (2021b). Interpersonal manipulation. SSRN Electronic Journal . https://doi.org/10.2139/ssrn.3859178

Klenk, M. (2021c). Manipulation (Online): Sometimes hidden, always careless. Review of Social Economy . https://doi.org/10.1080/00346764.2021.1894350

Klenk, M. (2022a). Manipulation as indifference to inquiry. SSRN Electronic Journal . https://doi.org/10.2139/ssrn.3859178

Klenk, M. (2022b). Manipulation, injustice, and technology. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 108–131). Routledge.

Klenk, M. (2023). Algorithmic transparency and manipulation. Philosophy and Technology, 36 , 1–20. https://doi.org/10.1007/s13347-023-00678-9

Klenk, M., & Jongepier, F. (2022a). Introduction and overview of chapters. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 1–12). Routledge.

Klenk, M., & Jongepier, F. (2022b). Manipulation online: Charting the field. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 15–48). Routledge.

Knobe, J., & Nichols, S. (2017). Experimental philosophy. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy: Winter 2017. Stanford University.

Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13 , 4569. https://doi.org/10.1038/s41598-023-31341-0

Matz, S., Teeny, J., Vaid, S. S., Harari, G. M., & Cerf, M. (2023). The potential of generative AI for personalized persuasion at scale. https://doi.org/10.31234/osf.io/rn97c

Mills, C. (1995). Politics and manipulation. Social Theory and Practice, 21 (1), 97–112.

Noggle, R. (1996). Manipulative actions: A conceptual and moral analysis. American Philosophical Quarterly, 33 (1), 43–55.

Noggle, R. (2018). The ethics of manipulation. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy: Summer 2018 (2018th ed.). Stanford University.

Noggle, R. (2020). Pressure, Trickery, and a unified account of manipulation. American Philosophical Quarterly, 57 , 241–252. https://doi.org/10.2307/48574436

Noggle, R. (2022). The ethics of manipulation. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy: Summer 2022 (2022nd ed.). Stanford University.

Nyholm, S. (2022). Technological manipulation and threats to meaning in life. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation. Routledge.

Osman, M. (2020). Overstepping the boundaries of free choice: Folk beliefs on free will and determinism in real world contexts. Consciousness and Cognition, 77 , 102860. https://doi.org/10.1016/j.concog.2019.102860

Osman, M., & Bechlivanidis, C. (2021). Public perceptions of manipulations on behavior outside of awareness. Psychology of Consciousness: Theory, Research, and Practice . https://doi.org/10.1037/cns0000308

Osman, M., & Bechlivanidis, C. (2022). Impact of personalizing experiences of manipulation outside of awareness on autonomy. Psychology of Consciousness: Theory, Research, and Practice . https://doi.org/10.1037/cns0000343

Osman, M., & Bechlivanidis, C. (2023). Folk beliefs about where manipulation outside of awareness occurs, and how much awareness and free choice is still maintained. Psychology of Consciousness: Theory, Research, and Practice . https://doi.org/10.1037/cns0000379

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., … Lowe, R. (2022). Training language models to follow instructions with human feedback . Retrieved from http://arxiv.org/pdf/2203.02155.pdf

Pepp, J., Sterken, R., McKeever, M., & Michaelson, E. (2022). Manipulative machines. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation (pp. 91–107). Routledge.

Pham, A., Rubel, A., & Castro, C. (2022). Social media, emergent manipulation, and political legitimacy. In M. Klenk & F. Jongepier (Eds.), The philosophy of online manipulation. Routledge.

Sunstein, C. R. (2016). The ethics of influence: Government in the age of behavioral science . Cambridge University Press.

Susser, D., Roessler, B., & Nissenbaum, H. (2019a). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4 (1), 1–45.

Susser, D., Roessler, B., & Nissenbaum, H. (2019b). Technology, autonomy, and manipulation. Internet Policy Review, 8 , 1–22. https://doi.org/10.14763/2019.2.1410

Tremblay, M. S., Colley, R. C., Saunders, T. J., Healy, G. N., & Owen, N. (2010). Physiological and health implications of a sedentary lifestyle. Applied Physiology, Nutrition, and Metabolism, 35 , 725–740. https://doi.org/10.1139/H10-079

van de Poel, I. (2013). Translating values into design requirements. Philosophy and engineering: Reflections on practice, principles and process (pp. 253–266). Springer.

van de Poel, I. (2015). Conflicting values in design for values. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design: Sources, theory, values and application domains (pp. 89–116). Springer.

van de Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30 , 385–409. https://doi.org/10.1007/s11023-020-09537-4

van den Hoven, J., Vermaas, P. E., & van de Poel, I. (2015). Design for values: An introduction. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design: Sources, theory, values and application domains (pp. 1–7). Springer.

Veluwenkamp, H., & van den Hoven, J. (2023). Design for values and conceptual engineering. Ethics and Information Technology, 25 , 1–12. https://doi.org/10.1007/s10676-022-09675-6

Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., et al. (2022). Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22) (pp. 214–229). ACM. https://doi.org/10.1145/3531146.3533088

Wilkinson, T. M. (2013). Nudging and manipulation. Political Studies, 61 , 341–355. https://doi.org/10.1111/j.1467-9248.2012.00974.x

Acknowledgements

I am thankful to Caroline Figueiredo, and to two anonymous referees for helpful feedback.

The author’s work on this paper has been part of the project Ethics of Socially Disruptive Technologies that has received funding from the Dutch Organisation of Scientific Research.

Author information

Authors and affiliations.

Department of Values, Technology and Innovation, TU Delft, Jaffalaan 5, 2628 BX, Delft, The Netherlands

Michael Klenk

Contributions

N/A (single author).

Corresponding author

Correspondence to Michael Klenk .

Ethics declarations

Conflict of interest.

No competing interests.

Ethical approval and consent to participate

Consent for publication.

Consent for publication is given.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Klenk, M. Ethics of generative AI and manipulation: a design-oriented research agenda. Ethics Inf Technol 26 , 9 (2024). https://doi.org/10.1007/s10676-024-09745-x

Download citation

Accepted : 09 January 2024

Published : 03 February 2024

DOI : https://doi.org/10.1007/s10676-024-09745-x

  • Generative AI
  • Large Language Models (LLMs)
  • Manipulation
  • Value sensitive design

A Model of Behavioral Manipulation

We build a model of online behavioral manipulation driven by AI advances. A platform dynamically offers one of n products to a user who slowly learns product quality. User learning depends on a product’s “glossiness,’ which captures attributes that make products appear more attractive than they are. AI tools enable platforms to learn glossiness and engage in behavioral manipulation. We establish that AI benefits consumers when glossiness is short-lived. In contrast, when glossiness is long-lived, users suffer because of behavioral manipulation. Finally, as the number of products increases, the platform can intensify behavioral manipulation by presenting more low-quality, glossy products.
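The core mechanism of the abstract can be caricatured in a toy simulation. The dynamics below are invented for illustration and are not the paper's formal model: perceived quality starts at true quality plus a "gloss" term, and the gloss decays geometrically as the user learns, so the decay rate stands in for how short- or long-lived glossiness is.

```python
# Toy caricature of the glossiness mechanism (not the paper's formal model):
# perceived quality = true quality + gloss, with the gloss decaying toward
# zero each round as the user learns about the product.

def perceived_quality(true_quality: float, gloss: float,
                      decay: float, period: int) -> float:
    """Perceived quality after `period` rounds of learning."""
    return true_quality + gloss * (1 - decay) ** period

true_q, gloss = 0.3, 0.5   # a low-quality but glossy product
alternative_q = 0.6        # an honest alternative the user could pick

def rounds_until_switch(decay: float) -> int:
    """Rounds before the glossy product's perceived quality drops below the alternative."""
    period = 0
    while perceived_quality(true_q, gloss, decay, period) >= alternative_q:
        period += 1
    return period

print(rounds_until_switch(decay=0.9))   # short-lived gloss: the user corrects quickly
print(rounds_until_switch(decay=0.05))  # long-lived gloss: the user is stuck far longer
```

Under these made-up numbers, fast-decaying gloss wears off after a single round while slow-decaying gloss keeps the user on the inferior product for many rounds, mirroring the abstract's contrast between short-lived and long-lived glossiness.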

We thank participants at various seminars and conferences for comments and feedback. We are particularly grateful to our discussant Liyan Yang. We also gratefully acknowledge financial support from the Hewlett Foundation, Smith Richardson Foundation, and the NSF. This paper was prepared in part for and presented at the 2023 AI Authors’ Conference at the Center for Regulation and Markets (CRM) of the Brookings Institution, and we thank the CRM for financial support as well. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.


Title: ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data

Abstract: Audio signals provide rich information about robot interaction and object properties through contact. This information can surprisingly ease the learning of contact-rich robot manipulation skills, especially when visual information alone is ambiguous or incomplete. However, the use of audio data in robot manipulation has been constrained to teleoperated demonstrations collected by attaching a microphone to either the robot or the object, which significantly limits its usage in robot learning pipelines. In this work, we introduce ManiWAV: an 'ear-in-hand' data collection device for collecting in-the-wild human demonstrations with synchronous audio and visual feedback, and a corresponding policy interface for learning robot manipulation policies directly from the demonstrations. We demonstrate the capabilities of our system through four contact-rich manipulation tasks that require either passively sensing contact events and modes, or actively sensing object surface materials and states. In addition, we show that our system can generalize to unseen in-the-wild environments by learning from diverse in-the-wild human demonstrations. Project website: this https URL
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Sound (cs.SD); Audio and Speech Processing (eess.AS)

CVPR 2024 Key Research & Dataset Papers – Part 2

Jaykumaran

July 12, 2024

CVPR 2024 (Computer Vision and Pattern Recognition), held June 17th to 21st at the Seattle Convention Center, USA, was a huge success. Why does CVPR deserve a spotlight here? With an acceptance rate of around 23.6%, IEEE CVPR 2024 upheld high research standards, and the conference offered many interesting papers, workshops, datasets, and benchmarks for the computer vision community that may lay the foundation for the next decade.

In this article, we primarily aim to focus on:

  • What problem statement existed in each category?
  • What novel methodologies did the authors carry out?
  • And finally, impressive demos with GitHub repository links for the respective papers.

This is the second part of our series on noteworthy papers from CVPR 2024. In our last article, we covered a wide variety of papers driving current research in 3D diffusion, autonomous vehicles, NeRF, and more.

If you arrived directly at this article, bookmark Part 1 of our CVPR 2024 overview to read later.

OpenCV Booth at CVPR 2024

Here is a quick overview of 11 papers that we will cover.

  • DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks
  • DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction
  • From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
  • Object Recognition as Next Token Prediction
  • MultiPly: Reconstruction of Multiple People from Monocular Video in the Wild
  • ManipLLM: Embodied Multimodal Large Language Model for  Object-Centric Robotic Manipulation
  • MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation
  • EventPS: Real-Time Photometric Stereo Using an Event Camera
  • Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
  • LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry

Key Datasets

  • Special Mention

1. Florence 2

Arxiv: https://arxiv.org/abs/2311.06242

Problem statement: Unified architecture for vision tasks.

Category: Vision, Language, and Reasoning

Florence-2 multi-task capabilities

Florence-2 by Bin Xiao et al. from Azure AI, Microsoft, is a strong foundational VLM that outshines its competitors, showcasing task-agnostic zero-shot performance. Florence-2 was pre-trained on the FLD-5B dataset of 126M images. The authors point out that unfreezing the vision backbone enhances the model's ability to learn from regions and pixels. They also found that language pre-trained weights had little impact on purely vision-based tasks.

The datasets were prepared and refined using specialist models and services like Mask R-CNN, DINO, Azure OCR, etc., which excel at specific task categories, and were trained with weak supervision.

MODEL ARCHITECTURE

Florence-2 model architecture

Understanding both global semantics and local features is vital for image comprehension. Florence-2 excels at this, adopting a sequence-to-sequence framework to address various vision tasks in a unified manner.

Vision or Image Encoder

Florence-2 uses a DaViT vision encoder to process an input image $I \in \mathbb{R}^{H \times W \times 3}$ (height, width, channels) into flattened visual token embeddings $V \in \mathbb{R}^{N_v \times D_v}$, where $N_v$ and $D_v$ denote the number and dimensionality of the vision tokens, respectively. Along with this, multi-task prompts are tokenized as text + location embeddings.
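To make the token shapes concrete, here is a minimal NumPy sketch of flattening an image into $N_v \times D_v$ visual tokens. This is not Florence-2's actual DaViT code; the non-overlapping 16-pixel patches are an illustrative assumption.

```python
import numpy as np

def flatten_to_tokens(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an H x W x 3 image into non-overlapping patches and flatten
    each patch into one token vector, yielding an (Nv, Dv) array."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (Nv, Dv)
    x = image.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, patch * patch * C)

img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = flatten_to_tokens(img, patch=16)
print(tokens.shape)  # (196, 768): Nv = (224/16)^2 tokens, Dv = 16*16*3
```

In the real encoder each token would then be linearly projected to the model dimension; the reshape above only illustrates how $N_v$ and $D_v$ arise from the image geometry.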

Multi-modality encoder decoder

Following the Image Encoder, a standard transformer encoder block’s cross-attention captures the relationship between visual and textual queries. Then, the decoder’s higher-dimensional output is projected into interpretable text, visual, and location representations for downstream tasks.

Model Configuration :

Florence-2 model configurations

Inference Results: (Florence-2-large-ft)

Here, FT denotes a model fine-tuned on a collection of downstream tasks.

Let’s perform some experiments on an RTX 4050 GPU with an i5 CPU machine to test Florence-2-large-ft’s capabilities on various downstream tasks. You can test with your images using HuggingFace Spaces listed on the Model’s Page.

Task: <MORE_DETAILED_CAPTION>

Detailed captioning with Florence-2

Task: <OPEN_VOCABULARY_DETECTION>

Prompt: Camel

Open-vocabulary detection with Florence-2

Task: <OCR>

OCR with Florence-2

Highlights of Paper:

  • By extending the tokenizer's vocabulary to include location tokens, the model performs better in both spatial coverage and semantic granularity. This eliminates the need for task-specific heads, making Florence-2 a good generalist model.
  • Despite their small sizes (base – 0.23B and large – 0.77B), the models give a neck-and-neck performance against large models like Flamingo, PALI, and Kosmos-2.
  • Because of its unified architecture, Florence-2 is capable of tasks such as visual grounding, object detection, referring expression segmentation, open-vocabulary detection, detailed captioning, etc.

💡 Interesting Fact: Back in 2018, Project Florence by Microsoft aimed to develop a plant-human interface using light and electrical signals.

Observation and Takeaways

From our initial testing, we found that Florence-2 excels at OCR and detailed captioning. However, on some images with difficult scenes, it struggles with prompt-specified object detection or segmentation compared to supervised task-specific models like YOLO and Mask R-CNN.

The authors suggest that further fine-tuning Florence-2 can improve its domain and task adaptation.

  • Florence-2 Inference Notebook [ Link ]
  • Fine-tune Florence-2 Blog [ Link ]

2. DocRes: A Generalist Model Toward Unifying Document Image Restoration Tasks

Arxiv: https://arxiv.org/abs/2405.04408

Problem statement: Single network capable of doing five document restoration tasks.

Category: Document analysis and understanding

  • DocRes, by Jiaxin Zhang et al. from South China University, is a generalist model for document restoration that eliminates the need for multiple task-specific models, which miss the synergies among tasks in the input images. It handles five tasks: dewarping, deshadowing, appearance enhancement, deblurring, and binarization.
  • Existing methods rely heavily on image-to-image visual prompts, ProRes, and Masked Image Modeling (MIM). These methods are resource-intensive, as they follow a ViT framework limited to 448×448 inputs, which prevents them from adapting to the variable resolutions (commonly up to 1K) of real documents.
  • DocRes addresses this through an effective visual prompting approach called Dynamic Task-Specific Prompt (DTSPrompt). DocRes uses DTSPrompt to analyze the input image and extract task-specific prior features; on the basis of these priors, DTSPrompt dynamically generates prompts specific to each task, resulting in superior model performance.

The input (source) image is denoted $\text{Is} \in \mathbb{R}^{h \times w \times 3}$.

Unlike Florence-2 which is a task agnostic generalist model, DocRes is a task oriented generalist model which is an essential aspect for document restoration tasks.

DocRes generalist model

Dynamic Task-Specific Prompt:

1. Dewarping: The network uses a simple text-line mask algorithm for dewarping, which assists the document segmentation model in generating document masks. Additionally, the authors incorporate the x and y coordinates of each pixel as positional prior features to facilitate backward mapping, enabling the model to better understand and correct spatial distortions.

DTSPrompt for flattening documents is as follows:

$[ G(\text{Is}, \text{``dewarp''}) = [P_m (\text{Is}), P_{cx}, P_{cy}] ]$

where the prior document mask and positional information are concatenated along the channel dimension.
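A minimal NumPy sketch of assembling this dewarping prompt, assuming the document mask is already computed; normalizing the coordinate channels to [0, 1] is our illustrative choice, not necessarily the paper's exact convention:

```python
import numpy as np

def dewarp_prompt(doc_mask: np.ndarray) -> np.ndarray:
    """Stack the document mask with per-pixel x/y coordinate maps along
    the channel axis, as in G(Is, "dewarp") = [Pm, Pcx, Pcy]."""
    h, w = doc_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p_cx = xs.astype(np.float32) / max(w - 1, 1)   # x position, normalized
    p_cy = ys.astype(np.float32) / max(h - 1, 1)   # y position, normalized
    return np.stack([doc_mask.astype(np.float32), p_cx, p_cy], axis=-1)

prompt = dewarp_prompt(np.ones((4, 6)))
print(prompt.shape)  # (4, 6, 3): mask + x-coordinates + y-coordinates
```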

2. Deshadowing: 

The DocRes pipeline uses the background of the shadowed document as the prior feature. The authors mention that, to obtain the background, they apply dilation operations followed by a median filter to remove text and smooth out artifacts.

DTSPrompt for shadow removal is,

$G(\text{Is}, \text{``deshadow''}) = P_{bg}(\text{Is})$
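The background prior can be sketched in plain NumPy as below. This is a naive stand-in for the dilation-plus-median pipeline described above; the 3×3 window size is an assumption for illustration.

```python
import numpy as np

def estimate_background(gray: np.ndarray, win: int = 3) -> np.ndarray:
    """Estimate the document background P_bg: a grayscale dilation
    (windowed max) removes dark text strokes, then a median filter
    smooths the remaining artifacts."""
    pad = win // 2

    def windowed(img, reduce_fn):
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img, dtype=np.float64)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = reduce_fn(padded[i:i + win, j:j + win])
        return out

    dilated = windowed(gray.astype(np.float64), np.max)   # drop dark text
    return windowed(dilated, np.median)                    # smooth artifacts

page = np.full((9, 9), 255.0)
page[4, 4] = 0.0                       # a dark text stroke on a white page
bg = estimate_background(page, win=3)
print(bg[4, 4])                        # 255.0: the stroke is removed
```

Real implementations would use vectorized morphology (e.g., OpenCV) instead of Python loops; the loops here just keep the sketch dependency-free.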

3. Appearance Enhancement: Usually, background light, shadow maps, or white-balance kernels are used as prior features for clean appearance restoration. Here, however, the authors opt for a simpler approach: the difference between the input image and the estimated document background ($P_{bg}$, as in the previous task) serves as a guidance cue for the initial enhancement process.

The clean-appearance prior follows an empirical formula:

$P_{\text{diff}}(\text{Is}) = 255 - \left| \text{Is} - P_{bg}(\text{Is}) \right|$
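The formula is simple enough to compute directly; a NumPy illustration with toy pixel values (our own, for demonstration):

```python
import numpy as np

def appearance_prior(img: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """P_diff = 255 - |Is - P_bg|: values stay high where the pixel matches
    the estimated background and drop where content or stains differ."""
    return 255.0 - np.abs(img.astype(np.float32) - bg.astype(np.float32))

img = np.array([[250.0, 40.0]])   # near-background pixel, dark text pixel
bg = np.array([[255.0, 255.0]])   # estimated background
print(appearance_prior(img, bg))  # [[250.  40.]]
```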

4. Deblurring: Traditionally, when fixing a blurred image, the gradient distribution of the image, which shows how brightness varies across it, is used as a prior feature. This paper likewise takes advantage of the image's gradient map $P_g(\text{Is}) \in \mathbb{R}^{h \times w}$.

Deblurring is achieved using a DTSPrompt as:

$G(\text{Is}, \text{``deblur''}) = [P_g(\text{Is}), P_g(\text{Is}), P_g(\text{Is})]$

5. Binarization: Binarization converts a grayscale or color image into a binary mask that separates the text from the background. For this, DocRes first uses the Sauvola binarization algorithm to decide which pixels of the image should be black (0) or white (255), denoted $P_b(\text{Is})$. Along with this, a threshold map ($P_t$) and gradient information ($P_g$) are used as prior features to refine the network's decision.

For the Text segmentation task, the DTSPrompt is formulated as

$G(\text{Is}, \text{``binarize''}) = [P_b(\text{Is}), P_t(\text{Is}), P_g(\text{Is})]$
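For reference, a naive NumPy sketch of Sauvola binarization; the window size, k, and R below are common defaults for 8-bit images, not necessarily the paper's exact settings:

```python
import numpy as np

def sauvola_binarize(gray: np.ndarray, win: int = 15,
                     k: float = 0.2, R: float = 128.0) -> np.ndarray:
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1)), where m
    and s are the local window mean and standard deviation. Pixels below
    T become black (0), others white (255)."""
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    out = np.empty(gray.shape, dtype=np.uint8)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + win, j:j + win]
            T = window.mean() * (1.0 + k * (window.std() / R - 1.0))
            out[i, j] = 0 if gray[i, j] < T else 255
    return out

page = np.full((9, 9), 255.0)
page[4, 4] = 0.0                       # one dark "ink" pixel
binary = sauvola_binarize(page, win=3)
print(binary[4, 4], binary[0, 0])      # 0 255: ink kept, background white
```

Production code would compute m and s via integral images (as scikit-image's `threshold_sauvola` does) rather than per-pixel loops.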

Highlights of the Paper

  • The three-channel DTSPrompt is concatenated with the three-channel input image, so the restoration network receives a six-channel input in $\mathbb{R}^{h \times w \times 6}$.

  • DocRes shows excellent performance across multiple tasks, often surpassing unified models like De-GAN and DocDiff as well as task-specific SOTA models like DocGeo for dewarping, BGSNet for deshadowing, and UDoc-GAN for appearance enhancement and deblurring. However, for the binarization task, GDB holds the lead, with DocRes closely trailing behind.
  • DocRes can be adapted to various image resolutions by replacing the backbone framework (e.g., ViT). The authors also show through ablation studies that DocRes generalizes to out-of-domain data.

Inference Results

Now it's time for real testing; inference is performed on an RTX 3080Ti and an i7-13700K with 12 cores.

Note: In inference.py, replace np.bool with bool to avoid a NumPy error in Colab.


Based on our initial round of testing, we found that the DocRes end-to-end task requires nearly 10 GB of GPU VRAM. However, the inference results are quite promising. Future work can focus on running DocRes in a more optimized way.

Repository : [ Link ]

HuggingFace Spaces [ DocRes ]

For similar topics, document restoration using deep learning and a document scanner using OpenCV, you may find our earlier posts interesting.

You can access the inference notebook for the above project from the Download Code section.


3. DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction

  Arxiv: https://arxiv.org/abs/2403.02075

Problem statement: Realtime and accurate diffusion-based non-linear tracker.

Category: Video: Low-level Analysis, motion, and tracking

DiffMOT by Weiyi Lv et al. from Shanghai University is a first-of-its-kind diffusion-probabilistic model for real-time Multi-Object Tracking (MOT), focusing on the challenges of predicting non-linear motion.

DiffMOT frame tracking

Objects with linear motion, like pedestrians, are easily tracked by heuristic methods like the Kalman filter. Kalman filters assume that an object's velocity and direction remain constant within small intervals of time. As a result, KF trackers don't work well in complex scenarios with non-linear motion (i.e., non-uniform velocity and direction).

For example, dancers on a stage or players in a sport perform different movements at varying speeds. 
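A tiny numerical example of why the constant-velocity assumption breaks (a 1D position-velocity state with dt = 1; the numbers are illustrative):

```python
import numpy as np

# Constant-velocity prediction, the assumption behind a Kalman-filter tracker:
# next position = current position + current velocity * dt.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # transition for state [position, velocity]

state = np.array([10.0, 2.0])       # object moving right at 2 px/frame
predicted = F @ state               # predicts position 12, velocity 2
print(predicted)                    # [12.  2.]

# A dancer who abruptly reverses direction actually ends up at 8, not 12:
actual_next = 8.0
print(abs(predicted[0] - actual_next))  # 4.0 px prediction error
```

A learned motion model like D²MP can condition on the recent trajectory instead of assuming constant velocity, which is exactly the gap DiffMOT targets.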

DiffMOT D²MP predictor

But DiffMOT tackles this kind of movement effectively by predicting the next position of an object’s bounding box. It does this by conditioning the trajectories of the object from the previous n frames, guiding the denoising process for the current frame.

Diffusion probabilistic models are inefficient for MOT because they start from a rough guess and require generating thousands of samples with iterative refinement to reach precise final predictions, demanding heavy computation. To overcome this shortcoming, DiffMOT uses a Decoupled Diffusion-based Motion Predictor (D²MP). From previous trajectories and motion information, the motion predictor uses just one-step sampling, reducing inference time while maintaining high accuracy. The association of predicted boxes with detections over time uses the Hungarian algorithm (similar to ByteTrack).
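To illustrate the association step, here is a self-contained sketch of IoU-based matching. The brute-force search over permutations is a stand-in for the Hungarian algorithm (fine for a handful of boxes, and assuming equal numbers of tracks and detections); the boxes are toy values.

```python
import numpy as np
from itertools import permutations

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections):
    """Assign each predicted track box the detection maximizing total IoU
    (brute-force optimal assignment; real trackers use the Hungarian
    algorithm for the same objective)."""
    best, best_score = None, -1.0
    for perm in permutations(range(len(detections))):
        score = sum(iou(t, detections[j]) for t, j in zip(tracks, perm))
        if score > best_score:
            best, best_score = list(perm), score
    return best

tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]      # predicted boxes
dets = [[21, 21, 31, 31], [1, 1, 11, 11]]        # detections this frame
print(associate(tracks, dets))  # [1, 0]: track 0 -> det 1, track 1 -> det 0
```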

ARCHITECTURE

DiffMOT architecture

Unlike a typical diffusion model with only a data-to-noise mapping, D²MP decouples the process into data-to-zero and zero-to-noise components over time. An HMINet (Historical Memory Information Network) is used in the reverse process of the motion predictor. It uses Multi-Head Self-Attention (MHSA) to capture long-range dependencies across the previous frames and summarizes them into a conditional embedding to predict the motion in the next frame.

Highlights of the Paper :

  • DiffMOT achieves state-of-the-art performance on non-linear datasets like DanceTrack and SportsMOT, with HOTA metrics of 62.3% and 76.2%, respectively, and a real-time inference speed of nearly 22.7 FPS on an RTX 3090 machine.
  • It also outperforms widely used trackers like SORT , FairMOT , QDTrack, and ByteTrack in terms of accuracy.
  • The detector can be easily replaced with any object detection model to increase speed and detection accuracy, indicating DiffMOT’s flexibility.

The HOTA (Higher Order Tracking Accuracy) metric combines the detection accuracy of the detector (YOLOX), the association accuracy, and the localization accuracy of the tracker (D²MP).

Inference Results ( Courtesy: DiffMOT Project )

Observations and Takeaways

From the above inference results, we can observe that DiffMOT performs excellently in detection. However, it still faces challenges in videos with sudden changes or complex movements, leading to ID switching. Despite this, as the authors rightly mentioned in the paper, it clearly outperforms KF Trackers. DiffMOT is a good starting point for developing more accurate trackers based on diffusion models.

Note : As we have not tested DiffMOT extensively, we are refraining from making a qualitative comparison between DiffMOT and other state-of-the-art trackers . If you are interested in this further, please check the supplementary section of the paper on page 13.

4. From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations

Arxiv : https://arxiv.org/abs/2401.01885

Problem statement : Generate 3D avatars with just a single audio

Category: Humans: Face, body, pose, gesture, movement.

The Audio to Photoreal framework by Evonne Ng et al. from Meta proposes a novel approach to generating photorealistic avatars that produce realistic conversational motions and gestures for the face, body, and hands from just an audio input. The team achieved this by combining the diverse gesture possibilities offered by Vector Quantization (VQ) with the nuanced enhancements, such as eye gaze and smirks, provided by a diffusion network.

Audio to Photoreal Embodiment (Meta)

To better understand this, let's look at an example: say we are animating a virtual person in a metaverse to wave their hand.

  • Without VQ and Diffusion, the wave might look stiff and repetitive like a robot. 
  • But with VQ, we can simulate it to have varying wave patterns or styles each time, making it look more like a human.
  • Additionally, with a diffusion network, subtle realistic hand movements , such as bending fingers or hands, will make the avatar appear more natural and lifelike .

How does it work?

A rich set of dyadic conversations between two people is captured for training. The motion model comprises three major parts:


  • a) Face Motion Model : This network is a diffusion model conditioned on conversational audio and lip movements. It generates facial expressions to reconstruct the facial mesh.
  • b) Guide Pose Predictor : This autoregressive transformer-based VQ network takes audio as input and outputs coarse guide pose at 1 FPS.
  • c) Pose Motion Predictor : The coarse poses are used as extra conditioning to this diffusion network to fill in higher frequency details of the motion.

Finally, the face and body pose are fed into an avatar render network, which generates a photorealistic avatar.

Audio to Photoreal avatar synthesis

Highlights of Paper :

  • The paper presents an alternative way to create synthesized motions of interpersonal conversation with photorealism, addressing the shortcomings of mesh-based or skeletal avatars.
  • For the same input audio, the network generates diverse samples, resulting in more peaky and dynamic motions like pointing. Despite being trained on specific individuals, the input features to the network are person-agnostic, so it can adopt any persona for unseen audio without retraining.
  • The team open-sourced a multi-view dyadic (between two people) conversation dataset for accurate body and face tracking and photorealistic 3D reconstruction.

Repository: [ Link ]

Colab Notebook: [ Link ]

You may be interested in seeing Real-Time Automatic Speech Recognition and Diarization results with OpenAI Whisper from our earlier article.

5. Object Recognition as Next Token Prediction

Arxiv : https://arxiv.org/abs/2312.02142

Problem Statement : Object recognition with language decoders

Category: Recognition: Categorization, detection, retrieval

The paper, by Kaiyu Yue et al. from Meta, presents a thoughtful idea: performing object recognition in an autoregressive manner with LLMs.

Object recognition as next-token prediction

Traditional classification networks like ResNet are pretrained on the ImageNet dataset, which contains 1k classes, so they have a fixed final-layer output dimension of 1000. This prevents models pretrained for image classification on a particular dataset from extending to other classes.

Modern architectures like CLIP overcome this limitation to some extent by creating a flexible set of object embeddings to detect any class in the input image. However, CLIP requires a predefined set of object descriptions (a gallery) to function as intended, and such a gallery can cover only a subset of all possible objects and their variations.

In simple terms, if an image has a dog or cat, CLIP can identify them but may not detect specific attributes like the breed (say, a Dalmatian dog or an Angora cat). Thus, even CLIP covers only a portion of the textual space in practical scenarios. Additionally, increasing CLIP's gallery size results in performance degradation.

So, an ideal approach is to use an LLM as a decoder to recognize any object and its variations in the textual space. Google's Flamingo follows a similar approach, but it requires few-shot samples for each downstream task prior to the inference prompt. To address this, the authors suggest a more straightforward approach: aligning the LLM for recognition tasks only.

Architecture: object recognition as next-token prediction

Here, a pretrained CLIP or ViT model is used as the image encoder, and its image embeddings are projected into the higher-dimensional space of the language decoder (LLM).

  • Instead of training the model on Visual Question Answering triplets, the approach uses image-caption pairs. The model was trained on the G70M dataset, built by gathering image-caption pairs from CC3M, COCO Captions, SBU, LAION-Synthetic-115M, etc.
  • The model generates short, concise tags or labels rather than a descriptive sentence about the image.
  • The authors' ingenuity lies in the model's tokenization mechanism. Different object labels are treated independently, while tokens from the same label remain conditional. All labels depend mainly on the image embeddings to determine their coexistence within an image. Then, by one-shot sampling, the model generates labels for all objects in an image at the same time.
  • To decrease inference time and improve efficiency, the authors take an LLM like LLaMA and retain only the first few transformer blocks and the final layer, which are essential for recognition. This makes the LLM decoder more compact, resulting in 4.5× faster inference.

ViT encoder and LLaMA 7B as decoder

  • Techniques like decoupling the tokens of different labels to be independent via a non-causal attention mask, which avoids repetition issues, are quite impressive. Additionally, exploiting the transformer's strong parallelization with one-shot sampling to process multiple labels simultaneously, choosing the top-k tokens, is a unique approach.
  • Benchmarks suggest that the model supersedes GPT-4V Preview, LLaVA, Flamingo, CLIP, etc., for recognition tasks on the COCO validation split.
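A sketch of such a decoupled, non-causal attention mask: tokens within one label attend causally to each other, every label token attends to the image tokens, and labels never attend across each other. The full self-attention among image tokens is our assumption for the sketch.

```python
import numpy as np

def decoupled_mask(num_image_tokens: int, label_lengths: list) -> np.ndarray:
    """Attention mask (1 = may attend): every token sees the image tokens;
    label tokens additionally see earlier tokens of their *own* label only,
    so different labels are decoded independently and in parallel."""
    n = num_image_tokens + sum(label_lengths)
    mask = np.zeros((n, n), dtype=int)
    mask[:, :num_image_tokens] = 1        # everyone attends to the image
    start = num_image_tokens
    for length in label_lengths:
        for i in range(length):           # causal within a single label
            mask[start + i, start:start + i + 1] = 1
        start += length
    return mask

m = decoupled_mask(num_image_tokens=2, label_lengths=[2, 2])
print(m)
```

Reading the printed matrix row by row shows the structure: rows 2-3 (label one) never attend to columns 4-5 (label two), and vice versa, which is what removes cross-label repetition.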

Inference Results 

Inference results: object recognition as next-token prediction

  • The proposed model architecture can be an excellent choice for open-vocabulary recognition, overcoming the limitations of CLIP. However, using an LLM for object recognition may be an overhead on machines with limited hardware resources.

Repository: [ Link ]

Colab Notebook : [ Link ]

Getting started with Large Language Models can be overwhelming, yet it is an in-demand skill in the current job market. Have a look at our three-part series on LLMs.

6. MultiPly: Reconstruction of Multiple People from Monocular Video in the Wild

Arxiv: https://arxiv.org/abs/2406.01595

Problem statement : 3D Reconstruction from Monocular video

Category: Humans: Face, body, pose, gesture, movement

MultiPly is a novel framework by Zeren Jiang et al. from ETH Zurich and Microsoft for reconstructing multiple people in 3D from monocular RGB videos. Typically, in-the-wild 3D reconstruction demands multi-view setups and specialized equipment; the task also poses additional challenges like depth ambiguities, human-human occlusions, and dynamic human movements.

MultiPly: reconstruction of multiple people from monocular video in the wild

MultiPly takes all of this into account and recovers 3D humans with high-fidelity shape, surface geometry, and appearance through pixel-level decomposition (accurate instance-level segmentation) and plausible multi-person pose estimation.

MultiPly architecture

  • The process begins by taking each subject's video frames and pose initializations as input and fusing them into a single, temporally consistent human representation in a canonical space.

The human points are sampled along camera rays with a Sparse Pixel Matching Loss (SPML) using NeRF++.

  • Then, these canonical human models are parameterized by a learnable MLP network computing signed-distance and radiance values.
  • Following that, a layer-wise differentiable volume rendering of the entire scene (frame) is applied to extract human meshes.
  • To enhance clean separation even in close interactions or occluded scenes, progressive input prompts to SAM dynamically update the instance segmentation masks until the whole human body is covered.
  • In addition, a confidence-guided optimization formulation is employed to avoid harmful shape updates due to inaccurate poses, resulting in spatially coherent 3D reconstruction.
  • MultiPly eliminates the need for high-quality 3D data and outperforms contemporary SOTA approaches like ECON and Vid2Avatar on monocular videos with highly occluded scenes.
  • Multiple losses, including reconstruction loss, instance mask loss, Eikonal loss, depth-order loss, and interpenetration loss, are combined by the authors to generate highly accurate 3D humans with the MultiPly framework.

Observations and Takeaways

With the advent of spatial-computing devices like the Apple Vision Pro and Meta Oculus Quest, the Audio2Photoreal and MultiPly frameworks can have a huge impact on creating realistic avatars and virtual agents in the AR/VR space.

7. ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation

Arxiv: https://arxiv.org/abs/2312.16217

Problem Statement: Manipulate anything with an MLLM

Category: Robotics

ManipLLM, developed by Xiaoqi Li et al. from Peking University, focuses on integrating the reasoning capabilities of Multimodal LLMs (MLLMs) into robots for effectively handling objects with a gripper.

As we have seen, recent advances that integrate VLMs for robotic perception and reasoning, like 3D-VLA, can adapt dynamically to unseen environments thanks to their generalization capability. Traditionally, robots control end-effector directions or the gripper after extensive training and simulation; however, even after training, the decision model might fail to handle unseen objects or out-of-domain events.

ManipLLM: object-centric robotic manipulation

ManipLLM addresses this problem very effectively by bringing a multi-modal LLM with backbones like LLaMA into the loop.

SYSTEM PIPELINE

ManipLLM system pipeline

During inference, the system takes an RGB image captured by an Intel RealSense camera, projected into the higher-dimensional embedding space of the LLM, while text prompts from the user are fed into LLaMA to predict the initial pose of the gripper with a chain-of-thought reasoning approach. The network then returns its understanding of the image based on the given instructions, along with a set of coordinates to establish contact at the precise location determined by the LLaMA model.

The chain of thought is to the point, with three main objectives:

  • To determine the category of the object.
  • Think about how to complete the given task.
  • To return the end effector's pose (coordinates and rotation angle).

ManipLLM chain-of-thought to complete each task

After making initial contact, an Active Impedance Adaptation Policy within the network plans waypoints to achieve the task gradually with the end effector in a closed loop.

  • In general, an LLM doesn't have the capability to manipulate objects, so vision and language adapters are injected into the network. These adapters are fine-tuned for the manipulation task while still retaining the reasoning capability of the MLLM.
  • From the 2D pixel coordinates and gripper rotation angle returned by the MLLM, the network uses depth maps to project the 2D coordinates into 3D space.
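That 2D-to-3D step is standard pinhole back-projection; a sketch with hypothetical intrinsics (roughly RealSense-like values of our choosing, not ManipLLM's calibration):

```python
import numpy as np

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Lift a pixel (u, v) with depth z to camera-frame 3D coordinates
    using the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# Hypothetical intrinsics for a 640x480 stream
p = backproject(u=320.0, v=240.0, depth=0.5,
                fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(p)  # [0.  0.  0.5]: the principal-point pixel lies on the optical axis
```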

Many research institutes and robotics companies have adopted MLLMs as the reasoning mind that enhances robots' ability to interpret perception and plan motion. Notable partnerships include integrating OpenAI's GPT-4 into Figure AI robots and Boston Dynamics robots. We believe that as the language and vision spaces fuse, there is great scope for cross-domain knowledge transfer, enabling robots to adapt to new tasks with just a few samples.

CVPR Talk [ Link ]

Having a career in robotics is a “Pursuit of Happiness.” For a foundational understanding, explore our Getting Started with Robotics Series .

8. MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation

Paper: [ Link ]

Problem Statement: Robust echocardiography segmentation model

Category: Medical and biological vision, cell microscopy

MemSAM by Deng et al. from Shenzhen University is a novel echocardiography segmentation model aimed at tackling challenges like speckle noise, artifacts, blurred contours, and the shape variation of heart structures over time in ultrasound images.

MemSAM comparison with SOTA medical segmentation models

Echocardiography segmentation aims to segment key structures of the heart in ultrasound videos.

Examining and manually assessing echocardiography is a time-consuming task; even expert medical practitioners sometimes find it hard to write a perfect evaluation report about the condition. Automating medical imaging with AI is a crucial and pressing need in clinical practice. But the main challenges are limited access to perfectly annotated segmentations and the need to annotate each frame of an echocardiographic video.

That's where MemSAM shows its excellence, proposing temporally consistent image segmentations in fast-changing echocardiography videos. We know that SAM stands apart due to its excellent feature representation and zero-shot generalizability on natural images. Existing SAM-derived medical segmentation models like MedSAM, SAMed, and SAMUS show promising results, but they haven't yet been explored for video segmentation tasks.

SAM requires an annotation for each frame (video-to-frames)

We can still use these variants by passing videos as frames for segmentation, but these methods heavily rely on a large number of prompts and annotations.

MemSAM needs annotations only for the first and last frames

MemSAM just requires a point prompt for the first and last frames (annotated frames), while the in-between frames are tracked using Memory Prompt generated by the network. Finally the loss is calculated on just the annotated frames.

MemSAM Framework

MemSAM framework architecture

  • In the SAM block (in gray), the image is first converted to image embeddings. A positive Point Prompt is then encoded to guide the model, and both are passed to the mask decoder to generate a predicted probability segmentation map. If the image is not the first frame in the video, the image embedding is also projected to the memory feature space for Memory Reading.
  • The second block (in orange) queries multiple feature memories (Sensory Memory, Working Memory, and Long-term Memory) to generate the Memory Prompt.
  • Finally, the predicted probability map from the mask decoder is encoded and used to update the feature memories after memory reinforcement.
  • The combination of Memory Reading and Memory Reinforcement is a unique approach. In memory reading, the image embedding is projected to a query (q), which performs an affinity query (W) against the memory bank (long-term and working memory) containing key-value pairs to obtain the memory readout (F). The image embedding, sensory memory, and memory readout are fused to generate a memory embedding. Finally, to avoid noise during memory updates, a memory reinforcement module is employed.
  • MemSAM achieves state-of-the-art performance on the EchoNet-Dynamic and CAMUS video datasets, outperforming five medical foundation models in a semi-supervised setting (i.e., with fewer annotations).
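The memory reading step described above is essentially an attention-style lookup: a query derived from the image embedding is matched against the memory bank's keys, and the resulting affinity weights select which values to read out. Here is a minimal NumPy sketch of that idea (all shapes and names are illustrative assumptions, and the projection layers are omitted; this is not the paper's code):

```python
import numpy as np

def memory_read(image_embedding, mem_keys, mem_values):
    """Attention-style memory readout: query the memory bank
    with the current frame's embedding.

    image_embedding: (d,)   current frame feature (acts as query q)
    mem_keys:        (m, d) keys of stored memory entries
    mem_values:      (m, d) values of stored memory entries
    Returns the memory readout F, shape (d,).
    """
    q = image_embedding                      # query projection omitted for brevity
    logits = mem_keys @ q                    # affinity W: similarity to each entry
    w = np.exp(logits - logits.max())
    w /= w.sum()                             # softmax-normalized affinity
    return w @ mem_values                    # weighted readout F

rng = np.random.default_rng(0)
emb = rng.standard_normal(8)
keys = rng.standard_normal((4, 8))
vals = rng.standard_normal((4, 8))
readout = memory_read(emb, keys, vals)
print(readout.shape)  # (8,)
```

In the full model, this readout would then be fused with the image embedding and sensory memory to form the memory embedding that prompts the decoder.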

Medical imaging with AI is expected to thrive, offering great value in the coming years. Institutes and companies like Google DeepMind, Microsoft, and OpenAI are actively looking to integrate AI-assisted evaluation reports to streamline the process, potentially saving many lives through early-stage detection. At the same time, an expert clinician in the loop remains necessary to monitor hallucinations and any adversarial effects.

9. EventPS: Real-Time Photometric Stereo Using an Event Camera

Problem Statement : Surface normal estimation via photometric stereo with an event camera

Category: Physics-based vision and shape from X

EventPS by Bohan Yu et al. from Peking University is an exceptional technique for estimating the surface normal of an object, taking advantage of the phenomenal characteristics of an event camera: high temporal resolution, high dynamic range, and low latency.

EventPS: real-time photometric stereo using an event camera

What is an Event Camera?

In layman’s terms, frame cameras capture sequences of frames at regular time intervals. Event cameras, by contrast, only detect motion or change in the scene, much like a motion detector, removing the need to store redundant information about objects that did not change over time. This makes event cameras an excellent choice for real-time applications. An event camera records only logarithmic changes in scene radiance, i.e., data points (events) that indicate when and where a brightness change occurred.

When the change in the brightness of a pixel reaches a trigger threshold, an event will be triggered by the event camera hardware.
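The trigger rule above can be sketched in a few lines: compare log-brightness between time steps and emit a signed event wherever the change crosses the contrast threshold. A toy NumPy illustration (the threshold value and function names are invented for the example):

```python
import numpy as np

# Toy event generation: an event fires when the log-brightness change
# at a pixel crosses the contrast threshold C (value is illustrative).
C = 0.2

def events_from_frames(prev, curr, threshold=C):
    """Return signed (+1/-1) events where |log I_t - log I_{t-1}| >= threshold;
    0 means no event was triggered at that pixel."""
    dlog = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    polarity = np.sign(dlog).astype(int)
    fired = np.abs(dlog) >= threshold
    return polarity * fired

prev = np.array([[1.0, 1.0], [1.0, 1.0]])
curr = np.array([[1.5, 1.0], [0.5, 1.02]])
# Brightening pixel -> +1 event, darkening pixel -> -1 event,
# small or zero changes -> no event.
print(events_from_frames(prev, curr))
```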

Are you wondering what a photometric stereo (PS) is?

Photometric stereo keeps the object and the camera static while moving the light source around the object. By capturing multiple images under different illumination, we can estimate the object’s surface normal.
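For a Lambertian surface, each such image contributes one linear constraint I_i = ρ (n · l_i), so with three or more known light directions the normal can be recovered by least squares. A toy NumPy sketch of this classical frame-based formulation (synthetic values, not EventPS code):

```python
import numpy as np

# Ground-truth unit normal and albedo for one synthetic pixel.
n_true = np.array([0.0, 0.6, 0.8])
albedo = 0.9

# Four known light directions (rows) and the intensities they produce
# under the Lambertian model I = albedo * (L @ n).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7],
              [0.5, 0.5, 0.7]])
I = albedo * (L @ n_true)

# Least-squares solve for g = albedo * n, then normalize to get the normal.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
n_est = g / np.linalg.norm(g)
print(np.round(n_est, 3))  # recovers [0.  0.6 0.8]
```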

Prior to this work, frame-based cameras were the only choice for PS. Conventional photometric stereo uses one of two lighting setups:

  • The first setup holds the light source in a robotic arm and moves it densely around the object, achieving good accuracy. However, it is time-consuming and not real-time.
  • The second uses multiple flash lights at fixed locations, turned on consecutively. It is real-time but less accurate.

Understanding EventPS setup and Working

EventPS is mathematically modeled as,

$I_x(t) = \max [0, a_x(n_x \cdot L(t))]$

EventPS’s ingenuity lies in its setup, which addresses two main questions:

How to illuminate an object for an event camera?

The setup places an event camera at a fixed location, observing the object illuminated by a continuously rotating slit light (in green) driven by a DC motor. The light rotates at 30 rotations per second, equivalent to the frame rate of a conventional camera.

How do we estimate surface normal without absolute intensity?

The proposed idea is to convert each pair of consecutive events into a null space vector.

$n \cdot (l_1 - \sigma l_2) = 0$

The null space vector is perpendicular to the object’s surface normal.

Interesting Fact

Null space vectors have uses in many areas:

  • In data analysis, finding the direction along which the data varies the least underlies dimensionality-reduction techniques like PCA.
  • In linear programming, they help find feasible directions that improve the objective function.
  • In computer vision, they appear in 3D reconstruction and camera calibration.

Highlight of the Paper

  • To handle noise in the null space vectors of real events, multiple events are captured, converted to null space vectors, and combined with a Singular Value Decomposition (SVD) algorithm.
  • The EventPS setup achieves very accurate surface normal estimation for both static and moving objects.
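The SVD step in the first highlight can be illustrated directly: stack the (noisy) null space vectors into a matrix and take the right singular vector with the smallest singular value as the normal estimate. A toy NumPy sketch (the synthetic data generation stands in for vectors built from real event pairs):

```python
import numpy as np

rng = np.random.default_rng(1)
n_true = np.array([0.0, 0.6, 0.8])  # ground-truth unit normal

# Generate vectors lying (approximately) in the plane orthogonal to n_true,
# standing in for the null space vectors satisfying v . n = 0.
V = []
for _ in range(50):
    v = rng.standard_normal(3)
    v -= (v @ n_true) * n_true          # project onto the plane v . n = 0
    v += 0.01 * rng.standard_normal(3)  # small measurement noise
    V.append(v)
V = np.array(V)

# The normal is the right singular vector of V with the smallest singular value.
_, _, Vt = np.linalg.svd(V)
n_est = Vt[-1]
if n_est @ n_true < 0:                  # resolve the sign ambiguity
    n_est = -n_est
print(np.round(n_est, 2))
```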

Neuromorphic event-camera-based systems can hugely impact robotics and autonomous vehicle systems, which face many unpredictable events. The authors suggest there is plenty of scope for future research in AR/VR-based face rendering. One main application would be finding artifacts in industrial products using the estimated surface normal.

10. Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods

Arxiv : https://arxiv.org/abs/2212.06872

Problem Statement : Understanding decision principles of Transformers and CNNs.

Category: Explainable Computer Vision

Award: Best Student Paper – Runner-up (Honorable Mention)

Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods by Mingqi Jiang et al. from Oregon State University received a CVPR award in the Explainable Computer Vision category. The paper presents interesting methodologies for understanding how deep black-box models like Transformers and CNNs recognize images.

We know from prior work by Zhuang Liu et al. that, by adopting the training strategies used for Transformer models, CNN-based models like ConvNeXt were able to match the performance of Transformer models like ViT and Swin Transformer. This raises further questions: is the attention mechanism of Transformers specifically responsible for their robustness, or did only the design principles of ConvNeXt contribute to the increased performance? Traditionally, attribution-based model explanations include gradient-based (Grad-CAM), perturbation-based (RISE), and optimization-based (iGOS++) methods, which highlight a deep network’s attribution map on an image for classification.

Attribution-based explanation models

However, these methods are limited to one explanation per image. So, in 2021, earlier work by Mingqi et al. proposed a search-based method called Structural Attention Graphs (SAG) that goes beyond a single explanation per image.

Structural Attention Graphs (SAG)

Search-based algorithms produce Minimally Sufficient Explanations (MSEs): minimal sets of patches that are sufficient for the model to predict with roughly the same confidence as when the full image is shown. SAG uses beam search to find all patch combinations that yield high classification confidence, so we get multiple explanations for each image.

Usually, explanation methods are applied to a single image at a time. Here, the SAG explanation algorithms are tested on thousands of images from ImageNet to learn the differences among network backbones (CNNs and Transformers). The authors propose two approaches:

a) Sub-Explanation Counting:

An image is divided into non-overlapping patches, followed by beam search to obtain MSEs at more than 90% confidence. Sub-explanations of an MSE are then counted by creating child nodes (subsets) with different classification confidences, each obtained by deleting one patch (marked in red) from the parent MSE. When a child node’s likelihood ratio drops below 50%, the tree expansion is stopped.
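The counting procedure can be mimicked with a toy confidence function over patch subsets: enumerate subsets of an MSE and count those that keep confidence above a threshold. The sketch below uses invented patch weights in place of a real classifier, and exhaustive enumeration in place of the tree expansion, purely for clarity:

```python
from itertools import combinations

# Toy stand-in for a classifier's confidence given a set of visible patches
# (weights are made up for the example).
weights = {"p1": 0.4, "p2": 0.3, "p3": 0.2, "p4": 0.1}

def confidence(visible):
    return sum(weights[p] for p in visible)

def count_sub_explanations(mse, keep_ratio=0.5):
    """Count proper subsets of an MSE whose confidence stays above
    keep_ratio * confidence(full MSE)."""
    base = confidence(mse)
    count = 0
    for k in range(1, len(mse)):
        for subset in combinations(mse, k):
            if confidence(subset) >= keep_ratio * base:
                count += 1
    return count

mse = ("p1", "p2", "p3", "p4")  # confidence(mse) = 1.0
print(count_sub_explanations(mse))  # 8 qualifying subsets
```

A model whose confidence is spread over many patches (compositional) yields many such subsets; one that relies on a few key patches (disjunctive) yields few.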

Sub-explanation counting on an MSE

b) Cross Testing

The second approach evaluates the similarity between the decision principles of CNN and Transformer models. An attribution map is generated from model A, e.g., VGG-19 (CNN-based); mask patches are created by insertion/deletion; and the masked image is passed to model B, e.g., Swin-T (Transformer), to compare the two models. If both models make decisions based on the same features, they score high in cross-testing.
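A stripped-down version of cross-testing: take model A's attribution, keep only its top-k patches, and measure how much of model B's score survives on the masked input. Everything below (the linear "models" and their weights) is an invented stand-in for real networks:

```python
import numpy as np

# Toy "models" that score an image by a weighted sum over its patches;
# the weight vectors stand in for learned attributions (values invented).
w_A = np.array([0.5, 0.3, 0.1, 0.1])    # model A's attribution over 4 patches
w_B = np.array([0.45, 0.35, 0.1, 0.1])  # model B relies on similar features

def cross_test_insertion(attribution, score_fn, k=2):
    """Keep the top-k patches by model A's attribution, mask the rest,
    and measure what fraction of model B's full-image score survives."""
    keep = np.argsort(attribution)[::-1][:k]
    mask = np.zeros_like(attribution)
    mask[keep] = 1.0
    return score_fn(mask) / score_fn(np.ones_like(mask))

score_B = lambda visible: float(w_B @ visible)
print(round(cross_test_insertion(w_A, score_B), 2))  # 0.8: high -> similar features
```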

Cross-testing between two models

From the results, the model’s behavior falls under two categories:

  • Compositional behavior: Models like ConvNeXt and non-distilled Transformers make decisions based on multiple features in an image. If some parts are deleted, there is only a small change in decision confidence.
  • Disjunctive behavior: In contrast, CNNs and distilled Transformers make decisions based on a few parts of the image, so even if a large part of the image is missing, the model can still make accurate predictions.

Compositional vs. disjunctive behavior in CNNs and Transformers

The authors also conducted ablation studies, decreasing the receptive field to a 3×3 kernel in ConvNeXt-T and 4×4 in Swin-T, and observed a 40% drop in the total number of sub-explanations. To better understand this significant drop, multiple experiments were run. The largest effect was obtained by replacing layer normalization with batch normalization and training with a small receptive field, which reduced the sub-explanation count by 88%.

Fewer sub-explanations drive the model toward more disjunctive behavior. The authors therefore conclude that normalization layers can greatly impact a model’s behavior: layer normalization or group normalization results in compositional behavior, while batch normalization makes the model disjunctive.

  • Instead of anecdotes or assumptions, the paper presents actual experimental results on the first 5k images of the ImageNet validation subset.
  • The paper answers the initial questions: models of the same type use similar features for their predictions.

This study is one of a kind, bridging the gap in our understanding of how deep black-box models make decisions based on certain features or fundamental design principles. More research of this kind opens new possibilities for achieving optimal neural models for production and research that reduce carbon footprints.

Repository [ Link ]

11. LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry

Arxiv : https://arxiv.org/abs/2401.01887

Problem statement : Tracking dynamic scenes, occlusion, and low-texture areas with point trackers.

Category: Video: Low-level Analysis, motion, and tracking

LEAP-VO by Weirong Chen et al. from TU Munich is a new method for enhancing motion estimation and handling track uncertainty in dynamic environments. LEAP-VO makes use of temporal context with long-term point tracking.

Visual odometry (VO) estimates the motion of a moving camera based on visual cues. In simple terms, given a sequence of images, VO determines the location and orientation of the capturing camera.

Feature-based Visual Odometry

Classical approaches like feature-based monocular visual SLAM extract feature points in the first image and track them throughout the video. The camera pose is then recovered by optimizing the reprojection error, i.e., bundle adjustment.

A reprojection error is the distance between a key point detected in an image and its corresponding 3D world point projected in the same image.
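That definition translates directly into code: project the 3D point with the camera intrinsics and pose, then measure the pixel distance to the detected keypoint. A minimal NumPy sketch with made-up intrinsics and an identity pose:

```python
import numpy as np

def reprojection_error(X_world, keypoint, K, R, t):
    """Distance between a detected 2D keypoint and the projection
    of its corresponding 3D point under camera pose (R, t) and intrinsics K."""
    X_cam = R @ X_world + t            # world -> camera coordinates
    uv_h = K @ X_cam                   # homogeneous image coordinates
    uv = uv_h[:2] / uv_h[2]            # perspective divide
    return float(np.linalg.norm(uv - keypoint))

K = np.array([[500.0,   0.0, 320.0],   # toy pinhole intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)          # identity pose for the toy example
X = np.array([0.2, -0.1, 2.0])         # true projection is (370, 215)
observed = np.array([371.0, 214.0])    # detection one pixel off in each axis
print(round(reprojection_error(X, observed, K, R, t), 2))  # 1.41 (= sqrt(2))
```

Bundle adjustment minimizes the sum of such errors over many points and frames by jointly refining the camera poses and 3D positions.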

However, Feature-based VO is unreliable in the following case:

  • when the scene is dynamic
  • when there is an occlusion 
  • for a low-texture area.

Prior works use two-view or pair-wise trackers for subsequent frames, which don’t handle occlusion properly. Taking all this into account, LEAP-VO follows an effective learning-based approach with rich temporal context and continuous motion patterns to tackle these challenges in multi-viewpoint tracking, even under partial occlusion.

Pairwise tracker vs. long-term point tracker

METHODOLOGY

The Point Tracking Front-end (LEAP) handles:

  • Occlusion handling with Multi-frame tracking
  • Dynamic detection with Anchor-based motion estimation
  • Reliable estimation with Temporal probabilistic formulation

LEAP-VO point tracking front-end

In this pipeline, feature maps are extracted from images captured over time. Query points are sampled from an image, and additional anchor points based on image gradients are tracked over time. These points are refined iteratively with a refinement network to improve tracking. The network attends to the following:

  • Channel: uses channel information between feature maps
  • Inter-track: uses the relationships between the points being tracked
  • Temporal: uses temporal information from the image sequence

The refinement network outputs the point distribution and motion over time, which reveals visibility and dynamic motion in the video.

LEAP-VO tracking with bundle adjustment

In a video, feature keypoints are extracted from the sequence of RGB images and fed into the LEAP front end for tracking. Trajectories are then created for these keypoints, which the LEAP module tracks across frames within the LEAP window. Not all tracked keypoints are useful: some may be invisible or unreliable, so it is better to remove them, as they would induce noise when averaged.

Finally, the local Bundle Adjustment (BA) module is applied across frames in the current BA Window to update the camera pose and 3D positions, effectively minimizing reprojection error.


The green points represent the static scene in an image, the yellow points are ambiguous or unreliable, and the red points are for dynamic scenes.

  • LEAP-VO outperforms state-of-the-art methods, including VO- and SLAM-based approaches, across different datasets for both static and dynamic scenes, thanks to its effective long-term point tracking.

LEAP-VO can be applied extensively to autonomous vehicle navigation, robot path planning, and movement tracking in AR/VR.

CVPR 2024 Dataset Papers
1. : A 4D Dataset of Real-world Human Clothing with Semantic Annotations
2. : Robust Video-Language Alignment via Contrast Captions
3. : A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark
4. : Fine-Grained Functionality and Affordance Understanding in 3D Scenes
5. : A Large-Scale Scene Dataset for Deep Learning-based 3D Vision
6. : A Massive Collection of Multimodal Egocentric Daily Motion in the Wild
7. : An Egocentric Synthetic Data Generator
8. : Multi-modal accident video understanding
9. : The Dynamic Visual Dataset for Immersive Neural Fields
10. : Diverse Large-Scale Multi-Campus Dataset for Robot Perception
11. : MultiAgent, multitRaverSal, multimodal autonomous vehicle dataset

Special Mention🔥

  • Longuet-Higgins Prize recognizes CVPR Papers from ten years ago that have made a significant impact in the field of computer vision research. This year it was awarded to the famous object detection and semantic segmentation R-CNN paper from 2014. Check our post on how Region Proposals in R-CNN works.
  • Rich Human Feedback for Text-to-Image Generation from Google got the Best Paper Award of CVPR 2024 .
  • Mip-Splatting: Alias-free 3D Gaussian Splatting by Zehao Yu et al. got the Best Student Paper Award.

Key Takeaways of CVPR 2024 Research Papers

  • Conferences like CVPR, NeurIPS, and IROS never fail to surprise us by bringing innovative research into the limelight in the deep learning and robotics domains.
  • Any single research paper can be game-changing and define the pace of technology for the next 10 to 20 years, like “ Attention is All You Need .” In addition to these groundbreaking papers, there were notable workshops on autonomous vehicles by Wayve.ai, GenAI Sora by OpenAI, and others.

We hope you found this essential gist of the latest AI research trends, with demos, intriguing and insightful. In this two-part series, we gave a comprehensive overview of CVPR 2024, covering the major categories to the best of our knowledge. Which of the CVPR 2024 research papers do you think was the showstopper and an absolute visual treat? We would love to hear from you in the comments.

Looking ahead, we will try to provide an in-depth review of the latest research and conferences in the fields of AI and Computer Vision.

Stay tuned by subscribing to get notifications.🔔

  • CVPR 2024 Workbook
  • CVPR 2024 Awards
  • CVPR 2024 Awards Press Release

