
What is a Good H-index?


You have finally overcome the exhausting process of publishing a paper and may be thinking it’s time to relax for a while. Maybe you are right to do so, but don’t take too long: just like the research process itself, pursuing a career as a published author is also about measuring results. Today there are tools that can tell you whether your publications are reaching the audience you expected. One of the most common tools researchers use is the H-index score.

Knowing how impactful your publications are among your audience is key to defining your individual performance as a researcher and author, and it helps the scientific community compare professionals in the same research field and at similar career lengths. Although scoring intellectual activity is often a matter of debate, it also brings its own benefits:

  • Within the scientific community: A standardized measure of researchers’ performance is useful for comparing them within their field of research. For example, H-index scores are commonly used in recruitment processes for academic positions and are taken into consideration when applying for academic or research grants. At the end of the day, the H-index is used as a sign of self-worth for scholars in almost every field of research.
  • From an individual point of view: Knowing the impact of your work among your target audience is especially important in the academic world. With careful analysis and the right amount of reflection, the H-index can give you clues and ideas on how to design and implement future projects. If your paper is not being cited as much as you expected, try to find out what the problem might have been. For example, was the research content irrelevant to the audience? Was the selected journal wrong for your paper? Was the text poorly written? For the latter, consider Elsevier’s text editing and translation services to improve your chances of being cited by other authors and of improving your H-index.

What is my H-index?

Basically, the H-index is a standard scholarly metric that relates the number of papers an author has published to the number of times those papers have been cited. The score equals the number of papers (H) that have each been cited at least H times; papers cited fewer times do not add to it. See the example below:

In this case, the researcher scores an H-index of 6, since they have 6 publications that have each been cited at least 6 times. The remaining articles, those that have not yet reached 6 citations, are left aside.
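To make the rule concrete, here is a minimal sketch in Python (illustrative only; the function name and citation counts are invented for this example) of how an h-index can be computed from a list of citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:      # this paper still "covers" its rank
            h = rank
        else:
            break              # counts only fall from here on
    return h

# Six papers have been cited at least six times each, so the h-index is 6.
print(h_index([48, 33, 30, 20, 15, 9, 6, 5, 4, 2, 1]))  # 6
```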

A good H-index score depends not only on a prolific output but also on a large number of citations by other authors. It is important, therefore, that your research reaches a wide audience, preferably one to whom your topic is particularly interesting or relevant, in a clear, high-quality text. Young researchers and inexperienced scholars often look for articles that offer academic security by leaving no room for doubts or misinterpretations.

What is a good H-index score for a journal?

Journals also have their own H-index scores. Publishing in a journal with a high H-index maximizes your chances of being cited by other authors and, consequently, may improve your own H-index score. Some of the “giants” with the highest H-index scores are journals from top universities, such as Oxford University, with the highest score being 146, according to Google Scholar.

Knowing the H-index scores of journals of interest is useful when searching for the right one to publish your next paper. Even if you are just starting out as an author and do not yet have an H-index of your own, you may want to start in the right place to build your reputation.

See below some of the most commonly used databases that help authors find their H-index values:

  • Elsevier’s Scopus: includes Citation Tracker, a feature that shows how often an author has been cited. It remains the largest abstract and citation database of peer-reviewed literature.
  • Clarivate Analytics Web of Science: a digital platform that provides the h-index through its Citation Report feature.
  • Google Scholar: a growing database that calculates H-index scores for authors who have a profile.

Maximize the impact of your research by publishing high-quality articles. A richly edited text with flawless grammar may be all you need to capture the eye of other authors and researchers in your field. With Elsevier, you have the guarantee of excellent output, no matter the topic or your target journal.


Web of Science: h-index information

The h-index was developed by J.E. Hirsch and published in Proceedings of the National Academy of Sciences of the United States of America 102 (46): 16569–16572, November 15, 2005. It reflects the productivity of authors based on their publication and citation records.

The h-index is based on a list of publications ranked in descending order by the Times Cited. The value of h is equal to the number of papers (N) in the list that have N or more citations. This metric is useful because it discounts the disproportionate weight of highly cited papers or papers that have not yet been cited. 

Advantages: The h-index reflects not just the number of papers or the number of citations; it gives some indication of the number of well-cited papers. This provides an interesting complement to other performance metrics, since it is not influenced by a single highly cited paper.

Disadvantages: The h-index, like any other citation-based metric, is dependent on the subject area considered, as well as on the time since publication of important works. The h-index in the Citation Report reflects citations as of the most recent database update, so it could vary upon subsequent analyses.

Calculating:  A researcher (or a set of papers) has an h-index of N if he/she has published N papers that have N or more citations each. The h-index is based on Times Cited data from the database. It will not include citations from non-indexed resources. The h-index is based on the depth of the user's subscription and the selected timespan, in that certain items may not be retrieved based on those parameters.  Any record that is retrieved will include all of the Times Cited for the article, whether or not the user has a subscription to all of the citing articles.

Factors to consider:  As with all metrics based on citation, h-index will vary by such factors as: time, subject area, and the number of papers. Users should be careful to make appropriate comparisons such as comparing h-indexes within similar types of searches and/or similar subject areas.

Find benchmark h-indices:  Because the h-index can be determined for any population of articles, it is difficult to provide overall benchmarks for the value of the h-index. Very productive researchers in subject areas with high volumes of publication and citation can show h-index values over 100 at the peak of their scientific careers. Newer researchers in smaller subject areas can have h-indexes under 10.

Currently, Web of Science has a limit of 10,000 records that can be used to generate a Citation Report. To calculate an h-index using the result set, perform the following steps:

1. From the Results page, sort the result list by Times Cited, highest to lowest, using the “Sort by:” box on the right-hand side of the screen.

2. Find the record whose Times Cited count equals its position in the list. For instance, if record #19 has a Times Cited count of 19, the h-index is 19. If no record number has an equal Times Cited count, the h-index is the last record number whose Times Cited count is greater than the record number. For instance, if record number 62 has a Times Cited count of 63 and record 63 has a Times Cited count of 60, the h-index is 62.
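The same rule can be expressed in a few lines of code. This sketch (illustrative only, not part of the Web of Science documentation) assumes the counts are already sorted highest to lowest, as in a result list sorted by Times Cited:

```python
def h_from_sorted_counts(times_cited):
    """times_cited: citation counts sorted highest to lowest."""
    h = 0
    for record_number, cited in enumerate(times_cited, start=1):
        if cited >= record_number:
            h = record_number  # last record whose count still covers its position
        else:
            break
    return h

# Record 5 has 5 citations and record 6 has only 3, so the h-index is 5.
print(h_from_sorted_counts([20, 15, 9, 6, 5, 3]))  # 5
```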

Manual process for adding citations to the h-index

Incorrect citations and non-source citations would not be included in the Citation Report counts and the H index, because they will not have been linked to a specific source item in the set of papers being analyzed.  

The Citation Report uses all and only citations that are linked to the source record and accessible through the Times Cited link. Citations with incorrect or incomplete bibliographic data may not be linked, but you can see these records through the Cited Reference Search feature to determine if they would contribute significantly to the citation count.

Additional data from cited reference searches specific to all the source papers can be added to the output of the Citation Report to create highly accurate citation metrics, including the h-index. For most cases, there will be little or no variance in the h-index, but you may want to test several researcher records and draw your own conclusion regarding the methodology. Do keep in mind that h-index for researchers who are still actively publishing can change as updates are loaded.

To manually add citations to the h-index, choose the Save to Excel File option under Export Data on the Citation Report screen to import the data into Excel. Be sure to change the "number of records" to reflect the entire set or desired number or you will just get the default of 10 records.

Then, starting with the most-cited articles, add in additional citations from your Cited Reference Search Index display until you have reached the h-index inflection point (where the rank-count of the paper exceeds the citation count for that paper). You can add in an entire row for non-source items which are not included, or edit the Times Cited number in existing rows to include cited references that were missed due to data errors. Continue the cited reference search for the next paper to see if this would alter the h-index - if so, move down the list until you have crossed the h-index inflection. If not, a spot check of some of the papers below the h-index cut-off would offer additional reassurance that there are not missing citations that could further alter the h-index.
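As a rough illustration of that manual adjustment (an assumption about how one might script it, not a Clarivate feature; the paper names and citation numbers are hypothetical), the sketch below adds missed citations found through a Cited Reference Search to exported counts and checks whether the h-index inflection point moves:

```python
def h_index(counts):
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Times Cited counts exported from the Citation Report to Excel (hypothetical).
exported = {"paper_a": 12, "paper_b": 10, "paper_c": 8,
            "paper_d": 6, "paper_e": 5, "paper_f": 4}

# Missed citations found via Cited Reference Search (hypothetical corrections).
missed = {"paper_e": 1, "paper_f": 2}

print("before:", h_index(exported.values()))   # 5
for paper, extra in missed.items():
    exported[paper] += extra
print("after:", h_index(exported.values()))    # 6: two more papers cross the inflection point
```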


What is the h-index?


A simple definition of the h-index


An h-index is a rough summary measure of a researcher’s productivity and impact. Productivity is quantified by the number of papers, and impact by the number of citations the researcher’s publications have received.

The h-index can be useful for identifying the centrality of certain researchers, as researchers with a higher h-index will, in general, have produced more work that is considered important by their peers.

The h-index was originally defined by J. E. Hirsch in a Proceedings of the National Academy of Sciences article as the number of papers with citation number ≥ h. An h-index of 3 hence means that the author has published at least three articles, each of which has been cited at least three times.

The h-index can also be determined simply by charting the articles’ citation counts. The h-index is then given by the intersection of the chart’s diagonal with the citation data. In this case, there are 3 papers above the diagonal, and hence the h-index is 3.

Plotting citation count of papers to calculate the h-index
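For readers who want to reproduce that chart, here is a small matplotlib sketch (illustrative only; the citation counts are invented) that plots ranked citation counts against the diagonal; the h-index is the last rank at which the count still reaches the diagonal:

```python
import matplotlib.pyplot as plt

citations = sorted([17, 9, 5, 2, 1], reverse=True)  # illustrative counts
ranks = list(range(1, len(citations) + 1))

plt.bar(ranks, citations, label="citations per paper")
plt.plot(ranks, ranks, "k--", label="diagonal (rank = citations)")

# h-index: last rank whose citation count is at least the rank itself.
h = sum(1 for r, c in zip(ranks, citations) if c >= r)
plt.axvline(h, color="red", linestyle=":", label=f"h-index = {h}")

plt.xlabel("paper rank")
plt.ylabel("citations")
plt.legend()
plt.show()
```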

The definition of the h-index comes with quite a few desirable features:

  • First, it is relatively unaffected by outliers. If e.g. the top-ranked article had been cited 1,000 times, this would not change the h-index.
  • Second, the h-index will generally only increase if the researcher continues to produce good work. The h-index would increase to 4 if another paper was added with 4 citations, but would not increase if papers were added with fewer citations.
  • Third, the h-index will never be greater than the number of papers the author has published; to have an h-index of 20, the author must have published at least 20 articles which have each been cited at least 20 times.
Step-by-step outline: how to calculate your h-index

  • Step 1: List all your published articles in a table.
  • Step 2: For each article, record how often it has been cited.
  • Step 3: Rank the articles by the number of times they have been cited.
  • Step 4: The h-index can now be read off as the last position in the list at which the number of citations is still greater than or equal to the rank.

Here is an example of a table where articles have been ranked by their citation count and the h-index has been inferred to be 3.

Luckily, there are services like Scopus, Web of Science, and Google Scholar that can do the heavy lifting and automatically provide the citation count data and calculate the h-index.

Why it is important for your career to know about the h-index

The h-index is not something that needs to be calculated on a daily basis, but it's good to know where you are for several reasons. First, climbing the h-index ladder is something worth celebrating. Whether it's worth opening a bottle of champagne or just getting a caffè latte is up to you, but seriously, take your time to celebrate this achievement (there aren't that many in academia). More importantly, the h-index is one of the measures funding agencies or a university's hiring committee calculate when you apply for a grant or a position. Given the often huge number of applications, the h-index is calculated in order to rank candidates and apply a pre-filter.

Of course, funding agencies and hiring committees do use tools for calculating the h-index, and so can you.

It is important to note that depending on the underlying data that these services have collected, your h-index might be different. Let's have a look at the h-index of the well-known physicist Stephen W. Hawking to illustrate it:

So, if you are aware of a number of citations of your work that are not listed in these databases, e.g. because they are in conference proceedings not indexed in these databases, then please state that in your application. It might give your h-index an extra boost.

➡️ Learn more: What is a good h-index?

Can all your academic achievements be summarized by a single number?

Definitely not! People are aware of this, and there have been many attempts to address particular shortcomings of the h-index, but in the end each alternative is just another number that emphasizes or de-emphasizes certain aspects of a research record. Anyway, you have to know the rules in order to play the game, and you have to know the rules in order to change them. If you feel that your h-index does not properly reflect your academic achievements, be proactive and mention it in your application!

Frequently asked questions about the h-index

An h-index is a rough summary measure of a researcher’s productivity and impact. Productivity is quantified by the number of papers, and impact by the number of citations the researcher’s publications have received.

Google Scholar can automatically calculate your h-index; read our guide How to calculate your h-index on Google Scholar for further instructions.

Even though Scopus needs to crunch millions of citations to find the h-index, the look-up is pretty fast. Read our guide How to calculate your h-index using Scopus for further instructions.

Web of Science is a database that has compiled millions of articles and citations. This data can be used to calculate all sorts of bibliographic metrics including an h-index. Read our guide How to use Web of Science to calculate your h-index for further instructions.

The h-index is not something that needs to be calculated on a daily basis, but it's good to know where you are for several reasons. First, climbing the h-index ladder is something worth celebrating. But more importantly, the h-index is one of the measures funding agencies or the university's hiring committee calculate when you apply for a grant or a position. Given the often huge number of applications, the h-index is calculated in order to rank candidates and apply a pre-filter.


Calculate your h-index


Use metrics  to provide evidence of:

  • engagement with your research, and
  • the impact of your research.


The h-index is a measure of the number of publications an author has produced (productivity), as well as how often they are cited.

h-index = the number of publications with a citation number greater than or equal to h.

For example, 15 publications each cited 15 times or more gives an h-index of 15.

Read more about the h-index, first proposed by J.E. Hirsch, as An index to quantify an individual's scientific research output .

  • Do an author search for yourself in Scopus
  • Click on your name to display your number of publications, citations and h-index.

Google Scholar

  • Create a Google Scholar Citations Profile
  • Make sure your publications are listed.

Web of Science

Create a citation report of your publications that will display your h-index in Web of Science .

Watch Using Web of Science to find your publications and track record metrics 

h-index tips

  • Citation patterns vary across disciplines . For example, h-indexes in Medicine are much higher than in Mathematics
  • h-indexes are dependent on the coverage and related citations in the database. Always provide the data source and date along with the h-index
  • h-indexes do not account for different career stages
  • Your h-index changes over time . Recalculate it each time you include it in an application

Provide additional information about your metrics when talking about your h-index.

Example statement

A statement about your h-index could follow this format:

"My h-index, based on papers indexed in Web of Science, is 10. It has been 5 years since I finished my PhD. I have 4 papers (A, B, C, D) with more than 20 citations and 1 paper (E) with 29 citations (Web of Science, 05/08/19). I also have an additional 3 papers not indexed by WoS, with 29 citations based on Scopus data (01/12/20)"

Other indices

  • i10-index: the number of papers with at least 10 citations
  • g-index: a modification of the h-index that gives more weight to highly cited papers
  • m-quotient: accounts for career length; the h-index divided by the number of years since an author's first publication (see the sketch below)
  • h-index and variants: an overview of various indices, including their advantages and disadvantages
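The sketch below shows how these variants relate to the same citation data. It is illustrative only: the citation counts are invented, the i10-index and m-quotient follow the descriptions above, and the g-index follows Egghe's published definition (the largest g such that the top g papers together have at least g² citations).

```python
def h_index(counts):
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def i10_index(counts):
    # Number of papers with at least 10 citations.
    return sum(1 for c in counts if c >= 10)

def g_index(counts):
    # Largest g such that the top g papers together have at least g^2 citations.
    ranked = sorted(counts, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def m_quotient(counts, years_since_first_publication):
    # h-index divided by career length in years.
    return h_index(counts) / years_since_first_publication

citations = [42, 18, 15, 12, 10, 7, 4, 2]  # hypothetical citation counts
print(h_index(citations),                  # 6
      i10_index(citations),                # 5
      g_index(citations),                  # 8
      round(m_quotient(citations, 8), 2))  # 0.75 (assuming 8 years since first paper)
```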


Calculate Your Academic Footprint: Your H-Index


What is an H-Index?

The h-index captures research output based on the total number of publications and the total number of citations to those works, providing a focused snapshot of an individual’s research performance. Example: If a researcher has 15 papers, each of which has at least 15 citations, their h-index is 15.

Useful For

  • Comparing researchers of similar career length.
  • Comparing researchers in a similar field, subject, or department who publish in the same journal categories.
  • Obtaining a focused snapshot of a researcher’s performance.

Not Useful For

  • Comparing researchers from different fields, disciplines, or subjects.  
  • Assessing fields, departments, and subjects where research output is typically books or conference proceedings as they are not well represented by databases providing h-indices.

1 Working Group on Bibliometrics. (2016). Measuring Research Output Through Bibliometrics. University of Waterloo. Retrieved from https://uwspace.uwaterloo.ca/bitstream/handle/10012/10323/Bibliometrics%20White%20Paper%202016%20Final_March2016.pdf?sequence=4&isAllowed=y

2  Alakangas, S. & Warburton, J. Research impact: h-index. The University of Melbourne. Retrieved from http://unimelb.libguides.com/c.php?g=402744&p=2740739  

Calculate Manually

To manually calculate your h-index, organize articles in descending order, based on the number of times they have been cited.

In the below example, an author has 8 papers that have been cited 33, 30, 20, 15, 7, 6, 5 and 4 times. This tells us that the author's h-index is 6.

Article   Times Cited
1         33
2         30
3         20
4         15
5         7
6         6   ← the sixth article has at least 6 citations, so the h-index is 6
7         5
8         4

  • An h-index of 6 means that this author has published at least 6 papers that have each received at least 6 citations.

More context:

  • The first paper has been cited 33 times, and gives us a 1 (there is one paper that has been cited at least once).
  • The second paper has been cited 30 times, and gives us a 2 (there are two papers that have been cited at least twice).
  • The third paper gives us a 3 and all the way up to 6 with the sixth highest paper.
  • The final two papers have no effect in this case as they have been cited less than six times (Ireland, MacDonald & Stirling, 2012).
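Running the same counts through a few lines of Python (illustrative only, not part of the guide) confirms the result:

```python
def h_index(counts):
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# The eight papers from the example above.
print(h_index([33, 30, 20, 15, 7, 6, 5, 4]))  # 6
```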

1 Ireland, T., MacDonald, K., & Stirling, P. (2012). The h-index: What is it, how do we determine it, and how can we keep up with it? In A. Tokar, M. Beurskens, S. Keuneke, M. Mahrt, I. Peters, C. Puschmann, T. van Treeck, & K. Weller (Eds.), Science and the internet (pp. 237-247). Düsseldorf University Press.

Calculate Using Databases

  • Given Scopus  and Web of Science 's citation-tracking functionality, they can also calculate an individual’s h-index based on content in their particular databases.  
  • Likewise, Google Scholar collects citations and calculates an author's h-index via the Google Scholar Citations Profile feature.

Each database may determine a different h-index for the same individual as the content in each database is unique and different. 


How do I find the h-index for an author?

What is the h-index?

One measure of an author's productivity as well as citation-based impact can be analyzed with a tool known as the h Index, so named after its developer, Jorge E. Hirsch.  The h Index is based on a scholar's most cited works and the number of times these have been referenced in other scholars' publications. 

As an example, "an h Index for a group of selected documents or selected author(s) with an h Index of 12 means that out of the total number of documents selected to produce the graph, 12 of the documents have been cited at least 12 times. Published documents with fewer citations than h, in this case less than 12, are considered, but would not count in the h Index."  [Taken from the Scopus database]

Limitations

-Comparisons should not be made among researchers with different career lengths and different disciplines.

-No adjustment is made for researchers with short careers and/or those who have published only a few, yet significant articles.

-Multiple author IDs in a database for the same author will skew results.

Finding the h-index

You may want to check both Web of Science and Scopus to compare values (or ask a librarian for assistance). The values can differ in the 2 databases based on the different dates covered as well as different journals included.

Web of Science

Note: The h Index in the Web of Science is based on the depth of Mayo Clinic's subscription (1975+) and the calculation only includes items covered by the Web of Science database.

1. Open the Web of Science database [found under Databases on the library web site].

2. Select Author from the drop-down menu to the right of the Basic Search box [“Author” is listed under the “Topic” drop-down menu].

3. Enter the author’s last name/first initial as directed and click search.

4. Refine your search by organization [e.g., Mayo], research area or other filters.

5. Review your results.

6. On the Author results page, click Create Citation Report to the right of the first citation.

7. From the Citation Report screen, see the h-index in the right column.

Scopus

Note: Scopus is in the process of updating pre-1996 cited references going back to 1970, so the h-index might increase over time. Also, the calculation only includes items covered in the Scopus database.

1. Open the Scopus database [found under Databases on the library web site].

2. Click the Author Search tab and type the author’s last name and first name or initials in the search box. You can also refine your search by typing an organization [such as Mayo] in the Affiliation box. Click the Search button.

3. A list of authors appears with different variations of the name. Check all the appropriate author names and click on View Citation Overview.

4. The h-index is displayed at the top of the page under the number of cited documents.


Research Impact Assessment (Health Sciences)


What is the h-index?

Use of the h-index is controversial. Some organizations use the h-index for evaluating researchers while others do not use it. As information professionals, we do not advise using the h-index without fully understanding its limitations and caveats.

Use the h-index with extreme caution.

  • Article: An index to quantify an individual's scientific research output by J. E. Hirsch, 2005. "I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher."
  • Blog post: Halt the h-index by Sarah de Rijcke, Ludo Waltman, and Thed van Leeuwen, 2021. "Using the h-index in research evaluation? Rather not. But why not, actually? Why is using this indicator so problematic? And what are the alternatives anyway?"

Image: screenshot of metrics listed in an author profile in Michigan Experts, showing the author's yearly publication counts and the h-index from four different sources: Scopus, Dimensions, Web of Science, and Europe PMC.

  • This indicator typically varies by source (e.g., different values in Google Scholar, Scopus, and Web of Science).
  • It is not field-normalized and is not an accurate comparison of productivity across disciplines.
  • It is weighted positively towards mid and late-career researchers as publications have had more time to accrue citations.

There are several variations of the h-index, including:

  • i10-index: a productivity indicator created by Google Scholar and used in Google's My Citations feature. It represents the number of publications with at least 10 citations.
  • g-index: created by Leo Egghe in 2006, the g-index gives more weight to authors' highly cited articles.

Where can I find my h-index?

The resources below contain author profiles which list an h-index. Remember, this metric typically varies by source, so an author's h-index in Scopus may be different than the one in Google Scholar.

  • Scopus 1. Once in Scopus, select "Authors" and perform search for your name. 2. Click on the correct name in the search results to view the full author profile, where the h-index is listed. 3. Click on "Analyze author output" for additional citation data.
  • Web of Science 1. Once in Web of Science, click on "Author Search" and search for your name. 2. Click on the correct name in the list of results to view the full author profile, including the h-index.
  • Google Scholar @ U-M 1. Search for the author or an article by the author. 2. On the search results page, click on the author's name to view their Google Scholar profile which includes the h-index. Note: not all authors have Google Scholar profiles; underlined author names indicate that a profile page exists.
  • Michigan Experts 1. In the bottom right, under "Useful Links" click on "Edit Your Michigan Experts Profile." 2. Log in with your U-M id and password. 3. Scroll down to the box called "H-Index" and h-index is listed there for the data sources that Michigan Experts uses.

Maximizing your research identity and impact


h-index for researchers: definition

  • The h-index is a measure used to indicate the impact and productivity of a researcher based on how often his/her publications have been cited.
  • The physicist, Jorge E. Hirsch, provides the following definition for the h-index:  A scientist has index h if  h of his/her N p  papers have at least h citations each, and the other (N p  − h) papers have no more than h citations each. (Hirsch, JE (15 November 2005) PNAS 102 (46) 16569-16572)
  • The h -index is based on the highest number of papers written by the author that have had at least the same number of citations.
  • A researcher with an h-index of 6 has published six papers that have been cited at least six times by other scholars.  This researcher may have published more than six papers, but only six of them have been cited six or more times. 

Whether an h-index is considered strong, weak or average depends on the researcher's field of study and how long they have been active. The h-index of an individual should be considered in the context of the h-indices of equivalent researchers in the same field of study.

h-index for journals

Definition: The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a journal with an h-index of 20 has published 20 articles that have been cited 20 or more times.

Available from:

  • SJR (Scimago Journal & Country Rank)

Whether an h-index is considered strong, weak or average depends on the discipline the journal covers and how long it has published. The h-index of a journal should be considered in the context of the h-indices of other journals in similar disciplines.

h-index for institutions

Definition: The h-index of an institution is the largest number h such that at least h articles published by researchers at the institution were cited at least h times each. For example, if an institution has an h-index of 200, its researchers have published 200 articles that have been cited 200 or more times.

Available from: exaly

Computing your own h-index

In a spreadsheet, list the number of times each of your publications has been cited by other scholars.

Sort the spreadsheet in descending order by the number of times each publication has been cited. Then count down the list until you reach the last article whose position in the list does not exceed its number of citations; that position is your h-index.

Article   Times Cited
1         50
2         15
3         12
4         10
5         8
6         7    ==> h-index is 6
7         5
8         1

Ways to increase your h-index

How to Successfully Boost Your H-index (Enago Academy, 2019)

Limitations of the h-index

Glänzel, Wolfgang. On the Opportunities and Limitations of the H-index, 2006.

Variations of the h-index

  • h-index based upon data from the last 5 years
  • i10-index: the number of articles by an author that have at least ten citations; created by Google Scholar
  • m-index: used to compare researchers with different lengths of publication history; calculated as the h-index divided by the number of years since the author's first publication

Using Scopus to find a researcher's h-index

Additional resources for finding a researcher's h-index

Web of Science Core Collection or Web of Science All Databases

  • Perform an author search
  • Create a citation report for that author.
  • The h-index will be listed in the report.

Set up your author profile in the following three resources.  Each resource will compute your h-index.  Your h-index may vary since each of these sites collects data from different resources.

  • Google Scholar Citations: computes an h-index based on publications and cited references in Google Scholar.
  • Researcher ID: computes an h-index based on publications and cited references in the last 20 years of Web of Science.


Assessing Article and Author Influence

The H-Index: A Brief Guide

This page provides an overview of the H-Index, an attempt to measure the research impact of a scholar. The topics include:

  • What is the H-Index?
  • How is the H-Index computed?
  • Factors to bear in mind
  • Using Harzing's Publish or Perish to assess the H-Index
  • Using Web of Science to assess the H-Index
  • H-Index video
  • Contemporary H-Index
  • Selected further reading

The h-index, created by Jorge E. Hirsch in 2005, is an attempt to measure the research impact of a scholar. In his 2005 article Hirsch put forward "an easily computable index, h, which gives an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions." He believed "that this index may provide a useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement." There has been much controversy over the value of the h-index, in particular whether its merits outweigh its weaknesses. There has also been much debate concerning the optimal methodology to use in assessing the index.  In locating someone's h-index a number of methodologies/databases may be used. Two major ones are ISI's Web of Science and the free Harzing's Publish or Perish which uses Google Scholar data.

An h-index of 20 signifies that a scientist has published 20 articles, each of which has been cited at least 20 times. Sometimes the h-index is, arguably, misleading. For example, if a scholar's works have received, say, 10,000 citations, he may still have an h-index of only 12, because only 12 of his papers have been cited at least 12 times. This can happen when one of his papers has been cited thousands and thousands of times. So, to have a high h-index one must have published a large number of papers. There have been instances of Nobel Prize winners in scientific fields who have a relatively low h-index. This is due to them having published one or a very small number of extremely influential papers and perhaps numerous other papers that were not so important and, consequently, not well cited.

  • As citation practices/patterns can vary quite widely across disciplines, it is not advisable to use h-index scores to assess the research impact of personnel in different disciplines.
  • The h-index is not very widely used in the Arts and Humanities.
  • H-index scores can vary widely depending on the methodology/database used. This is because different methodologies draw upon different citation data. When comparing different people’s H-Index it’s essential to use the same methodology. The h-index does not distinguish the relative contributions of authors in multi-author articles.
  • The h-index may vary significantly depending on how long the scholar has been publishing and on the number of articles they’ve published. Older, more prolific scholars will tend to have a higher h-index than younger, less productive ones.
  • The h-index can never decrease. This, at times, can be a problem as it does not indicate the decreasing productivity and influence of a scholar.

Using Harzing's Publish or Perish to Assess the H-Index

Publish or Perish utilizes data from Google Scholar. Its software may be downloaded from the Publish or Perish website . A person's h-index located through Publish or Perish is often higher than the same person's index located by means of ISI's Web of Science . This is primarily because the Google Scholar data utilized by Publish or Perish includes a much wider range of sources, e.g. working papers, conference papers, technical reports etc., than does Web of Science . It has often been observed that Web of Science may sometimes produce a more authoritative h-index than Publish or Perish. This tends to be more likely in certain disciplines in the Arts, Humanities and Social Sciences.

After you've launched the application, click on "Author impact" on top. Enter the author's name as initial and surname enclosed with quotation marks, e.g. "S Helluy". Then click "Lookup" (top right). You'll see a screen with a listing of S. Helluy's works arranged by number of citations. Above this listing is a smaller panel where one may see the h-index score of 17:

H-index of 17

Publish or Perish uses Google Scholar data and these data occasionally split a single paper into multiple entries. This is usually due to incorrect or sloppy referencing of a paper by others, which causes Google Scholar to believe that the referenced works are different. However, you can merge duplicate records in the Publish or Perish results list. You do this by dragging one item and dropping it onto another; the resulting item has a small "double document" icon as illustrated below:

merged row indication in interface

  • Alan Marnett (2010). "H-Index: What It Is and How to Find Yours"
  • Harzing, Anne-Wil (2008) Reflections on the H-Index .
  • Hirsch, J. E. (15 November 2005). "An index to quantify an individual's scientific research output" . PNAS 102 (46): 16569–16572.
  • A. M. Petersen, H. E. Stanley, and S. Succi (2011). "Statistical Regularities in the Rank-Citation Profile of Scientists" Nature Scientific Reports 181 : 1–7.
  • Williams, Antony (2011). Calculating my H Index With Free Available Tools .

If you are using Clarivate's Web of Science database to assess an h-index, it is important to remember that Web of Science counts only citations appearing in the journals it indexes. A scholar's work may be published in journals not covered by Web of Science; it is not possible to add these to the database's citation report so that they count toward the h-index. Also, Web of Science only includes citations to journal articles (no books, chapters, working papers, etc.). Moreover, Web of Science's coverage of journals in the Social Sciences and the Humanities is relatively sparse. This is especially so for the Humanities.

Select the option "Cited Reference Search" (on top). Enter the person's last name and first initial followed by an asterisk, e.g. Helluy S*. If the person always uses a second first name, include the second initial followed by an asterisk, e.g. Franklin KT*.


If other authors have the same name, it’s important that you omit their articles. You can use the check boxes to the left of each article to remove individual items that are not by the author you are searching. The “Refine Results” column on the left can also help by limiting to relevant “Organizations – Enhanced”, by “Research Areas”, by “Publication Years”.

When you've determined that all the articles in the list are by the author you're searching for (S. Helluy), click on "Create Citation Report" on the right. The h-index for S. Helluy will be displayed along with other citation stats.

H-index for S. Helluy

Notice the two bar charts that graph the number of items published each year and the number of citations received each year.

bar charts of published items

If you wish to see how the person's h-index has changed over a time period you can use the drop-down menus below to specify a range of years. Web of Science will then re-calculate the h-index using only those articles added for those particular years.

h-index across selected years

Contending that Hirsch's h-index does not take into account the "age" of an article, Sidiropoulos et al. (2006) came up with a modification, the Contemporary H-Index. They argued that though some older scholars may have been "inactive" for a long period, their h-index may still be high since the h-index cannot decline. This may be considered somewhat unfair to older, senior scholars who continue to produce (if one has published a lot and already has a high h-index, it is more and more difficult to increase the index). It may also be seen as unfair to younger brilliant scholars who have had time to publish only a small number of significant articles and consequently have a low h-index. Hirsch's h-index, it is argued, doesn't distinguish between the different productivity/citations of these different kinds of scholars. The solution of Sidiropoulos et al. is to weight articles according to the year in which they were published. For example, "for an article published during the current year, its citations account four times. For an article published 4 years ago, its citations account only one time. For an article published 6 years ago, its citations account 4/6 times, and so on. This way, an old article gradually loses its 'value', even if it still gets citations." Thus, more emphasis is given to recent articles, thereby favoring the h-index of scholars who are actively publishing.
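As a rough sketch of that weighting scheme (assumptions: the weight is taken as 4 divided by the number of years since publication, counting the current year as 1, which reproduces the figures quoted above; this is an illustration, not the full parameterized formula from Sidiropoulos et al., and the publication record is hypothetical):

```python
from datetime import date

GAMMA = 4  # weighting constant used in the example above

def contemporary_h_index(papers, current_year=None):
    """papers: list of (publication_year, citation_count) pairs.
    Citations are down-weighted by the paper's age: current-year citations
    count 4x, a 4-year-old paper's count 1x, a 6-year-old paper's 4/6x."""
    if current_year is None:
        current_year = date.today().year
    weighted = sorted(
        ((GAMMA / max(current_year - year, 1)) * cites for year, cites in papers),
        reverse=True,
    )
    return sum(1 for rank, score in enumerate(weighted, start=1) if score >= rank)

# Hypothetical record: one old, well-cited paper plus several recent ones.
papers = [(2005, 40), (2010, 1), (2020, 6), (2022, 5), (2023, 3), (2024, 2)]
print(contemporary_h_index(papers, current_year=2024))  # 5
```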

One of the easiest ways to obtain someone's contemporary h-index, or "hc-index", is to use Harzing's Publish or Perish software.

Publish or Perish interface


Explainer: what is an H-index and how is it calculated?

By Andre Spicer, Professor of Organisational Behaviour, Cass Business School, City, University of London

A previously obscure scholarly metric has become an item of heated public debate. When it was announced that Bjorn Lomborg, a researcher who is sceptical about the relative importance of climate change, would be heading a research centre at the University of Western Australia, the main retort from most scientists was “just look at the guy’s H-index!”

Many scientists who were opposed to Lomborg’s new research centre pointed out that his H-index score was 3. Usually, someone appointed to a professorship in the natural sciences would be expected to have an H-index about ten times that.

For people outside of academia this measure probably makes little sense. So what exactly is an H-index and why should we use it to judge whether someone should be appointed to lead a research centre?

What is the H-index and how is it calculated?

The H-Index is a numerical indicator of how productive and influential a researcher is. It was invented in 2005 by Jorge Hirsch, a physicist at the University of California. Originally, Professor Hirsch wanted to create a numerical indication of the contribution a researcher has made to the field.


At the time, an established measure was raw citation counts. If you wanted to work out how influential a researcher was, you would simply add up the number of times other research papers had cited papers written by that researcher.

Although this was relatively straightforward, researchers quickly discovered a significant problem with this score – you could get a huge citation count through being the scientific equivalent of a one-hit wonder.

If you published one paper that was widely cited and then never published a paper again after that, you would technically be successful. In such situations, outliers would have an undue and even distorting effect on our overall evaluation of a researcher’s contribution.

To rectify this problem, Hirsch suggested another approach for calculating the value of researchers, which he rather immodestly called the H-index (H for Hirsch of course). This is how he explains it:

A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np−h) papers have no more than h citations each.

To put it a slightly simpler way: you give someone an H-index on the basis of the number of papers (H) that have been cited at least H times. For instance, according to Google Scholar, I have an H-index of 28. This is because I have 28 papers that are cited at least 28 times by other research papers. What this means is that a scientist is rewarded for having a range of papers with good levels of citations rather than one or two outliers with very high citations.

It also means that if I want to increase my H-index, it is best to focus on encouraging people to read and cite my papers with more modest citation levels – rather than having them focus on one or two well-known papers which are already widely cited.

The influence of the H-index

While the H-index might have been created for the purpose of evaluating researchers in the area of theoretical physics, its influence has spread much further. The index is routinely used by researchers in a wide range of disciplines to evaluate both themselves and others within their field.

For instance, H-indexes are now a common part of the process of evaluating job applicants for academic positions. They are also used to evaluate applicants for research grants. Some scholars even use them as a sign of self-worth.

Calculating a scholar’s H-index has some distinct advantages. It gives some degree of transparency about the influence they have in the field. This makes it easy for non-experts to evaluate a researcher’s contribution to the field.


If I was sitting on an interview panel in a field that I know nothing about (like theoretical physics), I would find it very difficult to judge the quality of the candidates' research. With an H-index, I am given a number that can be used to judge how influential or otherwise the person we are interviewing actually is.

This also has the advantage of taking out many of the idiosyncratic judgements that often cloud our perception of a researcher’s relative merits. If for instance I prefer “salt water” economics to “fresh water” economics, then I am most likely to be positively disposed to hiring the salt water economist and coming up with any argument possible to not accept the fresh water economist.

If however, we are simply given an H-index, then it becomes possible to assess each scholar in a slightly more objective fashion.

The problems with the H-index

There are some dangers that come with the increasing prevalence of H-scores. It is difficult to compare H-scores across fields. H-scores can often be higher in one field (such as economics) than in another (such as literary criticism).

Like any citation metric, H-scores are open to manipulation through practices like self-citation and what one of my old colleagues liked to call “citation soviets” (small circles of people who routinely cite each other’s work).

The H-index also strips out any information about author order. The result is that there is little information about whether you published an article in a top journal on your own or whether you were one member of a huge team.

But perhaps the most worrying thing about the rise of H-scores, or any other measure of research productivity or influence for that matter, is they actually strip out the ideas. They allow us to talk about intellectual endeavour without any reference at all to the actual content.

This can create a very strange academic culture where it is quite possible to discuss academic matters for hours without once mentioning an idea. I have been to meetings where people are perfectly comfortable chatting about the ins and outs of research metrics at great length. But little discussion is had about the actual content of a research project.

As this attitude to research becomes more common, aspirational academics will start to see themselves as H-index entrepreneurs. When this happens, universities will cease to be knowledge creators and instead become metric maximisers.


Measuring your research impact: H-Index


Other H-Index Resources

  • An index to quantify an individual's scientific research output This is the original paper by J.E. Hirsch proposing and describing the H-index.

H-Index in Web of Science

The Web of Science uses the H-Index to quantify research output by measuring author productivity and impact.

H-Index = number of papers (h) with a citation number ≥ h.

Example: a scientist with an H-Index of 37 has 37 papers cited at least 37 times.  

Advantages of the H-Index:

  • Allows for direct comparisons within disciplines
  • Measures quantity and impact by a single value.

Disadvantages of the H-Index:

  • Does not give an accurate measure for early-career researchers
  • Calculated by using only articles that are indexed in Web of Science.  If a researcher publishes an article in a journal that is not indexed by Web of Science, the article as well as any citations to it will not be included in the H-Index calculation.

Tools for measuring H-Index:

  • Web of Science
  • Google Scholar

This short clip helps to explain the limitations of the H-Index for early-career scientists:


Research metrics


Finding an H-Index with Web of Science

1. Go to Web of Science from the OSU Libraries database list  and change your search from "basic" to "author."

screenshot of Web of Science search screen

2. Enter the author's name and click "select research domain."  Choose all the fields the author is likely to have published in.

3. Click "select organization" to narrow to specific organizations.*

4. From the results page, click "Create citation report."

screenshot of "create citation report" option in web of science

5. You'll see the citation analysis numbers at the top of the page, but first...

6. Look through the results to make sure they're accurate.  Check the box next to any records that aren't relevant and click "go."  You can also limit to a specific date range (if, for instance, your researcher wasn't active until 1970, you probably want to limit to 1970 - present).

7.  Now  you will see the h-index at the top of the page along with other metrics.

screenshot of citation analysis numbers in Web of Science

* Note : If someone has a unique name, you may want to skip steps 2 & 3.  Limiting by research domain and organization sometimes excludes relevant results.  On the flip side, if someone has a common name (like J Smith), failure to use these limiters can result in thousands of false positives.  It's a good idea to compare the final result set to a researcher's CV or other authoritative list of publications.  Find this disambiguation process frustrating?  Librarians do, too!  Encourage the researchers you know to get an ORCID identifier .

What Is the H-Index?

J. E. Hirsch, a physicist at the University of California, proposed the h-index to quantify individuals' scientific research output in a 2005  PNAS  paper .  The h-index measures both productivity and citation impact.

To calculate your h-index, list your papers in descending order of citation count. A paper counts toward your h-index only if its citation count is greater than or equal to its rank in that list. Thus, if your first paper has at least 1 citation, your h-index is at least 1; if your second paper has at least 2 citations, your h-index is at least 2, and so on. If you have papers A, B, C, D, and E with 68, 12, 10, 3, and 2 citations, respectively, your h-index is 3, because paper D (your fourth paper) would need at least 4 citations to count, but it has only 3.
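This procedure is easy to express in code. The following is a minimal, illustrative Python sketch (the function name and sample data are ours, not part of any database or tool mentioned in this guide); it reproduces the A–E example above:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers from most to least cited, then find the last rank
    # at which the citation count is still >= the rank itself.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Papers A-E from the example above: 68, 12, 10, 3, and 2 citations.
print(h_index([68, 12, 10, 3, 2]))  # -> 3
```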

Finding an H-Index with Google Scholar

1. To find a researcher's h-index with Google Scholar , search for their name.  

2. If a user profile comes up* with the correct name, discipline, and institution, click on that.

screenshot of a user profile in Google Scholar

3. The h-index will be displayed for that author under "citation indices" on the top right-hand side.

screenshot of citation indices in Google Scholar

* If no user profile comes up, you'll need to use another tool, like Web of Science (below) or manually calculate the individual's h-index.

H-Index Caveats

  • What constitutes a "high" h-index varies by discipline (physicists have higher h-indexes than librarians, generally).
  • People who have many co-authors will have a higher h-index than those who author more solo papers.
  • H-index calculators (such as Web of Science and Google Scholar) will estimate someone's h-index differently from one another because they're relying on different sources (Web of Science's database is smaller and more academic than Google Scholar's).
  • The h-index is dependent on a researcher's "academic age."  Someone who has been publishing longer will have a higher h-index relative to a newer researcher.
  • Manually calculating an h-index will likely result in a different number than automated h-indexes.

It's always best to use the h-index in context, comparing scholars with their peers, and using other metrics as well.


Academia Insider

What is a good H-index for each academic position?

Navigating the complex landscape of academia often involves decoding a series of metrics and benchmarks.

Among these, the h-index stands out as a critical measure of a scholar’s productivity and influence.

But what exactly constitutes a “good” h-index? And how does it vary across different academic positions and disciplines—from PhD students to full professors in fields as diverse as Life Sciences, Engineering, and Humanities?

On average, a good h-index for a PhD student is between 1 and 5, for a postdoc between 2 and 17, for an assistant professor between 4 and 35, and for a full professor typically 30 or above.

Our comprehensive blog delves into the nuances of the h-index, its relevance in academic promotions, and the challenges it presents. 

The sections below break down the h-indexes that could be considered typical for different academic positions and fields.


What is the h-index metric?

The h-index is a metric designed to quantify the productivity and impact of a researcher, and increasingly, groups or journals.

Developed by physicist Jorge Hirsch, the index is computed as the number of papers (publications) with citation counts greater than or equal to h.

For instance, if a researcher has four papers cited at least four times each, their h-index is 4.

The metric comes in handy when comparing scholars within the same field but has limitations when used across disciplines. This is due to factors such as the average number of references per paper, the typical productivity of researchers in the field, and the field’s overall size.

Several databases, like Google Scholar, Web of Science, and Scopus, offer h-index calculations. However, it’s crucial to note that your h-index may vary between platforms due to differences in their database’s scope and what papers they include.

The h-index has become a crucial factor in academia for promotions, with assistant professors often striving for a ‘good h-index’ to become a full professor.

The h-index is not without its challenges:

  • it may not accurately reflect the impact of scholars with fewer but highly cited publications. In such cases, the h-index may paint an incomplete picture of an author’s impact, favoring those who publish more frequently regardless of the quality or impact of their work.
  • it is heavily influenced by the field’s norms. For example, in disciplines where papers usually have fewer citations, even established researchers may have a relatively low h-index.

Despite its limitations, the h-index remains a widely-used metric for assessing the influence and productivity of researchers, offering a more nuanced picture than simply counting the number of papers published or the number of citations.

How to calculate your h-index score

Calculating your h-index is a straightforward process, especially if you use academic databases that track citations. Here’s a simple step-by-step guide:

Manual Calculation

  • List Your Publications : Make a list of all your academic publications that have been cited.
  • Count Citations : For each publication, find out the number of times it has been cited. You can use Google Scholar, Web of Science, or Scopus for this, or you can manually check academic journals.
  • Sort by Citation Count : Arrange the list of publications in descending order based on the number of citations each paper has received.
  • Find the H-Index : Start from the top of the sorted list and look for the last publication where the number of citations is greater than or equal to the position in the sorted list. That position number is your h-index.

For example, if you have papers cited 10, 8, 5, 4, and 2 times, then your h-index would be 4 because you have 4 papers that have been cited at least 4 times.
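If you would rather let a script perform the four steps above, here is a small, illustrative Python sketch (the function name is ours); it reproduces the 10, 8, 5, 4, 2 example just given and also reports which papers make up the h "core":

```python
def h_index_with_core(citations):
    """Return (h, core), where core lists the citation counts of the
    papers that contribute to the h-index, following the manual steps above."""
    ranked = sorted(citations, reverse=True)         # step 3: sort by citations, descending
    h = 0
    for rank, cites in enumerate(ranked, start=1):   # step 4: compare rank with citation count
        if cites >= rank:
            h = rank
    return h, ranked[:h]

# The example from the text: papers cited 10, 8, 5, 4, and 2 times.
print(h_index_with_core([10, 8, 5, 4, 2]))  # -> (4, [10, 8, 5, 4])
```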

Using Google Scholar

  • Create/Log into Account : Go to Google Scholar and create an account if you haven’t. If you already have one, log in.
  • Add Publications : You’ll be prompted to add your publications to your Google Scholar profile.
  • View H-Index : Once your publications are added, Google Scholar automatically calculates your h-index and displays it on your profile.

Using Web of Science

  • Access the Database : Go to Web of Science and log in or access it via your institution.
  • Search for Author : Search for your name in the author search.
  • Check H-Index : Your h-index will be displayed along with other citation metrics.

Using Scopus

  • Access and Search : Go to Scopus and use the author search to find your profile.
  • Locate H-Index : Your profile will display your h-index along with other metrics.

Calculating your h-index is an essential part of understanding your academic impact, and these steps should help you determine yours.

What is a good h-index for a PhD student?

Determining what constitutes a "good" h-index for a PhD student can vary greatly depending on the academic field, the number of years the student has been in the program, and other factors such as collaborative work and the prominence of the journals in which they've published.

Here’s a table that attempts to provide some generalized benchmarks:

It’s worth noting that while a “good” h-index can be indicative of a productive and impactful research career, it’s not the only metric to consider. Quality of research, contribution to the field, and other factors like teaching and community service are also important.

What is a good h-index for a Postdoc?

A “good” h-index for a Postdoc will typically be higher than for a PhD student, given the additional years of research and publications.

Again, the numbers can vary depending on the field, the productivity of the researcher, and other variables like the rate of collaboration and the types of journals in which they’ve published.

Here’s a generalized table:

Remember that while the h-index is a useful metric, it’s not the end-all measure of academic success. Qualities like the impact and innovation of one’s research, mentorship, and broader contributions to science and the academic community are also vital aspects of a successful Postdoc experience.

What’s a good h-index for an assistant professor academic position?

The h-index for an Assistant Professor would usually be higher than for a PhD student or Postdoc due to more years of research and publications.

Like in previous cases, the “good” h-index varies significantly based on academic field, years in the role, and other variables such as the type of institution, rate of collaboration, and types of journals in which the researcher has published.

It’s worth mentioning that although a “good” h-index is beneficial for career advancement, including promotions to Associate or Full Professor roles, it’s not the only metric of importance.

Peer review, teaching effectiveness, and service to the academic community are also critical factors in evaluating an Assistant Professor’s performance.

What is a good h-index for an associate professor?

The h-index for an Associate Professor would typically be higher still, given the further years of research and publishing, as well as the likelihood of having guided PhD students and Postdocs, which often results in collaborative publications.

Again, while a strong h-index is beneficial for career advancement and often required for promotion to Full Professor, it is not the sole indicator of academic success.

Qualities like innovative research, excellence in teaching, and meaningful service to the academic community are also critical in evaluating an Associate Professor’s overall performance.

H-index required for an academic position – Full professor? 

A Full Professor is generally expected to have a high h-index, reflecting a long career with significant contributions to research.

It’s important to recognize that the h-index will vary by academic field and will often be influenced by a range of factors such as international collaborations, research funding, and the impact factor of journals where the work is published.

Here’s a generalized table for what might be considered a “good” h-index for a Full Professor:

A Full Professor’s career is also evaluated based on other achievements, such as securing research grants, publishing influential works beyond journal articles, mentorship, administrative roles, and service to the academic and broader community.

Wrapping up – what h-index is considered good?

The quest to quantify academic productivity and influence has led to the widespread adoption of the h-index as an evaluative metric.

While this index offers a useful, albeit simplified, snapshot of a researcher’s impact, it’s crucial to understand its nuances and limitations.

Notably, what constitutes a “good” h-index can vary dramatically depending on several factors, including the academic discipline, stage of career, and other variables such as types of publications and rate of collaboration.

This blog has provided a comprehensive guide to the h-index, outlining its significance, methodology for calculation, and what might be considered typical scores across various academic stages and fields.

The h-index should not be viewed in isolation.

Other qualitative factors like the quality of research, peer review, teaching effectiveness, and service to the academic community are equally vital in evaluating an academic’s overall performance.

The h-index faces challenges such as not accounting for the quality or societal impact of a researcher’s work and not translating well across different disciplines.

As a result, while the h-index can serve as a useful tool in academic evaluations, it should be used in conjunction with other metrics and qualitative assessments for a more rounded understanding of a scholar’s contributions.

So, whether you are a PhD student or a full professor, it’s important to not only be aware of your h-index but also to engage in a broader reflection of your academic goals and contributions. 


Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of universities. Although he secured funding for his own research, he left academia to help others through his YouTube channel about the inner workings of academia and how to make it work for you.



Bibliometrics and Altmetrics


About the H-index


The h-index is a primary author-level metric designed to measure research quality over time; it accounts for both the scholarly productivity and the research impact of the author. It is calculated as follows: h is the number of articles that have each been cited at least h times. So, an h-index of 30 means the author has published 30 articles that have each been cited 30 or more times.

Image (h-index illustration) courtesy of Oregon State Libraries under a CC-BY-SA 4.0 license

Top Databases to Find Your H-Index

Web of Science. Requires access through University Libraries database.

Scopus. Requires access through University Libraries database.

Google Scholar. Freely available.

Metrics Toolkit. A web resource that helps researchers and evaluators navigate the research metrics landscape by providing guidance for demonstrating and evaluating research impact. It includes a section on the h-index.

Publish or Perish. A software program that retrieves and analyzes academic citations pulled from a variety of sources. Its available calculations include the h-index and the g-index.

Instructions to view h-index in Web of Science

  • Navigate to the University Libraries website: www.library.unlv.edu
  • On the right-hand side of the screen under Research, select All Library Databases
  • Next, click the letter "W", scroll down the list of databases, and select Web of Science
  • You will need to log in with your ACE account
  • Once in Web of Science, add your name to the first search box in this order: last name, first initial (e.g., Candela, L). Change the drop-down menu to the right to Author
  • Click +Add Row to add a row
  • From the drop-down menu, select Organization-Enhanced; in the search box, type: University of Nevada, Las Vegas
  • Make sure you adjust the dates for publications you wish to retrieve (ex. 2000-2019)
  • Click the blue Search button to run your search
  • You will retrieve a list of your publications
  • To view your h-index  look for Create Citation Report on the right hand side of the screen
  • Click Create Citation Report
  • Your h-index will display in the second box at the top of the screen; see the screenshot example below.

screenshot of the h-index display in a Web of Science citation report

Instructions courtesy of Xan Goodman, Nursing, Allied Health, and Public Health Librarian

Instructions to view h-index in Google Scholar

1. Navigate to Google Scholar: scholar.google.com
2. Enter the name of the author.
3. If a profile exists for the author, it will appear at the top of the search results; click the author's name to open their profile page.
4. View the h-index for the person on the right side of the screen.

screenshot of the h-index in a Google Scholar author profile

Instructions to view h-index using Scopus CiteScore Metrics

1. Navigate to the University Libraries website: www.library.unlv.edu
2. On the right side of the screen under Research, select All Library Databases.
3. Next, click the letter "S", scroll down the list of databases, and select Scopus.
4. You will need to log in with your ACE account.
5. Once in Scopus, select Authors, then enter your name, last name first, in the appropriate fields.
6. Click the blue Search button to view the h-index.
7. Select Citation Overview to see the number of citations over a period of years.

screenshot of the h-index in a Scopus author profile



Q. What is an h-index? How do I find the h-index for a particular author?


Answered By: Laurissa Gann. Last Updated: Mar 27, 2023. Views: 407,874

The h-index is a number intended to represent both the productivity and the impact of a particular scientist or scholar, or a group of scientists or scholars (such as a departmental or research group). 

The h-index is calculated by counting the number of publications for which an author has been cited by other authors at least that same number of times.  For instance, an h-index of 17 means that the scientist has published at least 17 papers that have each been cited at least 17 times.  If the scientist's 18th most cited publication was cited only 10 times, the h-index would remain at 17.  If the scientist's 18th most cited publication was cited 18 or more times, the h-index would rise to 18.

Part of the purpose of the h-index is to eliminate outlier publications that might give a skewed picture of a scientist's impact.  For instance, if a scientist published one paper many years ago that was cited 9,374 times, but has since only published papers that have been cited 2 or 3 times each, a straight citation count for that scientist could make it seem that his or her long-term career work was very significant.  The h-index, however, would be much lower, signifying that the scientist's overall body of work was not necessarily as significant.

The following resources will calculate an h-index:

Web of Science

Pure (MD Anderson Faculty and Fellows listed)

Keep in mind that different databases will give different values for the h-index.  This is because each database must calculate the value based on the citations it contains.  Since databases cover different publications in different ranges of years, the h-index result will therefore vary.   You should also keep in mind that what is considered a "good" h-index may differ depending on the scientific discipline.  A number that is considered low in one field might be considered quite high in another field.

A note about Google Scholar

Google Scholar usually provides the highest h-index compared to other sources. This is because Google Scholar indexes web pages rather than curated collections of article citations, as databases do. This means Google Scholar:

  • Counts all publications, including books
  • Counts all versions of a paper it finds, including preprints
  • Counts self-citations 
  • Counts citations added manually, but not necessarily verified by a publisher or other source


Pak J Med Sci. 2023 Mar-Apr; 39(2). PMCID: PMC10025721

The h-Index: An Indicator of Research and Publication Output

Faaiz Ali Shah

1 Faaiz Ali Shah, Associate Professor Orthopaedics, Lady Reading Hospital Peshawar, Pakistan

Shaukat Ali Jawaid

2 Shaukat Ali Jawaid, Chief Editor, Pakistan Journal of Medical Sciences, Karachi, Pakistan

The analysis of research publications using statistical methods is called bibliometrics. There are many bibliometric indices for measuring the research output of an individual researcher. 1 The h-index and the impact factor (IF) are the most famous and widely used. Jorge Eduardo Hirsch, a Professor of Physics at the University of California, San Diego, introduced the h-index (Hirsch index) in 2005. Hirsch defined the h-index as follows: "A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np-h) papers have fewer than h citations each." 2 As an example, an author has an h-index of 7 if they have published seven papers that have each been cited at least seven times by others.

The research performance of an individual researcher at the micro level can be determined using the h-index. Many commonly used databases, such as Clarivate Analytics' Web of Science, Google Scholar, and Elsevier's Scopus, automatically calculate the h-index for their authors. Publish or Perish is a free software program that calculates the h-index for authors who do not have a Google Scholar profile. The calculated h-index may differ between databases or resources because each database covers different journals and years of indexation, meaning that the same author will not have the same h-index value across all databases. 3

The h-index evaluates the cumulative scholarly impact of an author's output. It captures both the quantitative (productivity) and qualitative (citations) dimensions of a researcher's work in a single number, meaning that neither a few highly cited papers nor many papers with very few citations will produce a high h-index. Scientists with outstanding and highly cited papers on new discoveries or inventions, but few of them, cannot have a high h-index. For example, Albert Einstein would have had an h-index of only 4 or 5 in the earliest years of his career, despite already being acknowledged as an outstanding physicist. Harry Kroto won the Nobel Prize in Chemistry largely on the strength of his single 1985 publication, despite ranking 264th among chemists globally by h-index. By contrast, Charles Darwin has accumulated 77,539 citations, an h-index of 680, and an i-10 index of 331, because he authored many books that are cited by researchers worldwide. 4

There are several advantages of the h-index. It is a reliable and robust indicator of scholarly achievement. It can be applied to individual researchers as well as to research groups, medical journals, publishers, projects, academic institutions, universities, and even countries. For example, the h-index of the Pakistan Journal of Medical Sciences (PJMS), as calculated by Google Scholar, is 36, with an impact factor of 2.340. 5 The h-index can be used as a yardstick for comparing researchers in the same field for research fellowships or grant applications. It can also predict the future achievement of a researcher. 6 The h-index is also becoming very popular in the scientific community, and in the years to come it may become even more important in evaluating the scientific contributions of faculty members. 7 Indexing agencies, it is said, also look at the h-index of editorial board members when evaluating the standard of a biomedical journal.

The h-index is simple to calculate. But what should a good h-index be? Hirsch 2 was of the opinion that, after about 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is exceptional. He further pointed out that approximately 84% of Nobel Prize winners in physics had an h-index of at least 30. Patterns of citation and publication differ across the various disciplines of medical and health sciences, so it is very difficult to propose a single competitive or acceptable h-index for faculty recruitment and promotion or for funding and grant purposes. However, an h-index of three to five can be set as a standard for assistant professor, 8 to 12 for associate professor, and 15 to 20 as a good standard for appointment to full professor. In many disciplines, a general rule of thumb is an acceptable h-index roughly matching the number of years the author has been working in that discipline. In fact, Hirsch 2 made an adjustment by combining the h-index with an author's active research time to arrive at the m-index, which is determined by dividing the h-index by the time since the researcher's first publication. An m-index of one is very good, two is outstanding, and three is exceptional. The famous English physicist and author Stephen Hawking had an m-index of 1.6.
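To make the m-index arithmetic concrete, here is a minimal, illustrative Python sketch (the function name and the sample numbers are ours, not taken from any cited source):

```python
def m_index(h_index, first_publication_year, current_year):
    """m-index (m-quotient): h-index divided by years of research activity."""
    years_active = current_year - first_publication_year
    return h_index / years_active

# Purely illustrative numbers: an h-index of 24 after 15 years of publishing
# gives m = 1.6.
print(round(m_index(24, 2009, 2024), 1))  # -> 1.6
```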

Some intrinsic limitations of the h-index have been reported in the literature. 8 The first is its inability to account for a researcher's position in the author list of an article: major and minor (or non-) contributors to the research gain equal h-index credit. The second limitation is the influence of self-citation, where an author quotes their own earlier publications with the intention of increasing their h-index. The third limitation is the influence of a researcher's "scientific age" or "academic age": researchers with shorter scientific careers may have fewer papers and citations than those with longer ones. Similarly, female researchers are at a particular disadvantage, because their research activity may be interrupted by maternity or child-rearing leave. Fourthly, selectively publishing on popular but clinically less relevant topics can inflate the h-index; original articles may also have less impact on the h-index than review articles, as the latter are cited more frequently. Lastly, the worth or content of the research itself cannot be taken into account when the h-index is used as a citation metric.

Due to the above limitations of the h-index, many complementary indices or variants of the h-index have been proposed. These are grouped into two broad categories. The h-index, g-index, h(2), and m-quotient are designated as the productive output core, as they describe the number of published papers. The second category includes the m-index, hw-index, r-index, a-index, and ar-index, which describe the impact of research papers. Although different, the two groups can complement each other. 9 A study by Guraya et al. showed that some universities offer generous grants to researchers who have a high h-index and more publications in leading, well-reputed journals, which ensures more chances of citation and an elevation in the scientific rankings of the funding institutions. 10

The h-index has been extensively studied in many medical and surgical subspecialties, including orthopaedic surgery, and positive associations have been documented between the h-index and academic rank, position, and promotion. 11 - 12 Atwan 13 studied 567 orthopaedic faculty members in Canada. Among the study participants, 485 (85.5%) were academic faculty and 82 (14.5%) were clinical faculty. Individual h-indices were obtained from Elsevier's Scopus database. The median h-index of academic faculty was 8, while the median h-index of clinical faculty was 2 (p<0.001). Assistant professors had an h-index of 4, associate professors 12, and full professors 28 (p<0.05). The spine specialty had the highest h-index at 11 (4.5 to 18.5), while foot and ankle had the lowest mean h-index at 3.5 (2 to 7.5). Atwan concluded that the orthopaedic faculty of Canada has a higher h-index than the orthopaedic faculty of the USA. Varady 14 determined the h-index of the top 100 orthopaedic surgeons in the USA who were most active on Twitter and noted that their mean h-index was 13.67±4.12. He concluded that social media influence was positively correlated with higher academic productivity, as evidenced by a higher h-index.

Currently there is no single perfect bibliometric index that can accurately describe the impact of a researcher; therefore, a combination of two or more metrics is advised. 15 Many researchers make outstanding contributions to their field through their ideas, time, skills, and mentoring. Kelly 16 is of the opinion that it is distasteful to reduce the lifetime work of a researcher to a mere numerical value, and Albert Einstein has rightly pointed out that "Not everything that counts is countable, and not everything that is countable counts." 17

What is the H-index, and Does it Matter?



The h-index is a measure of research performance and is calculated as the highest number of manuscripts from an author (h) that all have at least the same number (h) of citations. The h-index is known to penalize early career researchers and does not take into account the number of authors on a paper. Alternative indexes have been created, including the i-10, h-frac, G-index, and M-number.

How do you measure how good you are as a scientist? How would you compare the impact of two scientists in a field? What if you had to decide which one would get a grant? One method is the h-index, which we will discuss in more detail below. First, we’ll touch on why this is not a simple task.

Measuring scientific performance is more complicated and more critical than it might first seem. Various methods for measurement and comparison have been proposed, but none of them is perfect.

At first, you might think that the method for measuring scientific performance doesn’t concern you—because all you care about is doing the best research you can. However, you should care because these metrics are increasingly used by funding bodies and employers to allocate grants and jobs. So, your perceived scientific performance score could seriously affect your career.

Metrics for Measuring Scientific Performance

What are the metrics involved in measuring scientific performance? The methods that might first spring to mind are:

  • Recommendations from peers. At first glance, this is a good idea in principle. However, it is subject to human nature, so personal relationships will inevitably affect perceived performance. Also, if a lesser-known scientist publishes a ground-breaking paper, they would likely get less recognition than if a more eminent colleague published the same paper.
  • The number of articles published. A long publication list looks good on your CV, but the number of articles published does not indicate their impact on the field. Having a few publications that are well regarded by colleagues in the field (i.e., they are cited often) is better than having a long list of publications cited poorly or not at all.
  • The average number of citations per article published. So, if it’s citations we’re interested in, then surely the average number of citations per paper is a better number to look at. Well, not really. The average could be skewed dramatically by one highly cited article, so it does not allow a good comparison of overall performance.

The H-Index

In 2005, Jorge E. Hirsch of UCSD published a paper in PNAS in which he put forward the h-index as a metric for measuring and comparing the overall scientific productivity of individual scientists. [1]

The h-index has been quickly adopted as the metric of choice for many committees and bodies.

How to Calculate An Author’s H-Index

The h-index calculation is pretty simple. You plot the number of papers versus the number of citations you (or someone else) have received, and the h-index is the number of papers at which the 45-degree line (citations=papers, orange) intercepts the curve, as shown in Figure 1 . That is, h equals the number of papers that have received at least h citations. For example, do you have one publication that has been cited at least once? If the answer is yes, then you can go on to your next publication. Have your two publications each been cited at least twice? If yes, then your h-index is at least 2. You can keep going until you get to a “no.”

Figure 1. Number of citations plotted against paper rank; the h-index is where the 45-degree line (citations = papers) crosses the citation curve.
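For readers who want to reproduce a plot like Figure 1 from their own citation counts, here is a small, illustrative Python sketch using matplotlib (the citation numbers below are made up):

```python
import matplotlib.pyplot as plt

# Made-up citation counts for one author's papers, most to least cited.
citations = sorted([68, 40, 33, 25, 19, 12, 10, 8, 5, 4, 3, 2, 1, 1], reverse=True)
ranks = list(range(1, len(citations) + 1))

# h-index = last rank at which the citation count is still >= the rank.
h = max((r for r, c in zip(ranks, citations) if c >= r), default=0)

plt.plot(ranks, citations, marker="o", label="citations per paper")
plt.plot(ranks, ranks, linestyle="--", label="citations = papers (45-degree line)")
plt.axvline(h, color="gray", linestyle=":", label=f"h-index = {h}")
plt.xlabel("Paper rank (most to least cited)")
plt.ylabel("Number of citations")
plt.legend()
plt.show()
```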

So, if you have an h-index of 20, you have 20 papers with at least 20 citations. It also means that you are doing pretty well with your science!

What is a Good H-Index?

Hirsch reckons that after 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is truly exceptional.

In his paper, Hirsch shows that successful scientists do, indeed, have high h-indices: 84% of Nobel Prize winners in physics, for example, had an h-index of at least 30. Table 1 lists some eminent scientists and their respective h-indexes.

Table 1: H-index scores of some Nobel Laureates (data from Google Scholar collected on September 27, 2023).

Advantages of the H-Index

The advantage of the h-index is that it combines productivity (i.e., number of papers produced) and impact (number of citations) in a single number. So, both productivity and impact are required for a high h-index; neither a few highly cited papers nor a long list of papers with only a handful of (or no!) citations will yield a high h-index.

Limitations of the H-Index

Although having a single number that measures scientific performance is attractive, the h-index is only a rough indicator of scientific performance and should only be considered as such.

Limitations of the h-index include the following:

  • It does not take into account the number of authors on a paper. A scientist who is the sole author of a paper with 100 citations should get more credit than one on a similarly cited paper with 10 co-authors.
  • It penalizes early-career scientists. Outstanding scientists with only a few publications cannot have a high h-index, even if all of those publications are ground-breaking and highly cited. For example, Albert Einstein would have had an h-index of only 4 or 5 if he had died in early 1906 despite being widely known as an influential physicist at the time.
  • Review articles have a greater impact on the h-index than original papers since they are generally cited more often.
  • The use of the h-index has now broadened beyond science. However, it’s difficult to compare fields and scientific disciplines directly, so, really, a ‘good’ h-index is impossible to define.

Calculating the H-Index

There are several online resources and h-index calculators for obtaining a scientist's h-index. The most established are ISI Web of Knowledge and Scopus, both of which require a subscription (probably via your institution), but there are free options too, one of which is Publish or Perish.

You might get a different value if you check your own (or someone else's) h-index with each of these resources. Each uses a different database to count the total publications and citations. ISI and Scopus use their own databases, and Publish or Perish uses Google Scholar. Each database has different coverage and will provide varying h-index values. For example, ISI has good coverage of journal publications but poor coverage of conferences, while Scopus covers conferences better but has weaker journal coverage before 1992. [2]

Is the H-index Still Effective?

A paper published in PLoS One in 2021 concluded that while a scientist’s h-index previously correlated well with the number of scientific awards, this is no longer the case. This lack of correlation is partly because of the change in authorship patterns, with the average number of authors per paper increasing. [3]

Are Alternatives to the H-Index Better?

Let’s take a look at some of the alternative measures available.

The H-Frac Index

The authors of the PLoS One paper suggest fractional analogs of the h-index are better suited for the job. [3] Here, the number of authors on a paper is also considered. One such measure is the h-frac, where citation counts are divided by the number of authors. However, this solution could also be manipulated to the detriment of more junior researchers, as minimizing the number of authors on a paper would maximize your h-frac score. This could mean more junior researchers are left off papers where they did contribute, harming their careers. 
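As a rough illustration of the fractional idea, the Python sketch below divides each paper's citation count by its number of authors before applying the usual h rule. This is a simplified sketch of one possible fractional variant, not necessarily the exact h-frac formula from the cited paper, and the function name and sample data are ours:

```python
def h_frac(papers):
    """papers: list of (citations, n_authors) tuples.
    Simplified fractional h-index: divide each paper's citations by its
    author count, then apply the standard h rule to the adjusted values."""
    adjusted = sorted((c / n for c, n in papers), reverse=True)
    return max((rank for rank, a in enumerate(adjusted, start=1) if a >= rank), default=0)

# A solo paper with 30 citations counts fully (30.0),
# while a 10-author paper with 30 citations counts as only 3.0.
print(h_frac([(30, 1), (30, 10), (12, 3), (8, 2)]))  # -> 3
```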

The G-Index

This measure looks at the most highly cited articles of an author and is defined as “the largest number n of highly cited articles for which the average number of citations is at least n .” [4] This measure allows highly cited papers to bolster lower cited papers of an author. 
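A minimal, illustrative Python sketch of that definition (restricted here to the author's actual papers; the function name and sample data are ours):

```python
def g_index(citations):
    """g-index: largest g such that the top g papers have, on average,
    at least g citations each (i.e., at least g*g citations in total)."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Same citation counts as the earlier h-index example (h = 3).
print(g_index([68, 12, 10, 3, 2]))  # -> 5
```

Note that g is never smaller than h for the same citation list; here the single highly cited paper (68 citations) lifts g to 5 while h stays at 3.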

The i-10 Index

Developed by Google Scholar, this index is the number of articles published by an author that have received at least 10 citations. This measure, along with the h-index, is available on Google Scholar.
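Since the i-10 index is just a count, a one-line sketch suffices (illustrative only; the function name and sample data are ours):

```python
def i10_index(citations):
    """i-10 index: number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

print(i10_index([68, 12, 10, 3, 2]))  # -> 3
```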

The M-Number

The m-number (also called the m-value or m-quotient) was developed to try to balance the scales for early career researchers. It corrects the h-index for time, allowing for easier comparison of researchers with different seniority and career lengths. It is calculated as the h-index divided by the number of years since the researcher's first publication.

The Problem with Measuring Performance

While these numbers can be helpful for giving a flavor of a scientist's performance, they are all flawed. Many are biased towards researchers who publish often and are further into their careers. Many of these indexes can also be manipulated, for example by adding authors to papers who didn't contribute.

In reality, it isn’t possible to distill a researcher’s contributions to a single number. They may not have published many papers, but those papers they have published made vital contributions. Or their skills are in training the next round of researchers. When looking at these numbers, we should remember they are just a reflection of one small part of a researcher’s contributions and values and are not the be-all and end-all.

The H-Index Summed Up

The h-index provides a useful metric for scientific performance, but only when viewed in the context of other factors. While other measures are available, including the i-10 index, the G-index, and the h-frac index, these also have limitations. Therefore, when making decisions that are important to you (funding, job, finding a PI), be sure to read through publication lists, talk to other scientists (and students) and peers, and take account of career stage. So, remember that an h-index is only one consideration among many—and you should definitely know your h-index—but it doesn’t define you (or anyone else) as a scientist.

  • Hirsch JE. (2005) An index to quantify an individual’s scientific research output . PNAS 102(46):16569–72
  • Meho LI, Yang K. (2007) Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar . JASIST 58(13):2105–25
  • Koltun V, Hafner D. (2021) The h-index is no longer an effective correlate of scientific reputation . PLoS One . 16(6):e0253397
  • Wikipedia. g-index . Accessed 25 September 2023

Originally published April 2, 2009. Reviewed and updated October 2023.


It seems doubtful whether all fields of research can be effectively measured in this way. I am a First World War historian. If I want to be cited a lot, I will write about very popular questions (masculinity, identity, space etc at the moment). If I go off into virgin territory and explore, for the first time ever, say, comparative studies of First World War popular music, I will get far fewer citations for a good while, and this may seem a strange reward for asking rarer questions. Whereas asking rare questions, in history, is a key skill (see Keith Thomas for example). This is one of the reasons that scholarly human sciences organizations in France, where I live, often refuse to use bibliometric indexes of this sort.


I’ve recently proposed a novel index for evaluation of individual researchers that does not depend on the number of publications, accounts for different co-author contributions and age of publications, and scales from 0.0 to 9.9 ( https://f1000research.com/articles/4-884 ). Moreover, it can be calculated with the help of freely available software. Please, share your thoughts on it. Would you use it along with the h-index, or maybe even instead of it, for evaluating your peers, potential collaborators or job applicants? If you’ve tried it on the people you know, do you find the results fair?



What Researchers Discovered When They Sent 80,000 Fake Résumés to U.S. Jobs

Some companies discriminated against Black applicants much more than others, and H.R. practices made a big difference.


By Claire Cain Miller and Josh Katz

A group of economists recently performed an experiment on around 100 of the largest companies in the country, applying for jobs using made-up résumés with equivalent qualifications but different personal characteristics. They changed applicants’ names to suggest that they were white or Black, and male or female — Latisha or Amy, Lamar or Adam.

On Monday, they released the names of the companies . On average, they found, employers contacted the presumed white applicants 9.5 percent more often than the presumed Black applicants.

Yet this practice varied significantly by firm and industry. One-fifth of the companies — many of them retailers or car dealers — were responsible for nearly half of the gap in callbacks to white and Black applicants.

Two companies favored white applicants over Black applicants significantly more than others. They were AutoNation, a used car retailer, which contacted presumed white applicants 43 percent more often, and Genuine Parts Company, which sells auto parts including under the NAPA brand, and called presumed white candidates 33 percent more often.

In a statement, Heather Ross, a spokeswoman for Genuine Parts, said, “We are always evaluating our practices to ensure inclusivity and break down barriers, and we will continue to do so.” AutoNation did not respond to a request for comment.

Companies With the Largest and Smallest Racial Contact Gaps

Of the 97 companies in the experiment, two stood out as contacting presumed white job applicants significantly more often than presumed Black ones. At 14 companies, there was little or no difference in how often they called back the presumed white or Black applicants.

Source: Patrick Kline, Evan K. Rose and Christopher R. Walters

Known as an audit study , the experiment was the largest of its kind in the United States: The researchers sent 80,000 résumés to 10,000 jobs from 2019 to 2021. The results demonstrate how entrenched employment discrimination is in parts of the U.S. labor market — and the extent to which Black workers start behind in certain industries.

“I am not in the least bit surprised,” said Daiquiri Steele, an assistant professor at the University of Alabama School of Law who previously worked for the Department of Labor on employment discrimination. “If you’re having trouble breaking in, the biggest issue is the ripple effect it has. It affects your wages and the economy of your community going forward.”

Some companies showed no difference in how they treated applications from people assumed to be white or Black. Their human resources practices — and one policy in particular (more on that later) — offer guidance for how companies can avoid biased decisions in the hiring process.

A lack of racial bias was more common in certain industries: food stores, including Kroger; food products, including Mondelez; freight and transport, including FedEx and Ryder; and wholesale, including Sysco and McLane Company.

“We want to bring people’s attention not only to the fact that racism is real, sexism is real, some are discriminating, but also that it’s possible to do better, and there’s something to be learned from those that have been doing a good job,” said Patrick Kline, an economist at the University of California, Berkeley, who conducted the study with Evan K. Rose at the University of Chicago and Christopher R. Walters at Berkeley.

The researchers first published details of their experiment in 2021, but without naming the companies. The new paper, which is set to run in the American Economic Review, names the companies and explains the methodology developed to group them by their performance, while accounting for statistical noise.

Sample Résumés From the Experiment

Fictitious résumés sent to large U.S. companies revealed a preference, on average, for candidates whose names suggested that they were white.


To assign names, the researchers started with a prior list that had been assembled using Massachusetts birth certificates from 1974 to 1979. They then supplemented this list with names found in a database of speeding tickets issued in North Carolina between 2006 and 2018, classifying a name as “distinctive” if more than 90 percent of people with that name were of a particular race.

The study includes 97 firms. The jobs the researchers applied to were entry level, not requiring a college degree or substantial work experience. In addition to race and gender, the researchers tested other characteristics protected by law , like age and sexual orientation.

They sent up to 1,000 applications to each company, applying for as many as 125 jobs per company in locations nationwide, to try to uncover patterns in companies’ operations versus isolated instances. Then they tracked whether the employer contacted the applicant within 30 days.

A bias against Black names

Companies requiring lots of interaction with customers, like sales and retail, particularly in the auto sector, were most likely to show a preference for applicants presumed to be white. This was true even when applying for positions at those firms that didn't involve customer interaction, suggesting that discriminatory practices were baked into corporate culture or H.R. practices, the researchers said.

Still, there were exceptions — some of the companies exhibiting the least bias were retailers, like Lowe’s and Target.

The study may underestimate the rate of discrimination against Black applicants in the labor market as a whole because it tested large companies, which tend to discriminate less, said Lincoln Quillian, a sociologist at Northwestern who analyzes audit studies. It did not include names intended to represent Latino or Asian American applicants, but other research suggests that they are also contacted less than white applicants, though they face less discrimination than Black applicants.

The experiment ended in 2021, and some of the companies involved might have changed their practices since. Still, a review of all available audit studies found that discrimination against Black applicants had not changed in three decades. After the Black Lives Matter protests in 2020, such discrimination was found to have disappeared among certain employers, but the researchers behind that study said the effect was most likely short-lived.

Gender, age and L.G.B.T.Q. status

On average, companies did not treat male and female applicants differently. This aligns with other research showing that gender discrimination against women is rare in entry-level jobs, and starts later in careers.

However, when companies did favor men (especially in manufacturing) or women (mostly at apparel stores), the biases were much larger than for race. Builders FirstSource contacted presumed male applicants more than twice as often as female ones. Ascena, which owns brands like Ann Taylor, contacted women 66 percent more than men.

Neither company responded to requests for comment.

The consequences of being female differed by race. The differences were small, but being female was a slight benefit for white applicants, and a slight penalty for Black applicants.

The researchers also tested several other characteristics protected by law, with a smaller number of résumés. They found there was a small penalty for being over 40.

Overall, they found no penalty for using nonbinary pronouns. Being gay, as indicated by including membership in an L.G.B.T.Q. club on the résumé, resulted in a slight penalty for white applicants, but benefited Black applicants — although the effect was small, when this was on their résumés, the racial penalty disappeared.

Under the Civil Rights Act of 1964, discrimination is illegal even if it’s unintentional . Yet in the real world, it is difficult for job applicants to know why they did not hear back from a company.

“These practices are particularly challenging to address because applicants often do not know whether they are being discriminated against in the hiring process,” Brandalyn Bickner, a spokeswoman for the Equal Employment Opportunity Commission, said in a statement. (It has seen the data and spoken with the researchers, though it could not use an academic study as the basis for an investigation, she said.)

What companies can do to reduce discrimination

Several common measures — like employing a chief diversity officer, offering diversity training or having a diverse board — were not correlated with decreased discrimination in entry-level hiring, the researchers found.

But one thing strongly predicted less discrimination: a centralized H.R. operation.

The researchers recorded the voice mail messages that the fake applicants received. When a company’s calls came from fewer individual phone numbers, suggesting that they were originating from a central office, there tended to be less bias . When they came from individual hiring managers at local stores or warehouses, there was more. These messages often sounded frantic and informal, asking if an applicant could start the next day, for example.

“That’s when implicit biases kick in,” Professor Kline said. A more formalized hiring process helps overcome this, he said: “Just thinking about things, which steps to take, having to run something by someone for approval, can be quite important in mitigating bias.”

At Sysco, a wholesale restaurant food distributor, which showed no racial bias in the study, a centralized recruitment team reviews résumés and decides whom to call. “Consistency in how we review candidates, with a focus on the requirements of the position, is key,” said Ron Phillips, Sysco’s chief human resources officer. “It lessens the opportunity for personal viewpoints to rise in the process.”

Another important factor is diversity among the people hiring, said Paula Hubbard, the chief human resources officer at McLane Company. It procures, stores and delivers products for large chains like Walmart, and showed no racial bias in the study. Around 40 percent of the company’s recruiters are people of color, and 60 percent are women.

Diversifying the pool of people who apply also helps, H.R. officials said. McLane goes to events for women in trucking and puts up billboards in Spanish.

So does hiring based on skills, versus degrees . While McLane used to require a college degree for many roles, it changed that practice after determining that specific skills mattered more for warehousing or driving jobs. “We now do that for all our jobs: Is there truly a degree required?” Ms. Hubbard said. “Why? Does it make sense? Is experience enough?”

Hilton, another company that showed no racial bias in the study, also stopped requiring degrees for many jobs, in 2018.

Another factor associated with less bias in hiring, the new study found, was more regulatory scrutiny — like at federal contractors, or companies with more Labor Department citations.

Finally, more profitable companies were less biased, in line with a long-held economics theory by the Nobel Prize winner Gary Becker that discrimination is bad for business. Economists said that could be because the more profitable companies benefit from a more diverse set of employees. Or it could be an indication that they had more efficient business processes, in H.R. and elsewhere.

Claire Cain Miller writes about gender, families and the future of work for The Upshot. She joined The Times in 2008 and was part of a team that won a Pulitzer Prize in 2018 for public service for reporting on workplace sexual harassment issues. More about Claire Cain Miller

Josh Katz is a graphics editor for The Upshot, where he covers a range of topics involving politics, policy and culture. He is the author of “Speaking American: How Y’all, Youse, and You Guys Talk,” a visual exploration of American regional dialects. More about Josh Katz



Original Research Article

Associations Between Monitor-Independent Movement Summary (MIMS) and Fall Risk Appraisal Combining Fear of Falling and Physiological Fall Risk in Community-Dwelling Older Adults

  • 1 Department of Mechanical Engineering, University of Central Florida, Orlando, FL, United States
  • 2 Disability, Aging and Technology Cluster, University of Central Florida, Orlando, FL, United States
  • 3 College of Medicine, University of Central Florida, Orlando, FL, United States
  • 4 School of Kinesiology and Rehabilitation Sciences, College of Health Professions and Sciences, University of Central Florida, Orlando, FL, United States
  • 5 Department of Statistics and Data Science, University of Central Florida, Orlando, FL, United States
  • 6 College of Nursing, University of Central Florida, Orlando, FL, United States

Introduction: Fall Risk Appraisal (FRA), a process that integrates perceived and objective fall risk measures, serves as a crucial component for understanding the incongruence between fear of falling (FOF) and physiological fall risk in older adults. Despite its importance, scant research has been undertaken to investigate how habitual physical activity (PA) levels, quantified in Monitor-Independent Movement Summary (MIMS), vary across FRA categories. MIMS is a device-independent acceleration summary metric that helps standardize data analysis across studies by accounting for discrepancies in raw data among research-grade and consumer devices.

Objective: This cross-sectional study explores the associations between MIMS (volume and intensity) and FRA in a sample of older adults in the United States.

Methods: We assessed FOF (Short Falls Efficacy Scale-International), physiological fall risk (balance: BTrackS Balance, leg strength: 30-s sit-to-stand test) and 7-day free-living PA (ActiGraph GT9X) in 178 community-dwelling older adults. PA volume was summarized as average daily MIMS (MIMS/day). PA intensity was calculated as peak 30-min MIMS (average of highest 30 non-consecutive MIMS minutes/day), representing a PA index of higher-intensity epochs. FRA categorized participants into the following four groups: Rational (low FOF-low physiological fall risk), Irrational (high FOF-low physiological fall risk), Incongruent (low FOF-high physiological fall risk) and Congruent (high FOF-high physiological fall risk).

Results: Compared to rational group, average MIMS/day and peak 30-min MIMS were, respectively, 15.8% ( p = .025) and 14.0% ( p = .004) lower in irrational group, and 16.6% ( p = .013) and 17.5% ( p < .001) lower in congruent group. No significant differences were detected between incongruent and rational groups. Multiple regression analyses showed that, after adjusting for age, gender, and BMI (reference: rational), only irrational FRA was significantly associated with lower PA volume (β = −1,452.8 MIMS/day, p = .034); whereas irrational and congruent FRAs were significantly associated with lower “peak PA intensity” (irrational: β = −5.40 MIMS/day, p = .007; congruent: β = −5.43 MIMS/day, p = .004).

Conclusion: These findings highlight that FOF is a significant barrier for older adults to participate in high-intensity PA, regardless of their balance and strength. Therefore, PA programs for older adults should develop tailored intervention strategies (cognitive reframing, balance and strength exercises, or both) based on an individual’s FOF and physiological fall risk.

Introduction

In the United States (US), over 14 million adults aged 65 years or older fall each year ( Moreland et al., 2020 ; Kakara et al., 2023 ). According to the US Centers for Disease Control and Prevention, about 20% of falls in older adults cause serious injuries, which results in limited functional mobility, loss of independence, reduced quality of life, and premature death ( Ambrose et al., 2013 ). Fear of falling (FOF) has been recognized as an important psychological aspect associated with falls in older adults ( Jansen et al., 2021 ). However, studies report that many older adults might show a discrepancy between their FOF and physiological fall risk, known as maladaptive fall risk appraisal (FRA) ( Thiamwong et al., 2021a ), and such discrepancies can lead to adverse consequences. For example, individuals with low physiological fall risk but high FOF may overestimate their actual fall risk and restrict their daily activities, which can further lead to physical deconditioning and loss of muscle strength ( Deshpande et al., 2008 ). On the contrary, those with high physiological fall risk but low FOF may underestimate their actual fall risk and engage in unnecessary risky behavior beyond their physical capacity, making them even more vulnerable to falling ( Delbaere et al., 2010 ).

Therefore, FRA combining subjective and objective fall risk measures is important for understanding the discrepancy between FOF and physiological fall risk in older adults to inform more targeted interventions for fall prevention ( Thiamwong et al., 2020a ; Thiamwong et al., 2020b ). FRA is a two-dimensional fall risk assessment matrix that classifies older adults into four groups based on their FOF and physiological fall risk status ( Thiamwong, 2020 ). In FRA matrix, as shown in Figure 1 , two groups have their FOF level aligned with their physiological fall risk status, which are denoted as Rational (low FOF-low physiological fall risk) and Congruent (high FOF-high physiological fall risk). The other two groups show a mismatch between their FOF level and physiological fall risk status and are denoted as Incongruent (low FOF-high physiological fall risk) and Irrational (high FOF-low physiological fall risk).

Figure 1 . Fall Risk Appraisal (FRA) based on Fear of Falling (FOF) and physiological fall risk. Maladaptive FRA = mismatch between FOF and physiological fall risk; Adaptive FRA = FOF aligned with physiological fall risk.

Prior research has mostly focused on exploring the independent associations of FOF and objective fall risk measures with physical activity (PA) participation in older adults ( Gregg et al., 2000 ; Chan et al., 2007 ; Zijlstra et al., 2007 ; Heesch et al., 2008 ; Mendes da Costa et al., 2012 ). To date, only a small number of studies have investigated the combined effects of FOF and objective fall risk on PA engagement. For example, one study examined the joint associations of FOF and objective fall risk with everyday walking activities in older adults. This study used a four-group categorization from ( Delbaere et al., 2010 ), and found that the number of steps/day in their study sample was in accordance with objective fall risk rather than FOF ( Jansen et al., 2021 ). Another study examined accelerometry-based PA levels between FRA categories using the intensity cut-point approach and found that participants with high FOF accumulated significantly less time in moderate-to-vigorous PA (MVPA) compared to those with rational FRA, regardless of their balance performance ( Thiamwong et al., 2023 ). However, there exists a lack of evidence on how habitual PA levels, expressed in Monitor-Independent Movement Summary (MIMS) units, differ between FRA categories in older adults.

MIMS is used to summarize the acceleration measurements obtained on the x-, y-, and z-axes of wrist-worn activity monitors. This PA metric was first introduced in 2019 to summarize participant-level PA data for the 2011-2012 and 2013-2014 cycles of the US National Health and Nutrition Examination Survey (NHANES) ( John et al., 2019 ). The major benefit of using MIMS is that it is generated by a nonproprietary device–independent universal algorithm, allowing us to compare the total movement across studies regardless of the heterogeneity introduced by different brands, models and device types (such as consumer vs. research-grade) ( John et al., 2019 ). Similar to other traditional PA metrics such as steps/day or daily activity counts, PA volume can be expressed as daily MIMS (i.e., total MIMS unit accumulated per day) across valid days of assessment, where larger MIMS/day indicates higher daily PA volume ( Wolff-Hughes et al., 2014 ).

Traditionally, quantification of accelerometer-measured PA intensity has been predominantly based on minutes/day (or minutes/week) spent in MVPA, using either manufacturer-specific or device-specific cut points corresponding with ≥3 Metabolic Equivalents of Task (METs) ( Troiano et al., 2008 ). Recently, to establish an intensity-based expression for MIMS units, the concept of peak 30-min MIMS has been introduced ( Zheng et al., 2023 ). It is analogous to the concept of peak 30-min cadence, i.e., the average of 30 highest cadence (steps/minutes) values within a day, representing an individual’s best efforts ( Tudor-Locke et al., 2012 ). Similar to cadence (steps/minutes), MIMS/minutes values were shown to have a strong correlation with higher PA intensity ( John et al., 2019 ). Therefore, peak 30-min MIMS (i.e., the average of the highest 30 non-consecutive MIMS [minutes/day] values within a day) can be used as a measure of higher-intensity epochs across the PA monitoring period ( Zheng et al., 2023 ). Evaluating daily MIMS (volume) and peak 30-min MIMS (intensity) can facilitate a more comprehensive assessment of PA and its relationship with FRA.

Thus, the aim of this study is to investigate the associations between wrist-worn accelerometer-measured PA (expressed as daily MIMS and peak 30-min MIMS) and FRA in a sample of community-dwelling older adults. We are particularly interested in the question: “Which of the maladaptive FRA groups, i.e., Incongruent (low FOF-high physiological fall risk) and Irrational (high FOF-low physiological fall risk), differs more from the Rational (low FOF-low physiological fall risk) group in terms of habitual PA level?” This will allow us to understand which of the two factors (FOF or physiological fall risk) has a stronger relationship with reduced PA participation among older adults.

Materials and methods

Study design and participants

In this cross-sectional study, purposive sampling was used to recruit 178 community-dwelling older adults from the region of Central Florida, United States, between February 2021 and March 2023. The inclusion criteria were: i) 60 years of age or older; ii) being able to walk with or without an assistive device (but without the assistance of another person); iii) no marked cognitive impairment [i.e., Memory Impairment Screen score ≥5 ( Buschke et al., 1999 )], iv) fluency in English or Spanish, and v) living in their own homes or apartments. The exclusion criteria were: i) medical conditions that prevent PA engagement (e.g., shortness of breath, tightness in the chest, dizziness, or unusual fatigue at light exertion), ii) unable to stand on the balance plate, iii) currently receiving treatment from a rehabilitation facility, and iv) having medical implants (e.g., pacemakers). This study was approved by the Institutional Review Board at the University of Central Florida (Protocol No: 2189; 10 September 2020). All subjects provided written informed consent to participate. This cross-sectional assessment required one visit to the study site during which participants completed a demographic survey and anthropometric measurements, followed by assessments of FOF and physiological fall risk. At the end of the visit, each participant was fitted with a wrist-worn accelerometer for 7-day PA monitoring in free-living conditions.

Measurements

Fear of falling (FOF)

FOF was assessed using the Short Falls Efficacy Scale-International (FES-I) questionnaire ( Yardley et al., 2005 ; Kempen et al., 2008 ). It is a 7-item, self-administered tool that uses a 4-point Likert scale to measure the level of concern about falling while performing seven activities (1 = not at all concerned to 4 = very concerned). The total scores ranged from 7 to 28. Short FES-I scores of 7–10 indicated low FOF, while scores of 11–28 indicated high FOF.
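
As a minimal illustration of this scoring rule (only the 7–10 vs. 11–28 cut-off comes from the protocol above; the function name and example scores are hypothetical):

```python
# Minimal sketch of the Short FES-I classification described above.
# Only the 7-10 (low FOF) vs. 11-28 (high FOF) cut-off comes from the text;
# the example scores are hypothetical.

def classify_fof(item_scores):
    """Return 'low FOF' or 'high FOF' from seven Likert items scored 1-4."""
    if len(item_scores) != 7 or not all(1 <= s <= 4 for s in item_scores):
        raise ValueError("Short FES-I requires seven items scored 1-4")
    total = sum(item_scores)            # possible range: 7-28
    return "low FOF" if total <= 10 else "high FOF"

print(classify_fof([1, 1, 2, 1, 1, 2, 1]))   # total 9  -> low FOF
print(classify_fof([2, 3, 2, 4, 3, 2, 3]))   # total 19 -> high FOF
```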

Physiological fall risk

Physiological fall risk was assessed using balance test and lower limb strength assessment. BTrackS Balance System (Balance Tracking Systems, San Diego, CA, United States) was used to measure static balance. This system includes a portable BTrackS Balance Plate and BTrackS Assess Balance Software running on a computer. It has shown high test–retest reliability (intraclass correlation coefficient, ICC = 0.83) and excellent validity (Pearson’s product-moment correlations, r > 0.90) in evaluating static balance ( Levy et al., 2018 ). The test protocol included four trials (each trial taking 20 s) with less than 10 s of inter-trial delays. During the trials, participants were asked to stand still on the BTrackS Balance Plate with their eyes closed, hands on their hips, and feet placed shoulder-width apart. BTrackS balance plate is an FDA-registered, lightweight force plate that measures center of pressure (COP) excursions during the static stance. The first trial was done for familiarization only. Results from the remaining three trials were used to calculate the average COP path length (in cm) across trials. COP path length is considered as a proxy measure for postural sway magnitude; thus, the larger the COP path length, the greater the postural sway is ( Goble et al., 2017 ). COP path length of 0–30  cm was used to indicate normal balance, while ≥31  cm indicated poor balance ( Thiamwong et al., 2021b ).

Lower limb strength was assessed using the 30-s sit-to-stand (STS) test, in accordance with the established protocol ( Yee et al., 2021 ; Choudhury et al., 2023 ). Participants were instructed to keep their arms folded across their chest, rise from a seated position on a chair to a standing posture and return to the sitting position as many times as possible within 30 s. The number of chair stands completed was counted and recorded. If a participant used his/her arms to stand, the test was stopped, and the score was recorded as zero. Age- and gender-specific STS normative scores were used as cut-offs to classify participants into below-average and average STS scores, as shown in Table 1 ( Rikli and Jones, 1999 ). A below-average STS score was indicative of a higher risk of fall. Meeting both normal balance and average STS score criteria was defined as low physiological fall risk, while not meeting either or both criteria was defined as high physiological fall risk.
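
The resulting two-criterion rule can be sketched as follows (a hedged illustration; the cut-off argument stands in for the age- and gender-specific Table 1 values, which are not reproduced here):

```python
# Sketch of the physiological fall risk rule described above:
# normal balance AND average strength -> low risk; otherwise high risk.

def classify_physiological_risk(cop_path_cm, sts_reps, sts_average_cutoff):
    """cop_path_cm: mean COP path length over the three scored trials (cm);
    sts_reps: 30-s sit-to-stand repetitions;
    sts_average_cutoff: minimum reps counted as 'average' for this age/gender
    (hypothetical placeholder for the Table 1 normative scores)."""
    normal_balance = cop_path_cm <= 30          # 0-30 cm normal, >=31 cm poor
    average_strength = sts_reps >= sts_average_cutoff
    return "low" if (normal_balance and average_strength) else "high"
```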

Table 1 . Age and gender-specific below average scores for 30-s sit-to-stand test.

Fall risk appraisal (FRA) matrix

The FRA matrix was obtained using a combination of FOF and physiological fall risk status. Participants were grouped into the following four categories based on their FOF and physiological fall risk according to existing literature ( Thiamwong et al., 2020a ): i) Rational (low FOF-low physiological fall risk), ii) Irrational (high FOF-low physiological fall risk), iii) Incongruent (low FOF-high physiological fall risk), and iv) Congruent (high FOF-high physiological fall risk).
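
Combining the two binary classifications gives the FRA group; as a small sketch (the group labels follow the matrix above, the function and variable names are ours):

```python
# The four-cell FRA matrix described above, expressed as a lookup table.
FRA_MATRIX = {
    ("low FOF",  "low"):  "Rational",
    ("high FOF", "low"):  "Irrational",
    ("low FOF",  "high"): "Incongruent",
    ("high FOF", "high"): "Congruent",
}

def classify_fra(fof_level, physiological_risk):
    """fof_level: 'low FOF' or 'high FOF'; physiological_risk: 'low' or 'high'."""
    return FRA_MATRIX[(fof_level, physiological_risk)]

print(classify_fra("high FOF", "low"))    # -> Irrational
```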

Physical activity (PA)

PA was assessed using ActiGraph GT9X Link (ActiGraph LLC., Pensacola, FL, United States), which contains a triaxial accelerometer with a dynamic range of ±8 gravitational units (g). The device was initialized to record acceleration data at a 30 Hz sampling frequency. Participants wore the ActiGraph on their non-dominant wrists for seven consecutive days in free-living conditions. They were given instructions to wear it during waking hours and remove it only during sleeping, showering, swimming and medical imaging tests. After the 7-day PA monitoring period, ActiGraph devices were collected from participants. Participants with ≥4 valid days were included in the analysis, and a day was considered valid if participants wore the device for at least 14 h ( Choudhury et al., 2023 ).
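
A hedged pandas sketch of this validity rule (at least 14 h of wear per day, at least 4 valid days per participant); the column names and the minute-level `worn` flag are assumptions, and non-wear detection itself is not shown:

```python
import pandas as pd

def valid_participants(minutes: pd.DataFrame) -> pd.Index:
    """minutes: one row per participant-minute with columns
    participant_id, date, worn (bool). Returns IDs meeting the validity rule."""
    wear_hours = minutes.groupby(["participant_id", "date"])["worn"].sum() / 60.0
    valid_days_per_person = (wear_hours >= 14).groupby(level="participant_id").sum()
    return valid_days_per_person[valid_days_per_person >= 4].index
```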

Raw acceleration data were downloaded as “.csv” files using ActiLife software v6.13.4 (ActiGraph LLC, Pensacola, FL, United States) and converted into MIMS units using MIMSunit package ( John et al., 2019 ) in R statistical software (R Core Team, Vienna, Austria). The data processing steps included: i) interpolating data to a consistent sampling rate (i.e., 100  Hz ) to account for inter-device variability in sampling rate, ii) extrapolating data to extend maxed-out signals to account for inter-device variability in dynamic range, iii) band-pass filtering to remove artifacts from acceleration signals that do not pertain to voluntary human movement, and iv) aggregation of processed signals from each axis into a sum of MIMS-units that represents the total amount of movement activity [details on MIMS-unit algorithm are published elsewhere ( John et al., 2019 )].
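
The conversion itself was done with the MIMSunit R package; purely as a rough illustration of the pipeline's shape (this is not the published MIMS-unit algorithm; the filter band, filter order, and aggregation below are simplifying assumptions, and the extrapolation step ii is omitted entirely):

```python
import numpy as np
from scipy import interpolate, signal

def simplified_movement_summary(t, acc, fs_out=100.0, band=(0.2, 5.0)):
    """Toy per-axis pipeline: resample -> band-pass -> per-minute aggregation.

    t   : sample timestamps in seconds
    acc : (n_samples, 3) raw acceleration in g
    Returns a per-minute movement summary (NOT actual MIMS units).
    """
    # i) interpolate each axis to a common sampling rate
    t_new = np.arange(t[0], t[-1], 1.0 / fs_out)
    acc_rs = np.column_stack(
        [interpolate.interp1d(t, acc[:, k])(t_new) for k in range(3)]
    )
    # iii) band-pass filter to keep frequencies typical of voluntary movement
    b, a = signal.butter(4, band, btype="bandpass", fs=fs_out)
    acc_f = signal.filtfilt(b, a, acc_rs, axis=0)
    # iv) aggregate |filtered acceleration| per minute, then sum over the three axes
    samples_per_min = int(fs_out * 60)
    n_min = len(acc_f) // samples_per_min
    per_min = np.abs(acc_f[: n_min * samples_per_min]).reshape(n_min, samples_per_min, 3)
    return per_min.sum(axis=1).sum(axis=1)
```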

PA volume, denoted by daily MIMS (MIMS/day), was calculated by summing up all triaxial MIMS/minutes accumulated throughout a day and averaged across all valid days. PA intensity, expressed as peak 30-min MIMS, was obtained by (a) first rank ordering a participant’s triaxial MIMS/minutes values within each valid day, (b) calculating the average of the highest 30 MIMS/minutes values within each day, and (c) finally taking the average of the resulting MIMS/minutes values across all valid wear days.
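
Given minute-level MIMS values, the two summaries can be computed as below (a pandas sketch with assumed column names, mirroring the steps just described):

```python
import pandas as pd

def summarize_pa(mims_minutes: pd.DataFrame) -> pd.DataFrame:
    """Per-participant daily MIMS and peak 30-min MIMS, averaged over valid days.

    mims_minutes: columns participant_id, date, mims (triaxial MIMS per minute);
    assumed to already contain valid wear days only.
    """
    per_day = (
        mims_minutes.groupby(["participant_id", "date"])["mims"]
        .agg(daily_mims="sum",                            # PA volume
             peak30=lambda x: x.nlargest(30).mean())      # avg of the 30 highest minutes
        .reset_index()
    )
    return per_day.groupby("participant_id")[["daily_mims", "peak30"]].mean()
```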

Anthropometric measurements

Height (in cm) was measured using a stadiometer. Body mass (in kilograms) was measured using a digital scale with no shoes. Body mass index (BMI) was calculated as the weight (kg) divided by the square of height (m²).
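
For example, under this definition a hypothetical participant weighing 70 kg at a height of 1.70 m would have a BMI of 70 / 1.70² ≈ 24.2 kg/m².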

Statistical analyses

All statistical analyses were performed in R statistical software (version 4.1.2, R Core Team, Vienna, Austria) with statistical significance level set at .05. Descriptive characteristics of participants were summarized as mean (standard deviation, SD) for normally distributed continuous variables, as median (Interquartile Range, IQR) for non-normally distributed continuous variables, and as frequency (percentage) for categorical variables, stratified by FRA categories. The Shapiro-Wilk test was performed to check if a continuous variable followed a normal distribution. Differences across groups were examined using one-way analysis of variance (ANOVA) for normally distributed data and Kruskal–Wallis test for non-normally distributed data, with Bonferroni adjustment for post hoc comparisons.
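
A hedged SciPy sketch of this testing logic (the group structure is an assumption; the Bonferroni-adjusted post hoc comparisons are not shown):

```python
from scipy import stats

def compare_fra_groups(values_by_group):
    """values_by_group: dict mapping FRA group name -> array of one variable."""
    samples = list(values_by_group.values())
    # Shapiro-Wilk normality check within each group (index 1 is the p-value)
    all_normal = all(stats.shapiro(s)[1] > 0.05 for s in samples)
    if all_normal:
        test_name, result = "one-way ANOVA", stats.f_oneway(*samples)
    else:
        test_name, result = "Kruskal-Wallis", stats.kruskal(*samples)
    return test_name, result.statistic, result.pvalue
```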

Multiple linear regression was conducted for each outcome variable (i.e., daily MIMS and peak 30-min MIMS) using the four FRA groups (“Rational,” “Irrational,” “Incongruent” and “Congruent”) as explanatory variables, controlling for age, gender and BMI. A priori sample size calculation for multiple linear regression revealed that the minimum number of samples for 8 explanatory variables at a statistical power level of 0.8, α = 0.05, and a medium effect size (Cohen's f² = 0.15) would be 108; therefore, our sample size (i.e., N = 178) had sufficient statistical power for multiple regression. The rational group (i.e., low FOF-low physiological fall risk) was selected as the reference group in the regression analysis.
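
A statsmodels sketch of the adjusted model as described (variable names are assumptions; the rational group is set as the reference level via treatment coding):

```python
import statsmodels.formula.api as smf

# df is assumed to hold columns: daily_mims, peak30, fra, age, gender, bmi
def fit_adjusted_model(df, outcome="daily_mims"):
    formula = (
        f"{outcome} ~ C(fra, Treatment(reference='Rational')) + age + C(gender) + bmi"
    )
    return smf.ols(formula, data=df).fit()

# fit_adjusted_model(df).summary() would report the group coefficients (beta, SE, p).
```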

Results

Among 178 participants, 163 samples were included in the analyses, after retaining only those who had at least 4 days of valid PA data and completed both FOF and physiological fall risk assessments. The mean (SD) age of participants was 75.3 (7.1) years; 73.6% of participants were in the 60–79 years age group ( n = 120) and 26.4% were 80 years of age or older ( n = 43). Figure 2 shows the scatterplot of participants’ age (years) and FOF scores, stratified by physiological fall risk status. The proportion of participants with low FOF was 71.7% ( n = 86) in the 60–79 years of age group and 48.8% ( n = 21) in the ≥80 years of age group. The median (IQR) BMI of participants was 26.6 (6.3) kg/m² and the majority of participants were female (79.1%). The median (IQR) Short FES-I score was 9 (5) and 34.4% of participants had high FOF. The median (IQR) COP path length was 27 (15) cm, and the median (IQR) sit-to-stand score was 13 (6) reps. 38.0% of participants had poor balance, 27.0% had below-average lower limb strength, and 48.5% showed poor balance, below-average lower limb strength, or both (i.e., high physiological fall risk). Finally, 37.4% of participants were screened as rational ( n = 61), 14.2% were irrational ( n = 23), 28.2% were incongruent ( n = 46) and 20.2% were congruent ( n = 37). Table 2 summarizes the characteristics of study participants according to FRA categories.

Figure 2 . Scatterplot of Age (years) across Fear of Falling scores, stratified by physiological fall risk status. Low physiological fall risk = meeting both normal static balance cut-off and average sit-to-stand score cut-off. High physiological fall risk = not meeting normal static balance cut-off or average sit-to-stand score cut-off or both.

Table 2 . Participant characteristics stratified by Fall Risk Appraisal matrix.

In Figure 3A , the variations in average MIMS (MIMS/hours) over 24 h by FRA categories are shown. The average MIMS across all groups was generally low at night, then substantially increased during morning hours and gradually decreased as the day progressed and evening approached. In Figure 3B , the mean (line) and standard error (shaded area) of MIMS/hours for each FRA group is shown. Overall, the rational group showed the highest average MIMS/hours throughout the day, while the congruent group had the lowest. Among maladaptive FRA groups, the peak was higher in the incongruent group than in their irrational counterparts, indicating the potential role of FOF in limiting high-intensity PA participation.

Figure 3 . (A) Daily patterns of average MIMS per hour by Fall Risk Appraisal (FRA) groups. (B) Mean (line) and standard error (shaded area) of MIMS per hour for each FRA group.

The mean (SD) age in the congruent group was 78.8 (7.6) years, which was higher than both the rational (74.3 [5.8] years, p = .005) and incongruent (74.3 [7.0] years, p = .010) groups, as shown in Supplementary Figure S1 . This suggests that the prevalence of high FOF, irrespective of balance performance and lower limb strength, may increase with advanced age. Also, the median (IQR) BMI in the congruent group (28.9 [5.8] kg/m²) was higher in comparison to the rational (24.9 [6.4] kg/m², p = .001) and incongruent (26.9 [4.7] kg/m², p = .018) groups (shown in Supplementary Figure S2 ), indicating that higher BMI in older adults may be associated with high FOF. However, no significant group differences were observed between rational and irrational groups in terms of age and BMI.

The mean (SD) daily MIMS in rational group was 10,408 (2,439) MIMS/day, which was 15.8% higher than irrational ( p = .025) and 16.6% higher than congruent ( p = .013) groups, as shown in Figure 4 . Also, the mean (SD) peak 30-min MIMS in rational group was 39.9 (8.3) MIMS/day, which was 14.0% higher than irrational ( p = .004) and 17.5% higher than congruent ( p < .001) groups ( Figure 5 ). Compared to rational group, incongruent participants showed no significant differences in PA volume and intensity, despite having poor balance and below average lower limb strength.

Figure 4 . Average daily MIMS (MIMS/day) across categories of Fall Risk Appraisal combining FOF and physiological fall risk, * p < .05.

Figure 5 . Peak 30-min MIMS per day across categories of Fall Risk Appraisal combining FOF and physiological fall risk, ** p < .01, *** p < .001.

Table 3 presents the regression models for daily MIMS. In comparison to the reference group (i.e., rational), lower PA volume was associated with irrational ( β [SE] = −1,463.2 [687.7] MIMS/day, p = .035) and congruent ( β [SE] = −1,579.5 [582.9] MIMS/day, p = .007) FRAs in Model 1 (unadjusted). In Model 2, after adjusting for age, gender and BMI, only irrational FRA was significantly associated with lower PA volume ( β [SE] = −1,476.41 [582.26] MIMS/day, p = .025; regression coefficients of covariates are presented in Supplementary Table S1 ).

Results of the regression analysis for peak 30-min MIMS are presented in Table 4 . In Model 1 (unadjusted), lower “peak PA intensity” was associated with irrational ( β [SE] = −5.63 [1.99] MIMS/day, p = .005) and congruent FRAs ( β [SE] = −7.06 [1.76] MIMS/day, p < .001) compared to the reference group. In Model 2, after adjusting for age, gender and BMI, both irrational and congruent FRAs were still significantly associated with lower “peak PA intensity” (irrational: β [SE] = −5.40 [1.97] MIMS/day, p = .007; congruent: β [SE] = −5.43 [1.86] MIMS/day, p = .004; regression coefficients of covariates are presented in Supplementary Table S2 ).

Table 3 . Association between Fall Risk Appraisal groups and average daily MIMS (MIMS/day) using linear regression. Model 2 was adjusted for age, gender and BMI.

Table 4 . Association between Fall Risk Appraisal groups and peak 30-min MIMS per day (MIMS/day) using linear regression. Model 2 was adjusted for age, gender and BMI.

Discussion

This is the first study, to our knowledge, to evaluate the associations of FRA with daily MIMS and peak 30-min MIMS in a sample of community-dwelling US older adults. In general, both the volume and intensity of PA were highest in the rational group and lowest in the congruent group. In maladaptive FRA groups, high FOF (i.e., irrational FRA) was associated with lower PA volume and intensity compared to the reference group (i.e., rational FRA), but no significant differences were observed for high physiological fall risk (i.e., incongruent FRA).

Prior research has shown that FOF is associated with reduced PA levels in community-dwelling older adults using objectively measured PA data ( Jefferis et al., 2014 ; Choudhury et al., 2022 ). Our results broadly agree with these findings, showing that total daily PA volume was significantly lower in the two high FOF groups (i.e., irrational and congruent) than the rational group. This suggests that regardless of balance performance and lower limb strength, low FOF was associated with high PA volume in our study sample. In linear regression analysis, after accounting for age, gender and BMI, reduced daily MIMS was significantly associated with irrational FRA, but not with congruent FRA. This may be attributed to the fact that the average age and BMI of congruent participants were higher than those of all other groups, and evidence suggests that increasing older age and higher BMI contribute to lower PA levels in older adults ( Smith et al., 2015 ).

We did not observe any significant difference between the two low FOF groups (i.e., rational and incongruent) in terms of daily PA volume. This suggests that, for maladaptive FRA, high FOF (not high physiological fall risk) had a stronger association with reduced daily PA accumulation in our study sample. In contrast to our findings, a recent study found that low physiological fall risk was more strongly associated with increased walking activity (steps/day) than low perceived fall risk in a sample of community-dwelling German older adults ( n = 294) ( Jansen et al., 2021 ). However, it should be noted that Jansen et al. used multiple independent risk factors (i.e., previous falls, balance impairment, gait impairment, and multimedication) to distinguish between high and low physiological fall risk, whereas they used only one tool (Short FES-I) to assess perceived fall risk. Furthermore, participants with low FOF and high physiological fall risk in that German older adult cohort ( Jansen et al., 2021 ) were relatively older than those in our study sample [mean (SD) age: 81.6 (5.5) years vs. 74.3 (7.0) years in our study]. Previous studies indicate that the likelihood of reduced participation in PA gradually increases with advanced age, because of age-related declines in muscle mass, muscle strength, and functional fitness (i.e., the physical capacity to perform activities of daily living independently and without the early onset of fatigue) ( Milanović et al., 2013 ; Westerterp, 2018 ; Suryadinata et al., 2020 ). Therefore, future research should examine how age-related functional declines mediate the relationship between maladaptive FRA and daily PA volume in older adults.

In our study, the peak PA intensity in both high FOF groups (i.e., irrational and congruent) was significantly lower than the rational group. Despite the differences in the PA metrics, this is in general agreement with the previous findings that showed older adults with irrational and congruent FRAs were more likely to spend less time in MVPA ( Thiamwong et al., 2023 ). Interestingly, after adjusting for confounders, the decrease in peak 30-min MIMS for irrational and congruent groups was almost equivalent in our study. This suggests that older adults with high FOF may restrict their participation in high intensity PA, irrespective of their physiological fall risk status. Our findings extend the previously reported association between PA intensity and FOF in older adults ( Sawa et al., 2020 ), highlighting the need to integrate cognitive behavioral therapy to reduce FOF in fall intervention programs.

For peak 30-min MIMS, we did not find any significant difference between two low FOF groups (i.e., rational and incongruent). This suggests that, similar to total PA volume, peak PA intensity was more strongly associated with high FOF (rather than high physiological fall risk) for maladaptive FRA in our sample. Unlike MVPA cut points that exclude PA intensities ≤3 METs or equivalent, peak 30-min MIMS considers acceleration magnitudes ranging from lower to higher peak efforts within a day, enabling comparison over the whole spectrum of PA intensity levels (e.g., light vs. vigorous) ( Zheng et al., 2023 ). Further research should investigate domains of peak PA efforts across different FRA groups, so that informed strategies can be developed to promote high-intensity PA participation according to the perceived and physiological risk of fall.

Based on the findings of our study, it can be inferred that the FRA assessment may be useful in designing customized PA interventions to promote an active lifestyle in older adults. For example, to increase PA participation in older adults with irrational FRA, cognitive behavioral therapy can be integrated into PA programs to improve their self-efficacy and sense of control over falling ( Tennstedt et al., 1998 ). For incongruent FRA, PA recommendations should include exercise regimens specifically designed to reduce physiological fall risk, such as high-intensity balance and strength training, in addition to aerobic activities ( Sherrington et al., 2008 ). On the other hand, older adults with congruent FRA may benefit from PA programs that combine both balance and strength exercises, and cognitive behavioral therapy ( Brouwer et al., 2003 ).

A strength of our study is the use of MIMS metric to provide a comprehensive PA assessment (volume and intensity) enabling reliable, cross-study comparisons of our findings with other MIMS-based studies regardless of the device type, model or manufacturer. Furthermore, we used evidence-based cut-off points to determine FOF level (low vs. high FOF), balance status (poor vs. normal balance) and lower limb strength (below average vs. average strength) to categorize participants into FRA groups. However, our study has several limitations. First, to determine physiological fall risk status, we didn’t use the Physiological Profile Assessment ( Delbaere et al., 2010 ) or multiple independent risk factors ( Jansen et al., 2021 ), which might have led to different group formations than those studies. Instead, we used static balance and lower limb strength as physiological fall risk indicators. While balance and strength deficits are important predictors of falls in older adults, they might not account for all aspects of physiological fall risk (such as gait impairment, visual and sensory deficits, use of multi-medications etc.) ( Fabre et al., 2010 ). Second, it is to be noted that the balance performance measure (i.e., static balance) used in this study may not capture the full spectrum of an individual’s balance capabilities. There are different measures of balance performance, including static steady-state balance (i.e., the ability to maintain a steady position while standing or sitting), dynamic steady-state balance (i.e., the ability to maintain a steady position while performing postural transitions and walking), proactive balance (i.e., the ability to anticipate and mitigate a predicted postural disturbance), and reactive balance (i.e., the ability to recover a stable position following an unexpected postural disturbance) ( Shumway-Cook and Woollacott, 2007 ). Therefore, future studies may consider using more comprehensive assessments of balance performance in older adults to define physiological fall risk in FRA. Third, our study only considered FOF as the psychological fall risk measure in FRA and did not investigate other psychological constructs such as falls efficacy or balance confidence ( Moore et al., 2011 ). FOF and falls efficacy are two major fall-related psychological constructs in preventing and managing fall risks in older adults. It is to be noted that, though FOF and falls efficacy are correlated, they represent theoretically distinct concepts ( Hadjistavropoulos et al., 2011 ). FOF is defined as “the lasting concerns about falling that leads to an individual avoiding activities that one remains capable of performing.” Some common instruments for FOF measurement include FES-I, Short FES-I, Iconographical Falls Efficacy Scale (ICON-FES), Geriatric Fear of Falling Measure (GFFM), Survey of Activities and Fear of Falling in the Elderly (SAFE), Fear of Falling Avoidance Behaviour Questionnaire (FFABQ) etc., ( Soh et al., 2021 ). On the other hand, falls efficacy is defined as the perceived confidence in one’s ability to carry out activities of daily living without experiencing a fall ( Moore and Ellis, 2008 ). Existing instruments for measuring falls efficacy include Falls Efficacy Scale (FES), modified FES (MFES), Perceived Ability to Prevent and Manage Fall Risks (PAPMFR), and Perceived Ability to Manage Risk of Falls or Actual Falls (PAMF) ( Soh et al., 2021 ). 
Prior research has reported that, compared to FOF, falls efficacy shows a stronger relationship with measures of basic and instrumental activities of daily living (ADL-IADL), and physical and social functioning ( Tinetti et al., 1994 ). Therefore, future studies should consider exploring the combined effects of falls efficacy and physiological fall risk measures on habitual PA level to determine whether FOF or falls efficacy should be considered as a target for PA interventions in older adults. Fourth, to date, there exist no established cut-offs for the MIMS metric to categorize total PA volume and intensity that correspond to meeting national PA guidelines, and it is still unknown how well MIMS/minute can estimate energy expenditure ( Vilar-Gomez et al., 2023 ). Our study provides a first step toward the use of a standardized metric to associate PA behavior with FRA in a community-dwelling older adult sample in the US. Future studies should examine such associations in large, nationally representative populations to establish benchmark values for daily MIMS and peak 30-min MIMS in different FRA categories. Fifth, the cross-sectional design of the study did not allow us to determine a causal relationship between FRA and PA, so reverse and/or bidirectional causality might still be present. Sixth, although we controlled for age, gender, and BMI in the regression analyses, there remains the possibility of additional residual confounding [such as neuropsychological constructs that have been associated with FOF, which include depression, anxiety, neuroticism, attention, and executive function ( Delbaere et al., 2010 )]. Finally, our sample size was relatively small and 79% of participants were female. The generalizability of our findings might be restricted by the small, female-dominant nature of our sample.

In conclusion, compared to rational FRA, the habitual PA level (daily MIMS and peak 30-min MIMS) was lower in both high FOF groups (i.e., irrational and congruent), but not in the incongruent group. This suggests that, for maladaptive FRA in our study sample, high perceived fall risk, rather than high physiological fall risk, had a stronger association with reduced PA level. When controlling for covariates, the decrease in peak PA intensity remained significantly associated with irrational and congruent FRAs, indicating that older adults with high FOF performed PA at lower peak efforts, irrespective of their physiological fall risk status. Future prospective studies should focus on identifying the optimal habitual PA level (total PA volume and peak PA intensity) in accordance with an older adult’s FOF and physiological fall risk to better inform public health policies for a sustainable, effective PA framework.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the Institutional Review Board, University of Central Florida. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

RC: Data curation, Formal Analysis, Investigation, Methodology, Software, Visualization, Writing–original draft. J-HP: Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing–review and editing. CB: Formal Analysis, Visualization, Writing–review and editing. MC: Data curation, Investigation, Writing–review and editing. DF: Conceptualization, Funding acquisition, Writing–review and editing. RX: Conceptualization, Funding acquisition, Writing–review and editing. JS: Conceptualization, Funding acquisition, Supervision, Writing–review and editing. LT: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing–review and editing.

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The research was funded by the National Institute on Aging (R03AG06799) and the National Institute on Minority Health and Health Disparities (R01MD018025) of National Institutes of Health. This research also received financial support from the University of Central Florida CONNECT CENTRAL (Interdisciplinary research seed grant; AWD00001720 and AWD00005378).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fragi.2024.1284694/full#supplementary-material

Ambrose, A. F., Paul, G., and Hausdorff, J. M. (2013). Risk factors for falls among older adults: a review of the literature. Maturitas 75 (1), 51–61. doi:10.1016/j.maturitas.2013.02.009

Brouwer, B. J., Walker, C., Rydahl, S. J., and Culham, E. G. (2003). Reducing fear of falling in seniors through education and activity programs: a randomized trial. J. Am. Geriatr. Soc. 51 (6), 829–834. doi:10.1046/j.1365-2389.2003.51265.x

Buschke, H., Kuslansky, G., Katz, M., Stewart, W. F., Sliwinski, M. J., Eckholdt, H. M., et al. (1999). Screening for dementia with the memory impairment screen. Neurology 52 (2), 231–238. doi:10.1212/wnl.52.2.231

Chan, B. K., Marshall, L. M., Winters, K. M., Faulkner, K. A., Schwartz, A. V., and Orwoll, E. S. (2007). Incident fall risk and physical activity and physical performance among older men: the Osteoporotic Fractures in Men Study. Am. J. Epidemiol. 165 (6), 696–703. doi:10.1093/aje/kwk050

Choudhury, R., Park, J. H., Banarjee, C., Thiamwong, L., Xie, R., and Stout, J. R. (2023). Associations of mutually exclusive categories of physical activity and sedentary behavior with body composition and fall risk in older women: a cross-sectional study. Int. J. Environ. Res. Public Health 20 (4), 3595. doi:10.3390/ijerph20043595

Choudhury, R., Park, J. H., Thiamwong, L., Xie, R., and Stout, J. R. (2022). Objectively measured physical activity levels and associated factors in older US women during the COVID-19 pandemic: cross-sectional study. JMIR Aging 5 (3), e38172. doi:10.2196/38172

Delbaere, K., Close, J. C., Brodaty, H., Sachdev, P., and Lord, S. R. (2010). Determinants of disparities between perceived and physiological risk of falling among elderly people: cohort study. Bmj 341, c4165. doi:10.1136/bmj

Deshpande, N., Metter, E. J., Lauretani, F., Bandinelli, S., Guralnik, J., and Ferrucci, L. (2008). Activity restriction induced by fear of falling and objective and subjective measures of physical function: a prospective cohort study. J. Am. Geriatr. Soc. 56 (4), 615–620. doi:10.1111/j.1532-5415.2007.01639.x

Fabre, J. M., Ellis, R., Kosma, M., and Wood, R. H. (2010). Falls risk factors and a compendium of falls risk screening instruments. J. Geriatr. Phys. Ther. 33 (4), 184–197. doi:10.1519/jpt.0b013e3181ff2a24

Goble, D. J., Hearn, M. C., and Baweja, H. S. (2017). Combination of BTrackS and Geri-Fit as a targeted approach for assessing and reducing the postural sway of older adults with high fall risk. Clin. Interv. Aging 12, 351–357. doi:10.2147/cia.S131047

Gregg, E. W., Pereira, M. A., and Caspersen, C. J. (2000). Physical activity, falls, and fractures among older adults: a review of the epidemiologic evidence. J. Am. Geriatr. Soc. 48 (8), 883–893. doi:10.1111/j.1532-5415.2000.tb06884.x

Hadjistavropoulos, T., Delbaere, K., and Fitzgerald, T. D. (2011). Reconceptualizing the role of fear of falling and balance confidence in fall risk. J. Aging Health 23 (1), 3–23. doi:10.1177/0898264310378039

Heesch, K. C., Byles, J. E., and Brown, W. J. (2008). Prospective association between physical activity and falls in community-dwelling older women. J. Epidemiol. Community Health 62 (5), 421–426. doi:10.1136/jech.2007.064147

Jansen, C. P., Klenk, J., Nerz, C., Todd, C., Labudek, S., Kramer-Gmeiner, F., et al. (2021). Association between everyday walking activity, objective and perceived risk of falling in older adults. Age Ageing 50 (5), 1586–1592. doi:10.1093/ageing/afab037

Jefferis, B. J., Iliffe, S., Kendrick, D., Kerse, N., Trost, S., Lennon, L. T., et al. (2014). How are falls and fear of falling associated with objectively measured physical activity in a cohort of community-dwelling older men? BMC Geriatr. 14, 114. doi:10.1186/1471-2318-14-114

John, D., Tang, Q., Albinali, F., and Intille, S. (2019). An open-source monitor-independent movement summary for accelerometer data processing. J. Meas. Phys. Behav. 2 (4), 268–281. doi:10.1123/jmpb.2018-0068

Kakara, R., Bergen, G., Burns, E., and Stevens, M. (2023). Nonfatal and fatal falls among adults aged ≥65 Years - United States, 2020-2021. MMWR Morb. Mortal. Wkly. Rep. 72 (35), 938–943. doi:10.15585/mmwr.mm7235a1

Kempen, G. I., Yardley, L., van Haastregt, J. C., Zijlstra, G. A., Beyer, N., Hauer, K., et al. (2008). The Short FES-I: a shortened version of the falls efficacy scale-international to assess fear of falling. Age Ageing 37 (1), 45–50. doi:10.1093/ageing/afm157

Levy, S. S., Thralls, K. J., and Kviatkovsky, S. A. (2018). Validity and reliability of a portable balance tracking system, BTrackS, in older adults. J. Geriatr. Phys. Ther. 41 (2), 102–107. doi:10.1519/jpt.0000000000000111

Mendes da Costa, E., Pepersack, T., Godin, I., Bantuelle, M., Petit, B., and Levêque, A. (2012). Fear of falling and associated activity restriction in older people. results of a cross-sectional study conducted in a Belgian town. Arch. Public Health 70 (1), 1. doi:10.1186/0778-7367-70-1

Milanović, Z., Pantelić, S., Trajković, N., Sporiš, G., Kostić, R., and James, N. (2013). Age-related decrease in physical activity and functional fitness among elderly men and women. Clin. Interv. Aging 8, 549–556. doi:10.2147/cia.S44112

Moore, D. S., and Ellis, R. (2008). Measurement of fall-related psychological constructs among independent-living older adults: a review of the research literature. Aging Ment. Health 12 (6), 684–699. doi:10.1080/13607860802148855

Moore, D. S., Ellis, R., Kosma, M., Fabre, J. M., McCarter, K. S., and Wood, R. H. (2011). Comparison of the validity of four fall-related psychological measures in a community-based falls risk screening. Res. Q. Exerc. Sport 82 (3), 545–554. doi:10.1080/02701367.2011.10599787

Moreland, B., Kakara, R., and Henry, A. (2020). Trends in nonfatal falls and fall-related injuries among adults aged ≥65 Years - United States, 2012-2018. MMWR Morb. Mortal. Wkly. Rep. 69 (27), 875–881. doi:10.15585/mmwr.mm6927a5

Rikli, R. E., and Jones, C. J. (1999). Functional fitness normative scores for community-residing older adults, ages 60-94. J. Aging Phys. Act. 7 (2), 162–181. doi:10.1123/japa.7.2.162

Sawa, R., Asai, T., Doi, T., Misu, S., Murata, S., and Ono, R. (2020). The association between physical activity, including physical activity intensity, and fear of falling differs by fear severity in older adults living in the community. J. Gerontol. B Psychol. Sci. Soc. Sci. 75 (5), 953–960. doi:10.1093/geronb/gby103

Sherrington, C., Whitney, J. C., Lord, S. R., Herbert, R. D., Cumming, R. G., and Close, J. C. (2008). Effective exercise for the prevention of falls: a systematic review and meta-analysis. J. Am. Geriatr. Soc. 56 (12), 2234–2243. doi:10.1111/j.1532-5415.2008.02014.x

Shumway-Cook, A., and Woollacott, M. H. (2007). Motor control: translating research into clinical practice . United States: Lippincott Williams & Wilkins .

Smith, L., Gardner, B., Fisher, A., and Hamer, M. (2015). Patterns and correlates of physical activity behaviour over 10 years in older adults: prospective analyses from the English Longitudinal Study of Ageing. BMJ Open 5 (4), e007423. doi:10.1136/bmjopen-2014-007423

Soh, S. L., Tan, C. W., Thomas, J. I., Tan, G., Xu, T., Ng, Y. L., et al. (2021). Falls efficacy: extending the understanding of self-efficacy in older adults towards managing falls. J. Frailty Sarcopenia Falls 6 (3), 131–138. doi:10.22540/jfsf-06-131

Suryadinata, R. V., Wirjatmadi, B., Adriani, M., and Lorensia, A. (2020). Effect of age and weight on physical activity. J. Public Health Res. 9 (2), 1840. doi:10.4081/jphr.2020.1840

Tennstedt, S., Howland, J., Lachman, M., Peterson, E., Kasten, L., and Jette, A. (1998). A randomized, controlled trial of a group intervention to reduce fear of falling and associated activity restriction in older adults. J. Gerontol. B Psychol. Sci. Soc. Sci. 53 (6), P384–P392. doi:10.1093/geronb/53b.6.p384

Thiamwong, L. (2020). A hybrid concept analysis of fall risk appraisal: integration of older adults' perspectives with an integrative literature review. Nurs. Forum 55 (2), 190–196. doi:10.1111/nuf.12415

Thiamwong, L., Huang, H. J., Ng, B. P., Yan, X., Sole, M. L., Stout, J. R., et al. (2020a). Shifting maladaptive fall risk appraisal in older adults through an in-home physio-fEedback and exercise pRogram (peer): a pilot study. Clin. Gerontol. 43 (4), 378–390. doi:10.1080/07317115.2019.1692120

Thiamwong, L., Ng, B. P., Kwan, R. Y. C., and Suwanno, J. (2021a). Maladaptive fall risk appraisal and falling in community-dwelling adults aged 60 and older: implications for screening. Clin. Gerontol. 44 (5), 552–561. doi:10.1080/07317115.2021.1950254

Thiamwong, L., Sole, M. L., Ng, B. P., Welch, G. F., Huang, H. J., and Stout, J. R. (2020b). Assessing fall risk appraisal through combined physiological and perceived fall risk measures using innovative Technology. J. Gerontol. Nurs. 46 (4), 41–47. doi:10.3928/00989134-20200302-01

Thiamwong, L., Stout, J. R., Park, J. H., and Yan, X. (2021b). Technology-based fall risk assessments for older adults in low-income settings: protocol for a cross-sectional study. JMIR Res. Protoc. 10 (4), e27381. doi:10.2196/27381

Thiamwong, L., Xie, R., Park, J. H., Choudhury, R., Malatyali, A., Li, W., et al. (2023). Levels of accelerometer-based physical activity in older adults with a mismatch between physiological fall risk and fear of falling. J. Gerontol. Nurs. 49 (6), 41–49. doi:10.3928/00989134-20230512-06

Tinetti, M. E., Mendes de Leon, C. F., Doucette, J. T., and Baker, D. I. (1994). Fear of falling and fall-related efficacy in relationship to functioning among community-living elders. J. Gerontol. 49 (3), M140–M147. doi:10.1093/geronj/49.3.m140

Troiano, R. P., Berrigan, D., Dodd, K. W., Mâsse, L. C., Tilert, T., and McDowell, M. (2008). Physical activity in the United States measured by accelerometer. Med. Sci. Sports Exerc. 40 (1), 181–188. doi:10.1249/mss.0b013e31815a51b3

Tudor-Locke, C., Brashear, M. M., Katzmarzyk, P. T., and Johnson, W. D. (2012). Peak stepping cadence in free-living adults: 2005-2006 NHANES. J. Phys. Act. Health 9 (8), 1125–1129. doi:10.1123/jpah.9.8.1125

Vilar-Gomez, E., Vuppalanchi, R., Gawrieh, S., Pike, F., Samala, N., and Chalasani, N. (2023). Significant dose-response association of physical activity and diet quality with mortality in adults with suspected NAFLD in a population study. Am. J. Gastroenterol. 118 (9), 1576–1591. doi:10.14309/ajg.0000000000002222

Westerterp, K. R. (2018). Changes in physical activity over the lifespan: impact on body composition and sarcopenic obesity. Obes. Rev. 19 (1), 8–13. doi:10.1111/obr.12781

Wolff-Hughes, D. L., Bassett, D. R., and Fitzhugh, E. C. (2014). Population-referenced percentiles for waist-worn accelerometer-derived total activity counts in U.S. youth: 2003 - 2006 NHANES. PLoS One 9 (12), e115915. doi:10.1371/journal.pone.0115915

Yardley, L., Beyer, N., Hauer, K., Kempen, G., Piot-Ziegler, C., and Todd, C. (2005). Development and initial validation of the falls efficacy scale-international (FES-I). Age Ageing 34 (6), 614–619. doi:10.1093/ageing/afi196

Yee, X. S., Ng, Y. S., Allen, J. C., Latib, A., Tay, E. L., Abu Bakar, H. M., et al. (2021). Performance on sit-to-stand tests in relation to measures of functional fitness and sarcopenia diagnosis in community-dwelling older adults. Eur. Rev. Aging Phys. Act. 18 (1), 1. doi:10.1186/s11556-020-00255-5

Zheng, P., Pleuss, J. D., Turner, D. S., Ducharme, S. W., and Aguiar, E. J. (2023). Dose-response association between physical activity (daily MIMS, peak 30-minute MIMS) and cognitive function among older adults: NHANES 2011-2014. J. Gerontol. A Biol. Sci. Med. Sci. 78 (2), 286–291. doi:10.1093/gerona/glac076

Zijlstra, G. A., van Haastregt, J. C., van Eijk, J. T., van Rossum, E., Stalenhoef, P. A., and Kempen, G. I. (2007). Prevalence and correlates of fear of falling, and associated avoidance of activity in the general population of community-living older people. Age Ageing 36 (3), 304–309. doi:10.1093/ageing/afm021

Keywords: falls, physical activity, accelerometry, aging, fear of falling, fall risk, MIMS

Citation: Choudhury R, Park J-H, Banarjee C, Coca MG, Fukuda DH, Xie R, Stout JR and Thiamwong L (2024) Associations between monitor-independent movement summary (MIMS) and fall risk appraisal combining fear of falling and physiological fall risk in community-dwelling older adults. Front. Aging 5:1284694. doi: 10.3389/fragi.2024.1284694

Received: 28 August 2023; Accepted: 20 March 2024; Published: 09 April 2024.

Copyright © 2024 Choudhury, Park, Banarjee, Coca, Fukuda, Xie, Stout and Thiamwong. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Joon-Hyuk Park, [email protected]

This article is part of the Research Topic

Insights into Falls Efficacy and Fear of Falling

Robust Fuzzy Model-Based \(H_2/H_\infty\) Control for Markovian Jump Systems with Random Delays and Uncertain Transition Probabilities

  • Published: 10 April 2024

Cite this article

  • Cheng Tan,
  • Binlian Zhu,
  • Jianying Di &
  • Yuhuan Fei

This paper studies mixed \(H_2/H_\infty\) control for Takagi–Sugeno (T–S) fuzzy Markovian jump systems (MJSs) subject to random delays and multiple uncertain transition probabilities. In contrast to existing research, this study considers parameter uncertainties, external disturbance, random delays, and uncertain transition probabilities simultaneously in a unified T–S fuzzy model. Specifically, this study examines multiple Markov chains with partially unknown transition probabilities. These complex imperfections have a substantial adverse impact on system performance, and the associated challenge of mixed \(H_2/H_\infty\) control remains unresolved. Our contributions are described as follows. The proposed approach utilizes the free-weighting matrix technique and a Lyapunov–Krasovskii functional to derive the \(H_2/H_\infty\) controller, which ensures that the stochastic T–S fuzzy systems exhibit stochastic stability and comply with the \(H_\infty\) performance index.
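
The paper's exact model is not reproduced in this excerpt; as a hedged illustration of the system class being described, a discrete-time T–S fuzzy Markovian jump system with mode \(y_k\in \mathbb {M}\) and random delay \(d_k\) is commonly written as

\[
x_{k+1}=\sum_{i=1}^{r}h_i(\theta _k)\bigl[A_i(y_k)x_k+A_{di}(y_k)x_{k-d_k}+B_i(y_k)u_k+D_i(y_k)v_k\bigr],
\]

where \(h_i(\theta _k)\ge 0\) with \(\sum_{i=1}^{r}h_i(\theta _k)=1\) are the normalized fuzzy membership functions, \(v_k\) is the external disturbance, and \(y_k\) evolves according to a Markov chain whose transition probabilities may be partially unknown.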


Data availability

No data was used for the research described in the article.

Hou, T., Ma, H.: Exponential stability for discrete-time infinite Markov jump systems. IEEE Trans. Autom. Control 61 (12), 4241–4246 (2016)

Article   MathSciNet   Google Scholar  

Lin, H., Su, H., Shu, Z., et al.: Optimal estimation in udp-like networked control systems with intermittent inputs: stability analysis and suboptimal filter design. IEEE Trans. Autom. Control 61 (7), 1794–1809 (2016)

Tan, C., Gao, C., Zhang, Z., et al.: Non-fragile guaranteed cost control for networked nonlinear Markov jump systems under multiple cyber-attacks. J. Franklin Inst. 360 (13), 9446–9467 (2023)

Yang, H., Xu, Y., Zhang, J.: Event-driven control for networked control systems with quantization and Markov packet losses. IEEE Trans. Cybern. 47 (8), 2235–2243 (2017)

Article   Google Scholar  

Grillo, S., Pievatolo, A., Tironi, E.: Optimal storage scheduling using Markov decision processes. IEEE Trans. Sustain. Energy 7 (2), 755–764 (2016)

Li, F., Xu, S., Shen, H.: Fuzzy-model-based \(H_\infty\) control for Markov jump nonlinear slow sampling singularly perturbed systems with partial information. IEEE Trans. Fuzzy Syst. 27 (10), 1952–1962 (2019)

Shen, H., Li, F., Yan, H., et al.: Finite-time event-triggered \(H_\infty\) control for T–S fuzzy Markov jump systems. IEEE Trans. Fuzzy Syst. 26 (5), 3122–3135 (2018)

Fang, H., Tu, Y., Wang, H.: Fuzzy-based adaptive optimization of unknown discrete-time nonlinear Markov jump systems with off-policy reinforcement learning. IEEE Trans. Fuzzy Syst. 30 (12), 5276–5290 (2022)

Aslam, M.S., Qaisar, I., Majid, Abdul., et al.: Adaptive event-triggered robust \(H_\infty\) control for T–S fuzzy networked Markov jump systems with time-varying delay. Asian J. Control 25 (1), 213–228 (2022)

Wu, Z., Dong, S., Shi, P., et al.: Fuzzy-model-based nonfragile guaranteed cost control of nonlinear Markov jump systems. IEEE Trans. Syst. Man Cybern. Syst. 47 (8), 2388–2397 (2017)

Zhang, L., Ning, Z., Shi, P.: Input-output approach to control for fuzzy Markov jump systems with time-varying delays and uncertain packet dropout probability. IEEE Trans. Cybern. 45 (11), 2449–2460 (2015)

Zhang, X., Wang H., Stojanovic, V.: Asynchronous fault detection for interval type-2 fuzzy nonhomogeneous higher level Markov jump systems with uncertain transition probabilities. IEEE Trans. Fuzzy Syst. 30 (7), 2487–2499 (2022)

Lian, J., Li, S.: Fuzzy control of uncertain positive Markov jump fuzzy systems with input constraint. IEEE Trans. Cybern. 51 (4), 2032–2041 (2021)

He, M., Li, J.: Resilient guaranteed cost control for uncertain T–S fuzzy systems with time-varying delays and Markov jump parameters. ISA Trans. 88 , 12–22 (2019)

Sun, J., Zhang, H., Wang, Y., Liang, H.: \(H_\infty\) control for switched it2 fuzzy nonlinear systems with multiple time delays applied in hybrid grid-connected generation. Appl Math. Comput. 395 , 125887 (2021)

MathSciNet   Google Scholar  

Liang, H., Chang, Z., Ahn, C.K.: Hybrid event-triggered intermittent control for nonlinear multi-agent systems. IEEE Trans. Netw. Sci. Eng. 10 (4), 1975–1984 (2023)

Aatabe, M., El Guezar, F., Bouzahir, H.: Constrained stochastic control of positive T–S fuzzy systems with Markov jumps and its application to a DC–DC boost converter. Trans. Inst. Meas. Control 42 (16), 3234–3242 (2020)

Qi, Q., Xie, L., Zhang, H.: Optimal control for stochastic systems with multiple controllers of different information structures. IEEE Trans. Autom. Control 66 (9), 4160–4175 (2021)

Huang, H., Li, D., Xi, Y.: Design and input-to-state practically stable analysis of the mixed \(H_2/H_\infty\) feedback robust model predictive control. IET Control Theory Appl. 6 (4), 498–505 (2012)

Wang, M., Liang, H., Pan, Y., et al.: A new privacy preservation mechanism and a gain iterative disturbance observer for multiagent systems. IEEE Trans. Netw. Sci. Eng. 11 (1), 392–403 (2023)

Chen, L., Liang, H., Pan, Y., et al.: Human-in-the-loop consensus tracking control for UAV systems via an improved prescribed performance approach. IEEE Trans. Aerosp. Electron. Syst. 59 (6), 8380–8391 (2023)

Xue, A., Wang, H., Lu, R.: Event-based \(H_\infty\) control for discrete Markov jump systems. Neurocomputing 190 , 165–171 (2016)

Xing, M., Deng, F., Li, S., et al.: Stability of nonlinear stochastic Markov jump system with mode-dependent delays and applications. Int. J. Comput. Math. 98 (8), 1683–1698 (2020)

Yang, S., Bo, Y.: Robust mixed \(H_2/H_\infty\) control of networked control systems with random time delays in both forward and backward communication links. Automatica 47 (4), 754–760 (2011)


Qiu, L., Shi, Y., Yao, F., et al.: Network-based robust \(H_2/H_\infty\) control for linear systems with two-channel random packet dropouts and time delays. IEEE Trans. Cybern. 45 (8), 1450–1462 (2015)

Petersen, I.R.: A stabilization algorithm for a class of uncertain linear systems. Syst. Control Lett. 8 (4), 351–357 (1987)

Qiu, L., Yao, F., Xu, G., et al.: Output feedback guaranteed cost control for networked control systems with random packet dropouts and time delays in forward and feedback communication links. IEEE Trans. Autom. Sci. Eng. 13 (1), 284–295 (2016)

Sun, H., Yan, L.: Robust \(H_\infty\) fuzzy control for nonlinear discrete-time stochastic systems with Markovian jump and parametric uncertainties. Math. Probl. Eng. 2014 , 11 (2014)

Wang, J., Wu, J., Cao, J., et al.: \({H}_{\infty }\) fuzzy dynamic output feedback reliable control for Markov jump nonlinear systems with PDT switched transition probabilities and its application. IEEE Trans. Fuzzy Syst. 30 (8), 3113–3124 (2022)


Author information

Authors and Affiliations

College of Engineering, QuFu Normal University, Rizhao, 276800, China

Cheng Tan, Binlian Zhu, Jianying Di & Yuhuan Fei


Corresponding author

Correspondence to Cheng Tan.

Ethics declarations

Conflict of interest.

The authors declare no conflict of interest.

Appendices

First, define

Starting from (16) and (17) together with (9) and (11), we can utilize the Schur complement to obtain the following relation
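For readability, the Schur complement step here (and in the later appendices) presumably uses the standard lemma: for a symmetric block matrix with generic blocks \(\mathcal {S}_{11}\), \(\mathcal {S}_{12}\), \(\mathcal {S}_{22}\) (these names are not the paper's notation),

\[
\begin{bmatrix} \mathcal {S}_{11} & \mathcal {S}_{12} \\ \mathcal {S}_{12}^{T} & \mathcal {S}_{22} \end{bmatrix}<0
\quad \Longleftrightarrow \quad
\mathcal {S}_{22}<0 \;\;\text{and}\;\; \mathcal {S}_{11}-\mathcal {S}_{12}\mathcal {S}_{22}^{-1}\mathcal {S}_{12}^{T}<0 .
\]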

Then, we adopt the following novel Lyapunov–Krasovskii functional with

where \(\forall y_k=i\in {\mathbb{M}}\) and \(\forall d_k=m\in \mathbb {N}\)
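Since the displayed functional is not reproduced above, the following is only a representative four-term construction for a delayed Markov jump system of this type, not necessarily the authors' exact choice:

\[
V_k=\sum _{q=1}^{4}V_q(x_k),\qquad
V_1=x_k^{T}P(y_k)x_k,\quad
V_2=\sum _{s=k-d_k}^{k-1}x_s^{T}Q\,x_s,\quad
V_3=\sum _{\theta =-\bar{d}+1}^{0}\;\sum _{s=k+\theta -1}^{k-1}x_s^{T}Q\,x_s,\quad
V_4=\bar{d}\sum _{\theta =-\bar{d}}^{-1}\;\sum _{s=k+\theta }^{k-1}\eta _s^{T}R\,\eta _s,\qquad \eta _s=x_{s+1}-x_s .
\]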

Given \(y_k=i,y_{k+1}=j,d_k=m\) and \(d_{k+1}=n\), we denote by \(\textbf{E}[\Delta V_k]\) the expectation of the difference of each term in \(V_{q}(x_{k})\) for \(q=1,2,\ldots ,4\).

To be specific, define

For any \(\bar{\mathbb {G}}\) associated with the system (11), let \(\bar{\mathbb {G}}=\begin{bmatrix}G_1&G_2&G_3&0\end{bmatrix}\); then we can derive

when \(v_{k}=0\). Then, we obtain

Combining (6), we obtain

In fact, we have

Taking (46)–(48) into account, we obtain that

By Jensen’s inequality, one has that
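In its standard discrete form (with \(R>0\) and \(\eta _s\) denoting a generic summand, not necessarily the notation used above), the Jensen inequality invoked here reads

\[
-\bar{d}\sum _{s=k-\bar{d}}^{k-1}\eta _s^{T}R\,\eta _s\;\le \;-\Bigl(\sum _{s=k-\bar{d}}^{k-1}\eta _s\Bigr)^{T}R\,\Bigl(\sum _{s=k-\bar{d}}^{k-1}\eta _s\Bigr).
\]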

Substituting (52) into (51) and combining (45), (49), and (50), we can infer that

According to \(\Theta _{im}<0\), we obtain

where \(\beta =\inf \{\lambda _{\min }(-\Theta _{im}),\, i\in {\mathbb{M}},\, m\in \mathbb {N}\}\). Then, for each \(T\ge 1\), one has

which indicates that
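The omitted displays at this point presumably take the standard form: summing \(\textbf{E}[\Delta V_k]\le -\beta \,\textbf{E}\bigl[\Vert x_k\Vert ^2\bigr]\) over \(k=0,1,\ldots ,T\) and using \(V_{T+1}(x_{T+1})\ge 0\) gives

\[
\sum _{k=0}^{T}\textbf{E}\bigl[\Vert x_k\Vert ^{2}\bigr]\;\le \;\frac{1}{\beta }\Bigl(\textbf{E}[V_{0}(x_{0})]-\textbf{E}[V_{T+1}(x_{T+1})]\Bigr)\;\le \;\frac{1}{\beta }\,\textbf{E}[V_{0}(x_{0})]<\infty ,
\]

so the sum remains bounded as \(T\rightarrow \infty\).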

By Definition 1, this implies that system (11) is stochastically stable.

Furthermore, one obtains that

Based on the Schur complement, \(\Gamma _{im}<0\) is derived from (16) and

which yields that

Then, in accordance with (54)–(56), we get

Thus, as \(k\rightarrow \infty\), it follows that \(V_{k}(x_{k})\rightarrow 0\). Similarly, one has that

Therefore, (18) is obtained straightforwardly from (59). \(\square\)

Define \(\mathcal {A}=\text {diag}\{I,X,\omega _1X,\omega _2X,X\}\), \(G_1^{-1}=X\), \(G_2^{-1}=\omega _1X\), and \(G_3^{-1}=\omega _2X\), where the tuning parameters \(\omega _1>0\) and \(\omega _2>0\) are known a priori. We also introduce the following notation

By pre- and post-multiplying (16) by \(\mathcal {A}^T\) and \(\mathcal {A}\), we have
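Since \(\mathcal {A}\) is nonsingular, this is a congruence transformation, which preserves negative definiteness (a standard fact, recalled here with \(\Xi\) as a generic placeholder for the matrix in (16)):

\[
\Xi <0\quad \Longleftrightarrow \quad \mathcal {A}^{T}\Xi \,\mathcal {A}<0 .
\]

Together with the substitutions \(X=G_1^{-1}\), \(\omega _1X=G_2^{-1}\), and \(\omega _2X=G_3^{-1}\), this rewrites the condition in terms of the new variables.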

By applying (3) and (5), the above equation becomes

Under \(\vert \Delta \pi _{mm}\vert \le {\bar{\varepsilon }}_1\), it follows from (7) that

Similarly, it follows that

Using the Schur complement and Lemma 1, together with (61)–(64) and (17), we can easily derive (28) and (29), respectively, from (16). \(\square\)

We adopt the same Lyapunov–Krasovskii functional for system (11) as in Appendix 1. From (35) and (36), we can get

For any nonzero \(v_{k}\in L_2[0,\infty )\), define \(\zeta _{k}=\begin{bmatrix}\xi _{k}^T&v_{k}^T\end{bmatrix}\). Then,

For \(x_0=0\), \(k=-{\bar{d}},\ldots ,-1\), it follows that \(V_{0}(x_0)=V(\psi _0,y_0,d_0)\) and

Accordingly, this implies that

Given that (35) and (36) guarantee \(\Phi _{im}<0\), we conclude that

For each \(v_{k}\in L_2[0,\infty )\) and \(z_{k}\in L_2[0,\infty )\), this yields

According to Definition 2, the system (11) achieves the prescribed \(H_\infty\) performance level \(\gamma >0\) and is stochastically stable. \(\square\)
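Assuming Definition 2 follows the usual convention, the \(H_\infty\) performance established above amounts to, under zero initial conditions,

\[
\sum _{k=0}^{\infty }\textbf{E}\bigl[z_k^{T}z_k\bigr]\;<\;\gamma ^{2}\sum _{k=0}^{\infty }v_k^{T}v_k \qquad \text{for all nonzero } v_k\in L_2[0,\infty ).
\]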

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Tan, C., Zhu, B., Di, J. et al. Robust Fuzzy Model-Based \(H_2/H_\infty\) Control for Markovian Jump Systems with Random Delays and Uncertain Transition Probabilities. Int. J. Fuzzy Syst. (2024). https://doi.org/10.1007/s40815-024-01680-9


Received: 25 April 2023

Revised: 31 December 2023

Accepted: 11 January 2024

Published: 10 April 2024

DOI: https://doi.org/10.1007/s40815-024-01680-9


Keywords

  • Mixed \(H_2/H_\infty\) control
  • Markovian jump systems
  • Takagi–Sugeno fuzzy model



COMMENTS

  1. h-index

    The h-index is an author-level metric that measures both the productivity and citation impact of the publications, initially used for an individual scientist or scholar. The h-index correlates with success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities. The index is based on the set of the scientist's most cited ...


  3. What is a good h-index? [with examples]

    Now let's talk numbers: what h-index is considered good? According to Hirsch, a person with 20 years of research experience with an h-index of 20 is considered good, 40 is great, and 60 is remarkable. But let's go into more detail and have a look at what a good h-index means in terms of your field of research and stage of career.

  4. Web of Science: h-index information

    Find benchmark h-indices: Because the h-index can be determined for any population of articles, it is difficult to provide overall benchmarks for the value of the h-index. Very productive researchers in subject areas with high volumes of publication and citation can show h-index values over 100 at the peak of their scientific careers.

  5. The ultimate how-to-guide on the h-index

    Step 1: List all your published articles in a table. Step 2: For each article, gather the number of times it has been cited. Step 3: Rank the papers by the number of times they have been cited. Step 4: The h-index can now be inferred by finding the entry at which the rank in the list is greater than the number of citations. Here is an ... (A short code sketch at the end of this list walks through these steps.)

  6. Library Guides: Calculate your h-index: Using the h-index

    h-index = the number of publications (h) each with a citation count greater than or equal to h. For example, 15 publications cited 15 times or more gives an h-index of 15. Read more about the h-index, first proposed by J.E. Hirsch, in "An index to quantify an individual's scientific research output".

  7. Calculate Your Academic Footprint: Your H-Index

    The h-index captures research output based on the total number of publications and the total number of citations to those works, providing a focused snapshot of an individual's research performance. Example: If a researcher has 15 papers, each of which has at least 15 citations, their h-index is 15.

  8. How do I find the h-index for an author?: Home

    Enter the Author's last name/first initial as directed and click search. 4. Refine your search by organizations [e.g., Mayo], research area or other filters. 5. Review your results. 6. On the Author results page, click Create Citation Report to the right of the first citation. 7. From the Citation Report screen, see the h-index in the right ...

  9. H-Index

    The h-index is a measure of publishing impact, where an author's h-index is represented by the number of papers (h) with a citation number ≥ h. For example, a scientist with an h-index of 14 has published numerous papers, 14 of which have been cited at least 14 times.

  10. Do researchers know what the h-index is? And how do they ...

    The h-index is a widely used scientometric indicator on the researcher level working with a simple combination of publication and citation counts. In this article, we pursue two goals, namely the collection of empirical data about researchers' personal estimations of the importance of the h-index for themselves as well as for their academic disciplines, and on the researchers' concrete ...

  11. h-index

    h-index for institutions. Definition: The h-index of an institution is the largest number h such that at least h articles published by researchers at the institution were cited at least h times each. For example, if an institution has an h-index of 200, its researchers have published 200 articles that have each been cited 200 or more times.

  12. Finding an Author's H-Index

    The h-index, created by Jorge E. Hirsch in 2005, is an attempt to measure the research impact of a scholar. In his 2005 article Hirsch put forward "an easily computable index, h, which gives an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions."

  13. Explainer: what is an H-index and how is it calculated?

    What is the H-index and how is it calculated? The H-Index is a numerical indicator of how productive and influential a researcher is. It was invented by Jorge Hirsch in 2005, a physicist at the ...

  14. Measuring your research impact: H-Index

    The Web of Science uses the H-Index to quantify research output by measuring author productivity and impact. H-Index = number of papers ( h) with a citation number ≥ h. Example: a scientist with an H-Index of 37 has 37 papers cited at least 37 times. Advantages of the H-Index: Measures quantity and impact by a single value.

  15. LibGuides: Research metrics: Find Researcher Metrics (H-index)

    1. To find a researcher's h-index with Google Scholar, search for their name. 2. If a user profile comes up* with the correct name, discipline, and institution, click on that. 3. The h-index will be displayed for that author under "citation indices" on the top right-hand side. * If no user profile comes up, you'll need to use another tool, like ...

  16. The h-Index: Understanding its predictors, significance, and criticism

    Introduction. The h-index is a commonly used metric to measure the productivity and impact of academic researchers. It was first introduced in 2005, and since then, the h-index has become an important tool for evaluating researchers, departments, and institutions. The calculation of the h-index is relatively simple, yet it confuses novice authors.

  17. What is a good H-index for each academic position?

    The h-index is a metric designed to quantify the productivity and impact of a researcher and, increasingly, of groups or journals. Developed by physicist Jorge Hirsch, this index is computed as the number of papers (h) with citation counts larger than or equal to h. For instance, if a researcher has four papers cited at ...

  18. h-index

    The h-index is a simple way to measure the impact of your work and other people's research. It does this by looking at the number of highly impactful publications a researcher has published. The higher the number of cited publications, the higher the h-index, regardless of which journal the work was published in.

  19. LibGuides: Bibliometrics and Altmetrics: Find Your H-Index

    The h-Index is a primary author level metric designed to measure research quality over time, and accounts for both the scholarly productivity and the research impact of the author. The h-Index is calculated as follows - H stands for the number of articles that have each been cited H number of times. So, an h-Index of 30 means that the author has published 30 articles that have each been cited ...

  20. Comparison of researchers' impact indices

    1. Introduction. Different bibliometric methods are used for evaluating a scientist's research impact. Hirsch defines the h-index as follows: "an author has an index h if at least h of his/her publications have h citations each". The h-index is widely adopted by the research community and evaluators. The reason for this adoption is that it is easy to compute, and quantity and quality are simultaneously ...

  21. What is an h-index? How do I find the h-index for a particular author

    The h-index is a number intended to represent both the productivity and the impact of a particular scientist or scholar, or a group of scientists or scholars (such as a departmental or research group). The h-index is calculated by counting the number of publications for which an author has been cited by other authors at least that same number ...

  22. The h-Index: An Indicator of Research and Publication Output

    The analysis of research publications using statistical methods is called bibliometry. There are many bibliometric indices to measure the research output of an individual researcher. The h-index and impact factor (IF) are the most famous and widely used bibliometric indices. Jorge Eduardo Hirsch, a Professor of Physics at the University of California, introduced the h-index (Hirsch index) in ...

  23. The h-Index: A Helpful Guide for Scientists

    The h-index is a measure of research performance and is calculated as the highest number of manuscripts from an author (h) that all have at least the same number (h) of citations. The h-index is known to penalize early career researchers and does not take into account the number of authors on a paper. Alternative indexes have been created ...


  26. Robust Fuzzy Model-Based \(H_2/H_\infty\) Control for Markovian Jump Systems with Random Delays and Uncertain Transition Probabilities

    This paper studies the mixed \(H_2/H_\infty\) control for Takagi-Sugeno (T-S) fuzzy Markovian jump systems (MJSs) subject to random delays and multiple uncertain transition probabilities. In contrast to existing research, this study presents uncertainty parameters, external disturbance, random delays, and uncertain transition probabilities simultaneously in a unified T-S fuzzy model.

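Several of the guides above (e.g., items 5, 6, and 9) describe the same computation: list the papers, rank them by citation count, and take the largest rank that does not exceed its citation count. A minimal sketch of that procedure in Python, using made-up citation counts purely for illustration:

    def h_index(citations):
        """Return the h-index for a list of per-paper citation counts."""
        # Rank papers from most to least cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        # The h-index is the largest rank h such that the h-th ranked
        # paper still has at least h citations.
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical researcher with 10 papers; exactly 6 of them have
    # at least 6 citations, so the h-index is 6.
    print(h_index([25, 18, 12, 9, 7, 6, 4, 3, 1, 0]))  # prints 6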