Quantity Surveying: Recently Published Documents


Sustainable construction and the versatility of the quantity surveying profession in Singapore

Purpose

The changing role of quantity surveyors in the new paradigm of sustainable construction requires studies into new competencies and skills for the profession. The impact of sustainable construction on quantity surveying services, the profession’s engagement, and how firms manage the associated challenges provides an indication of how successfully the quantity surveying profession is meeting sustainable construction needs.

Design/methodology/approach

A five-point Likert scale questionnaire was administered to quantity surveying firms in Singapore. Of the 60 firms contacted, 51 responded, an 85% response rate. Descriptive statistics and factor analysis were employed to evaluate the findings.

Findings

The factor analysis categorised the drivers derived from the literature into: awareness of sustainable construction; adversarial role in green costing; carbon cost planning; valuing a sustainable property; common knowledge of sustainable construction; and lack of experience in sustainable construction.

Social implications

The research findings support an increased emphasis on sustainable construction skills in quantity surveying education, research and training.

Originality/value

Addressing the dearth of quantity surveyors with sustainable construction experience must be a focus for quantity surveying professional bodies and higher education. The quantity surveying profession needs reskilling in green costing and carbon cost planning to meet the needs of sustainable construction.

Quantity Surveying Practice

Quantity Surveying Profession and Its Prospects in Nigeria

The study assessed the prospects of the Quantity Surveying profession in Nigeria. It identified and evaluated the level of performance of the functions performed by quantity surveyors in the Nigerian construction industry. The study reveals a high level of performance of the basic functions of quantity surveyors, which include feasibility and viability studies, contract documentation, life cycle costing, and preliminary cost advice. The study also examined the factors militating against the effective performance of quantity surveyors’ functions in the Nigerian construction industry, presenting these factors and some anticipated enhancement measures to respondents for evaluation using structured questionnaires. The data collected were analyzed with SPSS version 23 using frequencies and mean item scores. The study revealed major factors militating against the effective performance of the profession, such as widespread corruption in Nigeria (mean score 4.53), an obsolete curriculum and inadequacy in modern equipment (4.41), professional rivalry from kindred professions (4.35), the level of adoption of UT (4.32), and inadequacies in academic and professional training (4.18), among others. The study equally revealed important measures requiring implementation to enhance the Quantity Surveying profession in Nigeria: a clear delineation of professional functions in the construction industry to curb professional rivalry (4.35), reviewing the curriculum of tertiary institutions (4.24), improving professional skills through continuing professional development (4.15), improving technological applications in the execution of Quantity Surveying functions (3.91), and professional certification in specialized areas (3.85).

Transdisciplinary service-learning for construction management and quantity surveying students

The transformation of higher education in South Africa has seen higher education institutions become more responsive to community matters by providing institutional support for service-learning projects. Despite service-learning being practised in many departments at the Cape Peninsula University of Technology (CPUT), there is a significant difference in the way service-learning is perceived by academics and the way in which it should be supported within the curriculum. This article reflects on a collaborative transdisciplinary service-learning project at CPUT that included the Department of Construction Management and Quantity Surveying and the Department of Urban and Regional Planning. The aim of the transdisciplinary service-learning project was for students to participate in an asset-mapping exercise in a rural communal settlement in the Bergrivier municipality in the Western Cape province of South Africa. In so doing students from the two departments were gradually inducted into the community. Once inducted, students were able to identify the community’s most urgent needs. During community engagement students from each department were paired together. This allowed transdisciplinary learning to happen with the exploration of ideas from the perspectives of both engineering and urban planning students. Students were able to construct meaning beyond their discipline. Cooperation and synergy between the departments allowed mutual, interchangeable, cooperative interaction with community members. Outcomes for the transdisciplinary service-learning project and the required commitment from students are discussed.

Admission Points Score to Predict Undergraduate Performance - Comparing Quantity Surveying vs. Real Estate

Quantity Surveying and BIM 5D: Its Implementation and Analysis Based on a Case Study Approach in Spain

Factors Influencing the Adoption of Building Information Modelling (BIM) in the South African Construction and Built Environment (CBE) from a Quantity Surveying Perspective

Abstract: The construction industry has often been described as stagnant and out-of-date due to the lack of innovation and innovative work methods to improve the industry (WEF, 2016; Ostravik, 2015). The adoption of Building Information Modelling (BIM) within the construction industry has been relatively slow (Cao et al., 2017), particularly in the South African Construction and Built Environment (CBE) (Allen, Smallwood & Emuze, 2012). The purpose of this study was to determine the critical factors influencing the adoption of BIM in the South African CBE, specifically from a quantity surveyor’s perspective, including the practical implications. The study used a qualitative research approach grounded in a theoretical framework. A survey questionnaire was applied to correlate the interpretation of the theory with the data collected (Naoum, 2007). The study was limited to professionals within the South African CBE. The study highlighted that the slow adoption of BIM within the South African CBE was mainly due to a lack of incentives and a subsequent lack of investment in BIM adoption. The study concluded that the South African CBE operated mainly in silos without centralised coordination, and that BIM adoption was occurring only organically. Project teams were mostly project oriented, seeking immediate solutions, and adopted the technologies most appropriate for the team’s composition. The study implies that the South African CBE, particularly the Quantity Surveying profession, still depends heavily on other role-players to produce information-rich 3D models. Without a centralised effort, South African Quantity Surveying professionals will continue to adopt BIM in line with the demand-risk ratio as BIM maturity is realised in the South African CBE.

The impact of environmental turbulence on the strategic decision-making process in Irish quantity surveying (QS) professional service firms (PSFs)

External Marketing Relationship Practice of Quantity Surveying Firms in Selected States in Nigeria

It has been established that marketing is very significant to the success of any organization, especially in a competitive environment. In the quantity surveying profession, marketing may be even more relevant than in other professions because the profession is less well known. The significance of marketing in a competitive business environment calls for effective marketing practice by Quantity Surveying Firms (QSFs). One effective approach is to build a strong external marketing relationship, which exists between a firm and its clients. Therefore, this paper investigated the external marketing relationship practice of QSFs with a view to enhancing firms’ productivity and client satisfaction. Forty-six (46) registered QSFs and fifty-nine (59) corporate clients in Lagos, Oyo, and Ondo States were assessed through a questionnaire survey. Data were collected on the attributes of the parties involved in external marketing and analysed using Mean Item Scores (MIS) and Analysis of Variance (ANOVA). The results reveal important attributes of clients to include “pay on time (MS=4.59)”, “willingness and readiness to take advice from the firm (MS=4.59)”, and “make expectations known clearly to the firm (MS=4.54)”. The findings show that clients displayed these attributes to an average degree. The result of the ANOVA shows that firms viewed the importance of these client attributes in the same way at p>0.05, except for one attribute (making expectations known clearly to the firm), whose importance firms viewed differently at p<0.05. Furthermore, results show the important attributes of firms to include “ability to give clients value for their money (MS=4.51)”, “knowing clients’ requirements (MS=4.51)”, and “being attentive (MS=4.47)”. Findings show that these attributes were adequately displayed by QSFs. Clients’ perceptions of the importance of these firm attributes were the same at p>0.05. The study concluded by establishing the attributes of a strong external marketing relationship: “readiness of a client to take advice from the firm”, “ability of a client to pay on time”, “ability of a firm to satisfy the client”, and “knowing the client’s requirements”. The study recommended that QSFs and clients endeavour to possess and display these attributes to enhance service delivery in terms of firms’ productivity and client satisfaction.

Keywords: Attributes, Clients, External Marketing Relationship, Quantity Surveying Firms


Integrating quality into quantity: survey research in the era of mixed methods

  • Published: 18 April 2015
  • Volume 50, pages 1213–1231 (2016)


Sergio Mauceri


As widely recognized during the golden age of survey research thanks to the work of the Columbia school, the use of mixed strategies allows survey research to overcome its limitations by incorporating the advantages of qualitative approaches rather than seeking alternative methods. The need to re-think survey research before embarking on this course impelled the author to undertake a critical analysis of one of the survey’s most important assumptions, proposing a shift from standardization of stimulus to standardization of meanings in order to anchor the requirement of answer comparability on a more solid basis. This proposal for rapprochement with qualitative research is followed by a more detailed section in which the author distinguishes four different types of mixed survey strategies, combining two criteria (time order and function of qualitative procedures). The most significant parts of the constructed typology are then brought together in a model called the multilevel integrated survey approach. This methodological model is concretely illustrated in an empirical study of homophobic prejudice among teenagers. The example shows how in research practice analytical mixed strategies can be creatively combined in the same survey research design, contributing to improvements in data quality and the relevance of research findings.


Among others, in alphabetical order: Paul Beatty, Norman M. Bradburn, Charles L. Briggs, Frederick Conrad, Hanneke Houtkoop-Steenstra, Brigitte Jordan, Douglas Maynard, Hugh Mehan, Elliot G. Mishler, Nora Cate Schaeffer, Michael F. Schober, Norbert Schwarz, Howard Schuman, Lucy Suchman, Seymour Sudman, Judith M. Tanur.

Strictly speaking, then, the term ‘methods’ (even in the expression ‘Mixed methods’) should be discarded because it conveys the idea that qualitative and quantitative methods are independent and in some ways mutually exclusive. As the pragmatist philosopher John Dewey pointed out (1938), the logic of social-scientific research (the method) is unique and always follows the same criteria of scientific validation and the same general procedural steps. For this reason, it would be preferable to speak of ‘mixed research’ (Onwuegbuzie 2007), ‘mixed methodology’ (Tashakkori and Teddlie 2003), or, as in this article, mixed strategies.

It is important not to confuse deviant cases with deviant findings, introduced in the integrative in-depth survey strategy. Deviant cases are residual exceptions to confirmed hypotheses and empirical regularities, while deviant findings are empirical regularities that contradict the researcher’s theoretical expectations and thus concern a preponderant number of cases.

Barton, A.H.: Bringing society back in: survey research and macro-methodology. Am. Behav. Sci. 12 (2), 1–9 (1968)


Barton, A.H.: Paul Lazarsfeld and applied social research. Soc. Sci. Hist. 3 (3–4), 4–44 (1979)

Blaikie, N.W.H.: A critique of the use of triangulation in social research. Qual. Quant. 25 , 115–136 (1991)

Bourdieu, P.: Réponses. Pour une anthropologie réflexive. Editions de Seuil, Paris (1992)


Campbell, D.T., Fiske, D.W.: Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol. Bull. 56 (2), 81–105 (1959)

Campelli, E.: Da un luogo comune. Introduzione alla metodologia delle scienze sociali (Nuova edizione). Carocci, Roma (2009)

Capecchi, V.: Il contributo di Lazarsfeld alla metodologia sociologica. In: Campelli, E., Fasanella, A., Lombardo, C., Paul Felix Lazarsfeld: un “classico” marginale, Milano: Angeli (Sociologia e ricerca sociale, XX, 58/59, pp. 35–82) (1999)

Creswell, J.W.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage, London (2008)

Denzin, N.K.: The Research Act. A Theoretical Introduction to Sociological Methods, 2nd edn. McGraw-Hill, New York (1978)

Dewey, J.: Logic, the Theory of Inquiry. Henry Holt and Co, New York (1938)

Fielding, N.G., Schreier, M.: Introduction: on the compatibility between qualitative and quantitative research methods. Forum Qual. Sozialforschung/Forum: Qual. Soc. Res. 2 (1), 2204 (2001)

Fowler Jr, F.J., Mangione, T.W.: Standardized Survey Interviewing. Minimizing Interviewer-Related Error. Sage, London (1990)

Galtung, J.: Theory and Methods of Social Research. Universitetsforlaget, Oslo (1967)

Gobo, G., Mauceri, S.: Constructing Survey Data. An Interactional Approach. Sage, London (2014)

Goode, W., Hatt, P.K.: Methods in Social Research. McGraw Hill, New York (1952)

Hammersley, M.: Some notes on the terms ‘validity’ and ‘reliability’. Br. Educ. Res. J. 13 (1), 73–81 (1987)

Hughes, J.A.: The Philosophy of Social Research. Longman, New York (1980)

Hyman, H.H., Cobb, W.J., Feldman, J.J., Hart, C.W., Stember, C.H.: Interviewing in Social Research. University of Chicago Press, Chicago (1954)

Jahoda, M., Lazarsfeld, P.F., Zeisel, H.: Die Arbeitslosen von Marienthal. Hirzel, Leipzig (1933)

Jahoda, M., Lazarsfeld, P.F., Zeisel, H.: Marienthal. Sociography of an Unemployed Community. Aldine, Hawthorne (1971)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Lazarsfeld, P.F.: The art of asking why. Three principles underlying the formulation of questionnaires. Natl. Mark. Rev. 1 (1), 32–43 (1935)

Lazarsfeld, P.F.: The controversy over detailed interviews. An offer for negotiation. Public Opin. Q. 1 , 38–60 (1944)

Lazarsfeld, P.F., Barton, A.: Some functions of qualitative analysis in social research. Frankf. Bertrage zur Sociol. 1 , 321–361 (1955)

Lazarsfeld, P.F., Berelson, B., Gaudet, H.: The People’s Choice. How the Voter Makes Up his Mind in a Presidential Campaign. Columbia University Press, New York (1944)

Lazarsfeld, P.F., Menzel, H.: On the relation between individual and collective properties. In: Etzioni, A. (ed.) A Sociological Reader on Complex Organizations, pp. 499–516. Holt, Rinehart & Winston, New York (1961)

Lazarsfeld, P.F., Rosenberg, M. (eds.): The Language of Social Research. A Reader in Methodology of Social Research. Free Press, New York (1955)

Leech, N.L., Onwuegbuzie, A.J.: A typology of mixed methods research designs. Qual. Quant. 43 , 265–275 (2009)

Maynard, D.W., Houtkoop-Steenstra, H., Schaeffer, N.C., van der Zouwen, J. (eds.): Standardization and Tacit Knowledge. Interaction and Practice in the Survey Interview. Wiley, New York (2002)

Mauceri, S.: Per la qualità del dato nella ricerca sociale. Strategie di progettazione e conduzione dell’intervista con questionario. Franco Angeli, Milano (2003)

Mauceri, S.: Ri-scoprire l’analisi dei casi devianti. Una strategia metodologica di supporto dei processi teorico-interpretativi nella ricerca sociale di tipo standard. Sociol. e Ricerca Soc. XXVIII (87), 109–157 (2008)

Mauceri, S.: Per una survey integrata e multilivello. Le lezioni dimenticate della Columbia School. Sociol. e Ricerca Soc. XXXIII (99), 22–65 (2012)

Mauceri, S.: Mixed strategies for improving data quality: the contribution of qualitative procedures to survey research. Qual. Quant. 48 (5), 2773–2790 (2014a)

Mauceri, S.: Discontent in call centres: a national multilevel and integrated survey on quality of working life among call handlers. SAGE Res. Methods Cases (2014b). doi: 10.4135/978144627305013509181

Mauceri, S.: Teenage homophobia: a multilevel and integrated survey approach to the social construction of prejudice in high school. SAGE Res. Methods Cases (2014c). doi: 10.4135/978144627305013503433

Mauceri, S.: Omofobia come costruzione sociale. Processi generativi del pregiudizio in età adolescenziale. FrancoAngeli, Milano (2015)

Merton, R.K.: Social Theory and Social Structure. The Free Press, Glencoe (1949)

Merton, R.K., Kendall, P.L.: The focused interview. Am. J. Sociol. 51 , 541–557 (1946)

Morgan, D.L.: Practical strategies for combining qualitative and quantitative methods: applications to health research. Qual. Health Res. 8 , 362–376 (1998)

Morgan, D.L.: Combining qualitative and quantitative methods paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 , 48–76 (2007)

Newman, I., Ridenour, C.S., Newman, C., DeMarco, G.M.P., Jr.: A typology of research purposes and its relationship to mixed methods. In: Tashakkori, A., Teddlie, C. (eds.) Handbook of Mixed Methods in Social and Behavioral Research, pp. 167–188. Sage, Thousand Oaks, CA (2003)

Onwuegbuzie, A.J.: Mixed methods research in sociology and beyond. In: Ritzer, G. (ed.) Encyclopedia of Sociology, vol. VI, pp. 2978–2981. Blackwell, Oxford (2007)

Pawson, R.: A Measure for Measures: A Manifesto for Empirical Sociology. Routledge, London (1989)


Sieber, S.D.: The integration of fieldwork and survey methods. Am. J. Sociol. 6 , 1335–1359 (1973)

Suchman, L., Jordan, B.: Interactional troubles in face-to-face survey interviews. J. Am. Stat. Assoc. 85 (409), 232–254 (1990)

Teddlie, C., Tashakkori, A.: Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In: Tashakkori, A., Teddlie, C. (eds.) Handbook of Mixed Methods in Social and Behavioral Research, pp. 3–50. Sage, Thousand Oaks, CA (2003)

Tashakkori, A., Teddlie, C. (eds.): Handbook of Mixed Methods in Social and Behavioral Research. Sage, Thousand Oaks, CA (2003)

Trow, M.: Comment on ‘participant observation and interviewing: a comparison’. Hum. Organ. 16 (Fall), 33–35 (1957)

Webb, E.J., Campbell, D.T., Schwartz, R.D., Sechrest, L.: Unobtrusive Measures: Nonreactive Research in the Social Sciences. Rand McNally, Chicago (1966)

Zelditch Jr, M.: Some methodological problems of field studies. Am. J. Sociol. 67 (March), 566–576 (1962)


Conflict of interest

The author declares that he has no conflict of interest.

Author information

Authors and Affiliations

Department of Communication and Social Research, Sapienza University of Rome, C.so d’Italia 38/a, 00198, Rome, Italy

Sergio Mauceri


Corresponding author

Correspondence to Sergio Mauceri.


About this article

Mauceri, S. Integrating quality into quantity: survey research in the era of mixed methods. Qual Quant 50, 1213–1231 (2016). https://doi.org/10.1007/s11135-015-0199-8


Published: 18 April 2015

Issue Date: May 2016

DOI: https://doi.org/10.1007/s11135-015-0199-8

Keywords

  • Mixed methodology
  • Survey research
  • Multilevel approach
  • Columbia school

A framework for assessing quantity surveyors’ competence

Benchmarking: An International Journal

ISSN: 1463-5771

Article publication date: 1 October 2018

Purpose

The purpose of this paper is to develop a conceptual framework for assessing quantity surveyors’ competence level.

Design/methodology/approach

A Delphi survey research approach was adopted for the study. This involved surveying a panel of experts, constituted from registered quantity surveyors in Nigeria, and obtaining from them a consensus opinion on the issues relating to the assessment of quantity surveyors’ competence. In total, 27 of the 38 shortlisted panel members provided valid responses across the two rounds of the Delphi survey. A conceptual framework linking educational training, professional capability and professional development was developed.

Findings

The findings establish the relative weightings of the three identified competence criteria: on a 0–100 percent scale, educational training scored 34.04 percent, professional capability 45.22 percent and professional development 20.74 percent.

Originality/value

The proposed framework provides a conceptual approach to assessing a quantity surveyor’s overall competence. Specifically, it demonstrates the significance of the three identified competence criteria groupings in the training, practice and development of the quantity surveying profession. It could therefore serve as a foundation for how quantity surveyors are trained, developed and evaluated.

Keywords

  • Quantity surveyor

Dada, J.O. and Jagboro, G.O. (2018), "A framework for assessing quantity surveyors’ competence", Benchmarking: An International Journal , Vol. 25 No. 7, pp. 2390-2403. https://doi.org/10.1108/BIJ-05-2017-0121

Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited



What Are Quantitative Survey Questions? Types and Examples


Table of contents: 

  • Types of quantitative survey questions - with examples
  • Quantitative question formats
  • How to write quantitative survey questions
  • Examples of quantitative survey questions
  • Leveraging quantilope for your quantitative survey

In a quantitative research study, brands gather numeric data for most of their questions through formats like numerical scale questions or ranking questions. However, brands can also include some non-quantitative questions throughout their quantitative study - like open-ended questions, where respondents type in their own feedback to a question prompt. Even so, open-ended answers can be numerically coded to sift through feedback easily (e.g. anyone who writes in 'Pepsi' in a soda study would be assigned the number '1', making it easy to look at Pepsi feedback as a whole).

One of the biggest benefits of using a quantitative research approach is that insights around a research topic can undergo statistical analysis; the same can’t be said for qualitative data like focus group feedback or interviews. Another major difference between quantitative and qualitative research methods is that quantitative surveys require respondents to choose from a limited number of choices in a close-ended question - generating clear, actionable takeaways. However, these distinct quantitative takeaways often pair well with freeform qualitative responses - making quant and qual a great team to use together.

The rest of this article focuses on quantitative research, taking a closer look at quantitative survey question types and question formats/layouts.
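To make the open-end coding idea concrete, here is a minimal Python sketch of how verbatim answers might be mapped to numeric codes; the brand keywords, code values, and helper function are hypothetical, not part of any particular survey platform:

```python
# Map open-ended brand mentions to numeric codes so they can be
# tabulated alongside closed-ended survey data.
open_ends = ["Pepsi", "coke", "Pepsi Max", "Dr Pepper", "pepsi"]

# Hypothetical coding frame: keyword -> numeric code.
coding_frame = {"pepsi": 1, "coke": 2, "dr pepper": 3}

def code_response(text, frame, other=99):
    """Return the first matching code, or an 'other' code."""
    lowered = text.lower()
    for keyword, code in frame.items():
        if keyword in lowered:
            return code
    return other

codes = [code_response(answer, coding_frame) for answer in open_ends]
print(codes)  # [1, 2, 1, 3, 1]
```

Once coded this way, the open-ended column can be filtered and cross-tabulated like any other closed-ended variable.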


Types of quantitative survey questions - with examples

Quantitative questions come in many forms, each with different benefits depending on your market research objectives. Below we’ll explore some of these quantitative survey question types, which are commonly used together in a single survey to keep things interesting for respondents. The style of questioning used during quantitative data collection is important, as a good mix of the right types of questions will deliver rich data, limit respondent fatigue, and optimize the response rate. Questionnaires should be enjoyable - and varying the types of quantitative research questions used throughout your survey will help achieve that.

Descriptive survey questions

Descriptive research questions (also known as usage and attitude, or U&A, questions) seek a general indication or prediction about how a group of people behaves or will behave, how that group is characterized, or how a group thinks.

For example, a business might want to know what portion of adult men shave, and how often they do so. To find this out, they will survey men (the target audience) and ask descriptive questions about their frequency of shaving (e.g. daily, a few times a week, once per week, and so on). Each of these frequencies gets assigned a numerical ‘code’ so that it’s simple to chart and analyze the data later on; daily might be assigned ‘5’, a few times a week might be assigned ‘4’, and so on. That way, brands can create charts using the ‘top two’ and ‘bottom two’ values in a descriptive question to view these metrics side by side.
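As a sketch of that top-two/bottom-two logic (the codes and responses below are hypothetical), the box scores fall out of simple counting:

```python
# Coded shaving-frequency responses: 5 = daily, 4 = a few times a week,
# 3 = once per week, 2 = less often, 1 = never (hypothetical coding).
responses = [5, 4, 4, 2, 5, 3, 1, 4, 5, 2]

top_two = sum(1 for r in responses if r >= 4) / len(responses)
bottom_two = sum(1 for r in responses if r <= 2) / len(responses)

print(f"Top-two box: {top_two:.0%}")        # 60% shave at least a few times a week
print(f"Bottom-two box: {bottom_two:.0%}")  # 30% rarely or never shave
```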

Another business might want to know how important local transit issues are to residents, so quantitative survey questions will allow respondents to indicate the degrees of opinion attached to various transit issues. Perhaps the transit business running this survey would use a sliding numeric scale to see how important a particular issue is.

Comparative survey questions

Comparative research questions are concerned with comparing individuals or groups of people based on one or more variables. These questions might be posed when a business wants to find out which segment of its target audience might be more profitable, or which types of products might appeal to different sets of consumers.

For example, a business might want to know how the popularity of its chocolate bars is spread out across its entire customer base (i.e. do women prefer a certain flavor? Are children drawn to candy bars by certain packaging attributes? etc.). Questions in this case will be designed to profile and ‘compare’ segments of the market.

Other businesses might be looking to compare coffee consumption among older and younger consumers (i.e. demographic segments), the difference in smartphone usage between younger men and women, or how women from different regions differ in their approach to skincare.

Relationship-based survey questions

As the name suggests, relationship-based survey questions are concerned with the relationship between two or more variables within one or more demographic groups. This might be a causal link between one thing and the other - for example, the consumption of caffeine and respondents’ reported energy levels throughout the day. In this case, a coffee or energy drink brand might be interested in how energy levels differ between those who drink their caffeinated line of beverages and those who drink decaf/non-caffeinated beverages.

Alternatively, it might be a case of two or more factors co-existing, without there necessarily being a causal link - for example, a particular type of air freshener being more popular amongst a certain demographic (maybe one that is controlled wirelessly via Bluetooth is more popular among younger homeowners than one that’s plugged into the wall with no controls). Knowing that millennials favor air fresheners which have options for swapping out scents and setting up schedules would be valuable information for new product development.

Advanced method survey questions

Aside from descriptive, comparative, and relationship-based survey questions, brands can opt to include advanced methodologies in their quantitative questionnaire for richer depth. Though advanced methods are more complex in terms of the insights output, quantilope’s Consumer Intelligence Platform automates the setup and analysis of these methods so that researchers of any background or skillset can leverage them with ease.

With quantilope’s pre-programmed suite of 12 advanced methodologies, including MaxDiff, TURF, Implicit, and more, users can drag and drop any of these into a questionnaire and customize for their own market research objectives.

For example, consider a beverage company that’s looking to expand its flavor profiles. This brand would benefit from a MaxDiff, which forces respondents to make tradeoff decisions between a set of flavors. A respondent might say that coconut is their most-preferred flavor, and lime their least (when in a consideration set with strawberry), yet later on in the MaxDiff that same respondent may say strawberry is their most-preferred flavor (over black cherry and kiwi). While this is just one example of an advanced method, instantly you can see how much richer and more actionable these quantitative metrics become compared to a standard usage and attitude question.

Advanced methods can be used alongside descriptive, comparison, or relationship questions to add a new layer of context wherever a business sees fit.
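To illustrate the kind of tradeoff data a MaxDiff exercise yields, here is a minimal count-based scoring sketch. This is only an illustration with hypothetical flavor data, not quantilope's actual MaxDiff estimation, which relies on more sophisticated modeling:

```python
from collections import Counter

# Each MaxDiff task: (flavors shown, flavor picked best, flavor picked worst).
tasks = [
    (("coconut", "lime", "strawberry"), "coconut", "lime"),
    (("strawberry", "black cherry", "kiwi"), "strawberry", "kiwi"),
    (("coconut", "black cherry", "lime"), "coconut", "lime"),
]

shown, best, worst = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)   # how often each flavor appeared
    best[b] += 1          # how often it was chosen as most preferred
    worst[w] += 1         # how often it was chosen as least preferred

# Count-based score in [-1, 1]: (times best - times worst) / times shown.
for flavor in shown:
    score = (best[flavor] - worst[flavor]) / shown[flavor]
    print(f"{flavor:12s} {score:+.2f}")
```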

Quantitative question formats  

So we’ve covered the kinds of quantitative research questions you might want to answer using market research, but how do these translate into the actual format of questions that you might include on your questionnaire?

Thinking ahead to your reporting process during your questionnaire setup is actually quite important, as the available chart types differ among the types of questions asked; some question data is compatible with bar chart displays, others pie charts, others trended line graphs, etc. Also consider how well the questions you’re asking will translate onto the different devices that your respondents might be using to complete the survey (mobile, PC, or tablet).

Single select questions

Single select questions are the simplest form of quantitative questioning, as respondents are asked to choose just one answer from a list of items, which tend to be ‘either/or’, ‘yes/no’, or ‘true/false’ questions. These questions are useful when you need to get a clear answer without any qualifying nuances.

[Image: example yes/no single-select question]

Multi-select questions

Multi-select questions (aka multiple choice) offer more flexibility for responses, allowing for a number of responses on a single question. Respondents can be asked to ‘check all that apply’ or a cap can be applied (e.g. ‘select up to 3 choices’).

For example:

[Image: example multi-select question]

Aside from asking text-based questions like the above examples, a brand could also use a single or multi-select question to ask respondents to select the image they prefer more (like different iterations of a logo design, packaging options, branding colors, etc.). 

Likert scale questions

A Likert scale is widely used as a convenient and easy-to-interpret rating method. Respondents find it easy to indicate their degree of feeling by selecting the response they most identify with.

[Image: example Likert scale question]

Slider scales

Slider scales are another good interactive way of formatting questions. They allow respondents to customize their level of feeling about a question, with a bit more variance and nuance allowed than a numeric scale:

[Image: example slider scale question]

One particularly common use of a slider scale in a market research study is the NPS (Net Promoter Score) - a way to measure customer experience and loyalty. A 0-10 scale is used to ask customers how likely they are to recommend a brand’s product or services to others. The NPS score is calculated by subtracting the percentage of ‘detractors’ (those who respond with a 0-6) from the percentage of ‘promoters’ (those who respond with a 9-10). Respondents who select 7-8 are known as ‘passives’.

For example: 

[Image: example NPS question on a 0-10 scale]
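Because the NPS formula is simple arithmetic, the calculation is easy to sketch (the ratings below are hypothetical):

```python
# 0-10 likelihood-to-recommend ratings from ten respondents (hypothetical).
ratings = [10, 9, 7, 6, 8, 10, 3, 9, 5, 10]

promoters = sum(1 for r in ratings if r >= 9)   # 9-10
detractors = sum(1 for r in ratings if r <= 6)  # 0-6; 7-8 are passives

# NPS = % promoters - % detractors; ranges from -100 to +100.
nps = (promoters - detractors) / len(ratings) * 100
print(f"NPS: {nps:+.0f}")  # 5 promoters, 3 detractors -> +20
```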

Drag and drop questions

Drag-and-drop question formats are a more ‘gamified’ approach to survey capture as they ask respondents to do more than simply check boxes or slide a scale. Drag-and-drop question formats are great for ranking exercises - asking respondents to place answer options in a certain order by dragging with their mouse. For example, you could ask survey takers to put pizza toppings in order of preference by dragging options from a list of possible answers to a box displaying their personal preferences:

[Image: example drag-and-drop ranking question]

Matrix questions

Matrix questions are a great way to consolidate a number of questions that ask for the same type of response (e.g. single select yes/no, true/false, or multi-select lists). They are mutually beneficial - making a survey look less daunting for the respondent, and easier for a brand to set up than asking multiple separate questions.

Items in a matrix question are presented one by one, as respondents cycle through the pages selecting one answer for each coffee flavor shown. 

[Image: example matrix question]

While the above example shows a single-matrix question - meaning a respondent can only select one answer per element (in this case, coffee flavors), a matrix setup can also be used for multiple-choice questions - allowing respondents to choose multiple answers per element shown, or for rating questions - allowing respondents to assign a rating (e.g. 1-5) to a list of elements at once.
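Matrix responses are naturally tabular, which also makes them easy to analyze; here is a minimal pandas sketch (the flavor names and ratings are hypothetical) that reshapes wide matrix data into a long format and averages the ratings per element:

```python
import pandas as pd

# One row per respondent, one column per matrix element (1-5 ratings, hypothetical).
wide = pd.DataFrame({
    "respondent": [1, 2, 3],
    "hazelnut": [4, 5, 3],
    "vanilla": [2, 4, 4],
    "mocha": [5, 5, 2],
})

# Reshape to long format: one row per respondent-flavor pair.
long_form = wide.melt(id_vars="respondent", var_name="flavor", value_name="rating")
print(long_form.groupby("flavor")["rating"].mean())  # average rating per flavor
```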

How to write quantitative survey questions

We’ve reviewed the types of questions you might ask in a quantitative survey, and how you might format those questions, but now for the actual crafting of the content.

When considering which questions to include in your survey, you’ll first want to establish what your research goals are and how these relate to your business goals. For example, thinking about the three types of quantitative survey questions explained above - descriptive, comparative, and relationship-based - which type (or which combination) will best meet your research needs? The questions you ask respondents may be phrased in similar ways no matter what kind of layout you leverage, but you should have a good idea of how you’ll want to analyze the results, as that will make it much easier to correctly set up your survey.

Quantitative questions tend to start with words like ‘how much,’ ‘how often,’ ‘to what degree,’ ‘what do you think of,’ ‘which of the following’ - anything that establishes what consumers do or think and that can be assigned a numerical code or value. Be sure to also include ‘other’ or ‘none of the above’ options in your quant questions, accommodating those who don’t feel the pre-set answers reflect their true opinion. As mentioned earlier, you can always include a small number of open-ended questions in your quant survey to account for any ideas or expanded feedback that the pre-coded questions don’t (or can’t) cover.

Examples of quantitative survey questions

Quantitative survey questions impose limits on the answers that respondents can choose from, and this is a good thing when it comes to measuring consumer opinions on a large scale and comparing across respondents. A large volume of freeform, open-ended answers is interesting when looking for themes in qualitative studies, but impractical to wade through when dealing with a large sample size, and impossible to subject to statistical analysis.

For example, a quantitative survey might aim to establish consumers' smartphone habits. This could include their frequency of buying a new smartphone, the considerations that drive purchase, which features they use their phone for, and how much they like their smartphone.

Some examples of quantitative survey questions relating to these habits would be:

Q. How often do you buy a new smartphone?

[single select question]

More than once per year

Every 1-2 years

Every 3-5 years

Every 6+ years

Q. Thinking about when you buy a smartphone, please rank the following factors in order of importance:

[drag and drop ranking question]

screen size

storage capacity

Q. How often do you use the following features on your smartphone?

[matrix question]

Q. How do you feel about your current smartphone?

[sliding scale]

I love it <-------> I hate it
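Before fielding, a questionnaire like this can also be represented as plain data; the structure and field names below are a hypothetical illustration, not quantilope's survey format:

```python
# A hypothetical in-code representation of the smartphone questions above.
survey = [
    {"id": "q1", "type": "single_select",
     "text": "How often do you buy a new smartphone?",
     "options": ["More than once per year", "Every 1-2 years",
                 "Every 3-5 years", "Every 6+ years"]},
    {"id": "q2", "type": "ranking",
     "text": "Rank the following factors in order of importance:",
     "options": ["screen size", "storage capacity"]},
    {"id": "q4", "type": "slider",
     "text": "How do you feel about your current smartphone?",
     "scale": ("I love it", "I hate it")},
]

for q in survey:
    print(q["id"], q["type"], "-", q["text"])
```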

Answers from these questions, and others within the survey, would be analyzed to paint a picture of smartphone usage and attitude trends across a population and its sub-groups. Qualitative research might then be carried out to explore those findings further - for example, people’s detailed attitudes towards their smartphones, how they feel about the amount of time they spend on them, and how features could be improved.

Leveraging quantilope for your quantitative survey

quantilope’s Consumer Intelligence Platform specializes in automated, advanced survey insights so that researchers of any skill level can benefit from quick, high-quality consumer insights. With 12 advanced methods to choose from and a wide variety of quantitative question formats, quantilope is your one-stop-shop for all things market research (including its in-depth qualitative research solution - inColor).

When it comes to building your survey, you decide how you want to go about it. You can start with a blank slate and drop questions into your survey from a pre-programmed list, or you can get a head start with a survey template for a particular business use case (like concept testing) and customize from there. Once your survey is ready to launch, simply specify your target audience, connect any panel (quantilope is panel agnostic), and watch as respondents answer questions in your survey in real time by monitoring the fieldwork section of your project. AI-driven data analysis takes the raw data and converts it into actionable findings so you never have to worry about manual calculations or statistical testing.

Whether you want to run your quantitative study entirely on your own or with the help of a classically trained research team member, the choice is yours on quantilope’s platform. For more information on how quantilope can help with your next quantitative research project, get in touch below!

Get in touch to learn more about quantitative research with quantilope!


Chapter 8 Survey Research: A Quantitative Technique

Why Survey Research?

In 2008, the voters of the United States elected our first African American president, Barack Obama. It may not surprise you to learn that when President Obama was coming of age in the 1970s, one-quarter of Americans reported that they would not vote for a qualified African American presidential nominee. Three decades later, when President Obama ran for the presidency, fewer than 8% of Americans still held that position, and President Obama won the election (Smith, 2009). [Smith, T. W. (2009). Trends in willingness to vote for a black and woman for president, 1972–2008. GSS Social Change Report No. 55. Chicago, IL: National Opinion Research Center.] We know about these trends in voter opinion because the General Social Survey (http://www.norc.uchicago.edu/GSS+Website), a nationally representative survey of American adults, included questions about race and voting over the years described here. Without survey research, we may not know how Americans’ perspectives on race and the presidency shifted over these years.

8.1 Survey Research: What Is It and When Should It Be Used?

Learning Objectives

  • Define survey research.
  • Identify when it is appropriate to employ survey research as a data-collection strategy.

Most of you have probably taken a survey at one time or another, so you probably have a pretty good idea of what a survey is. Sometimes students in my research methods classes feel that understanding what a survey is and how to write one is so obvious, there’s no need to dedicate any class time to learning about it. This feeling is understandable—surveys are very much a part of our everyday lives—we’ve probably all taken one, we hear about their results in the news, and perhaps we’ve even administered one ourselves. What students quickly learn is that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often a great many rounds of revision. But it is worth the effort. As we’ll learn in this chapter, there are many benefits to choosing survey research as one’s method of data collection. We’ll take a look at what a survey is exactly, what some of the benefits and drawbacks of this method are, how to construct a survey, and what to do with survey data once one has it in hand.

Survey research is a quantitative method whereby a researcher poses some set of predetermined questions, typically in a written format, to an entire group, or sample, of individuals. Survey research is an especially useful approach when a researcher aims to describe or explain features of a very large group or groups. This method may also be used as a way of quickly gaining some general details about one’s population of interest to help prepare for a more focused, in-depth study using time-intensive methods such as in-depth interviews or field research. In this case, a survey may help a researcher identify specific individuals or locations from which to collect additional data.

As is true of all methods of data collection, survey research is better suited to answering some kinds of research questions than others. In addition, as you’ll recall from Chapter 6 "Defining and Measuring Concepts", operationalization works differently with different research methods. If your interest is in political activism, for example, you would likely operationalize that concept differently in a survey than you would for a field research study of the same topic.

Key Takeaway

  • Survey research is often used by researchers who wish to explain trends or features of large groups. It may also be used to assist those planning some more focused, in-depth study.

Exercise

  • Recall some of the possible research questions you came up with while reading previous chapters of this text. How might you frame those questions so that they could be answered using survey research?

8.2 Pros and Cons of Survey Research

Learning Objectives

  • Identify and explain the strengths of survey research.
  • Identify and explain the weaknesses of survey research.

Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.

Strengths of Survey Method

Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In my own study of older people’s experiences in the workplace, I was able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of my seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. I realize that $1,000 is nothing to sneeze at. But just imagine what it might have cost to visit each of those people individually to interview them in person. Consider the cost of gas to drive around the state, other travel costs, such as meals and lodging while on the road, and the cost of time to drive to and talk with each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus surveys are relatively cost effective.

Related to the benefit of cost effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 7 "Sampling". Of all the data-collection methods described in this text, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized, in that the same questions, phrased in exactly the same way, are posed to participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 9 "Interviews: Qualitative and Quantitative Approaches", do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and questionnaire design, one strength of survey methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. I repeat, surveys are used by all kinds of people in all kinds of professions. Is there a light bulb switching on in your head? I hope so. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries; social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts; businesses use them to learn how to market their products; governments use them to understand community opinions and needs; and politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effective
  • Generalizable
  • Reliable
  • Versatile

Weaknesses of Survey Method

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics in them, the fact that the survey researcher is generally stuck with a single instrument for collecting data (the questionnaire) means that surveys are in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Validity can also be a problem with surveys. Survey questions are standardized; thus it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American woman but not an African American man? I am not at all suggesting that such a perspective makes any sense, but it is conceivable that an individual might hold such a perspective.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Validity concerns

Key Takeaways

  • Strengths of survey research include its cost effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and issues with validity.
Exercises

  • What are some ways that survey researchers might overcome the weaknesses of this method?
  • Find an article reporting results from survey research (remember how to use Sociological Abstracts?). How do the authors describe the strengths and weaknesses of their study? Are any of the strengths or weaknesses described here mentioned in the article?

8.3 Types of Surveys

Learning Objectives

  • Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research.
  • Describe the various types of longitudinal surveys.
  • Define retrospective surveys, and identify their strengths and weaknesses.
  • Discuss some of the benefits and drawbacks of the various methods of delivering self-administered questionnaires.

There is much variety when it comes to surveys. This variety comes both in terms of time—when or with what frequency a survey is administered—and in terms of administration—how a survey is delivered to respondents. In this section we’ll take a look at what types of surveys exist when it comes to both time and administration.

In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are those that are administered at just one point in time. These surveys offer researchers a sort of snapshot in time and give us an idea about how things are for our respondents at the particular point in time that the survey is administered. My own study of older workers mentioned previously is an example of a cross-sectional survey; I administered the survey at just one time.

Another example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011) Kezdy, A., Martos, T., Boland, V., & Horvath-Szabo, K. (2011). Religious doubts and mental health in adolescence and young adulthood: The association with religious attitudes. Journal of Adolescence, 34 , 39–47. of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of one’s life and health. The researchers found from analysis of their cross-sectional data that anxiety and depression were highest among those who had both strong religious beliefs and also some doubts about religion. Yet another recent example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011) of how the perceived publicness of social networking sites influences users’ self-disclosures. Bateman, P. J., Pike, J. C., & Butler, B. S. (2011). To disclose or not: Publicness in social networking sites. Information Technology & People, 24 , 78–100. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.

One problem with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain stagnant. Thus generalizing from a cross-sectional survey about the way things are can be tricky; perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey. Think, for example, about how Americans might have responded if administered a survey asking for their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. The point is not that cross-sectional surveys are useless; they have many important uses. But researchers must remember what they have captured by administering a cross-sectional survey; that is, as previously noted, a snapshot of life as it was at the time that the survey was administered.

One way to overcome this sometimes problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys are those that enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with another type of survey called retrospective. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people’s inclinations change over time. The Gallup opinion polls are an excellent example of trend surveys. You can read more about Gallup on their website: http://www.gallup.com/Home.aspx . To learn about how public opinion changes over time, Gallup administers the same questions to people at different points in time. For example, for several years Gallup has polled Americans to find out what they think about gas prices (something many of us happen to have opinions about). One thing we’ve learned from Gallup’s polling is that price increases in gasoline caused financial hardship for 67% of respondents in 2011, up from 40% in the year 2000. Gallup’s findings about trends in opinions about gas prices have also taught us that whereas just 34% of people in early 2000 thought the current rise in gas prices was permanent, 54% of people in 2011 believed the rise to be permanent. Thus through Gallup’s use of trend survey methodology, we’ve learned that Americans seem to feel generally less optimistic about the price of gas these days than they did 10 or so years ago. You can read about these and other findings on Gallup’s gasoline questions at http://www.gallup.com/poll/147632/Gas-Prices.aspx#1 . It should be noted that in a trend survey, the same people are probably not answering the researcher’s questions each year. Because the interest here is in trends, not specific people, as long as the researcher’s sample is representative of whatever population he or she wishes to describe trends for, it isn’t important that the same people participate each time.

Next are panel surveys. Unlike in a trend survey, in a panel survey the same people do participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, 5 years in a row. Keeping track of where people live, when they move, and when they die takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study. You can read more about the Youth Development Study at its website: http://www.soc.umn.edu/research/yds . Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). Mortimer, J. T. (2003). Working and growing up in America . Cambridge, MA: Harvard University Press. Contrary to popular beliefs about the impact of work on adolescents’ performance in school and transition to adulthood, work in fact increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, a researcher identifies some category of people who are of interest and then regularly surveys people who fall into that category. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that may be of interest to researchers include people of particular generations or those who were born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific life experience in common. An example of this sort of research can be seen in Christine Percheski’s work (2008) Percheski, C. (2008). Opting out? Cohort differences in professional women’s employment rates from 1960 to 2005. American Sociological Review, 73 , 497–517. on cohort differences in women’s employment. Percheski compared women’s employment rates across seven different generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. She found, among other patterns, that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003). Belkin, L. (2003, October 26). The opt-out revolution. New York Times , pp. 42–47, 58, 85–86.

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. This means that if whatever behavior or other phenomenon the researcher is interested in changes, either because of some world event or because people age, the researcher will be able to capture those changes. Table 8.1 "Types of Longitudinal Surveys" summarizes each of the three types of longitudinal surveys.

Table 8.1 Types of Longitudinal Surveys

  Type of survey    How it works
  Trend             A fresh sample is surveyed at each administration; the focus is on changes in trends over time, so the same people do not necessarily participate more than once.
  Panel             The exact same sample is surveyed several times over a period of time.
  Cohort            People who share some common experience or characteristic are surveyed over time; the same individuals do not necessarily participate each time.

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine, for example, that you’re asked in a survey to respond to questions about where, how, and with whom you spent last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, chances are good that you might be able to respond accurately to any survey questions about it. But now let’s say the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so he asks you to report on where, how, and with whom you spent the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been had you been asked the question each year over the past 6 years rather than asked to report on all years today?

In sum, when or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not necessarily unique to survey research. Other methods of data collection can be cross-sectional or longitudinal—these are really matters of research design. But we’ve placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey administration deals with how surveys are administered. We’ll examine that next.

Administration

Surveys vary not just in terms of when they are administered but also in terms of how they are administered. One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which he or she is asked to respond. Self-administered questionnaires can be delivered in hard copy format, typically via mail, or, increasingly commonly, online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or via snail mail. Perhaps you’ve taken a survey that was given to you in person; on many college campuses it is not uncommon for researchers to administer surveys in large social science classes (as you might recall from the discussion in our chapter on sampling). In my own introduction to sociology courses, I’ve welcomed graduate students and professors doing research in areas that are relevant to my students, such as studies of campus life, to administer their surveys to the class. If you are ever asked to complete a survey in a similar setting, it might be interesting to note how your perspective on the survey and its questions could be shaped by the new knowledge you’re gaining about survey research in this chapter.

Researchers may also deliver surveys in person by going door-to-door and either asking people to fill them out right away or making arrangements for the researcher to return to pick up completed surveys. Though the advent of online survey tools has made door-to-door delivery of surveys less common, I still see an occasional survey researcher at my door, especially around election time. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

If you are not able to visit each member of your sample personally to deliver a survey, you might consider sending your survey through the mail. While this mode of delivery may not be ideal (imagine how much less likely you’d probably be to return a survey that didn’t come with the researcher standing on your doorstep waiting to take it from you), sometimes it is the only available or the most practical option. As I’ve said, mail may not be an ideal way of administering a survey because it can be difficult to convince people to take the time to complete and return it.

Often survey researchers who deliver their surveys via snail mail may provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.

In my own study of older workers’ harassment experiences, people in the sample were notified in advance of the survey mailing via an article describing the research in a newsletter they received from the agency with whom I had partnered to conduct the survey. When I mailed the survey, a $1 bill was included with each questionnaire in order to provide some incentive and an advance token of thanks to participants for returning the surveys. Two months after the initial mailing went out, those who were sent a survey were contacted by phone. While returned surveys did not contain any identifying information about respondents, my research assistants contacted individuals to whom a survey had been mailed to remind them that it was not too late to return their survey and to say thanks to those who may have already done so. Four months after the initial mailing went out, everyone on the original mailing list received a letter thanking those who had returned the survey and once again reminding those who had not that it was not too late to do so. The letter included a return postcard for respondents to complete should they wish to receive another copy of the survey. Respondents were also provided a telephone number to call and were offered the option of completing the survey by phone. As you can see, administering a survey by mail typically involves much more than simply arranging a single mailing; participants may be notified in advance of the mailing, they then receive the mailing, and then several follow-up contacts will likely be made after the survey has been mailed.

Earlier I mentioned online delivery as another way to administer a survey. This delivery mechanism is becoming increasingly common, no doubt because it is easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. SurveyMonkey offers both free and paid online survey services ( http://www.surveymonkey.com ). One advantage to using a service like SurveyMonkey, aside from the advantages of online delivery already mentioned, is that results can be provided to you in formats that are readable by data analysis programs such as SPSS, Systat, and Excel. This saves you, the researcher, the step of having to manually enter data into your analysis program, as you would if you administered your survey in hard copy format.

Many of the suggestions provided for improving the response rate on a hard copy questionnaire apply to online questionnaires as well. One difference of course is that the sort of incentives one can provide in an online format differ from those that can be given in person or sent through the mail. But this doesn’t mean that online survey researchers cannot offer completion incentives to their respondents. I’ve taken a number of online surveys; many of these did not come with an incentive other than the joy of knowing that I’d helped a fellow social scientist do his or her job, but on one I was given a printable $5 coupon to my university’s campus dining services on completion, and another time I was given a coupon code to use for $10 off any order on Amazon.com. I’ve taken other online surveys where on completion I could provide my name and contact information if I wished to be entered into a drawing together with other study participants to win a larger gift, such as a $50 gift card or an iPad.

Sometimes surveys are administered by having a researcher actually pose questions directly to respondents rather than having respondents read the questions on their own. These types of surveys are a form of interviews. We discuss interviews in Chapter 9 "Interviews: Qualitative and Quantitative Approaches" , where we’ll examine interviews of the survey (or quantitative) type and qualitative interviews as well. Interview methodology differs from survey research in that data are collected via a personal interaction. Because asking people questions in person comes with a set of guidelines and concerns that differ from those associated with asking questions on paper or online, we’ll reserve our discussion of those guidelines and concerns for Chapter 9 "Interviews: Qualitative and Quantitative Approaches" .

Whatever delivery mechanism you choose, keep in mind that there are pros and cons to each of the options described here. While online surveys may be faster and cheaper than mailed surveys, can you be certain that every person in your sample will have the necessary computer hardware, software, and Internet access in order to complete your online survey? On the other hand, perhaps mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The choice of which delivery mechanism is best depends on a number of factors including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. In my own survey of older workers, I would have much preferred to administer my survey online, but because so few people in my sample were likely to have computers, and even fewer would have Internet access, I chose instead to mail paper copies of the survey to respondents’ homes. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

Key Takeaways

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one point in time, and longitudinal surveys are administered over time.
  • Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
  • Self-administered questionnaires may be delivered to participants in hard copy form, either in person or via snail mail, or online.

Exercises

  • If the idea of a panel study piqued your interest, check out the Up series of documentary films. While not a survey, the films offer one example of a panel study. Filmmakers began filming the lives of 14 British children in 1964, when the children were 7 years old. They have since caught up with the children every 7 years. In 2012, the eighth installment of the documentary, 56 Up , will come out. Many clips from the series are available on YouTube.
  • For more information about online delivery of surveys, check out SurveyMonkey’s website: http://www.surveymonkey.com .

8.4 Designing Effective Questions and Questionnaires

Learning Objectives

  • Identify the steps one should take in order to write effective survey questions.
  • Describe some of the ways that survey questions might confuse respondents and how to overcome that possibility.
  • Recite the two response option guidelines to follow when writing closed-ended questions.
  • Define fence-sitting and floating.
  • Describe the steps involved in constructing a well-designed questionnaire.
  • Discuss why pretesting is important.

To this point we’ve considered several general points about surveys including when to use them, some of their pros and cons, and how often and in what ways to administer surveys. In this section we’ll get more specific and take a look at how to pose understandable questions that will yield useable data and how to present those questions on your questionnaire.

Asking Effective Questions

The first thing you need to do in order to write effective survey questions is identify what exactly it is that you wish to know. As silly as it sounds to state what seems so completely obvious, I can’t stress enough how easy it is to forget to include important questions when designing a survey. Let’s say you want to understand how students at your school made the transition from high school to college. Perhaps you wish to identify which students were comparatively more or less successful in this transition and which factors contributed to students’ success or lack thereof. To understand which factors shaped successful students’ transitions to college, you’ll need to include questions in your survey about all the possible factors that could contribute. Consulting the literature on the topic will certainly help, but you should also take the time to do some brainstorming on your own and to talk with others about what they think may be important in the transition to college. Perhaps time or space limitations won’t allow you to include every single item you’ve come up with, so you’ll also need to think about ranking your questions so that you can be sure to include those that you view as most important.

Although I have stressed the importance of including questions on all topics you view as important to your overall research question, you don’t want to take an everything-but-the-kitchen-sink approach by uncritically including every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your respondents to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you view as important.

Once you’ve identified all the topics about which you’d like to ask questions, you’ll need to actually write those questions. Questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and succinct as possible. As I’ve said, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and not overly wordy will go a long way toward showing your respondents the gratitude they deserve.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experience with whatever events, behaviors, or feelings you are asking them to report. You probably wouldn’t want to ask a sample of 18-year-old respondents, for example, how they would have advised President Reagan to proceed when news of the United States’ sale of weapons to Iran broke in the mid-1980s. For one thing, few 18-year-olds are likely to have any clue about how to advise a president (nor does this 30-something-year-old). Furthermore, the 18-year-olds of today were not even alive during Reagan’s presidency, so they have had no experience with the event about which they are being questioned. In our example of the transition to college, heeding the criterion of relevance would mean that respondents must understand what exactly you mean by “transition to college” if you are going to use that phrase in your survey and that respondents must have actually experienced the transition to college themselves.

If you decide that you do wish to pose some questions about matters with which only a portion of respondents will have had experience, it may be appropriate to introduce a filter question into your survey. A filter question is designed to identify some subset of survey respondents who are then asked additional questions that are not relevant to the entire sample. Perhaps in your survey on the transition to college you want to know whether substance use plays any role in students’ transitions. You may ask students how often they drank during their first semester of college. But this assumes that all students drank. Certainly some may have abstained, and it wouldn’t make any sense to ask the nondrinkers how often they drank. Nevertheless, it seems reasonable that drinking frequency may have an impact on someone’s transition to college, so it is probably worth asking this question even if doing so violates the rule of relevance for some respondents. This is just the sort of instance when a filter question would be appropriate. You may pose the question as it is presented in Figure 8.8 "Filter Question" .

Figure 8.8 Filter Question

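Online survey tools implement filter questions through skip logic configured in their own menus, but the underlying idea can be sketched in a few lines of Python. The question IDs and response values below are hypothetical, not drawn from the actual Figure 8.8.

    # A minimal sketch of filter-question skip logic. Question IDs are invented.
    def next_question(responses):
        """Return the ID of the next question to show, given answers so far."""
        if responses.get("q10_drank_first_semester") == "no":
            # Nondrinkers skip the frequency follow-up entirely.
            return "q11_next_topic"
        # Only respondents who reported drinking see the follow-up (10a).
        return "q10a_drinking_frequency"

    print(next_question({"q10_drank_first_semester": "no"}))   # q11_next_topic
    print(next_question({"q10_drank_first_semester": "yes"}))  # q10a_drinking_frequency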

There are some ways of asking questions that are bound to confuse a good many survey respondents. Survey researchers should take great care to avoid these kinds of questions. These include questions that pose double negatives, those that use confusing or culturally specific terms, and those that ask more than one question but are posed as a single question. Any time respondents are forced to decipher questions that utilize two forms of negation, confusion is bound to ensue. Taking the previous question about drinking as our example, what if we had instead asked, “Did you not drink during your first semester of college?” A response of no would mean that the respondent did actually drink—he or she did not not drink. This example is obvious, but hopefully it drives home the point to be careful about question wording so that respondents are not asked to decipher double negatives. In general, avoiding negative terms in your question wording will help to increase respondent understanding. Though this is generally true, some researchers argue that negatively worded questions should be integrated with positively worded questions in order to ensure that respondents have actually carefully read each question. See, for example, the following: Vaterlaus, M., & Higgenbotham, B. (2011). Writing survey questions for local program evaluations. Retrieved from http://extension.usu.edu/files/publications/publication/FC_Evaluation_2011-02pr.pdf

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to Maine from Minnesota, I was totally confused every time I heard someone use the word wicked . This term has totally different meanings across different regions of the country. I’d come from an area that understood the term wicked to be associated with evil. In my new home, however, wicked is used simply to put emphasis on whatever it is that you’re talking about. So if this chapter is extremely interesting to you, if you live in Maine you might say that it is “wicked interesting.” If you hate this chapter and you live in Minnesota, perhaps you’d describe the chapter simply as wicked. I once overheard one student tell another that his new girlfriend was “wicked athletic.” At the time I thought this meant he’d found a woman who used her athleticism for evil purposes. I’ve come to understand, however, that this woman is probably just exceptionally athletic. While wicked may not be a term you’re likely to use in a survey, the point is to be thoughtful and cautious about whatever terminology you do use.

Asking multiple questions as though they are a single question can also be terribly confusing for survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question, one that is posed as a single question but in fact asks more than one question. Using our example of the transition to college, Figure 8.9 "Double-Barreled Question" shows a double-barreled question.

Figure 8.9 Double-Barreled Question


Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify what it is they wish to know.
  • Keep questions clear and succinct.
  • Make questions relevant to respondents.
  • Use filter questions when necessary.
  • Avoid questions that are likely to confuse respondents such as those that use double negatives, use culturally specific terms, or pose more than one question in the form of a single question.
  • Imagine how they would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Response Options

While posing clear and understandable questions in your survey is certainly important, so, too, is providing respondents with unambiguous response options. Response options are the answers that you provide to the people taking your survey. Generally respondents will be asked to choose a single (or best) response to each question you pose, though certainly it makes sense in some cases to instruct respondents to choose multiple response options. One caution to keep in mind when accepting multiple responses to a single question, however, is that doing so may add complexity when it comes to tallying and analyzing your survey results.

Offering response options assumes that your questions will be closed ended. In a quantitative written survey, which is the type of survey we’ve been discussing here, chances are good that most if not all your questions will be closed ended. This means that you, the researcher, will provide respondents with a limited set of options for their responses. To write an effective closed-ended question, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 8.8 "Filter Question" , which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 8.8 "Filter Question" we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options.
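One way to think about mutual exclusivity and exhaustiveness is as a function that must assign exactly one category to every possible value. The sketch below does this for a drinking-frequency item; the middle cutoffs are invented, since only the first and last options from Figure 8.8 are quoted above.

    # Hypothetical cutoffs for a drinks-per-week item. Chained elif branches
    # ending in a bare else guarantee exactly one category per value: the
    # bins cannot overlap (mutually exclusive) and no value is left without
    # a category (exhaustive).
    def frequency_category(times_per_week):
        if times_per_week < 1:
            return "less than one time per week"
        elif times_per_week < 3:
            return "1-2 times per week"
        elif times_per_week < 5:
            return "3-4 times per week"
        elif times_per_week < 7:
            return "5-6 times per week"
        else:
            return "7+ times per week"

    for value in [0.5, 2, 4, 6, 14]:
        print(value, "->", frequency_category(value))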

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions A survey question for which the researcher does not provide respondents with response options; instead, respondents answer in their own words. in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher.

In Section 8.4.1 "Asking Effective Questions" we discussed double-barreled questions, but response options can also be double barreled, and this should be avoided. Figure 8.10 "Double-Barreled Response Options" is an example of a question that uses double-barreled response options.

Figure 8.10 Double-Barreled Response Options


Other things to avoid when it comes to response options include fence-sitting and floating. Fence-sitters are respondents who choose neutral response options even when they have an opinion. This can occur if respondents are given, say, five rank-ordered response options, such as strongly agree, agree, no opinion, disagree, and strongly disagree. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is the nonsocially desirable opinion. Floaters, on the other hand, are respondents who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion. If a respondent is only given four rank-ordered response options, such as strongly agree, agree, disagree, and strongly disagree, those who have no opinion have no choice but to select a response that suggests they have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers actually want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is OK to force respondents to choose an opinion. There is no always-correct solution to either problem.

Finally, using a matrix is a nice way of streamlining response options. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey, but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 8.11 "Survey Questions Utilizing Matrix Format" .

Figure 8.11 Survey Questions Utilizing Matrix Format


Designing Questionnaires

In addition to constructing quality questions and posing clear response options, you’ll also need to think about how to present your written questions and response options to survey respondents. Questions are presented on a questionnaire, the document (either hard copy or online) that contains all your survey questions and on which respondents read and mark their responses. Designing questionnaires takes some thought, and in this section we’ll discuss the sorts of things you should think about as you prepare to present your well-constructed survey questions on a questionnaire.

One of the first things to do once you’ve come up with a set of survey questions you feel confident about is to group those questions thematically. In our example of the transition to college, perhaps we’d have a few questions asking about study habits, others focused on friendships, and still others on exercise and eating habits. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about precollege life and habits and then present a series of questions about life after beginning college. The point here is to be deliberate about how you present your questions to respondents.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive or difficult topic, such as child sexual abuse or other criminal activity, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research—only you, the researcher, hopefully in consultation with people who are willing to provide you with feedback, can determine how best to order your questions. To do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions.

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a good number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of throwing them in will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their precious fun time away from studying to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you have the endorsement of a professor who is willing to allow you to administer your survey in class, students may be willing to give you a little more time (though perhaps the professor will not). The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some advise that surveys should not take longer than about 15 minutes to complete (cited in Babbie, 2010). This can be found at http://www.worldopinion.com/the_frame/frame4.html , cited in Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. Others suggest that up to 20 minutes is acceptable (Hopper, 2010). Hopper, J. (2010). How long should a survey be? Retrieved from http://www.verstaresearch.com/blog/how-long-should-a-survey-be As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered in order to determine how long to make your questionnaire.

A good way to estimate the time it will take respondents to complete your questionnaire is through pretesting. Pretesting allows you to get feedback on your questionnaire so you can improve it before you actually administer it. Pretesting can be quite expensive and time consuming if you wish to test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pretesting with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pretesting your questionnaire you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are exceptionally boring or offensive, and learn whether there are places where you should have included filter questions, to name just a few of the benefits of pretesting. You can also time pretesters as they take your survey. Ask them to complete the survey as though they were actually members of your sample. This will give you a good idea about what sort of time estimate to provide respondents when it comes time to actually administer your survey, and about whether you have some wiggle room to add additional items or need to cut a few items.
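If you do time your pretesters, a quick calculation turns those timings into the estimate you give respondents. The numbers below are invented pretest timings, in minutes, just to show the arithmetic.

    from statistics import mean

    # Hypothetical completion times (minutes) from five pretesters.
    times = [12.5, 15.0, 11.0, 18.5, 14.0]

    print("average:", round(mean(times), 1), "minutes")  # estimate to report to respondents
    print("range:", min(times), "to", max(times))        # spread hints at your wiggle room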

Perhaps this goes without saying, but your questionnaire should also be attractive. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page, make your font size readable (at least 12 point), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions.

Key Takeaways

  • Brainstorming and consulting the literature are two important early steps to take when preparing to write effective survey questions.
  • Make sure that your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Getting feedback on your survey questions is a crucial step in the process of designing a survey.
  • When it comes to creating response options, the solution to the problem of fence-sitting might cause floating, whereas the solution to the problem of floating might cause fence-sitting.
  • Pretesting is an important step for improving one’s survey before actually administering it.

Exercises

  • Do a little Internet research to find out what a Likert scale is and when you may use one.
  • Write a closed-ended question that follows the guidelines for good survey question construction. Have a peer in the class check your work (you can do the same for him or her!).

8.5 Analysis of Survey Data

Learning Objectives

  • Define response rate, and discuss some of the current thinking about response rates.
  • Describe what a codebook is and what purpose it serves.
  • Define univariate, bivariate, and multivariate analysis.
  • Describe each of the measures of central tendency.
  • Describe what a contingency table displays.

This text is primarily focused on designing research, collecting data, and becoming a knowledgeable and responsible consumer of research. We won’t spend as much time on data analysis, or what to do with our data once we’ve designed a study and collected them, but I will spend some time in each of our data-collection chapters describing some important basics of data analysis that are unique to each method. Entire textbooks could be (and have been) written entirely on data analysis. In fact, if you’ve ever taken a statistics class, you already know much about how to analyze quantitative survey data. Here we’ll go over a few basics that can get you started as you begin to think about turning all those completed questionnaires into findings that you can share.

From Completed Questionnaires to Analyzable Data

It can be very exciting to receive those first few completed surveys back from respondents. Hopefully you’ll even get more than a few back, and once you have a handful of completed questionnaires, your feelings may go from initial euphoria to dread. Data are fun and can also be overwhelming. The goal with data analysis is to be able to condense large amounts of information into usable and understandable chunks. Here we’ll describe just how that process works for survey researchers.

As mentioned, the hope is that you will receive a good portion of the questionnaires you distributed back in a completed and readable format. The number of completed questionnaires you receive divided by the number of questionnaires you distributed is your response rate. Let’s say your sample included 100 people and you sent questionnaires to each of those people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening are about zero. If you’re lucky, perhaps 75 or so will return completed questionnaires. In this case, your response rate would be 75% (75 divided by 100). That’s pretty darn good. Though response rates vary, and researchers don’t always agree about what makes a good response rate, having three-quarters of your surveys returned would be considered good, even excellent, by most survey researchers. There has been lots of research done on how to improve a survey’s response rate. We covered some of these suggestions previously, but they include personalizing questionnaires by, for example, addressing them to specific respondents rather than to some generic recipient such as “madam” or “sir”; enhancing the questionnaire’s credibility by providing details about the study, contact information for the researcher, and perhaps partnering with agencies likely to be respected by respondents such as universities, hospitals, or other relevant organizations; sending out prequestionnaire notices and postquestionnaire reminders; and including some token of appreciation with mailed questionnaires, even if small, such as a $1 bill.

The major concern with response rates is that a low rate of response may introduce nonresponse bias into a study’s findings; this bias occurs when respondents differ in important ways from nonrespondents. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, we may well find that our findings don’t at all represent how things really are or, at the very least, we are limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a recent body of research shows that concern over response rates may be overblown (Langer, 2003). Langer, G. (2003). About response rates: Some unresolved questions. Public Perspective , May/June, 16–18. Retrieved from http://www.aapor.org/Content/aapor/Resources/PollampSurveyFAQ1/DoResponseRatesMatter/Response_Rates_-_Langer.pdf Several studies have shown that low response rates did not make much difference in findings or in sample representativeness (Curtin, Presser, & Singer, 2000; Keeter, Kennedy, Dimock, Best, & Craighill, 2006; Merkle & Edelman, 2002). Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64 , 413–428; Keeter, S., Kennedy, C., Dimock, M., Best, J., & Craighill, P. (2006). Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly, 70 , 759–779; Merkle, D. M., & Edelman, M. (2002). Nonresponse in exit polls: A comprehensive analysis. In M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), Survey nonresponse (pp. 243–258). New York, NY: John Wiley and Sons. For now, the jury may still be out on what makes an ideal response rate and on whether, or to what extent, researchers should be concerned about response rates. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.

Whatever your survey’s response rate, the major concern of survey researchers once they have their nice, big stack of completed questionnaires is condensing their data into manageable, and analyzable, bits. One major advantage of quantitative methods such as survey research, as you may recall from Chapter 1 "Introduction", is that they enable researchers to describe large amounts of data because they can be represented by and condensed into numbers. In order to condense your completed surveys into analyzable numbers, you’ll first need to create a codebook. A codebook is a document that outlines how a survey researcher has translated her or his data from words into numbers. An excerpt from the codebook I developed from my survey of older workers can be seen in Table 8.2 "Codebook Excerpt From Survey of Older Workers". The coded responses you see can be seen in their original survey format in Chapter 6 "Defining and Measuring Concepts", Figure 6.12 "Example of an Index Measuring Financial Security". As you’ll see in the table, in addition to converting response options into numerical values, a short variable name is given to each question. This shortened name comes in handy when entering data into a computer program for analysis.
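
To make the idea concrete, here is a minimal sketch of what a codebook entry might look like if stored in Python. The variable name FINSEC and the five response labels are illustrative assumptions, not the actual entries from the older-worker codebook.

    # Hypothetical codebook entry: a short variable name, the question it
    # refers to, and a mapping from numeric codes back to response wording.
    codebook = {
        "FINSEC": {
            "question": "How financially secure do you feel?",
            "codes": {
                1: "Not at all secure",
                2: "Slightly secure",
                3: "Moderately secure",
                4: "Very secure",
                5: "Extremely secure",
            },
        },
    }

    print(codebook["FINSEC"]["codes"][3])  # Moderately secure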

Table 8.2 Codebook Excerpt From Survey of Older Workers

If you’ve administered your questionnaire the old-fashioned way, via snail mail, the next task after creating your codebook is data entry. If you’ve utilized an online tool such as SurveyMonkey to administer your survey, here’s some good news—most online survey tools come with the capability of importing survey results directly into a data analysis program. Trust me—this is indeed most excellent news. (If you don’t believe me, I highly recommend administering hard copies of your questionnaire next time around. You’ll surely then appreciate the wonders of online survey administration.)

For those who will be conducting manual data entry, there probably isn’t much I can say about this task that will make you want to perform it other than pointing out the reward of having a database of your very own analyzable data. We won’t get into too many of the details of data entry, but I will mention a few programs that survey researchers may use to analyze data once it has been entered. The first is SPSS, or the Statistical Package for the Social Sciences ( http://www.spss.com ). SPSS is a statistical analysis computer program designed to analyze just the sort of data quantitative survey researchers collect. It can perform everything from very basic descriptive statistical analysis to more complex inferential statistical analysis. SPSS is touted by many for being highly accessible and relatively easy to navigate (with practice). Other programs that are known for their accessibility include MicroCase ( http://www.microcase.com/index.html ), which includes many of the same features as SPSS, and Excel ( http://office.microsoft.com/en-us/excel-help/about-statistical-analysis-tools-HP005203873.aspx ), which is far less sophisticated in its statistical capabilities but is relatively easy to use and suits some researchers’ purposes just fine. Check out the web pages for each, which I’ve provided links to in the chapter’s endnotes, for more information about what each package can do.
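
Researchers who prefer open-source tools can do the same first step in Python, assuming the pandas library is installed; the sketch below uses a hypothetical file, survey.csv, holding one row per respondent and one column per codebook variable.

    # Load coded survey responses into a data frame for analysis.
    # "survey.csv" is a hypothetical file of numeric codes keyed to the
    # codebook's short variable names.
    import pandas as pd

    data = pd.read_csv("survey.csv")
    print(data.describe())  # basic descriptive statistics per variable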

Identifying Patterns

Data analysis is about identifying, describing, and explaining patterns. Univariate analysis, the analysis of a single variable, is the most basic form of analysis that quantitative researchers conduct. In this form, researchers describe patterns across just one variable. Univariate analysis includes frequency distributions and measures of central tendency. A frequency distribution is a way of summarizing the distribution of responses on a single survey question. Let’s look at the frequency distribution for just one variable from my older worker survey. We’ll analyze the item mentioned first in the codebook excerpt given earlier, on respondents’ self-reported financial security.

Table 8.3 Frequency Distribution of Older Workers’ Financial Security

As you can see in the frequency distribution on self-reported financial security, more respondents reported feeling “moderately secure” than any other response category. We also learn from this single frequency distribution that fewer than 10% of respondents reported being in one of the two most secure categories.
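
A frequency distribution like the one in Table 8.3 takes only a few lines to produce; the coded responses below are hypothetical.

    # Tally how often each coded response occurs on a single variable.
    from collections import Counter

    finsec = [3, 2, 3, 4, 1, 3, 2, 5, 3, 2]  # hypothetical coded responses
    frequencies = Counter(finsec)
    for code in sorted(frequencies):
        print(code, frequencies[code])
    # Prints: 1 1 / 2 3 / 3 4 / 4 1 / 5 1 -- code 3 is the most common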

Another form of univariate analysis that survey researchers can conduct on single variables is measures of central tendency. Measures of central tendency tell us what the most common, or average, response is on a question. Measures of central tendency can be taken for variables at any level of measurement of those we learned about in Chapter 6 "Defining and Measuring Concepts", from nominal to ratio. There are three kinds of measures of central tendency: modes, medians, and means. The mode is the most common response given to a question; modes are most appropriate for nominal-level variables. A median is the middle point in a distribution of responses; the median is the appropriate measure of central tendency for ordinal-level variables. Finally, the measure of central tendency used for interval- and ratio-level variables is the mean. To obtain a mean, one must add the value of all responses on a given variable and then divide that sum by the total number of responses to that question.
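
All three measures are available in Python's standard library; here is a minimal sketch reusing the hypothetical coded responses from the frequency-distribution sketch above.

    # Mode, median, and mean from the standard library.
    import statistics

    finsec = [3, 2, 3, 4, 1, 3, 2, 5, 3, 2]  # hypothetical coded responses
    print(statistics.mode(finsec))    # 3: the most common response
    print(statistics.median(finsec))  # 3.0: the middle of the sorted list
    # (for an even number of cases, the two middle values are averaged)
    print(statistics.mean(finsec))    # 2.8: the arithmetic average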

In the previous example of older workers’ self-reported levels of financial security, the appropriate measure of central tendency would be the median, as this is an ordinal-level variable. If we were to list all responses to the financial security question in order and then choose the middle point in that list, we’d have our median. In Figure 8.12 "Distribution of Responses and Median Value on Workers’ Financial Security", the value of each response to the financial security question is noted, and the middle point within that range of responses is highlighted. To find the middle point, we simply divide the number of valid cases by two. The number of valid cases, 180, divided by 2 is 90, so we’re looking for the 90th value on our distribution to discover the median. As you’ll see in Figure 8.12, that value is 3; thus, the median on our financial security question is 3, or “moderately secure.”

Figure 8.12 Distribution of Responses and Median Value on Workers’ Financial Security


As you can see, we can learn a lot about our respondents simply by conducting univariate analysis of measures on our survey. We can learn even more, of course, when we begin to examine relationships among variables. Either we can analyze the relationships between two variables, called bivariate analysis, or we can examine relationships among more than two variables. This latter type of analysis is known as multivariate analysis.

Bivariate analysis allows us to assess covariation between two variables: whether changes in one variable occur together with changes in another. If two variables do not covary, they are said to have independence, which means simply that there is no relationship between the two variables in question. To learn whether a relationship exists between two variables, a researcher may cross-tabulate the two variables and present their relationship in a contingency table, which displays how variation on one variable may be contingent on variation on the other. Let’s take a look at a contingency table. In Table 8.4 "Financial Security Among Men and Women Workers Age 62 and Up", I have cross-tabulated two questions from my older worker survey: respondents’ reported gender and their self-rated financial security.

Table 8.4 Financial Security Among Men and Women Workers Age 62 and Up

You’ll see in Table 8.4 "Financial Security Among Men and Women Workers Age 62 and Up" that I collapsed a couple of the financial security response categories (recall that there were five categories presented in Table 8.3 "Frequency Distribution of Older Workers’ Financial Security" ; here there are just three). Researchers sometimes collapse response categories on items such as this in order to make it easier to read results in a table. You’ll also see that I placed the variable “gender” in the table’s columns and “financial security” in its rows. Typically, values that are contingent on other values are placed in rows (a.k.a. dependent variables), while independent variables are placed in columns. This makes comparing across categories of our independent variable pretty simple. Reading across the top row of our table, we can see that around 44% of men in the sample reported that they are not financially secure while almost 52% of women reported the same. In other words, more women than men reported that they are not financially secure. You’ll also see in the table that I reported the total number of respondents for each category of the independent variable in the table’s bottom row. This is also standard practice in a bivariate table, as is including a table heading describing what is presented in the table.
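
A contingency table like Table 8.4 can be produced with pandas; the eight respondents below are hypothetical stand-ins, and normalize="columns" yields the column proportions used for comparing across categories of the independent variable.

    # Cross-tabulate two variables, independent variable in the columns.
    import pandas as pd

    data = pd.DataFrame({
        "gender": ["M", "F", "F", "M", "F", "M", "F", "M"],
        "finsec": ["Not secure", "Not secure", "Secure", "Secure",
                   "Not secure", "Moderately secure", "Moderately secure",
                   "Not secure"],
    })
    table = pd.crosstab(data["finsec"], data["gender"], normalize="columns")
    print(table.round(2))  # each column sums to 1.0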

Researchers interested in simultaneously analyzing relationships among more than two variables conduct multivariate analysis. If I hypothesized that financial security declines for women as they age but increases for men as they age, I might consider adding age to the preceding analysis. To do so would require multivariate, rather than bivariate, analysis. We won’t go into detail about how to conduct multivariate analysis of quantitative survey items here, but we will return to multivariate analysis in Chapter 14 "Reading and Understanding Social Research", where we’ll discuss strategies for reading and understanding tables that present multivariate statistics. If you are interested in learning more about the analysis of quantitative survey data, I recommend checking out your campus’s offerings in statistics classes. The quantitative data analysis skills you will gain in a statistics class could serve you quite well should you find yourself seeking employment one day.

  • While survey researchers should always aim to obtain the highest response rate possible, some recent research argues that high return rates on surveys may be less important than we once thought.
  • Several computer programs are designed to assist survey researchers with analyzing their data, including SPSS, MicroCase, and Excel.
  • Data analysis is about identifying, describing, and explaining patterns.
  • Contingency tables show how, or whether, one variable covaries with another.
  • Codebooks can range from relatively simple to quite complex. For an excellent example of a more complex codebook, check out the coding for the General Social Survey (GSS): http://publicdata.norc.org:41000/gss/documents//BOOK/GSS_Codebook.pdf .
  • The GSS allows researchers to cross-tabulate GSS variables directly from its website. Interested? Check out http://www.norc.uchicago.edu/GSS+Website/Data+Analysis .


Quality versus quantity: assessing individual research performance

José-Alain Sahel

1 Institut de la Vision, INSERM U968, Université Pierre et Marie Curie - Paris VI, CNRS UMR7210, 17 rue Moreau, 75012 Paris, FR

2 CIC Quinze-Vingts, INSERM CIC503, CHNO des Quinze-Vingts, 28 rue de Charenton, 75012 Paris, FR

3 Fondation Ophtalmologique Adolphe de Rothschild, 75019 Paris, FR

4 Institute of Ophthalmology, University College London (UCL), GB

Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality––a task that metrics alone have been unable to achieve. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation metric) methods for evaluating individual researchers, as well as recommendations for the integration of quality assessment. Here, we draw on key issues raised by this report and comment on the suggestions for improving existing research evaluation practices.

BALANCING QUANTITY AND QUALITY

Evaluating individual scientific performance is an essential component of research assessment, and outcomes of such evaluations can play a key role in institutional research strategies, including funding schemes, hiring, firing, and promotions. However, there is little consensus and no internationally accepted standards by which to measure scientific performance objectively. Thus, the evaluation of individual researchers remains a notoriously difficult process with no standard solution. Marcus Tullius Cicero once wrote, “Non enim numero haec iudicantur, sed pondere” (1). Translation: The number does not matter, the quality does. In line with Cicero’s outlook on quality versus quantity, the French Academy of Sciences analyzed current bibliometric (citation metric) methods for evaluating individual researchers and made recommendations in January 2011 for the integration of quality assessment (2). The essence of the report is discussed in this Commentary.

Evaluation by experts in the field has been the primary means of assessing a researcher’s performance, although it can be biased by subjective factors, such as conflicts of interest, disciplinary or local favoritism, insufficient competence in the research area, or superficial examination. To ensure objective evaluation by experts, a quantitative analytical tool known as bibliometry (science metrics or citation metrics) has been integrated gradually into evaluation processes (Fig. 1). Bibliometry started with the idea of an impact factor, which was first mentioned in Science in 1955 (3), and has evolved to weigh several aspects of published work, including journal impact factor, total number of citations, average number of citations per paper, average number of citations per author, average number of citations per year, the number of authors per paper, Hirsch’s h-index, Egghe’s g-index, and the contemporary h-index. The development of science metrics has accelerated recently, with the availability of online databases used to calculate bibliometric indicators, such as the Thomson Reuters Web of Science (http://thomsonreuters.com/), Scopus (http://www.scopus.com/home.url), and Google Scholar (http://scholar.google.com/). Within the past decade, metrics have secured a foothold in the evaluation of individual, team, and institutional research because the use of such metrics appears to be easier and faster than qualitative assessment by experts. Because of the ease of use of various metrics, however, bibliometry tends to be applied in excessive and even incorrect ways, especially when used as a standalone analysis.

Figure 1. Can individual research performance be summarized by numbers? (Image credit: D. Frangov, Frangov Dimitar Plamenov Company.)

The French Academy of Sciences (FAS) is concerned that some of the current evaluation practices––in particular, the uncritical use of publication metrics––might be inadequate for evaluating individual scientific performance. In its recent review (2), the FAS addressed the advantages and limitations of the main existing quantitative indicators, stressed that judging the quality of a scientific work in terms of the conceptual and technological innovation of the research is essential, and reaffirmed its position about the decisive role that experts must play in research assessment (2, 4). It also strongly recommended that additional criteria be taken into consideration when assessing individual research performance. These criteria include teaching, mentoring, participation in collective tasks, and collaboration-building, in addition to quantitative parameters that are not measured by bibliometrics, such as number of patents, speaker invitations, international contracts, distinctions, and technology transfers. It appears that the best course of action will be a balanced combination of the qualitative (experts) and the quantitative (bibliometrics).

BIBLIOMETRICS: INDICATORS OR NOT?

Bibliometrics use mathematical and statistical methods to measure scientific output; thus, they provide a quantitative—not a qualitative—assessment of individual research performance. The most commonly used bibliometric indicators, as well as their strengths and weaknesses, are described below.

Impact factor

The impact factor, a major quantitative indicator of the quality and popularity of a journal, is defined by the mean number of citations, over a given period, of the articles published in that journal. The impact factor of a journal is calculated by dividing the number of current-year citations by the number of source items published during the previous two years (5). According to the FAS, the impact factor of journals in which a researcher has published is a useful but highly controversial indicator of individual performance (2). The most common issue is variation among subject areas; in general, a basic science journal will have a higher average impact factor than journals in specialized or applied areas. Individual article quality within a journal is also not reflected by a journal’s impact factor, because citations for an individual paper can be much higher or lower than what might be expected on the basis of that journal’s impact factor (2, 6, 7). In addition, self-citations are not corrected for when calculating the impact factor (6). On account of these limitations, the FAS considers the tendency of certain researchers to organize their work and publication policy according to the journal in which they intend to publish their article to be a dangerous practice. In extreme situations, such journal-centric behavior can trigger scientific misconduct. The FAS notes that there has been an increase in the practice of using journal impact factors for the evaluation of an individual researcher for the purpose of career advancement in some European countries, such as France, and in certain disciplines, such as biology and medicine (2).
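
As a worked example of the two-year calculation just described (the figures below are hypothetical):

    # Two-year journal impact factor: citations received this year to
    # items published in the previous two years, divided by the number
    # of citable items published in those two years.
    citations_this_year = 1200   # e.g., 2011 citations to 2009-2010 items
    citable_items = 400          # items the journal published in 2009-2010
    impact_factor = citations_this_year / citable_items
    print(impact_factor)  # 3.0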

Number of citations

The number of times an author has been cited is an important bibliometric indicator; however, it is a value that has several important limitations. First, citation number depends on the quality of the database used. Second, it does not consider where the author is located in the author list. Third, sometimes articles can have a considerable number of citations for reasons that might not relate to the quality or importance of the scientific content. Fourth, articles published in prestigious journals are privileged as compared with those of equal quality published in journals of average notoriety. Fifth, depending on cultural issues, advantage can be given to citations of scientists from the same country, to scientists from other countries (in particular Americans, as often is the case in France), or to articles written in English rather than in French, for example (2). For these cultural reasons, novel and important papers might attract little attention for several years after their publication. Lastly, citation numbers also tend to be greater for review articles than for original research articles. Self-citations do not reflect the impact of a publication and should therefore not be included in a citation analysis when this is intended to give an assessment of the scientific achievement of a scientist (8).

New indicators (h-index, g-index)

Recently, new bibliometric indicators born out of databases indexing articles and their citations were introduced to address the need to evaluate individual researchers objectively. In 2005, Jorge Hirsch proposed the h-index as a tool for quantifying the scientific impact of an individual researcher (9). The h-index of a scientist is the number of papers co-authored by the researcher with at least h citations each; for example, an h-index of 20 means that an individual researcher has co-authored 20 papers that have each been cited at least 20 times. This index has the major advantage of simultaneously measuring the scientist’s productivity (number of papers published over the years) and the cumulative impact of the scientist’s output (the number of citations for each paper). Although the h-index is preferable to other standard single-number criteria (such as the total number of papers, total number of citations, or number of citations per paper), it has several disadvantages. First, it varies with scientific fields. As an example, h-indices in the life sciences are much higher than in physics (9). Second, it favors senior researchers by never decreasing with age, even if an individual discontinues scientific research (10). Third, citation databases provide different h-indices as a result of differences in coverage, even when generated for the same author at the same time (11, 12). Fourth, the h-index does not consider the context of the citations (such as negative findings or retracted works). Fifth, it is strongly affected by the total number of papers, which may underestimate scientists with short careers and scientists who have published only a few, albeit notable, papers. The h-index also integrates every publication of an individual researcher, regardless of his or her role in authorship, and does not distinguish articles of pathbreaking or influential scientific impact. The contemporary h-index (referred to as the hc-index), as suggested by Sidiropoulos et al. (10), takes into account the age of each article and weights recently published work more heavily. As such, the hc-index may offer a fairer comparison between junior and senior academics than the regular h-index (13).
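
The h-index definition translates directly into code; a minimal sketch in Python:

    # h-index: the largest h such that the author has h papers with at
    # least h citations each.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # 4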

The g-index was introduced (14) to distinguish quality, giving more weight to highly cited articles. The g-index of a scientist is the highest number g of articles (a set of articles ordered in terms of decreasing citation counts) that together received g² or more citations; for example, a g-index of 20 means that the 20 most-cited publications of a researcher have received at least 400 citations in total. Egghe pointed out that the g-index value will always be at least as high as the h-index value, making it easier to differentiate the performance of authors. If Researcher A has published 10 articles, and each has received 4 citations, the researcher’s h-index is 4. If Researcher B has also written 10 articles, and 9 of them have received 4 citations each, that researcher’s h-index is also 4, regardless of how many citations the 10th article has received. However, if the tenth article has received 20 citations, the g-index of Researcher B would be 6; for 50 citations, the g-index would be 9 (15). Thus, one or a few highly cited articles can raise the final g-index of an individual researcher, thereby highlighting the impact of such authors.
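
The same style of sketch works for the g-index, and it reproduces the Researcher B example from the text (this version caps g at the number of papers, matching the "set of articles" definition above):

    # g-index: the largest g such that the g most-cited papers together
    # have at least g**2 citations.
    def g_index(citations):
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    # Researcher B: nine papers with 4 citations each plus a tenth paper
    # with 20 citations -> g-index 6 (while the h-index stays at 4).
    print(g_index([4] * 9 + [20]))  # 6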

CHOOSING AN INDICATOR

Bibliometry is easy to use because of its simple calculations. However, it is important to realize that purely bibliometric approaches are inadequate, because no indicator alone can summarize the quality of the scientific performance of a researcher. The use of a set of metrics (such as number of citations, h-index, or g-index) gives a more accurate estimation of the researcher’s scientific impact. At the same time, metrics should not be made too complex, because they can become a source of conceptual errors that are then difficult to identify. The FAS discourages the use of metrics as a standalone evaluation tool, the use of only one bibliometric indicator, the use of the journal’s impact factor to evaluate the quality of an article, neglect of the impact of the scientific field or subfield, and ignoring author placement in the case of multiple authorship (2).

In 2004, INSERM (the French National Institute of Health and Medical Research) introduced bibliometrics as part of its research assessment procedures. Bibliometric analysis is based on publication indicators that are validated by the evaluated researchers and are at the disposal of the evaluation committees. In addition to the basic indicators (citation numbers and journal impact factor), the measures used by INSERM include the number of publications in the first 10% of journals ranked by decreasing impact factor in a given field (top 10% impact factor, according to Thomson Reuters Journal Citation Reports) and the number of publications from an individual researcher that fall within the top 10% of articles (ranked by total citations) in annual cohorts from each of the 22 disciplines defined by Thomson Reuters Essential Science Indicators. All indicators take into account the research field, the year of publication, and the author’s position among the signers, by assigning an index of 1 to the first or last author, 0.5 to the second or next-to-last author, and 0.25 to all other author positions. Notably, the author’s index can only be used in biomedical research, because in other fields the rank of the authors may follow different rules; in physics, for example, authors are listed in alphabetical order.
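
The author-position weighting described above is easy to express in code; a minimal sketch, assuming positions are counted from 1:

    # INSERM-style author-position weight: 1 for first or last author,
    # 0.5 for second or next-to-last, 0.25 for all other positions.
    def author_weight(position, n_authors):
        if position in (1, n_authors):
            return 1.0
        if position in (2, n_authors - 1):
            return 0.5
        return 0.25

    print(author_weight(1, 6))  # 1.0  (first author)
    print(author_weight(5, 6))  # 0.5  (next-to-last author)
    print(author_weight(3, 6))  # 0.25 (middle author)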

Bibliometric indicator interpretation requires competent expert knowledge of metrics, and in order to ensure good practice, INSERM trains members of evaluation committees on state-of-the-art science metric methods. INSERM has noted that the correlation between publication scoring by members of evaluation committees and any single bibliometric indicator is rather low. For example, the articles of all teams received a number of citations irrespective of the journal in which they were published, with only low correlation between the journal impact factor and the number of times each publication was cited. No correlation was found between the journal impact factor and individual publication citations, or between the “Top 1%” publications and the impact factor (16). The INSERM analysis emphasizes the fact that each indicator has its advantages and limitations, and care must be taken not to treat any of them alone as a “surrogate” marker of team performance. Several indicators must be taken into account when evaluating the overall output of a research team. The use of bibliometric indicators requires great vigilance; but, according to the INSERM experience, metrics enrich the evaluation committees’ debates about the scientific quality of team performance (16).

As reported by the FAS, bibliometric practices vary considerably from country to country. A worldwide Nature survey (17) emphasized that 70% of the interviewed academic scientists, department heads, and other administrators believe that bibliometrics are used for recruitment and promotions, and 63% of them consider the use of these measures to be inadequate. Many Anglo-Saxon countries use bibliometrics to evaluate the performance of universities and research organizations, whereas for hiring and promotions, the curriculum vitae, interview process, and letters of recommendation “count” more than the bibliometric indicators (2). In contrast, metrics are used for recruiting in Chinese and Asian universities in general, although movement toward the use of letters of recommendation is currently underway (2). In France, an extensive use of publication metrics for individual and institutional evaluations has been noted in the biomedical sciences (2).

Research evaluation practices also vary by field and subfield, owing in part to the large disparities across community sizes and the literature coverage provided by citation databases. As reviewed by the FAS, evaluation of individual researchers in the mechanical sciences, computing, and applied mathematics considers both the quality and the number of publications, as well as scientific awards, the number of invitations to speak at conferences, software, patents, and technology transfer agreements. Organization of scientific meetings and editorial responsibilities are also taken into consideration. Younger researchers are evaluated by experts during interviews and while they give seminars. In these fields, publication does not always play a leading role in transferring knowledge; thus, over a long professional career, metrics give a rather weak and inaccurate estimation of research performance. Bibliometrics are therefore used only as a decision-making aid, not as a main tool for evaluation.

In physics and its subfields, evaluation methods vary. In general, a combination of quantitative (number of publications, h-index) and qualitative measures (keynote and invited speeches, mentoring programs) plays a decisive role in the evaluation of senior scientists only. In astrophysics, metrics are largely used for evaluation, recruiting, promotions, and funding allocations. In chemistry, the main bibliometric indicators (h-index, total number of citations, and number of citations per article) are taken into consideration when discussing the careers of senior researchers (those with more than 10 to 12 years of research activity). In recruiting young researchers, experts interview the candidate to examine their ability to present and discuss the subject matter proficiently; the individual’s publication record is also considered. However, the national committees for chemistry of French scientific and university institutions [Centre National de la Recherche Scientifique (CNRS) and Conseil National des Universités (CNU), respectively] usually avoid bibliometrics altogether for an individual’s evaluation.

In economics, evaluation by experts in the field plays the most important role in recruitment and promotions, but bibliometric indicators are used to aid this decision-making. For the humanities and social sciences (philosophy, history, law, sociology, psychology, languages, political sciences, and art) and for mathematics, the existing databases do not cover these fields sufficiently; as a consequence, these fields are not able to use bibliometrics properly. In contrast, in biology and medicine the quantitative indicators—in particular the journal’s impact factor—are widely used for evaluating individual researchers (2).

STRATEGIES AND RECOMMENDATIONS

The FAS acknowledged that bibliometrics can be a very useful evaluation tool when handled by experts in the field. According to its recommendations, the use of bibliometrics by monodisciplinary juries should be of nondecisive value, since the experts on these evaluation committees know the candidates well enough to compare the individual performance of each of them more precisely and objectively. In the case of pluridisciplinary (interdisciplinary) juries, bibliometrics can be used successfully, but only if the experts consider the differences between scientific fields and subfields (as mentioned above). For this purpose, the choice of indicators and the methodology used to evaluate the full spectrum of a scientist’s research activity should be validated at the outset. As emphasized by the FAS, bibliometrics should not be used for deciding which young scientists to recruit. In addition, the bibliometric set should be chosen depending on the purpose of the evaluation: recruitment, promotion, funding allocation, or distinction. Calculations should not be left to nonspecialists (such as administrators, who could use the rapidly accessible data in a biased way) because the number of potential errors in judgment and assessment is too large. Frequent errors to be avoided include homonyms, variations in the use of name initials, and the use of incomplete databases. It is important that the complete list of publications be checked by the researcher concerned. Researchers could even be asked to produce their own indicators (if provided with appropriate guidelines for calculation); these calculations should subsequently be verified. The evaluation process must be transparent and replicable, with clearly defined targets, context, and purpose of the assessment.

To improve the use of bibliometrics, a consensus has been reached by the FAS (2) to perform a series of studies and to evaluate various methodological approaches, including (i) retrospective studies to compare decisions made by experts and evaluating committees with results potentially obtained by bibliometrics; (ii) studies to refine the existing indicators and bibliometric standards; (iii) authorship clarification; (iv) development of standards for originality and innovation; (v) discussion of citation discrepancies arising from geographical or field localism; (vi) monitoring the bibliometric indicators of outstanding researchers (a category reserved for those who have made important and lasting research contributions to their specific field and who have obtained international recognition); (vii) examining the prospective value of the indicators for researchers who changed their field orientation over time; (viii) examining the indicators of researchers receiving major awards such as the Nobel Prize, the Fields Medal, and medals of renowned academies and institutions; (ix) studies on how bibliometrics affect the scientific behavior of researchers; and (x) establishment of standards of good practice in the use of bibliometrics for analyzing individual research performance.

FIXING THE FLAWS

Assessing research performance is important for recognizing productivity, innovation, and novelty, and it plays a major role in academic appointment and promotion. However, the means of assessment—namely, bibliometrics—are often flawed. Bibliometrics have enormous potential to assist the qualitative evaluation of individual researchers; however, no bibliometric indicator alone (nor even a set of them) allows an acceptable and well-balanced evaluation of the activity of a researcher. The use of bibliometrics should continue to evolve through in-depth discussion of what the metrics mean and how they can best be interpreted by experts in the given scientific field.

Acknowledgments

The author thanks K. Marazova (Institut de la Vision) for her major help in preparing this commentary and N. Haeffner-Cavaillon (INSERM) for critical reading and insights.



Quantity Surveying

Quantity Surveying Research Papers/Topics

Biochemical Analysis of Adulterants in Milk

Work on determining milk quality and identifying adulterants added to milk that lower its quality while increasing its quantity.

THE COMPARATIVE ANALYSIS OF RECREATIONAL FACILITIES FOR HOUSING ESTATES (A CASE STUDY OF ADEWOLE HOUSING ESTATE, ILORIN KWARA STATE)

TABLE OF CONTENTS: Title page; Certification; Dedication; Acknowledgment; Abstract; Table of contents; List of Maps; List of Tables; List of Figures; List of Plates. CHAPTER ONE (INTRODUCTION): 1.1 Introduction; 1.2 Statement of problems; 1.3 Aims of the project; 1.4 Objective of the study; 1.5 Justification; 1.6 Scope and Limitation; 1.7 Project methodology; 1.8 Study area; 1.8.1 Historical background of Ilorin; 1.8.2 Historical background of Adewole Housing Estate; 1.8.2.1 Location...

AN EVALUATION OF BUDGETING AND COST CONTROL OF A BUILDING CONSTRUCTION INDUSTRY

CHAPTER ONE, 1.0 INTRODUCTION, 1.01 Background of the study: According to Luke chapter 14, verses 28–30, "For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him." The above quotation from the Bible summarizes the problem of abandoned projects after committing huge financial resource...

Pattern of Residential Mobility in Lokoja, Kogi State, Nigeria

The study analyzed residential mobility in Lokoja with a view to evolving a predictable pattern of the factors influencing residential mobility in Lokoja, Nigeria. A survey research method was adopted, and both qualitative and quantitative data collection methods were used. Qualitative data were collected through oral interviews and nonparticipant observation, while quantitative data were collected using an unstructured questionnaire and secondary sources such as books and journal articles. Study ...

A Proposal on Assessment of Risk on Road Projects in Nigeria Construction Industry (A Case Study of Lagos Metropolis)

TABLE OF CONTENTS: Title page; Certification; Dedication; Acknowledgement...

Estimating and Budgeting a Medium Scale Building Project: A Case Study of a Proposed Department of Quantity Surveying at Kwara State.

ABSTRACT: This project focuses on the preliminary activities that lead to a draft bill of quantities for a proposed multi-purpose studio hall in Ilorin. It centres on the preparation of a bill of quantities for the studio hall and the associated taking-off. It also aims to prepare the construction documentation and to clarify the construction arrangement between the client and the construction company. TABLE OF CONTENTS: Title page...

Taking off Processes and Preparation of an Un-Priced Bill of Quantities (A Case Study of the Proposed Four (4) Bedroom Fully Detached Duplex at Plot 3003A and 3003B, Sabonlugbe East Extension)

ABSTRACT: Classwork alone on the measurement of building work is not enough to equip students with the required knowledge and skills; it is against this background that project work was introduced, to expose students to practical and theoretical methods of taking off building work and thereby increase their skill and knowledge beyond classwork. Chapter one of this project demon...

Effect of Quality Culture on Building Construction Project in Nigeria: A Case Study Of Kwara State

Research aim and objective; Research question; Research hypothesis; Delimitation and scope of the study; Definition of terms. Chapter Two (Literature Review): The construction industry; Culture; Quality; Key performance indicators (KPI); Quality culture; Element of quality culture; Factors affecting the maintenance of quality culture. Chapter Three (Research Methodology): Introduction; Research design; Area of study; Population of the study...

Assessment of the Usage of ICT Tool in the Development of Construction Industry in Nigeria (A Case Study of Practicing Firms in Lagos State)

Abstract The construction industry is so hierarchical and fragmented in nature that some of the major participants do not consider themselves to be part of the same industry (Hindle, 2000).  This requires close coordination among a large number of specialized but interdependent organizations and individuals to achieve the cost, time and quality goals of a construction project (Toole, 2003).  Hence, according to Maqsood, (2004), a major construction process demands a heavy exchange of data a...

The Impact Of Project Management Services On Building Construction Project (A Case Study Of Ibadan Metropolis)

ABSTRACT: The need for the Nigerian construction industry to move away from the traditional forms of project procurement and embrace project management services cannot be overemphasized. This is a result of the importance of capital projects to the development of a young nation. This study investigated the impact of project management services on building construction. The study is a survey which utilizes a cross-sectional design. In all, 46 survey questionnaires were administered to ...

Assessment Of Environmental Impact Of Construction Works (A Case Study Of Road Of Oke-Onigbin Via Isin To Oba Isin L.G.A. In Kwara State)

ABSTRACT: This project analyses the environmental impact assessment of construction works, taking as a case study the road construction from Oke-Onigbin via Isin to Oba in Isin LGA of Kwara State. Chapter one introduces the topic and states the aim and objectives; chapter two reviews related literature, the various effects of construction work, and methods for mitigating those effects; the methodology chapter covers the various means of collecting data, through questionnaires and oral interviews, and analysing ...

Management Of Mass Housing In Nigeria (A Case Study Of Royal Valley Housing Estate)

Abstract: Management is essential in all establishments, whether private or public. It is also very useful in all sectors of the economy, such as health, agriculture, water resources, power, works and communication, as well as the aviation ministries. Unfortunately, nearly all our economic sectors have failed as a result of a lack of proper management, despite the large sums of money spent on them. However, this also affects the construction industry wher...

Investigation Of Trespassing And Irregularities In Physical Planning Using Remote Sensing And Geographic Information System (Case Study: Nyala City)

Abstract: Urban planning needs to be regulated by laws, and in Sudan there are adequate laws, but the problem lies in the inspection of building construction as stated in the law, which is based on direct fieldwork that requires substantial financial and human resources. Owing to the lack of these resources, urban planning problems, such as trespassing on public areas and roads, have spread. Digital techniques of remote sensing and geographical informati...

Direct Labour System Of Project Execution In Nigeria

ABSTRACT: This paper reviews the activities of the direct labour approach, with particular reference to my place of work, Osun State College of Technology, Esa-Oke. It starts by highlighting the origin of direct labour, which had been in light operation from the 1930s to the 1960s but was popularised and given full legal backing by the Shehu Shagari administration in the late 1970s and early 1980s, when the escalating cost of the contracting system of project execution became a source o...

A Comparison of Depth Interpolation by Using GIS & Neural Networks

Depth measurement is considered a primary goal in hydrographic surveying; it depends on different techniques and instruments and is among the most costly procedures. However, some mathematical models are used for condensing depths at a relatively low cost. Artificial neural networks have appeared in many applications used to solve real-world forecasting, classification and function approximation problems. They are fast, intelligent and easy to use; Neuro Intelligence supports all stages of neural net...

Quantity Surveying as a course deals with the study of the management and control of the financial aspects of the construction process. The course prepares students for middle and top management employment in the construction, property development and allied industries, as well as financial institutions and government departments. Afribary provides a list of academic papers and project topics in Quantity Surveying. You can browse Quantity Surveying project topics, thesis topics, dissertation topics, seminar topics, essays/papers, textbooks and lesson notes in the Quantity Surveying field.

Popular Papers/Topics

  • Assessment of Construction Management Techniques in the Nigeria Construction Industry
  • A Sample of Quantity Surveying Taking-Off
  • Cost Control Techniques Used on Building Construction Sites in Uganda
  • An Appraisal of the Quantity Surveyor’s Cost Control Activities in the Nigeria Construction Industry
  • Management of Cost Overrun in Selected Building Construction Projects in Ilorin
  • Factors Affecting Estimating Accuracy in Building Construction
  • The Challenges of Digital Innovation in the Quantity Surveying Profession
  • Comparative Study of the Relevancy of Higher National Diploma and Bachelor of Science Training to Core Quantity Surveying Practices
  • Potentialities of Whistleblowing in Dealing With Unethical Practices in the Nigeria Construction Industries
  • Impact of Cash Flow and Resource Control in Construction Projects and Delivery
  • Factors Influencing the Cost Planning of Public Building Projects
  • An Assessment of Building Collapse in Nigeria (Edo State as a Study Area)
  • An Investigation on the Utilization of QS-Based Softwares
  • Assessment of Risk Management Practices Amongst Quantity Surveyors in the Nigerian Construction Industry
  • The Impact of Economic Recession on Public Project Delivery in Ekiti State, Nigeria



The Neutral Interest Rate: Past, Present and Future

The decline in safe real interest rates over the past three decades has reignited discussions on the neutral real interest rate, known as R*. We review insights from the literature on R*, addressing its determinants and estimation methods, as well as the factors influencing its decline and its future trajectory. While there is a consensus that R* has declined, alternative estimation approaches can yield substantially different point estimates over time. The estimated neutral range is large and uncertain, especially in real time and when comparing estimates based on macroeconomic data with those inferred from financial data. Evidence suggests that factors such as increased longevity, declining fertility rates and scarcity of safe assets, as well as income inequality, contribute to lowering R*. Existing evidence also suggests the COVID-19 pandemic did not substantially impact R*. Going forward, there is an upside risk that some pre-existing trends might weaken or reverse.

DOI: https://doi.org/10.34989/sdp-2024-3



Computer Science > Artificial Intelligence

Title: A Survey on the Memory Mechanism of Large Language Model Based Agents

Abstract: Large language model (LLM) based agents have recently attracted much attention from the research and industry communities. Compared with original LLMs, LLM-based agents are distinguished by their self-evolving capability, which is the basis for solving real-world problems that require long-term and complex agent-environment interactions. The key component supporting agent-environment interactions is the memory of the agents. While previous studies have proposed many promising memory mechanisms, these are scattered across different papers, and there is no systematic review that summarizes and compares these works from a holistic perspective or abstracts common, effective design patterns to inspire future studies. To bridge this gap, in this paper we propose a comprehensive survey on the memory mechanism of LLM-based agents. Specifically, we first discuss ''what is'' and ''why do we need'' the memory in LLM-based agents. Then, we systematically review previous studies on how to design and evaluate the memory module. In addition, we present many agent applications where the memory module plays an important role. Finally, we analyze the limitations of existing work and show important future directions. To keep up with the latest advances in this field, we create a repository at \url{ this https URL }.



Partisan divides over K-12 education in 8 charts

Proponents and opponents of teaching critical race theory attend a school board meeting in Yorba Linda, California, in November 2021. (Robert Gauthier/Los Angeles Times via Getty Images)

K-12 education is shaping up to be a key issue in the 2024 election cycle. Several prominent Republican leaders, including GOP presidential candidates, have sought to limit discussion of gender identity and race in schools, while the Biden administration has called for expanded protections for transgender students. The coronavirus pandemic also brought out partisan divides on many issues related to K-12 schools.

Today, the public is sharply divided along partisan lines on topics ranging from what should be taught in schools to how much influence parents should have over the curriculum. Here are eight charts that highlight partisan differences over K-12 education, based on recent surveys by Pew Research Center and external data.

Pew Research Center conducted this analysis to provide a snapshot of partisan divides in K-12 education in the run-up to the 2024 election. The analysis is based on data from various Center surveys and analyses conducted from 2021 to 2023, as well as survey data from Education Next, a research journal about education policy. Links to the methodology and questions for each survey or analysis can be found in the text of this analysis.

Most Democrats say K-12 schools are having a positive effect on the country, but a majority of Republicans say schools are having a negative effect, according to a Pew Research Center survey from October 2022. About seven-in-ten Democrats and Democratic-leaning independents (72%) said K-12 public schools were having a positive effect on the way things were going in the United States. About six-in-ten Republicans and GOP leaners (61%) said K-12 schools were having a negative effect.

A bar chart that shows a majority of Republicans said K-12 schools were having a negative effect on the U.S. in 2022.

About six-in-ten Democrats (62%) have a favorable opinion of the U.S. Department of Education, while a similar share of Republicans (65%) see it negatively, according to a March 2023 survey by the Center. Democrats and Republicans were more divided over the Department of Education than most of the other 15 federal departments and agencies the Center asked about.

A bar chart that shows wide partisan differences in views of most federal agencies, including the Department of Education.

In May 2023, after the survey was conducted, Republican lawmakers scrutinized the Department of Education’s priorities during a House Committee on Education and the Workforce hearing. The lawmakers pressed U.S. Secretary of Education Miguel Cardona on topics including transgender students’ participation in sports and how race-related concepts are taught in schools, while Democratic lawmakers focused on school shootings.

Partisan opinions of K-12 principals have become more divided. In a December 2021 Center survey, about three-quarters of Democrats (76%) expressed a great deal or fair amount of confidence in K-12 principals to act in the best interests of the public. A much smaller share of Republicans (52%) said the same. And nearly half of Republicans (47%) had not too much or no confidence at all in principals, compared with about a quarter of Democrats (24%).

A line chart showing that confidence in K-12 principals in 2021 was lower than before the pandemic — especially among Republicans.

This divide grew between April 2020 and December 2021. While confidence in K-12 principals declined significantly among people in both parties during that span, it fell by 27 percentage points among Republicans, compared with an 11-point decline among Democrats.

Democrats are much more likely than Republicans to say teachers’ unions are having a positive effect on schools. In a May 2022 survey by Education Next, 60% of Democrats said this, compared with 22% of Republicans. Meanwhile, 53% of Republicans and 17% of Democrats said that teachers’ unions were having a negative effect on schools. (In this survey, too, Democrats and Republicans include independents who lean toward each party.)

A line chart showing that, from 2013 to 2022, Republicans’ and Democrats’ views of teachers’ unions grew further apart.

The 38-point difference between Democrats and Republicans on this question was the widest since Education Next first asked it in 2013. However, the gap has exceeded 30 points in four of the last five years for which data is available.

Republican and Democratic parents differ over how much influence they think governments, school boards and others should have on what K-12 schools teach. About half of Republican parents of K-12 students (52%) said in a fall 2022 Center survey that the federal government has too much influence on what their local public schools are teaching, compared with two-in-ten Democratic parents. Republican K-12 parents were also significantly more likely than their Democratic counterparts to say their state government (41% vs. 28%) and their local school board (30% vs. 17%) have too much influence.

A bar chart showing that Republican and Democratic parents have different views of the influence government, school boards, parents and teachers have on what schools teach.

On the other hand, more than four-in-ten Republican parents (44%) said parents themselves don’t have enough influence on what their local K-12 schools teach, compared with roughly a quarter of Democratic parents (23%). A larger share of Democratic parents – about a third (35%) – said teachers don’t have enough influence on what their local schools teach, compared with a quarter of Republican parents who held this view.

Republican and Democratic parents don’t agree on what their children should learn in school about certain topics. Take slavery, for example: While about nine-in-ten parents of K-12 students overall agreed in the fall 2022 survey that their children should learn about it in school, they differed by party over the specifics. About two-thirds of Republican K-12 parents said they would prefer that their children learn that slavery is part of American history but does not affect the position of Black people in American society today. On the other hand, 70% of Democratic parents said they would prefer for their children to learn that the legacy of slavery still affects the position of Black people in American society today.

A bar chart showing that, in 2022, Republican and Democratic parents had different views of what their children should learn about certain topics in school.

Parents are also divided along partisan lines on the topics of gender identity, sex education and America’s position relative to other countries. Notably, 46% of Republican K-12 parents said their children should not learn about gender identity at all in school, compared with 28% of Democratic parents. Those shares were much larger than the shares of Republican and Democratic parents who said that their children should not learn about the other two topics in school.

Many Republican parents see a place for religion in public schools, whereas a majority of Democratic parents do not. About six-in-ten Republican parents of K-12 students (59%) said in the same survey that public school teachers should be allowed to lead students in Christian prayers, including 29% who said this should be the case even if prayers from other religions are not offered. In contrast, 63% of Democratic parents said that public school teachers should not be allowed to lead students in any type of prayers.

Bar charts showing that nearly six-in-ten Republican parents, but fewer Democratic parents, said in 2022 that public school teachers should be allowed to lead students in prayer.

In June 2022, before the Center conducted the survey, the Supreme Court ruled in favor of a football coach at a public high school who had prayed with players at midfield after games. More recently, Texas lawmakers introduced several bills in the 2023 legislative session that would expand the role of religion in K-12 public schools in the state. Those proposals included a bill that would require the Ten Commandments to be displayed in every classroom, a bill that would allow schools to replace guidance counselors with chaplains, and a bill that would allow districts to mandate time during the school day for staff and students to pray and study religious materials.

Mentions of diversity, social-emotional learning and related topics in school mission statements are more common in Democratic areas than in Republican areas. K-12 mission statements from public schools in areas where the majority of residents voted Democratic in the 2020 general election are at least twice as likely as those in Republican-voting areas to include the words “diversity,” “equity” or “inclusion,” according to an April 2023 Pew Research Center analysis.

A dot plot showing that public school district mission statements in Democratic-voting areas mention certain terms more often than those in areas that voted Republican in 2020.

Also, about a third of mission statements in Democratic-voting areas (34%) use the word “social,” compared with a quarter of those in Republican-voting areas, and a similar gap exists for the word “emotional.” Like diversity, equity and inclusion, social-emotional learning is a contentious issue between Democrats and Republicans, even though most K-12 parents think it’s important for their children’s schools to teach these skills. Supporters argue that social-emotional learning helps address mental health needs and student well-being, but some critics consider it emotional manipulation and want it banned.

In contrast, there are broad similarities in school mission statements outside of these hot-button topics. Similar shares of mission statements in Democratic and Republican areas mention students’ future readiness, parent and community involvement, and providing a safe and healthy educational environment for students.
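
For readers curious about what this kind of content analysis involves mechanically, below is a minimal sketch, assuming a list of (mission statement, 2020 majority vote) records. The statements, vote labels and helper function are invented placeholders for illustration, not Pew’s data or code; it simply computes, for each term, the share of mission statements in Democratic- and Republican-voting areas that mention it.

```python
# A minimal sketch (not Pew's actual pipeline) of the keyword analysis
# described above: for each term, compute the share of district mission
# statements mentioning it, split by how the district's area voted in 2020.
# All records below are invented placeholders.

from collections import Counter

TERMS = ["diversity", "equity", "inclusion", "social", "emotional"]

# Hypothetical records: (mission statement text, 2020 majority vote).
statements = [
    ("We value diversity, equity and inclusion in every classroom.", "D"),
    ("Preparing every student for a safe and healthy future.", "R"),
    ("Fostering social and emotional growth alongside academics.", "D"),
    ("Partnering with parents and the community for student success.", "R"),
]

def term_shares(records, lean):
    """Share of mission statements from `lean`-voting areas mentioning each term."""
    docs = [text.lower() for text, vote in records if vote == lean]
    if not docs:
        return {}
    # Simple substring matching; a real analysis would tokenize so that,
    # e.g., "socially" does not count as a mention of "social".
    counts = Counter({t: sum(t in doc for doc in docs) for t in TERMS})
    return {t: counts[t] / len(docs) for t in TERMS}

for lean in ("D", "R"):
    print(lean, term_shares(statements, lean))
```

Comparing the per-term shares between the two groups of districts is what yields statements like “at least twice as likely” above.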


Jenn Hatfield is a writer/editor at Pew Research Center.

