Equidistant Likert as weighted sum of Response Categories

Authors

  • Satyendra Nath Chakrabartty, Indian Statistical Institute, New Delhi (India)

DOI:

https://doi.org/10.17981/cultedusoc.14.1.2023.04

Keywords:

Likert items, Weighted sum, Monotonic, Equidistant, Normal distribution

Abstract

Introduction: Adding the scores of Likert items may not be meaningful because the equidistant property is not satisfied. Consequently, computation of the mean, standard deviation, correlation, regression and Cronbach's alpha (which uses the sum of item variances and the test variance) can be problematic. Objective: To avoid the limitations of summative Likert scores by transforming raw item scores into continuous, monotonic scores that satisfy the equidistant property, and to evaluate the proposed methods against the desired properties, including a test of normality of the transformed test scores. Methodology: This methodological paper presents three methods for transforming discrete, ordinal item scores into continuous scores by a weighted sum, where the weights reflect the frequencies of the different response categories of the different items, thereby generating continuous data that satisfy the equidistant and monotonic properties. Results and discussion: All the proposed methods avoid the major limitations of summative Likert scores and generate continuous data satisfying the equidistant and monotonic properties. The method based on the frequencies of the response categories of the different items (Method 3) passed the normality test, unlike Method 1 and Method 2; the normally distributed transformed scores of Method 3 facilitate analysis under a parametric set-up. Conclusions: The proposed methods have high correlations with summative Likert scores, retain a similar factor structure, and offer a reconciliation of the debate on the ordinal vs. interval nature of data generated by a Likert questionnaire. Considering these theoretical advantages, Method 3 is recommended for scoring Likert items, primarily because the normal distribution of individual scores makes the usual operations meaningful and permits parametric statistical analysis.
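
Illustrative sketch (not part of the published article): the exact weighting formulas of Methods 1-3 appear in the full paper, not in this abstract. The Python fragment below only shows, under assumed formulas, the general idea of frequency-based weights for response categories yielding continuous, monotonic item scores whose weighted sum replaces the plain summative total. The function name frequency_based_scores and the midpoint-of-cumulative-proportion weighting are hypothetical choices for illustration.

    # Sketch only: hypothetical frequency-based weights per item and response
    # category, not the paper's actual formulas.
    import numpy as np

    def frequency_based_scores(X, n_categories=5):
        """X: (respondents x items) array of raw Likert responses coded 1..n_categories.
        Returns continuous, monotonic transformed item scores of the same shape."""
        X = np.asarray(X)
        scores = np.empty_like(X, dtype=float)
        for j in range(X.shape[1]):
            # Observed frequency of each response category for item j
            freq = np.bincount(X[:, j], minlength=n_categories + 1)[1:]
            prop = freq / freq.sum()
            # Assumed monotonic weights: cumulative proportion up to the
            # midpoint of each category (non-decreasing by construction)
            cum = np.cumsum(prop)
            weights = cum - prop / 2.0
            scores[:, j] = weights[X[:, j] - 1]
        return scores

    # Usage: total score per respondent as the sum of transformed item scores
    rng = np.random.default_rng(0)
    raw = rng.integers(1, 6, size=(200, 10))   # 200 respondents, 10 five-point items
    total = frequency_based_scores(raw).sum(axis=1)

Under this assumed scheme, the gap between adjacent category scores grows with how frequently those categories are endorsed, and the transformed scores remain strictly monotonic in the raw categories whenever every category is observed.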

Author Biography

Satyendra Nath Chakrabartty, Indian Statistical Institute, New Delhi (India)

Master's degree from the Indian Statistical Institute. Postgraduate courses at the Indian Statistical Institute, the University of Calcutta and Galgotias Business School (India). Has over 65 publications to his credit. After serving Kolkata Port Trust for 25 years in various managerial positions, he joined Mumbai Port Trust as Director (Planning & Research) and subsequently took over as Director of the Indian Institute of Port Management (India). He retired from the position of Director, Kolkata Campus, Indian Maritime University (India). His previous assignment was as Consultant, Indian Ports Association (India). ORCID: https://orcid.org/0000-0002-7687-5044

Published

2022-11-29

How to Cite

Chakrabartty, S. N. (2022). Equidistant Likert as weighted sum of Response Categories. CULTURA EDUCACIÓN Y SOCIEDAD, 14(1), 75–92. https://doi.org/10.17981/cultedusoc.14.1.2023.04