Measuring Customers' Overall Satisfaction

Dooyoung Shin, Department of Management, College of Business, Minnesota State University, Mankato, Mankato, MN, USA
Kevin M. Elliott, Department of Marketing & International Business, College of Business, Minnesota State University, Mankato, Mankato, MN, USA

To cite this article: Shin, Dooyoung and Elliott, Kevin M. (2001), "Measuring Customers' Overall Satisfaction," Services Marketing Quarterly, 22 (1), 3-19. DOI: 10.1300/J396v22n01_02. URL: http://dx.doi.org/10.1300/J396v22n01_02
ABSTRACT. Recognizing the drawbacks associated with the traditional approach of measuring customers' overall satisfaction, which simply relies on a single-item measurement of overall satisfaction, an alternative approach is presented. The proposed approach utilizes: all product/service attributes, each customer's varying degree of satisfaction with the attributes, and the relative importance of each attribute obtained and analyzed from all customers who participated in the survey. Each customer's overall satisfaction is then determined by a weighted average of the gap between a customer's expectation of performance (importance rating) and actual experience (performance rating) for each attribute, and the relative importance of each attribute as perceived by the total customer group. A comparison between a single-item approach and a multi-attributes approach, along with an illustrated example, is also presented using customer satisfaction data from the airline industry.
INTRODUCTION

In the face of an increasingly competitive and dynamic business environment, many companies have begun to shift their focus from manufacturer-based quality to customer-driven quality in order to address ever-changing customer needs.
Focusing on customers not only enables companies to re-engineer their organizations to adapt to customer needs, but also allows them to develop a system with which they can continuously monitor how effectively they meet or exceed those needs. It is imperative for companies to identify what is important to customers, inform customers that they intend to deliver what is important to them, and then deliver what they promise (Kurtz and Clow 1993). Aligning organizations with key customer values will undoubtedly increase customer satisfaction with the products and services that companies offer, and improve business performance by creating customer loyalty. In fact, an important challenge facing today's managers appears to be determining how to maximize customer satisfaction to remain competitive, yet at the same time keep costs down enough to make a reasonable profit. Recognizing the importance of satisfying customers, many companies have also exhibited their commitment to customer satisfaction through mission statements, goals/objectives, business strategies, and promotional themes. Peterson and Wilson (1992) argue that virtually all company activities, programs, and policies should be evaluated in terms of their contribution to satisfying customers.

While customer satisfaction is essential for a company's success, it is a subtle yet complex phenomenon. Customer satisfaction with a product or service refers to the favorability of the individual's subjective evaluation of the various outcomes and experiences associated with using or consuming it (Oliver and DeSarbo 1989). Customer satisfaction is shaped continually by repeated experiences with a company's products and services over the product lifetime. Most researchers agree that customer satisfaction involves evaluation, and that evaluation is the result of a comparison process (Zeithaml, Berry and Parasuraman 1993).

Understanding the consequences of customer satisfaction has been a concern of researchers and practitioners for many years. The concern is derived from the generally accepted philosophy that for a business to be successful and profitable it must satisfy customers. Bitner (1990) argues that customer loyalty and retention depend heavily upon customer satisfaction. Similarly, Patterson, Johnson, and Spreng (1997) demonstrate empirically a very strong link between customer satisfaction and repurchase intentions.
The relative costs of customer retention and customer acquisition have enhanced the desire to build and maintain long-term relationships with customers. This is especially true in the service sector, where customer acquisition costs are generally higher than customer retention costs (Ennew, Binks, and Chiplin 1994). For many firms, customer retention is an avenue through which a competitive advantage can be gained. Successful organizations have come to realize that it is better to invest now (retain customers) than to invest later (attract new customers).

Through survey questionnaires, companies constantly monitor customer perceptions of product/service quality, satisfaction level, brand image, and other issues of interest. The questions typically identify key parameters which not only measure customer loyalty, or repurchase intentions, but also assess what is unsatisfactory and to what extent customers are satisfied overall. This information is then used internally to fine-tune future products/services and manage customer value more effectively.

The preponderance of theoretical and empirical research in the literature has focused on understanding and measuring customer satisfaction via customer satisfaction surveys. Typically, a customer's overall satisfaction is assessed on the basis of a simple, single-item rating scale. Even though the importance of customer satisfaction has been widely recognized, most customer satisfaction measurements are designed to simply assess the global or net satisfaction with a product or service. Also, despite the apparent complexity of the customer satisfaction construct, survey designers have often used a single-item rating scale of four to seven points between the extremes of very dissatisfied and very satisfied, or poor and excellent. This approach, however, fails to recognize the customer's varying degree of satisfaction with each service or product attribute.

While customer satisfaction is considered the key to securing customer retention and generating superior long-term financial performance, there are critical issues that should be addressed before the relationship between customer satisfaction and customer loyalty is examined. For example, how accurately customer satisfaction can be measured, and how much difference exists, if any, between the various levels of customer satisfaction and customer loyalty, are questions that deserve scrutiny from both practitioners and academicians.
Jones and Sasser (1995) argue that merely satisfying customers who have the freedom to make choices is not enough to keep them loyal, and report that Xerox's totally satisfied customers were six times more likely to repurchase Xerox products over the next 18 months than its satisfied customers. Based on the AT&T Customer Satisfaction Survey, Gale and Wood (1994) also report a significant difference in willingness to repurchase between those who rated overall satisfaction as Excellent and those who rated it as Good. While these two examples clearly indicate that there is a strong need to increase the level of customer satisfaction in order to increase customer loyalty, they also pose a very important question: how can a customer's overall satisfaction be measured accurately so that its relationship with customer loyalty may be further explored?

The purpose of this article is to examine the drawbacks of the current single-item rating scale and to present an alternative approach to measuring overall customer satisfaction. A multiple-attributes weighted gap score analysis is proposed as an alternative method for assessing customers' overall satisfaction that should have increased diagnostic value to both academicians and practitioners. An illustrated example comparing the single-item approach and the multi-attributes approach is also presented using customer satisfaction data from the airline industry.

MEASURING CUSTOMERS' OVERALL SATISFACTION

Customer satisfaction evaluation is typically based on a cognitive process in which customers compare their prior expectations of product/service outcomes (i.e., product/service performance and other important attributes) to those actually obtained from the product/service (Zeithaml, Berry, and Parasuraman 1993). Customer satisfaction results when actual performance meets or exceeds the consumer's expectations. Likewise, if expectations exceed actual performance, dissatisfaction will result.

A Single-Item Rating Scale: Traditional Approach
Traditionally, customers' overall satisfaction with a product or service has been measured either by a simple yes or no question, or by a rating of the degree of overall satisfaction (e.g., from completely dissatisfied to completely satisfied, or poor to excellent). In a customer survey, customers are often asked to answer a question, which appears at the end of the questionnaire, such as "How would you rate your level of overall satisfaction with our product/service?" The primary weakness of this approach is that it fails to recognize the numerous quality attributes of each product/service and customers' varying degrees of satisfaction with each attribute. Even though this type of question is simple to answer and analyze, the information generated may not accurately reflect which quality attributes of the product/service customers consider critically important in achieving satisfaction, how they perceive the performance of each attribute, and how a customer's overall satisfaction is actually shaped in light of the various product/service attributes. Customers may not be able to recall the numerous items they have evaluated and fully reflect on them in their overall satisfaction rating. Customers may simply rely upon only a few attributes that they consider important or that greatly impacted their satisfaction.

A Multi-Attributes Rating Scale: An Alternative Approach

Recognizing the drawbacks associated with the traditional approach of measuring customers' overall satisfaction, which simply relies on a single-item measurement of overall satisfaction, an alternative approach is presented. The proposed approach utilizes: all product/service attributes, each customer's varying degree of satisfaction with the attributes, and the relative importance of each attribute obtained and analyzed from all customers who participated in the survey. Each customer's overall satisfaction is determined by a weighted average of the gap between a customer's expectation of performance (importance rating) and actual experience (performance rating) for each attribute, and the relative importance of each attribute as perceived by the total customer group. Each customer can then be classified as very satisfied to very dissatisfied, or on any other scale preferred by managers, on the basis of a computed satisfaction score rather than a customer's own self-reported score. Vavra (1997) argues that many practitioners prefer a composite satisfaction score because it is more statistically reliable than a single measure. Moreover, a composite score becomes a better choice when companies compare overall satisfaction over many different time periods.
Presented below is a detailed description of the proposed approach. The following notation is used to illustrate the measurement of overall customer satisfaction:

Iij = importance rating of the i-th attribute by the j-th customer (expectations score).
Aij = actual quality rating of the i-th attribute by the j-th customer (performance perception score).
Gij = Iij - Aij, the rating gap for the i-th attribute by the j-th customer.
Wi = weight (relative importance) of the i-th attribute, determined on the basis of the customer survey; Wi reflects the overall viewpoint of all customers surveyed on the i-th attribute.
OSj = sum over i of Wi x Gij, the overall satisfaction score of the j-th customer.
Iij reflects each individual customer's personal expectations and preferences with regard to airline service attributes. Importance ratings should vary across customers, as each brings with them different past experiences and perceptions of what airline service should be. Aij represents the degree of satisfaction with each airline service attribute, and is based on the actual quality received and experienced by customers. Aij can be measured in many different situations (e.g., immediately after service, after a certain period, etc.), and the findings will likely differ depending on the particular situation. This study used a post-purchase/experience assessment of service attribute satisfaction. Gij indicates the gap between the importance rating and the actual performance rating assessed by customer j for service attribute i. If the amount of actual service performance received (Aij) meets or exceeds expected service performance (Iij), then customer satisfaction results with respect to attribute i. If the amount of actual service provided is less than expected service, the result is customer dissatisfaction with regard to attribute i. Hence, a positive Gij indicates customer j's dissatisfaction with attribute i, while a negative Gij suggests that attribute i exceeded the customer's expectations.
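To make these definitions concrete, the short Python sketch below computes Gij, Wi, and OSj for a tiny, hypothetical set of survey responses. The ratings, the number of attributes, and the data layout are illustrative assumptions only, not data from this study, and the classification thresholds follow the four-group scheme described in the next paragraph.

# Sketch of the multi-attribute weighted gap score computation described above.
# All ratings below are hypothetical; the method assumes importance (I) and
# actual performance (A) ratings on the same scale for each attribute.

# importance[j][i] and actual[j][i]: rating of attribute i by customer j
importance = [
    [8, 9, 7, 9],   # customer 0
    [7, 8, 9, 8],   # customer 1
]
actual = [
    [6, 9, 7, 7],
    [9, 8, 8, 8],
]

n_customers = len(importance)
n_attributes = len(importance[0])

# Wi: sum of all customers' importance ratings for attribute i,
# divided by the grand total of all importance ratings.
grand_total = sum(sum(row) for row in importance)
W = [sum(importance[j][i] for j in range(n_customers)) / grand_total
     for i in range(n_attributes)]

def overall_satisfaction(j):
    """OSj = sum_i Wi * (Iij - Aij); negative values indicate satisfaction."""
    return sum(W[i] * (importance[j][i] - actual[j][i]) for i in range(n_attributes))

def classify(os_score):
    """Four-group classification described in the text (boundaries at -1, 0, 1)."""
    if os_score <= -1:
        return "Excellent"
    if os_score <= 0:
        return "Good"
    if os_score < 1:
        return "Fair"
    return "Poor"

for j in range(n_customers):
    os_j = overall_satisfaction(j)
    print(f"Customer {j}: OS = {os_j:+.3f} ({classify(os_j)})")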
Wi is computed by recognizing diversified customer perceptions and expectations (i.e., the Iij's). Its main purpose is to determine the relative importance of the quality attributes that influence customer satisfaction. Wi is obtained by summing all customers' ratings for service attribute i and then dividing that sum by the total points across all attributes. Each Wi represents customers' overall perceived importance (weight) of attribute i relative to the other attributes, and it reflects the overall viewpoint of all customers surveyed. OSj indicates customer j's overall satisfaction score, determined objectively on the basis of the data (information) provided by the customer. As discussed before, OSj does not rely on the response to a single-item question of the kind typically used to measure customers' overall satisfaction. Each OSj is computed by summing the products of the gap (Gij) and the relative importance of each attribute (Wi). In the survey of customer satisfaction in the airline industry that is discussed in the following section, OSj is computed using a composite measure of weighted gap scores for the 19 expectations scores and the 19 performance perception scores across each respondent. A negative OSj indicates favorable overall satisfaction toward airline service, while a positive OSj suggests that customer j is dissatisfied with airline service overall. Based on this analysis, one could classify customers into various groups according to their overall satisfaction score (OSj). If, for example, four groups (levels) of customer satisfaction are preferred, customers may be classified as follows: Group 1 (Excellent) = customers whose OSj is less than or equal to -1; Group 2 (Good) = customers whose OSj is between -1 and 0; Group 3 (Fair) = customers whose OSj is between 0 and 1; and Group 4 (Poor) = customers whose OSj is greater than or equal to 1. Typically, creating smaller intervals of OSj would generate more detailed information about customer satisfaction.

Table 1 illustrates a simple example of how OSj is computed for Customer A and Customer B. Customer A's level of overall satisfaction is determined by computing a composite score of weighted importance scores and gap scores (importance scores minus actual scores). As shown in Table 1, the weighted importance score for attribute 1 (modern equipment) was .046. This value is then multiplied by the gap score (importance score of 8 minus actual score of 6 = 2). This process is then repeated for the other 18 attributes, with an overall satisfaction score being computed for Customer A by summing the 19 individual attribute scores. Customer A's overall satisfaction score of 1.09 indicates that his/her perceived performance scores (actual scores) do not meet or exceed expectations (importance scores) regarding airline service. If we follow the classification discussed above, Customer A's overall satisfaction level will be considered Poor; therefore, he belongs to Group 4.

Note: a. For simplicity, the above example assumes a customer base of two (Customer A & B).
b. W1 = (8 + 7) / [(8 + 9 + ... + 9 + 10) + (7 + 9 + ... + 8 + 9)] = 15/324 = .046
c. Overall satisfaction for Customer A would be: (.046)(2) + (.056)(1) + (.046)(0) + (.062)(2) + (.052)(0) + (.059)(3) + (.059)(3) + (.062)(2) + (.046)(1) + (.046)(-1) + (.052)(0) + (.049)(1) + (.049)(-1) + (.062)(2) + (.046)(1) + (.043)(0) + (.052)(0) + (.052)(1) + (.059)(2) = 1.09. Customer A's overall satisfaction level would be Poor (Group 4).
d. Overall satisfaction for Customer B would be: (.046)(-2) + (.056)(0) + (.046)(-2) + (.062)(1) + (.052)(0) + (.059)(2) + (.059)(0) + (.062)(1) + (.046)(-2) + (.046)(0) + (.052)(-1) + (.049)(-1) + (.049)(0) + (.062)(2) + (.046)(-1) + (.043)(-2) + (.052)(0) + (.052)(-1) + (.059)(0) = -.195. Customer B's overall satisfaction level would be Good (Group 2).
Customer B's overall satisfaction score of -.195 indicates that perceived performance exceeded expectations. Customer B's overall satisfaction level would be considered Good, and he belongs to Group 2.

This method of computing customer satisfaction via a multiple-attributes composite score is advantageous in that it may reflect customers' changing perceptions and expectations. Moreover, it can provide customer-driven standards and focus. For example, the importance weights of the attributes obtained in a survey conducted in one period may not be the same as those obtained in previous or future periods because of a rapidly changing business environment. Changing expectations and perceptions of customers may cause shifts in importance ratings. In addition, when companies improve the quality of a product/service, customers may recognize these improvements and change their perceptions, which will ultimately impact their overall satisfaction with the product or service. Companies can continuously update information about the changing perceptions and expectations of customers. Weighted importance ratings should also enable companies to identify key drivers of customer satisfaction and help them set priorities for improvement efforts. These priorities would help companies determine where to allocate limited resources effectively and how to concentrate efforts on the attributes considered important by customers. The alternative approach to assessing customer satisfaction proposed in this paper does consider weighted importance ratings of product/service attributes.

METHODOLOGY

In order to examine the practical value and resulting implications of measuring customers' overall satisfaction using a multiple-attributes weighted gap score approach, the airline industry was selected for analysis. Airline carriers today appear to emphasize the delivery of better service (e.g., on-time arrival, easy reservations, tour packages, etc.) in an effort to win new customers and to increase loyalty among existing customers. Given this apparent effort by airline carriers to improve the level of customer service, the airline industry was deemed an appropriate industry in which to investigate customer satisfaction.
Sample

Data were gathered from mail questionnaires concerning service quality of airline carriers. A list of 3,400 names of frequent flyers was obtained from a national mailing list company. A random sample of 2,450 names was then selected from the list and mailed the service quality questionnaire. The overall response to the mail survey was 18.5 percent, with 452 usable questionnaires being returned. No follow-ups or incentives for completion of the questionnaire were offered.
Questionnaire

In order to measure customer satisfaction on multiple attributes as well as overall satisfaction, a 39-item instrument was used. The measurement instrument captured most of the service quality items proposed by SERVPERF (Cronin and Taylor 1992) and SERVQUAL (Parasuraman et al. 1988), as well as complaints of the flying public as identified by Bolton and Chapman (1989) and Consumer Reports (July 1991). The items were Likert-type statements on a seven-point scale ranging from Strongly Disagree (1) to Strongly Agree (7). The first 19 items measured customer expectations of airline performance (importance ratings). The next 19 items measured customer perceptions of actual airline performance (actual ratings). The last item was a self-reported overall satisfaction score using the response cue "How would you rate your level of satisfaction overall with your primary airline carrier?" Responses to this question were formulated using a 7-point scale from Very Dissatisfied (1) to Very Satisfied (7).

Data Analysis

A three-step data analysis procedure was used for this study. First, each customer's overall satisfaction with airline service quality was computed using the proposed multiple-attributes approach. As discussed, this approach utilizes composite weighted gap scores for the 19 expectations (importance) scores and the 19 performance perception (actual) scores across each respondent.
Next, in order to observe the relationship between the two approaches, overall satisfaction scores were compared (single-item satisfaction vs. multiple-attributes weighted gap scores) for 18 respondents randomly selected from the sample. Differences between respondents' self-reported ratings on the single-item satisfaction scale and their corresponding computed multiple-attributes satisfaction scores were examined. A correlation analysis was also conducted to examine whether there is any statistically significant linear relationship between the two approaches. Last, stepwise regression analysis was used to predict the dependent variable of customers' overall satisfaction scores obtained on the basis of the proposed approach. The actual performance ratings of the 19 attributes were considered as the independent variables.

RESULTS

Single versus Multiple-Attributes Satisfaction Scores

A comparison of customer satisfaction scores for 18 randomly selected respondents is presented in Table 2. The self-reported satisfaction scores using the single-item 7-point Likert scale were compared with the computed satisfaction scores using multiple-attributes weighted gap scores. As shown in Table 2, Customer #60 seems to be satisfied with the airline service. His response of 6 (satisfied) is fairly consistent with the computed satisfaction score of -2.0555. Customer #367 seems to be very dissatisfied, as indicated by his response of 1 (Very Dissatisfied). His response is also very consistent with the computed satisfaction score of 4.6327. While, overall, there appears to be a moderate yet significant linear relationship between the single-item overall satisfaction scores and the multiple-attributes overall satisfaction scores (r = -0.5681, p < .000), the results indicate that answers to a single-item satisfaction scale with a response cue of "How would you rate your overall satisfaction level?" may not accurately reflect what respondents indicated earlier in the questionnaire regarding their satisfaction with individual service attributes. For example, Customer #21 indicated a 7 (very satisfied) on the single-item satisfaction scale. However, Customer #21's computed weighted gap satisfaction score is 1.0399. This positive value of 1.0399 seemingly indicates some level of overall dissatisfaction with airline service. Similarly, Customer #302 indicated a 4 (neutral) on the single-item satisfaction scale, yet the corresponding weighted gap satisfaction score is -1.4586. This negative satisfaction score indicates that actual airline performance exceeded this customer's expectations (importance scores), thus some level of overall satisfaction appears to exist. While the effectiveness of the single-item approach cannot be discounted on the basis of only a few examples, these cases demonstrate that inconsistencies in how customers rate their overall satisfaction do exist.
TABLE 2. Comparison of Single-Item (Self-Reported) versus Multiple-Attributes (Computed) Satisfaction Scores

Customer ID #    Single-Item Self-Reported Score (OSs)(a)    Multiple-Attributes Computed Score (OSm)(b)
  3              2                                            2.8303
 21              7                                            1.0399
 49              4                                            1.1414
 60              6                                           -2.0555
 68              1                                            0.2870
 73              4                                            3.1258
125              3                                            4.2703
143              7                                            0.4692
171              6                                            0.0034
204              7                                            0.5888
235              5                                            1.7901
247              3                                            5.8050
271              7                                            0.6385
302              4                                           -1.4586
336              6                                            1.3859
367              1                                            4.6327
407              3                                            2.0843
447              6                                            0.9397

Note: a. Obtained using a 7-point Likert scale: (1) Very Dissatisfied, (7) Very Satisfied. b. Obtained using the proposed approach. c. Mean: OSs = 4.993, OSm = 1.891, based on 452 respondents. d. Correlation between OSs and OSm: r = -.5681, p < .000.
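For readers who want to run the same kind of comparison on their own survey data, the Python sketch below computes the Pearson correlation between self-reported single-item scores and computed weighted gap scores. It is a minimal illustration with made-up numbers, not a reproduction of the values in Table 2.

from math import sqrt

# Hypothetical stand-ins for illustration only (not the study's data):
# self-reported single-item scores (1-7) and computed weighted gap scores,
# where negative computed scores indicate satisfaction.
self_reported = [6, 7, 2, 4, 1, 5]
computed      = [-1.2, -0.4, 3.1, 0.8, 4.0, 0.1]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A negative r is expected here: higher self-reported satisfaction should
# go with lower (more negative) weighted gap scores.
print(f"r = {pearson(self_reported, computed):.4f}")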
Predicting Customer Satisfaction

Table 3 shows the weighted importance scores (Wi's) for the 452 respondents across the 19 airline service attributes, as well as the regression coefficients used to predict customer satisfaction, with the resulting t-test statistics.
TABLE 3. Weighted Importance Scores and Stepwise Regression Results for the 19 Airline Service Attributes

Attribute               Wi       Coefficient    t-value    p-value
Modern Equipment        .0412
Appealing Facilities    .0444
Employees' Appearance   .0446
Baggage Handling        .0533    .1352          3.23       .0013
In-Flight Service       .0501
Interest in Problems    .0535    .1039          2.25       .0252
Accurate Information    .0521
On-Time Performance     .0509
Check-In Service        .0528
Informs of Delays       .0533
Seat and Leg Room       .0502    .2736          6.53       .0000
Helps Customers         .0479
Instills Confidence     .0487    .1219          2.62       .0090
Adequate Security       .0518
Minimizes Oversales     .0533
Employee Knowledge      .0470    .1441          3.63       .0003
Convenient Hours        .0496
Clean Planes            .0467
Understands Needs       .0524                   3.74       .0002

Adjusted R square       .47622
F-test statistic        69.3404

Note: a. Wi's (weighted importance scores) are based on n = 452. b. Only coefficients significant at p < .05 are presented. c. Customer satisfaction was computed using the proposed multi-attributes approach.
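The following Python sketch shows one way such an analysis could be set up, using a simple forward-selection loop (a common variant of stepwise regression) from the statsmodels library on placeholder data. The variable names, the randomly generated ratings, and the selection rule are assumptions for illustration, not the authors' actual procedure or data.

# A minimal forward-selection sketch in the spirit of the stepwise regression
# reported in Table 3. The data below are randomly generated placeholders;
# in the study, y would be each respondent's computed satisfaction score (OSj)
# and X the 19 actual performance ratings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k = 452, 19
X = rng.integers(1, 8, size=(n, k)).astype(float)               # 7-point performance ratings
y = X @ rng.normal(0, 0.05, size=k) + rng.normal(0, 1, size=n)  # placeholder response

def forward_select(y, X, alpha=0.05):
    """Add, one at a time, the predictor with the smallest p-value below alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for i in remaining:
            cols = selected + [i]
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals[i] = model.pvalues[-1]  # p-value of the newly added column
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected, sm.OLS(y, sm.add_constant(X[:, selected])).fit()

selected, fit = forward_select(y, X)
print("selected attribute indices:", selected)
print("adjusted R^2:", round(fit.rsquared_adj, 4))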
In analyzing the weighted importance scores, the following six service attributes appear to have the highest importance scores: (1) interest in problems (W6 = .0535), (2) baggage handling (W4 = .0533), (3) informs of delays (W10 = .0533), (4) minimizes oversales (W15 = .0533), (5) check-in service (W9 = .0528), and (6) understands needs (W19 = .0524). The results of the stepwise regression analysis used to predict customer satisfaction are also presented in Table 3. As shown in Table 3, the adjusted R2 for the regression model was .47622 (F-test statistic = 69.3404, p < .0000).
The significant variables in the model that appear to directly impact overall customer satisfaction with airline performance are: (1) baggage handling (p < .0013), (2) interest in problems (p < .0252), (3) seat and leg room (p < .0000), (4) instills confidence (p < .0090), (5) employee knowledge (p < .0003), and (6) understands needs (p < .0002).

In comparing the six airline service attributes having the highest weighted importance scores (Wi) with the airline service attributes that were significant in predicting overall customer satisfaction, there are some similarities as well as differences. Three of the service attributes appear in both the list of highest importance scores and the list of significant predictor variables (baggage handling, interest in problems, and understands needs). In contrast, three attributes were shown to be significant predictors of customer satisfaction (seat and leg room, instills confidence, and employee knowledge) but were not among the highest importance scores. This may indicate that relying solely on importance ratings (or highest average scores) without statistical justification could result in an incorrect interpretation of survey data when trying to find the key drivers of customers' overall satisfaction.

MANAGERIAL IMPLICATIONS/CONCLUSIONS

Measuring Customer Satisfaction

The results in Table 2 suggest that respondents may not thoroughly reflect upon their previous responses within a questionnaire regarding satisfaction with individual product/service attributes when asked to assess their overall satisfaction on a single-item satisfaction scale at the end of the questionnaire. This may be due in part to the numerous individual questions they have just answered, making it difficult to remember all of their responses. Another explanation might be that respondents reflect only upon their most recent answers (i.e., the previous three or four questions) when responding to a final overall satisfaction question. Regardless of the reason for this phenomenon, an important implication for managers emerges.
When assessing overall customer satisfaction with a company's product or service, a composite satisfaction score which incorporates multiple attributes would appear to have more diagnostic value for strategic decision-making. First, the composite satisfaction score may be a more accurate and objective reflection of the overall satisfaction that a customer has regarding a product or service. Second, the single-item satisfaction score will not indicate why the customer is satisfied or dissatisfied. His/her level of overall satisfaction/dissatisfaction may be a function of a single dimension of the product or service. For example, poor baggage handling may be the primary reason why a customer indicates a high level of dissatisfaction with airline service, even though all other expectations the customer had regarding airline service were at least met and did not significantly impact his/her perceived satisfaction. From a management perspective, this information is important to know. A multiple-attributes satisfaction score would allow additional analysis that could pinpoint that baggage handling is consistently impacting overall customer satisfaction with airline service, while the single-item satisfaction score would not permit this type of analysis.

Predicting Customer Satisfaction

Another important implication for managers is derived from the results in Table 3. Table 3 indicates that relying solely on a single measure, such as self-reported importance attribute scores or significant variables in a regression analysis, to predict overall customer satisfaction may be too simplistic and somewhat inaccurate. Using a combination of measures, with an analysis of the common important/significant variables, may be more meaningful in predicting overall customer satisfaction. For example, based on the findings in Table 3, one could reasonably conclude that baggage handling, interest in problems, and understands needs are three of the more critical airline service attributes impacting customer satisfaction. These are the only three attributes that appeared in both the list of top self-reported important attributes and the list of significant predictor variables in the regression model. As a manager, knowing that these three attributes consistently appear as both important and significant predictors of customer satisfaction would seemingly provide a more reliable and valid indication of what impacts a customer's perception of satisfaction. Knowing what influences customer satisfaction is the first step in improving it.
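As a small, self-contained illustration of combining the two measures, the snippet below simply intersects the top weighted-importance attributes with the significant regression predictors reported in Table 3 to surface the common key drivers; it is illustrative only and uses the attribute names from the article's results.

# Attributes with the highest weighted importance scores (from Table 3)
top_importance = {"interest in problems", "baggage handling", "informs of delays",
                  "minimizes oversales", "check-in service", "understands needs"}

# Attributes that were significant predictors in the regression model (Table 3)
significant_predictors = {"baggage handling", "interest in problems", "seat and leg room",
                          "instills confidence", "employee knowledge", "understands needs"}

# Attributes that are both highly important to customers and statistically
# significant predictors of overall satisfaction: the most reliable key drivers.
key_drivers = sorted(top_importance & significant_predictors)
print(key_drivers)  # ['baggage handling', 'interest in problems', 'understands needs']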
As indicated by the wealth of research conducted on customer satisfaction, measuring customers' overall satisfaction accurately is a very challenging and daunting task for many managers. While the results of this exploratory study suggest that a multiple-attributes weighted composite gap score approach to measuring customers' overall satisfaction provides diagnostic value and managerial insights, there are many other issues to be examined. For example, Peterson and Wilson's (1992) research reveals that measurements of customer satisfaction exhibit tendencies of confounding and methodological contamination. They argue that issues such as response rate bias, data collection mode bias, the manner in which questions are asked, measurement timing, and so on, can significantly affect the results of a satisfaction survey. This undoubtedly means that more effort and further research are required to improve the measurement of customer satisfaction. The debate will obviously continue regarding intrapersonal characteristics, methodological considerations, and other practical issues related to measuring customer satisfaction. It is hoped that this article will be helpful in addressing those issues as they relate specifically to managers.

REFERENCES
Bitner, Mary Jo (1990), "Evaluating Service Encounters: The Effects of Physical Surroundings and Employee Responses," Journal of Marketing, 54 (April), 69-82.
Bolton, Ruth N. and Randall G. Chapman (1989), "The Structure of Customer Complaint Behavior in the Airline Industry," Developments in Marketing Science, 12, 546-551.
Brown, Stephen W. and Teresa A. Swartz (1989), "A Gap Analysis of Professional Service Quality," Journal of Marketing, 53 (April), 92-98.
Carman, James M. (1990), "Consumer Perceptions of Service Quality: An Assessment of the SERVQUAL Dimensions," Journal of Retailing, 66 (1), 33-55.
Cronin, J. Joseph and Steven A. Taylor (1992), "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing, 56 (July), 55-68.
Cronin, J. Joseph and Steven A. Taylor (1994), "SERVPERF Versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality," Journal of Marketing, 58 (January), 125-131.
Ennew, Christine T., Martin R. Binks, and Brian Chiplin (1994), "Customer Satisfaction and Customer Retention: An Examination of Small Businesses and Their Banks in the UK," Developments in Marketing Science, 17, Eds. Elizabeth J. Wilson and William C. Black, 188-192.
Gale, B. T. and Wood, R. C. (1994), Managing Customer Value: Creating Quality and Service That Customers Can See, The Free Press, New York, NY.
Jones, Thomas O. and W. Earl Sasser, Jr. (1995), "Why Satisfied Customers Defect," Harvard Business Review, 73 (6), November-December, 88-99.
Kurtz, David L. and Kenneth E. Clow (1993), "Managing Consumer Expectations of Services," The Journal of Marketing Management, 2 (2), 19-25.
Oliver, Richard L. and Wayne S. DeSarbo (1989), "Processing of the Satisfaction Response in Consumption: A Suggested Framework and Research Proposition," Journal of Consumer Satisfaction, Dissatisfaction and Complaining Behavior, 2, 1-16.
Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry (1988), "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, 64 (Spring), 12-37.
Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry (1994), "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research," Journal of Marketing, 58 (January), 111-124.
Patterson, Paul G., Lester W. Johnson and Richard A. Spreng (1997), "Modeling the Determinants of Customer Satisfaction for Business-to-Business Professional Services," Journal of the Academy of Marketing Science, 25 (Winter), 4-17.
Peterson, Robert A. and Wilson, William R. (1992), "Measuring Customer Satisfaction: Fact and Artifact," Journal of the Academy of Marketing Science, 20 (1), 61-71.
Teas, R. Kenneth (1993a), "Consumer Expectations and the Measurement of Perceived Service Quality," Journal of Professional Services Marketing, 8 (2), 33-53.
Teas, R. Kenneth (1993b), "Expectations, Performance Evaluation, and Consumers' Perceptions of Quality," Journal of Marketing, 57 (October), 18-34.
"The Best and Worst Airlines," Consumer Reports, July 1991, 462-469.
Vavra, Terry G. (1997), Improving Your Measurement of Customer Satisfaction, Quality Press, Milwaukee, WI.
Zeithaml, Valarie A., Leonard L. Berry and A. Parasuraman (1993), "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science, 21 (1), 1-12.