User Expertise in Contemporary Information Systems: Conceptualization, Measurement and Application
Sharmistha Dey
BA (Calcutta University) MBA (Griffith) MIS (Griffith)
The study employed data gathered from 220 respondents, representing three
organizations. The analysis highlights the importance of the more generic motivational
aspects captured through the ‘affective’ construct in defining expertise for a
contemporary IS, which explained the greatest amount of variance in the latent variable.
The cognitive-competence and skill-based constructs were also significant contributors,
each explaining an adequate amount of variance in the latent construct. Years of
experience, a construct considered important in most domains, was found to be
non-significant.
Table of Contents
List of Tables ........................................................................................................ 8
List of Figures....................................................................................................... 9
Acknowledgements .............................................................................................. 1
Chapter 1: Introduction......................................................................................... 3
Thesis Outline..................................................................................................... 20
Chapter 2: Literature Review............................................................... 21
Scale Construction................................................................................................. 49
The IS-Impact Measurement Model ..................................................................... 53
Research Methodology ....................................................................................... 54
Survey Administration........................................................................................... 65
The Instrument...................................................................................................... 66
Respondent Anonymity and Confidentiality ......................................................... 67
Chapter Summary ............................................................................................... 67
Chapter 4: Data Analysis and Results ................................................. 68
Implications ........................................................................................................ 98
This research journey was long and challenging. I am grateful to all who
supported, trusted and motivated me in this endeavour.
Thank You….
To my supervisors, Prof. Christine Bruce and Dr. Sukanlaya Sawang, for their
immense support and guidance. You helped me see the light at the end of the tunnel.
To the JAIS panel members at PACIS 2012 in Ho Chi Minh City, Vietnam, which
ultimately led to the publication in Information and Management in 2013.
To my parents (Ashim and Minati Dey) and my brother (Rana) for your
constant love and support.
To my son, Ishaan, for providing a lot of joy in this otherwise uphill journey.
And most of all, to my Dadu (Basudeb Dey) who supported my every dream
and ambition. Thank you for teaching me that ‘nothing is impossible’!
STATEMENT OF ORIGINAL AUTHORSHIP
The work contained in this thesis has not been previously submitted to meet
requirements for an award at this or any other higher education institution. To the best
of my knowledge and belief, the thesis contains no material previously published or
written by another person except where due reference is made.
Date: _____________01/10/2013____________
CHAPTER 1: INTRODUCTION
This chapter provides a broad overview of the research reported in this thesis
and introduces its research strategy. The chapter begins with a discussion of the
background and motivations of this research, followed by its significance and
objectives. The roles of self-efficacy and user competence are then discussed,
followed by the hypothesis and research questions and the unit of analysis. Next, the
research context, the impact of culture and the research design of the study are
introduced, along with the preliminary research model. Finally, the thesis outline
succinctly describes each of the five chapters in this thesis.
User expertise, however, is not a simple reflection of one’s innate abilities and
capabilities, but rather a combination of acquired complex skills, experience and
knowledge capabilities (Ericsson and Smith 1991; Hunt 2006; Norman 2006; Yates
and Tschirhart 2006). Ericsson et al. (1993) demonstrate that both extended
deliberate practice and deliberate learning of skills have a strong positive relationship
with expertise. Simon and Chase (1973) demonstrated that in certain disciplines it
takes approximately ten years of intensive deliberate practice to attain a high degree of
proficiency. In Information Systems, research on user competence (e.g. Munro, Huff
et al. 1997) and computer self-efficacy (Bandura 1977a; Bandura 1977b; Bandura
1997; Bandura 2007) provides a wealth of knowledge on how to conceptualize and
measure ‘staff computing ability’ (Munro, Huff et al. 1997). Yet, as Marakas et al.
(2007) observed, “[past studies on both self-efficacy and user competence] have
focused heavily on models in very distinct domains”, predominantly using simple
information systems (e.g. spreadsheets, word processing) and lacking emphasis on
user expertise in contemporary IS.
high levels of success with ES by focusing on effective use of the system (LeRouge
and Webb 2004). Moreover, contemporary system users experience a steep learning
curve after ‘going-live’ at the shakedown phase, gaining knowledge of the system
features and functions through exploration and undergoing training to add value to
their business processes at the later parts of the system lifecycle (i.e. onwards/upwards
phase) (Markus and Tanis 2000; Nah, Lau et al. 2001). Users’ expertise with
Information Systems has been recognized as crucial because of its effect on workplace
productivity (Bowen 1986; Magnet 1994; Higginbotham 1997; Little 1997).
Thus, new measures and evaluation models are required to gauge the
proficiency of users of contemporary systems such as Enterprise Systems (Marakas,
Johnson et al. 2007; Gable, Sedera et al. 2008). Nonetheless, most end-user computing
and computer self-efficacy studies continue to rely on instruments and measures that
were validated with a far too simplistic view of a complex information system. For
example, Munro et al. (1997) observed end-user computing using word-processing
applications, while Marakas et al. (2007) observed computer self-efficacy using
spreadsheet and word-processing applications. Munro et al. (1997) define user
competence, stating that “...end users essentially need to know about, and able to use,
three things: EUC software, hardware, and concepts and practices. These, then, are the three major EUC
domains” (p. 47). The Marakas et al. (2007) instrument of computer self-efficacy
comprises seven task-related constructs: General Efficacy, Windows Efficacy,
Spreadsheet Efficacy, Word-Processing Efficacy, Internet Efficacy, Database Efficacy,
and a test of Task Performance.
Enterprise IT… (1) has multiple user groups using the same system for
different purposes; (2) has longer lifecycles, over which system use and proficiency can
change; (3) introduces continual changes to organizational structures and business
processes; (4) has a process orientation, rather than a single-task / functional nature; and (5)
does not require users to have technical knowledge (e.g. server aspects), as such tasks
are performed by dedicated technical staff. Given the substantial differences between Function IT
and Enterprise IT, it is essential that one understands how expertise can be
characterized in contemporary Enterprise IT.
In figure 1, cells marked with ‘A’ denote where past studies of computer user
competence have concentrated, whereas the cells marked ‘B’ provide the scope for this
research. The scope (i.e. cells) must be selected with care, understanding the intent of
the study context, acknowledging that some combinations of cells are less realistic
and less informative. The study recommends that the primary consideration herein
should be the type of the system. Thus, as a rule-of-thumb, the study suggests that the
selection of cells be based, first on the system, next the domains, and finally the
measurement approach.
As such, this study ‘by-design’ is scoped to address the areas marked as ‘B’ in
the conceptual framework. It is recognized that it would have been best to have
conducted the study over multiple axes for comparative purposes. This would have
helped increase generalizability of the findings. Future studies could benefit by doing
this. For example, future studies could extend the evaluation method to both self-
evaluation as well as the classical method.
[Figure 1: Conceptual framework cube; cells marked ‘A’ denote the focus of past studies, cells marked ‘B’ the scope of this research]
Type of System
The system-centric approach is central to this study, given that the
selection of the system (the x-axis of the cube, labelled “Type of System”) influences
the selection of appropriate measures. In other words, the type of system is the primary
driver of measure selection.
the same way, whether adopted in an oil and gas company or in an organization dealing
with higher education. Yet, in Enterprise IT, the context will change the way a system
is configured and the features and functions of the system. Similarly, it is highly
unlikely that a user of a Function IT system makes significant gains in knowledge
after becoming familiar with its basic, day-to-day functions. Yet, the learning curve
of an Enterprise IT is steeper, longer and incremental. Finally, most ES operational
and management users do not have prior knowledge of Enterprise Systems. Even if
they have some amount of experience, given the contextual differences (factor (ii) in
the list above), prior knowledge cannot be easily employed.
This research conceives both the model constructs and its measures as
formative, manifested in extensive attention to the completeness and necessity of
constructs and measures of expertise. In order to ensure this, the expertise model
specification and validation proceeded from an inclusive view of expertise,
commencing with the three theoretical foundations of theories of learning (Kraiger,
Ford et al. 1993), employed in past studies. Conceived primarily through a ‘system
centric’ viewpoint, the study presents a conceptual framework through which IS expertise
can be understood. The index of expertise will encourage future researchers to
continue a cumulative tradition of research and to further extend the understanding of
user expertise in contemporary Information Systems.
The primary objective of the study is to identify and validate a set of qualities
that would usefully capture expertise of an individual in the context of Information
Systems. The study does not intend to prepare qualities of expertise for each and
every position or role in the context of IS. Such a fine-grained approach would be too
detailed to execute and repeat, and would forfeit the benefits of generalization and
repeatability. Thus, the objective of the study is to derive the salient generic qualities
of expertise, which individuals can relate to their specific roles and positions
when answering the survey questions. It is believed that such an approach would not
only yield useful information, but also allow a cumulative tradition in research and
practice. Once the salient characteristics are identified, the study will then apply the
classification of expertise of IS in a system evaluation.
Once the salient characteristics are identified, the guidelines are then used in an
IS evaluation to determine whether classifications according to the varying levels
of expertise (novice, intermediate and expert) add further value in IS success
evaluations, for which this research employs the IS-Impact measurement model of
Gable, Sedera and Chan (2008), using the prior-validated 27 measures.
This research study has three main interrelated aims: (1) identify the
characteristics of expertise; (2) validate a maximally generalisable expertise
measurement model; and (3) demonstrate that the three groups formed on the basis of
their levels of expertise have different views in system evaluations. This research does
not propose a means by which a novice could become an expert (the highest level of expertise).
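To make the grouping in aim (3) concrete, the sketch below shows one simple way respondent scores could be partitioned into three mutually exclusive proficiency groups. The scores and the tertile rule are hypothetical assumptions for illustration only; the thesis derives its groups from the validated expertise model, not from this cut-off rule.

```python
# Illustrative sketch only: split respondents into three mutually exclusive
# proficiency groups (Novice, Intermediate, Expert) from a single expertise
# score. The scores and tertile cut-offs below are hypothetical; the thesis
# derives its groups from a validated formative model, not from this rule.

def classify(scores):
    """Assign each respondent a group by score tertile."""
    ranked = sorted(scores)
    n = len(ranked)
    low_cut = ranked[n // 3]          # boundary of the bottom third
    high_cut = ranked[(2 * n) // 3]   # boundary of the top third
    groups = []
    for s in scores:
        if s < low_cut:
            groups.append("Novice")
        elif s < high_cut:
            groups.append("Intermediate")
        else:
            groups.append("Expert")
    return groups

scores = [2.1, 3.4, 4.8, 1.9, 3.9, 4.5, 2.7, 3.1, 4.9]
print(classify(scores))
```

With these nine hypothetical scores the rule yields three equal groups; in practice the group sizes depend on the score distribution and on how the cut-offs are chosen.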
the “known” tasks or activities. For example, Marcolin et al. (2000) observed user
competence in Spreadsheets and Word Processing, focusing on specific functions that
users perform within the software (e.g. formatting).
The approaches of Self-Efficacy and User Competence also share similarities in
the derivation of items and measures.
In designing measures for self-efficacy, Bandura (2006; page 207) states
“…there is no all-purpose measure of perceived self-efficacy. The “one measure fits
all” approach usually has limited explanatory and predictive value because most of
the items in an all-purpose test may have little or no relevance to the domain of
functioning. Moreover, in an effort to serve all purposes, items in such a measure are
usually cast in general terms divorced from the situational demands and
circumstances. This leaves much ambiguity about exactly what is being measured or
the level of task and situational demands that must be managed. Scales of perceived
self-efficacy must be tailored to the particular domain of functioning that is the object
of interest.” Thus the measures of ‘affective’ were derived using the key high-level
premises of Enterprise Systems, observing whether the users are able to withstand and
are motivated to change and evolve with the evolution of the Enterprise System.
The measures of skill-based and cognitive were developed using the studies of User
Competence (e.g. Marcolin et al. 2000).
The main hypothesis of the study is that Information Systems users have
significantly different levels of expertise, and that they can be usefully classified
according to their degree of proficiency. Thus, it was also expected that, if the derived
classification is correct and meaningful, the evaluations these groups make of a system
will also differ significantly. The study design and the research model have been
derived to accommodate the hypothesis.
Two research questions have been derived to achieve the objectives of this
study.
In seeking answers for the first research question, this research attempts to
derive the possible characteristics of expertise into an a-priori model and then distil the
salient characteristics through empirical validation. The possible characteristics of
expertise are derived through a cross-discipline literature review that focuses on
Information System studies of user competence (Bandura 1977a; Bandura 1977b;
Bandura 1997; Munro 1997; Bandura 2007), computer self efficacy (Bandura 1986;
Marakas, Johnson et al. 2007), psychology studies of expertise (Chase and Simon
1973; Ericsson and Smith 1991; Hunt 2006) and knowledge management literature in
the discipline of Information Systems (Davenport 1998).
Despite the wealth of research on user competence, self-efficacy and related
topics, little is known about how to classify users based on expertise. This
research attempts to fill this void by using data-analysis triangulation (discussed in
chapter 4).
The second research question derives its answers through the application of the
classifications of expertise derived through the first research question. Herein, the
UNIT OF ANALYSIS
Pinsonneault and Kraemer (1993) classified the types of unit of analysis: 1)
individual, 2) work group, 3) department, 4) organisation, 5) application and 6)
project. Given that the research gathers expertise levels from individuals, the unit of
analysis in this research is the Individual User in a particular organisation using an
operational Information System.
The selection of the individual users as the unit of analysis is consistent with the
intended application of the expertise model in the context of system evaluations as
well. The IS-Impact measurement model too requires that responses are gathered at
the individual level on their assessment of an operational Information System.
The individual users of this study must have substantial direct exposure to the
operational Information System (in this case, data was gathered from operational
Enterprise System applications). Since Strategic-level management does not receive
adequate direct exposure to the operational system, it is excluded from this study.
Similarly, external user cohorts such as suppliers and customers were also eliminated
from the scope of the study.
Given the background, motivations, research questions and the unit of analysis,
this research requires quantitative data from a reasonably large sample of regular
users of an operational Information System. Consideration was also given to selecting
respondents from the same IS application, to avoid any extraneous influence on the
data analysis.
Thus, three medium sized organizations located in India were selected for the
data collection. The three organizations were selected given that they had
implemented the same enterprise wide software – SAP – and were located in the same
geographical region. Due to ethical agreements between the Queensland University of
Technology and the organizations, their names are replaced with pseudonyms.
Glass: Glass is the leading manufacturer of glass bottles for the medical, cosmetics
and beverage industries in India. They too implemented SAP Logistics in 2007 and
have approximately 100 concurrent SAP users.
IMPACT OF CULTURE
All three organizations being located in India may introduce some elements of bias
through cultural influence. As per Hofstede (information available through
http://geert-hofstede.com), culture can be analyzed using five dimensions: (i) power
distance, (ii) individualism, (iii) masculinity, (iv) uncertainty avoidance, and (v) long-term orientation.
Observing the descriptions of each of the dimensions with which Hofstede
describes “Culture” (i.e. power distance, individualism, masculinity, uncertainty
avoidance, and long-term orientation), two types of possible influence are observed:
(i) it can be argued that these five aspects may have a bearing on how one evaluates
him- or herself through the self-evaluation mechanism employed in this study, and (ii)
the five factors of culture may impact the possible antecedents and/or consequences
of expertise.
Despite the influence of culture on the relative weights and the nomological net,
culture is unlikely to influence the constructs themselves. In other words, the four
constructs validated in this study would still make their statistically significant
contributions. In a similar manner, one could use the GLOBE measures of House
(2004) to understand the influence of national culture on system evaluations and
expertise.
The figure below depicts the design of this research, from research strategy
and exploration through to findings and interpretation. The research design comprises
the following stages: Literature Review (chapter 2), Mapping (chapter 3), Survey
(chapter 4) and Confirmatory Validation (chapter 4).
The preliminary research model is depicted in the figure below. It denotes the central
focus of the study on expertise and the application of the expertise model through the
employment of the IS-Impact measurement model. The a-priori expertise model
includes constructs such as (1) Cognitive Competence, (2) Skill-Based, (3) Affective,
and (4) Years of Experience.
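The formative logic of the a-priori model can be sketched as a weighted composite: each construct contributes to, rather than reflects, the latent Expertise score. The weights and respondent scores below are invented purely for illustration; they are not the study's estimated weights, which are obtained empirically from the survey data.

```python
# Hypothetical sketch of a formative expertise index: the latent score is a
# weighted sum of its four a-priori constructs. All weights are invented
# for illustration; in the thesis they are estimated empirically.

WEIGHTS = {
    "cognitive_competence": 0.30,
    "skill_based":          0.30,
    "affective":            0.35,
    "years_of_experience":  0.05,  # found non-significant in the study
}

def expertise_index(respondent):
    """Weighted composite of the four a-priori constructs (0-5 scale items)."""
    return sum(WEIGHTS[k] * respondent[k] for k in WEIGHTS)

r = {"cognitive_competence": 4.0, "skill_based": 3.5,
     "affective": 4.2, "years_of_experience": 2.0}
print(round(expertise_index(r), 3))  # → 3.82
```

A formative index like this changes meaning if an indicator is dropped, which is why the study pays extensive attention to the completeness and necessity of its constructs and measures.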
[Figure: Preliminary research model, depicting the constructs of Expertise and their application to IS-Impact]
The path in the diagram does not depict causality or a process relationship between the
two key constructs. Instead, it simply highlights how the expertise model is applied in
the context of IS success / evaluations.
CHAPTER SUMMARY
This research study has three main, interrelated aims: (1) define the salient
characteristics of expertise in Information Systems relevant for system evaluation;
(2) derive a formative model of Expertise (also referred to herein as degree of
proficiency), using which this study derives three mutually exclusive respondent
groups for the evaluation of a contemporary Information System (IS); and (3) in order
to validate the salience of the derived groups’ characteristics, apply the three groups,
Expert, Intermediate and Novice, in the context of Information Systems evaluation
using the IS-Impact measurement model (Gable, Sedera et al. 2008).
THESIS OUTLINE
This thesis is structured in the following manner. The significance of the
research and research gaps were introduced in chapter 1. In addition, chapter 1 also
introduced the key constructs of the study. The review of literature reported in
Chapter 2 will next provide an in-depth discussion of those key constructs introduced
in chapter 1 and chapter 3 will demonstrate how the expertise model has been
operationalized in the current study context. Given this approach, I acknowledge that
certain aspects in relation to the model constructs will repeat. This approach was
taken after careful consideration, in the interest of clarity and better understanding
through cumulative knowledge.
Chapter 2 reviews the literature relevant to this research. The literature review
evaluates prior work to provide a background to the key concepts researched in this
study.
Chapter 3 begins with a discussion of the conceptual model introduced in
chapter 1. The key constructs of the conceptual model and the arguments presented
therein lead to the a-priori model.
Chapter 4 describes the quantitative analysis, including the empirical results and
hypothesis tests. The chapter is divided into the following sections: the first focuses
on descriptive statistics; the next explains the structural model, including nomological
validity; subsequently, the “application study” is conducted to uncover findings
valuable to this research, and the research findings are discussed.
Chapter 5 summarizes the research and related works, outlines possible
contributions and limitations, and suggests follow-on work. It begins with a summary
of the research and subsequently addresses the generalizability of the findings.
CHAPTER 2: LITERATURE REVIEW
This chapter reviews the literature relevant to this research. The literature
review presented herein evaluates prior work to provide a background to the key concepts
researched in this study. The literature review has eight (8) main objectives: (1) to help the
candidate determine and articulate the current level of knowledge and to assess where
further research is required; (2) to aid in identifying the salient characteristics of Expertise;
(3) to identify issues and ‘gaps’ in the existing literature; (4) to introduce theory that
usefully relates to the explanation of the key constructs; (5) to serve as a source of explanation
for phenomena observed in model and hypothesis testing; (6) to develop the candidate’s research
skills, the ability to conduct environmental scans, and the habit of reading in a targeted way;
(7) to develop the candidate’s skills of critical appraisal and the capacity to identify the
objectives and arguments of the works being read, and to articulate their strengths and
weaknesses; and (8) to think laterally and creatively about future potential research areas.
The objective of the literature review is to develop an appreciation of the current body
of knowledge in relation to the notion of expertise and how it relates to system success. The
understanding that is developed through the past body of knowledge is then employed in
chapter 3 against our pragmatic approach of understanding expertise of an Enterprise System.
As such, our definition of expertise will be informed and formulated by the literature and then
be improved for the current context. This would allow the researcher to compare prior
definitions and constructs of expertise, and then demonstrate their validity and
generalizability to the new study context.
By expanding the research design provided in Chapter 1, figure 5 depicts the literature
review process in detail. The process of searching for relevant literature was carried out in six
(6) stages. In the first stage the study defined the research strategy to find appropriate sources
for this study. The strategy included identifying top refereed journals in the information
system area such as MIS Quarterly, Journal of the Association of IS, Information and
Management, Journal of MIS, Information Systems Research and others from popular
databases ProQuest and Science Direct.
The A-ranking conferences in IS were also considered and prioritised, including the
International Conference on Information Systems, Pacific Asia Conference on Information
Systems, European Conference on Information Systems, and Australasian Conference on
Information Systems. In the second stage, the study searched the literature by using key
questions and terms. For example, papers were searched by the use of search terms including
“Experts”, “Expertise”, “User Competence”, “User Expertise”, “Self-Efficacy”, “End User
Computing”, “Computer Self-Efficacy” and “Degree of Proficiency”. In the third stage,
cross-disciplinary literature (Psychology and Sociology) was searched using the search terms
“Expertise”, “Expert” and “Degree of Proficiency”. In these disciplines “Expertise” has been
researched extensively. In the fourth stage, abstracts from the collected papers were reviewed
in order to ensure that the study captured the issues relevant to this research topic, and to
eliminate any irrelevant material. In the next stage, all the appropriate papers, books and
theses and other resources including soft copies and hard copies were selected. Finally, in the
sixth stage, every source that provided evidence relevant to the key questions, terms and
concepts was gathered, ensuring that all the relevant literature was adequately covered.
analyzing ‘success’ across multiple cohorts has been discussed amongst academics for several
decades, yet with no clear consensus on how to classify employment cohorts usefully for
system evaluations. Furthermore, there is no universal agreement on what employment
cohorts should be canvassed.
The review below seeks to identify the salient stakeholders of ES and illustrate the
importance of assessing ES-success from multiple perspectives. The two-phased study
analyses data of 310 respondents and examines 81 IS-success studies. The study identifies
three key employment cohorts in the context of ES and highlights the importance of
measuring ES-success from a multi-stakeholder view point.
The purpose of this literature review is to understand prior studies that helped
identify the employment cohorts used in IS-success studies. As expected, it was noted that
discussions of employment cohorts are more deeply rooted in the management literature
than in the IS literature. The employment cohorts identified in the literature below, together with
their descriptions, were used in the content analysis and the empirical statistical data analysis.
1. Anthony (1965) provided the main foundations for employment cohort
classification in management science. He referred to three levels of employment
in an organization; (1) Strategic, (2) Management and (3) Operational. The
Strategic level focuses on deciding organizational-wide objectives and allocates
necessary resources to achieve the objectives. The Strategic level is involved in
complex, irregular decision making and focuses on providing policies to govern
the entire organization. At the Strategic level, information requirements are ad-
hoc in nature and there is reliance on predictive information for long term
organizational goals. At the management level, information requirements are
focused on assuring that the resources, both human and financial, are used
effectively and efficiently to accomplish goals stated at the Strategic level. The
characteristics of information required by the management level are different to
those required at the Strategic level. The management level deals with rhythmic
(but not repetitive) and prescribed procedures. Managers tend to prefer
integrated, procedural information that is for a precise task. Furthermore,
managers tend to prefer ‘goal congruent’ information systems. At the
Operational level, employees are involved in highly structured and specific
tasks that are routine and transactional. Tasks carried out at the Operational
level are precise and are governed by the organizational rules and procedures.
The Operational level tends to deal with real time data focused on individual
events with little or no emphasis on key organizational performance indicators.
The three levels of employment introduced by Anthony (1965) tend to be
hierarchical on several dimensions: (1) time span of decisions (i.e. long,
medium and short term), (2) importance of a single action (i.e. critical,
important and common) and (3) the level of judgment (i.e. strong, moderate and
modest). In relation to contemporary IS like Enterprise Systems, the operational
staff engage with the system as a Transaction Processing System on a daily
basis, Management Staff interact with the system as a Management Information
System, and the Strategic Staff use the system sporadically as an Executive
Information System.
Singleton, Mclean et al. (1988) used the employment classification of Anthony (1965) and
concluded that contemporary organizations need a ‘shared vision’ across the ranks of
employment. Furthermore, they emphasized the importance of gathering information from all
employment levels to evaluate a portfolio of Information Systems. Studies (Alloway
and Quillard 1983; Seddon, Calvert et al. 2010; Strong and Volkoff 2010) reported that 79%
of frequently used management support systems relied heavily on underlying transaction
processing systems. Cheney and Dickson (1982) found differences in levels of satisfaction
across the employment cohorts. Vlahos and Ferratt (1995) studied perceived value, use of
information systems and satisfaction levels across employment cohorts. They found that the
‘line employees’ (similar to the Operational level of Anthony (1965)) have higher satisfaction
levels compared to the Management and Strategic levels. Furthermore, the Vlahos and Ferratt
(1995) study found higher satisfaction levels among Technical support staff.
The Shang and Seddon framework classifies potential Enterprise Systems benefits into
21 lower-level measures organized around 5 main categories: operational benefits,
managerial benefits, strategic benefits, IT-infrastructure benefits and organizational benefits.
The strategic benefits in the Shang and Seddon (2000) ERP benefits framework relate to the
Strategic level of Anthony’s (1965) classification, while the operational and managerial
benefits are related to the Operational and Management levels. The identification of the IT
infrastructure benefits is an important contribution of the Shang and Seddon ERP benefits
framework, highlighting the IT benefits that Enterprise Systems generate to an organization.
Shang and Seddon (2000; 2002) and Singletary, Pawlowski et al. (2003) identify Technical
staff as a distinct and important employment cohort in Enterprise Systems evaluations.
Furthermore, the literature suggests that management-level employees are the most appropriate
cohort from which to gather perceptions of Enterprise Systems benefits. To the contrary,
Tallon, Kraemer et al. (2000) highlighted the importance of capturing intangible benefits of
Enterprise System, proposing Strategic managers as the most appropriate single employment
cohort.
Definitions of Expert/Expertise
Prior research suggests that ‘expertise’ is not a simple reflection of one’s innate abilities
and capabilities, but rather a combination of acquired complex skills, experience and
knowledge capabilities (Ericsson and Smith 1991; Hunt 2006; Norman 2006; Yates and
Tschirhart 2006). Foundational work by Ericsson et al. (1993) demonstrates that both
extended deliberate practice and deliberate learning of skills have a strong positive
relationship with individual performance.
Despite its widespread use, the term ‘expertise’ has rarely been defined in past IS
studies. Thus, this study derives definitions from analogous research domains. These
definitions help this research form its notion of expertise, recognizing that expertise in a
contemporary IS is vastly different from that of other disciplines.
One of the earliest characterizations of expertise derives from the work of Chase
and Simon (1973). They believed that the attainment of many forms of
expertise, in fact of “any skilled activity (e.g. football, music)”, was the result of acquiring,
during many years of experience in the domain, vast amounts of knowledge and the ability
to perform pattern-based retrieval. Though their characterization highlights
that one need not have innate expertise and that prolonged repetitive behaviour can lead
to some level of expertise, they fail to recognize the dynamism of the discipline or area where
the expertise is sought. Frensch and Sternberg (1989) concur with Chase and Simon (1973).
Surprisingly, in recent times, Petcovic et al. (2007) defined an expert in the same
vein, stating that an expert is an individual with the highest level of expertise in the
domain, someone who has spent many hours training or solving problems in that specific
domain.
To the contrary, Feltovich et al. (1997) explained that becoming an expert in the 21st-
century professional workplace involves a complex array of knowledge, skills and
processes. The authors contend that “the new workplace emphasises such things as the need
for dealing with deep understanding, the ubiquity of change and novelty, the simultaneous
occurrence of processes, the interactiveness and interdependence of processes and people, the
demand for customisation/particularisation in both products and procedures, non-
hierarchical-linear management structures and the like”.
Swanson and Holton (2001) agree with Feltovich et al.’s (1997) observations: an
expert “displays behaviour within a specialised domain and/or related domain
in the form of consistently demonstrated actions of an individual that are both optimally
efficient in their execution and effective in their results”. Their hypothesised dimensions of
expertise include problem-solving skills, experience, and knowledge, and they consider
the concept to be dynamic and domain-specific.
Germain and Ruiz (2009) define an “expert” as someone who manifests the following
qualities with respect to their work role: (i) specific education, training and knowledge, (ii)
ability to assess importance in work-related situations, (iii) capacity to improve themselves,
(iv) intuition, (v) self-assurance and (vi) confidence in their knowledge.
The aforementioned review of the literature on expertise helps this research formulate
an appropriate definition of expertise, identify a broad characterization of
expertise, and observe possible antecedents to be used in IS nomological testing
(Chapter 4).
In fact, there is disagreement about the existence of a single definition. Hoffman et al.
(1995) suggest that there are almost as many definitions of “experts” as there are researchers
who study them. Some of the conceptual research studies in the USA have identified various
common themes or dimensions associated with expertise, namely knowledge, experience in
the field, and problem-solving skills (Swanson and Holton 2001), as well as self-
enhancement characteristics such as self-assurance, intuition, and capacity to improve
themselves (Germain 2005; 2006). Although there is no consensus among IS researchers,
expertise is commonly defined as a combination of knowledge, experience and problem-
solving skills in a particular domain.
Eraut (1994) has summarized the different theories of expertise on the basis of the
study of the professional processes that lie behind the theories and models of development.
Accordingly, expertise can be defined through models of progression from novice to expert,
through processes of decision making involving memory and analytical skills, through a
correspondence between cognitive processes and the characteristics of the task, or through
processes of developing professional creativity and intuitive capacity in problematic
situations. Although no single definition of expertise can accurately represent all scholars’
views, expertise can be summed up as a process through which one acquires domain skills,
decision-making ability, willingness to adapt, and analytical and problem-solving capacity.
Years of Experience
‘Years of experience’ is one of the constructs most commonly researched in association
with the level of expertise. Social science research on expert performance and expertise (Chi,
Glaser et al. 1988; Ericsson and Smith 1991) has shown that important characteristics of
experts’ superior performance are acquired through experience, arguing that exceptional
performance is an outcome of environmental circumstances, such as the duration and
structure of activities. Ericsson et al. (1993) hypothesized that individuals’ performances
are a monotonic function of deliberate practice. They argued that the accumulated amount
of deliberate practice and the level of performance an individual achieves at a given age are a
function of the starting age for practice and the weekly amount of practice.
The view that merely engaging in a sufficient amount of practice, regardless of the
structure of that practice, leads to maximal performance has a long and contested history,
demonstrated in a series of classic studies of Morse code operators. Bryan et al. (1897) and
Bryan et al. (1899) identified plateaus in skill acquisition, during which subjects seemed
unable to attain further improvements for long periods. However, they observed that, with
extended effort, operators could restructure their skill to overcome plateaus. Keller (1958) later
showed that these plateaus in Morse code reception were not an inevitable characteristic of
skill acquisition, but could be avoided by different and better training methods.
Software knowledge refers to knowledge about the product, including knowledge of
how to use it. It represents the selection and use of technical knowledge to
analyse (e.g., capture requirements), design (e.g., decide on the design pattern and identify
best practices), implement (e.g., program) and maintain (e.g., troubleshoot) the ES
software. It reflects the need for knowledge specific to a particular ES solution. The ES is
usually a comprehensive package, such as a Systems Applications and Products (SAP)
solution, and understanding the ES package requires product-specific knowledge.
Moreover, in general (and regardless of the study context), ‘training’ has been identified as a
critical contributor to employees’ knowledge. Formal training programs
ensure wider distribution of highly context-specific knowledge that can be particularly useful
throughout the phases of an IS lifecycle (Pan and Chen 2005). In the interest of understanding
the contribution of formal training to software and business knowledge, this study includes
‘formal training’ as an antecedent of overall knowledge.
The levels of expertise (figure 6), also known as ‘degrees of proficiency’, are
generally associated with skills, expertise and knowledge, extending over a continuum
from novice → intermediate → expert, where an ‘expert’ holds the highest degree of
proficiency (Ericsson and Charness 1994). Expertise, in general, is defined as superior
performance in terms of success, swiftness, and/or accuracy. Between the two extremes of
expert and novice are the intermediates.
Novice: a novice has only factual knowledge and context-free rules acquired from training and is
typically at an early stage of the career (Dreyfus 1992).
Expert: an expert is a person with recognized knowledge and expertise who can comment
authoritatively on an issue and is often asked to give an opinion with regard to specific
facts (Bainbridge 1989; Olsen 1989). Experts tend to have prolonged or intense experience
through practice and education in their field of expertise.
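The novice → intermediate → expert continuum can be operationalized in several ways. As a purely illustrative sketch, and not a method drawn from this thesis, one might partition respondents' overall expertise scores into tertile cohorts; the function name, example scores and cut-off rule below are all hypothetical:

```python
# Illustrative sketch: partition overall expertise scores (e.g. on a 1-7
# self-report scale) into novice / intermediate / expert tertile cohorts.
# All data and thresholds here are hypothetical.

def classify_cohorts(scores):
    """Return a novice/intermediate/expert label for each score,
    using the sample's own tertile boundaries as cut-offs."""
    ranked = sorted(scores)
    n = len(ranked)
    low_cut = ranked[n // 3]         # boundary of the bottom third
    high_cut = ranked[(2 * n) // 3]  # boundary of the top third
    labels = []
    for s in scores:
        if s < low_cut:
            labels.append("novice")
        elif s < high_cut:
            labels.append("intermediate")
        else:
            labels.append("expert")
    return labels

scores = [2.1, 3.4, 6.2, 4.8, 5.9, 1.7, 4.1, 6.8, 3.0]
print(classify_cohorts(scores))
```

In practice a study would justify its cut-offs theoretically or statistically rather than using simple tertiles; the sketch only shows the mechanics of turning continuous scores into the three cohorts discussed above.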
ability’ (Munro, Huff et al. 1997). Yet, as Marakas et al., (2007) observed, “[these
disciplines] have focused heavily on models in very distinct domains”, focusing
predominantly on simple information systems (i.e. spreadsheets, word processing) and
lacking emphasis on contemporary IS. At a time when organizations are transitioning from
in-house, custom-made, stand-alone applications to integrated, complex, customizable
software packages (Gable, Sedera et al. 2008), this study argues for the importance of
revisiting Contemporary Information Systems User Expertise. Given the unwieldy expression
‘Contemporary Information Systems User Expert/ise’, further reference to this concept is
simply ‘Expert/ise’, where the contemporary nature of the system and user expertise is
implied. User expertise in a contemporary IS can make the difference between bare-minimum
and optimal, value-adding usage (Burton-Jones and Straub 2006; Burton-Jones and Gallivan
2007), with higher levels of expertise contributing to better system usage.
Research on computer self-efficacy and user competence provides a useful theoretical
background to this study. For decades, organisations have tried to identify the important
elements that affect users’ competence; the most likely factors are organisational, task-related,
individual and technological. A better understanding of the end-user computing process will
enable managers to develop effective strategies for improving individual skill and usage
levels.
User Competence concerns how users differ in their capability, and how these differences
relate to other individual characteristics. Munro et al. (1997) summed up the User
Competence construct as multi-faceted, proposing that it is “composed of an
individual’s breadth and depth of knowledge of end user technologies, and his or her ability
to creatively apply these technologies”. Their research led them to conceptualise User
Competence as consisting of three independent dimensions: 1) breadth, the extent or variety
of different end-user tools, skills, and knowledge that an individual possesses and can bring
to bear on his or her work; 2) depth, an individual’s End User Computing (EUC) capability,
representing the completeness of the user’s current knowledge of a particular EUC sub-domain
(for example, using a spreadsheet), where individuals differ based on the extent of their
use of its capabilities; and 3) finesse, “the ability to creatively apply EUC”. Some end users
are known as power users with respect to certain EUC technologies. Power users have more
than average knowledge of the commands
and capabilities of certain application packages or technologies. They then apply this
knowledge to exercise innovativeness and creativity in the practical use of the technology.
Munro et al. (1997) also looked at the correlation between User Competence and self-
efficacy. They concluded that end user self-efficacy is significantly related to User
Competence and they further mention that higher self-efficacy leads to greater competence.
They also observed that self-efficacy was more closely related to an individual’s depth of
knowledge than to the breadth of his or her experience.
In a study by Marcolin et al. (2000), User Competence (UC) is defined
“as the user’s potential to apply technology to its fullest possible extent so as to maximize
performance of specific job tasks”. Marcolin et al.’s (2000) conceptualization of an
individual’s competence originates from Kraiger et al.’s (1993) identification of three
different outcomes associated with learning: 1) cognitive outcomes: the
knowledge users have about what a technology is and how to use it; others (Anderson 1980;
Kraiger, Ford et al. 1993) have referred to this as declarative knowledge; 2) skill-based
outcomes: in this phase learners develop their ability to generalize procedures to novel tasks,
and they can improve their performance by moving beyond the initial steps learned into more
fluid and efficient processes. In other words, the individual displays the ability to adapt
to a new environment. For example, “those learning word processing might proceed from the
knowledge that bold formatting to text can be accomplished by highlighting the text and then
selecting “bold” from a menu or toolbar, and that underline is accomplished in the same way”
(Marcolin, Compeau et al. 2000); and 3) affective outcomes: this outcome includes attitude
and motivation. Kraiger et al. (1993) hold that if a learner’s “values have
undergone some change... then learning has occurred”; in other words, it refers to the
individual proactively learning beyond what has been provided. These three outcomes
represent different conceptualizations of an individual’s competence and can be used to
understand differences in the effectiveness with which people use technology.
measure may have little or no relevance to the selected domain of functioning... scales of
perceived self-efficacy must be tailored to the particular domains of functioning that are the
object of interest” (Bandura 2001 p.1).
Enterprise Systems
This study tests the Expertise Model (Chapter 4) in the context of Enterprise Systems.
Data was gathered from three organisations using Enterprise Systems. This is discussed in
detail in Chapter 4. In order to understand the research context it is important to understand
the background and characteristics of an Enterprise System.
The four quadrants of the IS-impact measurement model (Gable, Sedera et al. 2008) are
derived from the most widely cited IS success model by DeLone and McLean (1992). The
DeLone and McLean model consists of six constructs: quality measures of system and
information, performance-related outcomes of individual and organisational impacts, and
attitudinal outcomes of use and satisfaction. For a range of reasons, use and satisfaction
constructs are not included in the Gable et al. (2008) model. They argue that the use construct
is considered to be an antecedent to IS impact. They also believe that the satisfaction
construct is an immediate consequence of IS impact. Furthermore, early studies of IS success,
such as the work of Rai et al. (2002), report that the satisfaction construct is readily measured
indirectly through other constructs such as information quality and system quality.
Gable et al. (2008) define individual impact (II) as the individual capabilities and
effectiveness that are influenced by the IS application. This construct accommodates diverse
individual impact measurements of system usage across all employment cohorts, applications,
capabilities and functionalities of the ES. Organisational impact (OI) refers to benefits
received from the IS application at the organisational level, focusing on variables such as
cost reduction, productivity improvements and business process change. The system quality
(SQ) construct represents the quality of the IS itself, and is designed to capture how the
system performs from technical and design perspectives; it is measured by items such as ease
of use, ease of learning and alignment with user requirements. In contrast with system
quality, the information quality (IQ) construct is concerned with the system’s output quality
and refers to the information produced in reports and on-screen (DeLone and McLean 1992;
Gable, Sedera et al. 2008; Gorla, Somers et al. 2010). Table 1 lists the 27 measures offered
by the IS-impact model for assessing ES success; these measures avoid the overlapping
measures found in the DeLone and McLean (1992) IS success model.
Constructs Measures
Individual Impact • Learning
• Awareness/recall
• Decision effectiveness
• Individual productivity
Organisational Impact • Organisational cost
• Staff requirements
• Cost reduction
• Overall productivity
• Improved outcomes/outputs
• Increased capacity
• E-government
• Business process change
System Quality • Ease of use
• Ease of learning
• User requirements
• System features
• System accuracy
• Flexibility
• Sophistication
• Integration
• Customisation
Information Quality • Content accuracy
• Availability
• Usability
• Understandability
• Format
• Conciseness
Table 1: IS-impact measures
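For illustration only, a respondent's score on each of the four IS-impact dimensions in Table 1 could be computed as the mean of that dimension's item ratings. The sketch below assumes hypothetical 1–7 ratings; only the construct names and item counts (4 + 8 + 9 + 6 = 27) come from the table, and the function and variable names are illustrative:

```python
# Sketch: average item ratings into the four IS-impact construct scores.
# The item ratings (1-7) below are hypothetical example data.

IS_IMPACT_ITEMS = {
    "Individual Impact": 4,      # Learning ... Individual productivity
    "Organisational Impact": 8,  # Organisational cost ... Business process change
    "System Quality": 9,         # Ease of use ... Customisation
    "Information Quality": 6,    # Content accuracy ... Conciseness
}  # 4 + 8 + 9 + 6 = 27 measures in total

def construct_scores(responses):
    """responses: dict mapping construct name -> list of item ratings."""
    scores = {}
    for construct, expected_n in IS_IMPACT_ITEMS.items():
        ratings = responses[construct]
        assert len(ratings) == expected_n, f"{construct}: wrong item count"
        scores[construct] = sum(ratings) / len(ratings)
    return scores

example = {
    "Individual Impact": [5, 6, 4, 5],
    "Organisational Impact": [4, 5, 5, 6, 4, 3, 5, 4],
    "System Quality": [6, 5, 5, 4, 6, 5, 4, 5, 6],
    "Information Quality": [5, 6, 5, 4, 5, 6],
}
print(construct_scores(example))
```

Whether simple means are the right aggregation depends on the model specification; the sketch only makes concrete how the 27 items map onto the four dimensions.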
CHAPTER SUMMARY
This literature review began with an overview of the research and literature review
strategy, followed by a discussion of prior studies in the area of expertise. The review
then introduced the topics of User Competence and Self-Efficacy, and common definitions
of Expert and Expertise from the literature. The three groups (Novice, Intermediate and
Expert) were introduced because this study empirically tests the differences in the scores
that novices, intermediates and experts give when evaluating a system. The review then
discussed the relation between years of experience and expertise, before introducing the
construct Cognitive Competence/Knowledge and discussing how it contributes to an individual’s
Chapter 3: Research Model Development
This chapter begins with a discussion of the conceptual model introduced in
Chapter 1. The key constructs of the conceptual model and the arguments presented
herein lead to the a-priori model. The constructs of the a-priori research model,
derived through a detailed review of related literature, are summarized herein.
In Chapter 2, literature related to the four constructs (Cognitive Competence, Skill-
Based, Affective and Years of Experience) was reviewed and discussed in
detail. In this chapter, the research model and its constructs, sub-constructs and the
items relating to each sub-construct are explained. Next, the data collection
methodology is presented. This study employs empirical data gathered through a
survey, and the chapter discusses the appropriateness of the survey method for the
study’s purposes, as well as the data collection procedures. The chapter concludes
with a summary.
Observing past research in Information Systems (e.g. Munro, Huff et al. 1997;
Bandura 2007; Marakas, Johnson et al. 2007), Psychology (e.g. Pamela, Michael et al.
2001; Page and Uncles 2004), and Sociology (e.g. Ericsson, Krampe et al. 1993;
Ericsson and Charness 1994), this study identifies three salient considerations for
developing constructs and measures for contemporary IS user expertise: (i) type of
system, (ii) measurement constructs/domain, and (iii) evaluation method. The
approach in this study is similar to the one reported in Marcolin et al. (2000), whose
discussion of User Competence in spreadsheets and word processing included:
(1) Measurement Method: self-report, paper-and-pencil test, hands-on, observer
assessment; (2) Knowledge Domain Areas: software and hardware knowledge; and
(3) Conceptualization of Competence: cognitive, skill-based and affective. This
study agrees that measurement method, conceptualization and knowledge domain
areas remain important in understanding one’s expertise; the essential difference is
the inclusion of the type of the system.
This study argues that one could conceive of expertise using any combination of
these three considerations. In figure 8, cells marked ‘A’ denote where past
studies of computer self-efficacy and user competence concentrate, while cells
marked ‘B’ indicate the scope of this research. The scope (i.e. the cells) must be
selected with care, understanding the intent of the study context and acknowledging
that some combinations of cells are less realistic and less informative. This study
recommends that the primary consideration should be the type of the system.
Thus, as a rule of thumb, this study suggests that cells be selected first
on the system, next on the domains, and finally on the measurement
approach. One should then commence developing measures appropriate to the
selected context (Burton-Jones and Straub 2006).
[Figure 8: The scope of the research. Cells marked ‘A’ denote the focus of past computer self-efficacy and user competence studies; cells marked ‘B’ denote the scope of this study, across Type of System, Measurement Constructs/Domain and Evaluation Method.]
On the other hand, Enterprise IT is new to most IS users; such systems specify
business processes and impose complements throughout the organization (McAfee
2006). The processes, the task sequences within those processes, the data formats
and, in most cases, the use of an Enterprise System are mandated by the organization.
Furthermore, Enterprise IT users, unlike Function IT users, are rarely required to use
more than one Enterprise System. This means that the ability of a user to adopt new
technological applications, as employed in computer self-efficacy studies, is less
relevant in the context of Enterprise IT. Instead, the focus must be on how well a user
evolves from being a novice, presumably at ‘go-live’ time, to developing expertise
over the lifecycle.
Evaluation Method
Three types of measurement methods have been employed in past expertise/
competence research: (i) self-reported measures (e.g. Bilili, Raymond et al. 1998), (ii) the
classical method (e.g. Ericsson and Charness 1994; Compeau and Higgins 1995a), and (iii)
observer assessment (e.g. Rockart and Flannery 1983). Self-reported measures are
provided by individuals assessing their own abilities, while in the classical approach 1
expertise is measured by the investigator based on how well one responds to a set of
questions. In general, the classical approach is appropriate when expertise can be
measured using a finite set of questions that are not subject to external/contextual
factors (e.g. in mathematics). The observer assessment method involves the rating of
an individual’s skills by an independent observer, in most cases a colleague.
Studies have shown that all three methods provide a reasonable assessment of an
individual’s skills, knowledge and, in general, expertise (Germain and Ruiz 2009). In
particular, Germain and Ruiz (2009) observed a strong correlation between expertise
measured using the self-assessment method and the classical approach. The method of
measurement must be selected with care, paying close attention to its suitability to the
phenomena being measured. For example, Mann (2010) and Moskal (2010) note that
lesser-skilled individuals are more likely to exaggerate their skills, and Germain (2009) and
Germain and Ruiz (2009) note that the classical method cannot be employed in studies
where there is no finite answer and the answer is moderated by the context.
The conceptual model in figure 9 is derived from the cells marked ‘B’
in figure 8, which illustrate the scope of this research. The choice of cells
(marked ‘B’) was guided by theoretical and pragmatic considerations: in this study
the system of interest is Enterprise Systems, the domains include cognitive,
motivational, and skill-based outcomes, and the self-reported measurement
approach is used. The decision to use self-reported measures follows
closely from the conceptualization of the type of system and the measures of
expertise derived through the five-phased study design (figure 3 and related discussion).
1. The classical approach can be further divided into hands-on and paper-and-pencil tests.
Results of the mapping and content validation stages of the research design
helped to form the constructs and measures of the expertise a-priori model.
Specifying a parsimonious a-priori model for expertise involved: (i) elimination and
consolidation of domains; (ii) introduction of new domains or measures; and (iii)
revisiting the relevance of the domains identified in the literature review. Thus, in the
interest of parsimony, and consistent with formative index development procedures
(Jarvis, MacKenzie et al. 2003; Petter, Straub et al. 2007; Cenfetelli and Bassellier
2009; Diamantopoulos 2009), 4 constructs were included in the expertise a-priori
model, and it was deemed appropriate to identify a single item for each measure
included in the model.
[Figure 9: The expertise a-priori model, with Cognitive Competence, Skill-Based, Affective and Years of Experience forming Expertise]
First, work by Petter et al. (2007) has cast doubt on the validity of many
mainstream constructs employed in IS research over the past three decades,
critiquing the almost universal conceptualization and validation of these constructs
as reflective when, in many studies, the measures appear to have been implicitly
operationalized as formative.
Next, for the four constructs of Figure 9, appropriate measures were identified
from past literature. In addition, as identified in the literature review, the researcher
decided to include ‘knowledge sharing’ in the nomological net of expertise.
Scale Construction
To measure Cognitive Competence, this study derives questions from the
Munro et al. (1997) End User Sophistication questionnaire. Munro et al.
(1997), in their study of User Competence, employed a scale to gauge the depth of
cognitive competence. Those measures yielded a self-reported knowledge score
based on an assessment of how well (on a scale from 1 to 7) respondents knew
the particular package on which the questionnaire was based. Combining the
scale of Munro et al. (1997) with the core knowledge types for Enterprise Systems as
per past Enterprise Systems knowledge management literature (e.g. Davenport 1998;
Sedera and Gable 2010), the following six questions are employed to gauge cognitive
competence (table 2).
C1: I fully understand the core knowledge necessary for [name of the business
process].
C2: My knowledge of SAP is more than enough to perform my day-to-day
functioning of the [name of the business process].
C3: I rarely contact SAP helpdesk for software related problems in relation to the
[name of the business process].
C4: I rarely make mistakes when completing my [name of the business process] using
SAP.
C5: I have an in-depth knowledge of the functions of the [name of the business
process] that I must do on a day-to-day basis.
C6: I have a good knowledge of the organizational goals, procedures and guidelines.
Table 2: Cognitive Competence Measures
S1: I regularly refer to corporate database (e.g. intranet) for updates and gain new
knowledge of my [name of the business process].
S2: I regularly observe changes to company policies and guidelines through
information repositories relevant to my [name of the business process].
S3: I try to find better ways of doing my [name of the business process] in the SAP
system.
S4: I am eager to learn improvements in the SAP system related to my [name of the
business process].
Table 3: Skill-Based Measures
[The expertise a-priori model with constructs and measures: Cognitive Competence, Skill-Based, Affective and Years of Experience forming Expertise]
Cognitive Competence: C1 Core; C2 Software; C3 Software troubleshooting; C4 Software application; C5 Business process; C6 Organization.
Skill-Based: S1 Process; S2 Organization; S3 Business Application; S4 Software application.
Affective: A1 Software changes; A2 Business Process Changes; A3 Departmental Changes; A4 Organizational Changes; A5 Roles and Responsibility Changes.
Years of Experience: E1 Years in the industry sector; E2 Years with the organization.
The a-priori model does not posit any causality among the constructs; rather,
the constructs are posited as formative constructs of the multidimensional concept of
Expertise. As per the guidelines for identifying formative variables, the constructs
and measures of expertise (i) need not co-vary, (ii) are not interchangeable, (iii) cause
the core construct as opposed to being caused by it, and (iv) may have different
antecedents and consequences in potentially quite different nomological nets (Jarvis,
MacKenzie et al. 2003; Petter, Straub et al. 2007; Cenfetelli and Bassellier 2009).
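To make the formative logic concrete: a formative construct is an index built from its dimensions rather than a factor reflected in them. The sketch below forms Expertise as a weighted sum of standardized construct scores. The weights, data and function names are hypothetical placeholders, not results from this study; in practice the weights would be estimated from the data (e.g. via PLS):

```python
# Illustrative formative index: Expertise as a weighted sum of
# standardized construct scores. Weights are hypothetical placeholders.
import statistics

def standardize(values):
    """Convert raw scores to z-scores (population standard deviation)."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def formative_index(data, weights):
    """data: dict construct -> list of scores (one per respondent)."""
    z = {c: standardize(v) for c, v in data.items()}
    n = len(next(iter(data.values())))
    return [sum(weights[c] * z[c][i] for c in data) for i in range(n)]

# Hypothetical scores for four respondents on the four constructs.
data = {
    "cognitive": [4.2, 5.8, 3.1, 6.5],
    "skill_based": [3.9, 5.6, 2.8, 6.1],
    "affective": [4.5, 6.3, 3.0, 6.7],
    "years": [3.0, 10.0, 1.5, 12.0],
}
# Placeholder weights; a real analysis would estimate these.
weights = {"cognitive": 0.3, "skill_based": 0.25, "affective": 0.4, "years": 0.05}
expertise = formative_index(data, weights)
print(expertise)
```

Because the index is a weighted composite, a construct can carry a small weight (here ‘years’) without invalidating the others, which is consistent with the formative logic that the dimensions need not co-vary.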
Once the expertise model is specified, this study employs the ‘knowledge
sharing’ construct to further validate the expertise construct in its nomological net (as
per formative construct validation guidelines). According to Jarvis et al. (2003), these
other constructs can be either antecedents or consequences of the phenomena under
investigation 2. Thus, consistent with Jarvis et al. (2003) and Bagozzi (1994), and with
the (third) guideline of Diamantopoulos and Winklhofer (2001) for validating
formative constructs in a nomological network, this study next tests the relationship
between expertise and ‘knowledge sharing’ as one of its immediate consequences.
KNOWLEDGE SHARING
1. I regularly share my knowledge of SAP with my colleagues.
2. I often suggest improvements of [name of the business process] to my managers /
colleagues.
3. My colleagues come to me for assistance when they are faced with a work related
issue.
4. I have colleagues and workmates helping me with using SAP for my [name of the
business process] (inversely worded).
5. I regularly contribute to knowledge sharing forums within my organization.
Table 5: Knowledge Sharing Measures
IS success is employed as the application area: the expertise classification that this study derives is used to examine whether groupings based on different levels of expertise demonstrate significant differences in their success evaluations. These discussions are forthcoming in this study. To reduce common method variance, items for expertise, knowledge sharing and IS success were not grouped under their construct headings.
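One practical way to avoid grouping items under construct headings is to interleave them in a randomised order when laying out the questionnaire. The sketch below illustrates this idea; the IS success item texts and the helper name `interleave_items` are illustrative placeholders, not the study's instrument.

```python
import random

# Illustrative subsets of items from three constructs (not the full instrument).
items = {
    "Expertise": ["I can easily adapt to changes in my business process.",
                  "I fully understand the core knowledge necessary for my tasks."],
    "KnowledgeSharing": ["I regularly share my knowledge of SAP with my colleagues.",
                         "I regularly contribute to knowledge sharing forums."],
    "ISSuccess": ["SAP provides output that meets my needs.",   # hypothetical wording
                  "SAP is easy to use."],                       # hypothetical wording
}

def interleave_items(item_map, seed=42):
    """Flatten items from all constructs and shuffle them so that no construct
    heading groups its items together (a step toward reducing common method variance)."""
    flat = [(construct, text) for construct, texts in item_map.items() for text in texts]
    rng = random.Random(seed)  # fixed seed gives a reproducible questionnaire layout
    rng.shuffle(flat)
    return flat

for construct, text in interleave_items(items):
    print(text)
```

A fixed seed keeps the shuffled layout identical across print runs, so every respondent sees the same (de-grouped) item order.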
2 Bagozzi, R. (1994). Structural equation models in marketing research: Basic principles. In Principles of Marketing Research, R. Bagozzi (ed.). Oxford, Blackwell: 317-385. Bagozzi suggests, "After all, the substantive reason behind index construction is likely to be how the index functions as a predictor or predicted variable" (p. 332).
Once the measures of expertise are validated and yield classifications based on levels of expertise, the study explores whether the expertise cohorts demonstrate significant differences in their evaluation of their ES. The reasons for selecting IS success as the 'application' area are several: (i) the natural alliance between success evaluation and expertise, where, in practice, 'expert views' are frequently sought in system evaluations; (ii) respondents holding different views is a key notion posited in IS success studies, yet, according to many, a concept that is under-investigated (e.g. Cameron and Whetten 1983; Grover, Jeong et al. 1996; Seddon, Staples et al. 1999); and (iii) the popularity of IS success studies (e.g. DeLone and McLean 2003; Sabherwal, Jeyaraj et al. 2006; Gable, Sedera et al. 2008; Petter, DeLone et al. 2008), suggesting that this application is relevant and meaningful to a greater community.
This study employs all 27 measures of the IS-Impact measurement model, spanning its four dimensions as shown in Table 6: Organisational Impact (OI), Individual Impact (II), System Quality (SQ), and Information Quality (IQ). The IS-Impact measurement model instrument items are listed in Appendix D.
Dimensions and Definitions
Information Quality: Measures the quality of the Enterprise System's outputs.
System Quality: Measures used to examine the performance of the ES from a technical and design perspective.
Individual Impact: Measures that assess the extent to which the ES has influenced the capabilities and effectiveness of key users on behalf of the organisation.
Organisational Impact: Measures representing the assessment of the extent to which the ES has promoted improvement in organisational results and capabilities.
Table 6: Dimensions of the IS-Impact Measurement Model
RESEARCH METHODOLOGY
In this section of the chapter, the operationalisation of the research model and the application of the survey method (as shown in Figure 1 in Chapter 1) are discussed. The data collection objectives are discussed, followed by the appropriateness of the survey methodology for this study. The following sections present the design of the survey process in detail and the procedures to operationalise the research model constructs. The chapter then discusses the steps taken to minimise common method variance (CMV). Lastly, respondent anonymity and confidentiality are discussed.
A representative sample of companies and respondents was selected that meets the criteria outlined in Chapter 1 (Figure 1). Also, the data collection instrument must be designed in a way that allows the gathering of respondents' perceptual evaluations of their own skills (self-assessment).
In the context of expertise, it is necessary that this study identifies and validates the contributions of the salient dimensions. Thus, rather than selecting a qualitative method, this research adopted a quantitative approach.
Survey research has the potential to add to the inventory of previously well-developed survey research instruments (Ishman 1996). Benbasat et al. (1987) state that such an inventory of instruments allows the Management Information Systems (MIS) field to be more productive and to excel in research without re-inventing the wheel.
Figure 11 depicts the main steps of the survey design. This survey design is
further expanded from the research design as previously shown in Figure 3 (in
Chapter 1). The survey design process includes six steps: 1) design the survey
instrument; 2) select the data sample; 3) validate the content of the survey instrument;
4) pilot test the survey instrument; 5) revise the survey instrument; and 6) deploy the
survey.
[Figure 11: Main steps of the survey design, concluding with 'Deploy Survey']
The following sections discuss each aspect of the survey format and the associated design considerations.
As the literature review showed, much conceptual work has been done on expertise in referent disciplines such as psychology and sociology. This literature provided a generous amount of background material that had not been considered in past self-efficacy and user competence studies.
In pilot testing of the instrument, it was specifically assessed and confirmed that the instructions provided are adequate in prompting respondents to consider 'their' tasks and work environment. Furthermore, this 'generalization' of the questions made the survey instrument easy to complete and comprehend. However, the candidate acknowledges some of the limitations of this approach.
The large number of questions on the expertise model warranted close attention
to suitable wording of each question to ensure that all questions gather information
only on the assigned measure.
Contextual Information
The survey questions included as much contextual information as possible to minimize the potential weaknesses of generalized questions and the disadvantages of the deductive survey approach. Question items were designed with the individual perspective in mind, to relate the questions to the respondents and thereby remove any response bias that comes with 'hearsay'. Furthermore, the survey included an introduction to each success dimension, which made explicit, exemplary statements tailored to the sample organizations.
It was also decided to use the term 'SAP' to refer to the Enterprise System, instead of the version-specific term 'SAP R/3', for two reasons. First, SAP now disassociates itself from specific version labels in its next-generation products and, within the responding organizations, different versions of the SAP software exist even though it is the same Enterprise System application. The use of the single term (SAP) in the survey, without specific versions, eliminated possible confusion among respondents. Secondly, since there is only one Enterprise System application in all the data collection organizations (i.e. SAP), the candidate refrained from using the term Enterprise Systems (or ERP) to generalize the system. The candidate acknowledges the possible confusion between SAP the system and SAP the company. To overcome this limited possibility of confusion, specific instructions were given in the cover letter stating, "henceforth simply referred to as 'SAP' - not to be confused with the company SAP".
A decision regarding scale selection pertains to the length of the scale (e.g. 1 to 5; 1 to 7), and it is usually up to the researcher to select the length of a scale. A 'good' scale should accommodate sufficient variability among the respondents. According to Lissitz and Green (1975), the reliability of a scale increases with the number of choices up to five, but levels off beyond that.
A single scale (a seven-point Likert scale ranging from Strongly Disagree through Neutral to Strongly Agree) was used throughout the survey to reduce its complexity. A seven-point scale is more accurate and yields more information for generating statistical measurements of respondents' attitudes and opinions.3 The scale captures how respondents feel, with Strongly Disagree, Neutral and Strongly Agree anchored at points 1, 4 and 7 respectively, as seen in Table 7. The advantages this study identified in employing a single scale throughout the survey include: 1) ease of understanding, 2) ease of completion, 3) minimal instructions, and 4) a possibly higher response rate.
Mandatory Questions
All questions in the survey instrument were made mandatory; however, the data collection procedures did not provide any facility to check the completeness of a survey submission. Nevertheless, endorsement from senior management and in-person attention to data collection made respondents attentive to completing the survey instrument without missing data. For future studies, the candidate recommends web-based data collection with a facility to check the completeness of the responses.
3 Also see the report entitled "Rating scales can influence results," Quirk's Marketing Research Review, http://www.quirks.com/articles/a1986/19861003.aspx?searchID=4971371&sort=9
CONTENT VALIDATION
Researchers can gather valuable information by conducting a content validity study. Content validation is important to ensure that all individual items of the survey instrument match the intended concepts sufficiently well (Sekaran 2000). Content validity refers to the extent to which the items on a measure assess the same content, or how well the content material was sampled in the measure; it can also be characterised as face validity. As far as content validity is concerned, and following Bollen (1989) and Schouten et al. (2010), all the items that encompass the constructs in this study result from: 1) a thorough review of the literature, and 2) face validity.
Face Validation
This study uses face validation to examine the appropriateness of the questionnaire items' soundness, language and appearance. This is essential for validating the survey instrument: whether it looks valid to the respondents, and whether the language is appropriate, so as to ensure all the questions meet the research objective and can be easily understood by respondents. Previous studies (Lynn 1986; Grant and Davis 1997) recommend a minimum of three experts, with a range of up to ten experts depending on the desired diversity of knowledge. Before deploying the survey, this study conducted a pilot test with a representative sample of employees at Pharma 1. The respondents helped to identify problems with wording or meaning, readability, ease of response and content validity (Schouten, Grol et al. 2010).
In developing the survey instrument, the researcher was aware that questions should be short, simple and specific, as the wording of questions has an important influence on the responses that are given (Williams, Edwards et al. 2003). Difficult questions may produce inaccurate responses, or respondents may fail to complete the questionnaire. Following these guidelines, this study designed the survey questions with a consistent format throughout the instrument and organised the questions logically, without rigidly following the structure of the research model.
SURVEY POPULATION
It was deemed important that this study gather data from all frequent Enterprise System users, for several reasons: (1) gathering data from more than one cohort allows the findings of one cohort to be internally validated against another; (2) comparability of findings demonstrates further justification of the expertise construct; (3) from an Enterprise System success viewpoint, it is essential that all frequent users are canvassed, as the success of the system hinges upon strong, appropriate use of the system; and (4) it allows the candidate to observe differences in system evaluations across the standard hierarchy of employment as well as across levels of expertise.
The survey was designed to seek opinions and views from all frequent users of SAP. It was important that respondents were direct and regular users of the system. As discussed earlier, it was therefore decided to target all operational and management employment cohorts, leaving out the strategic managers and technical staff. Technical staff were not included to avoid any possible bias they could introduce through an inclination to rate System and Information Quality highly, given that those constructs could be considered proxy measures of the performance of their IT departments. Moreover, the sporadic use of the SAP system by strategic staff was deemed inadequate for the software-knowledge aspect of the expertise model.
The single, general-purpose instrument accommodates data collection from the two employment cohorts. A key assumption made in the data collection, and later confirmed, is that all respondents have adequate knowledge to answer all questions on the status of the Enterprise System at the sampled organizations, regardless of their degree of involvement with the SAP system.
Survey Administration
Once the pre-pilot survey instrument was finalized, it was pilot tested with a representative sample of employees at Pharma 1. Feedback received from the workshop participants resulted in changes to the order of the questionnaire and the addition of substantial introductions to each of the main constructs.
The Instrument
The survey instrument included three main sections: (1) respondent demographics; (2) the expertise a-priori model dimensions; and (3) the IS-Impact measurement model questions. The survey gathered demographic data that are useful for descriptive and comparative data analysis: 1) employment status; 2) details of involvement with the SAP system; 3) general employment position description; and 4) number of years with the current organisation.
The IS-Impact measurement model is employed to apply the expertise classification that this study derives, to understand whether groupings based on different levels of expertise demonstrate significant differences in their success evaluations. These discussions are forthcoming in Chapter 4. The 27 questions on IS success are listed in Appendix D.
The survey instrument was circulated to all 350 direct operational and
management users of the three organizations. Altogether 220 valid responses were
captured, yielding a response rate of 63%.
CHAPTER SUMMARY
This chapter described the research model and data collection methodology. A
detailed view of the expertise research model is provided with justifications on each
of the salient dimensions. The chapter also described the procedure of the selection of
dimensions using referent disciplines and the development of measures using
analogous literature from the IS discipline. To test the research model and hypotheses, data were collected using the survey technique in a questionnaire format. The survey
methodology is discussed in relation to the objectives of the research described in
chapter 1. The appropriateness of the survey methodology for this research has been
discussed in great detail. This chapter also outlined the detailed attention paid by the
candidate in formulating questions, designing the format and in operationalizing the
instrument. Therefore, the researcher believes that the validity of the data is
satisfactory, and that the data can contribute to strong research findings.
Chapter 4: Data Analysis and Results
This chapter describes the quantitative analysis including empirical results and
hypotheses tests. The chapter is divided into the following sections. The first part
focuses on descriptive statistics. In the next section, the structural model including
nomological validity is explained. The formative construct validity reported in this
chapter follows the accepted guidelines of Cenfetelli and Bassellier (2009) and
Klarner et al. (2013). Following their guidelines, inter-construct reliability was first established, ensuring that the items as designed and conceived measure what they are supposed to. Next, the measurement model was established. This
determines how the individual items contribute to the formation of the construct.
Next, the structural model was tested. The structural model assesses the unique
contribution that each construct makes towards expertise. Subsequently this study
conducted the “application study” to uncover the findings that are valuable to this
research and discuss the research findings.
[Figure: Phases of the analysis - Prepare Data (create data file, enter data); Describe Data (characterize sample); Measurement Model (test formative constructs)]
In the first phase, the data were prepared for the analysis: this study created a data file, entered the data and performed the cleaning process. In the second phase, the data were described in a manageable form. In the further, detailed analysis, this study
measured the research model by validating the constructs according to the available formative tests. All tests were conducted using SPSS. In the next phase, the study model was tested using content and construct validity tests. Lastly, the three groupings (novice, intermediate and expert) were applied to the IS-Impact measurement model to test the hypotheses of this study.
Data were collected from 220 daily users of SAP in managerial and operational groups. Three medium-sized organisations in India were involved. Only organisations that had implemented an ES were chosen. Also, only the managerial and operational groups were chosen, since they are the daily and direct users of the system.
DATA PREPARATION
The study received 225 completed questionnaires, of which 220 were used. The number of usable questionnaires represented an overall response rate of 63 percent, which was considered sufficient. The data were prepared in Microsoft Excel and then imported into SPSS for analysis.
The survey data was screened for unusual patterns, non-response bias and
outliers. The responses were reviewed to determine if the respondents were diligent in
completing the questions. Of the 225 responses, 5 were removed because of missing
data and perceived frivolity. Removal of these responses left 220 usable surveys. The following sections discuss the analyses in detail through five topics, beginning with the descriptive statistics.
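The screening and response-rate arithmetic can be verified in a few lines; only the counts come from the study, and the flag label is illustrative.

```python
# Counts from the study: 350 surveys distributed, 225 returned,
# 5 flagged for missing data or perceived frivolity (flag label illustrative).
surveys_distributed = 350
returned = 225
flagged_invalid = 5

usable = returned - flagged_invalid
response_rate = usable / surveys_distributed

print(usable)                      # 220
print(round(response_rate * 100))  # 63 (per cent)
```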
DESCRIPTIVE STATISTICS
This study uses descriptive statistics to describe the basic features of this data.
This section outlines the demographic statistics through the classification of
employment cohorts, working experience and the data distribution. The intentions of
the analysis are: 1) to demonstrate that the sample has all appropriate cohorts to
examine expertise across user groups; 2) to show that the sample sufficiently
represents the regular ES users; and 3) to reveal that all user groups could be usefully
categorised into three groups of expertise. The subsequent sections discuss the
descriptive statistics in further detail.
Of the respondents, 88 (40%) belonged to the managerial cohort.
The standard deviation is a measure of statistical dispersion, measuring how widely spread the values in a data set are. The purpose of a standard deviation is to express, on a standardised scale, how different the actual data are from the expected average value. If the data points are all close to the mean, then the standard deviation is close to zero. If many data points are far from the mean, then the standard deviation is far from zero. If all the data values are equal, then the standard deviation is zero. Table 9 shows the mean and standard deviation values for the individual measures. The standard deviations in Table 9 indicate that the respondents answered in a broadly similar way.
Affective (N = 220 for all items)
A1: I can easily adapt to any changes to the SAP system required for the [name of the business process]. (Mean 5.21, SD 1.03)
A2: I can easily adapt to changes in my [name of the business process]. (Mean 4.78, SD 0.91)
A3: I can easily adapt to changes in my department, related to my [name of the business process]. (Mean 5.12, SD 0.91)
A4: I can easily absorb any changes in my organizational structure, related to [name of the business process]. (Mean 5.71, SD 1.001)
A5: I am ready to accept new roles and responsibilities related to my [name of the business process] when necessary. (Mean 5.11, SD 0.9)
Cognitive
C1: I fully understand the core knowledge necessary for [name of the business process]. (Mean 5.16, SD 0.89)
C2: My knowledge of SAP is more than enough to perform my day-to-day functioning of the [name of the business process]. (Mean 5.12, SD 0.883)
C3: I rarely contact the SAP helpdesk for software-related problems in relation to the [name of the business process]. (Mean 5.45, SD 0.912)
C4: I rarely make mistakes when completing my [name of the business process] using SAP. (Mean 5.34, SD 0.901)
C5: I have an in-depth knowledge of the functions of the [name of the business process] that I must do on a day-to-day basis. (Mean 5.55, SD 1.012)
C6: I have a good knowledge of the organizational goals, procedures and guidelines. (Mean 5.22, SD 0.98)
Knowledge Sharing
KS1: I regularly share my knowledge of SAP with my colleagues. (Mean 4.92, SD 1.02)
KS2: I often suggest improvements of [name of the business process] to my managers / colleagues. (Mean 4.77, SD 0.892)
KS3: My colleagues come to me for assistance when they are faced with a work-related issue. (Mean 4.22, SD 0.782)
KS4: I have colleagues and workmates helping me with using SAP for my [name of the business process] (inversely worded). (Mean 4.13, SD 0.76)
KS5: I regularly contribute to knowledge sharing forums within my organization. (Mean 4.66, SD 0.74)
Table 9: Suitability of the measures
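The behaviour of the standard deviation described above can be illustrated with Python's standard library, using made-up Likert responses rather than the study's data:

```python
from statistics import stdev

# Illustrative 7-point Likert responses (not actual survey data).
tight = [5, 5, 5, 5, 5]           # identical answers
close = [5, 5, 4, 5, 6, 5, 5]     # clustered near the mean
spread = [1, 7, 2, 6, 1, 7]       # far from the mean

print(stdev(tight))                # 0.0 -- all values equal
print(round(stdev(close), 2))      # 0.58 -- close to zero
print(round(stdev(spread), 2))     # 2.97 -- far from zero
```

This mirrors the interpretation above: identical values give a standard deviation of exactly zero, clustered values a value near zero, and widely scattered values a large one.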
Data Distribution
To determine whether or not the data are normally distributed, the normal probability plot and scatterplot were examined. All points lie in a reasonably straight diagonal line from the bottom left to the top right, which suggests no major deviation from normality. The scatterplot of standardised residuals shows the same condition.
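A normal probability (Q-Q) check of this kind can be approximated without specialist software by correlating the sorted data with theoretical normal quantiles; a near-straight-line relationship (correlation close to 1) indicates approximate normality. This sketch uses synthetic data, not the survey responses:

```python
import random
from statistics import NormalDist, mean

random.seed(0)
# Synthetic, roughly normal sample standing in for the survey scores.
data = sorted(random.gauss(5.0, 1.0) for _ in range(200))

n = len(data)
# Theoretical standard-normal quantiles at plotting positions (i - 0.5) / n.
theoretical = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# For normally distributed data the Q-Q relationship is close to a straight
# line, so the correlation between sample and theoretical quantiles is near 1.
r = pearson_r(data, theoretical)
print(r > 0.98)   # True
```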
Non-Response Bias
Non-response bias is a serious concern for studies based on data collected
through surveys. Past studies have shown that older persons, women, individuals from
upper social classes, and persons with higher education are more prone to return and
respond to survey questionnaires. In this study, the non-respondents were sent a
reminder email 10 days after the initial surveys were collected. We established that
non-response bias is unlikely, given that respondents and non-respondents have
almost identical characteristics, where the percentage of management and operational
non-respondents were 38% and 62% respectively. Similarly, the average 'sector-wide' experience of a non-respondent manager was 14.2 years, while respondent managers had 14 years of experience. Respondent operational staff had, on average, 3.2 years of experience, while non-respondent operational staff had 3.6 years on average. Therefore, all indicators suggest that the respondent sample
is a representative sample of the population.
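The comparison of respondent and non-respondent profiles can be formalised with a two-proportion z-test. The sketch below uses the reported managerial shares (40% of 220 respondents versus 38% among non-respondents); the non-respondent total of 125 (350 distributed minus 225 returned) and the rounded count of 48 are reconstructions for illustration, not figures reported in the study.

```python
from math import sqrt
from statistics import NormalDist

# 88 of 220 respondents were managerial (40%). Non-respondent counts are
# reconstructed: 125 non-respondents assumed, 48/125 = 38.4% managerial,
# approximating the reported 38% share.
resp_mgr, resp_n = 88, 220
nonresp_mgr, nonresp_n = 48, 125

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(resp_mgr, resp_n, nonresp_mgr, nonresp_n)
print(p > 0.05)   # True: no evidence of non-response bias on this dimension
```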
Thus, results of the data analysis are arranged under three headings: (i) model and construct validation, established through content validity, discriminant and criterion validity, structural model testing, and nomological net testing using knowledge sharing as a consequence of expertise; (ii) development of a classification of respondents based on their level of expertise, using two complementary methods, the classical method and cluster analysis; and (iii) application of the model and its expertise groupings to IS success. To the extent that the respondent classification derived in (ii) is meaningful, and that the expertise groups in (iii) demonstrate significant differences in their success evaluations, further validity and reliability are provided for the expertise construct derived in step (i).
This study used the SmartPLS application for the modeling of the SEM. To evaluate the partial least squares (PLS) estimation, the research follows the suggestions of Chin (1998) and Henseler et al.
(2009). The research model (set out in Chapter 3) was tested by examining the
magnitude and significance of the structural paths in the PLS analyses and the
percentage of the variance explained in the constructs. In the research model, four
constructs were modeled as formative. Also, this study tested the nomological net of
expertise using knowledge sharing as its immediate consequence.
Pilot Testing
Before deploying the survey, this study conducted a pilot test in a representative
sample of 21 employees at Pharma 1. The respondents helped to identify problems
with wording or meaning, readability, ease of response and content validity
(Schouten, Grol et al. 2010).
1. In pilot testing, it was specifically assessed and confirmed that the instructions provided are adequate in prompting respondents to consider 'their' tasks and work environment. Furthermore, this 'generalization' of the questions made the survey instrument easy to complete and comprehend. However, the candidate acknowledges some of the limitations of this approach.
2. Pilot testing suggested that respondents require a higher level of concentration when answering such questions, and alluded to issues with positioning them earlier in the survey instrument.
3. An important outcome of pilot testing was the facilitation of content analysis. This study paid close attention to content validity through a thorough literature review that yielded themes and items that appear logical for attaining content validity.4

4 The four-step approach followed here is analogous to the Q-sort approach suggested by Kendall, K. E., Buffington, J. R., et al. (1987), "The relationship of organizational subcultures to DSS user satisfaction," Human Systems Management 7(1): 31-39; Kendall, J. E. and Kendall, K. E. (1993), "Metaphors and methodologies: Living beyond the systems machine," MIS Quarterly 17(2): 149; and Tractinsky, N. and Jarvenpaa, S. L. (1995), "Information systems design decisions in a global versus domestic context," MIS Quarterly 19(4): 28.
Convergent validity concerns whether a measure relates to what it should theoretically relate to, and therefore whether the scales relate to items that should be correlated. Discriminant validity is the degree to which two or more measurements designed to measure different theoretical constructs are not correlated. This test estimates the degree to which a measurement scale reflects only characteristics of the construct being measured, and not attributes of other constructs.
To demonstrate the reliability and validity of the measurement scale, the study undertook specific analyses using SPSS and SmartPLS. The analyses include confirmatory factor analysis for each construct, to verify that individual items represent the same theoretical concept. The study tests the hypotheses of the estimated model using path coefficients (correlations), effect sizes and R², together with the statistical significance levels from the bootstrapping procedure.
5 See Marakas et al. (2007) for a discussion of why the CSE and GCSE constructs must be conceived as formative. Marakas et al. also discuss (Table 1, page 20) the key differences between formative and reflective constructs under four properties: direction of causality, interchangeability of indicators/items, covariation amongst indicators, and nomological net. This study employs all four properties.

Investigating Multicollinearity
The internal consistency of the formative constructs was assessed through a multicollinearity test and a test of indicator validity (path coefficient significance) (Petter, Straub et al. 2007). Multicollinearity indicates that the specification of indicators was not accomplished successfully, as high covariance might mean that indicators explain the same aspect of the domain (Andreev, Heart et al. 2009). The magnitude of multicollinearity can be examined using the variance inflation factor (VIF) and the tolerance value, which is the reciprocal of the VIF. A VIF value below 10 indicates the absence of multicollinearity. The significance of the path coefficients was statistically tested using a t-test; the test of coefficient significance and the calculation of the t-statistic were performed by applying the bootstrapping procedure.

6 The candidate acknowledges that some (e.g. Bollen and Lennox (1991), as cited in Petter et al. (2007)) suggest retaining non-significant indicators in attention to completeness and content validity.
A commonly employed heuristic is to accept constructs with loadings of 0.70 or more, which implies that there is more shared variance between the dimension and its manifest variable than error variance (Kaiser 1974; Carmines and Zeller 1979; Hulland 1999; Dwivedi, Choudrie et al. 2006).
The VIF statistic was used to determine whether the formative indicators were too highly correlated, because if the multicollinearity between the construct indicators is too high, it can destabilize the research model (Roberts and Thatcher 2009). The maximum VIF value for the Skill-based construct was 2.45. The VIF values for the Affective construct ranged from 2.34 to 4.2, for Cognitive from 1.10 to 4.1, and for Knowledge Sharing from 1.39 to 2.67. All values are well below the threshold of 10 suggested by the traditional rule of thumb, indicating that there is no threat to the validity of these constructs.
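For the simple case of two indicators, the VIF reduces to 1 / (1 - r²), where r is the Pearson correlation between them, since the R² from regressing one indicator on the other is just r². The sketch below demonstrates this with illustrative indicator scores, not the study's data:

```python
from statistics import mean

# Synthetic 7-point scores on two formative indicators (illustrative only).
x1 = [5, 4, 6, 5, 7, 3, 5, 6, 4, 5]
x2 = [4, 4, 5, 5, 6, 3, 5, 5, 4, 6]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

def vif_two_indicators(xs, ys):
    """VIF = 1 / (1 - R^2); with only two indicators, R^2 = r^2."""
    r = pearson_r(xs, ys)
    return 1.0 / (1.0 - r ** 2)

vif = vif_two_indicators(x1, x2)
print(vif < 10)   # True: below the rule-of-thumb threshold used in the chapter
```

With more than two indicators, each indicator's R² would instead come from a multiple regression on all the others, which is what statistical packages compute.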
Affective
A1: I can easily adapt to any changes to the SAP system required for the [name of the business process]. (VIF 3.12)
A2: I can easily adapt to changes in my [name of the business process]. (VIF 4.2)
A3: I can easily adapt to changes in my department, related to my [name of the business process]. (VIF 2.34)
A4: I can easily absorb any changes in my organizational structure, related to [name of the business process]. (VIF 3.56)
A5: I am ready to accept new roles and responsibilities related to my [name of the business process] when necessary. (VIF 2.67)
Cognitive
C1: I fully understand the core knowledge necessary for [name of the business process]. (VIF 4.1)
C2: My knowledge of SAP is more than enough to perform my day-to-day functioning of the [name of the business process]. (VIF 3.74)
C3: I rarely contact the SAP helpdesk for software-related problems in relation to the [name of the business process]. (VIF 2.34)
C4: I rarely make mistakes when completing my [name of the business process] using SAP. (VIF 1.103)
C5: I have an in-depth knowledge of the functions of the [name of the business process] that I must do on a day-to-day basis. (VIF 4.01)
C6: I have a good knowledge of the organizational goals, procedures and guidelines. (VIF 3.13)
Knowledge Sharing
KS1: I regularly share my knowledge of SAP with my colleagues. (VIF 2.34)
KS2: I often suggest improvements of [name of the business process] to my managers / colleagues. (VIF 2.13)
In the second part, the structural (inner) model was tested by estimating the paths between the constructs in the model, to determine the significance as well as the predictive ability of the model. With the analysis of the measurement model completed, the structural model of the relationships between the various latent constructs was analysed. To determine the significance of the paths, the bootstrapping re-sampling technique (400 re-samples) was run in PLS. All the paths were significant, which indicates that the research model is empirically supported by the data. Table 11 displays the results of the structural model testing of the research model.
The individual path coefficients of the PLS structural model can be interpreted as standardised beta coefficients of ordinary least squares regressions. The structural paths provide a partial empirical validation of the theoretically assumed relationships between latent variables (Henseler and Fassott 2009). To determine the confidence intervals of the path coefficients and for statistical inference, the re-sampling technique of bootstrapping is used (Tenenhaus, Vinzi et al. 2005). This research used the PLS technique to validate the structural model and to test the hypothesised relationships, as this procedure is able to model latent constructs under conditions of small to medium sample size (Limayem, Khalifa et al. 2004). The result shows how well the measures relate to each construct and whether the hypothesised relations discussed in the previous sections hold empirically. It also provides more accurate estimates of the paths among constructs, which may be biased when using a multiple regression technique. Tests of significance for all paths were conducted using the bootstrap re-sampling method.
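The bootstrap logic can be sketched in a few lines. This is an illustrative stand-in, not the SmartPLS procedure: a full PLS run re-estimates the entire outer and inner model on every re-sample, whereas here a standardised OLS slope on synthetic data stands in for a single structural path, and the function name is our own.

```python
import numpy as np

def bootstrap_path_t(x, y, n_boot=400, seed=0):
    """Bootstrap t-statistic for a standardised path (beta) from x to y.

    Illustrative sketch only: a simple standardised slope stands in for
    one structural path; 400 re-samples mirror the setting used above.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    beta_full = float(zx @ zy) / n           # standardised slope = correlation
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # re-sample respondents with replacement
        bx, by = zx[idx], zy[idx]
        bx = (bx - bx.mean()) / bx.std()
        by = (by - by.mean()) / by.std()
        betas[b] = float(bx @ by) / n
    se = betas.std(ddof=1)                   # bootstrap standard error
    return beta_full, beta_full / se         # estimate and t-statistic
```

A path is conventionally judged significant when the resulting t-statistic exceeds the critical value for the chosen alpha level.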
The study model, now including years of experience, was next tested using the Partial Least Squares (PLS) procedure (Wold 1989), employing the SmartPLS software (Ringle 2005). PLS facilitates concurrent analysis of (i) the relationships between constructs and their corresponding measures and (ii) the empirical relationships among the model constructs. The significance of all model paths was tested with the bootstrap re-sampling procedure (Gefen, Straub et al. 2000; Petter, Straub et al. 2007). Table 11 reports the outer model weights, outer model loadings, and t-statistics. From Table 11 it is observed that, with the exception of years of experience, loadings are generally large and positive, with each dimension contributing significantly to the formation of the construct.
7 The two criterion measures were included at the end of the instrument, separate from the other items, with a view to minimizing possible common method variance.
8 It is noted that the single reverse-coded item correlated negatively with the criterion items, as expected.
A summary of the results is shown in Figure 13. The significant path is indicated with an asterisk (*).
Figure 13 depicts the structural model with the path coefficient (β) between Expertise and Knowledge Sharing and the R² for Knowledge Sharing, at a significance level of 0.05. Supporting our propositions, and further validating the construct, the results show that expertise is significantly associated with knowledge sharing (β = 0.602, p < 0.005, t = 14.21); the squared multiple correlation coefficient (R² = 0.34) indicates that expertise explains 34% of the variance in the endogenous construct.
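The relationship between these reported statistics can be illustrated on synthetic data. In this simplified sketch (our assumption, not the full PLS estimation) there is a single standardised predictor, so the path coefficient equals the Pearson correlation and the variance explained is its square:

```python
import numpy as np

# Synthetic stand-ins for the latent construct scores (not the thesis sample).
rng = np.random.default_rng(0)
expertise = rng.normal(size=220)
sharing = 0.6 * expertise + rng.normal(scale=0.8, size=220)

# Standardise both variables, as for a standardised beta coefficient.
zx = (expertise - expertise.mean()) / expertise.std()
zy = (sharing - sharing.mean()) / sharing.std()

beta = float(zx @ zy) / len(zx)   # standardised slope = correlation
r_squared = beta ** 2             # share of variance explained in the endogenous construct
```

With several predictors, R² is no longer a single squared coefficient, but the interpretation as "share of variance explained" carries over.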
10 Bagozzi (1994, p. 332) suggests, “After all, the substantive reason behind index construction is likely to be how the index functions as a predictor or predicted variable” (Bagozzi, R. (1994). Structural Equation Models in Marketing Research: Basic Principles. In Principles of Marketing Research, R. Bagozzi (ed.). Oxford, Blackwell: 317-385).
[Figure 13: Structural model. Expertise, formed by Cognitive Competence, Skill-Based, Affective and Years of Experience, predicts Knowledge Sharing: β = 0.602**, t = 14.21, R² = 0.34; ** significant at 0.005.]
In summary, the results of the analyses confirm the validity and reliability of our measurement of expertise using the cognitive competence, affective and skill-based constructs. However, despite its prominence in past literature as a determining factor of one's expertise, ‘years of experience’ does not make a significant contribution to the expertise of an IS user. This may be attributed to the dynamic nature of contemporary Enterprise Systems, where the pace of technology evolution outstrips expertise gained through years of experience. It appears that, unless other criteria are fulfilled, years of experience alone does not contribute to one's IS expertise. On the other hand, the ‘affective’ construct is the single strongest indicator of IS expertise, highlighting the importance of the socio-behavioural characteristics of expertise. Cognitive competence, though significant and substantial, makes a lesser contribution than the ‘affective’ and ‘skill-based’ constructs.
Applying this notion to the study constructs, the study first calculates the mean score of each construct for every respondent. Next, the sample mean and sample standard deviation are calculated for each construct. The classification in Table 12 is derived using the following simple rules: Novice = respondent's construct mean < (sample mean − sample standard deviation); Expert = respondent's construct mean > (sample mean + sample standard deviation). The remainder are considered Intermediates. Table 12 shows the expertise classification derived for each of the four constructs. Furthermore, this study derives a ‘composite construct of expertise’ using the three variables Affective, Cognitive and Skill-based. This was deemed appropriate because the constructs were conceived as formative and can be added to derive the overarching construct of expertise. The composite classification is then employed to compare results in the forthcoming cluster analysis.
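The classical mean-plus/minus-one-standard-deviation rule above can be sketched as follows (synthetic scores rather than the thesis sample; the function name is illustrative):

```python
import numpy as np

def classify_expertise(scores):
    """Classify respondents as novice / intermediate / expert using the
    classical cut-offs described above: below (mean - sd) is a novice,
    above (mean + sd) is an expert, and the remainder are intermediates.

    `scores` is a 1-D array of per-respondent mean construct scores.
    """
    scores = np.asarray(scores, dtype=float)
    mu, sd = scores.mean(), scores.std(ddof=1)
    labels = np.full(scores.shape, "intermediate", dtype=object)
    labels[scores < mu - sd] = "novice"    # more than one sd below the sample mean
    labels[scores > mu + sd] = "expert"    # more than one sd above the sample mean
    return labels
```

The same rule applied to the summed Affective, Cognitive and Skill-based scores yields the composite classification described above.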
The distribution of percentages arrived at using the classical method for the groupings of novices, intermediates and experts is almost identical across the three
11 The literature suggests the use of step-wise clustering (over other methods such as K-means and hierarchical clustering) in instances where the objective is more exploratory than confirmatory (Punj, G. and D. W. Stewart (1983). "Cluster Analysis in Marketing Research: Review and Suggestions for Application." Journal of Marketing Research 20(May): 134-148).
12 As stated earlier, this criterion item relates to a quasi third-party evaluation of one's expertise (akin to the observer evaluation method of expertise in Figure 1).
‘expert views’ are frequently sought in system evaluations, (ii) respondents having different views is a key notion purported in IS success studies, yet, according to many, a concept that is under-investigated (e.g. Cameron and Whetten 1983; Grover, Jeong et al. 1996; Seddon, Staples et al. 1999), and (iii) the popularity of IS success studies (e.g. DeLone and McLean 2003; Sabherwal, Jeyaraj et al. 2006; Gable, Sedera et al. 2008; Petter, DeLone et al. 2008) suggests that this application is relevant and meaningful to a greater community. To measure IS success, this study employs the 27 measures of the IS success model of Gable, Sedera and Chan (2008), listed in Appendix D and collected from the survey respondents. The Gable et al. (2008) IS Success model too is conceptualized as a formative, multidimensional index comprising four dimensions: Individual Impact, Organizational Impact, System Quality and Information Quality. This multidimensional conception of success has garnered endorsement in recent literature; for example, Petter et al. (2008) cite the Gable et al. (2008) model as one of the most comprehensive, and most comprehensively validated, IS success measurement models to date.
Table 13 shows significant differences between the Experts, Intermediates and Novices in relation to Information Quality, Individual Impact and Organizational Impact (the exception being System Quality). These observed differences concur with our proposition that users with different levels of expertise evaluate the same system differently.
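A group comparison of this kind can be illustrated with a one-way ANOVA F statistic, computed here from first principles on synthetic ratings. This is only a sketch of the underlying computation, not the tests reported in Table 13:

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic across expertise groups.

    F is the ratio of between-group to within-group mean squares; a
    large F indicates that the group means differ more than chance
    variation within groups would suggest.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_obs = np.concatenate(groups)
    grand = all_obs.mean()
    n_total, k = len(all_obs), len(groups)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n_total - k
    return (ss_between / df_b) / (ss_within / df_w)
```

In practice the F value would be compared against the critical value for (k − 1, N − k) degrees of freedom at the chosen alpha level.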
CHAPTER SUMMARY
This study sought to conceptualize, measure and apply the notion of Contemporary Information Systems User Expertise. Our discussion of the conceptual framework highlighted the need to revisit the notions of user expertise in contemporary IS. Most past studies of computer self-efficacy and user competence focus on Function IT (e.g. spreadsheets and word processing), highlighting the need to re-conceptualize user expertise of a complex, contemporary, organization-wide Information System (of which the Enterprise System is an archetype). As Marakas et al. (2007) highlight, “...for business and information systems, real world tasks are neither simple nor single domain focussed. Rather, they often draw on multiple skill sets and require an individual to be able to perform tasks that span several skill domains...” (p. 40). Our conceptualization, measurement and application of Contemporary Information Systems User Expertise are driven to address this gap in research.
This research conceived both the model constructs and their measures as formative, manifested in extensive attention to the completeness and necessity of the constructs and measures of expertise. To ensure this, the expertise model specification and validation proceeded from an inclusive view of expertise, commencing with the three theoretical foundations of theories of learning (Kraiger, Ford et al. 1993) employed in past studies. Conceived primarily through a ‘system centric’ viewpoint, the study presented a conceptual framework through which IS expertise can be understood (Figure 1 in Chapter 1).
The literature review identified the constructs of expertise, consistent with past studies. Conceptual arguments that drew on past research, combined with the citation analysis, suggested the sufficiency of the three constructs for developing specific measures of contemporary IS user expertise. This study also included years of experience, purely as an exploratory exercise to test its relevance and its contribution to contemporary Information Systems user expertise. The a-priori model was tested using survey data from 220 operational and managerial users representing three SAP-using companies, conforming to formative data analysis techniques and corroborating evidence across multiple data analysis methods. In addition, this study investigated the nomological relationship between expertise and one of its immediate consequences, knowledge sharing, demonstrating further validity of our expertise construct.
This study next sought to derive a simple, yet useful classification of expertise. The study classified the respondent sample into three groups based on expertise, employing (i) the classical method using standard deviations and mean scores and (ii) an exploratory cluster analysis; the two methods yielded almost identical results. The classification of user expertise into three groups, useful in itself, provides further credibility to the constructs and measures of our expertise model. Next, this study applied the expertise model and the classification of users in the IS success domain, exploring whether experts, intermediates and novices perceive their information system's success differently.
Chapter 5: Conclusions, Implications and Limitations
This chapter summarizes the research, outlines its possible contributions and limitations, and suggests follow-on work. It begins with a summary of the research and subsequently addresses the generalizability of the findings. This is followed by a discussion of the major implications for both research and practice. Next, the limitations of the research are summarized and possible future research directions are addressed. The section on future research suggests alternative methods to strengthen the findings of this research and outlines additional related research questions that might be addressed with new methods and new data.
RESEARCH SUMMARY
This study sought to conceptualize, measure and apply the notion of Contemporary Information Systems User Expertise. The discussion of the conceptual framework highlighted the need to revisit the notions of user expertise in contemporary IS. Most past studies of computer self-efficacy and user competence focus on Function IT (e.g. spreadsheets and word processing), highlighting the need to re-conceptualize user expertise of a complex, contemporary, organization-wide Information System (of which the Enterprise System is an archetype). As Marakas et al. (2007) highlight, “...for business and information systems, real world tasks are neither simple nor single domain focussed. Rather, they often draw on multiple skill sets and require an individual to be able to perform tasks that span several skill domains...” (p. 40). The conceptualization, measurement and application of Contemporary Information Systems User Expertise in this study are driven to address this gap in research.
The main hypothesis of the study is that Information Systems users have significantly different levels of expertise, and that they can be usefully classified according to their degree of proficiency. Thus, the study expected that, if the derived classification is correct and meaningful, the evaluations such users make of a system would also differ significantly. The study design and the research model were derived to accommodate this hypothesis.
Through the driving research hypothesis, two research questions are derived:
The answer to the first research question was achieved through the development
of an expertise measurement model, which led to the derivation of a classification
method that can be used to understand expertise cohorts. Once the expertise
characteristics were determined and the cohorts were identified, the study next sought
Once the boundaries of the current research were established through a system-related point of view, the study next developed the constructs necessary to measure the cognitive competence, skill-based and affective dimensions. The current study employed learning theory, self-efficacy theory, expertise constructs from social psychology, and the concepts of user competence. Specifically, the overall framework for the study was derived from the concepts of learning theory proposed by Kraiger, Ford et al. (1993). The result of this phase was an a-priori model with four constructs. This research conceived both the model constructs and their measures as formative, manifested in extensive attention to the completeness and necessity of the constructs and measures of expertise. To ensure this, the expertise model specification and validation proceeded from an inclusive view of expertise, commencing with the three theoretical foundations of theories of learning (Kraiger, Ford et al. 1993) employed in past studies.
Next, the measures for the constructs were derived from the literature. The study developed a 22-item scale to measure the constructs of the expertise model (skill-based, affective and cognitive) and the consequence of expertise (in this study, knowledge sharing). The instrument also included two items designed as criterion measures. All 22 items used a seven-point Likert scale. Though the items are grouped under their constructs in Appendix C for the reviewer's convenience, the actual survey instrument did not group or label the items, to minimize common method bias.
The primary observations gathered through the literature review of measures and constructs qualified the constructs and measures of the current study. Conceptual arguments that drew on past research suggested the sufficiency of the three constructs for developing specific measures of contemporary IS user expertise. The study also included years of experience, purely as an exploratory exercise to test its relevance and its contribution to contemporary Information Systems user expertise. The a-priori model was tested using survey data from 220 operational and managerial users representing three SAP-using companies, conforming to formative data analysis techniques and corroborating evidence across multiple data analysis methods. In addition, the study investigated the nomological relationship between expertise and one of its immediate consequences, knowledge sharing, demonstrating further validity of the expertise construct.
The current study next sought to derive a simple, yet useful classification of expertise. The study classified the respondent sample into three groups based on expertise, employing the classical method (method 1) using standard deviations and mean scores and an exploratory cluster analysis (method 2); the two methods yielded almost identical results. The classification of user expertise into three groups, useful in itself, provides further credibility to the constructs and measures of our expertise model. Next, the study applied the expertise model and the classification of users in the IS success domain, exploring whether experts, intermediates and novices perceive their information system's success differently.
- A conceptual framework was designed to identify the focus of the study context (Chapter 3).
- Respondents were grouped, based on their expertise, into three groups using the standard deviation method (Chapter 4).
- Respondent groups demonstrated statistically significant differences for the dimensions of the IS-Impact model (Chapter 4).
Table 14: Research Questions, Method and Where Reported
This research:
Moreover,
13
Recognizing that most past studies focussing on user competence and self-efficacy had
employed classroom experiments using college graduates.
IMPLICATIONS
This study has several implications for both research and practice. These contributions can be discussed under four headings: (1) implications of identifying the characteristics necessary to assess the expertise of an end user of a contemporary Information System, (2) implications of this classification for system success and evaluation studies, (3) implications for methodology, and (4) implications for practice. The sections below discuss each of these points.
Therefore, to the extent that the expertise model and its constructs are robust across other contemporary systems, contexts, and lifecycle phases, user expertise may serve as a validated dependent, mediating or moderating variable in ongoing research.
Next, the classification method that this study derived provides a tentative guideline on how one could identify an expert in an organization. It is tentative because the current guidelines require further validation in diverse circumstances. It is noted that current methods for identifying expertise in Enterprise IT are based on ‘classical methods’; in other words, most organizations conduct ‘tests’ to gauge user knowledge. Such tests are skewed heavily towards ‘product knowledge’ and do not project the true picture of an expert in an organizational IS.
Similarly, past methods for classifying respondents based on expertise did not provide clear cut-off values for self-evaluated respondents. The classification schema, triangulated through the criterion measures and standard deviations, provides clear cut-off values that can be employed in future studies.
Similarly, the study results demonstrate that those with different levels of expertise perceive system success differently. For the decades of IS success research that have contributed a wealth of findings, this means that success differs depending on the evaluator's perspective. Though the same message was echoed by Cameron and Whetten (1983), who identified the ‘perspective of success’ as a major question that must be considered in evaluations, it has been largely ignored in IS success studies, which have primarily focussed on construct validation.
First, most studies in the domain of expertise or system success seek a causal or process relationship. Such studies typically examine the relationships between constructs using methods such as regression, correlation and/or partial least squares. This study's use of the IS success construct is unique in that IS success was employed as the ‘application area’ for the findings identified through the expertise model.
Implications to Practice
Our study makes several contributions to practice. (i) It provides a meaningful way of understanding expertise in a contemporary IS. (ii) Practitioners could employ the model to emulate ‘expert qualities’ to assist novices and intermediates to perform at higher levels and ultimately become experts. (iii) It highlights that, since one's IS expertise does not necessarily depend on innate abilities or years of experience, productivity improvements sought through IS can be achieved by appropriate interventions. (iv) This study also highlighted that any program geared toward improving performance would require interventions focussed not only on enhancing system-related skills, but also on more behavioural aspects (in this study, affective and skill-based). Finally, (v) for those practitioners engaged in system evaluations, this study provides evidence that experts, intermediates and novices perceive system success differently. These practical considerations are elaborated below.
First, for practice, deriving constructs that describe one's expertise provides a meaningful way of understanding expertise in a contemporary IS. The current understanding of expertise is highly focussed on the cognitive skills of an employee; this heavy dependence on cognitive skills is particularly true for operational staff. This study demonstrated that, though cognitive skills are important, the motivational and skill-based constructs make a greater contribution to describing the expertise construct.
Second, the study highlights that, since one's IS expertise does not necessarily depend on innate abilities or years of experience, productivity improvements sought through IS can be achieved by appropriate interventions. Prior literature has placed much focus on years of experience as a strong contributor to expertise. To the contrary, this study found that years of experience is non-significant and makes no contribution at either the operational or the management staff level. This could be attributed to the dynamism of the Information Systems discipline, where the evolution of the system outpaces the capabilities derived through years of experience.
Third, this study highlighted that any program geared toward improving performance would require interventions focussed not only on enhancing system-related skills, but also on more behavioural aspects (in this study, affective and skill-based). Fourth, for those practitioners engaged in system evaluations, this study provides evidence that experts, intermediates and novices perceive system success differently.
First, the data collection method may be perceived as a limitation of the study. The study model was developed and validated with data collected from only three organizations, all using the same Enterprise System (i.e. SAP) and representing the same industry sector (i.e. manufacturing). The homogeneity of the context helped the study validate the measures without the effect of extraneous variables; yet, it may raise questions about whether the initial list of constructs and measures used in the development of the a-priori model was complete and representative of contemporary IS in general, and whether the final list of measures and constructs is, indeed, generalizable.
Third, though the study was by design scoped to address the areas marked as ‘B’ in the conceptual framework (see Chapter 1), it would have been preferable to conduct the study over multiple axes for comparative purposes, which would have increased the generalizability of the findings. Future studies could benefit from doing so; for example, they could extend the evaluation method to cover both self-evaluation and the classical method.
APPENDICES
Appendix A – System Classifications
To demonstrate the differences between the types of systems, we employ McAfee (2006). The table below, derived from McAfee (2006), compares the three types of systems.
Function IT (FIT)
- Examples: spreadsheets, computer-aided design, statistical software
- Automation: some degree of automation (e.g. spell check)
- Key-user-groups: more likely to have a single key-user-group
- Considerations for expertise: most users would remain proficient with the basic system features; depth of use would not result in substantial improvements

Network IT (NIT)
- Use is optional
- Examples: emails, instant messaging, wikis, blogs and mash-ups
- Automation: very low level of automation
- Key-user-groups: more likely to have a single key-user-group
- Considerations for expertise: limited work-oriented functionality; access to system features is equal across all key-user-groups

Enterprise IT (EIT)
- Examples: ERP, CRM and SCM
- Automation: high level of automation
- Key-user-groups: multiple key-user-groups using the same system very differently
- Considerations for expertise: high automation of business processes; many key-user-groups have different types of uses; potential to improve performance through deeper and exploratory use; must consider mandatory and non-mandatory uses; for processes with high automation, frequency of use will only provide observations of efficiency
We outline four salient differences between Enterprise IT (EIT) and Function IT (FIT) along the following aspects, which justify the need to develop a better understanding of Enterprise IT expertise.
make fewer mistakes in using them. Yet, the differences between a ‘novice’ user and an ‘experienced’ user in Function IT are minimal. Moreover, the degree of proficiency required by each key-user-group (i.e. operational, managerial and strategic staff) is substantially different for Enterprise IT, whereas in Function IT, user expertise of an application (e.g. for word processing) largely remains the same across multiple user groups.
KNOWLEDGE SHARING
6. I regularly share my knowledge of SAP with my colleagues.
7. I often suggest improvements of [name of the business process] to my managers /
colleagues.
8. My colleagues come to me for assistance when they are faced with a work related
issue.
9. I have colleagues and workmates helping me with using SAP for my [name of the
business process] (inversely worded).
10. I regularly contribute to knowledge sharing forums within my organization.
REFERENCES
Alloway, R. M. and J. A. Quillard (1983). "User Managers' Systems Needs." MIS
Quarterly 7: 27-41.
Andreev, P., T. Heart, et al. (2009). Validating formative Partial Least Squares (PLS)
models: Methodological review and empirical illustration. International Conference
on Information Systems, Phoenix, Arizona.
Bagozzi, R. P. (1980). Causal Models In Marketing. New York, John Wiley & Sons.
Bancroft, N. H., H. Seip, et al. (1998). Implementing SAP R/3: How To Introduce A
Large System Into A Large Organization. Greenwich, CT, Manning Publications.
Benbasat, I., D. K. Goldstein, et al. (1987). "The Case Study Research Strategy In
Studies Of Information Systems." MIS Quarterly Sept 1987: 369-386.
Bender, S. and A. Fish (2000). "The transfer of knowledge and the retention of
expertise: the continuing need for global assignment." Journal of knowledge
management 4(2): 125-137.
Blili, S., L. Raymond, et al. (1998). "Impact of task uncertainty, end user involvement and competence on the success of end user computing." Information & Management 33(3): 137-153.
Bollen, K. A. (1989). Structural Equations With Latent Variables. New York, Wiley.
Bowen, W. (1986). The puny payoff from office computers. Fortune. 121: 20-24.
Bryan, W. L. and N. Harter (1897). "Studies in the physiology and psychology of the
telegraphic language." Psychological Review 4: 27-53.
Chang, S.-J., A. v. Witteloostuijn, et al. (2010). "From the editors: Common method
variance in international business research." Journal of International Business Studies
41: 178-184.
Chase, W. G. and H. A. Simon, Eds. (1973). The mind's eye in chess. Visual Information Processing. New York, Academic Press.
Chi, M. T. H., R. Glaser, et al. (1988). The nature of expertise. Hillsdale, NJ,
Erlbaum.
Chin, W. W., N. Johnson, et al. (2008). "A fast form approach to measuring technology acceptance and other constructs." MIS Quarterly 32(4): 687-703.
Curtis, B., Ed. (1986). By the way, did anyone study any real programmers? Empirical Studies of Programmers. Norwood, NJ, Ablex.
Edwards, J. R. and R. P. Bagozzi (2000). "On the nature and direction of relationships
between constructs and their measures." Psychol Methods 5: 155–174.
Engeström, Y., J. Virkkunen, et al. (1996). "The change laboratory as a tool for transforming work." Lifelong Learning in Europe 1(2): 10-17.
Ericsson, K. A., R. T. Krampe, et al. (1993). "The role of deliberate practice in the acquisition of expert performance." Psychological Review 100(3): 363-406.
Feltovich, P. J., R. J. Spiro, et al. (1997). Issues of expert flexibility in contexts characterized by complexity and change. Expertise in Context: Human and Machine. P. J. Feltovich, K. M. Ford and R. R. Hoffman (eds.). Menlo Park, CA, AAAI/MIT Press: 125-146.
Gable, G. G., T. Chan, et al. (2003). Offsetting ERP Risk Through Maintaining
Standardized Application Software. Second-wave Enterprise Resource Planning
Systems. G. Shanks, P. Seddon and L. Willcocks. Cambridge, Cambridge University
Press: 220-237.
Gable, G. G., J. Scott, et al. (1998). Cooperative ERP Life Cycle Knowledge
Management. Proceedings of the 9th Australasian Conference on Information Systems
Sydney, New South Wales, Australia, Association for Information Systems.
Gefen, D. and D. Straub (2005). "A Practical Guide to Factorial Validity Using PLS-
Graph: Tutorial and Annotated Example." Communications of the Association for
Information Systems 16: 91-103.
Gefen, D., D. W. Straub, et al. (2000). "Structural Equation Modeling and Regression:
Guidelines for Research Practice." Communications of the AIS 4(7): 1-79.
Grant, J. S. and L. L. Davis (1997). "Selection and use of content experts for
instrument development." Research in Nursing and Health 20: 269-274.
Henseler, J. and G. Fassott (2009). Testing moderating effects in PLS path models:
An illustration of available procedures. Handbook of partial least squares: Concepts,
methods, and applications. W. W. C. V. E. Vinzi, J. Hensler and H. Wang. Berlin,
Springer.
Hinds, P. J. (1999). "The curse of expertise: the effects of expertise and debiasing methods on predictions of novice performance." Journal of Experimental Psychology: Applied 5(2): 205-221.
House, R. J., et al., Eds. (2004). Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA, Sage Publications.
Jarvis, C. B., S. B. MacKenzie, et al. (2003). "A critical review of construct indicators
and measurement model misspecification in marketing and consumer research."
Journal of consumer research 30: 199-216.
Limayem, M., M. Khalifa, et al. (2004). "CASE tools usage and impact on system
development performance." Journal of Organizational Computing and Electronic Commerce 14(3): 153-174.
Little, B. (1997). The have-nots of the new technology. The Globe and Mail. Toronto,
Ontario.
Marakas, G. M., R. D. Johnson, et al. (2007). "The evolving nature of the computer
self-efficacy construct: an empirical investigation of measurement construction,
validity, reliability and stability over time." Journal of the Association for Information
Systems 8(1): 16-46.
Marakas, G. M., M. Y. Yi, et al. (1998). "The multilevel and multifaceted character of
computer self-efficacy: Toward clarification of the construct and an integrative
framework for research." Information Systems Research 9(2): 126-163.
Marcolin, B. L., S. L. Huff, et al. (1992). End user sophistication: Measurement and
research model. Proceedings of ASAC. Quebec: 108-120.
Nah, F. F., J. L. Lau, et al. (2001). "Critical Factors for Successful Implementation of
Enterprise Systems." Business Process Management Journal 7(3): 285-296.
Olsen, S. E. and J. Rasmussen (1989). The reflective expert and the prenovice: Notes on skill-, rule- and knowledge-based performance in the setting of instruction and training. Developing skills with information technology. L. Bainbridge and S. A. Ruiz Quintanilla. New York, Wiley: 9-33.
Page, K. and M. Uncles (2004). "Consumer Knowledge of the World Wide Web:
Conceptualization and Measurement." Psychology and Marketing 21(8): 573-591.
Pallant, J. (2005). SPSS Survival Manual: A step by step guide to data analysis using
SPSS for Windows (version 12). NSW, Australia, Allen & Unwin.
Rai, A., S. S. Lang, et al. (2002). "Assessing the Validity of IS Success Models: An Empirical Test and Theoretical Analysis." Information Systems Research 13(1): 50-69.
Reichheld, F. F. (1996). The loyalty effect: The hidden force behind growth, profits,
and lasting value. Boston, MA, Harvard Business School Press.
Ringle, C., S. Wende, et al. (2005). SmartPLS 2.0 (beta). Hamburg, University of Hamburg. Retrieved March 28, 2007 from http://www.smartpls.de.
Sabherwal, R., A. Jeyaraj, et al. (2006). "Information System Success: Individual and
Organizational Determinants." Management Science 52(12): 1849-1864.
Seddon, P. B., C. Calvert, et al. (2010). "A Multi-Project Model of Key Factors
Affecting Organizational Benefits From Enterprise Systems." MIS Quarterly 34(2):
305-A11.
Sedera, D. and G. Gable (2004). A Factor and Structural Equation Analysis of the Enterprise Systems Success Measurement Model. Twenty-Fifth International Conference on Information Systems.
Sekaran, U. (2000). Research methods for business: A skill building approach. New
York, John Wiley & Sons.
Simon, H. A. and W. G. Chase (1973). "Skill in chess." American Scientist 61: 394-403.
Soh, C., S. K. Sia, et al. (2000). "Cultural Fits And Misfits: Is ERP A Universal
Solution?" Communications of the ACM 43(4): 47-51.
Swap, W., D. Leonard, et al. (2001). "Using mentoring and storytelling to transfer
knowledge in the workplace." Journal of Management Information Systems 18(1): 95-
114.
Van der Heijde, C. M. and B. I. J. M. Van der Heijden (2006). "A competence-based
and multidimensional operationalization and measurement of employability." Human
Resource Management 45(3): 449-476.
Van der Heijden, B. I. J. M. (2005). No One Has Ever Promised You a Rose Garden.
On Shared Responsibility and Employability Enhancing Practices throughout Careers,
Inaugural Lecture. Van Gorcum, Assen, Maastricht School of Management/Open
University of The Netherlands.
Wu, J.-H., Y.-M. Wang, et al. (2002). An Examination Of ERP User Satisfaction In
Taiwan. Proceedings of the 35th Annual Hawaii International Conference on System
Sciences, Big Island, Hawaii.
Xu, P. and B. Ramesh (2003). A Tool for the capture and use of Process knowledge in
process tailoring. Proceedings of the 36th Hawaii International Conference on System
Sciences (HICSS’03), Hawaii, IEEE.
Xue, Y., H. Liang, et al. (2005). "ERP implementation failures in China: Case
studies with implications for ERP vendors." International Journal of Production
Economics 97: 279-295.
Yoon, Y., T. Guimaraes, et al. (1995). "Exploring the factors associated with expert systems success." MIS Quarterly 19(1): 83-106.
Zach, O. (2010). ERP system success assessment in SMEs. The 33rd Information Systems Research Conference in Scandinavia (IRIS33), Aarhus.