Chapter Four


Data Analysis, Presentation, and Interpretation
The goal of data analysis is to find meaning in the data, leading to conclusions and supporting decision-making. To interpret data accurately, several steps must be carried out. First, the data must be prepared and cleaned, since in its raw form it is often not in a suitable state to be analyzed. Following this, it is common for data to be restructured and/or converted; this process is often referred to as data transformation (or data wrangling) and is a crucial part of statistical analysis. Several statistical methods can then be applied to a set of data. These range from simple measures of central tendency, such as the mean, median, and mode, through to more complex methods such as regression analysis, analysis of variance (ANOVA), and cluster analysis. The method chosen depends on the situation and the information sought. A detailed explanation of these methods is beyond the scope of this chapter, but in general they provide a result together with a confidence level, which is then used to make an inference about the situation.
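As a minimal illustration of the simple methods mentioned above, the measures of central tendency can be computed with Python's standard library. The satisfaction scores below are hypothetical, chosen only to show the calculation:

```python
import statistics

# Hypothetical satisfaction scores on a 1-5 scale (illustrative only)
scores = [4, 3, 5, 4, 4, 2, 5, 3, 4, 4]

mean = statistics.mean(scores)      # arithmetic average of the scores
median = statistics.median(scores)  # middle value of the sorted data
mode = statistics.mode(scores)      # most frequently occurring value

print(mean, median, mode)  # → 3.8 4.0 4
```

Each measure summarizes the same data differently, which is why a real analysis typically reports more than one of them.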

1.1. Statistical Analysis


Data analysis, presentation, and interpretation are the phrases used to describe the application of statistical techniques to a set of data in order to support decision-making. Data analysis and interpretation are the most important parts of the data investigation and discussion process. According to Kendall and Stuart, data analysis is a process of inspecting, cleansing, transforming, and modeling data to discover useful information, inform conclusions, and support decision-making. The data analysis in this study is mainly concerned with the difference in students' satisfaction and other related variables between Malay and non-Malay students. It involves several types of statistical analysis: frequency analysis, reliability analysis, descriptive analysis, and inferential analysis.

After the data has been cleaned, a frequency analysis of all variables will be done. This analysis is conducted to understand the variables and their underlying structure. For instance, the items of independent variables such as race and income must have a clear structure and produce interpretable results. Frequency analysis allows us to examine the properties of the variables and the distribution of all variable values. The next analysis is reliability analysis, which focuses mainly on one independent variable: students' satisfaction. This analysis tests whether the measure is consistent. It is important to ensure that the instrument produces measures that are free from error, stable, and consistent, and to assess the reliability of the multi-item variable.

The descriptive analysis can be presented as a few examples with brief comments on the analysis performed. An example of the frequency-analysis output is that there is no seriously unbalanced frequency distribution between Malay and non-Malay students. If an imbalance exists, a new variable can be created by combining frequency categories, for instance a new income-range variable. The same applies to the place-of-residence variable, which can be recoded into students who live locally, near the university, and students who live farther away. An example of the interpretive output for reliability analysis is that Cronbach's alpha for the satisfaction variable is α = 0.87 for Malay students and α = 0.85 for non-Malay students; because both values are above 0.70, the measure is acceptable. An example interpretation of the independent t-test result is that there is no significant difference in satisfaction between Malay and non-Malay students, because the p-value of 0.763 is greater than 0.05. This analysis is better presented using figures such as pie charts, bar charts, and other graphical representations.
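The reliability check and group comparison described above can be sketched in plain Python. The item scores and group samples below are hypothetical, so the resulting alpha will not match the study's α = 0.87 and 0.85; in practice a package such as SciPy would also supply the t-test's p-value for comparison against 0.05:

```python
import statistics
from math import sqrt

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale; `items` holds one score list per item."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]        # per-respondent total score
    item_var = sum(statistics.variance(i) for i in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

def welch_t(a, b):
    """Welch's t statistic for two independent samples (p-value needs a t-distribution)."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    return (m1 - m2) / sqrt(v1 / len(a) + v2 / len(b))

# Hypothetical 4-item satisfaction scale, 5 respondents (rows = items)
items = [
    [4, 3, 5, 4, 2],
    [4, 3, 4, 5, 2],
    [5, 3, 4, 4, 1],
    [4, 2, 5, 4, 2],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # values above 0.70 are conventionally acceptable

# Hypothetical mean satisfaction scores for two groups
group_a = [4.1, 3.8, 4.4, 3.9, 4.2]
group_b = [4.0, 3.9, 4.3, 4.1, 4.0]
print(f"t = {welch_t(group_a, group_b):.3f}")
```

A t statistic this close to zero corresponds to a large p-value, matching the kind of "no significant difference" interpretation described above.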

1.2. Data Visualization Techniques


Data visualization is "the graphical presentation of information. It uses the attributes (variables) of the data to represent the data graphically - that is, to create a picture of the data" (Friendly, 2008). In recent years, the importance of data visualization has grown: a vast amount of data has become available, and the complexity of problems has risen dramatically. This has made analytical reasoning necessary to understand the data and make insightful decisions based on it. Data visualization as applied to social science inquiry is relatively new; the first impetus came from the field of computer science. Information visualization uses computer graphics to understand and present information; data mining uses statistical techniques to find useful knowledge in large amounts of data; and graphical statistics uses charts, maps, and other graphic displays to analyze and communicate data. All of these aim at gaining a better understanding of information and making effective decisions. More recently, these computer-based methodologies have been woven into the fabric of social science methodology (Lee, 2005). Data visualization is both multimodal (it uses different forms of representation, such as charts, graphs, and maps) and active (the investigator manipulates the data and the representations). A great advantage of visualization is that the investigator gets an intuitive feel for patterns and relationships that might not be apparent in other, more abstract representations of the data. This matters because correct insights from data often lead to important decisions.
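As a minimal sketch of the idea, frequency data can be turned into a picture even without a plotting library; in practice one would use a package such as matplotlib for the bar and pie charts mentioned above. The category counts below are hypothetical:

```python
from collections import Counter

def text_bar_chart(values, width=40):
    """Render a frequency distribution as a text bar chart, one bar per category."""
    counts = Counter(values)
    peak = max(counts.values())
    lines = []
    for category, n in sorted(counts.items()):
        bar = "#" * round(n / peak * width)   # scale bars to the largest count
        lines.append(f"{category:<12} {bar} {n}")
    return "\n".join(lines)

# Hypothetical place-of-residence responses (illustrative only)
responses = ["local"] * 34 + ["non-local"] * 26
print(text_bar_chart(responses))
```

Even this crude picture makes the relative sizes of the two groups visible at a glance, which is exactly the advantage visualization offers over a raw table of counts.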

1.3. Interpretation of Findings


Findings in this study have been interpreted using a theoretical framework, so as to make the data and patterns of influence explicit and to support more general conclusions. HLM was used to identify the important factors in the internal and external environment that affect service provision. It resulted in a highly complex, multi-layered model of the library's strategic management. Unfortunately, only a small portion of this can be represented in this report, so it is useful to concentrate on the core findings. The influence of the external environment was assessed via two strategic issues: resources and identity. The strong relationship with identity suggests that librarians consider it an important part of the external image they present, but identity may be the wrong target, as the negative coefficient for the change in identity shows a move away from the presently desired state, albeit toward the messy library. An improvement in library resources is considered an important factor in stopping the decline of service levels. This clearly links to the prevalent view that the cause of the crisis is internal, and that moving back up the service-level hierarchy onto a higher plane will require concerted effort. The transitional states identified in the resource and change-management decisions suggest a planned approach: beginning with promoting suitable library staff or dismissing unsuitable ones, accepting short-term pain for the long-term gain of reducing overload through change management, and then moving toward an improved service level.
