arXiv:2311.18518v1 [cs.CV] 30 Nov 2023

Abstract—Art objects can evoke certain emotions. Color is a fundamental element of visual art and plays a significant role in how art is perceived. This paper introduces a novel approach to classifying emotions in art using Fuzzy Sets. We employ a fuzzy approach because it aligns well with the imprecise and subjective nature of human judgments. Extensive fuzzy colors (n=120) and a broad emotional spectrum (n=10) allow for a more human-consistent and context-aware exploration of the emotions inherent in paintings. First, we introduce the fuzzy color representation model. Then, at the fuzzification stage, we process the Wiki Art Dataset of paintings tagged with emotions, extracting fuzzy dominant colors linked to specific emotions. This results in fuzzy color distributions for ten emotions. Finally, we convert them back to a crisp domain, obtaining a knowledge base of color-emotion associations in primary colors. Our findings reveal strong associations between specific emotions and colors; for instance, gratitude strongly correlates with green, brown, and orange. Other noteworthy associations include brown with anger, orange with shame, yellow with happiness, and gray with fear. Using these associations and the Jaccard similarity, we can find the emotions in an arbitrary untagged image. We conducted a 2AFC experiment involving human subjects to evaluate the proposed method. The average hit rate of 0.77 indicates a significant correlation between the method's predictions and human perception. The proposed method is simple to adapt to art painting retrieval systems. The study contributes to the theoretical understanding of color-emotion associations in art, offering valuable insights for various practical applications besides art, such as marketing, design, and psychology.

Index Terms—Fuzzy sets, emotions in art, color palette, classification, color-emotion model, art image analysis, color perception, image processing, emotion detection.

I. INTRODUCTION

Nowadays, an increasing amount of affective information is distributed worldwide in all formats [1]. The elements that contribute to the sensory experience of audio and visual content include acoustic quality, facial expressions and gestures, background music, ambient noise, color schemes, and image filters. It is crucial to assess incoming data objectively, including emotional reactions [2], [3].

Color is a fundamental element of visual art and plays a significant role in the emotional responses induced by an art image. It is important to classify art not just in terms of age or style but also in terms of the emotions it creates in a viewer. That can facilitate the identification of art expressing similar emotions and has a potential application in context-based image retrieval systems.

Emotion detection aims to determine the presence and intensity of emotion as close as possible to human perception. We need a machine that can objectively perceive objects despite subjective color perception [4].

The relationship between color and emotion has been studied in various areas, ranging from psychology to art theory. The colors we see can greatly impact our emotions. Researchers such as J. H. Xin have found that the lightness and chroma of colors are more influential than the hue itself [5]. Other studies, including one conducted by Banu Manav [6], have also shown that our emotional response to colors can vary based on their lightness and saturation. In a related experiment, the researchers in [7] explored how changes in valence and arousal affected emotional response. They were able to accurately map four different emotions to five parameters: lightness, chroma, hue, valence, and arousal, with an accuracy rate of 80%.

Currently, there are several gaps in research on this topic. Technically, there is a limitation in recognizing and comprehending pictures. Aesthetically, there is no apparent correlation between the information conveyed by a picture and the response to it.

The interaction between different emotions is still poorly understood, and the studies conducted so far have been limited to a handful of emotions (usually 4 to 6). This is because emotions are difficult to represent using a specific metric. Moreover, it is important to shift the focus from the content of paintings to their forms. However, it seems impossible to achieve universal calculations in this regard.

It is crucial to acknowledge that various factors influence perception and should be evaluated in different contexts. In this paper, our focus is on art, and we present a technique to extract the emotional color palette from art objects by using fuzzy sets. This method is particularly useful as color ambiguity and perception are both subjective and inaccurate by nature. Traditional color-emotion studies have primarily relied on basic color categories, which may impact emotional experiences evoked by art.

The current paper proposes an emotion classification approach for art objects using Fuzzy Sets and Logic (see Fig. 1). It matches colors in the art domain to the spectrum of human emotions. The fuzzy approach is well-suited for color and emotion detection. How people perceive color and emotion is inherently subjective and differs from person to person. The fuzzy approach can effectively represent this imprecision in the perception of color-emotion associations in a way that classical binary logic might struggle to capture. As can be seen in Fig. 1, we aim to bridge the gap
Fig. 1. Bridging the semantic gap between low-level features in art objects and high-level semantic concepts of emotions. Lyonel Feininger's "Carnival in Arcueil" painting.
As we can see from the above, the level of interest in this field appears to be substantial. However, there are also some technical and semantic limitations to research. One important constraint on most works discussed in this area is the limited number of colors and their crisp nature, which is in contrast to human color perception. Some of the experiments were conducted without a validated dataset, and psychological experiments were conducted without control of participants' conditions.

TABLE I
SELECTED EMOTIONS AND THEIR DISTRIBUTION IN THE DATASET

Emotion     Threshold   # Images
love        50%         160
anger       50%         21
sadness     50%         159
gratitude   50%         8
fear        50%         202
shame       50%         5
surprise    50%         532
shyness     30%         6
trust       50%         358
B. Models of Emotion

Several emotion models have been proposed to describe and categorize human emotions. Here are a few examples of well-known ones: Russell's Circumplex Model [19], Ekman's Six Basic Emotions [28], Plutchik's Wheel of Emotions, James-Lange Theory [29], Cannon-Bard Theory [30], [31], Barrett's Conceptual Act Theory [32], The Geneva Emotion Wheel [33], Schachter-Singer Two-Factor Theory [34], and Mehrabian's PAD Model [35].

In our research, we are using Russell's emotional model. Russell's Circumplex Model is a two-dimensional model organizing emotions based on two primary dimensions: valence (pleasure-displeasure axis) and arousal (activation-deactivation axis) [19]. According to Russell's approach, emotions do not exist in discrete categories but along continuous dimensions. This is consistent with the fuzzy approach, which allows for the fuzziness and transitionality of emotional borders. Color and emotion relationships in art may be difficult to categorize using rigid boundaries. Thus, any emotion can be, to some extent, pleasant/unpleasant and active/passive.

The model was developed based on subjective feelings [19]. Participants were asked to group 28 emotion words based on perceived similarity, and a statistical technique was used to group related words.

As you can see in Figure 4, a set of emotions is distributed on the wheel. We are using some of them as tags for art paintings.

III. METHODS

In this paper, we propose a classification system that takes an art piece and predicts the emotional response it can generate.

We aim to enhance the current image emotion classification and retrieval algorithms by utilizing fuzzy logic. Our approach comprises several stages. Firstly, we identify the most relevant emotions within the art domain and acquire a relevant dataset. We then process the images by smoothing and normalizing them. Subsequently, we retrieve color palettes from the prepared data and map them to image emotion tags. To increase the precision of the mapping process, we convert the RGB color palettes into fuzzy palettes.

A. Dataset

To proceed, we must first locate a set of paintings that have been previously tagged with specific emotions. This step can be divided into two parts: training and testing. We use the Wiki Art Dataset. This resource contains works from 195 artists, with 42129 images for training and 10628 images for testing [36]. This dataset has classified 20 emotions into three categories: positive, negative, and other/mixed.

The images in this dataset were manually tagged with the emotions they evoke in humans. At least ten annotators annotated each work of art. There are three sets of annotations, namely, based on title, image, and combined (image and title). For our study, we selected annotations based on the image only (without a title) because our analysis is based on colors in the image, ignoring any associated text, including author, title, and genre.

The labeled dataset is divided into three variants according to aggregation thresholds of 30%, 40%, and 50%, respectively. A label is selected if at least 30%, 40%, or 50% of the responses (e.g., three out of every ten respondents for the 30% variant) suggest that a particular emotion is applicable. The authors consider the 40% variant a generous aggregation criterion; nonetheless, we selected the 50% variant for nine emotions and the 30% variant for the emotion of shyness, which contains only one image in the 50% variant (see Table I).

Using this dataset and comparing it with Russell's emotional model, we chose the main emotions for our model. Table I lists the ten most relevant emotional words and their distribution in the dataset. Sample art images from the WikiArt Emotions Dataset are presented in Fig. 5.

The dataset already has some annotations. As mentioned before, we can compare these results to determine how closely the proposed method corresponds to the accuracy of emotion recognition.

B. Theory of Visual Attention

Color perception involves categorizing colors through visual recognition and attentional selection [37], [38].

Let us go over the main ideas from a theory of visual attention. Bundesen introduces three types of perceptual categories
in his work: color, shape, and location categories [38]. Our model is primarily based on color analysis. Using Bundesen's notation, colors represent emotions as perceptual categories.

According to Bundesen's theory of visual attention (TVA), perceptual categorization involves identifying whether an input element x belongs to a certain category i. This is expressed as E(x, i), with E(x, i) ∈ [0, 1] representing the certainty of x belonging to i. The set of all perceptual units is denoted by S, and the set of all categories is denoted by R.

The concept of saliency refers to the strength of sensory evidence that suggests an element belongs to a perceptual category. This is denoted by η(x, i), where x is the element and i is the perceptual category. Additionally, each category is assigned a pertinence value, denoted by π_i, representing its importance for a specific task T. We can calculate the attentional weight of x using pertinence and saliency values:

    ω_x = ∑_{i∈R} η(x, i) π_i    (1)

It is important to note that in Equation (1), only categories that have positive pertinence values have an impact on ω_x.

When it comes to visual attention, two mechanisms are often used: filtering and pigeonholing. Filtering refers to the process of selecting an item x from a set of items S that belongs to a particular category i. For example, a client might be searching for a dark and dramatic art object, and filtering can be used to narrow down the art catalog to show only those objects that fit this description.

Pigeonholing, on the other hand, refers to the process of identifying a suitable category i for a given item x from a set of categories R. In other words, if we have an item, we can use pigeonholing to determine its category — for example, when a manager of an art gallery collects artists' or experts' judgments on some art object and then categorizes it as being, e.g., happy, romantic, or fearful.

[45], [46], text [47]–[50], face [42], [43], [51], [52], and product [53], [54] emotion analysis.

Fuzzy colors are suitable for contextual relevance: they are often associated with specific contexts or cultural domains, greatly influencing emotional responses. Incorporating a fuzzy sets approach, which accounts for the cultural, contextual, and individual variations in color-emotion associations, leads to a more comprehensive analysis. The next section presents fundamental concepts from fuzzy theory utilized in our study.

1) Fuzzy Sets: Fuzzy sets, first introduced by Zadeh [55], allow membership degrees to be indicated by a number between 0 and 1. In Boolean logic, we deal only with the pair {0, 1}; with fuzzy sets, in contrast, we consider all the numbers within the range [0, 1]. Membership in a fuzzy set A is described by a membership function (MF), denoted μ_A(x).

MFs are mathematical techniques for modeling the meaning of symbols by indicating flexible membership in a set. We can use them to represent uncertain concepts like age, performance, building height, etc. An MF's key function is therefore to convert a crisp value to a membership level in a fuzzy set. Triangular and trapezoidal MFs, in particular, are used in this work.

2) Linguistic Variables: Zadeh [56] defined a linguistic variable as a variable whose values are words or sentences in a natural or artificial language rather than numbers. For instance, the label 'deep' can be considered a linguistic value
of the variable 'Color Intensity,' similar to a number but with less precision. A linguistic variable's set of all linguistic values is called a term set.

3) Fuzzy Hedges: There are two types of modifiers, or hedges: reinforcing and weakening modifiers. These modifiers are used to either strengthen or weaken a statement. The hedge "very" represents the reinforcing modifier:

    t_very(u) = u²    (2)

Weakening modifiers are the second type. For example, "more-or-less" is a hedge that indicates a degree of uncertainty:

    t_more-or-less(u) = √u    (3)

Additionally, the hedge "not" can be expressed as:

    t_not(u) = 1 − u    (4)

Hedges can be applied several times. For example, "not very good performance" is an example of a combined hedge consisting of the two atomic hedges not and very.

4) Fuzzy Operations: In fuzzy logic, the α-cut (alpha cut) is a crisp set that contains all the members of a given fuzzy subset f whose membership values are greater than or equal to a certain threshold α (which ranges from 0 to 1). Mathematically, we can represent the α-cut of the fuzzy subset f as:

    f_α = {x : μ_f(x) ≥ α}

We can use α-cuts to perform set operations on fuzzy sets. For instance, given two fuzzy sets A and B, we can obtain their union and intersection by taking the union and intersection of their respective α-cuts. That is:

    (A ∪ B)_α = A_α ∪ B_α,    (A ∩ B)_α = A_α ∩ B_α

Fuzzy logic is suitable for coping with subjective metrics, such as emotions, and broad concepts, like color, because it is consistent with human perception. Initially, we get the color palette in RGB format; then we translate it into HSI and FHSI.

D. HSI Color Model

Because of the high correlation between RGB color space attributes and their failure to replicate human color perception, other color spaces, such as HS* (HSL, HSI, HSV), are favored for modeling human color perception [37], [57].

Three attributes—hue (H, such as yellow, cyan, or blue), intensity (I, pale vs. dark), and saturation (S, saturated vs. dull)—are used in the HSI paradigm to express colors (see Fig. 6). As a result, it is more in line with how colors are perceived by the human visual system, which is predicated on three interpretations of color: brightness, purity, and category. All three interpretations correspond directly to HSI.

The Hue describes the color in terms of an angle between 0 and 360 degrees. The Saturation describes how much the color is mixed with light, with a range between 0 and 100. The Intensity component ranges from 0 to 255, where 0 represents black and 255 represents white.

E. Proposed Approach

The main idea of the proposed approach can be seen in Fig. 7. First, we process a dataset comprising art images. We extract fuzzy dominant colors and their frequencies associated with certain emotions for each image in the dataset. As a result, we get a fuzzy color distribution for each of the ten emotions. Finally, we convert them back to a crisp domain, obtaining color-emotion associations in the form of basic colors.

We employ a fuzzy approach because it enables us to evaluate emotions in a way that is consistent with human judgment. We obtain a more logical and human-consistent output by partitioning the set of potential emotions into subsets corresponding to linguistic tags [58]. Similarly, fuzzy sets make it simple to show how an emotion gradually changes from one label to another [56].

In the theory of visual attention, an art image is represented by the variable x, while S represents the database of all images. The variable i represents an emotion category, such as fear or happiness, which can be composite (e.g., love and anger). The collection of all emotion categories is represented by R.
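The hedges of Eqs. (2)-(4) and the α-cut can be sketched in a few lines of Python. This is an illustrative sketch only; the membership degrees below are hypothetical, not values from the paper:

```python
import math

def very(u):          # reinforcing hedge: t_very(u) = u^2
    return u ** 2

def more_or_less(u):  # weakening hedge: t_more-or-less(u) = sqrt(u)
    return math.sqrt(u)

def not_(u):          # negation hedge: t_not(u) = 1 - u
    return 1 - u

def alpha_cut(fuzzy_set, alpha):
    """Crisp set of elements whose membership degree is >= alpha."""
    return {x for x, mu in fuzzy_set.items() if mu >= alpha}

# Hypothetical membership degrees of images in the fuzzy set "happy"
happy = {"img1": 0.9, "img2": 0.5, "img3": 0.2}

# Combined hedge "not very": apply "very" first, then "not"
not_very_happy = {x: not_(very(mu)) for x, mu in happy.items()}

print(sorted(alpha_cut(happy, 0.5)))    # ['img1', 'img2']
print(round(not_very_happy["img1"], 2)) # 1 - 0.9^2 = 0.19
```

Note how the combined hedge composes the atomic hedges in order, mirroring the "not very good performance" example above.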
TABLE II
FUZZY ATTRIBUTES OF THE FUZZY COLOR MODEL. ADAPTED FROM [54], [37].
Fig. 9. Fuzzy color representation model, which incorporates 120 hues, each characterized by three linguistic terms: hue, saturation, and intensity.
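The fuzzy partitions of this color model are built from triangular and trapezoidal membership functions, defined next. A minimal Python sketch of both follows; the breakpoints used in the calls are illustrative, not the model's actual partition values:

```python
def triangular(x, a, b, c):
    """Triangular MF: 0 outside [a, c], rising on [a, b], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal MF: 0 outside [a, d], plateau of 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

# Illustrative hue triangle centered at 30 degrees (a hypothetical "Orange")
print(triangular(17, 0, 30, 60))        # 17/30 ≈ 0.567
print(trapezoidal(45, 20, 40, 60, 80))  # 1.0 (inside the plateau)
```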
1) Fuzzification of HSI Colors: Triangular membership function:

    μ(x) = 0,                x ≤ a
           (x − a)/(b − a),  a ≤ x ≤ b
           (c − x)/(c − b),  b ≤ x ≤ c
           0,                x ≥ c

Trapezoidal membership function:

    μ(x) = 0,                x ≤ a
           (x − a)/(b − a),  a ≤ x ≤ b
           1,                b ≤ x ≤ c
           (d − x)/(d − c),  c ≤ x ≤ d
           0,                x ≥ d

Fuzzy partitions were adapted from [54], [37], [57], [58], but changes were made to the "Low" saturation because we do not have an "Any" category. These partitions were obtained based on a survey on human color categorization.

Table II shows the term sets and domains of each fuzzy variable in our color model (H, S, I). Figure 8 presents the membership functions for the fuzzy H, S, and I variables. Figure 9 presents the fuzzy colors of the proposed representation model, which incorporates 120 distinct fuzzy colors. We have divided the saturation into three fuzzy sets, namely "Low," "Medium," and "High," with the domain X = [0, 100] and the universal set U = {0, 1, 2, ..., 99, 100}. Similarly, the attribute of hue has been divided into eight fuzzy sets, including "Red," "Orange," "Yellow," "Green," "Cyan," "Blue," "Violet," and
"Magenta," with the domain X = [0, 360] and the universal set U = {0, 1, 2, ..., 359, 360}. Lastly, the attribute of intensity has been divided into five fuzzy sets, namely "Dark," "Deep," "Medium," "Pale," and "Light," with the domain X = [0, 255] and the respective universal set U = {0, 1, 2, ..., 254, 255}. By doing this, we have created a comprehensive and accurate spectrum of fuzzy colors.

Fig. 10. Fuzzy colors examples from the proposed fuzzy color model.

Some color representatives can be seen in Fig. 10. In the figure, each swatch is annotated with its terms (e.g., Hue: red, Intensity: deep), illustrating that a fuzzy color is a region, not a point.

A fuzzy color is a subset of points within a crisp color space, in this case the HSI space, as mentioned in [37], [54], [59]. We define the domains of the attributes H, S, and I as D_H, D_S, and D_I, respectively.

Definition 1: A fuzzy color C is a linguistic label whose semantics is represented in the crisp HSI color space by a normalized fuzzy subset of D_H × D_S × D_I.

It can be inferred from Definition 1 that there is always at least one crisp color that fully belongs to any given fuzzy color C. This prompts us to extend the concept of a fuzzy color to the notion of a fuzzy color space. As per the fundamental idea of a fuzzy color space [59]:

Definition 2: A fuzzy color space is a set of fuzzy colors that define a partition of D_H × D_S × D_I.

Definition 3: A fuzzy color palette is a combination of several fuzzy colors [45].

Each element in a fuzzy color palette is a fuzzy color (an area) rather than a sharp color (a point) [45]. Let us take the Salmon color as an example (see Fig. 10). For the fuzzification process, we convert crisp inputs into fuzzy sets. For example, if the color is in RGB format (Salmon: R=255, G=160, B=122), we first convert it into the HSI model (H = 17, S = 32%, I = 179) and then into the fuzzy color model (H = Red, S = Medium, I = Pale). Hue, in this case, is partially 'Red' and partially 'Orange', while Saturation is partially 'Medium' and partially 'Low'.

    Data: image M1 in RGB format
    Result: The histogram that represents the most prominent colors in image M1
    /* The image's dominant color histogram CH(image) is a vector
       (h_C1, ..., h_Cn), where each element h_Ci represents the
       frequency of color Ci in the image. */
    Function FindFuzzyDomColors(image):
        FuzzyColors ← an empty dictionary;
        FuzzyDomColors ← an empty array;
        /* initialize the frequency of each fuzzy color to 0 */
        while not at end of image do
            read current pixel;
            process current pixel;
            /* convert the pixel from RGB to HSI, then fuzzify —
               convert to fuzzy HSI */
            fc ← computed fuzzy color;
            FuzzyColors[fc]++;
        end
        /* Return the five most frequent fuzzy colors. The number of
           colors returned depends on the context of the application. */
        FuzzyDomColors ← 5 keys from FuzzyColors with max frequency;
        return FuzzyDomColors;
    Algorithm 1: Finding the image dominant colors [37], [54].

2) Selected Emotions: This research aims to explore the connection between color palettes and human emotions. As emotions are highly subjective and influenced by various factors such as gender, race, mood, and life experiences [60], we have chosen to focus on the context of art to obtain a more objective assessment. To achieve this, we have examined various online galleries, articles, and datasets to identify which emotions are most commonly associated with paintings. Additionally, Kang et al. have already researched color and emotion pairings, as detailed in their article [14].

We have conducted extensive research on this topic and analyzed various psychological articles, including [61], [62]. Our goal was to select a comprehensive list of emotions based on previous studies, which is crucial for comparing our model's accuracy with existing results.

The selected dominant emotions in an art context are gratitude, happiness, anger, love, trust, shyness, fear, surprise, sadness, and shame. As a result, we will conduct further research on these emotions.

3) Image Preprocessing: The original art images were of various sizes but have been normalized to (200, 200) and converted to RGB mode. Next, we transformed each pixel to HSI and then to fuzzy HSI. The algorithm for fuzzy dominant color extraction is given in Algorithm 1.
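Algorithm 1 can be rendered as a short Python sketch. The bucket boundaries in `fuzzify_hsi` below are simplified placeholders for the fuzzy partitions (a crisp maximum-membership shortcut rather than the full membership functions), so the exact terms they produce are illustrative:

```python
from collections import Counter

HUE_NAMES = ["Red", "Orange", "Yellow", "Green", "Cyan", "Blue", "Violet", "Magenta"]

def fuzzify_hsi(h, s, i):
    """Simplified fuzzification: pick the linguistic term with the
    highest membership for each attribute (the paper keeps full
    membership degrees; these boundaries are placeholders)."""
    hue = HUE_NAMES[int(((h + 22.5) % 360) // 45)]                     # h in [0, 360)
    sat = ["Low", "Medium", "High"][min(int(s // 34), 2)]              # s in [0, 100]
    inten = ["Dark", "Deep", "Medium", "Pale", "Light"][min(int(i // 52), 4)]  # i in [0, 255]
    return (hue, sat, inten)

def find_fuzzy_dom_colors(pixels_hsi, k=5):
    """Histogram fuzzy colors over all pixels; return the k most frequent."""
    counts = Counter(fuzzify_hsi(h, s, i) for (h, s, i) in pixels_hsi)
    return [color for color, _ in counts.most_common(k)]

# Toy "image": HSI pixels (hue in degrees, saturation %, intensity 0-255).
# The first pixel reuses the Salmon example (H=17, S=32, I=179).
pixels = [(17, 32, 179)] * 6 + [(120, 80, 60)] * 3 + [(230, 50, 200)]
print(find_fuzzy_dom_colors(pixels, k=2))
# [('Red', 'Low', 'Pale'), ('Green', 'High', 'Deep')]
```

As in Algorithm 1, the number of returned colors (`k`) depends on the context of the application.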
Fig. 14. Examples of Jaccard similarity calculation using happy, shy, shame emotion palettes and the John William Godward painting - Under the Blossom
that Hangs on the Bough, 1917.
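Jaccard calculations of the kind shown in Fig. 14 can be sketched as a set computation over palettes, here extended to rank candidate emotions for an image. The palettes below are hypothetical stand-ins, not the actual emotion palettes from the knowledge base:

```python
def jaccard(emotion_palette, image_palette):
    """J(E, I) = |E intersect I| / |E union I| over sets of fuzzy colors."""
    e, i = set(emotion_palette), set(image_palette)
    return len(e & i) / len(e | i) if (e | i) else 0.0

# Hypothetical fuzzy color palettes: (hue, saturation, intensity) terms
emotion_palettes = {
    "happiness": {("Yellow", "High", "Light"), ("Orange", "Medium", "Pale")},
    "fear":      {("Blue", "Low", "Dark"), ("Cyan", "Low", "Dark")},
}
image_palette = {("Yellow", "High", "Light"), ("Green", "Medium", "Pale")}

# Rank candidate emotions for the image by Jaccard similarity
ranked = sorted(emotion_palettes,
                key=lambda e: jaccard(emotion_palettes[e], image_palette),
                reverse=True)
print(ranked[0])                                              # happiness
print(jaccard(emotion_palettes["happiness"], image_palette))  # 1/3
```

One shared fuzzy color out of three unique colors gives J = 1/3, so happiness outranks fear (J = 0) for this image.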
(Fig. 13).

B. Performance Evaluation

In this section, we present the performance evaluation results of the model.

1) The Jaccard Similarity: The Jaccard similarity is used to measure the similarity between two palettes, namely the Emotion Palette (E) and the Image Palette (I). It calculates the similarity between two sets by comparing their intersection and union. In this case, the intersection is the set of common colors that appear in both palettes, while the union is the total set of unique colors in both palettes. The Jaccard similarity is calculated by dividing the size of the intersection by the size of the union.

The Jaccard similarity value ranges from 0 (completely dissimilar) to 1 (completely similar). A higher Jaccard similarity indicates a greater association between the image and emotion color palettes. Equation (5) is as follows:

    J(E, I) = |E ∩ I| / |E ∪ I|    (5)

where |E ∩ I| is the number of common colors that appear in both the image and emotion palettes I and E, and |E ∪ I| is the total number of unique colors in the image and emotion palettes I and E.

2) Two-Alternative Forced Choice:

a) Method description: The experiment involved a total of 177 subjects, bachelor's and master's students of Kazakh-British Technical University. All subjects who provided answers passed the Ishihara color test. Two students' results were excluded because they did not prove to have normal vision.

The experiment utilized a well-known behavioral measure, two-alternative forced choice (2AFC) [63], [64]. The primary idea is to compare individuals' real emotion choices, based on their evaluations, to the predicted emotions.

In one trial of a standard 2AFC experiment, participants indicate which of two visual choices displayed simultaneously they find more appealing or appropriate for a certain category. This process is repeated for every possible pair. Therefore, n(n − 1)/2 trials are needed for 2AFC to measure choices for n stimuli. When a participant is presented with only two mutually exclusive stimuli, the decision job is easier for them. We use the Spearman-Karber method to calculate the 2AFC measures [63].

b) Procedure: The experiment corresponds to the filtering mechanism of the theory of visual attention [38], choosing an item x ∈ S for a target category i.
TABLE III
BASIC COLORS DISTRIBUTION ACROSS EMOTIONS (MEASURES INDICATED IN PERCENT)

Emotion  Red  Orange  Yellow  Green  Cyan  Blue  Black  Brown  Beige  Purple  Gray
Fig. 18. Psychometric function in a choice task. μ̂_2AFC ≈ 0.183, SE(μ̂_2AFC) ≈ 0.013.
TABLE IV
2AFC EXPERIMENT RESULTS

                                   Anger  Shyness  Happiness  Sadness  Gratitude  Shame  Fear  Trust  Love  Surprise
# hit rates                        165    163      148        97       146        146    156   81     166   70
Hit rate, %                        0.95   0.94     0.86       0.56     0.84       0.84   0.90  0.47   0.96  0.40
Difference in emotion predictions  0.76   0.37     0.05       0.13     0.20       0.38   0.37  0.12   0.05  0.27
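The overall average hit rate of 0.77 reported for the method is the mean of the per-emotion hit rates in Table IV, which can be checked directly:

```python
# Per-emotion hit rates from Table IV
hit_rates = {
    "anger": 0.95, "shyness": 0.94, "happiness": 0.86, "sadness": 0.56,
    "gratitude": 0.84, "shame": 0.84, "fear": 0.90, "trust": 0.47,
    "love": 0.96, "surprise": 0.40,
}
average = sum(hit_rates.values()) / len(hit_rates)
print(round(average, 2))  # 0.77
```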
Fig. 19. Experiment hit rates distribution across emotions.

Next, let us find the standard error, which is also referred to as "perceptual noise" [63] (see Eq. (8)):

    SE(μ̂_2AFC) = √( ∑_{i=1}^{k} [ĝ_i (1 − ĝ_i) / (n_i − 1)] · (x_{i+1} − x_{i−1})² )    (8)

where n_i represents the number of observations at stimulus level i. The standard error associated with the threshold estimate μ̂_2AFC is approximately equal to 0.013.

Fig. 20. Survey participants hit rates distribution. Outliers are marked with orange color. Their responses were excluded from the analysis.

As can be seen from Fig. 20, we compared participants' responses regarding the most voted answers and highlighted some outliers. Following categorical data analysis, we consider a respondent an outlier if less than 50% of their responses match the other participants' most selected choice. Two outliers were identified. We also did not analyze the answers of the two respondents who did not pass the Ishihara color blindness test (IDs 107 and 155).

C. Sample Application (Proof of Concept)

The emotion classification method we proposed is simple to adapt for emotion-based retrieval of art paintings. It may be advantageous for retrieving art images from big multimedia archives. The method can be easily adapted for a matching engine that retrieves art objects with similar emotions. For an arbitrary art image, we get the fuzzy dominant color palette (Algorithm 1). Next, we find the emotions relevant to the given art object using the color knowledge base we previously created (see Fig. 22).

Figure 21 demonstrates the interface of a prototype system for this task. It can potentially use fuzzy natural queries, retrieve emotion labels (e.g., trust) and their desired intensity (e.g., very high), and use fuzzy sets and fuzzy hedges to fetch the corresponding items. The proposed approach can be used in various applications, e.g., in a matching engine that uses emotion similarity measurement to implement retrieval.

Our method manages uncertainty and imprecision in data. Thus, the system has the potential to assist in recognizing emotions in content, suggesting recommendations, and improving user interactions on interactive platforms.

V. DISCUSSION

Several recent studies have explored the emotional effects of art objects on humans [65], [4], [20], [22]. Conceptually similar works were completed by psychologists via surveys and case studies among real people. We identified certain patterns and differences across various emotions by comparing the experimental findings and established psychological studies concerning color-emotion associations.

Yellow emerged as a dominant color associated with happiness in both the experimental findings and established psychological studies [61], [62]. Several studies [61], [66], [67] matched our results regarding anger, identifying it with red and black colors. Fear is expressed with gray [66] and shame with red in most research [62], [67]. Generally, green colors appeared to be associated mostly with positive emotions, and
VI. C ONCLUSION
In this study, we introduced a novel approach for emotion
retrieval using color-based features and fuzzy sets, providing
valuable insights into the intersection of computational meth-
ods and human emotional perception in visual art analysis.
The system processed a diverse art dataset to generate
color palettes associated with ten distinct emotions. We ex-
perimentally validated our approach with 2AFC experiment
and discovered that it has good predictive power of the color-
emotion associations (average hit rate is 0.77). This suggests
a substantial alignment between the emotions inferred by the
system and those perceived by human observers, validating the
system’s ability to approximate emotional states accurately.
Adopting the Fuzzy Sets and Logic approach is advan-
tageous due to its consistency with human perception. Our
method enhances algorithm interpretability and adaptability
in human-like applications. This system could aid in content-
based emotion recognition, recommendation systems, and en-
hancing user experiences in interactive platforms.
Our research contributes to a more comprehensive un-
derstanding of color-emotion associations, offering valuable
insights for various practical applications besides art, like
marketing, design, and psychology
As for the limitations, the system’s performance may vary across different art styles, cultural contexts, and individual perceptions. Next, detecting the closest emotion to an image based solely on a color’s presence does not account for varying color frequencies. Colors, texture, and composition can all evoke emotions [68], creating additional dependencies. We concentrated on color features only, ignoring the objects in the art image, although they also contribute to the overall emotional association. Color can have an impact not only when it is abundant but also when it contrasts sharply with the main color scheme of the background.

Fig. 21. Prototype Art Painting Retrieval System

In the comparison with psychological studies, shyness in the experiment was displayed with blue and cyan as dominant colors. Moreover, both sets of studies showed a strong correlation between sadness and gray and black. The results of [61] also matched ours in associating green with gratitude, although the experimental findings also noted a similarity with brown. Conversely, our findings did not align with psychological results regarding love-associated colors, with no dominant color evident for this emotion. Also, surprise primarily featured light tones such as beige and yellow, deviating from the typical association with black in psychological studies. Common colors such as gray, brown, and black appeared dominant in both sets of studies. However, blue did not dominate in the experimental findings as it did in psychological studies, highlighting a slight disparity.

Future research could focus on expanding the dataset to encompass more diverse artwork, incorporating contextual information, and refining the fuzzy logic model to enhance accuracy. We see great potential for personalized color-emotion applications. Understanding the emotional nuances of various colors enables the development of personalized color schemes that evoke specific emotional responses tailored to individual preferences or target audiences. We also plan to improve the model with additional dependencies, such as pixel importance.
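To make the frequency limitation concrete, the sketch below weights each color by its pixel share instead of treating it as merely present or absent, in the spirit of the planned pixel-importance extension. The associations and shares used here are illustrative assumptions, not part of the presented system.

```python
# Sketch: frequency-weighted color-emotion scoring. Each color contributes
# in proportion to the fraction of pixels it covers, rather than scoring
# as present/absent. Associations and shares are illustrative assumptions.

EMOTION_COLORS = {
    "happiness": {"yellow"},
    "fear":      {"gray"},
}

def weighted_score(color_shares: dict, emotion: str) -> float:
    """Sum the pixel shares of the colors associated with an emotion."""
    return sum(share for color, share in color_shares.items()
               if color in EMOTION_COLORS[emotion])

# 70% gray, 5% yellow: fear now outweighs happiness, a distinction that
# presence-only scoring misses (both colors are "present").
shares = {"gray": 0.70, "yellow": 0.05, "blue": 0.25}
print(weighted_score(shares, "fear"))       # 0.7
print(weighted_score(shares, "happiness"))  # 0.05
```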
[7] S. Wang, R. Ding, Y. Hu, and H. Wang, “Analysis of relationships between color and emotion by classification based on associations,” vol. 1, 2008.
[8] J. Lee and E. Park, “Fuzzy similarity-based emotional classification of color images,” IEEE Transactions on Multimedia, vol. 13, 2011.
[9] X. P. Gao, J. H. Xin, T. Sato, A. Hansuebsai, M. Scalzo, K. Kajiwara, S. S. Guan, J. Valldeperas, M. J. Lis, and M. Billger, “Analysis of cross-cultural color emotion,” Color Research and Application, vol. 32, pp. 223–229, Jun. 2007.
[10] X. Lu, Identifying Emotions Aroused from Paintings, G. Hua and H. Jégou, Eds. Springer International Publishing, 2016, vol. 9913. [Online]. Available: http://link.springer.com/10.1007/978-3-319-46604-0
[11] D. Joshi, R. Datta, E. Fedorovskaya, Q. T. Luong, J. Z. Wang, J. Li, and J. Luo, “Aesthetics and emotions in images,” IEEE Signal Processing Magazine, vol. 28, pp. 94–115, 2011.
[12] M. Solli and R. Lenz, “Color emotions for multi-colored images,” Color Research and Application, vol. 36, pp. 210–221, Jun. 2011.
[13] F. Hoenig, “Defining computational aesthetics.”
[14] D. Kang, H. Shim, and K. Yoon, “A method for extracting emotion using colors comprise the painting image,” Multimedia Tools and Applications, vol. 77, 2018.
[15] M. Solli and R. Lenz, “Color emotions for image classification and retrieval,” 2008. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-42582
[16] R. Lenz and M. Solli, “Color semantics for image indexing,” 2010. [Online]. Available: http://diameter.itn.liu.se/colse/
[17] M. Solli and R. Lenz, “Color based bags-of-emotions,” vol. 5702 LNCS, 2009, pp. 573–580.
[18] S. Kobayashi, “The aim and method of the color image scale.”
[19] J. A. Russell, “A circumplex model of affect,” Journal of Personality and Social Psychology, vol. 39, pp. 1161–1178, Dec. 1980.
[20] A. Wedolowska, D. Weber, and B. Kostek, “Predicting emotion from color present in images and video excerpts by machine learning,” IEEE Access, vol. 11, pp. 66357–66373, 2023.
[21] B. Ranjgar, M. Khoshlahjeh Azar, A. Sadeghi-Niaraki, and S.-M. Choi, “A novel method for emotion extraction from paintings based on Luscher’s psychological color test: Case study Iranian-Islamic paintings,” IEEE Access, vol. 7, pp. 120857–120871, 2019.
[22] H. Zhang and M. Xu, “Weakly supervised emotion intensity prediction for recognition of emotions in images,” IEEE Transactions on Multimedia, vol. 23, pp. 2033–2044, 2021.
[23] J. Chamorro-Martínez, J. M. Soto-Hidalgo, P. M. Martínez-Jiménez, and D. Sánchez, “Fuzzy color spaces: A conceptual approach to color vision,” IEEE Transactions on Fuzzy Systems, vol. 25, no. 5, pp. 1264–1280, 2017.
[24] J. Chamorro-Martínez and J. M. Keller, “Granular modeling of fuzzy color categories,” IEEE Transactions on Fuzzy Systems, vol. 28, no. 9, pp. 1897–1909, 2020.
[25] M. Mengíbar-Rodríguez and J. Chamorro-Martínez, “An image-based approach for building fuzzy color spaces,” Information Sciences, vol. 616, pp. 577–592, 2022.
[26] M. Nachtegael, D. V. D. Weken, V. D. Witte, S. Schulte, T. Mélange, and E. E. Kerre, “Color image retrieval using fuzzy similarity measures and fuzzy partitions.”
[27] W.-N. Wang and Y.-L. Yu, “Image emotional semantic query based on color semantic description,” in 2005 International Conference on Machine Learning and Cybernetics, vol. 7, 2005, pp. 4571–4576.
[28] P. Ekman, “An argument for basic emotions,” Cognition & Emotion, vol. 6, no. 3-4, pp. 169–200, 1992.
[29] W. James, “What is an emotion?” Mind, vol. IX, pp. 188–205, 1884.
[30] W. Cannon, “Bodily changes in pain, hunger, fear and rage,” The American Journal of Psychology, vol. 27, no. 3, pp. 469–485, 1915.
[31] ——, “Emotion and conduct,” Prentice-Hall, 1929.
[32] L. F. Barrett, “The theory of constructed emotion: an active inference account of interoception and categorization,” Social Cognitive and Affective Neuroscience, vol. 12, no. 1, pp. 1–23, 2017.
[33] K. R. Scherer, “Emotion classification: A review of methods and their applications,” Social Science Information, vol. 39, no. 3, pp. 433–468, 2000.
[34] S. Schachter and J. Singer, “A cognitive-attributional analysis of emotion,” Psychological Review, vol. 69, no. 5, pp. 379–399, 1962.
[35] A. Mehrabian, “A semantic space for nonverbal behavior,” Journal of Consulting and Clinical Psychology, vol. 37, no. 1, p. 109, 1989.
[36] S. M. Mohammad and S. Kiritchenko, “Wikiart emotions: An annotated dataset of emotions evoked by art.” [Online]. Available: https://en.wikipedia.org/wiki/List
[37] P. Shamoi, “Fuzzy model for human color perception and its application in e-commerce - apparel color coordination,” Ph.D. thesis, Mar. 2019.
[38] C. Bundesen, “Theory of visual attention,” Psychological Review, vol. 97, pp. 523–547, Nov. 1990.
[39] P. Shamoi, D. Sansyzbayev, and N. Abiley, “Comparative overview of color models for content-based image retrieval,” in 2022 International Conference on Smart Information Systems and Technologies (SIST), 2022, pp. 1–6.
[40] A. Ualibekova and P. Shamoi, “Music emotion recognition using k-nearest neighbors algorithm,” in 2022 International Conference on Smart Information Systems and Technologies (SIST), 2022, pp. 1–6.
[41] P. Kozlov, A. Akram, and P. Shamoi, “Fuzzy approach for audio-video emotion recognition in computer games for children,” 2023.
[42] L. Chen, W. Su, M. Wu, W. Pedrycz, and K. Hirota, “A fuzzy deep neural network with sparse autoencoder for emotional intention understanding in human–robot interaction,” IEEE Transactions on Fuzzy Systems, vol. 28, no. 7, pp. 1252–1264, 2020.
[43] L. Chen, M. Zhou, M. Wu, J. She, Z. Liu, F. Dong, and K. Hirota, “Three-layer weighted fuzzy support vector regression for emotional intention understanding in human–robot interaction,” IEEE Transactions on Fuzzy Systems, vol. 26, no. 5, pp. 2524–2538, 2018.
[44] S. Vashishtha and S. Susan, “Unsupervised fuzzy inference system for speech emotion recognition using audio and text cues (workshop paper),” in 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), 2020, pp. 394–403.
[45] P. Shamoi, M. Muratbekova, A. Izbassar, A. Inoue, and H. Kawanaka, “Towards a universal understanding of color harmony: Fuzzy approach,” 2023.
[46] J. Yang, J. Li, X. Wang, Y. Ding, and X. Gao, “Stimuli-aware visual emotion analysis,” IEEE Transactions on Image Processing, vol. 30, pp. 7432–7445, 2021.
[47] E. Shamoi, A. Turdybay, P. Shamoi, I. Akhmetov, A. Jaxylykova, and A. Pak, “Sentiment analysis of vegan related tweets using mutual information for feature selection,” PeerJ Computer Science, vol. 8, p. e1149, Dec. 2022. [Online]. Available: https://doi.org/10.7717/peerj-cs.1149
[48] S. B. Sadkhan and A. D. Radhi, “Fuzzy logic used in textual emotion detection,” in 2017 Second Al-Sadiq International Conference on Multidisciplinary in IT and Communication Science and Applications (AIC-MITCSA), 2017, pp. 242–245.
[49] F. Es-Sabery, I. Es-Sabery, A. Hair, B. Sainz-De-Abajo, and B. Garcia-Zapirain, “Emotion processing by applying a fuzzy-based VADER lexicon and a parallel deep belief network over massive data,” IEEE Access, vol. 10, pp. 87870–87899, 2022.
[50] Y. M. Tashtoush and D. A. Al Aziz Orabi, “Tweets emotion prediction by using fuzzy logic system,” in 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), 2019, pp. 83–90.
[51] Y.-D. Zhang, Z.-J. Yang, H.-M. Lu, X.-X. Zhou, P. Phillips, Q.-M. Liu, and S.-H. Wang, “Facial emotion recognition based on biorthogonal wavelet entropy, fuzzy support vector machine, and stratified cross validation,” IEEE Access, vol. 4, pp. 8375–8385, 2016.
[52] D.-J. Kim and Z. Bien, “Design of “personalized” classifier using soft computing techniques for “personalized” facial expression recognition,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 4, pp. 874–885, 2008.
[53] K. Y. Chan and U. Engelke, “Varying spread fuzzy regression for affective quality estimation,” IEEE Transactions on Fuzzy Systems, vol. 25, no. 3, pp. 594–613, 2017.
[54] P. Shamoi, A. Inoue, and H. Kawanaka, “Modeling aesthetic preferences: Color coordination and fuzzy sets,” Fuzzy Sets and Systems, vol. 395, pp. 217–234, 2020.
[55] L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, pp. 338–353, 1965. [Online]. Available: http://www-bisc.cs.berkeley.edu/Zadeh-1965.pdf
[56] L. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning-III,” Information Sciences, vol. 9, no. 1, pp. 43–80, 1975.
[57] P. Shamoi, A. Inoue, and H. Kawanaka, “Fuzzy color space for apparel coordination,” Open Journal of Information Systems (OJIS), pp. 20–28, 2014.
[58] P. Shamoi, A. Inoue, and H. Kawanaka, “FHSI: Toward more human-consistent color representation,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 20, no. 3, 2016.
[59] J. Chamorro-Martínez, D. Sánchez, J. M. Soto-Hidalgo, and P. Martínez-Jiménez, “Histograms for fuzzy color spaces,” in Eurofuse 2011. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 339–350.
[60] T. Clarke and A. Costall, “The emotional connotations of color: A qualitative investigation,” Color Research and Application, vol. 33, pp. 406–410, Oct. 2008.
[61] J. M. B. Fugate and C. L. Franco, “What color is your anger? Assessing color-emotion pairings in English speakers,” Frontiers in Psychology, vol. 10, 2019.
[62] D. Jonauskaite, C. A. Parraga, M. Quiblier, and C. Mohr, “Feeling blue or seeing red? Similar patterns of emotion associations with colour patches and colour terms,” i-Perception, vol. 11, Jan. 2020.
[63] R. Ulrich and J. Miller, “Threshold estimation in two-alternative forced-choice (2AFC) tasks: The Spearman-Kärber method,” Perception & Psychophysics, vol. 66, pp. 517–533, 2004.
[64] M. Jogan and A. Stocker, “A new two-alternative forced choice method for the unbiased characterization of perceptual bias and discriminability,” Journal of Vision, vol. 14, no. 3, p. 20, 2014.
[65] P. Goel, M. Goyal, and R. R. Shah, “Arten-Net: An emotion classification system for art (student consortium),” in 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), 2020, pp. 302–306.
[66] C. Damiano, P. Gayen, M. Rezanejad, A. Banerjee, G. Banik, P. Patnaik, J. Wagemans, and D. Walther, “Anger is red, sadness is blue: Emotion depictions in abstract visual art by artists and non-artists,” Journal of Vision, vol. 23, p. 1, Apr. 2023.
[67] D. Jonauskaite, A. Abu-Akel, N. Dael, D. Oberfeld, A. Abdel-Khalek, A. Al-rasheed, J.-P. Antonietti, V. Bogushevskaya, A. Chamseddine, E. Chkonia, V. Corona, E. Fonseca-Pedrero, Y. Griber, G. Grimshaw, A. Hasan, H. Jelena, M. Hirnstein, B. Karlsson, E. Laurent, and C. Mohr, “Universal patterns in color-emotion associations are further shaped by linguistic and geographic proximity,” Psychological Science, vol. 31, pp. 1245–1260, Oct. 2020.
[68] M. Lucassen, T. Gevers, and A. Gijsenij, “Texture affects color emotion,” Color Research and Application, vol. 36, pp. 426–436, Dec. 2011.

Muragul Muratbekova received a B.S. degree in information systems from the Kazakh-British Technical University, Almaty, Kazakhstan, in 2022. She is currently pursuing an M.S. degree in IT management at the same university and works as a senior software developer at a leading telecommunication company in Kazakhstan. She has participated in a number of conferences (EUSPN 2023, FSDM 2023) and one grant project. Her research interests include visual perception computing, emotion engineering, and image processing.

Pakizar Shamoi received the B.S. and M.S. degrees in information systems from the Kazakh-British Technical University, Almaty, Kazakhstan, in 2011 and 2013, and the Ph.D. degree in engineering from Mie University, Tsu, Japan, in 2019. In her academic journey, she has held various teaching and research positions at Kazakh-British Technical University, where she has been serving as a professor in the School of Information Technology and Engineering since August 2020. She is the author of 1 book and more than 28 scientific publications and has received five best paper awards at conferences. Her research interests include artificial intelligence and machine learning in general, focusing on fuzzy sets and logic, soft computing, representing and processing colors in computer systems, natural language processing, computational aesthetics, and human-friendly computing and systems. She took part in the organization and worked on the organizing committee (as head of the session and responsible for special sessions) of several international conferences: IFSA-SCIS 2017, Otsu, Japan; SCIS-ISIS 2022, Mie, Japan; EUSPN 2023, Almaty, Kazakhstan. She has served as a reviewer at several international conferences, including IEEE SIST 2023, SMC 2022, SCIS-ISIS 2022, SMC 2020, ICIEV-IVPR 2019, and ICIEV-IVPR 2018.