

Exploring Concepts of HyperFuzzy, HyperNeutrosophic, and
HyperPlithogenic Sets II
Takaaki Fujita 1 ∗
1 Independent Researcher, Shinjuku, Shinjuku-ku, Tokyo, Japan.

Abstract
This paper delves into the advancements of classical set theory to address the complexities and uncertainties
inherent in real-world phenomena. It highlights three major extensions of traditional set theory—Fuzzy
Sets [287], Neutrosophic Sets [237], and Plithogenic Sets [242]—and examines their further generalizations
into Hyperfuzzy [106], HyperNeutrosophic [90], and Hyperplithogenic Sets [90].

Building on previous research [83], this study explores the potential applications of HyperNeutrosophic Sets
and SuperHyperNeutrosophic Sets across various domains. Specifically, it extends fundamental concepts such
as Neutrosophic Logic, Cognitive Maps, Graph Neural Networks, Classifiers, and Triplet Groups through these
advanced set structures and briefly analyzes their mathematical properties.

Keywords: Fuzzy set, Neutrosophic set, Hyperstructure, Hyperfuzzy set, Hyperneutrosophic set
MSC 2010 classifications: 03E72: Fuzzy set theory, 03B52: Fuzzy logic; logic of vagueness

1 Introduction
This paper is closely related to [83]. Readers are encouraged to review [83] in advance, as needed.

1.1 Fuzzy Sets, Neutrosophic Sets, and Plithogenic Sets

Set theory, a cornerstone of mathematics, provides a framework for analyzing collections of elements called
“sets” [61, 139]. This study examines three major extensions—Fuzzy Sets [287], Neutrosophic Sets [237],
and Plithogenic Sets [242]—and their generalizations into Hyperfuzzy [106], HyperNeutrosophic [90], and
Hyperplithogenic Sets [90].

These frameworks address various dimensions of uncertainty. Fuzzy Sets represent imprecision through
membership values between 0 and 1 [287]. Neutrosophic Sets enhance this by adding truth, indeterminacy,
and falsity components, offering richer analyses of complex systems [237]. Plithogenic Sets further extend
these ideas to handle multidimensional uncertainty and contradictions, making them particularly effective for
analyzing highly complex systems [243, 255].

1.2 Hyperfuzzy, HyperNeutrosophic, and Hyperplithogenic Sets

Extensions of Fuzzy Sets [90, 106, 144, 261], Neutrosophic Sets [90], Plithogenic Sets [90], Soft Sets [1,
84, 99, 121, 137, 213, 226, 229, 241, 249], Rough Sets [90], and Vague Sets [90] have been developed using
Hyperstructures and 𝑛-SuperHyperstructures.

For instance, Fuzzy Sets have been extended into Hyperfuzzy Sets [80,106,144–147,172,176,177,190,261] and
SuperHyperfuzzy Sets [90]. Similarly, Neutrosophic Sets have been extended into HyperNeutrosophic Sets [90]
and SuperHyperNeutrosophic Sets [90], while Plithogenic Sets have been extended into HyperPlithogenic
Sets [90] and SuperHyperPlithogenic Sets [90].

1.3 Our Contribution in This Paper

This section highlights the contributions of this paper. Building on previous research [83], we investigate the
potential applications of HyperNeutrosophic Sets and SuperHyperNeutrosophic Sets in various domains.

The study focuses primarily on theoretical exploration and mathematical formulation. For example, we extend
concepts such as Neutrosophic Logic, Cognitive Maps, Graph Neural Networks, Classifiers, and Triplet Groups
using HyperNeutrosophic Sets and SuperHyperNeutrosophic Sets, and briefly analyze their properties.

Future research should include experimental validation and application-oriented studies to facilitate practical
implementation in specific fields. Through this work, we aim to advance this area of study and encourage
further exploration and development of related topics.

1.4 Structure of the Paper

The structure of this paper is outlined as follows.

1 Introduction
 1.1 Fuzzy Sets, Neutrosophic Sets, and Plithogenic Sets
 1.2 Hyperfuzzy, HyperNeutrosophic, and Hyperplithogenic Sets
 1.3 Our Contribution in This Paper
 1.4 Structure of the Paper
2 Preliminaries and Definitions
 2.1 Basics of Set Theory and Others
 2.2 Hyperstructure and Superhyperstructure
 2.3 Fuzzy Set, Hyperfuzzy Set, and Superhyperfuzzy Set
 2.4 Neutrosophic, HyperNeutrosophic, and SuperHyperNeutrosophic Sets
 2.5 HyperPlithogenic Set
3 Result: Application of HyperNeutrosophic Sets to Various Sciences
 3.1 Neutrosophic Logic
 3.2 HyperNeutrosophic Graph Neural Network
 3.3 Neutrosophic Cognitive Maps
 3.4 Neutrosophic Classifier
 3.5 Neutrosophic Triplet Group
4 Additional Result: Hyperfuzzy Extension
 4.1 Neuro-Hyperfuzzy System
 4.2 Hyperfuzzy control
5 Future Work: Further Exploration of HyperUncertain Extensions

2 Preliminaries and Definitions


This section outlines the essential concepts and definitions necessary for understanding the discussions in this
paper. While we aim to present the fundamental ideas concisely, a comprehensive exploration of all related
terms lies beyond the scope of this work. Readers are encouraged to consult the cited references for a more
in-depth understanding.

2.1 Basics of Set Theory and Others

This subsection provides a brief overview of foundational principles in set theory. For a detailed discussion,
we recommend standard references such as [123, 139, 143].
Definition 2.1 (Set). [139] A set is a well-defined collection of distinct objects, referred to as elements. For
any object 𝑥, it is always determinable whether 𝑥 is an element of a given set. If 𝑥 belongs to a set 𝐴, this is
denoted as 𝑥 ∈ 𝐴. Sets are often represented using curly braces. For example, 𝐴 = {1, 2, 3} represents a set
containing the elements 1, 2, and 3.
Definition 2.2 (Subset). [139] A set 𝐴 is called a subset of another set 𝐵, written as 𝐴 ⊆ 𝐵, if every element
of 𝐴 is also an element of 𝐵. This relationship is formally expressed as:

𝐴 ⊆ 𝐵 ⇐⇒ ∀𝑥 (𝑥 ∈ 𝐴 =⇒ 𝑥 ∈ 𝐵).

If 𝐴 ⊆ 𝐵 and 𝐴 ≠ 𝐵, 𝐴 is referred to as a proper subset of 𝐵, denoted by 𝐴 ⊂ 𝐵.


Definition 2.3 (Empty Set). [139] The empty set, denoted as ∅, is the unique set containing no elements. It is
formally defined as:
∀𝑥 (𝑥 ∉ ∅).
For example, the empty set can be represented as ∅ = {}.
Definition 2.4 (Universal Set). [139] The universal set, denoted by 𝑈, represents the set containing all objects
under consideration within a specific context. Any set 𝐴 under analysis is a subset of 𝑈. Formally:

𝐴⊆𝑈 for any set 𝐴.

Although some concepts may not have a direct connection to set theory, the following fundamental mathematical
definitions will also be employed. As these are basic definitions, readers are encouraged to refer to relevant
literature as needed.
Definition 2.5 (Real Numbers). (cf. [70, 127]) The set of real numbers, denoted by R, includes all rational
and irrational numbers, which can be represented as points on the real number line. Examples are integers,
fractions, and roots.
Definition 2.6 (Natural Numbers). (cf. [160]) The set of natural numbers, denoted by N, consists of all positive
integers starting from 1:
N = {1, 2, 3, . . .}.
Some conventions also include 0, depending on the context.
Definition 2.7 (Homomorphism). (cf. [75, 220]) Let ( 𝐴, ★) and (𝐵, ◦) be two algebraic structures. A homo-
morphism is a function 𝑓 : 𝐴 → 𝐵 that satisfies:
𝑓 (𝑎 ★ 𝑎 ′ ) = 𝑓 (𝑎) ◦ 𝑓 (𝑎 ′ ) for all 𝑎, 𝑎 ′ ∈ 𝐴.
Definition 2.8 (Operation). [139] An operation is a function or rule that combines elements of a set 𝑆 to
produce another element within 𝑆. Formally, an operation ◦ on 𝑆 is defined as:
◦ : 𝑆 × 𝑆 → 𝑆.
Examples include addition and multiplication, which are operations on the set of real numbers R.
Definition 2.9 (Binary Operation). [34] A binary operation on a set 𝑆 is a function ∗ : 𝑆 × 𝑆 → 𝑆 that combines
two elements 𝑎, 𝑏 ∈ 𝑆 to produce another element 𝑎 ∗ 𝑏 ∈ 𝑆. Examples include addition and subtraction, both
of which are binary operations on R.
Definition 2.10 (Graph). [64–66] A graph, denoted 𝐺 = (𝑉, 𝐸), consists of:

• 𝑉: A set of vertices (or nodes).


• 𝐸: A set of edges, where each edge is an unordered pair of vertices {𝑢, 𝑣}, 𝑢, 𝑣 ∈ 𝑉.
Definition 2.11 (Directed Graph). (cf. [22, 66]) A directed graph, denoted 𝐺 = (𝑉, 𝐸), consists of:

• 𝑉: A set of vertices (or nodes).


• 𝐸: A set of directed edges, where each edge is an ordered pair of vertices (𝑢, 𝑣), 𝑢, 𝑣 ∈ 𝑉.
Definition 2.12 (Matrix). [25] A matrix is a rectangular array of elements arranged in rows and columns,
typically denoted as 𝐴 = [𝑎 𝑖 𝑗 ], where 𝑎 𝑖 𝑗 represents the element in the 𝑖-th row and 𝑗-th column.
Definition 2.13 (Adjacency Matrix of a Graph). (cf. [114]) Let 𝐺 = (𝑉, 𝐸) be a graph with vertex set 𝑉 and
edge set 𝐸. The adjacency matrix of 𝐺, denoted as 𝐴 = [𝑎 𝑖 𝑗 ], is a square matrix of size |𝑉 | × |𝑉 |, where |𝑉 | is
the number of vertices in 𝐺. Each entry 𝑎ᵢⱼ is defined as:

𝑎ᵢⱼ = 1 if there is an edge from vertex 𝑣ᵢ to 𝑣ⱼ, and 𝑎ᵢⱼ = 0 otherwise.

For undirected graphs, the adjacency matrix 𝐴 is symmetric, whereas for directed graphs, 𝐴 may not be
symmetric.
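
To make the definition concrete, the following minimal Python sketch (an illustration only; the three-vertex graph is an arbitrary example) builds the adjacency matrix of a small undirected graph from its edge list:

# Build the adjacency matrix of Definition 2.13 for a small undirected graph.
vertices = ["v1", "v2", "v3"]
edges = [("v1", "v2"), ("v2", "v3")]

index = {v: i for i, v in enumerate(vertices)}
A = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    A[index[u]][index[v]] = 1
    A[index[v]][index[u]] = 1  # mirror the entry: an undirected graph gives a symmetric matrix

for row in A:
    print(row)  # [0, 1, 0] / [1, 0, 1] / [0, 1, 0]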
Definition 2.14 (Weight Matrix). (cf. [193, 267]) A weight matrix is a matrix where each element represents a
weight or parameter, often used to describe relationships in graphs, neural networks, or optimization problems,
such as edge weights or neural connection strengths.
Definition 2.15 (Approximation). (cf. [206]) Approximation refers to the process of representing a function
or a value by another function or value that is close to the original within a specified level of accuracy. It is
fundamental in numerical analysis and machine learning.
Definition 2.16 (Vector). (cf. [63]) A vector is an ordered tuple of elements, typically from a field R or C,
representing a point in an 𝑛-dimensional space or a directed quantity with both magnitude and direction.

2.2 Hyperstructure and Superhyperstructure

This subsection introduces the concepts of Hyperstructure and Superhyperstructure, advanced mathematical
frameworks designed to represent hierarchical and multi-layered systems. A Hyperstructure builds upon
the powerset of a base set to model relationships within collections of elements. Extending this notion,
a Superhyperstructure utilizes the 𝑛-th powerset to represent intricate hierarchical systems across multiple
layers [81, 82, 253, 254]. Below, we formalize the 𝑛-th powerset and its related constructs.
Definition 2.17 (Base Set). A base set is the foundational set 𝑆 from which powersets and hyperstructures are
constructed. Formally:
𝑆 = {𝑥 | 𝑥 is an element within the specified domain}.
All subsets and operations within P (𝑆) or P𝑛 (𝑆) are derived from the elements of 𝑆.

Definition 2.18 (Powerset). [86, 215] The powerset of a set 𝑆, denoted as P (𝑆), is the collection of all subsets
of 𝑆, including the empty set and 𝑆 itself:

P (𝑆) = { 𝐴 | 𝐴 ⊆ 𝑆}.

Definition 2.19 (𝑛-th Powerset). (cf. [86, 235, 253])

The 𝑛-th powerset of a set 𝐻, denoted 𝑃𝑛 (𝐻), is defined recursively. Starting with the standard powerset, the
construction proceeds as:

𝑃1 (𝐻) = 𝑃(𝐻), 𝑃𝑛+1 (𝐻) = 𝑃(𝑃𝑛 (𝐻)), for 𝑛 ≥ 1.

The 𝑛-th non-empty powerset, denoted 𝑃𝑛∗ (𝐻), excludes the empty set:

𝑃₁*(𝐻) = 𝑃*(𝐻), 𝑃ₙ₊₁*(𝐻) = 𝑃*(𝑃ₙ*(𝐻)), for 𝑛 ≥ 1.

Here, 𝑃∗ (𝐻) is the powerset of 𝐻 excluding the empty set.
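
The recursion is straightforward to implement. The sketch below (a minimal Python illustration; the base set {1, 2} is arbitrary) computes 𝑃ₙ(𝐻) directly and shows how quickly its size grows:

from itertools import chain, combinations

def powerset(s):
    # All subsets of s, returned as frozensets (so they can be set members).
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def nth_powerset(s, n):
    # P^n(S): apply the powerset construction n times.
    result = frozenset(s)
    for _ in range(n):
        result = powerset(result)
    return result

print(len(nth_powerset({1, 2}, 1)))  # 4: P({1, 2})
print(len(nth_powerset({1, 2}, 2)))  # 16: P(P({1, 2})), doubly exponential growth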

To formalize the concepts of Hyperstructure and Superhyperstructure, we proceed with the following definitions.
Definition 2.20 (Classical Structure). (cf. [235, 253]) A Classical Structure is a mathematical framework
defined on a non-empty set 𝐻 equipped with one or more Classical Operations that satisfy specific axioms. A
Classical Operation is a function:
#0 : 𝐻 𝑚 → 𝐻,
where 𝑚 ≥ 1 and 𝐻 𝑚 represents the 𝑚-fold Cartesian product of 𝐻. Examples include addition and multipli-
cation in algebraic structures like groups and rings.
Definition 2.21 (Hyperstructure). (cf. [86, 235, 253]) A Hyperstructure extends a Classical Structure by oper-
ating on the powerset of a base set. Formally:

H = (P (𝑆), ◦),

where 𝑆 is the base set, P (𝑆) is its powerset, and ◦ is an operation defined on subsets of P (𝑆).
Definition 2.22 (𝑛-Superhyperstructure). (cf. [235, 253]) An 𝑛-Superhyperstructure generalizes a Hyperstruc-
ture by utilizing the 𝑛-th powerset of a base set. It is defined as:

SH 𝑛 = (P𝑛 (𝑆), ◦),

where 𝑆 is the base set, P𝑛 (𝑆) is the 𝑛-th powerset of 𝑆, and ◦ is an operation on elements of P𝑛 (𝑆).

A representative example of a superhyperstructure is the SuperHypergraph, which incorporates advanced


elements such as superedges and supervertices, offering a more abstract and versatile framework for hierarchical
modeling [38, 85, 90, 91, 93–96, 105, 116, 117, 183, 245, 246, 248, 251, 253]. Additionally, concepts such as
SuperHyperfunction have also been explored in the literature [250, 252].

2.3 Fuzzy Set, Hyperfuzzy Set, and Superhyperfuzzy Set

This subsection presents the formal definitions of Fuzzy Set, Hyperfuzzy Set, and Superhyperfuzzy Set. These
concepts extend the traditional notion of fuzzy values into hierarchical structures, offering more refined tools
for representing uncertainty.
Definition 2.23 (Fuzzy Set). [287–291] A fuzzy set 𝜏 in a non-empty universe 𝑌 is a mapping 𝜏 : 𝑌 → [0, 1].
A fuzzy relation on 𝑌 is a fuzzy subset 𝛿 of 𝑌 × 𝑌 . If 𝜏 is a fuzzy set in 𝑌 and 𝛿 is a fuzzy relation on 𝑌 , 𝛿 is
called a fuzzy relation on 𝜏 if:

𝛿(𝑦, 𝑧) ≤ min{𝜏(𝑦), 𝜏(𝑧)} for all 𝑦, 𝑧 ∈ 𝑌 .

Example 2.24 (Fuzzy Set: Membership in a Fitness Club). Consider the universe𝑌 = {John, Alice, Bob, Sarah},
representing a group of people. A fuzzy set 𝜏 defines their membership in a fitness club based on their partici-
pation level, where:

𝜏(𝑦) =
  1.0, if the person is a regular member (e.g., Alice),
  0.8, if the person participates occasionally (e.g., Bob),
  0.3, if the person rarely participates (e.g., Sarah),
  0.0, if the person is not a member (e.g., John).

The mapping 𝜏 : 𝑌 → [0, 1] intuitively represents the degree of belonging for each individual in the club.
Definition 2.25 (Hyperfuzzy Set). [29, 106, 144, 189, 261] Let 𝑋 be a non-empty set. A hyperfuzzy set over 𝑋 is defined as a mapping 𝜇̃ : 𝑋 → 𝑃̃([0, 1]), where 𝑃̃([0, 1]) represents the set of all non-empty subsets of the interval [0, 1].
Example 2.26 (Hyperfuzzy Set: Customer Satisfaction Ratings). Customer satisfaction ratings are often analyzed from the perspective of fuzzy set theory [76, 135, 166]. Consider a set 𝑋 = {Product A, Product B, Product C}, representing three products. A hyperfuzzy set 𝜇̃ maps each product to a set of customer satisfaction ratings, where:

𝜇̃(𝑥) =
  {0.9, 1.0}, for Product A (highly satisfied customers),
  {0.4, 0.6, 0.8}, for Product B (moderately satisfied customers),
  {0.1, 0.2}, for Product C (low satisfaction levels).

Here, 𝜇̃ : 𝑋 → 𝑃̃([0, 1]), where each product is associated with a set of satisfaction levels, capturing the diverse opinions of customers.
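
The contrast with an ordinary fuzzy set is easy to see in code. A minimal Python sketch follows; the hyperfuzzy values come from Example 2.26, while the single fuzzy values and the min-based flattening are hypothetical illustrative choices:

# Fuzzy set: one membership value per element (hypothetical values).
fuzzy = {"Product A": 0.95, "Product B": 0.6, "Product C": 0.15}

# Hyperfuzzy set: a non-empty set of membership values per element (Example 2.26).
hyperfuzzy = {
    "Product A": {0.9, 1.0},
    "Product B": {0.4, 0.6, 0.8},
    "Product C": {0.1, 0.2},
}

def valid_hyperfuzzy(mapping):
    # Defining condition: each image is a non-empty subset of [0, 1].
    return all(vals and all(0.0 <= v <= 1.0 for v in vals)
               for vals in mapping.values())

print(valid_hyperfuzzy(hyperfuzzy))                  # True
print({x: min(vs) for x, vs in hyperfuzzy.items()})  # one possible pessimistic flattening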
Definition 2.27 (𝑛-SuperHyperFuzzy Set). [90] Let 𝑋 be a non-empty set. An 𝑛-SuperHyperFuzzy Set is a
recursive extension of fuzzy sets, hyperfuzzy sets, and superhyperfuzzy sets, defined as:

𝜇˜ 𝑛 : P̃𝑛 (𝑋) → P̃𝑛 ( [0, 1]),

where:

• P̃₁(𝑋) = P̃(𝑋), and for 𝑘 ≥ 2, P̃ₖ(𝑋) = P̃(P̃ₖ₋₁(𝑋)) represents the 𝑘-th nested family of non-empty subsets of 𝑋.
• P̃𝑛 ( [0, 1]) is similarly defined for the interval [0, 1].
• 𝜇˜ 𝑛 maps each element 𝐴 ∈ P̃𝑛 (𝑋) to a non-empty subset 𝜇˜ 𝑛 ( 𝐴) ⊆ [0, 1], which represents the
membership degrees of 𝐴 at the 𝑛-th hierarchical level.

2.4 Neutrosophic, HyperNeutrosophic, and SuperHyperNeutrosophic Sets

Neutrosophic Sets enhance Fuzzy Sets by incorporating the concept of indeterminacy, allowing them to model
situations that are neither entirely true nor false [237]. This framework offers a more comprehensive approach
to handling real-world scenarios characterized by significant uncertainty and complexity, making it a focus of
extensive research [100, 101, 155, 236, 238, 240, 247, 256, 257, 259, 260]. The formal definitions are provided
below.

Definition 2.28 (Neutrosophic Set). [237] Let 𝑋 be a non-empty set. A Neutrosophic Set 𝐴 on 𝑋 is defined
by three membership functions:

𝑇𝐴 : 𝑋 → [0, 1], 𝐼 𝐴 : 𝑋 → [0, 1], 𝐹𝐴 : 𝑋 → [0, 1],

where 𝑇𝐴 (𝑥), 𝐼 𝐴 (𝑥), and 𝐹𝐴 (𝑥) represent the degrees of truth, indeterminacy, and falsity for each 𝑥 ∈ 𝑋. These
values satisfy the condition:
0 ≤ 𝑇𝐴 (𝑥) + 𝐼 𝐴 (𝑥) + 𝐹𝐴 (𝑥) ≤ 3.
Example 2.29 (Neutrosophic Set: Decision-Making in Hiring). Decision-making is often studied in conjunc-
tion with Neutrosophic Sets [6,11,52,60,164,181,198,285]. Consider 𝑋 = {Candidate A, Candidate B, Candidate C},
representing applicants for a job. A Neutrosophic Set 𝐴 defines the suitability of each candidate, where:

𝑇𝐴 (𝑥), 𝐼 𝐴 (𝑥), 𝐹𝐴 (𝑥)

denote the degrees of truth (suitability), indeterminacy (uncertainty), and falsity (unsuitability) for each candi-
date:
𝑇𝐴 (Candidate A) = 0.8, 𝐼 𝐴 (Candidate A) = 0.1, 𝐹𝐴 (Candidate A) = 0.1,
𝑇𝐴 (Candidate B) = 0.5, 𝐼 𝐴 (Candidate B) = 0.4, 𝐹𝐴 (Candidate B) = 0.1,
𝑇𝐴 (Candidate C) = 0.3, 𝐼 𝐴 (Candidate C) = 0.2, 𝐹𝐴 (Candidate C) = 0.5.
Here, the Neutrosophic Set models the hiring committee’s confidence, uncertainty, and rejection levels for each
applicant.
Definition 2.30 (HyperNeutrosophic Set). [90] Let 𝑋 be a non-empty set. A HyperNeutrosophic Set on 𝑋 is a mapping 𝜇̃ : 𝑋 → 𝑃̃([0, 1]³), where 𝑃̃([0, 1]³) is the family of all non-empty subsets of the unit cube [0, 1]³. For each 𝑥 ∈ 𝑋, 𝜇̃(𝑥) ⊆ [0, 1]³ represents a collection of membership values, with each element comprising degrees of truth (𝑇), indeterminacy (𝐼), and falsity (𝐹). These components satisfy:

0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3.

Example 2.31 (HyperNeutrosophic Set: Product Feedback Analysis). Consider 𝑋 = {Product X, Product Y, Product Z},
representing three products. A HyperNeutrosophic Set 𝜇˜ maps each product to a set of customer opinions,
where each opinion is a triple (𝑇, 𝐼, 𝐹) representing truth (positive feedback), indeterminacy (uncertainty), and
falsity (negative feedback):

𝜇̃(Product X) = {(0.9, 0.1, 0.0), (0.8, 0.2, 0.0)},
𝜇̃(Product Y) = {(0.6, 0.3, 0.1), (0.5, 0.4, 0.1), (0.7, 0.2, 0.1)},
𝜇̃(Product Z) = {(0.4, 0.5, 0.1), (0.3, 0.6, 0.1)}.
This representation captures the diversity of customer feedback, with multiple sets of opinions reflecting varying
perspectives.
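
A HyperNeutrosophic Set is just as direct to encode. The Python sketch below (using the triples of Example 2.31; the only logic is a validity check) verifies the defining constraint 0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3 for every triple:

feedback = {
    "Product X": {(0.9, 0.1, 0.0), (0.8, 0.2, 0.0)},
    "Product Y": {(0.6, 0.3, 0.1), (0.5, 0.4, 0.1), (0.7, 0.2, 0.1)},
    "Product Z": {(0.4, 0.5, 0.1), (0.3, 0.6, 0.1)},
}

def is_hyperneutrosophic(mapping):
    # Each element must map to a non-empty set of (T, I, F) triples in [0, 1]^3
    # with 0 <= T + I + F <= 3.
    return all(
        triples and all(
            all(0.0 <= c <= 1.0 for c in t) and 0.0 <= sum(t) <= 3.0
            for t in triples)
        for triples in mapping.values())

print(is_hyperneutrosophic(feedback))  # True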
Definition 2.32 (𝑛-SuperHyperNeutrosophic Set). [90] Let 𝑋 be a non-empty set. An 𝑛-SuperHyperNeutrosophic
Set is a recursive extension of Neutrosophic and HyperNeutrosophic Sets, defined as:

𝐴˜ 𝑛 : P̃𝑛 (𝑋) → P̃𝑛 ( [0, 1] 3 ),

where:

• P̃₁(𝑋) = P̃(𝑋), and for 𝑘 ≥ 2, P̃ₖ(𝑋) = P̃(P̃ₖ₋₁(𝑋)) denotes the 𝑘-th nested family of non-empty subsets of 𝑋.
• P̃𝑛 ( [0, 1] 3 ) is defined analogously for the unit cube [0, 1] 3 .
• The mapping 𝐴˜ 𝑛 assigns to each 𝐴 ∈ P̃𝑛 (𝑋) a subset 𝐴˜ 𝑛 ( 𝐴) ⊆ [0, 1] 3 , representing the degrees of truth
(𝑇), indeterminacy (𝐼), and falsity (𝐹) for 𝐴 at the 𝑛-th hierarchical level.

For each 𝐴 ∈ P̃𝑛 (𝑋) and (𝑇, 𝐼, 𝐹) ∈ 𝐴˜ 𝑛 ( 𝐴), the following condition holds:

0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3.

2.5 HyperPlithogenic Set

The Plithogenic Set extends traditional set theories, such as Neutrosophic and Fuzzy Sets, by incorporating
multi-dimensional attributes and contradictions [242, 243]. Below, we present its formal definition.
Definition 2.33 (Plithogenic Set). [242, 243] Let 𝑆 be a universal set, and 𝑃 ⊆ 𝑆. A Plithogenic Set 𝑃𝑆 is
defined as:
𝑃𝑆 = (𝑃, 𝑣, 𝑃𝑣, 𝑝𝑑𝑓 , 𝑝𝐶𝐹),
where:

• 𝑣: an attribute.

• 𝑃𝑣: the set of possible values for the attribute 𝑣.


• 𝑝𝑑𝑓 : 𝑃 × 𝑃𝑣 → [0, 1] 𝑠 : the Degree of Appurtenance Function (DAF), mapping elements and attribute
values to a membership degree.
• 𝑝𝐶𝐹 : 𝑃𝑣 × 𝑃𝑣 → [0, 1] 𝑡 : the Degree of Contradiction Function (DCF), quantifying contradictions
between attribute values.

These functions satisfy the following axioms:

1. Reflexivity of DCF:
𝑝𝐶𝐹 (𝑎, 𝑎) = 0, for all 𝑎 ∈ 𝑃𝑣.

2. Symmetry of DCF:
𝑝𝐶𝐹 (𝑎, 𝑏) = 𝑝𝐶𝐹 (𝑏, 𝑎), for all 𝑎, 𝑏 ∈ 𝑃𝑣.
Example 2.34 (Examples of Plithogenic Sets). [87, 98] The Plithogenic Set has various special cases:

• If 𝑠 = 𝑡 = 1, the set is called a Plithogenic Fuzzy Set.

• If 𝑠 = 2, 𝑡 = 1, it becomes a Plithogenic Intuitionistic Fuzzy Set.


• If 𝑠 = 3, 𝑡 = 1, it is referred to as a Plithogenic Neutrosophic Set.
Definition 2.35 (HyperPlithogenic Set). [90] Let 𝑋 be a non-empty set, and 𝐴 a set of attributes. For each
𝑣 ∈ 𝐴, let 𝑃𝑣 be the range of possible values of 𝑣. A HyperPlithogenic Set 𝐻𝑃𝑆 on 𝑋 is defined as:
𝐻𝑃𝑆 = (𝑃, {𝑣ᵢ}ᵢ₌₁ⁿ, {𝑃𝑣ᵢ}ᵢ₌₁ⁿ, {𝑝𝑑𝑓̃ᵢ}ᵢ₌₁ⁿ, 𝑝𝐶𝐹),

where:

• 𝑃 ⊆ 𝑋: a subset of the universe.

• For each 𝑣 𝑖 ∈ 𝐴, 𝑃𝑣 𝑖 : the set of possible values for 𝑣 𝑖 .


• 𝑝𝑑𝑓̃ᵢ : 𝑃 × 𝑃𝑣ᵢ → 𝑃̃([0, 1]ˢ): the Hyper Degree of Appurtenance Function (HDAF), assigning membership degrees as sets.

• 𝑝𝐶𝐹 : ⋃ᵢ₌₁ⁿ 𝑃𝑣ᵢ × ⋃ᵢ₌₁ⁿ 𝑃𝑣ᵢ → [0, 1]ᵗ: the Degree of Contradiction Function (DCF).

Definition 2.36 (𝑛-SuperHyperPlithogenic Set). [90] Let 𝑋 be a non-empty set, and let 𝑉 = {𝑣 1 , 𝑣 2 , . . . , 𝑣 𝑛 }
be a set of attributes with respective ranges 𝑃𝑣𝑖 . An 𝑛-SuperHyperPlithogenic Set 𝑆𝐻𝑃𝑆 𝑛 is defined recursively
as:
𝑆𝐻𝑃𝑆ₙ = (𝑃ₙ, 𝑉, {𝑃𝑣ᵢ}ᵢ₌₁ⁿ, {𝑝𝑑𝑓̃ᵢ⁽ⁿ⁾}ᵢ₌₁ⁿ, 𝑝𝐶𝐹⁽ⁿ⁾),

where:

• 𝑃₁ ⊆ 𝑋, and for 𝑘 ≥ 2, 𝑃ₖ = P̃(𝑃ₖ₋₁), representing the 𝑘-th nested family of subsets.

• For each 𝑣ᵢ, 𝑃𝑣ᵢ: the set of possible values of 𝑣ᵢ.

• 𝑝𝑑𝑓̃ᵢ⁽ⁿ⁾ : 𝑃ₙ × 𝑃𝑣ᵢ → P̃([0, 1]ˢ): the HDAF at the 𝑛-th level.

• 𝑝𝐶𝐹⁽ⁿ⁾ : ⋃ᵢ₌₁ⁿ 𝑃𝑣ᵢ × ⋃ᵢ₌₁ⁿ 𝑃𝑣ᵢ → [0, 1]ᵗ: the DCF, satisfying:

  1. Reflexivity: 𝑝𝐶𝐹⁽ⁿ⁾(𝑎, 𝑎) = 0,
  2. Symmetry: 𝑝𝐶𝐹⁽ⁿ⁾(𝑎, 𝑏) = 𝑝𝐶𝐹⁽ⁿ⁾(𝑏, 𝑎).

3 Result: Application of HyperNeutrosophic Sets to Various Sciences

In this section, we explore the application of HyperNeutrosophic Sets across various scientific domains,
following the approach outlined in [83]. It is important to note that if HyperNeutrosophic Sets prove applicable
to a specific domain, it is reasonable to assume that Hyperfuzzy Sets and Hyperplithogenic Sets could also be
utilized in similar contexts. Moreover, for SuperHyperNeutrosophic Sets, it is equally logical to investigate
the potential applications of SuperHyperfuzzy Sets and SuperHyperplithogenic Sets within the same or related
fields.

3.1 Neutrosophic Logic

Logic is the systematic study of reasoning, involving principles and rules to distinguish valid arguments, truth,
and consistency [54, 72, 233]. Neutrosophic Logic builds upon classical and fuzzy logic [287] by introducing
three degrees—truth, indeterminacy, and falsity—allowing for nuanced reasoning under uncertainty [18, 30,
45, 97, 102, 212, 239, 244]. The concept of a Neutrosophic Set can be viewed as an application of Neutrosophic
Logic within the framework of set theory. As evident from previous discussions and references such as [83],
it is both natural and necessary to explicitly extend Neutrosophic Logic into HyperNeutrosophic Logic and
𝑛-SuperHyperNeutrosophic Logic. While the discussion here centers on HyperNeutrosophic Sets and 𝑛-
SuperHyperNeutrosophic Sets, similar analyses can also be conducted for Hyperfuzzy Sets, 𝑛-SuperHyperfuzzy
Sets, Hyperplithogenic Sets, and 𝑛-SuperHyperplithogenic Sets.

Definition 3.1 (Neutrosophic Logic). [237] Let 𝑝 be a proposition. In Neutrosophic Logic, the truth value of
𝑝 is given by an ordered triple
𝑣( 𝑝) = (𝑇, 𝐼, 𝐹) ∈ [0, 1] 3 ,
where 𝑇 denotes the degree of truth, 𝐼 denotes the degree of indeterminacy, and 𝐹 denotes the degree of falsity.
These satisfy the following condition:
0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3.
Unlike many-valued logics that fix 𝑇 + 𝐹 ≤ 1, Neutrosophic Logic allows 𝑇, 𝐼, 𝐹 to vary somewhat indepen-
dently, thereby capturing paradoxical and uncertain statements more flexibly.
Example 3.2 (Neutrosophic Example). Consider a proposition 𝑝 with

𝑣( 𝑝) = (0.7, 0.2, 0.4).

Here 𝑇 = 0.7, 𝐼 = 0.2, 𝐹 = 0.4, and 0.7 + 0.2 + 0.4 = 1.3 ≤ 3. Thus 𝑝 can be viewed as mostly true, with
moderate falsity and some degree of indeterminacy.
Definition 3.3 (HyperNeutrosophic Logic). Let 𝑝 be a proposition. In HyperNeutrosophic Logic, the truth
value of 𝑝 is given by a non-empty subset of [0, 1] 3 :

𝑣( 𝑝) ⊆ [0, 1] 3 , 𝑣( 𝑝) ≠ ∅,

where each element (𝑇, 𝐼, 𝐹) ∈ 𝑣( 𝑝) satisfies 0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3.

Example 3.4 (HyperNeutrosophic Example). Suppose we have two expert opinions about 𝑝. One expert assigns
(𝑇, 𝐼, 𝐹) = (0.7, 0.2, 0.4), and another expert assigns (𝑇, 𝐼, 𝐹) = (0.4, 0.1, 0.8). Then the HyperNeutrosophic
valuation can be taken as

𝑣(𝑝) = {(0.7, 0.2, 0.4), (0.4, 0.1, 0.8)}.
This set-based valuation captures multiple sources of uncertain or even conflicting information.
Definition 3.5 (𝑛-SuperHyperNeutrosophic Logic). Let 𝑋 be a non-empty set and define recursively

P̃₁(𝑋) = {𝐴 ⊆ 𝑋 : 𝐴 ≠ ∅}, P̃ₖ(𝑋) = {𝐵 ⊆ P̃ₖ₋₁(𝑋) : 𝐵 ≠ ∅} (𝑘 ≥ 2).

An 𝑛-SuperHyperNeutrosophic valuation 𝑣(𝑝) is defined to be an element of

P̃ₙ([0, 1]³),

i.e., an 𝑛-th level nested non-empty subset of the unit cube [0, 1]³. At every level, each (𝑇, 𝐼, 𝐹) ∈ [0, 1]³ must satisfy 0 ≤ 𝑇 + 𝐼 + 𝐹 ≤ 3.
Example 3.6 (𝑛 = 2 SuperHyperNeutrosophic Example). An example of a 2-SuperHyperNeutrosophic valua-
tion 𝑣( 𝑝) could be
𝑣(𝑝) = { {(0.7, 0.2, 0.4), (0.4, 0.1, 0.8)}, {(0.6, 0.3, 0.2)} }.

Here each inner set, such as {(0.7, 0.2, 0.4), (0.4, 0.1, 0.8)}, is itself a valid HyperNeutrosophic subset of
[0, 1] 3 . We then collect those subsets into a larger non-empty set, forming a second-level structure.
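
The nesting discipline can also be checked mechanically. Below is a minimal recursive validator in Python (an illustration under the definitions above, applied to the valuation of Example 3.6):

v_p = frozenset({
    frozenset({(0.7, 0.2, 0.4), (0.4, 0.1, 0.8)}),
    frozenset({(0.6, 0.3, 0.2)}),
})

def valid_valuation(obj, level):
    # level 0: a single (T, I, F) triple; level k: a non-empty set of level k-1 objects.
    if level == 0:
        T, I, F = obj
        return all(0.0 <= c <= 1.0 for c in (T, I, F)) and 0.0 <= T + I + F <= 3.0
    return bool(obj) and all(valid_valuation(inner, level - 1) for inner in obj)

print(valid_valuation(v_p, 2))  # True: a valid 2-SuperHyperNeutrosophic valuation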
Theorem 3.7. It holds as follows.

(1) When 𝑛 = 1, an 𝑛-SuperHyperNeutrosophic valuation is exactly a HyperNeutrosophic valuation.


(2) If we restrict a HyperNeutrosophic valuation to be a singleton {(𝑇, 𝐼, 𝐹)}, it recovers Neutrosophic Logic.
Hence 𝑛-SuperHyperNeutrosophic Logic generalizes both HyperNeutrosophic and Neutrosophic Logics.

Proof. (1) By definition, for 𝑛 = 1 we have

P̃₁([0, 1]³) = {𝐴 ⊆ [0, 1]³ : 𝐴 ≠ ∅}.

Thus a 1-SuperHyperNeutrosophic valuation 𝑣(𝑝) is simply a non-empty subset of [0, 1]³, which is precisely the definition of a HyperNeutrosophic valuation.

(2) In Neutrosophic Logic, 𝑣( 𝑝) is a single triple (𝑇, 𝐼, 𝐹) ∈ [0, 1] 3 . If we embed it into HyperNeutrosophic
Logic by forming the singleton {(𝑇, 𝐼, 𝐹)}, this is clearly a non-empty subset of [0, 1] 3 , thus satisfying the
HyperNeutrosophic requirements. Therefore, the singleton case of HyperNeutrosophic valuations coincides
with Neutrosophic valuations.

Combining these, we see that 𝑛-SuperHyperNeutrosophic Logic (for 𝑛 = 1) equals HyperNeutrosophic Logic,
while Neutrosophic Logic is recovered as the singleton case within HyperNeutrosophic sets. For 𝑛 > 1, the
framework further generalizes these logics by allowing nested families of HyperNeutrosophic sets. □

3.2 HyperNeutrosophic Graph Neural Network

A neural network is a computational model inspired by biological neural systems, designed for tasks such
as pattern recognition, data classification, and prediction [8, 13, 23, 159, 273, 281, 282]. Building upon this
foundation, a Graph Neural Network (GNN) extends neural networks to graph structures, enabling the modeling
of relationships between nodes, edges, and their associated features [58, 142, 187, 211, 227, 231, 271, 277, 294,
298]. Readers may refer to the lecture notes or the introduction for further details (cf. [2, 58, 74, 142, 187, 211,
227,231,283,294]). Building on this concept, Hypergraph Neural Networks (HGNNs) extend traditional Graph
Neural Networks (GNNs) by utilizing hyperedges to model higher-order relationships involving multiple nodes
simultaneously [35,79,122,128,141,266,275]. Additionally, related concepts, such as the n-SuperHypergraph
Neural Network, have also been proposed [86].

Considering these aspects, this paper examines Hyperneutrosophic Graph Neural Networks and Superhyper-
neutrosophic Graph Neural Networks (cf. [86]). First, several graph concepts addressing various types of
uncertainty are briefly introduced below.

Definition 3.8 (Unified Framework for Uncertain Graphs). (cf. [88]) Let 𝐺 = (𝑉, 𝐸) be a classical graph,
where 𝑉 is the set of vertices and 𝐸 is the set of edges. Depending on the type of graph, each vertex 𝑣 ∈ 𝑉 and
edge 𝑒 ∈ 𝐸 is associated with membership values to represent various degrees of truth, indeterminacy, falsity,
and other measures of uncertainty.

1. Fuzzy Graph (cf. [26, 103, 107, 186, 196, 217, 277])
• Each vertex 𝑣 ∈ 𝑉 is assigned a membership degree 𝜎(𝑣) ∈ [0, 1].
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is assigned a membership degree 𝜇(𝑢, 𝑣) ∈ [0, 1].
2. Intuitionistic Fuzzy Graph (IFG) (cf. [7, 138, 269, 296])
• Each vertex 𝑣 ∈ 𝑉 has two values: 𝜇 𝐴 (𝑣) ∈ [0, 1] (degree of membership) and 𝜈 𝐴 (𝑣) ∈ [0, 1]
(degree of non-membership), satisfying 𝜇 𝐴 (𝑣) + 𝜈 𝐴 (𝑣) ≤ 1.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 has two values: 𝜇 𝐵 (𝑢, 𝑣) ∈ [0, 1] and 𝜈 𝐵 (𝑢, 𝑣) ∈ [0, 1], with 𝜇 𝐵 (𝑢, 𝑣) +
𝜈 𝐵 (𝑢, 𝑣) ≤ 1.
3. Neutrosophic Graph (cf. [32, 33, 113, 129, 150, 246, 258])
• Each vertex 𝑣 ∈ 𝑉 is associated with a triplet 𝜎(𝑣) = (𝜎𝑇(𝑣), 𝜎𝐼(𝑣), 𝜎𝐹(𝑣)), where 𝜎𝑇(𝑣), 𝜎𝐼(𝑣), 𝜎𝐹(𝑣) ∈ [0, 1] and 𝜎𝑇(𝑣) + 𝜎𝐼(𝑣) + 𝜎𝐹(𝑣) ≤ 3.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is associated with a triplet 𝜇(𝑒) = (𝜇𝑇(𝑒), 𝜇𝐼(𝑒), 𝜇𝐹(𝑒)).
4. Quadripartitioned Neutrosophic Graph (QNG) (cf. [131–133, 225, 232])
• Each vertex 𝑣 ∈ 𝑉 is associated with a quadripartitioned neutrosophic membership 𝜎(𝑣) = (𝜎₁(𝑣), 𝜎₂(𝑣), 𝜎₃(𝑣), 𝜎₄(𝑣)), where 𝜎₁(𝑣), 𝜎₂(𝑣), 𝜎₃(𝑣), 𝜎₄(𝑣) ∈ [0, 1] and 𝜎₁(𝑣) + 𝜎₂(𝑣) + 𝜎₃(𝑣) + 𝜎₄(𝑣) ≤ 4.
• Each edge 𝑒 = (𝑢, 𝑣) ∈ 𝐸 is associated with a quadripartitioned membership 𝜎(𝑒) = (𝜎₁(𝑒), 𝜎₂(𝑒), 𝜎₃(𝑒), 𝜎₄(𝑒)), satisfying:
  𝜎₁(𝑒) ≤ min{𝜎₁(𝑢), 𝜎₁(𝑣)},
  𝜎₂(𝑒) ≤ min{𝜎₂(𝑢), 𝜎₂(𝑣)},
  𝜎₃(𝑒) ≤ max{𝜎₃(𝑢), 𝜎₃(𝑣)},
  𝜎₄(𝑒) ≤ max{𝜎₄(𝑢), 𝜎₄(𝑣)}.

Example 3.9 (Fuzzy Graph). Let 𝐺 = (𝑉, 𝐸), where 𝑉 = {𝑣 1 , 𝑣 2 , 𝑣 3 } and 𝐸 = {(𝑣 1 , 𝑣 2 ), (𝑣 2 , 𝑣 3 )}. Each vertex
𝑣 ∈ 𝑉 is assigned a membership degree:

𝜎(𝑣 1 ) = 0.8, 𝜎(𝑣 2 ) = 0.5, 𝜎(𝑣 3 ) = 0.7.

Each edge 𝑒 ∈ 𝐸 is assigned a membership degree:

𝜇(𝑣 1 , 𝑣 2 ) = 0.6, 𝜇(𝑣 2 , 𝑣 3 ) = 0.9.

This defines a Fuzzy Graph where vertices and edges have varying degrees of membership.

Example 3.10 (Neutrosophic Graph). Let 𝐺 = (𝑉, 𝐸), where 𝑉 = {𝑣 1 , 𝑣 2 , 𝑣 3 } and 𝐸 = {(𝑣 1 , 𝑣 2 ), (𝑣 2 , 𝑣 3 )}.
Each vertex 𝑣 ∈ 𝑉 is associated with a triplet 𝜎(𝑣) = (𝜎𝑇 (𝑣), 𝜎𝐼 (𝑣), 𝜎𝐹 (𝑣)):
𝜎(𝑣 1 ) = (0.7, 0.2, 0.1), 𝜎(𝑣 2 ) = (0.6, 0.3, 0.1), 𝜎(𝑣 3 ) = (0.8, 0.1, 0.1).
Each edge 𝑒 ∈ 𝐸 is associated with a triplet 𝜇(𝑒) = (𝜇𝑇 (𝑒), 𝜇 𝐼 (𝑒), 𝜇 𝐹 (𝑒)):
𝜇(𝑣 1 , 𝑣 2 ) = (0.5, 0.3, 0.2), 𝜇(𝑣 2 , 𝑣 3 ) = (0.6, 0.2, 0.2).
This defines a Neutrosophic Graph with truth, indeterminacy, and falsity values for vertices and edges.
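
Both examples are easy to represent directly. A minimal Python sketch of the Neutrosophic Graph of Example 3.10 (plain dictionaries; the constraint check mirrors Definition 3.8):

sigma = {  # vertex triplets (truth, indeterminacy, falsity)
    "v1": (0.7, 0.2, 0.1),
    "v2": (0.6, 0.3, 0.1),
    "v3": (0.8, 0.1, 0.1),
}
mu = {     # edge triplets
    ("v1", "v2"): (0.5, 0.3, 0.2),
    ("v2", "v3"): (0.6, 0.2, 0.2),
}

def check(triples):
    # Each triplet must lie in [0, 1]^3 with T + I + F <= 3.
    return all(all(0 <= c <= 1 for c in t) and sum(t) <= 3 for t in triples)

print(check(sigma.values()) and check(mu.values()))  # True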

The Neutrosophic Graph Neural Network, along with its extensions, the HyperNeutrosophic Graph Neural
Network and the SuperHyperNeutrosophic Graph Neural Network, are introduced below. While the discussion
here centers on HyperNeutrosophic Sets and 𝑛-SuperHyperNeutrosophic Sets, similar analyses can also be
conducted for Hyperfuzzy Sets, 𝑛-SuperHyperfuzzy Sets, Hyperplithogenic Sets, and 𝑛-SuperHyperplithogenic
Sets.
Definition 3.11. In general, feature spaces represent the domains of attributes for vertices and edges, denoted
by 𝑋𝑉 and 𝑋𝐸 , respectively [21,56,151]. Aggregation rules are operations that combine features or information
from multiple elements, such as vertices or edges, into a unified representation, denoted as R 𝑁 (cf. [43, 167,
188]). Learnable parameters, denoted as Θ, are adjustable variables (e.g., weights in neural networks) optimized
during training to improve model performance (cf. [299]).
Definition 3.12 (Neutrosophic Graph Neural Network (N-GNN) [88]). A Neutrosophic Graph Neural Network
(N-GNN) is a GNN that leverages neutrosophic logic to handle uncertain, indeterminate, and inconsistent data
in graph-structured settings. Formally, an N-GNN is an 8-tuple:

N-GNN = (𝐺, 𝑋𝑉, 𝑋𝐸, N𝑉, N𝐸, R𝑁, D𝑁, Θ),

where:

1. 𝐺 = (𝑉, 𝐸) is a graph with vertex set 𝑉 and edge set 𝐸.


2. 𝑋𝑉 and 𝑋𝐸 are the feature spaces for vertices and edges, respectively.
3. N𝑉 : 𝑋𝑉 → [0, 1] 3 and N𝐸 : 𝑋𝐸 → [0, 1] 3 are neutrosophic fuzzification functions, mapping features
to triples (𝑇, 𝐼, 𝐹) satisfying 𝑇 + 𝐼 + 𝐹 ≤ 3.
4. R 𝑁 is a set of neutrosophic aggregation rules specifying how neutrosophic information is combined
among vertices and edges.
5. D 𝑁 is a neutrosophic defuzzification function that transforms aggregated neutrosophic values into a crisp
or probabilistic output (e.g., a real number or a probability vector [174]).
6. Θ is the set of learnable parameters (e.g., weights in neural layers or rule parameters).

N-GNN Layer. Given a vertex feature 𝑥𝑣 ∈ 𝑋𝑉 for 𝑣 ∈ 𝑉 and an edge feature 𝑥𝑢𝑣 ∈ 𝑋𝐸 for an edge (𝑢, 𝑣) ∈ 𝐸, the neutrosophic fuzzification layer outputs:

N𝑉(𝑥𝑣) = (𝜇𝑇^𝑣, 𝜇𝐼^𝑣, 𝜇𝐹^𝑣), N𝐸(𝑥𝑢𝑣) = (𝜇𝑇^𝑢𝑣, 𝜇𝐼^𝑢𝑣, 𝜇𝐹^𝑢𝑣),

where each triple fulfills 𝜇𝑇 + 𝜇𝐼 + 𝜇𝐹 ≤ 3.

Neutrosophic Aggregation. Let AGG𝑁(·) be a neutrosophic aggregation operator guided by the rule set R𝑁. For a vertex 𝑣, a typical update rule might be:

ℎ𝑣^(𝑙+1) = 𝜎( AGG𝑁({ (ℎ𝑢^(𝑙), N𝐸(𝑥𝑢𝑣)) | 𝑢 ∈ N(𝑣) }) ),

where ℎ𝑣^(𝑙) denotes the hidden representation of vertex 𝑣 at layer 𝑙, N(𝑣) is the neighborhood of 𝑣, and 𝜎 is a
non-linear activation function [68,230] (e.g., ReLU). After several layers, the defuzzification step D 𝑁 produces
a final crisp or probabilistic output.

Example 3.13 (A Simple N-GNN on a Triangular Graph). Scenario: Suppose we have a small graph 𝐺 = (𝑉, 𝐸)
with three vertices 𝑉 = {𝐴, 𝐵, 𝐶} and three edges 𝐸 = {( 𝐴, 𝐵), (𝐵, 𝐶), (𝐶, 𝐴)} (cf. [263]). Each vertex and
edge has certain uncertain features that we wish to model using neutrosophic logic.

1. Vertex Features:
Let us assume each vertex 𝑣 has a single feature 𝑥 𝑣 (e.g., an uncertain sensor reading [77]). We define:

𝑥 𝐴 = 0.7, 𝑥 𝐵 = 0.5, 𝑥𝐶 = 0.9.

Since these sensor readings contain some noise or uncertainty, we convert them into neutrosophic triples
(𝑇, 𝐼, 𝐹) as follows:

N𝑉 (𝑥 𝐴) = (0.6, 0.3, 0.1), N𝑉 (𝑥 𝐵 ) = (0.4, 0.2, 0.4), N𝑉 (𝑥𝐶 ) = (0.8, 0.1, 0.1).

Each triple must satisfy 𝑇 + 𝐼 + 𝐹 ≤ 3; here they all sum to 1.0 ≤ 3.


2. Edge Features:
Each edge (𝑢, 𝑣) ∈ 𝐸 also has a feature 𝑥 𝑢𝑣 (e.g., an uncertain measure of connection strength). For
simplicity:
𝑥 𝐴𝐵 = 0.2, 𝑥 𝐵𝐶 = 0.6, 𝑥𝐶 𝐴 = 0.4.
Using the neutrosophic fuzzification N𝐸 , suppose:

N𝐸 (𝑥 𝐴𝐵 ) = (0.3, 0.4, 0.3), N𝐸 (𝑥 𝐵𝐶 ) = (0.5, 0.3, 0.2), N𝐸 (𝑥𝐶 𝐴) = (0.4, 0.2, 0.4).

3. Initial Hidden States:


Assign an initial hidden representation ℎ𝑣^(0) ∈ ℝᵈ to each vertex (e.g., 𝑑 = 2 dimensions). For instance:

ℎ𝐴^(0) = (1.0, 0.0), ℎ𝐵^(0) = (0.5, 0.5), ℎ𝐶^(0) = (0.0, 1.0).

4. Neutrosophic Aggregation:
We define a neutrosophic aggregation rule AGG 𝑁 that combines:

(𝜇𝑇^𝑣, 𝜇𝐼^𝑣, 𝜇𝐹^𝑣) for 𝑣 ∈ 𝑉 and (𝜇𝑇^𝑢𝑣, 𝜇𝐼^𝑢𝑣, 𝜇𝐹^𝑢𝑣) for (𝑢, 𝑣) ∈ 𝐸

using some operator, e.g., neutrosophic min or a product-based approach adapted for (𝑇, 𝐼, 𝐹).
A simplified update for vertex 𝐴 at layer 1 might look like:

ℎ𝐴^(1) = 𝜎( 𝑊 · AGG𝑁({ ℎ𝐵^(0), N𝐸(𝑥𝐴𝐵), ℎ𝐶^(0), N𝐸(𝑥𝐶𝐴) }) ),

where 𝑊 is a weight matrix, and 𝜎 is an activation function (like ReLU).

5. Final Defuzzification:
After 2 or 3 message-passing layers, each ℎ𝑣^(final) might be defuzzified via D𝑁 to produce a class label (e.g., in a classification setting) or a real-valued score (in a regression setting). For example, one could aggregate the final (𝑇, 𝐼, 𝐹) into a single confidence measure by 𝜇𝑇 − 𝜇𝐹 or other transformations, and then map ℎ𝑣^(final) to a label.

6. Interpretation:
A higher 𝑇 in (𝑇, 𝐼, 𝐹) suggests the data is more likely to be “true” or valid. A higher 𝐼 indicates
indeterminacy or lack of clarity. A higher 𝐹 signals contradictions or false components.
By tracking these three degrees, the N-GNN can learn to handle nodes or edges with uncertain or
conflicting information more effectively than a standard GNN.

This small triangular graph example shows how even a beginner can view Neutrosophic Graph Neural Networks
in action. Each vertex and edge has a neutrosophic triple representing its uncertain state, and the GNN aggregates
these values through specialized neutrosophic rules. The final output offers a robust way to manage uncertainty,
indeterminacy, and contradiction in the data.
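
For readers who prefer code, here is a minimal Python sketch of one message-passing layer on this triangular graph. It is only an illustration of the scheme above: the confidence score 𝑇 − 𝐹 used to weight messages, the mean aggregation, and the random weight matrix are all assumptions, not prescribed operators.

import numpy as np

vertices = ["A", "B", "C"]
edges = {("A", "B"), ("B", "C"), ("C", "A")}

# Neutrosophic fuzzifications from Example 3.13.
NV = {"A": (0.6, 0.3, 0.1), "B": (0.4, 0.2, 0.4), "C": (0.8, 0.1, 0.1)}
NE = {("A", "B"): (0.3, 0.4, 0.3), ("B", "C"): (0.5, 0.3, 0.2), ("C", "A"): (0.4, 0.2, 0.4)}

h = {"A": np.array([1.0, 0.0]), "B": np.array([0.5, 0.5]), "C": np.array([0.0, 1.0])}
W = np.random.default_rng(0).normal(size=(2, 2))  # learnable in practice; random here

def edge_triple(u, v):
    return NE.get((u, v)) or NE[(v, u)]  # undirected lookup

def layer(h):
    new_h = {}
    for v in vertices:
        neigh = [u for u in vertices if (u, v) in edges or (v, u) in edges]
        # Weight each neighbor's embedding by the edge's confidence T - F.
        msgs = [(edge_triple(u, v)[0] - edge_triple(u, v)[2]) * h[u] for u in neigh]
        agg = np.mean(msgs, axis=0)          # AGG_N: a simple mean
        new_h[v] = np.maximum(W @ agg, 0.0)  # sigma: ReLU
    return new_h

print({v: np.round(e, 3) for v, e in layer(h).items()})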

Definition 3.14 (HyperNeutrosophic Graph Neural Network (HN-GNN)). A HyperNeutrosophic Graph Neural
Network (HN-GNN) generalizes the N-GNN by allowing each vertex or edge to have a set of neutrosophic triples,
rather than a single triple. Formally, an HN-GNN is a 9-tuple:

HN-GNN = (𝐺, 𝑋𝑉, 𝑋𝐸, HN𝑉, HN𝐸, R𝐻𝑁, D𝐻𝑁, AGG𝑠𝑒𝑡, Θ),

where:

1. 𝐺 = (𝑉, 𝐸) is a graph.
2. 𝑋𝑉 , 𝑋𝐸 are vertex and edge feature spaces.

3. HN𝑉 : 𝑋𝑉 → P([0, 1]³) and HN𝐸 : 𝑋𝐸 → P([0, 1]³) are hyperneutrosophic fuzzification functions, mapping each vertex (or edge) to a non-empty subset of [0, 1]³. Each element (𝑇ₖ, 𝐼ₖ, 𝐹ₖ) ∈ HN𝑉(𝑥𝑣) (or HN𝐸(𝑥𝑢𝑣)) satisfies 𝑇ₖ + 𝐼ₖ + 𝐹ₖ ≤ 3.
4. AGG𝑠𝑒𝑡 is a set-level aggregation operator that collapses or summarizes each hyperneutrosophic set into
either (a) a single representative triple, or (b) a small set of representative triples used in the subsequent
GNN computation.
5. R 𝐻 𝑁 is the hyperneutrosophic rule set for combining hyperneutrosophic information from neighboring
vertices and edges.
6. D 𝐻 𝑁 is the hyperneutrosophic defuzzification function, producing a final crisp output from the hyper-
neutrosophic representations.
7. Θ is the set of learnable parameters in the model.

HN-GNN Layer. At each layer 𝑙, for a vertex 𝑣:

HN𝑉(𝑥𝑣) = { (𝜇𝑇^𝑣(𝑖), 𝜇𝐼^𝑣(𝑖), 𝜇𝐹^𝑣(𝑖)) | 𝑖 ∈ I𝑣 } ⊆ [0, 1]³,

where I𝑣 indexes multiple neutrosophic evaluations. An edge (𝑢, 𝑣) has:

HN𝐸(𝑥𝑢𝑣) = { (𝜇𝑇^𝑢𝑣(𝑗), 𝜇𝐼^𝑢𝑣(𝑗), 𝜇𝐹^𝑢𝑣(𝑗)) | 𝑗 ∈ J𝑢𝑣 }.

We first aggregate each hyperneutrosophic set into a suitable representation (e.g., average or maximum triple), or keep multiple triples for a richer representation. The node update then proceeds similarly to an N-GNN, but with set-based inputs instead of single triples:

ℎ𝑣^(𝑙+1) = 𝜎( AGG𝐻𝑁({ (ℎ𝑢^(𝑙), HN𝐸(𝑥𝑢𝑣)) | 𝑢 ∈ N(𝑣) }) ).
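
One concrete choice for the set-level aggregation is componentwise averaging, sketched below in Python (averaging is an illustrative option; min, max, or medoid triples would also fit the definition). Because the average is a convex combination, the constraint 𝑇 + 𝐼 + 𝐹 ≤ 3 is preserved automatically.

def agg_set(triples):
    # Collapse a non-empty hyperneutrosophic set of (T, I, F) triples
    # into a single representative triple by componentwise averaging.
    n = len(triples)
    return tuple(sum(t[k] for t in triples) / n for k in range(3))

print(agg_set({(0.9, 0.1, 0.0), (0.8, 0.2, 0.0)}))  # approximately (0.85, 0.15, 0.0)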

Definition 3.15 (𝑛-SuperHyperNeutrosophic Graph Neural Network (𝑛-SHN-GNN)). An 𝑛-SuperHyperNeutrosophic


Graph Neural Network (𝑛-SHN-GNN) is a further generalization of the HN-GNN, in which each vertex or edge
is endowed with an 𝑛-SuperHyperNeutrosophic Set instead of a HyperNeutrosophic Set. Formally, an 𝑛-SHN-
GNN is a 9-tuple:

n-SHN-GNN = (𝐺, 𝑋𝑉, 𝑋𝐸, SHN𝑉^(𝑛), SHN𝐸^(𝑛), R𝑆𝐻𝑁^(𝑛), D𝑆𝐻𝑁^(𝑛), AGG𝑛, Θ),

where:

1. 𝐺 = (𝑉, 𝐸) is a graph.

2. 𝑋𝑉 , 𝑋𝐸 are vertex and edge feature spaces.

3. SHN𝑉^(𝑛) : 𝑋𝑉 → P̃ₙ([0, 1]³) and SHN𝐸^(𝑛) : 𝑋𝐸 → P̃ₙ([0, 1]³) map each vertex (or edge) to an
𝑛-SuperHyperNeutrosophic Set of neutrosophic triples. Concretely, each vertex (or edge) is associated
with an 𝑛-th nested family of subsets of [0, 1] 3 . Each triple (𝑇, 𝐼, 𝐹) must satisfy 𝑇 + 𝐼 + 𝐹 ≤ 3.

4. AGG𝑛 is an aggregation operator that collapses each 𝑛-SuperHyperNeutrosophic Set into a small number of representative neutrosophic triples (e.g., using hierarchical combination rules).

5. R𝑆𝐻𝑁^(𝑛) is the rule set for combining 𝑛-SuperHyperNeutrosophic information among neighbors.

6. D𝑆𝐻𝑁^(𝑛) is the defuzzification step, producing the final crisp or fuzzy outputs from the aggregated hierarchical sets.

7. Θ is the set of trainable parameters.

Hierarchical Set Representation. Let P̃ₙ([0, 1]³) denote the 𝑛-th nested power set of the neutrosophic cube. For a vertex 𝑣:

SHN𝑉^(𝑛)(𝑥𝑣) ∈ P̃ₙ([0, 1]³),

which might be represented recursively:

SHN𝑉^(1)(𝑥𝑣) ∈ P̃([0, 1]³), SHN𝑉^(2)(𝑥𝑣) ∈ P̃(P̃([0, 1]³)), . . .

At each level, sets of sets of neutrosophic triples are nested, capturing multi-level uncertainties or multi-source conflicting information.

Layer Update in an 𝑛-SHN-GNN. At layer 𝑙, suppose each vertex 𝑣 has hidden representation ℎ𝑣^(𝑙). To update ℎ𝑣^(𝑙+1), do:

ℎ𝑣^(𝑙+1) = 𝜎( AGG𝑛({ (SHN𝐸^(𝑛)(𝑥𝑢𝑣), ℎ𝑢^(𝑙)) | 𝑢 ∈ N(𝑣) }) ).

Here, AGG𝑛 must systematically process the nested hierarchical sets from each edge (𝑢, 𝑣) or from the vertex features SHN𝑉^(𝑛)(𝑥𝑣). After a user-defined number of layers, D𝑆𝐻𝑁^(𝑛) is applied to produce the final output (e.g., classification scores or regression values).

Key Properties of an 𝑛-SHN-GNN:

• Deep Hierarchical Uncertainty: The 𝑛-th nested sets encode multiple layers of contradictory, uncertain,
or aggregated data sources.

• Flexible Aggregation: Each level requires a well-defined rule to merge or reduce the hierarchical sets
into workable forms for neural computations.
• Generalization of All Previous Cases: Setting 𝑛 = 0 or 𝑛 = 1 reduces to the neutrosophic or hyperneutrosophic graph neural network, respectively (see Theorem 3.17), thus unifying these frameworks under one hierarchy.
Remark 3.16. The above definitions of N-GNN, HN-GNN, and 𝑛-SHN-GNN assume typical forward-pass,
layer-by-layer neural network operations. Training is done by gradient-based optimization (e.g., backpropa-
gation [120, 170, 216, 278]) on a loss function that measures predictive performance. The novel aspect is the
representation of edges and vertices with (hyper)neutrosophic or 𝑛-superhyperneutrosophic sets of membership
values, enabling richer modeling of uncertainty and ambiguity in graph-structured data.
Theorem 3.17 (Generalization Property). An 𝑛-SuperHyperNeutrosophic Graph Neural Network (𝑛-SHN-
GNN) strictly generalizes both the HyperNeutrosophic Graph Neural Network (HN-GNN) and the Neutrosophic
Graph Neural Network (N-GNN). Specifically:

• If 𝑛 = 1, the 𝑛-SHN-GNN reduces to the HN-GNN.

• If 𝑛 = 0, the 𝑛-SHN-GNN reduces to the N-GNN.

Proof. Case 𝑛 = 0: By definition, an 𝑛-SuperHyperNeutrosophic Set becomes a single neutrosophic triple
(𝑇, 𝐼, 𝐹) ∈ [0, 1] 3 if 𝑛 = 0. Consequently, every vertex or edge in the (0)-SHN-GNN is associated with a
single neutrosophic triple, which matches exactly the data representation in a standard Neutrosophic Graph
Neural Network (N-GNN). Hence, a (0)-SHN-GNN is identical to an N-GNN in all respects (membership
representation, aggregator design, defuzzification steps, etc.).

Case 𝑛 = 1: If 𝑛 = 1, the membership for each vertex or edge is a nonempty subset of [0, 1] 3 , i.e. a
HyperNeutrosophic Set, rather than an 𝑛-th nested structure. Thus, the architecture becomes exactly that of a
HyperNeutrosophic Graph Neural Network (HN-GNN), where each vertex/edge can hold multiple neutrosophic
triples simultaneously but not nested sets-of-sets. Hence, a (1)-SHN-GNN is isomorphic to an HN-GNN.

Case 𝑛 > 1: In this situation, each vertex or edge is assigned an 𝑛-fold nested hyperstructure of neutrosophic
triples, providing a strictly richer representation than either HN-GNN (𝑛 = 1) or N-GNN (𝑛 = 0). Therefore,
𝑛-SHN-GNN (𝑛 > 1) strictly generalizes both HN-GNN and N-GNN, as it can simulate them by appropriate
“flattening” of membership sets or by choosing 𝑛 = 0, 1. □

Notation 3.18. For brevity, let

n-SHN-GNN = (𝐺, 𝑋𝑉, 𝑋𝐸, SHN𝑉^(𝑛), SHN𝐸^(𝑛), R𝑆𝐻𝑁^(𝑛), D𝑆𝐻𝑁^(𝑛), AGG𝑛, Θ)

be our canonical reference model.


Theorem 3.19 (Well-Definedness of Layer Updates). Let AGG𝑛 be an aggregation operator that maps from

( P̃ₙ([0, 1]³) )^(|N(𝑣)|+1) → P̃ₙ([0, 1]³) or [0, 1]ᵈ,

for some 𝑑 ∈ ℕ. Suppose AGG𝑛 is closed under the domain of membership sets and preserves the condition 𝑇 + 𝐼 + 𝐹 ≤ 3. Then each layer update in an 𝑛-SHN-GNN is well-defined:

ℎ𝑣^(𝑙+1) = 𝜎( AGG𝑛({ (SHN𝐸^(𝑛)(𝑥𝑢𝑣), ℎ𝑢^(𝑙)) | 𝑢 ∈ N(𝑣) }) ),

and yields a valid ℎ𝑣^(𝑙+1) in the intended codomain (e.g., [0, 1]ᵈ).

Proof. By assumption, AGG𝑛 takes as input a finite set of objects that are each either:

• Elements of P̃𝑛 ( [0, 1] 3 ), i.e. 𝑛-SuperHyperNeutrosophic sets.

• Real vector embeddings ℎ𝑢(𝑙) from the preceding layer (if the aggregator merges representation vectors
directly).

Since AGG𝑛 is assumed to be closed under the domain of membership sets, it produces an output that remains
in P̃𝑛 ( [0, 1] 3 ) (or in [0, 1] 𝑑 ). Furthermore, each triple (𝑇, 𝐼, 𝐹) within the aggregator’s output is guaranteed to
satisfy 𝑇 + 𝐼 + 𝐹 ≤ 3. Thus, the output is a well-defined 𝑛-superhyperneutrosophic representation or a standard
vector embedding, suitable for subsequent neural network operations or final defuzzification. The activation
function 𝜎 (e.g. ReLU) preserves the property of valid real vector outputs or set membership constraints,
concluding the well-definedness of each layer update. □
Theorem 3.20 (Continuity of the Forward Pass). Assume each aggregator AGG𝑛 and activation function 𝜎 in
the 𝑛-SHN-GNN is continuous. Then, as a function of the input features {𝑥 𝑣 } 𝑣 ∈𝑉 and {𝑥𝑢𝑣 } (𝑢,𝑣) ∈𝐸 , the final
output of the 𝑛-SHN-GNN is continuous.

Proof. Let 𝐿 denote the number of layers, and write {ℎ𝑣^(𝑙)}𝑣∈𝑉 for 𝑙 = 0, 1, . . . , 𝐿.

At 𝑙 = 0, we have ℎ𝑣^(0) = Enc(SHN𝑉^(𝑛)(𝑥𝑣)) or a direct embedding of the vertex features, which is continuous by assumption on the encoding function Enc. The aggregator AGG𝑛 is continuous in its arguments, and 𝜎 is also continuous. Therefore, each update

ℎ𝑣^(𝑙+1) = 𝜎( AGG𝑛({ (ℎ𝑢^(𝑙), SHN𝐸^(𝑛)(𝑥𝑢𝑣)) | 𝑢 ∈ N(𝑣) }) )

is a composition of continuous mappings in terms of {ℎ𝑢^(𝑙)} and the input sets {SHN𝐸^(𝑛)(𝑥𝑢𝑣)}. By induction on the layer index 𝑙, continuity is preserved at each layer, culminating in a continuous final output {ℎ𝑣^(𝐿)}𝑣∈𝑉. Hence the entire forward pass from the input feature sets {𝑥𝑣, 𝑥𝑢𝑣} to the final output {ℎ𝑣^(𝐿)} is continuous. □
Theorem 3.21 (Reduction Homomorphism for Layer Mapping). Let 𝜌 𝑛→𝑚 be a map P̃𝑛 ( [0, 1] 3 ) → P̃𝑚 ( [0, 1] 3 )
with 𝑚 < 𝑛, defined by recursively selecting or aggregating subsets in the nested structure. Suppose each layer
aggregator AGG𝑛 commutes with 𝜌 𝑛→𝑚 . Then the 𝑛-SHN-GNN naturally reduces to an 𝑚-SHN-GNN.

Proof. Define the reduction map 𝜌 𝑛→𝑚 such that for each 𝐴 ∈ P̃𝑛 ( [0, 1] 3 ), we find a corresponding 𝐵 ∈
P̃𝑚 ( [0, 1] 3 ). Concretely, 𝜌 𝑛→𝑚 flattens the nested subsets from level 𝑛 down to level 𝑚 by either discarding
certain nesting levels or merging them. If each layer aggregator AGG𝑛 satisfies

𝜌ₙ→ₘ( AGG𝑛({𝐴ᵢ}ᵢ∈𝐼) ) = AGG𝑚({𝜌ₙ→ₘ(𝐴ᵢ)}ᵢ∈𝐼),

then we have commutativity of aggregator and flattening. Hence, after applying 𝜌 𝑛→𝑚 to each vertex/edge
membership set at every layer, the system evolves exactly as if it were an 𝑚-SHN-GNN. Consequently, the
entire forward pass of the 𝑛-SHN-GNN, under 𝜌 𝑛→𝑚 , produces the same outputs as the 𝑚-SHN-GNN using
the aggregator AGG𝑚 . This proves that 𝑛-SHN-GNN reduces to an 𝑚-SHN-GNN under the existence of such
a homomorphism 𝜌 𝑛→𝑚 . □
Definition 3.22 (Fixed Point). (cf. [111]) A fixed point of a function 𝑓 : 𝑋 → 𝑋 is an element 𝑥 ∗ ∈ 𝑋 such that
𝑓 (𝑥 ∗ ) = 𝑥 ∗ . Fixed points represent states or solutions where the application of the function leaves the element
unchanged.
Theorem 3.23 (Existence and Uniqueness of a Fixed Point under Contractive Aggregation). Assume each
vertex update in an 𝑛-SHN-GNN is given by a contraction mapping in the space of real embeddings (or suitably
metricized set space). Formally, suppose there exists 𝜆 ∈ (0, 1) such that for all pairs of states H, H′ ∈ X^|𝑉|,

𝑑( AGG𝑛(H), AGG𝑛(H′) ) ≤ 𝜆 · 𝑑(H, H′),

where 𝑑 is a metric on the space of states. Then there exists a unique fixed point H∗ such that

H∗ = AGG𝑛(H∗).


Proof. This theorem is a direct application of the Banach Fixed Point Theorem [110, 126, 192] (or Contraction
Mapping Principle [31]). The aggregator AGG𝑛 is interpreted as a function on the entire set of node states
H ∈ X |𝑉 | . By assumption, it is a 𝜆-contraction with 𝜆 < 1. Therefore, there exists a unique fixed point H∗
satisfying H∗ = AGG𝑛 (H∗ ). Existence follows from standard contraction mapping arguments, and uniqueness
arises because any other fixed point would produce a contradiction to the strict contraction property. □
Definition 3.24 (Universal Approximation). (cf. [118, 202]) The Universal Approximation Theorem states
that a sufficiently large neural network with appropriate activation functions can approximate any continuous
function to arbitrary accuracy on a compact domain. This property underlies the expressive power of neural
networks in learning complex mappings.
Theorem 3.25 (Universal Approximation of 𝑛-Nested Uncertainty). Let F be a class of target functions that
map from P̃𝑛 ( [0, 1] 3 )-structured input to real output, i.e.,

𝑓 : P̃𝑛 ( [0, 1] 3 ) × · · · × P̃𝑛 ( [0, 1] 3 ) → R.

Suppose each aggregator AGG𝑛 can be realized as a universal approximator for functions over P̃𝑛 ( [0, 1] 3 ).
Then an 𝑛-SHN-GNN with sufficient hidden layer width and depth can approximate any target function 𝑓 ∈ F
arbitrarily well.

Proof. This statement extends the universal approximation property of neural networks to the domain of
nested uncertain sets P̃𝑛 ( [0, 1] 3 ). The key idea is that the aggregator AGG𝑛 (plus any standard feedforward
sub-layers) must have enough expressive capacity to approximate arbitrary continuous mappings of the inputs
from P̃𝑛 ( [0, 1] 3 ). Under standard assumptions of neural universal approximation (e.g., multi-layer perceptrons
with sufficient width and suitable activation), we can embed or encode each nested set structure into a finite-
dimensional space, apply a universal approximator, and decode as necessary. Provided the aggregator supports
transformations rich enough (e.g., a deep parametric function), it can approximate any continuous function
on the domain P̃𝑛 ( [0, 1] 3 ). This argument follows the usual universal approximation theorem, adapted to an
embedding space for the nested sets. Convergence in approximation is then guaranteed by classical results on
feedforward networks with continuous activation functions (e.g., sigmoids or ReLU). □

3.3 Neutrosophic Cognitive Maps

A Cognitive Map is a directed graph that models concepts (nodes) and their causal relationships (edges),
where the edges are assigned weighted influences [19, 203, 223, 279]. Over time, various extensions have been
developed, including Fuzzy Cognitive Maps [17, 162, 168, 199, 201, 210], Intuitionistic Fuzzy Cognitive Maps
[69, 134, 175, 200], Neutrosophic Cognitive Maps [9, 148, 185, 194, 207], Dynamic Cognitive Maps [37, 180],
Hesitant fuzzy Cognitive Maps [49–51], Rough Cognitive Maps [46, 47], Cognitive Hypermaps [92], and
Cognitive n-SuperHypermaps [92].

This subsection focuses on the HyperNeutrosophic Cognitive Map and the 𝑛-SuperHyperNeutrosophic Cogni-
tive Map. Their definitions, associated theorems, and relevant properties are detailed below. While the discus-
sion here centers on HyperNeutrosophic Sets and 𝑛-SuperHyperNeutrosophic Sets, similar analyses can also be
conducted for Hyperfuzzy Sets, 𝑛-SuperHyperfuzzy Sets, Hyperplithogenic Sets, and 𝑛-SuperHyperplithogenic
Sets.
Definition 3.26 (Limit Cycle). (cf. [14]) In general, a limit cycle is a closed trajectory in the phase space of a
dynamical system such that trajectories starting in its vicinity asymptotically approach it (stable limit cycle) or
diverge from it (unstable limit cycle). It represents periodic behavior of the system.
Definition 3.27 (Neutrosophic Cognitive Map (NCM)). [9, 148, 185, 194, 207] Neutrosophic Cognitive Map
(NCM) is a directed graph G = (𝐶, 𝐸) whose vertices (concepts) are linked by edges (causal relationships)
weighted by neutrosophic triples. Specifically:

1. 𝐶 = {𝐶1 , 𝐶2 , . . . , 𝐶𝑛 } is a finite set of 𝑛 concepts representing variables, events, or processes in a system.


2. 𝐸 ⊆ 𝐶 × 𝐶 is the set of directed edges, where each edge (𝐶𝑖 , 𝐶 𝑗 ) indicates a causal influence from 𝐶𝑖 to
𝐶𝑗.

3. Each edge (𝐶𝑖 , 𝐶 𝑗 ) has a neutrosophic weight

𝑊𝑖 𝑗 = (𝑇𝑖 𝑗 , 𝐼𝑖 𝑗 , 𝐹𝑖 𝑗 ), with 𝑇𝑖 𝑗 + 𝐼𝑖 𝑗 + 𝐹𝑖 𝑗 ≤ 1,

where
𝑇𝑖 𝑗 ∈ [0, 1] (truth or positive influence), 𝐼𝑖 𝑗 ∈ [0, 1] (indeterminacy),
𝐹𝑖 𝑗 ∈ [0, 1] (falsity or negative influence).

Adjacency Matrix of an NCM. The adjacency matrix 𝑊 of an NCM is a matrix whose (𝑖, 𝑗)-th entry is
𝑊𝑖 𝑗 = (𝑇𝑖 𝑗 , 𝐼𝑖 𝑗 , 𝐹𝑖 𝑗 ). Each row-column entry thus encodes the neutrosophic weights from one concept to
another.

State Vector. At time 𝑡, the system’s state is given by a vector

𝐴(𝑡) = [ 𝑎 1 (𝑡), 𝑎 2 (𝑡), . . . , 𝑎 𝑛 (𝑡) ],

where each 𝑎 𝑖 (𝑡) ∈ [0, 1] denotes the activation level of concept 𝐶𝑖 at time 𝑡.

State Update Rule. The state evolves in discrete time. Given the adjacency matrix 𝑊, the new state 𝐴(𝑡 + 1) is computed by:

𝐴(𝑡 + 1) = Threshold(𝐴(𝑡) · 𝑊),

where

(𝐴(𝑡) · 𝑊)ⱼ = Σᵢ₌₁ⁿ (𝑇ᵢⱼ · 𝑎ᵢ(𝑡) − 𝐹ᵢⱼ · 𝑎ᵢ(𝑡) + 𝐼ᵢⱼ · 𝑎ᵢ(𝑡)),

and Threshold(·) re-scales or normalizes the result to remain within [0, 1]ⁿ.

Fixed Point and Limit Cycle. A fixed point is a state 𝐴∗ where 𝐴∗ = Threshold( 𝐴∗ · 𝑊). A limit cycle is a
finite sequence of states { 𝐴(𝑡), 𝐴(𝑡 + 1), . . . , 𝐴(𝑡 + 𝑘)} that repeats periodically.

Example 3.28 (NCM Illustration). Scenario: Suppose we have three concepts related to a simplified economic
model:

• 𝐶1 : Employment rate

• 𝐶2 : Investment level
• 𝐶3 : Consumer confidence

Let these concepts form an NCM with the following edges and neutrosophic weights:

𝑊12 = (0.6, 0.2, 0.0), 𝑊23 = (0.4, 0.3, 0.1), 𝑊31 = (0.0, 0.2, 0.5),

and assume no other direct influences exist, so any missing edges have weight (0, 0, 0).

Interpretation of Weights:

• 𝑊12 = (0.6, 0.2, 0.0): If the employment rate (𝐶1 ) rises, it positively influences investment level (𝐶2 ) with
a truth degree of 0.6. There’s some uncertainty (0.2) about the relationship, and no negative influence.
• 𝑊23 = (0.4, 0.3, 0.1): Investment level (𝐶2 ) tends to increase consumer confidence (𝐶3 ) but with
moderate uncertainty and a small negative component (e.g., risk of inflation).

• 𝑊31 = (0.0, 0.2, 0.5): Consumer confidence (𝐶3 ) might negatively impact employment rate if, for
example, overconfidence leads to unstable spending (falsity = 0.5). There’s also 0.2 uncertainty in this
link.

State Update: Let 𝐴(𝑡) = [𝑎₁(𝑡), 𝑎₂(𝑡), 𝑎₃(𝑡)]. Then

𝐴(𝑡 + 1) = Threshold(𝐴(𝑡) · 𝑊) = Threshold([𝑋₁, 𝑋₂, 𝑋₃]),

where, for instance,

𝑋₂ = Σᵢ₌₁³ (𝑇ᵢ₂ − 𝐹ᵢ₂ + 𝐼ᵢ₂) 𝑎ᵢ(𝑡).

If we pick an initial state 𝐴(0) = [0.5, 0.3, 0.8], we iteratively update 𝐴(1), 𝐴(2), . . . until the system stabilizes or settles into a cycle.

Why It’s Useful:

• Capturing Uncertainty: Each weight includes truth, indeterminacy, and falsity—helpful when exact
causal strengths are not fully known.

• Modeling Complex Feedback Loops: NCMs can capture cyclical influences (e.g., 𝐶1 → 𝐶2 and 𝐶2 → 𝐶3
and possibly 𝐶3 → 𝐶1 ).
• Possible Outcomes: The model might converge to a fixed point (e.g., stable employment-investment-
confidence levels) or oscillate if the feedback loops are strong or uncertain.

This example provides an illustration of how to construct and interpret an NCM in a simple economic context.
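
The iteration itself takes only a few lines of Python. The sketch below is an illustration only: reading the effective edge influence as 𝑇 − 𝐹 + 𝐼 (per the update rule above) and taking Threshold to be clipping to [0, 1] are both modeling choices; with these weights the trajectory settles to a fixed point after a few steps.

import numpy as np

# W[i][j] = (T, I, F): influence of concept i on concept j; (0, 0, 0) = no edge.
Z = (0.0, 0.0, 0.0)
W = [
    [Z, (0.6, 0.2, 0.0), Z],  # C1 -> C2
    [Z, Z, (0.4, 0.3, 0.1)],  # C2 -> C3
    [(0.0, 0.2, 0.5), Z, Z],  # C3 -> C1
]
M = np.array([[T - F + I for (T, I, F) in row] for row in W])

A = np.array([0.5, 0.3, 0.8])  # initial state A(0)
for t in range(6):
    A = np.clip(A @ M, 0.0, 1.0)  # Threshold = clip to [0, 1]
    print(f"A({t + 1}) =", np.round(A, 3))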
Definition 3.29 (HyperNeutrosophic Cognitive Map (HNCM)). A HyperNeutrosophic Cognitive Map (HNCM)
is a generalization of a Neutrosophic Cognitive Map, in which each directed edge is associated with a set of
neutrosophic weights rather than a single neutrosophic triple. Formally, let

G = (𝐶, 𝐸)

be a directed graph where:

• 𝐶 = {𝐶1 , 𝐶2 , . . . , 𝐶𝑛 } is a finite set of 𝑛 concepts.


• 𝐸 ⊆ 𝐶 × 𝐶 is a set of directed edges.

For each edge (𝐶𝑖 , 𝐶 𝑗 ) ∈ 𝐸, the associated weight is a HyperNeutrosophic Set

𝑊𝑖 𝑗 ⊆ [0, 1] 3 ,

where each element of 𝑊ᵢⱼ is a triple (𝑇ₖ(𝐶ᵢ, 𝐶ⱼ), 𝐼ₖ(𝐶ᵢ, 𝐶ⱼ), 𝐹ₖ(𝐶ᵢ, 𝐶ⱼ)) satisfying:

0 ≤ 𝑇ₖ(𝐶ᵢ, 𝐶ⱼ) + 𝐼ₖ(𝐶ᵢ, 𝐶ⱼ) + 𝐹ₖ(𝐶ᵢ, 𝐶ⱼ) ≤ 3.

That is, for edge (𝐶ᵢ, 𝐶ⱼ),

𝑊ᵢⱼ = { (𝑇ₖ(𝐶ᵢ, 𝐶ⱼ), 𝐼ₖ(𝐶ᵢ, 𝐶ⱼ), 𝐹ₖ(𝐶ᵢ, 𝐶ⱼ)) | 𝑘 ∈ Kᵢⱼ } ⊆ [0, 1]³,

where K𝑖 𝑗 is an index set representing multiple evaluations or sources of uncertainty for the causal relationship
from 𝐶𝑖 to 𝐶 𝑗 .

HNCM Adjacency Representation. The adjacency structure of an HNCM can be recorded in a matrix $\mathcal{W}$ whose $(i,j)$-th entry is the hyperneutrosophic set $W_{ij}$. That is,
\[
\mathcal{W} = \bigl[\, W_{ij} \,\bigr], \qquad W_{ij} \subseteq [0,1]^3.
\]
 
State Vector and Update Rule. Let $A(t) = [\,a_1(t), a_2(t), \dots, a_n(t)\,]$ be the state of the concepts at time $t$, where $a_i(t) \in [0,1]$. An HNCM typically requires an aggregation operator to combine the multiple neutrosophic triples in $W_{ij}$. One common approach is to define a function $\mathrm{Agg} : \mathcal{P}([0,1]^3) \to [0,1]^3$ that aggregates the set of triples in $W_{ij}$ into a single effective triple:
\[
\mathrm{Agg}\bigl(W_{ij}\bigr) = \bigl(\overline{T}_{ij},\, \overline{I}_{ij},\, \overline{F}_{ij}\bigr),
\]
where
\[
\overline{T}_{ij} + \overline{I}_{ij} + \overline{F}_{ij} \le 3.
\]
Once aggregated, the state update rule can be modeled analogously to a standard NCM, for example:
\[
\bigl(A(t+1)\bigr)_j = \mathrm{Threshold}\Bigl(\sum_{i=1}^{n} \bigl[\, \overline{T}_{ij}\, a_i(t) - \overline{F}_{ij}\, a_i(t) + \overline{I}_{ij}\, a_i(t) \,\bigr]\Bigr),
\]
where Threshold is a normalization (cf. [20, 36]) or clipping function (cf. [44]) ensuring $(A(t+1))_j \in [0,1]$. Alternative aggregation or update rules may be used depending on the application.
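As a concrete (assumed) instance of Agg, one may take the component-wise mean of the triples in $W_{ij}$, which preserves the constraint $T + I + F \le 3$ by convexity; a brief sketch:

    # One possible HNCM aggregation operator: component-wise mean. The mean
    # is an assumption; min-, max-, or evidence-weighted schemes also fit.

    def agg(triples):
        """Collapse a set of (T, I, F) triples into one effective triple."""
        k = len(triples)
        return tuple(sum(t[c] for t in triples) / k for c in range(3))

    # Two conflicting evaluations of the edge C_i -> C_j:
    W_ij = [(0.6, 0.2, 0.0), (0.4, 0.4, 0.1)]
    T_bar, I_bar, F_bar = agg(W_ij)
    print(T_bar, I_bar, F_bar)   # (0.5, 0.3, 0.05); feeds the NCM-style update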

Definition 3.30 (𝑛-SuperHyperNeutrosophic Cognitive Map (𝑛-SHNCM)). An 𝑛-SuperHyperNeutrosophic
Cognitive Map is a further generalization of a HyperNeutrosophic Cognitive Map, where each directed edge is
associated with an 𝑛-SuperHyperNeutrosophic Set rather than a HyperNeutrosophic Set. Formally, let

G = (𝐶, 𝐸)

be a directed graph where:

• 𝐶 = {𝐶1 , 𝐶2 , . . . , 𝐶𝑛 } is a finite set of 𝑛 concepts.

• 𝐸 ⊆ 𝐶 × 𝐶 is a set of directed edges.

For each edge $(C_i, C_j) \in E$, the associated weight is an $n$-SuperHyperNeutrosophic Set
\[
W^{(n)}_{ij} : \widetilde{\mathcal{P}}_n\bigl(\{(C_i, C_j)\}\bigr) \to \widetilde{\mathcal{P}}_n\bigl([0,1]^3\bigr),
\]
where $\widetilde{\mathcal{P}}_n$ denotes the $n$-th nested family of non-empty subsets, as in the definition of $n$-SuperHyperNeutrosophic Sets. Concretely, $W^{(n)}_{ij}$ assigns to each $A \in \widetilde{\mathcal{P}}_n(\{(C_i, C_j)\})$ a subset of $[0,1]^3$ such that
\[
0 \le T + I + F \le 3
\]
for each triple $(T, I, F)$ in that subset.

Intuitive Interpretation. The 𝑛-SuperHyperNeutrosophic Set on each edge (𝐶𝑖 , 𝐶 𝑗 ) encapsulates not just
multiple neutrosophic evaluations (𝑘-indexed), but multiple levels (or layers) of hierarchical uncertainty. For
instance, the first level might capture direct uncertainty from data, the second level might capture expert
disagreements about that data, and so forth, up to 𝑛 nested levels.

Adjacency Representation. The adjacency structure of an $n$-SHNCM can be encoded in an $n$-level superhyperneutrosophic matrix, with each entry $W^{(n)}_{ij}$ being an $n$-SuperHyperNeutrosophic Set that maps nested subsets to subsets of $[0,1]^3$.

State Vector and Update Rule. Let
\[
A(t) = [\,a_1(t),\, a_2(t),\, \dots,\, a_n(t)\,]
\]
be the state vector at time $t$. To apply an update, one must first define a procedure to collapse or aggregate the $n$-SuperHyperNeutrosophic Set on each edge into an effective triple or small set of triples suitable for computing a concept's activation. For example, one might define
\[
\mathrm{Agg}_n\bigl(W^{(n)}_{ij}\bigr) \subseteq [0,1]^3
\]
as an aggregation operator that extracts the relevant truth, indeterminacy, and falsity values from the hierarchical structure. Then, each concept's next state $(A(t+1))_j$ can be computed using a generalized version of the NCM or HNCM update mechanism:
\[
(A(t+1))_j = \mathrm{Threshold}\Bigl(\sum_{i=1}^{n} \bigl[\, \overline{T}_{ij}\, a_i(t) - \overline{F}_{ij}\, a_i(t) + \overline{I}_{ij}\, a_i(t) \,\bigr]\Bigr),
\]
where $(\overline{T}_{ij}, \overline{I}_{ij}, \overline{F}_{ij}) \in \mathrm{Agg}_n\bigl(W^{(n)}_{ij}\bigr)$ and Threshold ensures the result remains in $[0,1]$.
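One way to realize $\mathrm{Agg}_n$ (again an assumption for illustration) is to collapse the nesting recursively, averaging the innermost triples first and then averaging upward level by level:

    # Recursive sketch of Agg_n: a nested structure (lists of lists of ...
    # (T, I, F) tuples) is collapsed bottom-up by component-wise averaging.

    def agg_n(node):
        if isinstance(node, tuple):              # base case: one triple
            return node
        collapsed = [agg_n(child) for child in node]
        k = len(collapsed)
        return tuple(sum(t[c] for t in collapsed) / k for c in range(3))

    # A 2-level edge weight: two sources, each holding raw evaluations.
    W2 = [
        [(0.7, 0.1, 0.1), (0.5, 0.3, 0.2)],      # level-1 set from source A
        [(0.2, 0.4, 0.4)],                       # level-1 set from source B
    ]
    print(agg_n(W2))   # -> (0.4, 0.3, 0.275), a single effective triple

Since averaging is convex, every intermediate and final triple inherits the bound $T + I + F \le 3$, and the scheme is layer-wise consistent in the sense of Theorem 3.32 below.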

Key Properties of an 𝑛-SHNCM:

• Hierarchical Uncertainty: Multiple nested layers of indeterminacy or conflicting data are represented
within each edge.

• Complex Aggregation: The user must specify an aggregation operator Agg𝑛 to interpret the hierarchical
structure for updating concept states.
• Generalized Dynamics: Similar to standard cognitive maps, the system may evolve to fixed points, limit
cycles, or exhibit chaotic behavior, but now under deeper multi-level uncertainty.

Theorem 3.31 (Generalization Property of 𝑛-SuperHyperNeutrosophic Cognitive Maps). An 𝑛-SuperHyperNeutrosophic Cognitive Map (𝑛-SHNCM) strictly generalizes both the HyperNeutrosophic Cognitive Map (HNCM) and the Neutrosophic Cognitive Map (NCM). Concretely:

1. If 𝑛 = 0, the 𝑛-SHNCM reduces to a standard Neutrosophic Cognitive Map.

2. If 𝑛 = 1, the 𝑛-SHNCM reduces to a HyperNeutrosophic Cognitive Map.

For 𝑛 > 1, it provides additional nesting of neutrosophic information, thereby generalizing both HNCMs
(𝑛 = 1) and NCMs (𝑛 = 0).

Proof. We recall the following definitions:

• A Neutrosophic Cognitive Map (NCM) assigns, to each directed edge $(C_i, C_j)$, exactly one neutrosophic triple $(T_{ij}, I_{ij}, F_{ij})$, with $0 \le T_{ij} + I_{ij} + F_{ij} \le 3$, consistent with the constraint used throughout this paper.
• A HyperNeutrosophic Cognitive Map (HNCM) assigns, to each edge (𝐶𝑖 , 𝐶 𝑗 ), a set of neutrosophic
triples 𝑊𝑖 𝑗 ⊆ [0, 1] 3 . Each triple in that set must satisfy 𝑇 + 𝐼 + 𝐹 ≤ 3. Typically, after an aggregation
step, one obtains an effective triple (or small set of triples) to compute causal influences.
• An 𝑛-SuperHyperNeutrosophic Cognitive Map (𝑛-SHNCM) assigns, to each edge (𝐶𝑖 , 𝐶 𝑗 ), an 𝑛-
SuperHyperNeutrosophic Set of neutrosophic triples. This set is recursively or hierarchically defined up
to 𝑛 levels, capturing additional layers of uncertainty or contradictory evaluations.

Case 𝑛 = 0: By definition, a 0-SuperHyperNeutrosophic Set for each edge is just a single neutrosophic triple $(T_{ij}, I_{ij}, F_{ij})$. Hence each edge $(C_i, C_j)$ holds precisely one triple. This matches exactly the data structure of a standard Neutrosophic Cognitive Map. Therefore, when 𝑛 = 0, the 𝑛-SHNCM coincides with an NCM.

Case 𝑛 = 1: When 𝑛 = 1, each edge (𝐶𝑖 , 𝐶 𝑗 ) is assigned a HyperNeutrosophic Set of neutrosophic triples
rather than just one triple. This is precisely the definition of a HyperNeutrosophic Cognitive Map (HNCM).
Hence for 𝑛 = 1, the 𝑛-SHNCM reduces to the HNCM framework.

Case 𝑛 > 1: For 𝑛 > 1, an 𝑛-SuperHyperNeutrosophic Set is a nested, higher-order generalization of a HyperNeutrosophic Set. One obtains successively deeper layers of neutrosophic evaluations. The resulting map can capture more complex or multi-level contradictory opinions. This structure naturally subsumes the single-level sets of an HNCM (𝑛 = 1) and the single triple of an NCM (𝑛 = 0).

Hence, for each integer 𝑛 ≥ 0:

𝑛 = 0 ⇒ standard NCM, 𝑛 = 1 ⇒ HNCM, 𝑛 > 1 ⇒ strict generalization beyond HNCM.

Thus, an 𝑛-SHNCM indeed generalizes both HNCM (𝑛 = 1) and NCM (𝑛 = 0). □

Theorem 3.32 (Layer-by-Layer Aggregation Consistency). Let $\mathrm{Agg}_n$ be an aggregation operator that collapses each $n$-SuperHyperNeutrosophic Set $W^{(n)}_{ij}$ into finitely many (or one) representative triple(s). Suppose for each edge $(C_i, C_j)$,
\[
\mathrm{Agg}_n\bigl(W^{(n)}_{ij}\bigr) \subseteq [0,1]^3,
\]
and for every triple $(T, I, F)$ in the output, we have $T + I + F \le 3$. If $\mathrm{Agg}_n$ is layer-wise consistent, i.e. it respects the nested structure in $\widetilde{\mathcal{P}}_n([0,1]^3)$ at each level, then the final aggregated map is well-defined for an $n$-SHNCM.

Statement. Layer-wise consistency means that if $V^{(k)}_{ij} \subset \widetilde{\mathcal{P}}_k([0,1]^3)$ is a $k$-fold subset at level $k$, and $V^{(k+1)}_{ij}$ extends it at level $k+1$, then
\[
\mathrm{Agg}_{k+1}\bigl(V^{(k+1)}_{ij}\bigr) \text{ coincides with or refines } \mathrm{Agg}_{k}\bigl(V^{(k)}_{ij}\bigr),
\]
ensuring no contradictions among nested layers. Under this assumption, the aggregator $\mathrm{Agg}_n$ produces a unique or consistent triple set at the final output, guaranteeing a well-defined $n$-SHNCM adjacency representation.

Proof. Consider the nested sets
\[
V^{(1)} \subseteq V^{(2)} \subseteq \cdots \subseteq V^{(n)},
\]
where each $V^{(k)} \in \widetilde{\mathcal{P}}_k([0,1]^3)$. By hypothesis, $\mathrm{Agg}_n$ composes the $\mathrm{Agg}_k$ layer by layer. Specifically, $\mathrm{Agg}_{k+1}(V^{(k+1)})$, restricted to level $k$, must match $\mathrm{Agg}_k(V^{(k)})$. This compositional property ensures that the final aggregator output at level $n$ is independent of the path chosen to aggregate intermediate subsets. Hence, the aggregator is well-defined across all edges $(C_i, C_j)$. As each triple $(T, I, F)$ satisfies $T + I + F \le 3$, the neutrosophic constraints remain intact, thereby giving a well-defined final adjacency representation for the $n$-SHNCM. □
Theorem 3.33 (Fixed Point under Contractive Aggregation). Let an $n$-SHNCM have a state update rule
\[
A(t+1) = F_n\bigl(A(t)\bigr),
\]
where $F_n : [0,1]^n \to [0,1]^n$ is formed by aggregating each $n$-SuperHyperNeutrosophic edge set $W^{(n)}_{ij}$ and summing influences. Assume there exists a metric $d(\cdot, \cdot)$ on $[0,1]^n$ such that $F_n$ is a strict contraction, i.e. for all $A, B \in [0,1]^n$:
\[
d\bigl(F_n(A), F_n(B)\bigr) \le \lambda\, d(A, B), \quad \text{for some constant } 0 < \lambda < 1.
\]
Then there exists a unique fixed point $A^* \in [0,1]^n$ such that
\[
F_n(A^*) = A^*.
\]

Proof. This is an application of the Banach Fixed Point Theorem (also known as the Contraction Mapping
Principle). Since 𝐹𝑛 is defined on the complete metric space ( [0, 1] 𝑛 , 𝑑) and satisfies 𝑑 (𝐹𝑛 ( 𝐴), 𝐹𝑛 (𝐵)) ≤
𝜆 𝑑 ( 𝐴, 𝐵) with 𝜆 < 1, there is a unique fixed point 𝐴∗ satisfying 𝐴∗ = 𝐹𝑛 ( 𝐴∗ ). Existence follows by iterative
updates from any initial state, and uniqueness follows because a strict contraction cannot admit two distinct
fixed points. □
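The following toy sketch illustrates the theorem numerically: an assumed affine update with contraction factor $\lambda = 0.5$ converges to the same fixed point from different initial states.

    # Toy check of Theorem 3.33. F below is an assumed contractive update
    # (lambda = 0.5 under the max-metric); clipping to [0, 1] is
    # non-expansive, so the composition remains a strict contraction.

    def F(A):
        return [min(1.0, max(0.0, 0.5 * a + 0.2)) for a in A]

    for start in ([0.0, 1.0], [0.9, 0.1]):
        A = start
        for _ in range(60):
            A = F(A)
        print(A)   # both runs end at the unique fixed point [0.4, 0.4]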
Theorem 3.34 (Boundedness of 𝐴(𝑡) for 𝑛-SHNCM). In an 𝑛-SHNCM, let 𝐴(𝑡) ∈ [0, 1] 𝑛 evolve via any
aggregator-based update rule. Then for all times 𝑡 ≥ 0, we have 𝐴(𝑡) ∈ [0, 1] 𝑛 . In other words, the state
remains in the unit hypercube [0, 1] 𝑛 regardless of the complexity or depth 𝑛 of the superhyperneutrosophic
edges.

Proof. By definition, each aggregator $\mathrm{Agg}_n\bigl(W^{(n)}_{ij}\bigr)$ returns neutrosophic triples $(T, I, F)$ with $T + I + F \le 3$ and each $T, I, F \in [0,1]$. The update for each concept $C_j$ is typically
\[
(A(t+1))_j = \mathrm{Threshold}\Bigl(\sum_{i=1}^{n} \bigl(\overline{T}_{ij}\, a_i(t) - \overline{F}_{ij}\, a_i(t) + \overline{I}_{ij}\, a_i(t)\bigr)\Bigr),
\]
where Threshold is some normalization or clipping that maps real values into $[0,1]$. Since $a_i(t) \in [0,1]$ and $\overline{T}_{ij}, \overline{F}_{ij}, \overline{I}_{ij} \in [0,1]$, each summand lies in $[-1, 2]$, so the inner expression is bounded in $[-n, 2n]$. The Threshold function ensures $(A(t+1))_j \in [0,1]$. Therefore, $A(t+1) \in [0,1]^n$, and by induction on $t$ (starting from some $A(0) \in [0,1]^n$), all future states remain in $[0,1]^n$. □

3.4 Neutrosophic Classifier

A classifier is a function or algorithm that assigns input data to predefined categories or classes based on learned
patterns or rules from training data [12, 28, 234]. A Neutrosophic Classifier leverages neutrosophic logic to
address uncertainty by assigning truth, indeterminacy, and falsity degrees for classification tasks [15,16,71,284].

A related concept is the Fuzzy Classifier, which uses fuzzy logic to handle uncertainty by assigning membership
degrees to classes, enabling flexible and imprecise decision boundaries [39, 165, 208, 300]. Additionally, the
Intuitionistic Fuzzy Classifier is another known approach in this context [154, 264, 272].

This section introduces the HyperNeutrosophic Classifier and the 𝑛-SuperHyperNeutrosophic Classifier, which
extend the Neutrosophic Classifier framework to higher levels of complexity and abstraction.
Definition 3.35 (Supervised Learning). [40, 55, 59, 119] In general, Supervised Learning is a method in
machine learning that involves learning a relationship or mapping between a set of input variables 𝑋 and an
output variable 𝑌 based on labeled training data. The goal is to construct a model 𝑓 , which minimizes the risk
and accurately predicts the output for unseen data. The training dataset consists of pairs (𝑥𝑖 , 𝑦 𝑖 ), where 𝑥𝑖 ∈ 𝑋
and 𝑦 𝑖 ∈ 𝑌 . Using the principle of risk minimization, the model 𝑓 is optimized to generalize well to new data.
Once trained, the model can be applied to infer outputs for new, unlabeled inputs.
Notation 3.36. All definitions assume a supervised learning context in which we have:

Training set D = {(𝑥 1 , 𝑦 1 ), (𝑥2 , 𝑦 2 ), . . . , (𝑥 𝑚 , 𝑦 𝑚 )},


where 𝑥 𝑖 ∈ R𝑑 (feature space), and 𝑦 𝑖 ∈ C = {𝑐 1 , 𝑐 2 , . . . , 𝑐 𝐾 } is the set of class labels.
Definition 3.37 (Neutrosophic Classifier). Let $\mathcal{X} = \mathbb{R}^d$ be the feature space, and let $\mathcal{C} = \{c_1, c_2, \dots, c_K\}$ be a finite set of classes. A Neutrosophic Classifier is a function
\[
\mathcal{N} : \mathcal{X} \to \underbrace{[0,1]^3 \times \cdots \times [0,1]^3}_{K \text{ times}},
\]
so that for each input $x \in \mathcal{X}$, the classifier outputs a $K$-tuple of neutrosophic membership triples:
\[
\mathcal{N}(x) = \bigl( (T_1(x), I_1(x), F_1(x)),\, \dots,\, (T_K(x), I_K(x), F_K(x)) \bigr),
\]
where each triple $(T_j(x), I_j(x), F_j(x)) \in [0,1]^3$ satisfies
\[
T_j(x) + I_j(x) + F_j(x) \le 3.
\]
Intuitively, $T_j(x)$ represents the degree of truth-membership (confidence that $x$ is in class $c_j$), $I_j(x)$ the degree of indeterminacy, and $F_j(x)$ the degree of falsity.

Classification Decision: Typically, one applies a defuzzification or neutrosophic decision function to map the neutrosophic outputs to a crisp label in $\mathcal{C}$. For example:
\[
\hat{y}(x) = \arg\max_{1 \le j \le K} \bigl( T_j(x) - F_j(x) \bigr),
\]
or any other decision rule that accounts for $(T_j, I_j, F_j)$.
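For instance, with hypothetical output triples for three classes, the rule above reduces to a one-line argmax, as in this sketch:

    # Sketch of the decision rule y_hat = argmax_j (T_j(x) - F_j(x)).
    # The three class triples below stand in for an actual classifier output.

    N_x = [(0.7, 0.2, 0.1),   # class c_1: (T, I, F)
           (0.5, 0.4, 0.3),   # class c_2
           (0.6, 0.1, 0.5)]   # class c_3

    y_hat = max(range(len(N_x)), key=lambda j: N_x[j][0] - N_x[j][2])
    print(y_hat)   # 0, i.e. class c_1: score 0.6 beats 0.2 and 0.1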


Definition 3.38 (HyperNeutrosophic Classifier). Let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{C} = \{c_1, \dots, c_K\}$. A HyperNeutrosophic Classifier is a function
\[
\mathcal{HN} : \mathcal{X} \to \bigl( \mathcal{P}([0,1]^3) \bigr)^K,
\]
where for each $x \in \mathcal{X}$ and for each class $c_j$,
\[
\mathcal{HN}(x)_j = W_j(x) \subseteq [0,1]^3
\]
is a HyperNeutrosophic Set of possible triples $\{(T_k, I_k, F_k)\}$. Each triple $(T_k, I_k, F_k)$ within $W_j(x)$ must satisfy
\[
T_k + I_k + F_k \le 3.
\]

Interpretation: Rather than assigning one neutrosophic triple to class $c_j$, the HyperNeutrosophic Classifier assigns a nonempty subset of $[0,1]^3$. Different elements in this subset might represent multiple experts' opinions, different data sources, or uncertain/conflicting evaluations of the membership degrees for class $c_j$.

Decision Rule: One commonly aggregates each set $W_j(x)$ into a single representative triple, e.g.
\[
(\overline{T}, \overline{I}, \overline{F})_j(x) = \mathrm{Agg}\bigl(W_j(x)\bigr),
\]
then applies a neutrosophic-based decision rule, such as
\[
\hat{y}(x) = \arg\max_{1 \le j \le K} \bigl( \overline{T}_j(x) - \overline{F}_j(x) \bigr).
\]
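Building on the previous sketch, a HyperNeutrosophic decision first collapses each class's set of triples, here with an assumed component-wise mean, and then scores the aggregated triples the same way:

    # Hyperneutrosophic decision sketch: each class carries a *set* of
    # triples (e.g. several experts); the mean aggregator is an assumption.

    def agg(triples):
        k = len(triples)
        return tuple(sum(t[c] for t in triples) / k for c in range(3))

    W_x = [
        [(0.7, 0.2, 0.1), (0.9, 0.1, 0.2)],   # class c_1: two evaluations
        [(0.5, 0.4, 0.3), (0.6, 0.2, 0.1)],   # class c_2
    ]
    eff = [agg(w) for w in W_x]
    y_hat = max(range(len(W_x)), key=lambda j: eff[j][0] - eff[j][2])
    print(y_hat)   # 0: aggregated scores 0.65 (c_1) vs 0.35 (c_2)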

Definition 3.39 (𝑛-SuperHyperNeutrosophic Classifier). Let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{C} = \{c_1, \dots, c_K\}$. An $n$-SuperHyperNeutrosophic Classifier is a function
\[
\mathcal{SHN}^{(n)} : \mathcal{X} \to \bigl( \widetilde{\mathcal{P}}_n([0,1]^3) \bigr)^K,
\]
where $\widetilde{\mathcal{P}}_n([0,1]^3)$ denotes the $n$-th nested family of non-empty subsets of the unit cube $[0,1]^3$. Concretely, for each $x \in \mathcal{X}$ and class $c_j$,
\[
\mathcal{SHN}^{(n)}(x)_j = \mathcal{W}^{(n)}_j(x),
\]
where $\mathcal{W}^{(n)}_j(x) \in \widetilde{\mathcal{P}}_n([0,1]^3)$. Each nested membership structure satisfies the constraint
\[
\forall (T, I, F) \in \mathcal{W}^{(n)}_j(x) : \; T + I + F \le 3.
\]

Interpretation: At level $n = 0$, $\mathcal{W}^{(0)}_j(x)$ is simply a single triple $(T, I, F)$, matching a standard Neutrosophic Classifier. At level $n = 1$, $\mathcal{W}^{(1)}_j(x) \subseteq [0,1]^3$ is a HyperNeutrosophic set, i.e. multiple triples. For $n \ge 2$, one obtains hierarchically nested sets-of-sets, capturing multi-level or multi-source uncertainty at each depth.

Decision Step: One typically defines an aggregation operator $\mathrm{Agg}_n : \widetilde{\mathcal{P}}_n([0,1]^3) \to [0,1]^3$ to collapse each nested membership structure to a single triple. Then the classification decision can be performed via a neutrosophic-based rule on these aggregated triples.
Theorem 3.40 (Reduction Property of 𝑛-SuperHyperNeutrosophic Classifier). An 𝑛-SuperHyperNeutrosophic
Classifier reduces to a HyperNeutrosophic Classifier if 𝑛 = 1, and reduces to a standard Neutrosophic Classifier
if 𝑛 = 0.

Proof. By definition, P̃0 ( [0, 1] 3 ) is just a single triple (𝑇, 𝐼, 𝐹). Thus, if 𝑛 = 0, SH N (0) assigns exactly one
triple to each class, i.e. a Neutrosophic Classifier. If 𝑛 = 1, P̃1 ( [0, 1] 3 ) = P ( [0, 1] 3 ) is a set of one or more
triples in [0, 1] 3 , forming a HyperNeutrosophic set. Hence SH N (1) becomes a HyperNeutrosophic Classifier.
For 𝑛 > 1, each class membership is an 𝑛-th nested structure, strictly more general than 𝑛 = 1 or 𝑛 = 0. □
Theorem 3.41 (Well-definedness of $\mathcal{SHN}^{(n)}$). Let $\mathcal{SHN}^{(n)}$ be an $n$-SuperHyperNeutrosophic Classifier. Suppose for each class $c_j$, $\mathcal{W}^{(n)}_j(x) \in \widetilde{\mathcal{P}}_n([0,1]^3)$ is generated via a function
\[
\Phi^{(n)}_j : \mathcal{X} \to \widetilde{\mathcal{P}}_n([0,1]^3),
\]
which respects the condition $T + I + F \le 3$. Then $\mathcal{SHN}^{(n)}$ is well-defined for all $x \in \mathcal{X}$.

Proof. We must check that each $\mathcal{W}^{(n)}_j(x)$ is indeed in $\widetilde{\mathcal{P}}_n([0,1]^3)$. By assumption, $\Phi^{(n)}_j$ constructs an element of the $n$-th nested power set. Moreover, each triple within the structure satisfies $T + I + F \le 3$. Since $j \in \{1, \dots, K\}$ is finite, the classifier output is a $K$-tuple of valid $n$-SuperHyperNeutrosophic sets. Thus, $\mathcal{SHN}^{(n)}$ is properly defined on the entire domain $\mathcal{X}$. □

Theorem 3.42 (Continuity of a Parametric $\mathcal{SHN}^{(n)}$). Suppose $\mathcal{SHN}^{(n)}$ depends on parameters $\Theta$, i.e. each $\mathcal{W}^{(n)}_j(x)$ is generated by a continuous function of $(x, \Theta)$ into $\widetilde{\mathcal{P}}_n([0,1]^3)$. If the aggregator and set-construction steps are all continuous with respect to $\Theta$ and $x$, then $\mathcal{SHN}^{(n)}$ is a continuous map from $\mathcal{X} \times \Theta$ into $\bigl( \widetilde{\mathcal{P}}_n([0,1]^3) \bigr)^K$.

Proof. Each step in producing $\mathcal{W}^{(n)}_j(x)$ is assumed continuous with respect to the real parameters $\Theta$ and the input $x$. The nested structure $\widetilde{\mathcal{P}}_n([0,1]^3)$ can be embedded or represented in a suitable topological space (e.g. via canonical encodings or by representing each level's subsets). Under a standard product topology, continuity follows by composition of continuous maps at each stage. Therefore, $\mathcal{SHN}^{(n)}(x; \Theta)$ is continuous in both arguments. □

Theorem 3.43 (Fixed-Structure Theorem for Classifier Consistency). Consider a training set {(𝑥 𝑖 , 𝑦 𝑖 )}.
Suppose the classifier SH N (𝑛) uses a fixed, non-trainable aggregator Agg𝑛 that maps each W𝑗(𝑛) (𝑥) to
a single triple (𝑇, 𝐼, 𝐹) 𝑗 (𝑥). If Agg𝑛 and the mapping from input to W𝑗(𝑛) (𝑥) remain unchanged dur-
ing training, then the classification boundaries are determined by the aggregator result. Specifically, if
𝑦ˆ (𝑥) = arg max 𝑗 (𝑇 𝑗 (𝑥) − 𝐹 𝑗 (𝑥)), any parameter updates that do not alter W𝑗(𝑛) (𝑥) or aggregator logic cannot
change the decision boundary.

Proof. Because the aggregator Agg𝑛 and the membership structure W𝑗(𝑛) (𝑥) are assumed fixed, none of the
computed triple (𝑇, 𝐼, 𝐹) 𝑗 (𝑥) can change. Hence, for all 𝑥, the final numeric score 𝑇 𝑗 (𝑥) − 𝐹 𝑗 (𝑥) is invariant.
Therefore, the classification boundary (i.e. the set of 𝑥 ∈ X where max 𝑗 (𝑇 𝑗 (𝑥) − 𝐹 𝑗 (𝑥)) is tied or changes
among classes) remains the same. Thus, no parameter modifications that do not affect W𝑗(𝑛) or aggregator
logic can alter the classifier’s decision function. □

Theorem 3.44 (Universal Approximation under Suitable Encodings). Let $\mathcal{SHN}^{(n)}$ be a parametric $n$-SuperHyperNeutrosophic classifier, which encodes each $n$-SuperHyperNeutrosophic set into a finite-dimensional vector (through an appropriate embedding) and then applies a universal approximator (e.g. a multilayer perceptron). Suppose the embedding and aggregator are sufficiently flexible to represent or approximate any continuous function $\mathcal{X} \to \bigl( \widetilde{\mathcal{P}}_n([0,1]^3) \bigr)^K$. Then, for any continuous target classification function $f^*(x)$ (mapping $x$ to some ideal membership structure in $\widetilde{\mathcal{P}}_n([0,1]^3)$), there exists a sequence of parameter settings $\Theta_m$ such that
\[
\lim_{m \to \infty} \bigl\| \mathcal{SHN}^{(n)}(x; \Theta_m) - f^*(x) \bigr\| = 0,
\]
uniformly on compact subsets of $\mathcal{X}$. Hence, $\mathcal{SHN}^{(n)}$ is a universal approximator in the domain of nested neutrosophic set-based classification.

Proof. The universal approximation argument proceeds by noting that each element of P̃𝑛 ( [0, 1] 3 ) can be
encoded into a finite (though possibly large) dimensional representation (e.g., by enumerating or sampling from
the nested membership sets). A sufficiently expressive neural network or parametric system can approximate any
continuous mapping from X → Z for Z ⊂ R 𝑀 . Composing this neural net with a suitable decoding step that
reconstructs or interprets the aggregated sets ensures that the entire map can approximate any continuous target
function 𝑓 ∗ . The details rely on standard universal approximation theorems plus a consistent encoding/decoding
scheme for the nested sets. Convergence in the sup norm (or uniform metric) follows from classical results in
neural net approximation theory. □

3.5 Neutrosophic Triplet Group

A Neutrosophic Triplet is an ordered triple that adheres to specific neutral and anti-properties with respect to a
binary operation [10,221,222,256,258]. Related concepts, such as Neutrosophic Duplets, are also recognized in
the literature [149, 179, 295]. This framework has been further extended to encompass the HyperNeutrosophic
Triplet and the 𝑛-SuperHyperNeutrosophic Triplet.


Definition 3.45 (Neutrosophic Triplet). [256] Let $(N, \star)$ be a nonempty set $N$ equipped with a binary operation $\star : N \times N \to N$. For an element $a \in N$, a neutrosophic triplet is an ordered triple $\bigl(a, \mathrm{neut}(a), \mathrm{anti}(a)\bigr)$ such that:
\[
a \star \mathrm{neut}(a) = \mathrm{neut}(a) \star a = a, \qquad a \star \mathrm{anti}(a) = \mathrm{anti}(a) \star a = \mathrm{neut}(a).
\]
The elements 𝑎, neut(𝑎), and anti(𝑎) are said to form a neutrosophic triplet. In this context:

• neut(𝑎) is called a neutral of 𝑎 (which replaces or generalizes an identity-like element but only relative
to 𝑎).
• anti(𝑎) is called an anti of 𝑎 (which replaces or generalizes an inverse-like element but only relative to
𝑎).

Definition 3.46 (Neutrosophic Triplet Group). A Neutrosophic Triplet Group (NTG) is a pair $(N, \star)$ such that:

1. Closure: For any 𝑥, 𝑦 ∈ 𝑁, we have 𝑥 ★ 𝑦 ∈ 𝑁.


2. Associativity: For all 𝑥, 𝑦, 𝑧 ∈ 𝑁, (𝑥 ★ 𝑦) ★ 𝑧 = 𝑥 ★ (𝑦 ★ 𝑧).
3. Existence of Neutrosophic Triplets: For every 𝑎 ∈ 𝑁, there exist neut(𝑎), anti(𝑎) ∈ 𝑁 such that

𝑎 ★ neut(𝑎) = neut(𝑎) ★ 𝑎 = 𝑎, 𝑎 ★ anti(𝑎) = anti(𝑎) ★ 𝑎 = neut(𝑎).

We emphasize that neut(𝑎) replaces the notion of a group identity for the specific element 𝑎, and anti(𝑎)
replaces the notion of an inverse for 𝑎. Unlike a classical group, these neutral and anti elements can vary with
𝑎.

Example 3.47. Consider $(Z_6, \times_6)$, where $\times_6$ is multiplication modulo 6. We observe:

2 ×6 4 = 8 ≡ 2 (mod 6), 2 ×6 2 = 4 (mod 6).

Hence for 𝑎 = 2:
neut(2) = 4, anti(2) = 2,
satisfying
2 ★ 4 = 4 ★ 2 = 2, 2 ★ 2 = 4, 4 ★ 4 = 4.
 
Thus $(2, 4, 2)$ is a neutrosophic triplet. Checking associativity and closure reveals that $(Z_6, \times_6)$ is not a classical group, yet it has valid neutrosophic triplets for certain elements. If each element admits such triplets, it forms an NTG (modulo verifying all conditions).
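Neutrals and antis in a small modular structure can be enumerated exhaustively; the short search below (illustrative only) recovers the triplet(s) for $a = 2$ in $(Z_6, \times_6)$.

    # Brute-force enumeration of neutrosophic triplets in (Z_6, x_6).
    # Multiplication mod 6 is commutative, so checking one side suffices.

    n = 6
    for a in range(n):
        for neut in range(n):
            if (a * neut) % n != a:
                continue                     # need a * neut = a
            for anti in range(n):
                if (a * anti) % n == neut:   # need a * anti = neut
                    print((a, neut, anti))
    # For a = 2 the output contains (2, 4, 2) and also (2, 4, 5),
    # since 2 x 5 = 10 = 4 (mod 6).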

Definition 3.48 (Commutative NTG). An NTG $(N, \star)$ is commutative if $x \star y = y \star x$ for all $x, y \in N$.

Definition 3.49 (HyperNeutrosophic Triplet Group). A HyperNeutrosophic Triplet Group (HNTG) is a pair $(N, \star)$ such that:

1. Closure & Associativity: For all 𝑥, 𝑦 ∈ 𝑁, 𝑥 ★ 𝑦 ∈ 𝑁, and ★ is associative.


2. HyperNeutrosophic Triplets: For each 𝑎 ∈ 𝑁, there is a set of neutrals NeutSet(𝑎) ⊆ 𝑁 and a set of antis
AntiSet(𝑎) ⊆ 𝑁, such that for every 𝑏 ∈ NeutSet(𝑎) and 𝑐 ∈ AntiSet(𝑎),

𝑎 ★ 𝑏 = 𝑏 ★ 𝑎 = 𝑎, and 𝑎 ★ 𝑐 = 𝑐 ★ 𝑎 = 𝑏.

In other words, each element 𝑎 has multiple possible pairs (𝑏, 𝑐) forming hyperneutrosophic triplets
(𝑎, 𝑏, 𝑐).
Remark 3.50. In a HyperNeutrosophic Triplet Group, each 𝑎 can have infinitely many neutrals NeutSet(𝑎)
and infinitely many antis AntiSet(𝑎). This is an extension of the single-triple idea where we replace neut(𝑎)
by a set of possibilities and anti(𝑎) by another set.

Example 3.51. Let $(N, \star)$ be the same set as in a Neutrosophic Triplet Group example, but suppose for each
element 𝑎, we define a set of neutral candidates NeutSet(𝑎) ⊆ 𝑁 and a set of anti candidates AntiSet(𝑎) ⊆ 𝑁.
As long as for each 𝑏 ∈ NeutSet(𝑎) and 𝑐 ∈ AntiSet(𝑎) the required conditions hold, we get a valid HNTG.

Definition 3.52 (𝑛-SuperHyperNeutrosophic Triplet Group). Let 𝑛 be a nonnegative integer. An 𝑛-SuperHyperNeutrosophic Triplet Group (𝑛-SHNTG) is a pair $(N, \star)$ such that:

1. Closure & Associativity: ★ is associative and closed on 𝑁.

2. Nested Triplets Up to Level 𝑛: For each 𝑎 ∈ 𝑁, we have an 𝑛-fold nested family of possible neutrals and
antis. More explicitly, at level 𝑘 ≤ 𝑛, we define

NeutSet (𝑘 ) (𝑎) ⊆ 𝑁, AntiSet (𝑘 ) (𝑎) ⊆ 𝑁.

At each level 𝑘, for every pair (𝑏, 𝑐) with 𝑏 ∈ NeutSet (𝑘 ) (𝑎) and 𝑐 ∈ AntiSet (𝑘 ) (𝑎), the triple (𝑎, 𝑏, 𝑐)
satisfies
𝑎 ★ 𝑏 = 𝑏 ★ 𝑎 = 𝑎, 𝑎 ★ 𝑐 = 𝑐 ★ 𝑎 = 𝑏.
Additionally, the levels are nested in the sense that NeutSet (𝑘+1) (𝑎) refines or extends NeutSet (𝑘 ) (𝑎) and
similarly for AntiSet (𝑘 ) (𝑎).
Theorem 3.53 (Generalization Property). Every 𝑛-SHNTG with 𝑛 = 0 reduces to a Neutrosophic Triplet Group
(NTG), and every 𝑛-SHNTG with 𝑛 = 1 reduces to a HyperNeutrosophic Triplet Group (HNTG).

Proof. If 𝑛 = 0, we allow no hyper-sets of neutrals or antis—only single elements. Hence the definition
collapses exactly to a Neutrosophic Triplet Group: each 𝑎 has a single neut(𝑎) and single anti(𝑎). If 𝑛 = 1, for
each 𝑎 we define NeutSet (1) (𝑎), AntiSet (1) (𝑎) as sets of neutrals and antis, recovering the HNTG definition
from Definition 3.49. □

Theorem 3.54 (Reduction Homomorphism). Let 𝜌 𝑛→𝑚 be a surjective map from the 𝑛-SuperHyperNeutrosophic
structure to an 𝑚-SuperHyperNeutrosophic structure, with 𝑚 < 𝑛. Suppose

𝜌 𝑛→𝑚 (𝑎 ★ 𝑏) = 𝜌 𝑛→𝑚 (𝑎) ★ 𝜌 𝑛→𝑚 (𝑏),

and similarly for all nest levels of neutrals and antis. Then (𝑁, ★) at level 𝑛 reduces to (or homomorphically
maps onto) the (𝑚)-SHNTG structure.

Proof. We define 𝜌 𝑛→𝑚 to flatten or select subsets from the 𝑛-level hyperstructures. If 𝜌 𝑛→𝑚 respects the
binary operation ★ (i.e., is an algebraic homomorphism) and commutes with the nested neutrals and antis at
each level, the resulting image is a valid 𝑚-SHNTG. Details parallel standard homomorphism arguments in
universal algebra but adapted to nested triplets. □

Theorem 3.55 (Existence of Trivial Triplets at Each Level). In an 𝑛-SHNTG, for every idempotent element
𝑥 (i.e. 𝑥 ★ 𝑥 = 𝑥) at any level 𝑘 ≤ 𝑛, the triple (𝑥, 𝑥, 𝑥) forms a trivial (hyper)neutrosophic triplet (or nested
family thereof).

Proof. If 𝑥 ★𝑥 = 𝑥, then for each 𝑘 ≤ 𝑛, one can place 𝑥 in NeutSet (𝑘 ) (𝑥) and AntiSet (𝑘 ) (𝑥), satisfying 𝑥 ★𝑥 = 𝑥
and 𝑥 ★ 𝑥 = 𝑥. □
Theorem 3.56 (Associativity Preservation). In a HyperNeutrosophic or 𝑛-SHNTG, associativity is a global
property that must hold for all elements, not just for those forming a single triplet. Consequently, modifying
the set of possible neutrals/antis for certain elements must preserve associativity across the entire structure.

Proof. Follows from standard arguments in universal algebra: The binary operation ★ must be globally
associative, i.e. (𝑥 ★ 𝑦) ★ 𝑧 = 𝑥 ★ (𝑦 ★ 𝑧) for every 𝑥, 𝑦, 𝑧 ∈ 𝑁. Defining or modifying hyperneutrosophic sets
NeutSet (𝑘 ) (𝑥) or AntiSet (𝑘 ) (𝑥) does not remove the requirement that ★ remains associative on all of 𝑁. □

4 Additional Result: Hyperfuzzy Extension

In this section, we explore extensions based on Hyperfuzzy and Superhyperfuzzy concepts rather than Hyper-
Neutrosophic sets.

4.1 Neuro-Hyperfuzzy System

A Neuro-Fuzzy System integrates the learning capabilities of neural networks with the reasoning mechanisms
of fuzzy logic, enabling effective decision-making in uncertain environments [41, 42, 78, 125, 136, 156, 158].
This hybrid approach leverages the strengths of both neural and fuzzy systems, making it suitable for a wide
range of applications. Additionally, a related concept, the Neuro-Neutrosophic System, has been discussed in
the literature [89, 270].

In this subsection, we examine the Neuro-Hyperfuzzy System and its extension, the Neuro-Superhyperfuzzy
System. Definitions and relevant details are provided below.
Definition 4.1 (Neuro-Fuzzy System). (cf. [78, 125, 156]) A Neuro-Fuzzy System (NFS) is a hybrid intelligent
system that integrates fuzzy logic-based reasoning with the learning capabilities of neural networks. Formally,
it is represented as a tuple:
N = (X, Y, 𝐹, 𝑅, L),
where:

• X ⊆ R𝑛 : The input space, where 𝑛 is the number of input variables.


• Y ⊆ R𝑚 : The output space, where 𝑚 is the number of output variables.
• 𝐹 = { 𝑓1 , 𝑓2 , . . . , 𝑓 𝑝 }: A set of fuzzy membership functions defined on X.
• $R = \{r_1, r_2, \dots, r_q\}$: A set of fuzzy rules, each of the form:
\[
r_k : \text{IF } \bigwedge_{i=1}^{n} \bigl(x_i \text{ is } \mu_{ki}\bigr) \text{ THEN } \bigwedge_{j=1}^{m} \bigl(y_j \text{ is } \nu_{kj}\bigr),
\]
where $\mu_{ki}$ and $\nu_{kj}$ are fuzzy sets for the inputs and outputs, respectively.

• $\mathcal{L}$: A learning algorithm that adjusts $F$ and $R$ based on training data $T = \{(x_t, y_t)\}_{t=1}^{N}$, where $x_t \in \mathcal{X}$ and $y_t \in \mathcal{Y}$.
Definition 4.2 (System Structure of Neuro-Fuzzy System). The architecture of an NFS can be represented as
a multi-layer feedforward network consisting of the following layers:

1. Input Layer: Directly represents the input vector 𝑥 = (𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 ).


2. Fuzzification Layer: Transforms crisp inputs into fuzzy values using membership functions:

𝜇 𝑘𝑖 (𝑥𝑖 ) = 𝑓 𝑘𝑖 (𝑥𝑖 ), ∀𝑘, 𝑖.

3. Rule Layer: Computes the firing strength of each fuzzy rule $r_k$ as:
\[
A_k = \bigwedge_{i=1}^{n} \mu_{ki}(x_i),
\]
where $\bigwedge$ is a t-norm (e.g., the minimum operator).

4. Aggregation Layer: Aggregates the outputs of all rules using:
\[
\nu(y) = \bigvee_{k=1}^{q} \bigl( A_k \cdot \nu_k(y) \bigr),
\]
where $\bigvee$ is a t-conorm (e.g., the maximum operator), and $\nu_k(y)$ is the output fuzzy set for rule $r_k$.

5. Defuzzification Layer: Converts the aggregated fuzzy output into a crisp value:
\[
\bar{y} = \frac{\int_{y \in \mathcal{Y}} y \cdot \nu(y)\, dy}{\int_{y \in \mathcal{Y}} \nu(y)\, dy}.
\]
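A compact numerical realization of layers 2–5 is sketched below; the triangular membership shapes, the product/max operators, and the discretized centroid are assumed (but standard) choices.

    # Sketch of an NFS forward pass: fuzzify -> fire rules -> aggregate ->
    # defuzzify. Membership shapes and the discretized centroid are assumed.

    def tri(x, a, b, c):
        """Triangular membership peaking at b with feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Two rules on a single input x in [0, 1]:
    #   r1: IF x is low  THEN y is near 0.2
    #   r2: IF x is high THEN y is near 0.8
    rules = [
        (lambda x: tri(x, -0.2, 0.0, 0.6), lambda y: tri(y, 0.0, 0.2, 0.4)),
        (lambda x: tri(x, 0.4, 1.0, 1.2), lambda y: tri(y, 0.6, 0.8, 1.0)),
    ]

    def infer(x, steps=101):
        ys = [i / (steps - 1) for i in range(steps)]
        # max over rules of (firing strength * output membership):
        nu = [max(fire(x) * out(y) for fire, out in rules) for y in ys]
        den = sum(nu)
        return sum(y * m for y, m in zip(ys, nu)) / den if den else 0.0

    print(infer(0.2), infer(0.9))   # roughly 0.2 and 0.8, respectively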

Definition 4.3 (Learning Algorithm). The learning process optimizes the parameters of $F$ and $R$ to minimize a loss function:
\[
\mathcal{L}(\Theta) = \frac{1}{N} \sum_{t=1}^{N} \bigl\| y_t - \hat{y}(x_t; \Theta) \bigr\|^2,
\]
where:

• $\Theta$: The set of all parameters, including fuzzy membership function parameters and rule weights.
• $\hat{y}(x_t; \Theta)$: The output of the NFS for input $x_t$ under the current parameter set $\Theta$.

Gradient-based methods or heuristic approaches are employed to adjust $\Theta$ iteratively.
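As a minimal illustration of this optimization loop, the sketch below evaluates the MSE loss and performs finite-difference gradient steps on a single toy parameter $\theta$ (a stand-in for the full parameter set $\Theta$):

    # MSE loss and a finite-difference gradient step on one parameter.
    # The one-parameter "model" is a toy assumption; a real NFS would
    # update all membership-function parameters and rule weights in Theta.

    def mse(model, data, theta):
        return sum((y - model(x, theta)) ** 2 for x, y in data) / len(data)

    def grad_step(model, data, theta, lr=0.1, h=1e-5):
        g = (mse(model, data, theta + h) - mse(model, data, theta - h)) / (2 * h)
        return theta - lr * g

    model = lambda x, theta: theta * x            # toy parametric output
    data = [(0.0, 0.0), (0.5, 0.25), (1.0, 0.5)]  # target slope 0.5
    theta = 0.0
    for _ in range(200):
        theta = grad_step(model, data, theta)
    print(round(theta, 3))                        # converges near 0.5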


Definition 4.4 (Constraints and Interpretability). To ensure interpretability and consistency:

• Fuzzy membership functions 𝜇 𝑘𝑖 must satisfy overlap constraints (e.g., intersections at membership
degree 0.5).
• Rules 𝑅 should be mutually non-contradictory and logically consistent.

Definition 4.5 (Neuro-Hyperfuzzy System (NHFS)). A Neuro-Hyperfuzzy System (NHFS) is a hybrid framework integrating a neural network architecture with hyperfuzzy-based membership functions and rules. Formally, we define it as a 5-tuple
\[
\mathcal{N}_H = \bigl( \mathcal{X}, \mathcal{Y}, \tilde{F}, \tilde{R}, \mathcal{L}_H \bigr),
\]
where:

• $\mathcal{X} \subseteq \mathbb{R}^n$: the input space ($n$ real-valued features).

• $\mathcal{Y} \subseteq \mathbb{R}^m$: the output space ($m$ real-valued target variables).

• $\tilde{F} = \{\tilde{f}_1, \dots, \tilde{f}_p\}$: a set of hyperfuzzy membership functions, each
\[
\tilde{f}_k : \mathcal{X} \to \tilde{P}([0,1]),
\]
meaning for every input $x \in \mathcal{X}$, $\tilde{f}_k(x) \subseteq [0,1]$ is a (non-empty) set of membership values.

• $\tilde{R} = \{\tilde{r}_1, \dots, \tilde{r}_q\}$: a set of hyperfuzzy rules. Each rule $\tilde{r}_k$ has the form
\[
\tilde{r}_k : \text{IF } \bigwedge_{i=1}^{n} \bigl(x_i \text{ is } \tilde{\mu}_{ki}\bigr) \text{ THEN } \bigwedge_{j=1}^{m} \bigl(y_j \text{ is } \tilde{\nu}_{kj}\bigr),
\]
where $\tilde{\mu}_{ki}, \tilde{\nu}_{kj}$ are hyperfuzzy membership functions applied to inputs $x_i$ and outputs $y_j$, respectively.

• $\mathcal{L}_H$: a learning algorithm that updates $\tilde{F}$ and $\tilde{R}$ based on training data $\{(x_t, y_t)\}_{t=1}^{N}$, aiming to minimize a specified loss function (e.g., MSE) while accommodating the hyperfuzzy membership framework.
Definition 4.6 (Neuro-𝑛-SuperHyperfuzzy System (N𝑆𝐻𝐹S)). Let $\widetilde{\mathcal{P}}_n([0,1])$ be the $n$-SuperHyperfuzzy extension of $[0,1]$. A Neuro-$n$-SuperHyperfuzzy System extends the Neuro-Hyperfuzzy System to $n$ nested levels of hyperfuzzy membership:
\[
\mathcal{N}_{SH,n} = \bigl( \mathcal{X}, \mathcal{Y}, \tilde{F}_n, \tilde{R}_n, \mathcal{L}_{H,n} \bigr),
\]
where:

• $\tilde{F}_n = \{\tilde{f}_{n,1}, \dots, \tilde{f}_{n,p}\}$, each
\[
\tilde{f}_{n,k} : \mathcal{X} \to \widetilde{\mathcal{P}}_n([0,1]),
\]
describes membership via $n$-level nested sets in $[0,1]$.

• $\tilde{R}_n = \{\tilde{r}_{n,1}, \dots, \tilde{r}_{n,q}\}$, each rule $\tilde{r}_{n,k}$ is:
\[
\tilde{r}_{n,k} : \text{IF } \bigwedge_{i=1}^{n} \bigl(x_i \text{ is } \tilde{\mu}_{n,ki}\bigr) \text{ THEN } \bigwedge_{j=1}^{m} \bigl(y_j \text{ is } \tilde{\nu}_{n,kj}\bigr).
\]
Here, $\tilde{\mu}_{n,ki}, \tilde{\nu}_{n,kj}$ are $n$-SuperHyperfuzzy membership functions.

• $\mathcal{L}_{H,n}$ is a learning algorithm adapted to handle the $n$-SuperHyperfuzzy membership structure. It optimizes both $\tilde{F}_n$ and $\tilde{R}_n$ using training data under higher-order uncertainties.
Theorem 4.7 (Universal Approximation for Neuro-Hyperfuzzy Systems). Let $\mathcal{X} \subseteq \mathbb{R}^n$ be a compact set, and let $f : \mathcal{X} \to \mathbb{R}^m$ be a continuous function. Then, for every $\varepsilon > 0$, there exists a Neuro-Hyperfuzzy System
\[
\mathcal{N}_H = (\mathcal{X}, \mathcal{Y}, \tilde{F}, \tilde{R}, \mathcal{L}_H)
\]
such that for all $x \in \mathcal{X}$,
\[
\| \hat{f}(x) - f(x) \| < \varepsilon,
\]
where $\hat{f}(x)$ is the NHFS output.

Proof. Step 1: Discretization of the Input Space. Since $\mathcal{X}$ is compact in $\mathbb{R}^n$, we can construct a finite covering by hyper-rectangles or grid points $\{x_\ell\}_{\ell=1}^{L}$ such that each point of $\mathcal{X}$ lies within $\delta$-distance of at least one $x_\ell$. By the uniform continuity of $f$ on a compact set, there exists $\delta > 0$ ensuring $\| f(x) - f(x_\ell) \| < \varepsilon/2$ whenever $\|x - x_\ell\| < \delta$.

Step 2: Construction of Hyperfuzzy Membership Functions. For each $x_\ell$, define a hyperfuzzy membership function
\[
\tilde{f}_\ell : \mathcal{X} \to \tilde{P}([0,1])
\]
such that: (1) if $x$ is within $\delta$-distance of $x_\ell$, then $\tilde{f}_\ell(x)$ includes a subset of membership degrees near 1 (e.g., $[0.8, 1] \subseteq \tilde{f}_\ell(x)$); (2) otherwise, $\tilde{f}_\ell(x)$ is concentrated near lower membership degrees (e.g., $[0, 0.2]$).

Here, each $\tilde{f}_\ell(x)$ is a set of membership degrees rather than a single number, accommodating local variations. Conceptually, this divides $\mathcal{X}$ into overlapping "regions," each associated with a hyperfuzzy membership structure around $x_\ell$.

Step 3: Defining Hyperfuzzy Rules. Let $\tilde{R} = \{\tilde{r}_1, \dots, \tilde{r}_L\}$, where each rule $\tilde{r}_\ell$ is triggered mostly around $x_\ell$. We denote an output value $y_\ell \approx f(x_\ell)$ for each $\ell$. Then:
\[
\tilde{r}_\ell : \text{IF } \bigwedge_{i=1}^{n} \bigl(x_i \text{ is } \tilde{\mu}_{\ell i}\bigr) \text{ THEN } y \text{ is } \tilde{\nu}_\ell,
\]
where $\tilde{\mu}_{\ell i}$ is essentially $\tilde{f}_\ell$ restricted to the $i$-th dimension, and $\tilde{\nu}_\ell$ is a hyperfuzzy set capturing $y_\ell$ in $[0,1]$ for some normalized representation or membership-coded output. The combination of these rules covers the entire input domain.

Step 4: Rule Aggregation and Defuzzification. The system aggregates contributions from each $\tilde{r}_\ell$. When $x$ is near $x_\ell$, $\tilde{f}_\ell(x)$ will be high (a subset near 1). Thus, rule $\tilde{r}_\ell$ dominates the output, yielding an approximation close to $y_\ell \approx f(x_\ell)$. By uniform continuity, if $x$ is near $x_\ell$, then $f(x)$ is near $f(x_\ell)$, ensuring
\[
\| \hat{f}(x) - f(x) \| < \varepsilon
\]
for an appropriate choice of membership boundaries and defuzzification method.

Step 5: Learning Algorithm. The learning algorithm $\mathcal{L}_H$ refines these memberships and rules to further reduce the approximation error. With gradient-based or other optimization methods, the membership sets $\tilde{f}_\ell$ and output sets $\tilde{\nu}_\ell$ converge to a configuration that keeps $\hat{f}(x)$ within $\varepsilon$ of $f(x)$.

Since each step can be made arbitrarily precise by refining the covering and adjusting membership sets, the
Neuro-Hyperfuzzy System attains universal approximation. □

Theorem 4.8 (Universal Approximation for Neuro-𝑛-SuperHyperfuzzy Systems). Let $\mathcal{X} \subseteq \mathbb{R}^n$ be compact, and let $f : \mathcal{X} \to \mathbb{R}^m$ be continuous. Then, for every $\varepsilon > 0$, there exists a Neuro-$n$-SuperHyperfuzzy System
\[
\mathcal{N}_{SH,n} = \bigl( \mathcal{X}, \mathcal{Y}, \tilde{F}_n, \tilde{R}_n, \mathcal{L}_{H,n} \bigr)
\]
such that
\[
\| \hat{f}_n(x) - f(x) \| < \varepsilon \quad \text{for all } x \in \mathcal{X},
\]
where $\hat{f}_n(x)$ is the system output under $n$-SuperHyperfuzzy membership.

Proof. The construction is similar to that in Theorem 4.7, but each membership function 𝑓˜𝑛,𝑘 and rule 𝑟˜𝑛,𝑘
uses an 𝑛-SuperHyperfuzzy Set, providing nested or higher-level uncertainties.

Step 1: Nested Membership Definition. For each grid point 𝑥 ℓ , define

𝑓˜𝑛,ℓ : X → P̃𝑛 ( [0, 1]),

so that 𝑓˜𝑛,ℓ (𝑥) includes multi-level membership subsets. Each level (1 through 𝑛) refines the degree of
confidence or uncertainty, allowing flexible overlaps among different “regions” of 𝑥-space.

Step 2: 𝑛-SuperHyperfuzzy Rules. Construct $q$ rules $\tilde{r}_{n,1}, \dots, \tilde{r}_{n,q}$, each referencing the $\tilde{f}_{n,\ell}$ membership sets. A typical rule might be:
\[
\tilde{r}_{n,\ell} : \text{IF } \bigwedge_{i=1}^{n} \bigl(x_i \text{ is } \tilde{\mu}_{n,\ell i}\bigr) \text{ THEN } y \text{ is } \tilde{\nu}_{n,\ell},
\]
where $\tilde{\mu}_{n,\ell i}$ is a membership set in $\widetilde{\mathcal{P}}_n([0,1])$ capturing the $n$-level membership for $x_i$, and $\tilde{\nu}_{n,\ell}$ similarly captures the output membership.

Step 3: Hierarchical Aggregation. When $x$ is close to $x_\ell$, the nested membership $\tilde{f}_{n,\ell}(x)$ becomes strongly activated at various levels, so $\tilde{r}_{n,\ell}$ dominates the final output. Because we still rely on the continuity of $f$ and the refinement principle, we can ensure $\| \hat{f}_n(x) - f(x) \| < \varepsilon$ by a sufficiently dense covering of $\mathcal{X}$ and careful arrangement of the nested membership sets.

Step 4: Learning Algorithm for 𝑛 Levels. The learning process L 𝐻,𝑛 can simultaneously tune membership
sets at each hierarchy (from level 1 to level 𝑛). This does not limit approximation capacity; it merely provides
additional structure for capturing uncertain or contradictory sources of information. By adjusting these levels,
the system can approximate 𝑓 within 𝜀.

Hence, the Neuro-𝑛-SuperHyperfuzzy System achieves universal approximation under the same topological
and continuity assumptions as in the hyperfuzzy case. □
Remark 4.9. The above theorems emphasize that extending fuzzy membership to hyperfuzzy or 𝑛-SuperHyperfuzzy
structures does not reduce the ability to approximate continuous functions on compact domains; rather, it en-
larges the representational space of uncertainty.
Corollary 4.10. In both Theorem 4.7 and Theorem 4.8, if the system is allowed to increase the number of rules
𝑞 and refine membership sets arbitrarily, the approximation error can be made arbitrarily small.

Proof. This corollary follows directly from the proofs of Theorems 4.7 and 4.8, where the coverings can be
made finer and membership sets can be tuned more precisely as 𝑞 → ∞ or as membership boundaries become
sharper. □

4.2 Hyperfuzzy Control

A control system is a framework for managing and regulating processes or devices to achieve desired outputs by adjusting inputs (cf. [3, 24, 73, 104, 115, 228]). Control systems have been extended using Fuzzy Sets; the main features of such extensions are operations such as fuzzification and defuzzification. Fuzzy Control Systems have been extensively studied across various research fields [53, 124, 130, 161, 173, 178, 182, 184, 205, 274, 276, 297]. The definition is provided below.
Definition 4.11. (cf. [67, 140, 195]) Let $X = \{x_1, x_2, \dots, x_n\}$ be the universe of discourse for input variables
and 𝑌 = {𝑦 1 , 𝑦 2 , . . . , 𝑦 𝑚 } for output variables. A fuzzy control system consists of the following components:

1. Fuzzification: A mapping 𝐹𝑥 : 𝑋 → [0, 1] that transforms crisp input 𝑥𝑖 into a fuzzy set 𝜇(𝑥𝑖 ), where
𝜇(𝑥𝑖 ) ∈ [0, 1] is the membership degree of 𝑥𝑖 .
2. Fuzzy Rule Base: A set of linguistic rules 𝑅 𝑘 of the form:

𝑅 𝑘 : If 𝑥1 is 𝐴1𝑘 and 𝑥2 is 𝐴2𝑘 and . . . then 𝑦 is 𝐵 𝑘 ,

where 𝐴𝑖𝑘 and 𝐵 𝑘 are fuzzy sets defined on 𝑋 and 𝑌 , respectively.


3. Inference Mechanism: A function Φ : F (𝑋) → F (𝑌 ) that applies fuzzy logical operators (e.g., Min-Max
or Max-Product) to infer the fuzzy output based on the rule base.
4. Defuzzification: A process that converts the fuzzy output $\Phi(\mu(X))$ into a crisp value $y^*$ using a defuzzification method such as:
\[
y^* = \frac{\int_{y \in Y} y \cdot \mu(y)\, dy}{\int_{y \in Y} \mu(y)\, dy},
\]
where $\mu(y)$ is the membership degree of $y$ in the output fuzzy set.

Definition 4.12 (Hyperfuzzy Control System). Let $X = \{x_1, \dots, x_n\}$ and $Y = \{y_1, \dots, y_m\}$ be universes of discourse for inputs and outputs, respectively. A hyperfuzzy control system extends the classical fuzzy control system by replacing each fuzzy set with a hyperfuzzy set:

1. Hyperfuzzification: A mapping $\tilde{F}_x : X \to \tilde{P}([0,1])$, so each crisp input $x_i$ is mapped to a non-empty subset of $[0,1]$ instead of a single membership degree.

2. Hyperfuzzy Rule Base: A set of hyperfuzzy rules $\tilde{R}_k$ of the form:
\[
\tilde{R}_k : \text{If } x_1 \text{ is } \tilde{A}^k_1 \text{ and } \dots \text{ then } y \text{ is } \tilde{B}^k,
\]
where each $\tilde{A}^k_i$, $\tilde{B}^k$ is a hyperfuzzy set on $X$ or $Y$.

3. Hyperfuzzy Inference: An operator $\tilde{\Phi} : \tilde{\mathcal{F}}(X) \to \tilde{\mathcal{F}}(Y)$ that aggregates hyperfuzzy sets. Various extended t-norms/t-conorms or set-based operations can be used to combine membership sets instead of single values.

4. Hyperfuzzy Defuzzification: A method mapping the final hyperfuzzy output $\tilde{\Phi}(\tilde{\mu}(X))$ into a crisp value $y^*$, potentially by selecting representative degrees from each subset or using bounding strategies:
\[
y^* = \mathrm{Defz}\bigl( \tilde{\Phi}(\tilde{\mu}(X)) \bigr),
\]
where $\mathrm{Defz}(\cdot)$ is an extended defuzzification operator for hyperfuzzy sets.
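For intuition, a hyperfuzzy controller can propagate interval-valued membership degrees through the pipeline and commit to a crisp action only at the Defz step; the sketch below uses assumed interval memberships and a midpoint-based Defz.

    # Hyperfuzzy control sketch: inputs map to *intervals* of membership
    # degrees; inference keeps the bounds; Defz picks the midpoint. The
    # interval widths, the single rule, and the midpoint Defz are assumptions.

    def hyperfuzzify(x):
        # Widen the crisp degree "error is high" into an interval, e.g.
        # to reflect sensor tolerance.
        mu = min(1.0, max(0.0, x))
        return (max(0.0, mu - 0.1), min(1.0, mu + 0.1))

    def infer(mu_lo, mu_hi, gain=0.8):
        # One rule: IF error is high THEN control is strong (scaled by gain).
        return (gain * mu_lo, gain * mu_hi)

    def defz(u_lo, u_hi):
        return 0.5 * (u_lo + u_hi)   # representative degree: the midpoint

    u_star = defz(*infer(*hyperfuzzify(0.7)))
    print(u_star)   # crisp control action 0.56 (= 0.8 * 0.7)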


Definition 4.13 (𝑛-SuperHyperfuzzy Control System). Let 𝑋 and 𝑌 be universes for inputs and outputs,
respectively. An 𝑛-SuperHyperfuzzy Control System is defined by extending the hyperfuzzy components
(Definition 4.12) to 𝑛 nested levels:

1. 𝑛-SuperHyperfuzzification: A mapping

𝐹˜𝑥,𝑛 : 𝑋 → P̃𝑛 ( [0, 1]),

associating each input 𝑥𝑖 ∈ 𝑋 with an 𝑛-superhyperfuzzy set of membership degrees in [0, 1].

2. 𝑛-SuperHyperfuzzy Rule Base: A set of rules $\tilde{R}_{n,k}$ of the form:
\[
\tilde{R}_{n,k} : \text{If } x_1 \text{ is } \tilde{A}^k_{1,n} \text{ and } \dots \text{ then } y \text{ is } \tilde{B}^k_n,
\]
where each $\tilde{A}^k_{i,n}$, $\tilde{B}^k_n$ is an $n$-superhyperfuzzy set on $X$ or $Y$.

3. 𝑛-SuperHyperfuzzy Inference: An operator Φ̃𝑛 that merges the multi-level membership subsets according
to extended logic operations.
4. 𝑛-SuperHyperfuzzy Defuzzification: A process converting the final multi-level hyperfuzzy output into a
crisp control output 𝑦 ∗ .

Here we present two illustrative theorems on stability and robustness. They assume a dynamical system model
\[
\dot{x}(t) = F\bigl(x(t), u(t)\bigr),
\]
where $x(t)$ is the system state and $u(t)$ is the control input produced by a hyperfuzzy or $n$-superhyperfuzzy controller.
Definition 4.14 (Robustness). (cf. [163, 286]) Robustness refers to the ability of a system, model, or algorithm
to maintain its performance or functionality under perturbations, uncertainties, or adverse conditions in its
inputs or environment.
Theorem 4.15 (Stability under Hyperfuzzy Control). Consider a control system with state-space model $\dot{x}(t) = F\bigl(x(t), u_{\mathrm{HF}}(t)\bigr)$, where $u_{\mathrm{HF}}(t)$ is generated by a hyperfuzzy control system. Suppose:

1. $F(\cdot, \cdot)$ is continuous and locally Lipschitz in $x$ for each fixed $u_{\mathrm{HF}}$.

2. There exists a hyperfuzzy rule base ensuring that for each neighborhood of the equilibrium $x^* = 0$, the hyperfuzzy inference produces a control input $u_{\mathrm{HF}}$ that reduces a Lyapunov function $V(x)$ (cf. [27, 62, 265, 280]).

Then $x = 0$ is Lyapunov-stable; i.e., for any $\varepsilon > 0$, there exists $\delta > 0$ such that $\|x(0)\| < \delta$ implies $\|x(t)\| < \varepsilon$ for all $t > 0$.

Proof. Lyapunov Function Construction. By hypothesis, we have a continuous scalar function $V : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ with $V(0) = 0$ and $V(x) > 0$ for $x \ne 0$. Further, assume $V$ is radially unbounded in a local neighborhood of interest.

Hyperfuzzy Control Influence. The hyperfuzzy rule base ensures that when $x$ is near 0, the membership sets produce a control $u_{\mathrm{HF}}$ such that $\dot{V}(x) = \nabla V(x) \cdot F(x, u_{\mathrm{HF}})$ remains non-positive. Since single membership degrees in $[0,1]$ are now replaced by subsets of $[0,1]$, the control $u_{\mathrm{HF}}$ can be chosen consistently from these subsets to maintain $\dot{V}(x) \le 0$.

Local Stability Conclusion. By standard Lyapunov arguments, if $\dot{V}(x) \le 0$ in a neighborhood around $x = 0$, then $x = 0$ is stable. More precisely, for any $\varepsilon > 0$, choose a region $\Omega_\varepsilon = \{x : V(x) < \gamma(\varepsilon)\}$ that implies $\|x\| < \varepsilon$. The local Lipschitz continuity of $F$ and the continuity of $V$ confirm that once $x(0)$ is in $\Omega_\varepsilon$, $V(x(t))$ cannot increase, hence $x(t)$ remains in $\Omega_\varepsilon$. Thus $x = 0$ is Lyapunov-stable. □

Theorem 4.16 (Robustness under 𝑛-SuperHyperfuzzy Control). Let $\dot{x}(t) = F\bigl(x(t), u_{n\text{-SHF}}(t)\bigr)$ be controlled by an $n$-superhyperfuzzy system (Definition 4.13) with multi-level membership sets. Suppose:

1. $F(x, u)$ is continuous in $(x, u)$ and locally Lipschitz in $x$ for each feasible $u$.

2. The $n$-superhyperfuzzy rule base can generate control inputs that counteract bounded external disturbances $d(t)$ up to a known magnitude $D_{\max}$.

Then the closed-loop system is robustly stable against disturbances of magnitude at most $D_{\max}$, provided the $n$-superhyperfuzzy sets at each level are appropriately tuned to reduce a Lyapunov function or maintain an invariant set.

Proof. We introduce an augmented system model:
\[
\dot{x}(t) = F\bigl(x(t), u_{n\text{-SHF}}(t)\bigr) + d(t),
\]
with $\|d(t)\| \le D_{\max}$. By the assumption on the $n$-superhyperfuzzy rule base, each membership set at level $k = 1, \dots, n$ provides a "range" of possible control actions. For any $x$ in a certain neighborhood, one can choose an action $u_{n\text{-SHF}}$ from these superhyperfuzzy sets to balance or compensate for $d(t)$ within $\|d(t)\| \le D_{\max}$.

Let $V(x)$ be a Lyapunov function that decreases under $u_{n\text{-SHF}}$ in the absence of disturbance. Because the multi-level membership offers flexible or nested sets of control values, for every $x$ with $\|x\| \le R$, there exists $u_{n\text{-SHF}}(x)$ such that $\dot{V}(x) \le -\alpha(\|x\|)$ if $\|d(t)\| \le D_{\max}$, for some positive function $\alpha$. Thus $V(x(t))$ cannot grow unbounded. Consequently, $x(t)$ remains in a bounded region (or converges near 0) even under external disturbance up to $D_{\max}$. This shows robust stability in the sense that the system can handle disturbances within the specified bound. □
Remark 4.17. In Theorem 4.16, the nested membership structure of 𝑛-superhyperfuzzy sets allows the control
logic to switch or adapt among multiple levels of uncertainty, improving robustness compared to single-level
hyperfuzzy or classical fuzzy approaches.

5 Future Work: Further Exploration of HyperUncertain Extensions

This section briefly outlines future directions for this research. We anticipate advancements in extending various
concepts using Hyperfuzzy Sets, SuperHyperfuzzy Sets, HyperNeutrosophic Sets, SuperHyperNeutrosophic
Sets, Hyperplithogenic Sets, and SuperHyperplithogenic Sets.

Potential areas of extension include:

• Neutrosophic Queueing Systems [57, 214, 293].


• Neutrosophic Geometry [5, 153, 157].

• Fuzzy Queueing Systems [152, 204].


• Fuzzy Matroids [108, 109, 112, 169, 191, 224].
• Fuzzy Topology [4, 171, 209, 262, 268, 292].
• Fuzzy Geometry [48, 197, 218, 219].

We hope this research inspires further exploration and expansion in these domains.

Funding

This research did not receive any financial support from external sources.

Acknowledgments

We sincerely thank all those who offered guidance, support, and inspiration throughout the course of this
research. Additionally, we express our appreciation to the readers for their interest in our work and to the
authors of the referenced literature, whose significant contributions have informed and enriched this study.

Data Availability

As this research is entirely theoretical and mathematical, no data or statistical analysis was performed. Future
researchers are encouraged to pursue empirical or data-driven studies to build upon these findings.

Ethical Approval

This study is purely theoretical, involving no experimental procedures with humans or animals, and thus
requires no ethical approval.

Conflicts of Interest

The authors declare no conflicts of interest related to the publication of this research.

Disclaimer

This paper discusses theoretical concepts that have not yet been practically implemented or tested. Future
empirical validation and refinement of these ideas are encouraged. While we have taken care to ensure accuracy
and proper attribution, unintended errors or omissions may exist. Readers are advised to independently verify
referenced sources. The views and interpretations presented here are solely those of the authors and do not
reflect the opinions of their institutions.

References
[1] Mujahid Abbas, Ghulam Murtaza, and Florentin Smarandache. Basic operations on hypersoft sets and hypersoft point. Infinite
Study, 2020.
[2] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with
random node initialization. arXiv preprint arXiv:2010.01179, 2020.
[3] Tarek F. Abdelzaher, Yixin Diao, Joseph L. Hellerstein, Chenyang Lu, and Xiaoyun Zhu. Introduction to control theory and its
application to computing systems. 2008.
[4] Nasim Abdolmaleki, M. Ahmadi, Hadi Tabatabaee Malazi, and Sebastiano Milardo. Fuzzy topology discovery protocol for
sdn-based wireless sensor networks. Simul. Model. Pract. Theory, 79:54–68, 2017.
[5] Mohammad Abobala and Ahmad Hatip. An algebraic approach to neutrosophic euclidean geometry. Neutrosophic Sets and Systems,
43(1):10, 2021.
[6] Ather Abdulrahman Ageeli. A neutrosophic decision-making methods of the key aspects for supply chain management in interna-
tional business administrations. International Journal of Neutrosophic Science, 23(1):155–167, 2023.
[7] Muhammad Akram and Noura Omair Alshehri. Intuitionistic fuzzy cycles and intuitionistic fuzzy trees. The Scientific World
Journal, 2014(1):305836, 2014.
[8] Qeethara Al-Shayea. Artificial neural networks in medical diagnosis. International Journal of Research Publication and Reviews,
2024.
[9] Salah Hasan Saleh Al-subhi, Iliana Pérez Pupo, Roberto Garcı́a Vacacela, Pedro Piñero, Pérez, and Maikel Yelandi Leyva Vázquez.
A new neutrosophic cognitive map with neutrosophic sets on connections, application in project management. Neutrosophic Sets
and Systems, 22:63–75, 2018.
[10] Mumtaz Ali, Florentin Smarandache, and Mohsin Khan. Study on the development of neutrosophic triplet ring and neutrosophic
triplet field. Mathematics, 6(4):46, 2018.
[11] Alireza Aliahmadi and Hamed Nozari. Evaluation of security metrics in aiot and blockchain-based supply chain by neutrosophic
decision-making method. Supply Chain Forum: An International Journal, 24:31 – 42, 2022.
[12] Keith Allan. Classifiers. Language, 53(2):285–311, 1977.
[13] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205,
2020.
[14] Andreas S Andreou, NH Mateou, and George A Zombanakis. Optimization in genetically evolved fuzzy cognitive maps supporting
decision-making: the limit cycle case. In Proceedings. 2004 International Conference on Information and Communication
Technologies: From Theory to Applications, 2004., pages 377–378. IEEE, 2004.
[15] Abdul Quaiyum Ansari, Ranjit Biswas, and Swati Aggarwal. Neutrosophic classifier: An extension of fuzzy classifer. Applied soft
computing, 13(1):563–573, 2013.
[16] Abdul Quaiyum Ansari, Priyanka Sharma, and Manjari Tripathi. Automatic seizure detection using neutrosophic classifier. Physical
and Engineering Sciences in Medicine, 43:1019–1028, 2020.

[17] Ioannis D. Apostolopoulos and Peter P. Groumpos. Fuzzy cognitive maps: Their role in explainable artificial intelligence. Applied
Sciences, 2023.
[18] Charles Ashbacher. Introduction to Neutrosophic logic. Infinite Study, 2014.
[19] Robert Axelrod. Structure of decision : the cognitive maps of political elites. 2015.
[20] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[21] Francis Bach. Exploring large feature spaces with hierarchical multiple kernel learning. Advances in neural information processing
systems, 21, 2008.
[22] Jørgen Bang-Jensen and Gregory Gutin. Classes of directed graphs, volume 11. Springer, 2018.
[23] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in
neural information processing systems, 30, 2017.
[24] Robert N. Bateson. Introduction to control system technology. 1973.
[25] Richard Bellman. Introduction to matrix analysis. SIAM, 1997.
[26] Anushree Bhattacharya and Madhumangal Pal. A fuzzy graph theory approach to the facility location problem: A case study in the
indian banking system. Mathematics, 11(13):2992, 2023.
[27] Franco Blanchini. Nonquadratic lyapunov functions for robust control. Autom., 31:451–461, 1995.
[28] Lashon B. Booker, David E. Goldberg, and John H. Holland. Classifier systems and genetic algorithms. Artif. Intell., 40:235–282,
1989.
[29] Hashem Bordbar, Mohammad Rahim Bordbar, Rajab Ali Borzooei, and Young Bae Jun. N-subalgebras of bck= bci-algebras which
are induced from hyperfuzzy structures. Iranian Journal of Mathematical Sciences and Informatics, 16(2):179–195, 2021.
[30] Salah Bouzina. Fuzzy logic vs. neutrosophic logic: operations logic. Neutrosophic Sets and Systems, 14:29–34, 2016.
[31] Robert M Brooks and Klaus Schmitt. The contraction mapping principle and some applications. Electronic Journal of Differential
Equations, pages 09–90, 2009.
[32] Said Broumi, Mohamed Talea, Assia Bakali, and Florentin Smarandache. Interval valued neutrosophic graphs. Critical Review,
XII, 2016:5–33, 2016.
[33] Said Broumi, Mohamed Talea, Assia Bakali, and Florentin Smarandache. Single valued neutrosophic graphs. Journal of New
theory, (10):86–101, 2016.
[34] Richard Hubert Bruck. A survey of binary systems, volume 20. Springer, 1971.
[35] Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, linda Qiao, and Hongyan Li. Hypergraph structure learning for hypergraph
neural networks. In International Joint Conference on Artificial Intelligence, 2022.
[36] Matteo Carandini and David J Heeger. Normalization as a canonical neural computation. Nature reviews neuroscience, 13(1):51–62,
2012.
[37] Joao Paulo Carvalho. On the semantics and the use of fuzzy cognitive maps and dynamic cognitive maps in social sciences. Fuzzy
Sets Syst., 214:6–19, 2013.
[38] Y. V. M. Cepeda, M. A. R. Guevara, E. J. J. Mogro, and R. P. Tizano. Impact of irrigation water technification on seven directories
of the san juan-patoa river using plithogenic 𝑛-superhypergraphs based on environmental indicators in the canton of pujili, 2021.
Neutrosophic Sets and Systems, 74:46–56, 2024.
[39] Xiaoguang Chang and John H Lilly. Evolutionary design of a fuzzy classifier from data. IEEE Transactions on Systems, Man, and
Cybernetics, Part B (Cybernetics), 34(4):1894–1906, 2004.
[40] Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, and Daniel Rueckert. Self-supervised learning
for medical image analysis using image context restoration. Medical image analysis, 58:101539, 2019.
[41] Wei Chen, Wei Chen, Hamid Reza Pourghasemi, M. Panahi, Aiding Kornejady, Jiale Wang, Xiaoshen Xie, and Shubo Cao. Spatial
prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized
additive model, and support vector machine techniques. Geomorphology, 297:69–85, 2017.
[42] Wei Chen, M. Panahi, Paraskevas Tsangaratos, Himan Shahabi, Ioanna Ilia, Somayeh Panahi, Shao jun Li, Abolfazl Jaafari,
and Baharin Bin Ahmad. Applying population-based evolutionary algorithms and a neuro-fuzzy system for modeling landslide
susceptibility. CATENA, 2019.
[43] Graciela Chichilnisky. Social aggregation rules and continuity. The Quarterly Journal of Economics, 97(2):337–352, 1982.
[44] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan.
Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
[45] Victor Christianto and Florentin Smarandache. A review of seven applications of neutrosophic logic: In cultural psychology,
economics theorizing, conflict resolution, philosophy of science, etc. J, 2018.
[46] Zhang Chunying, Liu Lu, Ouyang Dong, and Liang Ruitao. Research of rough cognitive map model. In Advanced Research on
Electronic Commerce, Web Application, and Communication: International Conference, ECWAC 2011, Guangzhou, China, April
16-17, 2011. Proceedings, Part II, pages 224–229. Springer, 2011.
[47] Zhang Chunying, Liu Lu, Liang Ruitao, and Wang Jing. Rough center mining algorithm of rough cognitive map. Procedia
engineering, 15:3461–3465, 2011.
[48] Terry D Clark, Jennifer M Larson, John N Mordeson, Joshua D Potter, Mark J Wierman, Terry D Clark, Jennifer M Larson, John N
Mordeson, Joshua D Potter, and Mark J Wierman. Fuzzy geometry. Applying Fuzzy Mathematics to Formal Models in Comparative
Politics, pages 65–80, 2008.
[49] Veysel Çoban and Sezi Çevik Onar. Strategic analysis of solar energy pricing process with hesitant fuzzy cognitive map. Energy
Management?Collective and Computational Intelligence with Theory and Applications, pages 195–227, 2018.

[50] Veysel Çoban and Sezi Çevik Onar. Analysis of solar energy generation capacity using hesitant fuzzy cognitive maps. International
Journal of Computational Intelligence Systems, 10(1):1149–1167, 2017.
[51] Veysel Çoban and Sezi Çevik Onar. Modeling renewable energy usage with hesitant fuzzy cognitive map. Complex & Intelligent
Systems, 3:155–166, 2017.
[52] Mihaela Colhon, Monica Tilea, Ana González-Marcos, Alina Stela Resceanu, Florentin Smarandache, and Fermı́n Navaridas-
Nalda. A neutrosophic decision-making model for determining young people’s active engagement. Int. J. Inf. Technol. Decis. Mak.,
23:569–598, 2023.
[53] Mario Collotta, Renato Ferrero, Edoardo Giusto, Mohammad Ghazi Vakili, Jacopo Grecuccio, Xiangjie Kong, and Ilsun You. A
fuzzy control system for energy-efficient wireless devices in the internet of vehicles. International Journal of Intelligent Systems,
36:1595 – 1618, 2021.
[54] Irving M Copi, Carl Cohen, and Kenneth McMahon. Introduction to logic. Routledge, 2016.
[55] Pádraig Cunningham, Matthieu Cord, and Sarah Jane Delany. Supervised learning. In Machine learning techniques for multimedia:
case studies on organization and retrieval, pages 21–49. Springer, 2008.
[56] Theodoros Damoulas and Mark A Girolami. Combining feature spaces for classification. Pattern Recognition, 42(11):2671–2683,
2009.
[57] K Dayana, T Poongodi, and B Vennila. Study on single server finite capacity neutrosophic queueing model. Computational and
Applied Mathematics, 44(1):1–30, 2025.
[58] Ailin Deng and Bryan Hooi. Graph neural network-based anomaly detection in multivariate time series. In AAAI Conference on
Artificial Intelligence, 2021.
[59] Remi Denton, Sam Gross, and Rob Fergus. Semi-supervised learning with context-conditional generative adversarial networks.
arXiv preprint arXiv:1611.06430, 2016.
[60] Muhammet Deveci, Vladimir Simić, and Ali Ebadi Torkayesh. Remanufacturing facility location for automotive lithium-ion
batteries: An integrated neutrosophic decision-making model. Journal of Cleaner Production, 317:128438, 2021.
[61] Keith Devlin. The joy of sets: fundamentals of contemporary set theory. Springer Science & Business Media, 1994.
[62] Moritz Diehl, Rishi Amrit, and James B. Rawlings. A lyapunov function for economic optimizing model predictive control. IEEE
Transactions on Automatic Control, 56:703–707, 2011.
[63] Joseph Diestel and B Faires. On vector measures. Transactions of the American Mathematical Society, 198:253–271, 1974.
[64] Reinhard Diestel. Graph theory. Graduate Texts in Mathematics. Springer.
[65] Reinhard Diestel. Graph theory, 3rd ed. Graduate Texts in Mathematics, vol. 173. Springer, 2005.
[66] Reinhard Diestel. Graph theory. Springer (print edition); Reinhard Diestel (eBooks), 2024.
[67] Dimiter Driankov, Hans Hellendoorn, and Michael Reinfrank. An introduction to fuzzy control (2nd ed.). 1996.
[68] Shiv Ram Dubey, Satish Kumar Singh, and Bidyut Baran Chaudhuri. Activation functions in deep learning: A comprehensive
survey and benchmark. Neurocomputing, 503:92–108, 2022.
[69] Mehtap Dursun and Guray Gumus. Intuitionistic fuzzy cognitive map approach for the evaluation of supply chain configuration
criteria. Mathematical Methods in the Applied Sciences, 43(13):7788–7801, 2020.
[70] Philip Ehrlich. Real numbers, generalizations of the reals, and theories of continua, volume 242. Springer Science & Business
Media, 2013.
[71] Mohamed Elhoseny, Mahmoud Abdel-salam, and Ibrahim M Elhasnony. Extended fuzzy neutrosophic classifier for accurate
intrusion detection and classification. International Journal of Neutrosophic Science (IJNS), 24(4), 2024.
[72] Herbert B. Enderton. A mathematical introduction to logic. 1972.
[73] Virgil Eveleigh and David P. Lindorff. Introduction to control system design. IEEE Transactions on Systems, Man, and Cybernetics,
3:529–529, 1973.
[74] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation.
In The world wide web conference, pages 417–426, 2019.
[75] Ilijas Farah. Approximate homomorphisms. Combinatorica, 18:335–348, 1998.
[76] Mehdi Fasanghari and Farzad Habibipour Roudsari. The fuzzy evaluation of e-commerce customer satisfaction. 2008.
[77] Noha S Fayed, Mohammed M Elmogy, Ahmed Atwan, and Eman El-Daydamony. Efficient occupancy detection system based on
neutrosophic weighted sensors data fusion. IEEE Access, 10:13400–13427, 2022.
[78] Shuang Feng and C.L. Philip Chen. Fuzzy broad learning system: A novel neuro-fuzzy model for regression and classification.
IEEE Transactions on Cybernetics, 50:414–424, 2020.
[79] Yifan Feng, Haoxuan You, Zizhao Zhang, R. Ji, and Yue Gao. Hypergraph neural networks. In AAAI Conference on Artificial
Intelligence, 2018.
[80] Takaaki Fujita. General plithogenic soft rough graphs and some related graph classes. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 437.
[81] Takaaki Fujita. Expanding horizons of plithogenic superhyperstructures: Applications in decision-making, control, and neuro
systems. 2024.
[82] Takaaki Fujita. Expanding horizons of plithogenic superhyperstructures: Applications in decision-making, control, and neuro
systems. Technical report, Center for Open Science, 2024.
[83] Takaaki Fujita. Exploring concepts of hyperfuzzy, hyperneutrosophic, and hyperplithogenic sets. 2024. DOI:
10.13140/RG.2.2.12216.87045.
[84] Takaaki Fujita. Note for hypersoft filter and fuzzy hypersoft filter. Multicriteria Algorithms With Applications, 5:32–51, 2024.
[85] Takaaki Fujita. Short note of supertree-width and n-superhypertree-width. Neutrosophic Sets and Systems, 77:54–78, 2024.
[86] Takaaki Fujita. Superhypergraph neural networks and plithogenic graph neural networks: Theoretical foundations. arXiv preprint
arXiv:2412.01176, 2024.
[87] Takaaki Fujita. Survey of intersection graphs, fuzzy graphs and neutrosophic graphs. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 114, 2024.
[88] Takaaki Fujita. Survey of intersection graphs, fuzzy graphs and neutrosophic graphs. Advancing Uncertain Combinatorics through
Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond, page 114, 2024.
[89] Takaaki Fujita. A theoretical exploration of hyperconcepts: Hyperfunctions, hyperrandomness, hyperdecision-making, and beyond
(including a survey of hyperstructures). 2024.
[90] Takaaki Fujita. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutro-
sophic, Soft, Rough, and Beyond. Biblio Publishing, 2025.
[91] Takaaki Fujita. Exploration of graph classes and concepts for superhypergraphs and n-th power mathematical structures. 2025.
[92] Takaaki Fujita. A theoretical exploration of hyperconcepts: Hyperfunctions, hyperrandomness, hyperdecision-making, and beyond
(including a survey of hyperstructures). Preprint in Researchgate, 2025.
[93] Takaaki Fujita and Florentin Smarandache. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncer-
tainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond (Second Volume). Biblio Publishing, 2024.
[94] Takaaki Fujita and Florentin Smarandache. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncer-
tainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond (Third Volume). Biblio Publishing, 2024.
[95] Takaaki Fujita and Florentin Smarandache. A concise study of some superhypergraph classes. Neutrosophic Sets and Systems,
77:548–593, 2024.
[96] Takaaki Fujita and Florentin Smarandache. Fundamental computational problems and algorithms for superhypergraphs. 2024.
[97] Takaaki Fujita and Florentin Smarandache. Introduction to upside-down logic: Its deep relation to neutrosophic logic and
applications. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic,
Soft, Rough, and Beyond (Third Volume), 2024.
[98] Takaaki Fujita and Florentin Smarandache. A review of the hierarchy of plithogenic, neutrosophic, and fuzzy graphs: Survey
and applications. In Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy,
Neutrosophic, Soft, Rough, and Beyond (Second Volume). Biblio Publishing, 2024.
[99] Takaaki Fujita and Florentin Smarandache. A short note for hypersoft rough graphs. HyperSoft Set Methods in Engineering, 3:1–25,
2024.
[100] Takaaki Fujita and Florentin Smarandache. Uncertain automata and uncertain graph grammar. Neutrosophic Sets and Systems,
74:128–191, 2024.
[101] Takaaki Fujita and Florentin Smarandache. Local-neutrosophic logic and local-neutrosophic sets: Incorporating locality with
applications. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic,
Soft, Rough, and Beyond, page 51, 2025.
[102] Takaaki Fujita and Florentin Smarandache. Local-neutrosophic logic and local-neutrosophic sets: Incorporating locality with
applications. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic,
Soft, Rough, and Beyond, page 51, 2025.
[103] A Nagoor Gani and K Radha. On regular fuzzy graphs. 2008.
[104] K. C. Garner. Introduction to control system performance measurements. 1968.
[105] Masoud Ghods, Zahra Rostami, and Florentin Smarandache. Introduction to neutrosophic restricted superhypergraphs and neutro-
sophic restricted superhypertrees and several of their properties. Neutrosophic Sets and Systems, 50:480–487, 2022.
[106] Jayanta Ghosh and Tapas Kumar Samanta. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol, 41:27–37, 2012.
[107] Puspendu Giri, Somnath Paul, and Bijoy Krishna Debnath. A fuzzy graph theory and matrix approach (fuzzy gtma) to select the
best renewable energy alternative in india. Applied Energy, 358:122582, 2024.
[108] Roy Goetschel and William Voxman. Fuzzy matroids. 1988.
[109] Roy Goetschel and William Voxman. Fuzzy matroid sums and a greedy algorithm. Fuzzy Sets and Systems, 37:189–200, 1990.
[110] M Eshaghi Gordji, M Ramezani, Manuel De La Sen, and Yeol Je Cho. On orthogonal sets and banach fixed point theorem. Fixed
Point Theory, 18(2):569–578, 2017.
[111] Andrzej Granas, James Dugundji, et al. Fixed point theory, volume 14. Springer, 2003.
[112] Fu-Gui Shi. (L, M)-fuzzy matroids. Fuzzy Sets Syst., 160:2387–2400, 2009.
[113] Muhammad Gulistan, Naveed Yaqoob, Zunaira Rashid, Florentin Smarandache, and Hafiz Abdul Wahab. A study on neutrosophic
cubic graphs with real life applications in industries. Symmetry, 10(6):203, 2018.
[114] Krystal Guo and Bojan Mohar. Hermitian adjacency matrix of digraphs and mixed graphs. Journal of Graph Theory, 85(1):217–248,
2017.
[115] Francis Joseph Hale. Introduction to control system analysis and design. 1973.
[116] Mohammad Hamidi, Florentin Smarandache, and Mohadeseh Taghinezhad. Decision Making Based on Valued Fuzzy Superhyper-
graphs. Infinite Study, 2023.
[117] Mohammad Hamidi and Mohadeseh Taghinezhad. Application of Superhypergraphs-Based Domination Number in Real World.
Infinite Study, 2023.
[118] Barbara Hammer and Kai Gersmann. A note on the universal approximation capability of support vector machines. Neural Processing Letters, 17:43–53, 2003.
[119] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. Overview of supervised learning. The elements of statistical learning: Data mining, inference, and prediction, pages 9–41, 2009.
[120] Robert Hecht-Nielsen. Theory of the backpropagation neural network. In Neural networks for perception, pages 65–93. Elsevier,
1992.
[121] R. Hema, R. Sudharani, and M. Kavitha. A novel approach on plithogenic interval valued neutrosophic hypersoft sets and its
application in decision making. Indian Journal Of Science And Technology, 2023.
[122] Nasimeh Heydaribeni, Xinrui Zhan, Ruisi Zhang, Tina Eliassi-Rad, and Farinaz Koushanfar. Hypop: Distributed constrained
combinatorial optimization leveraging hypergraph neural networks. arXiv preprint arXiv:2311.09375, 2023.
[123] Doeko Homan. Very basic set theory. 2023.
[124] Amirsoheil Honarbari, Sajad Najafi-Shad, Mohsen Saffari Pour, S. S. M. Ajarostaghi, and Ali Hassannia. Mppt improvement for
pmsg-based wind turbines using extended kalman filter and fuzzy control system. Energies, 2021.
[125] Haoyuan Hong, M. Panahi, Ataollah Shirzadi, Tianwu Ma, Junzhi Liu, A-Xing Zhu, Wei Chen, Ioannis Kougias, and Nerantzis
Kazakis. Flood susceptibility assessment in hengfeng area coupling adaptive neuro-fuzzy inference system with genetic algorithm
and differential evolution. The Science of the total environment, 621:1124–1141, 2018.
[126] W. A. Horn. Some fixed point theorems for compact maps and flows in Banach spaces. Transactions of the American Mathematical Society, 149(2):391–404, 1970.
[127] Krzysztof Hryniewiecki. Basic properties of real numbers. Formalized Mathematics, 1(1):35–40, 1990.
[128] Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai Liu, and Heyuan Shi. Hyperattack: Multi-gradient-
guided white-box adversarial structure attack of hypergraph neural networks. arXiv preprint arXiv:2302.12407, 2023.
[129] Liangsong Huang, Yu Hu, Yuxia Li, PK Kishore Kumar, Dipak Koley, and Arindam Dey. A study of regular and irregular
neutrosophic graphs with real life applications. Mathematics, 7(6):551, 2019.
[130] Jiage Huo, Jianghua Zhang, and Felix T. S. Chan. A fuzzy control system for assembly line balancing with a three-state degradation
process in the era of industry 4.0. International Journal of Production Research, 58:7112 – 7129, 2020.
[131] S Satham Hussain, N Durga, Rahmonlou Hossein, and Ghorai Ganesh. New concepts on quadripartitioned single-valued neutro-
sophic graph with real-life application. International Journal of Fuzzy Systems, 24(3):1515–1529, 2022.
[132] S Satham Hussain, Hossein Rashmonlou, R Jahir Hussain, Sankar Sahoo, Said Broumi, et al. Quadripartitioned neutrosophic graph
structures. Neutrosophic Sets and Systems, 51(1):17, 2022.
[133] Satham Hussain, Jahir Hussain, Isnaini Rosyida, and Said Broumi. Quadripartitioned neutrosophic soft graphs. In Handbook of
Research on Advances and Applications of Fuzzy Sets and Logic, pages 771–795. IGI Global, 2022.
[134] Dimitris K Iakovidis and Elpiniki Papageorgiou. Intuitionistic fuzzy cognitive maps for medical decision making. IEEE Transactions
on Information Technology in Biomedicine, 15(1):100–107, 2010.
[135] Aulia Ishak, R. Ginting, and Wang Wanli. Evaluation of e-commerce services quality using fuzzy ahp and topsis. IOP Conference
Series: Materials Science and Engineering, 1041, 2021.
[136] Abolfazl Jaafari, Eric K. Zenner, M. Panahi, and Himan Shahabi. Hybrid artificial intelligence models based on a neuro-fuzzy
system and metaheuristic optimization algorithms for spatial prediction of wildfire probability. Agricultural and Forest Meteorology,
2019.
[137] Muhammad Naveed Jafar, Muhammad Haris Saeed, Kainat Muniba Khan, Faten S. Alamri, and Hamiden A. Wahed Khalifa.
Distance and similarity measures using max-min operators of neutrosophic hypersoft sets with application in site selection for solid
waste management systems. IEEE Access, PP:1–1, 2022.
[138] Chiranjibe Jana, Tapan Senapati, Monoranjan Bhowmik, and Madhumangal Pal. On intuitionistic fuzzy g-subalgebras of g-algebras.
Fuzzy Information and Engineering, 7(2):195–209, 2015.
[139] Thomas Jech. Set theory: The third millennium edition, revised and expanded. Springer, 2003.
[140] David F. Jenkins and Kevin M. Passino. An introduction to nonlinear analysis of fuzzy control systems. J. Intell. Fuzzy Syst.,
7:75–103, 1999.
[141] Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, and Yue Gao. Dynamic hypergraph neural networks. In International Joint
Conference on Artificial Intelligence, 2019.
[142] Weiwei Jiang and Jiayun Luo. Graph neural network for traffic forecasting: A survey. arXiv preprint arXiv:2101.11174, 2021.
[143] Calvin Jongsma. Basic set theory and combinatorics. Undergraduate Texts in Mathematics, 2019.
[144] Young Bae Jun, Kul Hur, and Kyoung Ja Lee. Hyperfuzzy subalgebras of bck/bci-algebras. Annals of Fuzzy Mathematics and
Informatics, 2017.
[145] Young Bae Jun, Min Su Kang, and Seok Zun Song. Several types of bipolar fuzzy hyper bck-ideals in hyper bck-algebras. 2012.
[146] Young Bae Jun, Seon Jeong Kim, and Seok Zun Song. Hyper permeable values and energetic sets in bck/bci-algebras. 2020.
[147] Young Bae Jun, Seok-Zun Song, and Seon Jeong Kim. Distances between hyper structures and length fuzzy ideals of bck/bci-algebras
based on hyper structures. Journal of Intelligent & Fuzzy Systems, 35(2):2257–2268, 2018.
[148] W. B. Vasantha Kandasamy and Florentin Smarandache. Fuzzy cognitive maps and neutrosophic cognitive maps. 2003.
[149] Ilanthenral Kandasamy and Florentin Smarandache. Algebraic structure of neutrosophic duplets in neutrosophic rings. 2018.
[150] Vasantha Kandasamy, K Ilanthenral, and Florentin Smarandache. Neutrosophic graphs: a new dimension to graph theory. Infinite
Study, 2015.
[151] Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, and Jiashi Feng. Exploring balanced feature spaces for representation learning. In
International conference on learning representations, 2020.
[152] Chiang Kao, Chang-Chung Li, and Shih-Pin Chen. Parametric programming to the analysis of fuzzy queues. Fuzzy sets and
systems, 107(1):93–100, 1999.
[153] Chaitali Kar, Bappa Mondal, and Tapan Kumar Roy. An inventory model under space constraint in neutrosophic environment:
a neutrosophic geometric programming approach. Neutrosophic Sets and Systems: An International Book Series in Information
Science and Engineering, 21(2018):93–109, 2018.
[154] B Kavitha, S Karthikeyan, and P Sheeba Maybell. Emerging intuitionistic fuzzy classifiers for intrusion detection system. Journal
of Advances in Information Technology, 2(2):99–108, 2011.
[155] M Kaviyarasu, Muhammad Aslam, Farkhanda Afzal, Maha Mohammed Saeed, Arif Mehmood, and Saeed Gul. The connectivity
indices concept of neutrosophic graph and their application of computer network, highway system and transport network flow.
Scientific Reports, 14(1):4891, 2024.
[156] Faezehossadat Khademi, Sayed Mohammadmehdi Jamal, Neela Deshpande, and Shreenivas N. Londhe. Predicting strength of
recycled aggregate concrete using artificial neural network, adaptive neuro-fuzzy inference system and multiple linear regression.
International journal of sustainable built environment, 5:355–369, 2016.
[157] Huda E Khalid, Florentin Smarandache, and Ahmed K Essa. The basic notions for (over, off, under) neutrosophic geometric
programming problems. Infinite Study, 2018.
[158] Ali Khosravi, Ricardo Nicolau Nassar Koury, Luiz Henrique Jorge Machado, and Juan J. Garcia Pabon. Prediction of wind speed
and wind direction using artificial neural network, support vector regression and adaptive neuro-fuzzy inference system. Sustainable
Energy Technologies and Assessments, 25:146–160, 2018.
[159] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. Advances in
neural information processing systems, 30, 2017.
[160] Stephen Cole Kleene. General recursive functions of natural numbers. Mathematische Annalen, 112(1):727–742, 1936.
[161] Dimitrios Kontogiannis, Dimitrios Bargiotas, and Aspassia Daskalopulu. Fuzzy control system for smart energy management in
residential buildings based on environmental data. Energies, 2021.
[162] Bart Kosko. Fuzzy cognitive maps. Int. J. Man Mach. Stud., 24:65–75, 1986.
[163] Smita Krishnaswamy, Igor L. Markov, and John P. Hayes. Design for robustness. 2013.
[164] Tarun Kumar and Mukesh Kumar Sharma. Neutrosophic decision-making for allocations in solid transportation problems.
OPSEARCH, 2024.
[165] Ludmila I Kuncheva. Fuzzy classifier design, volume 49. Physica, 2012.
[166] Shulin Lan, Hao Zhang, Ray Y. Zhong, and George Q. Huang. A customer satisfaction evaluation model for logistics services using
fuzzy analytic hierarchy process. Ind. Manag. Data Syst., 116:1024–1042, 2016.
[167] Jérôme Lang, Gabriella Pigozzi, Marija Slavkovik, and Leendert Van der Torre. Judgment aggregation rules based on minimization.
In Proceedings of the 13th conference on theoretical aspects of rationality and knowledge, pages 238–246, 2011.
[168] Maikel Leon. Aggregating procedure for fuzzy cognitive maps. The International FLAIRS Conference Proceedings, 2023.
[169] Xiaonan Li. Three-way fuzzy matroids and granular computing. International Journal of Approximate Reasoning, 114:44–50,
2019.
[170] Timothy P Lillicrap, Adam Santoro, Luke Marris, Colin J Akerman, and Geoffrey Hinton. Backpropagation and the brain. Nature
Reviews Neuroscience, 21(6):335–346, 2020.
[171] Kimfung Liu, Wenzhong Shi, and Hua Zhang. A fuzzy topology-based maximum likelihood classification. ISPRS Journal of
Photogrammetry and Remote Sensing, 66:103–114, 2011.
[172] Yong Lin Liu, Hee Sik Kim, and J. Neggers. Hyperfuzzy subsets and subgroupoids. J. Intell. Fuzzy Syst., 33:1553–1562, 2017.
[173] Zhi Liu, Ardashir Mohammadzadeh, Hamza Turabieh, Majdi M. Mafarja, Shahab S. Band, and Amir H. Mosavi. A new online
learned interval type-3 fuzzy control system for solar energy management systems. IEEE Access, 9:10498–10508, 2021.
[174] Yingqi Lu, Pai Zhu, Donglin Wang, and Michel Fattouche. Machine learning techniques with probability vector for cooperative
spectrum sensing in cognitive radio networks. In 2016 IEEE wireless communications and networking conference, pages 1–6.
IEEE, 2016.
[175] Chao Luo, Nannan Zhang, and Xingyuan Wang. Time series prediction based on intuitionistic fuzzy cognitive map. Soft Computing,
24:6835–6850, 2020.
[176] M. Maharin. An overview on hyper fuzzy subgroups. Scholar: National School of Leadership, 9(1.2), 2020.
[177] Mourad Oqla Massa’deh. On hyper q-fuzzy normal hx subgroup, conjugate and its.
[178] Willians Ribeiro Mendes, Fábio Meneghetti Ugulino de Araújo, Ritaban Dutta, and Derek M. Heeren. Fuzzy control system for
variable rate irrigation using remote sensing. Expert Syst. Appl., 124:13–24, 2019.
[179] Hamiyet Merkepci and Katy D. Ahmad. On the conditions of imperfect neutrosophic duplets and imperfect neutrosophic triplets.
Galoitica: Journal of Mathematical Structures and Applications, 2022.
[180] Yuan Miao, Zhi-Qiang Liu, Shi Li, and Chee Kheong Siew. Dynamical cognitive network: an extension of fuzzy cognitive map.
Proceedings 11th International Conference on Tools with Artificial Intelligence, pages 43–46, 1999.
[181] Arunodaya Raj Mishra, Dragan Pamucar, Pratibha Rani, Rajeev Shrivastava, and Ibrahim M. Hezam. Assessing the sustainable
energy storage technologies using single-valued neutrosophic decision-making framework with divergence measure. Expert Syst.
Appl., 238:121791, 2023.
[182] Bijan Moaveni, Fatemeh Rashidi Fathabadi, and Ali Molavi. Fuzzy control system design for wheel slip prevention and tracking of
desired speed profile in electric trains. Asian Journal of Control, 24:388 – 400, 2020.
[183] E. J. Mogro, J. R. Molina, G. J. S. Canas, and P. H. Soria. Tree tobacco extract (Nicotiana glauca) as a plithogenic bioinsecticide
alternative for controlling fruit fly (Drosophila immigrans) using 𝑛-superhypergraphs. Neutrosophic Sets and Systems, 74:57–65,
2024.
[184] Ghazal Mohsenian, Sadegh Khalili, Mohammad I. Tradat, Yaman M. Manaserh, Srikanth Rangarajan, Anuroop Desu, Dushyant
Thakur, Kourosh Nemati, Kanad Ghose, and Bahgat G. Sammakia. A novel integrated fuzzy control system toward automated local
airflow management in data centers. Control Engineering Practice, 2021.
[185] Kalyan Mondal, Surapati Pramanik, and Nandalal Ghosh. A study on problems of hijras in west bengal based on neutrosophic
cognitive maps. 2014.
[186] John N Mordeson and Sunil Mathew. Advanced topics in fuzzy graph theory, volume 375. Springer, 2019.
[187] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe.
Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial
intelligence, volume 33, pages 4602–4609, 2019.
[188] Giuseppe Munda. Choosing aggregation rules for composite indicators. Social indicators research, 109(3):337–354, 2012.
[189] Z Nazari and B Mosapour. The entropy of hyperfuzzy sets. Journal of Dynamical Systems and Geometric Theories, 16(2):173–185,
2018.
[190] Zohreh Nazari and Batool Mosapour. The entropy of hyperfuzzy sets. Journal of Dynamical Systems and Geometric Theories,
16:173 – 185, 2018.
[191] Ladislav A. Novak. On Goetschel and Voxman fuzzy matroids. Fuzzy Sets Syst., 117:407–412, 2001.
[192] Sandra Oltra and Oscar Valero. Banach’s fixed point theorem for partial metric spaces. 2004.
[193] Oluseyi Olurotimi, Amir Dembo, and Thomas Kailath. Neural network weight matrix synthesis using optimal control techniques.
In Neural Information Processing Systems, 1989.
[194] Rodolfo González Ortega, Marcos David Oviedo Rodríguez, Maikel Yelandi Leyva Vázquez, Jesús Estupiñán Ricardo, João Alcione Sganderla Figueiredo, and Florentin Smarandache. PESTEL analysis based on neutrosophic cognitive maps and neutrosophic numbers for the Sinos River basin management. 2019.
[195] Lena Osterhagen. An introduction to fuzzy control. 2016.
[196] Madhumangal Pal, Sovan Samanta, and Ganesh Ghorai. Modern trends in fuzzy graph theory. Springer, 2020.
[197] Sankar K. Pal and Ashish Ghosh. Fuzzy geometry in image analysis. Fuzzy Sets and Systems, 48:23–40, 1992.
[198] Dragan Pamucar, Morteza Yazdani, Radojko Obradovic, Anil Kumar, and Mercedes Torres-Jiménez. A novel fuzzy hybrid
neutrosophic decision-making approach for the resilient supplier selection problem. International Journal of Intelligent Systems,
35(12):1934–1986, 2020.
[199] Elpiniki I. Papageorgiou. Review study on fuzzy cognitive maps and their applications during the last decade. 2011 IEEE
International Conference on Fuzzy Systems (FUZZ-IEEE 2011), pages 828–835, 2011.
[200] Elpiniki I Papageorgiou and Dimitris K Iakovidis. Intuitionistic fuzzy cognitive maps. IEEE Transactions on Fuzzy Systems,
21(2):342–354, 2012.
[201] Elpiniki I. Papageorgiou and Jose L. Salmeron. A review of fuzzy cognitive maps research during the last decade. IEEE Transactions
on Fuzzy Systems, 21:66–79, 2013.
[202] Jooyoung Park and Irwin W Sandberg. Universal approximation using radial-basis-function networks. Neural computation,
3(2):246–257, 1991.
[203] Michael Peer, Iva K. Brunec, Nora S. Newcombe, and Russell A. Epstein. Structuring knowledge with cognitive maps and cognitive
graphs. Trends in Cognitive Sciences, 25:37–54, 2020.
[204] Yannis A Phillis and Runtong Zhang. Fuzzy service control of queueing systems. IEEE Transactions on Systems, Man, and
Cybernetics, Part B (Cybernetics), 29(4):503–517, 1999.
[205] Robert Piotrowski. Supervisory fuzzy control system for biological processes in sequencing wastewater batch reactor. Urban Water
Journal, 17:325 – 332, 2020.
[206] Tomaso Poggio and Federico Girosi. Networks for approximation and learning. Proceedings of the IEEE, 78(9):1481–1497, 1990.
[207] Surapati Pramanik, Sourendranath Chackrabarti, Nandalal Ghosh, and Kumar Ashutosh. A study on problems of construction
workers in west bengal based on neutrosophic cognitive maps. International Journal of Innovative Research in Science, Engineering
and Technology, 2:6387–6394, 2014.
[208] Mahardhika Pratama, Witold Pedrycz, and Edwin Lughofer. Evolving ensemble fuzzy classifier. IEEE Transactions on Fuzzy
Systems, 26(5):2552–2567, 2018.
[209] Pao-Ming Pu and Ying-Ming Liu. Fuzzy topology. I. Neighborhood structure of a fuzzy point and Moore-Smith convergence. Journal
of Mathematical Analysis and Applications, 76:571–599, 1980.
[210] Dunwang Qin, Zhen Peng, and Lifeng Wu. Deep attention fuzzy cognitive maps for interpretable multivariate time series prediction.
Knowl. Based Syst., 275:110700, 2023.
[211] Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. Gcc: Graph
contrastive coding for graph neural network pre-training. Proceedings of the 26th ACM SIGKDD International Conference on
Knowledge Discovery & Data Mining, 2020.
[212] Mohammad Naved Qureshi and Mohd Vasim Ahamad. An improved method for image segmentation using k-means clustering with
neutrosophic logic. Procedia computer science, 132:534–540, 2018.
[213] Atiqe Ur Rahman, Muhammad Haris Saeed, and Hamiden Abd El-Wahed Khalifa. Multi-attribute decision-making based on
aggregations and similarity measures of neutrosophic hypersoft sets with possibility setting. Journal of Experimental & Theoretical
Artificial Intelligence, 36:161 – 186, 2022.
[214] Heba Rashad and Mai Mohamed. Neutrosophic theory and its application in various queueing models: case studies. Neutrosophic
Sets and Systems, 42:117–135, 2021.
[215] Judith Roitman. Introduction to modern set theory, volume 8. John Wiley & Sons, 1990.
[216] Raúl Rojas. The backpropagation algorithm. Neural networks: a systematic introduction, pages 149–182, 1996.
[217] Azriel Rosenfeld. Fuzzy graphs. In Fuzzy sets and their applications to cognitive and decision processes, pages 77–95. Elsevier,
1975.
[218] Azriel Rosenfeld. The fuzzy geometry of image subsets. In Readings in Fuzzy Sets for Intelligent Systems, pages 633–639. Elsevier,
1993.
[219] Azriel Rosenfeld. Fuzzy geometry: An updated overview. Information Sciences, 110(3-4):127–133, 1998.
[220] Benjamin Rossman. Homomorphism preservation theorems. J. ACM, 55:15:1–15:53, 2008.
[221] Memet Şahin and Abdullah Kargın. Neutrosophic triplet metric topology. Infinite Study, 2019.
[222] Memet Şahin, Abdullah Kargın, and İsmet Yıldız. Neutrosophic triplet field and neutrosophic triplet vector space based on set
valued neutrosophic quadruple number. Quadruple Neutrosophic Theory And Applications, 1:52, 2020.
[223] Alexei Samsonovich and Bruce L. McNaughton. Path integration and cognitive mapping in a continuous attractor neural network
model. The Journal of Neuroscience, 17:5900 – 5920, 1997.
[224] Musavarah Sarwar and Muhammad Akram. New applications of m-polar fuzzy matroids. Symmetry, 9:319, 2017.
[225] S Satham Hussain, Durga Nagarajan, Hossein Rashmanlou, and Farshid Mofidnakhaei. Novel supply chain decision making model
under m-polar quadripartitioned neutrosophic environment. Journal of Applied Mathematics and Computing, pages 1–26, 2024.
[226] P Sathya, Nivetha Martin, and Florentine Smarandache. Plithogenic forest hypersoft sets in plithogenic contradiction based
multi-criteria decision making. Neutrosophic Sets and Systems, 73:668–693, 2024.
[227] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model.
IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
[228] Klaus Schmidt. Control system design. 2005.
[229] Francina Shalini. Trigonometric similarity measures of pythagorean neutrosophic hypersoft sets. Neutrosophic Systems with
Applications, 2023.
[230] Sagar Sharma, Simone Sharma, and Anidhya Athaiya. Activation functions in neural networks. Towards Data Sci, 6(12):310–316,
2017.
[231] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network
evaluation. arXiv preprint arXiv:1811.05868, 2018.
[232] Xiaolong Shi, Saeed Kosari, Hossein Rashmanlou, Said Broumi, and S Satham Hussain. Properties of interval-valued quadriparti-
tioned neutrosophic graphs with real-life application. Journal of Intelligent & Fuzzy Systems, 44(5):7683–7697, 2023.
[233] Sajjan G. Shiva. Introduction to logic design. 2018.
[234] Tobias Sing, Oliver Sander, Niko Beerenwinkel, and Thomas Lengauer. ROCR: visualizing classifier performance in R. Bioinformatics, 21(20):3940–3941, 2005.
[235] F. Smarandache. Introduction to superhyperalgebra and neutrosophic superhyperalgebra. Journal of Algebraic Hyperstructures and
Logical Algebras, 2022.
[236] Florentin Smarandache. Neutrosophic Overset, Neutrosophic Underset, and Neutrosophic Offset. Similarly for Neutrosophic Over-/Under-/Off-Logic, Probability, and Statistics. Pons Editions, Brussels, 170 pages, 2016.
[237] Florentin Smarandache. A unifying field in logics: Neutrosophic logic. In Philosophy, pages 1–141. American Research Press,
1999.
[238] Florentin Smarandache. Neutrosophic set-a generalization of the intuitionistic fuzzy set. International journal of pure and applied
mathematics, 24(3):287, 2005.
[239] Florentin Smarandache. A unifying field in logics: Neutrosophic logic. Neutrosophy, neutrosophic set, neutrosophic probability. Infinite Study, 2005.
[240] Florentin Smarandache. Neutrosophic set–a generalization of the intuitionistic fuzzy set. Journal of Defense Resources Management
(JoDRM), 1(1):107–116, 2010.
[241] Florentin Smarandache. Extension of soft set to hypersoft set, and then to plithogenic hypersoft set. Neutrosophic sets and systems,
22(1):168–170, 2018.
[242] Florentin Smarandache. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-revisited. Infinite
Study, 2018.
[243] Florentin Smarandache. Plithogeny, plithogenic set, logic, probability, and statistics. arXiv preprint arXiv:1808.03948, 2018.
[244] Florentin Smarandache. Extended nonstandard neutrosophic logic, set, and probability based on extended nonstandard analysis.
Symmetry, 11(4):515, 2019.
[245] Florentin Smarandache. n-superhypergraph and plithogenic n-superhypergraph. Nidus Idearum, 7:107–113, 2019.
[246] Florentin Smarandache. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension
of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra. Infinite Study, 2020.
[247] Florentin Smarandache. NeutroGeometry & AntiGeometry are alternatives and generalizations of the Non-Euclidean Geometries
(revisited), volume 5. Infinite Study, 2021.
[248] Florentin Smarandache. Introduction to the n-SuperHyperGraph-the most general form of graph today. Infinite Study, 2022.
[249] Florentin Smarandache. Practical applications of IndetermSoft Set and IndetermHyperSoft Set and introduction to TreeSoft Set as
an extension of the MultiSoft Set. Infinite Study, 2022.
[250] Florentin Smarandache. The SuperHyperFunction and the Neutrosophic SuperHyperFunction (revisited again), volume 3. Infinite
Study, 2022.
[251] Florentin Smarandache. Decision making based on valued fuzzy superhypergraphs. 2023.
[252] Florentin Smarandache. SuperHyperFunction, SuperHyperStructure, Neutrosophic SuperHyperFunction and Neutrosophic Super-
HyperStructure: Current understanding and future directions. Infinite Study, 2023.
[253] Florentin Smarandache. Foundation of superhyperstructure & neutrosophic superhyperstructure. Neutrosophic Sets and Systems,
63(1):21, 2024.
[254] Florentin Smarandache. Superhyperstructure & neutrosophic superhyperstructure, 2024. Accessed: 2024-12-01.
[255] Florentin Smarandache and Mohamed Abdel-Basset. Optimization Theory Based on Neutrosophic and Plithogenic Sets. Academic
Press, 2020.
[256] Florentin Smarandache and Mumtaz Ali. Neutrosophic triplet group. Neural Computing and Applications, 29(7):595–601, 2018.
[257] Florentin Smarandache and Mumtaz Ali. Neutrosophic triplet group (revisited). Neutrosophic sets and Systems, 26(1):2, 2019.
[258] Florentin Smarandache and Mumtaz Ali. Neutrosophic triplet group (revisited). Neutrosophic sets and Systems, 26(1):2, 2019.
[259] Florentin Smarandache and NM Gallup. Generalization of the intuitionistic fuzzy set to the neutrosophic set. In International
Conference on Granular Computing, pages 8–42. Citeseer, 2006.
[260] Florentin Smarandache and AA Salama. Neutrosophic crisp set theory. 2015.
[261] Seok-Zun Song, Seon Jeong Kim, and Young Bae Jun. Hyperfuzzy ideals in bck/bci-algebras. Mathematics, 5(4):81, 2017.
[262] Alexander P Šostak. On a fuzzy topological structure. In Proceedings of the 13th Winter School on Abstract Analysis, pages 89–103.
Circolo Matematico di Palermo, 1985.
[263] Teresa E Steele and Timothy D Weaver. The modified triangular graph: a refined method for comparing mortality profiles in
archaeological samples. Journal of Archaeological Science, 29(3):317–322, 2002.
[264] Eulalia Szmidt, Janusz Kacprzyk, and Marta Kukier. Intuitionistic fuzzy classifier for imbalanced classes. In Artificial Intelligence
and Soft Computing: 12th International Conference, ICAISC 2013, Zakopane, Poland, June 9-13, 2013, Proceedings, Part I 12,
pages 483–492. Springer, 2013.
[265] Kazuo Tanaka, Tsuyoshi Hori, and Hua O. Wang. A multiple lyapunov function approach to stabilization of fuzzy control systems.
IEEE Trans. Fuzzy Syst., 11:582–589, 2003.
[266] Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, and Pietro Lió. Hypergraph
neural networks through the lens of message passing: A common perspective to homophily and architecture design. arXiv preprint arXiv:2310.07684, 2023.
[267] Ankit Thakkar and Kinjal Chaudhari. Predicting stock trend using an integrated term frequency-inverse document frequency-based
feature weight matrix with neural networks. Appl. Soft Comput., 96:106684, 2020.
[268] S. P. Tiwari and Anupam K. Singh. Fuzzy preorder, fuzzy topology and fuzzy transition system. In Indian Conference on Logic
and Its Applications, 2013.
[269] Vakkas Ulucay and Memet Sahin. Intuitionistic fuzzy soft expert graphs with application. Uncertainty discourse and applications,
1(1):1–10, 2024.
[270] TS Umamaheswari and P Sumathi. Enhanced firefly algorithm (efa) based gene selection and adaptive neuro neutrosophic inference
system (annis) prediction model for detection of circulating tumor cells (ctcs) in breast cancer analysis. Cluster Computing,
22:14035–14047, 2019.
[271] Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, and Jack Hidary. Quantum graph neural
networks. arXiv preprint arXiv:1909.12264, 2019.
[272] MR Vinodh and PJ Arul Leena Rose. Intelligent deep adaptive intuitionistic fuzzy classifier based dyslexia prediction among
children at its early stage. Journal of Electrical Systems, 20(3s):52–61, 2024.
[273] Jia Wang and Zhenyuan Wang. Using neural networks to determine sugeno measures by statistics. Neural Networks, 10:183–195,
1997.
[274] Lina Wang, Binrui Wang, and M. Zhu. Multi-model adaptive fuzzy control system based on switch mechanism in a greenhouse.
Applied Engineering in Agriculture, 36:549–556, 2020.
[275] Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, and David Paul Wipf. From hypergraph energy functions to hypergraph
neural networks. In International Conference on Machine Learning, 2023.
[276] Zhanshan Wang, Jian Sun, and Huaguang Zhang. Stability analysis of t-s fuzzy control system with sampled-dropouts based on
time-varying lyapunov function method. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50:2566–2577, 2020.
[277] Tong Wei, Junlin Hou, and Rui Feng. Fuzzy graph neural network for few-shot learning. In 2020 International joint conference on
neural networks (IJCNN), pages 1–8. IEEE, 2020.
[278] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
[279] Andrew M. Wikenheiser and Geoffrey Schoenbaum. Over the river, through the woods: cognitive maps in the hippocampus and
orbitofrontal cortex. Nature Reviews Neuroscience, 17:513–523, 2016.
[280] F. Wesley Wilson. The structure of the level surfaces of a lyapunov function. Journal of Differential Equations, 3:323–329, 1967.
[281] Jianxin Wu. Introduction to convolutional neural networks. National Key Lab for Novel Software Technology. Nanjing University.
China, 5(23):495, 2017.
[282] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph
neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
[283] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint
arXiv:1810.00826, 2018.
[284] Ibrahim Yasser, Abeer Twakol, AA Abd El-Khalek, Ahmed Samrah, and AA Salama. Covid-x: novel health-fog framework based
on neutrosophic classifier for confrontation covid-19. Neutrosophic Sets and Systems, 35:1–21, 2020.
[285] Morteza Yazdani, Ali Ebadi Torkayesh, Željko Stević, Prasenjit Chatterjee, Sahand Asgharieh Ahari, and Violeta Doval Hernandez.
An interval valued neutrosophic decision-making structure for sustainable supplier selection. Expert Syst. Appl., 183:115354, 2021.
[286] Jyh-Cheng Yu and Kosuke Ishii. Design for robustness based on manufacturing variation patterns. Journal of Mechanical Design,
120:196–202, 1998.
[287] Lotfi A Zadeh. Fuzzy sets. Information and control, 8(3):338–353, 1965.
[288] Lotfi A Zadeh. Biological application of the theory of fuzzy sets and systems. In The Proceedings of an International Symposium
on Biocybernetics of the Central Nervous System, pages 199–206. Little, Brown and Comp. London, 1969.
[289] Lotfi A Zadeh. A fuzzy-set-theoretic interpretation of linguistic hedges. 1972.
[290] Lotfi A Zadeh. Fuzzy sets and their application to pattern classification and clustering analysis. In Classification and clustering,
pages 251–299. Elsevier, 1977.
[291] Lotfi A Zadeh. Fuzzy logic, neural networks, and soft computing. In Fuzzy sets, fuzzy logic, and fuzzy systems: selected papers by
Lotfi A Zadeh, pages 775–782. World Scientific, 1996.
[292] Lotfi A. Zadeh. Fuzzy topology. I. Neighborhood structure of a fuzzy point and Moore-Smith convergence. 2003.
[293] Mohamed Bisher Zeina. Neutrosophic event-based queueing model. International Journal of Neutrosophic Science, 6(1):48–55,
2020.
[294] Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and N. Chawla. Heterogeneous graph neural network. Proceedings
of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
[295] Xiaohong Zhang, Florentin Smarandache, and Xingliang Liang. Neutrosophic duplet semi-group and cancellable neutrosophic
triplet groups. Symmetry, 9:275, 2017.
[296] Hua Zhao, Zeshui Xu, Shousheng Liu, and Zhong Wang. Intuitionistic fuzzy mst clustering algorithms. Computers & Industrial
Engineering, 62(4):1130–1140, 2012.
[297] Liang Zhao, Shaocheng Qu, W. Zhang, and Zhili Xiong. An energy-saving fuzzy control system for highway tunnel lighting. Optik,
2019.
[298] Wufan Zhao, Claudio Persello, and Alfred Stein. Extracting planar roof structures from very high resolution images using graph
neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 187:34–45, 2022.
[299] Ziwei Zheng, Huizhi Liang, Vaclav Snasel, Vito Latora, Panos Pardalos, Giuseppe Nicosia, and Varun Ojha. On learnable
parameters of optimal and suboptimal deep learning models. arXiv preprint arXiv:2408.11720, 2024.
[300] Enwang Zhou and Alireza Khotanzad. Fuzzy classifier design using genetic algorithms. Pattern Recognition, 40(12):3401–3414,
2007.