Abstract

The artificial neural network (ANN) method has been applied in the present work to predict the California bearing ratio (CBR), unconfined compressive strength (UCS), and resistance value (R) of expansive soil treated with recycled and activated composites of rice husk ash. Pavement foundations suffer from poor design and construction, poor material handling and utilization, and management lapses. The evolution of soft computing techniques has produced various algorithms developed to overcome certain lapses in performance. Three such ANN algorithms are the Levenberg–Marquardt Backpropagation (LMBP), Bayesian Programming (BP), and Conjugate Gradient (CG) algorithms. In this work, the expansive soil, classified as an A-7-6 group soil, was treated with hydrated-lime activated rice husk ash (HARHA) in varying proportions between 0.1% and 12% by weight of soil, in increments of 0.1%, to produce 121 datasets. These were used to predict the behavior of the soil's strength parameters (CBR, UCS, and R) utilizing the evolutionary hybrid algorithms of ANN. The predictor parameters were HARHA, liquid limit (wL), plastic limit (wP), plasticity index (IP), optimum moisture content (wOMC), clay activity (AC), and maximum dry density (δmax). A multiple linear regression (MLR) was also conducted on the datasets, in addition to ANN, to serve as a check and a linear validation mechanism. The MLR and ANN methods agreed in terms of performance and fit at the end of computing and iteration. Moreover, the response validation on the predicted models showed a good correlation above 0.9 and a great performance index. Comparatively, the LMBP algorithm yielded an accurate estimation of the results in fewer iterations than the Bayesian and CG algorithms, while the Bayesian technique produced the best result given the required number of iterations to minimize the error. Finally, the LMBP algorithm outclassed the other two algorithms in terms of the predicted models' accuracy.
Keywords Soft computing · Artificial intelligence · Artificial neural network (ANN) · Machine learning in geotechnics · Back-propagation algorithm · Levenberg–Marquardt algorithm · Bayesian algorithm · Conjugate gradient algorithm · Sustainable construction materials
… on certain primary strength properties, which include (a) the California bearing ratio (CBR), (b) the unconfined compressive strength (UCS), and (c) the resistance value (R). It is important to note that previous research designs have depended only on the CBR evaluation in their designs for the strength and reliability of pavement foundations (Van and Duc and Onyelowe 2018; Van et al. 2018). Sadly, while CBR is a good determinant property in highway foundation design, it does not deal with lateral failure determination. Therefore, the combination of CBR, UCS, and R-value gives a more dependable and reliable design and even time monitoring of the structures' performance. The tests through which CBR, UCS, and R are estimated are standardized by standard conditions (BS 1377-2, 3 1990; BS 1924 1990; BS 5930 2015). It is highly complicated and time-consuming to determine these properties in the laboratory due to the repeated test runs needed to achieve accurate results with less human or equipment error (Kisi and Uncuoglu 2005). Similar complications are encountered during a stabilization procedure when an expansive soil requires strength improvement before utilization as a subgrade material (Kisi and Uncuoglu 2005; Van and Duc and Onyelowe 2018; Van et al. 2018). The aim of this paper is to assess the effect of a hybrid binder, hydrated-lime activated rice husk ash, on the strength properties of expansive soil and to develop and train an artificial neural network (ANN), first with the Levenberg–Marquardt backpropagation (LMBP) algorithm and then correlated with the performance of the Bayesian and Conjugate Gradient (CG) algorithms, to predict the CBR, UCS, and R performance with the addition of the binder based on the specimen data from 121 tests.

There has, however, been previous and ongoing research on the employment of soft computing in civil engineering design and operation. Ferentinou and Fakir (2017) employed an LMBP-based ANN in the prediction of the UCS of various rocks using four predictor parameters at the input end. The results returned 0.99 and 0.92 for the training and test states, respectively, showing the validity of the LMBP-based ANN in predicting geotechnical properties and supporting ANN as an alternative tool in soft computing geotechnics. Nawi et al. (2013) reported that LMBP algorithms have noticeable drawbacks, such as sticking in a local minimum and a slow rate of convergence, and proposed an improved form trained with the Cuckoo search algorithm, which increased the convergence rate of the hybrid learning method. Kingston et al. (2016) utilized the Bayesian algorithm's ability to compare models of varying complexity to select the most appropriate ANN structure as a tool in water resources engineering. The Bayesian method, however, employs an alternative method to estimate competing models' probabilities, called Markov Chain Monte Carlo (MCMC), which simulates from the posterior weight distribution to approximate the outcome. The outcome nevertheless shows that the MCMC-based Bayesian ANN performed better in that work than the conventional model selection methods. Hosseini et al. (2018) employed the ANN LMBP algorithm to predict soil mechanical resistance and compared the results with conventional multiple regression (MR), making use of bulk density and volumetric soil water content as predictors; the results showed that the intelligent ANN method performed well. Although Sariev and Germano (2019) stated in their work that Bayesian-based ANN tends to overfit data under statistical and evolutionary models as its drawback, it was used with high performance in probability-of-default estimation through a regularization technique. Saldaña et al. (2020) utilized the traditional LMBP-based ANN algorithm to predict UCS with P-wave velocity, density, and porosity as predictors. The unconfined compressive strength (UCS) of cement kiln dust (CKD) treated expansive clayey soil was predicted by Salahudeen et al. (2020) using an LMBP-based ANN; the model performance was evaluated using the mean square error (MSE) and the coefficient of determination, and the results showed satisfactory performance of the prediction model. Additionally, a particle swarm optimization-based ANN was used by Abdi et al. (2020) to predict the UCS of sandstones, and the results showed the reliability, with a correlation of 0.974, of the PSO-based ANN model to predict UCS and recommended its utilization as a feasible tool in soft computing geotechnics. Erzin and Turkoz (2016) also employed ANN in their work to predict CBR values of sands, and the results showed that the predicted model and those obtained from experiments matched closely; the performance indices used likewise showed high prediction performance. In a two-case study presented by Kisi and Uncuoglu (2005), three backpropagation training algorithms, the Levenberg–Marquardt (LM), Conjugate Gradient (CG), and Resilient Backpropagation (RBP) algorithms, were employed for stream flow forecasting and lateral stress determination in cohesionless soils. The primary focus of that study was the convergence velocities in training and the performance in testing. The results in the two cases showed that the LMBP algorithm was faster and had better performance than the other algorithms due to its design to approach second-order training speed without passing through the computation of the Hessian matrix, while the RBP algorithm presented the most accurate results in the testing period due to its ability to use transfer functions in the hidden layers that squash an infinite input range into finite outputs. In the above-cited results, the algorithms currently being used in ANN programming have performed optimally, with high and low points. Additionally, the literature review has revealed that
ANN has been used successfully in predicting CBR and UCS but has not been employed for the trio-combination of CBR, UCS, and R predictions for the purpose of more sustainable pavement design, construction, and performance monitoring. The present work compares the performance of a different set of three algorithms and tries to propose the best approach under the present predictors being used and the type of soil being studied. The results of this research work promise to present a design, construction, and performance evaluation plan to follow in a smart environment for efficient earthwork delivery.
2 Materials and methods

2.1 Materials preparation

Expansive clay soil was prepared. Tests were conducted on both untreated and treated soils to determine the datasets by observing the effects of stabilization on the predictor parameters presented in Appendix, Table 8, needed for evolutionary predictive modeling. The hydrated-lime activated rice husk ash (HARHA) is a hybrid geomaterial binder developed by blending rice husk ash with 5% by weight hydrated lime (Ca(OH)2) and allowing 24 h for the activation reaction to complete. The hydrated lime served as the alkali activator. Rice husk is an agro-industrial waste derived from rice processing in rice mills and homes and disposed of in landfills. Through the controlled direct combustion proposed by Onyelowe et al. (2019), the rice husk mass was turned into ash to form rice husk ash (RHA). The HARHA was used in varying proportions between 0.1% and 12% in increments of 0.1% to treat the clayey soil. The response behavior of the different properties was tested, observed, and recorded (see Table 8 in Appendix).
3 Methods where e is the vector of network errors. The iterations or tri-
als used to obtain the best feet in LMBP algorithm usually
3.1 The algorithms of Artificial Neural Network reduce the performance function.
(ANN)
3.3 Bayesian programming (BP) algorithm
3.2 Levenberg–Marquardt Backpropagation (LMBP) Algorithm

This algorithm was designed to overcome the computation of the Hessian matrix by approaching second-order training speed. As is usual in a feed-forward training network (FFTN) where the performance function has the form of a sum of squares, the Hessian matrix is usually estimated with Eq. 3, and this is usually an approximation (Kisi and Uncuoglu 2005):

$$H = J^{T} J \qquad (3)$$

where $J$ is the Jacobian matrix; $J$ contains the first derivatives of the network errors with respect to the biases and weights. The gradient is usually estimated with (Kisi and Uncuoglu 2005):

$$g = J^{T} e \qquad (4)$$

where $e$ is the vector of network errors. The iterations or trials used to obtain the best fit in the LMBP algorithm usually reduce the performance function.
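The text gives only the Hessian approximation and the gradient (Eqs. 3 and 4); the damped weight update below is the standard Levenberg–Marquardt step, added here as a hedged sketch (the damping factor mu is an assumption, not a parameter reported in the paper):

```python
import numpy as np

def lm_step(J, e, mu):
    """One hypothetical Levenberg-Marquardt update from Eqs. 3-4.

    J  : Jacobian of network errors w.r.t. weights/biases
    e  : vector of network errors
    mu : damping factor (assumed; standard LM ingredient)
    """
    H = J.T @ J                  # Eq. 3: Gauss-Newton approximation of the Hessian
    g = J.T @ e                  # Eq. 4: gradient of the performance function
    # Standard LM weight change: solve (H + mu*I) dw = -g
    dw = np.linalg.solve(H + mu * np.eye(H.shape[0]), -g)
    return dw

# Toy numbers: 4 errors, 3 adjustable weights (placeholders only).
rng = np.random.default_rng(0)
J = rng.normal(size=(4, 3))
e = rng.normal(size=4)
print(lm_step(J, e, mu=0.01))
```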
3.3 Bayesian programming (BP) algorithm

The Bayesian programming algorithm is a concept backed by Bayes's theorem, which states that any prior notions pertaining to an uncertain parameter are updated and modified based on new data to produce a posterior probability of the unknown quantity (Quan et al. 2015; Zhan et al. 2012). Bayes's theorem can be used within an ANN to compute the posterior distribution of the network weights (w) given a set of N target data y and an assumed model structure (Quan et al. 2015; Zhan et al. 2012).
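To make the prior-to-posterior update concrete, here is a minimal one-weight sketch assuming a Gaussian prior and Gaussian observation noise on a grid of candidate weights; it illustrates Bayes's theorem only and is not the MATLAB Bayesian-regularization routine used in the study:

```python
import numpy as np

# Hypothetical one-weight model y = w * x with Gaussian noise (sigma assumed).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])            # the N target data
sigma, prior_std = 0.2, 1.0                    # assumed noise and prior scales

w = np.linspace(-2.0, 3.0, 1001)               # grid of candidate weights
log_prior = -0.5 * (w / prior_std) ** 2        # Gaussian prior on w
resid = y[None, :] - w[:, None] * x[None, :]
log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)   # Gaussian likelihood

log_post = log_prior + log_like                # Bayes: posterior ∝ prior × likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum() * (w[1] - w[0])             # normalize over the grid

print("posterior mean of w:", np.sum(w * post) * (w[1] - w[0]))
```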
Fig. 1 a Levenberg–Marquardt Backpropagation Training Algorithm Flowchart (Kisi and Uncuoglu 2005). b Conjugate Gradient Training Algorithm Flowchart (Kisi and Uncuoglu 2005). c Bayesian Programming Training Algorithm Flowchart (Quan et al. 2015; Zhan et al. 2012)
3.4 Conjugate gradient (CG) algorithm

In the basic backpropagation algorithm, the weights are adjusted in the steepest descent direction (the most negative of the gradients), which is why the performance function (PF) is reduced most rapidly. Although this happens in an ANN model, it does not produce the fastest convergence. In the deployment of the CG algorithm (CGA), a search is performed along the conjugate directions, and a higher rate of convergence than along the steepest-descent direction is produced in the model (Kisi and Uncuoglu 2005). This is the key factor being exploited in CGA models. Additionally, the step size is modified at each trial in CGA, and the search along the CG direction determines the size of the step, which in turn minimizes the PF along that line. The algorithm starts by searching along the direction of steepest descent in the first iteration:

$$P_0 = -g_0 \qquad (5)$$

and, to obtain the optimal distance to travel along the search direction, a line search is performed (Kisi and Uncuoglu 2005):

$$x_{k+1} = x_k + \alpha_k g_k \qquad (6)$$
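A compact illustration of the CG idea in Eqs. 5 and 6, using a small quadratic performance function and an exact line search (both are stand-ins chosen for this sketch, since the paper does not specify the line-search rule):

```python
import numpy as np

# Quadratic stand-in for the performance function: PF(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
g = grad(x)
p = -g                                   # Eq. 5: first search direction P0 = -g0
for k in range(10):
    alpha = (g @ g) / (p @ A @ p)        # exact line-search step for a quadratic
    x = x + alpha * p                    # Eq. 6: move along the search direction
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves update of the direction
    p = -g_new + beta * p
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break
print("minimizer:", x)
```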
Fig. 3 Distribution histogram for input (in blue) and output (in green) parameters
Table 4 Statistical functions for input and output parameters
Parameters | Minimum | Maximum | Mean | Median | Standard deviation | Skewness | Kurtosis
Table 5 Setting parameters for the ANN model
Parameter | Setting
Sampling
  Training record | 85
  Validation/testing | 36
General
  Type | Input–output and curve fitting
  Number of hidden neurons | 10
  Training algorithm | Levenberg–Marquardt
  Data division | Random

… achieve the best fit. The input data are consistently fed to the network and, in each instance, the calculated output result is compared with the actual result to obtain the error function. By so doing, the network's learnable parameters (weights and biases) are adjusted so as to decrease the network's error until the desired output result is achieved (Rezaei et al. 2009).

4 Results and discussions

4.1 Architecture of the ANN Model

Figure 2 represents the working of the ANN model. The Levenberg–Marquardt (LM) backpropagation algorithm was used in model development. During the forward pass, weights are assigned to the variables according to the desired output values. Depending on the evaluation criteria, the weights are readjusted to minimize the errors (Rezaei et al. 2009; Shi and Zhu 2008).
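For readers working outside MATLAB, a hypothetical sketch of the same 7-10-3 topology is shown below using scikit-learn; note that scikit-learn offers no Levenberg–Marquardt trainer, so the solver used here ('lbfgs') and the random placeholder data are assumptions meant only to illustrate the architecture, not to reproduce the paper's training:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# 121 hypothetical records: 7 predictors (HARHA, wL, wP, IP, wOMC, AC, delta_max)
# and 3 targets (CBR, UCS, R); random placeholders stand in for Table 8 data.
rng = np.random.default_rng(42)
X = rng.random((121, 7))
y = rng.random((121, 3))

# 7-10-3 topology: 7 inputs, one hidden layer of 10 neurons, 3 outputs.
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))
```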
Fig. 4 Training state of ANN model for (a) CBR, (b) UCS, (c) R
Fig. 5 Best performance validation for (a) CBR, (b) UCS, (c) R with corresponding epochs
4.2 Pearson correlation analysis

Following previous studies, the current research employed Pearson correlation coefficients to measure the linear relationship between the input and output parameters (Fan et al. 2002; Adler and Parmryd 2010; Benesty et al. 2008). The use of HARHA influenced the values of CBR, UCS, and R in an almost similar manner, as presented in Tables 1, 2 and 3. The value of CBR depicted a strong positive linear relationship with the use of HARHA. In contrast, the liquid limit, plastic limit, plasticity index, and clay activity manifested a similar intensity of negative relation with CBR. The value of CBR seems to be unaffected relative to the OMC. The maximum dry density greatly influenced the value of CBR, depicting a strong positive relationship. A similar type of trend was observed for the UCS and R values.

The distribution histograms were plotted for the input and output parameters, as shown in Fig. 3. Slight or no skewness was observed in both types of parameters used. The essential statistical functions have been listed in Table 4, depicting the satisfying values of skewness and kurtosis.
Fig. 7 Experimental and predicted trends for (a) CBR, (b) UCS, (c) R with error analysis
Table 6 Calculation of statistical parameters for performance evaluation of the proposed models
Model | Statistical parameter | Training set | Testing set | Validation set
CBR | MAE | 0.0962 | 0.2198 | 0.1649
CBR | RSE | 3.17E-6 | 0.00043 | 0.0006
CBR | RMSE | 4.98 | 4.76 | 1.19
UCS | MAE | 0.42 | 0.92 | 1.27
UCS | RSE | 8.9E-8 | 0.02 | 0.08
UCS | RMSE | 12.4 | 14.12 | 1.19
R | MAE | 0.57 | 0.035 | 0.038
R | RSE | 2.39E-5 | 0.0096 | 0.0097
R | RMSE | 4.32 | 4.93 | 1.19

4.3 Statistical functions for input and output parameters of the model

The setting parameters and statistical functions of the ANN models are listed in Tables 4 and 5. 70% of the total data set was used for training the model, while the remaining 30% of the data was divided equally between the testing and validation data sets. The analysis was carried out via the machine learning toolbox of MATLAB R2020b. Ten hidden neurons were used, based on the best model as per the evaluation criteria, as shown in the setting parameters for the ANN models presented in Table 5. The ANN was allowed to randomly pick the data points for the training and validation data sets.
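A small sketch of the random 70/15/15 division described above (the seed and shuffling routine are assumptions; the record counts match Table 5):

```python
import numpy as np

n = 121                                   # number of records in Table 8
rng = np.random.default_rng(0)
idx = rng.permutation(n)                  # random data division, as in the ANN setting

n_train = int(round(0.70 * n))            # ~85 records for training
n_test = (n - n_train) // 2               # remaining 30% split equally: 18 for testing
train_idx = idx[:n_train]
test_idx = idx[n_train:n_train + n_test]
val_idx = idx[n_train + n_test:]          # 18 for validation

print(len(train_idx), len(test_idx), len(val_idx))   # 85 18 18
```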
Table 7 Performance index and objective function of the proposed model
Model | Statistical parameter | Training set | Testing set | Validation set | OBF
CBR | R2 | 0.9998 | 0.9996 | 0.9994 | 0.077
CBR | RRMSE | 0.20 | 0.20 | 0.05 |
CBR | ρ | 0.100 | 0.104 | 0.028 |
UCS | R2 | 0.999 | 0.989 | 0.935 | 0.028
UCS | RRMSE | 0.08 | 0.07 | 0.08 |
UCS | ρ | 0.04 | 0.035 | 0.0002 |
R | R2 | 0.87 | 0.99 | 0.99 | 0.09
R | RRMSE | 0.23 | 0.20 | 0.04 |
R | ρ | 0.12 | 0.10 | 0.02 |
Fig. 8 Comparison of best performance validation (1) CBR model (2) UCS model (3) R model using the Bayesian algorithm (a), (d), (g), conjugate gradient algorithm (b), (e), (h), and LMBP algorithm (c), (f), (i)
4.4 ANN model training and performance validation

Figure 4 manifests the training state of the ANN models. The gradient was reduced to 0.1666 after 20 iterations for the CBR model, whereas the minimum gradient for the UCS and R models was achieved in 15 and 14 iterations, respectively.

The validation of the models was carried out using the mean absolute error (MAE). The best performance for the validation of the CBR model was achieved in the 14th iteration, and the error observed was 0.16405, as depicted in Fig. 5. Similarly, the best validation performance of the UCS and R models was achieved at 9 and 8 epochs, respectively. The MAE values observed at these epochs were 2.9159 and 0.00768, respectively.

The error histograms of the three models are drawn in Fig. 6, which reflect the strong correlation of the experimental and predicted results. Almost 95% of the data yield an error of less than 1%.

The comparison of experimental results and predicted values is presented in Fig. 7 and Tables 6 and 7. The coefficient of determination (R2) for the three models is greater than 0.95, representing a very strong agreement between the experimental and predicted results. Other functions, such as the mean absolute error (MAE) (Benesty 2009; Willmott and Matsuura 2005), relative squared error (RSE), root mean squared error (RMSE) (Willmott et al. 2009), relative root mean square error (RRMSE), performance indicator (ρ) (Iqbal et al. 2020; Babanajad et al. 2017), and objective function (OBF), were also used for the model evaluation. The mathematical equations of the statistical evaluation functions are presented in Eqs. 9–16.

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(e_i - m_i\right)^2}{n}} \qquad (9)$$
Fig. 9 Comparison of training states of (1) CBR model (2) UCS model (3) R model using the Bayesian algorithm (a), (d), (g), conjugate gradient algorithm (b), (e), (h), and LMBP algorithm (c), (f), (i)
$$\mathrm{MAE} = \frac{\sum_{i=1}^{n}\left|e_i - m_i\right|}{n} \qquad (10)$$

$$\mathrm{RSE} = \frac{\sum_{i=1}^{n}\left(m_i - e_i\right)^2}{\sum_{i=1}^{n}\left(e_i - \bar{e}\right)^2} \qquad (11)$$

$$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(e_i - p_i\right)^2}{\sum_{i=1}^{n}\left(m_i - \bar{m}\right)^2} \qquad (12)$$

$$\mathrm{RRMSE} = \frac{1}{\left|\bar{e}\right|}\sqrt{\frac{\sum_{i=1}^{n}\left(e_i - m_i\right)^2}{n}} \qquad (13)$$

$$R = \frac{\sum_{i=1}^{n}\left(e_i - \bar{e}\right)\left(m_i - \bar{m}\right)}{\sqrt{\sum_{i=1}^{n}\left(e_i - \bar{e}\right)^2\,\sum_{i=1}^{n}\left(m_i - \bar{m}\right)^2}} \qquad (14)$$

$$\rho = \frac{\mathrm{RRMSE}}{1 + R} \qquad (15)$$

$$\mathrm{OBF} = \left(\frac{n_T - n_v}{n}\right)\rho_T + 2\left(\frac{n_v}{n}\right)\rho_v \qquad (16)$$

where $e_i$ and $m_i$ are the ith experimental and model values, respectively; $\bar{e}$ and $\bar{m}$ denote the average values of the experimental and model results, respectively; $p_i$ denotes the predicted value in Eq. 12; and $n$ is the number of samples in the data set. The subscripts T and V represent the training and validation data, and $n$ is the total number of sample points.
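The evaluation functions above translate directly into a few lines of code; the sketch below implements Eqs. 9–11 and 13–16 (Eq. 12 is omitted) and checks them on placeholder numbers rather than the study's data:

```python
import numpy as np

def eval_metrics(e, m):
    """Statistical evaluation functions of Eqs. 9-11 and 13-15 for experimental e and model m values."""
    e, m = np.asarray(e, float), np.asarray(m, float)
    n = e.size
    rmse = np.sqrt(np.sum((e - m) ** 2) / n)                      # Eq. 9
    mae = np.sum(np.abs(e - m)) / n                               # Eq. 10
    rse = np.sum((m - e) ** 2) / np.sum((e - e.mean()) ** 2)      # Eq. 11
    rrmse = rmse / abs(e.mean())                                  # Eq. 13
    r = np.corrcoef(e, m)[0, 1]                                   # Eq. 14
    rho = rrmse / (1.0 + r)                                       # Eq. 15
    return {"RMSE": rmse, "MAE": mae, "RSE": rse, "RRMSE": rrmse, "R": r, "rho": rho}

def obf(n_train, n_val, rho_train, rho_val):
    """Eq. 16: objective function combining training and validation performance indices."""
    n = n_train + n_val
    return ((n_train - n_val) / n) * rho_train + 2.0 * (n_val / n) * rho_val

# Toy check with illustrative numbers (not values from the paper).
exp = [5.0, 7.0, 9.0, 11.0]
mod = [5.2, 6.8, 9.3, 10.9]
print(eval_metrics(exp, mod))
print(obf(n_train=85, n_val=18, rho_train=0.10, rho_val=0.03))
```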
Fig. 10 Comparison of errors of (1) CBR model (2) UCS model (3) R model using the Bayesian algorithm (a), (d), (g), conjugate gradient algorithm (b), (e), (h), and LMBP algorithm (c), (f), (i)
All statistical error evaluation functions satisfied the performance of the three models. The proximity of the OBF values to zero reflects that the models are not overfitted.

4.5 Comparative analysis of employed algorithms

The three algorithms were compared in terms of the iterations required for the best training and validation performance using the mean absolute error (MAE), as shown in Fig. 8. The Levenberg–Marquardt algorithm required the smallest number of iterations, followed by the conjugate gradient algorithm and then the Bayesian algorithm. The minimum error for the training data set achieved using the Bayesian algorithm was recorded after 366, 558, and 160 iterations for the CBR, UCS, and R models, respectively. In contrast, the minimum MAE for the validation data set using the conjugate gradient algorithm was observed after 30, 12, and 9 iterations for the CBR, UCS, and R models, respectively. For the LMBP algorithm, these observations were recorded at 14, 9, and 8 epochs for the three models, i.e., CBR, UCS, and R, respectively. The authors observed a similar pattern in the number of iterations during the models' training states, as depicted in Fig. 9. The error analysis (see Fig. 10) illustrated that the LMBP algorithm outclasses the other two types of algorithms regarding close agreement with the experimental values, and these results agree with the findings of Alaneme et al. (2020).
However, the Bayesian and conjugate gradient algorithms also yielded acceptable errors for the specific problem. The extent of error was smaller in the case of the Bayesian algorithm relative to the conjugate gradient algorithm.
5 Conclusions

From the preceding comparative model prediction utilizing the Levenberg–Marquardt Backpropagation (LMBP), Bayesian, and Conjugate Gradient (CG) algorithms of the evolutionary Artificial Neural Network (ANN) for the prediction of the behavior of hydrated-lime activated rice husk ash (HARHA) modified expansive soil for sustainable earthwork in a smart environment, the following can be remarked:

• The predicted models accurately followed the experimental trend with a very close agreement.
• While comparing the effect of changing algorithms, it was concluded that the LMBP algorithm yields an accurate estimation of the results in comparatively fewer iterations than the Bayesian and the conjugate gradient algorithms, hence showing a faster rate of computing.
• The pattern Bayesian > Conjugate gradient > LMBP was observed for the required number of iterations to minimize the error.
• Finally, the LMBP algorithm outclasses the other two algorithms in terms of the predicted models' accuracy (Table 8).
Table 8 121 datasets of input and output parameters
HARHA (%) | wL (%) | wP (%) | IP (%) | wOMC (%) | AC | δmax (g/cm³) | CBR (%) | UCS28 (kN/m²) | R value
(Input soil hydraulic-prone properties: wL, wP, IP, wOMC, AC, δmax; output soil strength properties: CBR, UCS28, R)
References

Abdi Y, Momeni E, Khabir RR (2020) A reliable PSO-based ANN approach for predicting unconfined compressive strength of sandstones. Open Construction Building Technol J 14:237–249. https://doi.org/10.2174/1874836802014010237
Adler J, Parmryd I (2010) Quantifying colocalization by correlation: Pearson correlation coefficient is superior to the Mander's overlap coefficient. Cytometry A 77(8):733–742
Alaneme GU, Onyelowe KC, Onyia ME, Bui Van D, Mbadike EM, Ezugwu CN, Dimonyeka MU, Attah IC, Ogbonna C, Abel C, Ikpa CC, Udousoro IM (2020) Modeling volume change properties of hydrated-lime activated rice husk ash (HARHA) modified soft soil for construction purposes by artificial neural network (ANN). Umudike J Eng Technol (UJET) 6(1):1–12. https://doi.org/10.33922/j.ujet_v6i1_9
Babanajad SK, Gandomi AH, Alavi AH (2017) New prediction models for concrete ultimate strength under true-triaxial stress states: an evolutionary approach. Adv Eng Softw 110:55–68
Benesty J et al (2009) Pearson correlation coefficient. In: Noise reduction in speech processing. Springer, pp 1–4
Benesty J, Chen J, Huang Y (2008) On the importance of the Pearson correlation coefficient in noise reduction. IEEE Trans Audio Speech Language Proc 16(4):757–765
BS 1377-2, 3 (1990) Methods of testing soils for civil engineering purposes. British Standard Institute, London
BS 5930 (2015) Methods of soil description. British Standard Institute, London
BS 1924 (1990) Methods of tests for stabilized soil. British Standard Institute, London
Erzin Y, Turkoz D (2016) Use of neural networks for the prediction of the CBR value of some Aegean sands. Neural Comput Applic 27:1415–1426. https://doi.org/10.1007/s00521-015-1943-7
Fan X et al (2002) An evaluation model of supply chain performances using 5DBSC and LMBP neural network algorithm
Ferentinou M, Fakir M (2017) An ANN approach for the prediction of uniaxial compressive strength of some sedimentary and igneous rocks in Eastern KwaZulu-Natal. Symp Int Soc Rock Mech Proc Eng 191:1117–1125. https://doi.org/10.1016/j.proeng.2017.05.286
Hosseini M, Naeini SARM, Dehghani AA, Zeraatpisheh M (2018) Modeling of soil mechanical resistance using intelligent methods. J Soil Sci Plant Nutr 18(4):939–951
Iqbal MF et al (2020) Prediction of mechanical properties of green concrete incorporating waste foundry sand based on gene expression programming. J Hazard Mater 384:121322
Kingston GB, Maier HR, Lambert MF (2016) A Bayesian approach to artificial neural network model selection. Centre Appl Model Water Eng School Civ Environ Eng Univ Adelaide Bull 6:1853–1859
Kisi O, Uncuoglu E (2005) Comparison of three back-propagation training algorithms for two case studies. Indian J Eng Materials Sci 12:434–442
Nawi NM, Khan A, Rehman MZ (2013) A new Levenberg Marquardt based back propagation algorithm trained with Cuckoo search. In: The 4th international conference on electrical engineering and informatics (ICEEI 2013), Procedia Technology 11:18–23. https://doi.org/10.1016/j.protcy.2013.12.157
Onyelowe KC, Van Bui D, Ubachukwu O et al (2019) Recycling and reuse of solid wastes; a hub for ecofriendly, ecoefficient and sustainable soil, concrete, wastewater and pavement reengineering. Int J Low-Carbon Technol 14(3):440–451. https://doi.org/10.1093/Ijlct/Ctz028
Onyelowe KC, Onyia ME, Onukwugha ER, Nnadi OC, Onuoha IC, Jalal FE (2020) Polynomial relationship of compaction properties of silicate-based RHA modified expansive soil for pavement subgrade purposes. Epitőanyag—J Silicate Based Composite Materials 72(6):223–228. https://doi.org/10.14382/epitoanyag-jsbcm.2020.36
Onyelowe KC, Onyia M, Onukwugha ER, Bui Van D, Obimba-Wogu J, Ikpa C (2020) Mechanical properties of fly ash modified asphalt treated with crushed waste glasses as fillers for sustainable pavements. Epitőanyag—J Silicate Based and Composite Materials 72(6):219–222. https://doi.org/10.14382/epitoanyag-jsbcm.2020.35
Onyelowe KC, Alaneme GU, Onyia ME, Bui Van D, Diomonyeka MU, Nnadi E, Ogbonna C, Odum LO, Aju DE, Abel C, Udousoro IM, Onukwugha E (2021) Comparative modeling of strength properties of hydrated-lime activated rice-husk-ash (HARHA) modified soft soil for pavement construction purposes by artificial neural network (ANN) and fuzzy logic (FL). Jurnal Kejuruteraan 33(2)
Quan S, Sun P, Wu G, Hu J (2015) One Bayesian network construction algorithm based on dimensionality reduction. In: 5th international conference on computer sciences and automation engineering (ICCSAE 2015), Atlantis Publishers, pp 222–229
Rezaei K, Guest B, Friedrich A, Fayazi F, Nakhaei M, Beitollahi A et al (2009) Feed forward neural network and interpolation function models to predict the soil and subsurface sediments distribution in Bam, Iran. Acta Geophys 57:271–293. https://doi.org/10.2478/s11600-008-0073-3
Salahudeen AB, Sadeeq JA, Badamasi A, Onyelowe KC (2020) Prediction of unconfined compressive strength of treated expansive clay using back-propagation artificial neural networks. Nigerian Journal of Engineering, Faculty of Engineering, Ahmadu Bello University, Samaru-Zaria, Nigeria 27(1):45–58. ISSN: 0794-4756
Saldaña M, Pérez-Rey JGI, Jeldres M, Toro N (2020) Applying statistical analysis and machine learning for modeling the UCS from P-wave velocity, density and porosity on dry travertine. Appl Sci 10:4565. https://doi.org/10.3390/app10134565
Sariev E, Germano G (2019) Bayesian regularized artificial neural networks for the estimation of the probability of default. Quantitative Finance 20(2):311–328. https://doi.org/10.1080/14697688.2019.1633014
Shi BH, Zhu XF (2008) On improved algorithm of LMBP neural networks. Control Eng China 2008(2):016
Van B Duc, Onyelowe KC (2018) Adsorbed complex and laboratory geotechnics of Quarry Dust (QD) stabilized lateritic soils. Environ Technol Innovation 10:355–368. https://doi.org/10.1016/j.eti.2018.04.005
Van Bui D, Onyelowe KC, Van Nguyen M (2018) Capillary rise, suction (absorption) and the strength development of HBM treated with QD base geopolymer. Int J Pavement Res Technol [in press]. https://doi.org/10.1016/j.ijprt.2018.04.003
Willmott CJ, Matsuura K (2005) Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim Res 30(1):79–82
Willmott CJ, Matsuura K, Robeson SM (2009) Ambiguities inherent in sums-of-squares-based error statistics. Atmos Environ 43(3):749–752
Zhan Z, Fu Y, Yang RJ et al (2012) A Bayesian inference based model interpolation and extrapolation. SAE Int J Materials Manuf 5(2). https://doi.org/10.4271/2012-01-0223