JSFC - Volume 10 - Issue 1 - FGHBC
Abstract
This research is an extension of Alshawadfi (2003), which presented a new and effective method for forecasting ARMA models using one of the artificial intelligence techniques, the neural network method, and compared the proposed method with the Box-Jenkins method. The present research has two objectives:
The results showed that the proposed method has a high capability for forecasting ARMAX models using an artificial intelligence technique (the neural network method): it forecasts the future values of a given time series automatically. The study of 32000 generated samples showed that the mean squared forecast error (MSE), especially for small samples, the mean absolute deviation of the forecast error (MAD), and the mean percentage measure (MAEP) were all better than their counterparts under the Box-Jenkins approach, as an overall average over all samples, all models, and the first three future observations. Accordingly, the proposed method can be judged suitable for forecasting the future values of time series generated from ARMAX models.
[1] Introduction
What is required today is the availability of accurate databases that represent the phenomenon or phenomena under study, together with a scientific approach and analytical tools for electronic processing, rather than manual processing, of these data. The goal is to obtain information sufficient in quantity and quality for the needs of researchers and decision makers.
Artificial intelligence is defined as a science comprising a set of new methods and techniques for programming computer systems, used to develop systems that imitate some elements of human intelligence and allow them to perform inferential operations on facts and rules represented in the computer's memory. In other words, artificial intelligence is the science of how to make a machine (that is, a computer) perform operations analogous to human mental abilities.
Artificial intelligence emerged in the 1950s as a result of the revolution in the fields of information and automatic control, and its research aims at
achieving two main goals. The first is to reach a deep understanding of human intelligence by simulating it; the second is to make the best use of the computer and to exploit all its capabilities, especially after the rapid growth in computing power and the fall in its price. Its languages are Lisp, the List Processing Language, and Prolog, the language of Programming in Logic; these languages give the programmer great flexibility in writing programs.
Fields related to this science include cybernetics, robotics, computer-aided instruction (CAI), computer-aided design (CAD), machine translation (MT), automatic pattern recognition, and electronic games such as chess, among others.
Artificial intelligence comprises many branches, the most important of which are expert systems, data mining, genetic algorithms, and neural networks.
In this research we show how one of these branches, neural networks, can be used in time series analysis.
This research is an extension of Alshawadfi (2003), which proposed a new and effective method for forecasting ARMA models and compared the proposed method with the Box-Jenkins method. The present research has two objectives:
Objective 1: To generalize the method of Alshawadfi (2003) to forecast time series generated from ARMAX models using artificial neural networks (ANN). To achieve this objective, 32000 samples of different sizes are generated from ARMAX models with different parameters and used to train the network; the forecasts are then compared with the true values to measure their accuracy.
Objective 2: To compare the performance of the proposed artificial neural network method with that of the Box-Jenkins method, to establish which is better for forecasting ARMAX models, by measuring the forecast accuracy of the two methods using three measures:
1- The Mean of Squared Error (MSE)
2- The Mean Absolute Deviation (MAD)
3- The percentage of cases in which each method attains the smaller absolute error, The Percentage of cases of Minimum Absolute Error (MAEP)
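As an illustration only (the paper's own programs form the MATLAB toolbox reproduced in the appendix), the three accuracy measures can be sketched in Python; the function name `forecast_accuracy` is ours, while the tie-handling rule (ties split half-and-half between the two methods) follows the appendix MATLAB listing:

```python
import numpy as np

def forecast_accuracy(y_true, f_nn, f_bj):
    """Compare NN and Box-Jenkins forecasts with the paper's three measures.

    Returns (MSE_nn, MSE_bj), (MAD_nn, MAD_bj), and MAEP, the share of
    cases in which the NN forecast has the smaller absolute error
    (ties counted half to each method, as in the appendix MATLAB code).
    """
    e_nn = np.abs(np.asarray(y_true, float) - np.asarray(f_nn, float))
    e_bj = np.abs(np.asarray(y_true, float) - np.asarray(f_bj, float))
    mse = (float(np.mean(e_nn ** 2)), float(np.mean(e_bj ** 2)))
    mad = (float(np.mean(e_nn)), float(np.mean(e_bj)))
    # 1 when NN is closer, 0.5 on a tie, 0 when BJ is closer
    wins = np.where(e_nn < e_bj, 1.0, np.where(e_nn == e_bj, 0.5, 0.0))
    maep = float(wins.mean())
    return mse, mad, maep
```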
The importance of the research lies in its use of one of the artificial intelligence techniques, the neural network method, to forecast time series generated from ARMAX models, as a new method that can be used to forecast many phenomena in many fields, since it has advantages over the traditional statistical methods,
among them: it helps obtain good forecasts, since it yields a smaller sum of squared errors for nonlinear models; the artificial neural network (ANN) method needs relatively smaller samples for model validation; forecasting with it is automatic; it works for both linear and nonlinear models; and it can be used to assess the ability of other statistical methods to estimate models and use them in forecasting.
[2] Artificial Neural Networks
Artificial neural networks (ANN) are one of the most important artificial intelligence techniques. Their idea centers on using the computer to imitate the human brain's ability to recognize patterns and distinguish objects, by following the self-learning process that occurs in the brain, in which previous experience is exploited to reach the best possible future results (see Haji and Al-Muhaimid (1999), p. 19).
Figure (2-1)
A model of a processing unit
[Diagram: inputs x1, x2, ..., xn enter the neuron with weights w1, w2, ..., wn; the neuron forms the weighted sum Y = Σ wi·xi, i = 1, ..., n, and the output is F(Y).]
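A minimal sketch of the processing unit in Figure (2-1), written in Python for illustration (the helper name `neuron` and the default tanh activation are our choices for the sketch, not the paper's):

```python
import math

def neuron(x, w, activation=math.tanh):
    """Processing unit of Figure (2-1): form the weighted sum
    Y = sum_i w_i * x_i of the inputs, then apply the activation F."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return activation(y)
```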
the output variables. The network compares the results it estimates for the output variables of each input sample with the actual values of those variables; on that basis it adjusts the connection weights so as to reduce the errors in the results, and the training process is repeated several times until acceptable results are reached.
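The predict-compare-adjust-repeat loop just described can be sketched, for a single linear unit, with the classical delta rule; this Python fragment is an illustrative assumption of ours (the paper trains its networks with MATLAB's `newff`/`train`), not the authors' algorithm:

```python
def train_supervised(samples, lr=0.1, epochs=200):
    """Delta-rule sketch of the supervised loop: predict the output,
    compare it with the actual value, adjust the connection weights to
    shrink the error, and repeat until the fit is acceptable."""
    n_inputs = len(samples[0][0])
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))          # network estimate
            err = target - y                                  # compare with actual
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # reduce the error
    return w
```

For example, training on the pairs (1, 2) and (2, 4) drives the single weight toward 2, the slope of the underlying rule y = 2x.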
[2-1-2] Unsupervised Learning
This method resembles supervised learning, except that the samples used in training contain no values for the output variables. The data entering the network consist of several segments or groups, and the network is trained to discover features that are not apparent in the training data set, and then to use those features to partition the input data into groups that differ from one another but are homogeneous within each group.
[2-1-3] Reinforcement Learning
This method is a mixture of the two preceding ones: the network is not told the true output values, as in unsupervised training, but it is told whether the results it obtains are right or wrong, as in supervised learning.
Typical Architectures
The organization of the neurons into layers, and the way these neurons are connected to form the network, is called the network architecture. In general, artificial neural network architectures can be divided into three main types: (1) the Single-Layer Feed-forward Network, (2) the Multi-Layer Feed-forward Network, and (3) the multi-layer network with
Figure (2-2)
The multi-layer feed-forward network
[Diagram: an input layer feeds a hidden layer of units with activation F, which feeds the output layer.]
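As an illustration of how signals flow through such a multi-layer feed-forward network, a forward pass can be sketched in Python (the function name `feed_forward` and the choice of a logistic activation for every unit are assumptions of ours for the sketch):

```python
import math

def feed_forward(x, layers):
    """One forward pass through a multi-layer feed-forward network:
    `layers` is a list of layers, each a list of weight vectors (one per
    unit); every unit applies the logistic activation F to its weighted sum."""
    out = x
    for layer in layers:
        out = [1.0 / (1.0 + math.exp(-sum(w * o for w, o in zip(ws, out))))
               for ws in layer]
    return out
```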
Figure (2-3)
The multi-layer network with feedback
[Diagram: input layer, hidden layer, and output layer with activation F; delay elements z feed the outputs back into the network.]
[3] The ARMAX Model
That is, in this model the current observation Yt is regressed on the past values of the series itself, on the current value Xt and the past values of another series, and likewise on the current and past random errors.
where:
Yt is the observation of Y at time t, t = 1, 2, 3, ..., n;
Xt is an exogenous variable, independent of et;
et is a sequence of independent random variables following a normal distribution with mean zero and variance σ²;
and B is the backward shift operator, moving one step back, that is:
Y_t = Σ_{i=1}^{p} φ_i Y_{t-i} + Σ_{j=0}^{h} ω_j X_{t-j} + ε_t − Σ_{k=1}^{q} θ_k ε_{t-k}    … … … (3-3)
This model can also be written in matrix form as follows:

Y = XW + E    … … … (3-4)

where X is the matrix of lagged inputs

X = [ y_m      y_{m-1}  ...  y_{m-p+1}   x_m      x_{m-1}  ...  x_{m-h+1}   ε_m      ε_{m-1}  ...  ε_{m-q+1} ]
    [ y_{m+1}  y_m      ...  y_{m-p+2}   x_{m+1}  x_m      ...  x_{m-h+2}   ε_{m+1}  ε_m      ...  ε_{m-q+2} ]
    [   ...      ...    ...     ...        ...      ...    ...     ...        ...      ...    ...     ...     ]
    [ y_{n-1}  y_{n-2}  ...  y_{n-p}     x_{n-1}  x_{n-2}  ...  x_{n-h}     ε_{n-1}  ε_{n-2}  ...  ε_{n-q}   ]

Y = ( y_{m+1}, y_{m+2}, ..., y_n )′
E = ( ε_{m+1}, ε_{m+2}, ..., ε_n )′ ,   m = max(p, h)    … … … (3-5)

We note that the lagged values y_{t-1} and x_{t-1} in the input matrix X are known, while ε_{t-1} is unknown; the estimated error ε̂_{t-1} is used in its place, and it may be given an initial value equal to zero.
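For illustration, a series can be generated from equation (3-3) as follows. This Python sketch (function name `simulate_armax` is ours) only loosely mirrors the sample-generation loop of the appendix MATLAB code; pre-sample values are taken as zero, and the minus sign on the moving-average terms follows that code:

```python
import random

def simulate_armax(phi, omega, theta, x, sigma=1.0, seed=0):
    """Generate one series from the ARMAX(p,h,q) model of equation (3-3):
    phi are the AR coefficients, omega the h+1 exogenous coefficients
    (lags 0..h), theta the MA coefficients; pre-sample terms are zero."""
    rng = random.Random(seed)
    n, p, h1, q = len(x), len(phi), len(omega), len(theta)
    e = [rng.gauss(0.0, sigma) for _ in range(n)]  # normal errors, mean 0
    y = []
    for t in range(n):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ex = sum(omega[j] * x[t - j] for j in range(h1) if t - j >= 0)
        ma = sum(theta[k] * e[t - 1 - k] for k in range(q) if t - 1 - k >= 0)
        y.append(ar + ex + e[t] - ma)
    return y
```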
where it is assumed that E(ε|X) = 0. The function μ(X) = E(Z|X) is usually unknown and is approximated by a function g(X, W), where W denotes the vector of parameters, called the weights, which form a subset of the parameter space and are estimated from the training data set (the sample); both the parameter space and the parameter vector depend on the approximating function g(X, W) that has been chosen.

Since the forecasts of the future observations y_{t+l} of an ARMAX model can be obtained as a weighted sum of the past observations of the series y_t and x_t, together with the random variable ε_t (see Box, Jenkins and Reinsel (1994), p. 446), the model becomes an ARX-type model, as follows:
y_{t+l} = Σ_{j=1}^{∞} π_j y_{t+l-j} + Σ_{j=1}^{∞} η_j X_{t+l-j} + ε_{t+l}    … (4-4)

where π_j and η_j are the weights, given by:

π(B) = φ(B)/θ(B) = 1 − π_1 B − π_2 B² − ...
η(B) = ω(B)/θ(B) = 1 − η_1 B − η_2 B² − ...
(4-5)
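The π- and η-weights of (4-5) are the power-series coefficients of the two polynomial ratios, so they can be computed to any order by long division; the small Python routine below (our own helper, `series_ratio`, not part of the paper) illustrates this:

```python
def series_ratio(num, den, order):
    """Coefficients of the power series num(B)/den(B) up to B**order.
    num and den are coefficient lists [c0, c1, ...] representing
    c0 + c1*B + c2*B**2 + ...; long division yields the pi/eta weights."""
    num = num + [0.0] * (order + 1 - len(num))
    den = den + [0.0] * (order + 1 - len(den))
    out = []
    for k in range(order + 1):
        # subtract the already-determined part of the product den * out
        acc = num[k] - sum(den[i] * out[k - i] for i in range(1, k + 1))
        out.append(acc / den[0])
    return out
```

For example, with φ(B) = 1 and θ(B) = 1 − 0.5B, the π-weights decay geometrically as 1, 0.5, 0.25, ...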
Figure (4-1)
The linear regression model as an artificial neural network
[Diagram: the inputs x1, x2, ..., xk and the constant 1 are connected through the weight vector W to a single output unit Z = XW.]
This figure shows how the linear regression model is represented as a neural network model. The structure consists of a single unit holding the linear input variables x1, x2, ..., xk together with the constant 1 and the vector of parameters (weights) W. This is the simplest mathematical network model and is called the Adaptive Linear Neuron (ADALINE). It contains no hidden layer; the information passes directly from the input layer to the final layer through a suitable transfer function:

Z = g(X, W) = XW    ... (4-9)

where Z is a one-dimensional variable (a single output unit), and the network model in this case is given by the (deterministic) mathematical relation:
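For illustration, the weights of the ADALINE model (4-9) can be estimated by ordinary least squares; this Python sketch (helper name `adaline_fit`) is an assumption of ours, not the paper's estimation procedure:

```python
import numpy as np

def adaline_fit(X, z):
    """Least-squares fit of the ADALINE model Z = XW of (4-9); a column
    of ones is appended for the constant input '1' of Figure (4-1)."""
    A = np.column_stack([np.asarray(X, float), np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, np.asarray(z, float), rcond=None)
    return w  # slope weights followed by the bias weight
```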
MSE
Table (5-1) below contains the MSE results for both the Box-Jenkins forecasts and the artificial neural network forecasts for the specified ARMAX models at different sample sizes, and Table (5-2) contains the MSE results for the three future observations at different sample sizes:
Table (5-1)
MSE values of the Box-Jenkins (BJ) and neural network (NN) forecasts, by sample size and model

(p,h,q)  | Method | n=25   | n=40   | n=60   | n=100  | n=150  | Average
(1,1,0)  | NN     | 0.0013 | 0.0022 | 0.0026 | 0.0034 | 0.0038 | 0.0026
(1,1,0)  | BJ     | 0.0037 | 0.0036 | 0.0034 | 0.0037 | 0.0034 | 0.0035
(1,2,0)  | NN     | 0.0015 | 0.0023 | 0.0026 | 0.0035 | 0.0037 | 0.0027
(1,2,0)  | BJ     | 0.0030 | 0.0027 | 0.0027 | 0.0030 | 0.0027 | 0.0028
(1,1,1)  | NN     | 0.0012 | 0.0018 | 0.0024 | 0.0030 | 0.0033 | 0.0023
(1,1,1)  | BJ     | 0.0036 | 0.0032 | 0.0029 | 0.0036 | 0.0028 | 0.0032
(0,1,0)  | NN     | 0.0010 | 0.0018 | 0.0021 | 0.0026 | 0.0027 | 0.0020
(0,1,0)  | BJ     | 0.0021 | 0.0019 | 0.0019 | 0.0021 | 0.0018 | 0.0020
(0,2,0)  | NN     | 0.0010 | 0.0017 | 0.0023 | 0.0023 | 0.0026 | 0.0019
(0,2,0)  | BJ     | 0.0019 | 0.0017 | 0.0018 | 0.0019 | 0.0017 | 0.0018
(0,1,1)  | NN     | 0.0014 | 0.0020 | 0.0029 | 0.0033 | 0.0033 | 0.0025
(0,1,1)  | BJ     | 0.0019 | 0.0019 | 0.0019 | 0.0019 | 0.0018 | 0.00188
(0,1,2)  | NN     | 0.0017 | 0.0021 | 0.0029 | 0.0038 | 0.0040 | 0.0029
(0,1,2)  | BJ     | 0.0020 | 0.0020 | 0.0020 | 0.0020 | 0.0019 | 0.00198
(0,2,1)  | NN     | 0.0015 | 0.0021 | 0.0028 | 0.0031 | 0.0035 | 0.0026
(0,2,1)  | BJ     | 0.0022 | 0.0020 | 0.0020 | 0.0022 | 0.0019 | 0.0021
Average  | NN     | 0.0013 | 0.0020 | 0.0026 | 0.0031 | 0.0026 | 0.0024
Average  | BJ     | 0.0025 | 0.0024 | 0.0023 | 0.0023 | 0.0022 | 0.0024
Table (5-2)
MSE values of the three future observations for the Box-Jenkins and neural network forecasts, by sample size

Sample size | Method | Zn+1   | Zn+2   | Zn+3   | Average
25          | NN     | 0.0013 | 0.0013 | 0.0013 | 0.0013
25          | BJ     | 0.0025 | 0.0027 | 0.0025 | 0.0026
40          | NN     | 0.0020 | 0.0019 | 0.0020 | 0.0020
40          | BJ     | 0.0023 | 0.0023 | 0.0025 | 0.0024
60          | NN     | 0.0026 | 0.0025 | 0.0026 | 0.0026
60          | BJ     | 0.0024 | 0.0024 | 0.0022 | 0.0023
100         | NN     | 0.0030 | 0.0032 | 0.0031 | 0.0031
100         | BJ     | 0.0024 | 0.0022 | 0.0023 | 0.0023
150         | NN     | 0.0034 | 0.0033 | 0.0034 | 0.0034
150         | BJ     | 0.0024 | 0.0023 | 0.0021 | 0.0022
Average     | NN     | 0.0024 | 0.0024 | 0.0025 | 0.0024
Average     | BJ     | 0.0024 | 0.0024 | 0.0023 | 0.0024
Figure (5-1)
MSE values of the Box-Jenkins and neural network forecasts by sample size
[Bar chart over sample sizes 25, 40, 60, 100, 150 and their average; series: NN, BJ.]
Figure (5-2)
MSE values of the Box-Jenkins and neural network forecasts for the ARMAX models
[Bar chart over the models (1,1,0) ... (0,2,1) and their average; series: NN, BJ.]
Figure (5-3)
MSE values of the three future observations for the Box-Jenkins and neural network forecasts
[Bar chart over Zn+1, Zn+2, Zn+3 and their average; series: NN, BJ.]
MAD
Table (5-3) below contains the MAD results for both the Box-Jenkins forecasts and the neural network forecasts for the specified ARMAX models at different sample sizes, and Table (5-4) contains the MAD results for the first three future observations.
Table (5-3)
MAD values of the Box-Jenkins and neural network forecasts, by sample size and ARMAX model

(p,h,q)  | Method | n=25   | n=40   | n=60   | n=100  | n=150  | Average
(1,1,0)  | NN     | 0.0004 | 0.0006 | 0.0006 | 0.0007 | 0.0007 | 0.0006
(1,1,0)  | BJ     | 0.0014 | 0.0014 | 0.0014 | 0.0014 | 0.0014 | 0.0014
(1,2,0)  | NN     | 0.0004 | 0.0006 | 0.0006 | 0.0007 | 0.0008 | 0.0006
(1,2,0)  | BJ     | 0.0013 | 0.0013 | 0.0013 | 0.0013 | 0.0013 | 0.0013
(1,1,1)  | NN     | 0.0004 | 0.0005 | 0.0006 | 0.0007 | 0.0007 | 0.0006
(1,1,1)  | BJ     | 0.0014 | 0.0013 | 0.0013 | 0.0013 | 0.0013 | 0.0013
(0,1,0)  | NN     | 0.0003 | 0.0005 | 0.0006 | 0.0006 | 0.0006 | 0.0005
(0,1,0)  | BJ     | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011
(0,2,0)  | NN     | 0.0003 | 0.0005 | 0.0006 | 0.0006 | 0.0006 | 0.0005
(0,2,0)  | BJ     | 0.0011 | 0.0010 | 0.0011 | 0.0011 | 0.0010 | 0.0011
(0,1,1)  | NN     | 0.0004 | 0.0005 | 0.0007 | 0.0007 | 0.0007 | 0.0006
(0,1,1)  | BJ     | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011
(0,1,2)  | NN     | 0.0005 | 0.0005 | 0.0007 | 0.0008 | 0.0008 | 0.0007
(0,1,2)  | BJ     | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011
(0,2,1)  | NN     | 0.0004 | 0.0005 | 0.0006 | 0.0007 | 0.0007 | 0.0006
(0,2,1)  | BJ     | 0.0012 | 0.0011 | 0.0011 | 0.0011 | 0.0011 | 0.0011
Average  | NN     | 0.0004 | 0.0005 | 0.0006 | 0.0007 | 0.0007 | 0.0006
Average  | BJ     | 0.0012 | 0.0012 | 0.0012 | 0.0012 | 0.0012 | 0.0012
Table (5-4)
MAD values for the Box-Jenkins and neural network methods, by sample size and the first three future observations.
[Bar chart of MAD values over sample sizes 25, 40, 60, 100, 150 and their average; series: NN, BJ.]
all those models. The overall average of the MAD values for the neural network forecasts, 0.0006, was also lower than its value for the Box-Jenkins forecasts, 0.0012. These results can be seen in the following chart:
Figure (5-5)
MAD values of the Box-Jenkins and neural network forecasts for the ARMAX models
[Bar chart over the models (1,1,0) ... (0,2,1) and their average; series: NN, BJ.]
Figure (5-6)
MAD values of the three future observations for the Box-Jenkins and neural network forecasts
[Bar chart over Zn+1, Zn+2, Zn+3 and their average; series: NN, BJ.]
MAEP
Table (5-5) below contains the results of the MAEP measure, the average percentage of cases in which the neural network forecasts attain a smaller absolute error than the Box-Jenkins forecasts, for the specified ARMAX models at different sample sizes, and Table (5-6) contains the MAEP results for the first three future observations.
Table (5-5)
MAEP values for the ARMAX models at different sample sizes

(p,h,q)  | n=25   | n=40   | n=60   | n=100  | n=150  | Average
(1,1,0)  | 2.1287 | 1.5274 | 1.4622 | 1.4634 | 1.4471 | 1.60576
(1,2,0)  | 2.0014 | 1.5013 | 1.5107 | 1.3637 | 1.2904 | 1.5335
(1,1,1)  | 1.9970 | 1.5993 | 1.3715 | 1.3023 | 1.2854 | 1.5111
(0,1,0)  | 2.1024 | 1.3547 | 1.4354 | 1.4337 | 1.3918 | 1.5436
(0,2,0)  | 2.1362 | 1.6115 | 1.2716 | 1.4411 | 1.4306 | 1.5782
(0,1,1)  | 1.9410 | 1.5951 | 1.3574 | 1.3975 | 1.4169 | 1.54158
(0,1,2)  | 1.6868 | 1.6167 | 1.3928 | 1.2024 | 1.2324 | 1.42622
(0,2,1)  | 1.9209 | 1.4985 | 1.4188 | 1.4579 | 1.3700 | 1.53322
Average  | 1.9893 | 1.5381 | 1.4026 | 1.3828 | 1.3581 | 1.534148
Table (5-6)
MAEP values for the three future observations
Figure (5-7)
Average MAEP values by sample size
[Bar chart over sample sizes 25, 40, 60, 100, 150 and their average.]
[A second bar chart follows, showing the MAEP values over the models (1,1,0) ... (0,2,1) and their average.]
By forecast horizon: we note in Table (5-6) that the values of the MAEP measure, the average percentage of cases in which the neural networks attain a smaller absolute error than Box-Jenkins, are large for the three future observations, in the sense that they exceed 100%; this indicates that the performance of the neural network method is superior to that of the Box-Jenkins method over the forecast period. These results can be seen in the following chart.
Figure (5-9)
MAEP values for the three future observations
[Bar chart over Zn+1, Zn+2, Zn+3 and their average.]
[6] Conclusion
This research is an extension of Alshawadfi (2003), which proposed a new and effective method for forecasting ARMA models using one of the artificial intelligence techniques, the neural network method, and compared the proposed method with the Box-Jenkins method. The present research has two objectives. The first is to generalize the Alshawadfi method to forecast time series generated from ARMAX models using artificial neural networks (ANN); to achieve this objective, 32000 samples of different sizes were generated from ARMAX models with different parameters and used to train the network.
The results demonstrated the high capability of the proposed method for forecasting ARMAX models using an artificial intelligence technique (the neural network method) to forecast the future values of a given time series automatically. The study of the 32000 generated samples showed that the mean squared forecast error (MSE), especially for small samples, the mean absolute deviation of the forecast error (MAD), and the mean percentage measure (MAEP) were better than their counterparts under the Box-Jenkins method.
References
First: Arabic references
(1) Jaafar Mohammed Haji and Mohammed Abdul-Hadi Al-Muhaimid (1999). "Neural Networks: Forecasting the Exchange Rate of the Kuwaiti Dinar against the US Dollar", Arab Journal of Administrative Sciences, Vol. 6, No. 1, January 1999, pp. 17-35.
(2) Michael Negnevitsky (2004). "Artificial Intelligence: A Guide to Intelligent Systems", translated into Arabic by Sorour Ali Ibrahim Sorour, Dar Al-Marikh Publishing, Riyadh, Saudi Arabia, p. 252.
Second: Foreign references
(1) Arminger, G. and Enache, D. (1996), "Statistical Models
and Artificial Neural Networks". In: Bock, H.H. and
Polasek, W. (Eds.): Data Analysis and Information
Systems, Vol. 7, Springer Verlag, Heidelberg, 243-260.
(2) Al-Shawadfi, Gamal A.(1994). "Bayesian Inference of
ARMAX Models", Scientific Magazine, Faculty of
Commerce, Al-Azhar University, Cairo, Egypt, Vol. 20
July 1994.
(3) Al-Shawadfi, Gamal A.(1996). "Bayesian Estimation for
the Parameters of the Seasonal ARMAX Models",
Scientific Magazine, Faculty of Commerce, Ain Shams
University Cairo, Egypt, Vol. 1 July 1996,PP. 139-151.
MATLAB TOOLBOX
% Toolbox for time series forecasting: training & testing using the neural network technique
% file name: train2010 ... output files: out2010.mat, out2010
diary('outout2010')
clear all;
tic;
mu=0; sigma=1; mm=60; m=mm-10; n=1; m0=500; n1=32; n2=8; n3=4; h=3;
ss01(n2,h)=0.0;ss02(n2,h)=0.0;sb01(n2,h)=0.0;sb02(n2,h)=0.0;ss(n2,h)=0;sb(n2,h)=0;
p=[1 1 0; 1 1 0; 1 1 0; 1 1 0; 1 2 0; 1 2 0; 1 2 0; 1 2 0; 1 1 1; 1 1 1; 1 1 1;1 1 1;
0 1 0; 0 1 0; 0 1 0; 0 1 0; 0 2 0; 0 2 0; 0 2 0; 0 2 0; 0 1 1; 0 1 1; 0 1 1; 0 1 1;
0 1 2; 0 1 2; 0 1 2; 0 1 2; 0 2 1; 0 2 1; 0 2 1; 0 2 1];
a=[.3 .5 .7 .9 .3 .5 .7 .9 .3 .5 .7 .9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
.3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9 .3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9 .3 .5 .7 .9 .3 .3 .5 .5;
0 0 0 0 -.5 .5 -.7 .3 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5;
0 0 0 0 0 0 0 0 .3 .5 .7 .9 0 0 0 0 0 0 0 0 .3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5 0 0 0 0];
' 1....... initialization arrays ...... ....';
for i=1:m0;
e(mm,1)=0.0;e0(m,1)=0.0;
E(mm,1)=0.0; E0(m,1)=0.0;
x(mm,n1)=0;y(mm,n1)=0;
% .......generating samples.. ....;
e=normrnd(mu,sigma,mm,n);
E=normrnd(mu,sigma,mm,n);
x(1,:) = e(1)*ones(1,n1);
x(2,:) = e(2)*ones(1,n1) + a(3,:).*x(1,:);
y(1,:) = E(1)*ones(1,n1);
y(2,:) = E(2)*ones(1,n1)+ a(1,:).*y(1,:) + a(3,:).*x(2,:)- a(4,:).*x(1,:) -a(6,:)*E(1);
for i1=3:mm;
x(i1,:) = e(i1)*ones(1,n1) + a(4,:).*x(i1-1,:) + a(5,:).*x(i1-2,:);
y(i1,:) = E(i1)*ones(1,n1) + a(1,:).*y(i1-1) + a(2,:).*y(i1-2)+ a(3,:).*x(i1) -
a(4,:).*x(i1-1) - a(5,:).*x(i1-2)-a(6,:).*E(i1-1,:) -a(7,:).*E(i1-2,:);
end;
x0=x(11:mm,:); x1=x0(1:(m-h),:); x2=x0((m-h+1):m,:);
y0=y(11:mm,:); y1=y0(1:(m-h),:); y2=y0((m-h+1):m,:);
z=[x0; y0] ; z0=unstkc(z,m,2*n1); z1=z0(1:(m-h),:) ;z2=z0((m-h+1):m,:);
if i==1;
xx = x0 ; xx1= x1 ; xx2 = x2 ;
yy = y0 ; yy1= y1 ; yy2 = y2 ;
zz = z0 ; zz1= z1 ; zz2 = z2 ;
else;
xx = [xx x0] ; xx1= [xx1 x1] ; xx2 = [xx2 x2 ];
yy = [yy y0] ; yy1= [yy1 y1] ; yy2 = [yy2 y2] ;
zz = [zz z0] ; zz1= [zz1 z1] ; zz2 = [zz2 z2];
end; end;
%2.........transforming data............;
xx0 = 0.8*(xx-ones(m,1)*min(xx))./(ones(m,1)*(max(xx)-min(xx)))+0.1;
xx01 = xx0(1:(m-h),:); xx02=xx0((m-h+1):m,:);
yy0 = 0.8*(yy-ones(m,1)*min(yy))./(ones(m,1)*(max(yy)-min(yy)))+0.1;
yy01 = yy0(1:(m-h),:); yy02=yy0((m-h+1):m,:);
zz0 = 0.8*(zz-ones(m,1)*min(zz))./(ones(m,1)*(max(zz)-min(zz)))+0.1;
zz01 = zz0(1:(m-h),:);zz02=zz0((m-h+1):m,:);
%3.........training ,testing and predicting phase............;
j00=0
for j = 1 : m0;
for j0 = 1 : n1;
j00 = j00 + 1
j1 = fix((j0-1)/n3)+1;
zz00 = zz0(:,2*j00-1:2*j00);zz001= zz01(:,2*j00-1:2*j00);
z000 = zz(:,2*j00-1:2*j00);z0001= zz1(:,2*j00-1:2*j00);
yy00 = yy0(:,j00) ; yy001= yy01(:,j00); yy002 = yy02(:,j00);
y000 = yy(:,j00) ; y0001= yy1(:,j00) ; y0002 = yy2(:,j00);
net =newff([0 1;0 1],[3 1],{'logsig' 'tansig'}) ;
% net=init(net);
net.trainparam.epochs =50;
%net.trainparam.gole =0.001;
net = train(net,zz00',yy00');
f = sim(net,zz00');
YN = min(y000)+ (f - 0.1)*(max(y000)- min(y000))/0.8;
pc = [p(j0,:), 1] ;
th = armax(z000,pc);
YB = predict(z000,th,h);
for j3=1:h;
s01 = abs(y0002(j3)-YN(m-h+j3));
b01 = abs(y0002(j3)-YB(m-h+j3));
s02 = (s01)^2;
b02 = (b01)^2;
ss01(j1,j3) = ss01(j1,j3) + s01;
sb01(j1,j3) = sb01(j1,j3) + b01;
ss02(j1,j3) = ss02(j1,j3) + s02;
sb02(j1,j3) = sb02(j1,j3) + b02;
if s01 < b01 ; ss(j1,j3) = ss(j1,j3)+1;
elseif s01 == b01; sb(j1,j3) = sb(j1,j3)+0.5; ss(j1,j3) = ss(j1,j3)+0.5;
else   % s01 > b01
    sb(j1,j3) = sb(j1,j3)+1;
end; end; end; end;
ss03=(ones(1,n2)*ss01)/(n2*n3*m0);
sb03=(ones(1,n2)*sb01)/(n2*n3*m0);
ss04=(ss01*ones(h,1))/(h*n3*m0);
sb04=(sb01*ones(h,1))/(h*n3*m0);
ss05=(ones(1,n2)*ss02)/(n2*n3*m0);
sb05=(ones(1,n2)*sb02)/(n2*n3*m0);
ss06=(ss02*ones(h,1))/(h*n3*m0);
sb06=(sb02*ones(h,1))/(h*n3*m0);
ss3=(ones(1,n2)*ss)/(n2*n3*m0);
sb3=(ones(1,n2)*sb)/(n2*n3*m0);
ss4=(ss*ones(h,1))/(h*n3*m0);
sb4=(sb*ones(h,1))/(h*n3*m0);
s1=sum(ss03)/h;
s11=sum(sb03)/h;
s2=sum(ss05)/h;
s22=sum(sb05)/h;
s3=sum(ss3)/h;
s4=sum(sb3)/h;
' 4 comparison between Neural Network and BOX JENKINS forecasts ... Final results ......';
disp 'mse results'
MSE=[ss02,ss06;[ss05,s2]]
MSE1=[sb02,sb06;[sb05,s22]]
Artificial intelligence and time series analysis
Prof. Dr. Gamal Alshawadfi Dr. Abd El-Wahab Hagag
Head of Statistics Department Assistant professor
MAIL: Dr_Gamal1@yahoo.com MAIL:Wahabstat@yahoo.com
Mobile: 0020-01066543923 Mobile: 0020-01224709386
Abstract
This paper has two objectives. First, we present an artificial neural
network method for forecasting linear and nonlinear ARMAX time series.
Second, we compare the proposed method with the well-known Box-Jenkins
method through a simulation study. To achieve these objectives, 32000
samples generated from different ARMAX models, with different sizes
(25, 40, 60, 100, 150), were used for the network training. Then the
system was tested on generated data. The accuracy of the neural network
forecasts (NNF) is compared with the corresponding Box-Jenkins forecasts
(BJF) using three tools: the mean square error (MSE), the mean absolute
deviation of error (MAD), and the percentage of cases of minimum
absolute error (MAEP). A suitable computer program (a MATLAB toolbox)
was designed for NN training, testing, and comparison with the
Box-Jenkins method.
The forecasts of the proposed NN approach, as shown by the three
measures, provide better results than the classical Box-Jenkins
forecasting approach. The results suggest that the ANN approach may
provide a superior alternative to the Box-Jenkins forecasting approach
for developing forecasting models in situations that do not require
modeling of the internal structure of the series.
The numerical results show that the proposed approach has a good
performance for the forecasting of ARMAX(p,h,q) models.