JSFC - Volume 10 - Issue 1 - FGHBC


Artificial Intelligence and Time Series Analysis

Dr. Gamal Ahmed Alshawadfi
Head of the Department of Statistics
Faculty of Commerce, Al-Azhar University
Mail: Dr_Gamal1@yahoo.com
Mobile: 0020-01066543923

Dr. Abdelwahab Elsayed Haggag
Lecturer, Department of Statistics
Faculty of Commerce, Al-Azhar University
MAIL: Wahabstat@yahoo.com
Mobile: 0020-01224709386
 – –

٥٧٢
‫‪ – –‬‬

Abstract

This research extends Alshawadfi (2003), which introduced a new and effective method for forecasting ARMA models using one of the techniques of artificial intelligence, namely neural networks, and compared the proposed method with the Box-Jenkins method. The present research has two objectives:

First: to generalize Alshawadfi's method to forecasting time series generated from ARMAX models using artificial neural networks (ANN). To achieve this objective, 32,000 samples of different sizes (25, 40, 60, 100, 150) were generated from ARMAX models with different parameters and used to train the network; the forecasts were then compared with the true values to measure forecast accuracy.

Second: to compare the performance of the proposed artificial neural network method with that of the Box-Jenkins method, to establish which of the two is better for forecasting ARMAX models. Forecast accuracy is measured for both methods by three criteria: the Mean of Squared Error (MSE), the Mean Absolute Deviation (MAD), and the Percentage of cases of Minimum Absolute error (MAEP), the proportion of cases in which a method attains the smaller absolute error.

The MATLAB software package was used, and a suitable computer program was designed in MATLAB (see the appendix attached to the research), to generate the data, to carry out the training and testing of the network, to compute the forecasts of both the Box-Jenkins and the neural network methods, and to compare the two methods.

The results show that the proposed method, using the neural network technique of artificial intelligence, has a high ability to forecast the future values of a given time series generated from ARMAX models in a fully automatic way. Over the 32,000 generated samples, the mean squared forecast error (MSE), particularly for small samples, the mean absolute forecast error (MAD), and the average percentage (MAEP) were all better than their Box-Jenkins counterparts, as an overall average over all sample sizes, all models, and the first three future observations. Accordingly, the proposed method can be considered valid for forecasting the future values of time series generated from ARMAX models.
[1] Introduction

What is required today is the availability of accurate databases that faithfully represent the phenomenon or phenomena under study, combined with a scientific approach and analysis tools that process these data electronically (Electronic processing) rather than manually (Manual processing). The aim is to obtain information that is sufficient, in quantity and quality, for the needs of researchers and decision makers.

Artificial intelligence is defined as a science comprising a set of new methods and techniques for programming computer systems, used to develop systems that emulate some elements of human intelligence and allow them to perform inferential operations on facts and rules represented in the computer's memory. In other words, artificial intelligence is the science concerned with making a machine, that is, a computer, perform operations analogous to the mental abilities of human beings.

Artificial intelligence emerged in the 1950s as a result of the revolution in the fields of information and automatic control. Its research aims at two main goals. First: to reach a deep understanding of human intelligence by simulating it. Second: to make the best use of the computer and to exploit all of its capabilities, especially after the rapid growth in computing power and the fall in its price. Its languages are: Lisp, the List Processing Language, and Prolog, the language of Programming in Logic; these languages give the programmer great flexibility in writing programs.

Fields related to this science include: Cybernetics, Robotics, computer-aided instruction (CAI), computer-aided design (CAD), machine translation (MT), automatic pattern recognition, and electronic games such as chess, among others.

Artificial intelligence comprises many branches, the most important of which are: Expert Systems, Data Mining, Genetic Algorithms, and Neural Networks. In this research we show how one of these branches, neural networks, can be used in time series analysis.

A time series is a set of observations or measurements ordered in time. Time Series Analysis is one of the tools of statistical analysis for data that are not independent in time, in the sense that successive observations usually depend on one another.

Time series analysis refers to the method or methods used to model time series data and to forecast the future values of these series. Time series analysis has received great attention in many fields, including economics, business, planning, sales, the natural sciences, engineering, and others.

Although the study of time series models began many years ago, it can be said that Box and Jenkins made the use of these models far more widespread; they showed that they can be applied to seasonal data and demonstrated how these models are applied in practice.

The Box-Jenkins method of time series analysis comprises four stages: model identification, parameter estimation, diagnostic checking, and finally forecasting, in which the selected model is used to forecast the future values of the series. For further details of this method see Box-Jenkins (1976). Other methods of time series analysis also exist, including classical methods and Bayesian methods, whether based on numerical integration, on suitable approximations of the likelihood function, or on sampling-based techniques. See, for example: Harvey & Phillips (1977), Box-Jenkins (1976), Shaarawy & Alshawadfi (1987), Alshawadfi (1994, 1996).

Despite the enormous spread of the Box-Jenkins approach since the 1970s, an approach that still surpasses all modern methods of time series analysis, applying it accurately requires skill and practice that many non-specialists lack. Moreover, we live today in the age of information and the digital world, which embraces all financial and business activities in an integrated, interconnected environment. Since information and computing technologies are developing rapidly, this study seeks a flexible method that can deal with a dynamic environment and produce highly accurate forecasts of future values automatically.


Artificial Neural Networks (ANN) are among the most important of these flexible methods. They are capable of rapid self-learning and of adapting quickly within a dynamic environment, through their own rules and according to a defined methodology reinforced by training, in a way that imitates the neural cells of the brain in both their structure and their processing of inputs.

Research Problem

The problem is that when the current observations Yt regress on past errors, that is, in the case of Moving Average (MA) models, mixed Auto-Regressive Moving Average (ARMA) models, or mixed models with an exogenous variable (ARMAX, ARMA with exogenous inputs), the likelihood function takes a complicated form, because the errors are non-linear in the parameters. This makes it difficult to express the probability distribution of some or all of the model parameters in a standard form or by one of the known distributions. Moreover, when the Bayesian approach is used, it is difficult to obtain the variance-covariance matrix (the inverse of the precision matrix) and its determinant, and to choose a suitable prior probability function for it, especially as the sample size grows.

To avoid this problem, a new forecasting method was needed, different from the classical statistical methods that rely on the likelihood function to estimate their parameters. This method is artificial neural networks, which has the advantage of working automatically to select a suitable model describing how the outputs depend on the inputs.


Research Objectives

This research is an extension of Alshawadfi (2003), which proposed a new and effective method for forecasting ARMA models and compared the proposed method with the Box-Jenkins method.

The present research has two objectives:

First objective: to generalize the method of Alshawadfi (2003) to forecasting time series generated from ARMAX models using artificial neural networks (ANN). To achieve this objective, 32,000 samples of different sizes are generated from ARMAX models with different parameters and used to train the network; the forecasts are then compared with the true values to measure forecast accuracy.

Second objective: to compare the performance of the proposed artificial neural network method with that of the Box-Jenkins method, to establish which of the two is better for forecasting ARMAX models, by measuring the forecast accuracy of both methods with three criteria:
1- the Mean of Squared Error (MSE)
2- the Mean Absolute Deviation (MAD)
3- the Percentage of cases of Minimum Absolute error (MAEP), the proportion of cases in which a method attains the smaller absolute error in the two methods

Importance of the Research

The importance of the research lies in its use of one of the methods of artificial intelligence, the neural network method, to forecast time series generated from ARMAX models, as a new method that can be used to forecast many phenomena in many fields. It has several advantages over the classical statistical methods: it helps obtain good forecasts, since it yields a smaller sum of squared non-linear errors; the artificial neural network (ANN) method needs relatively smaller samples for model validation; forecasting is carried out automatically; it is suitable for both linear and non-linear models; and it can be used to assess the ability of other statistical methods to estimate models and use them in forecasting.
[2] Artificial Neural Networks

Artificial Neural Networks (ANN) are one of the most important methods of artificial intelligence. Their idea centers on using the computer to simulate the human brain's ability to recognize patterns and distinguish objects, by following the self-learning process that takes place in the brain, in which previous experience is exploited in order to reach the best possible results in the future (see: Haji and Al-Muhamid (1999), p. 19).

Figure (2-1)
Model of a processing unit
[Diagram: inputs X1, X2, ..., Xn are multiplied by weights W1, W2, ..., Wn and summed in the neuron, Σ wi xi; the sum is passed through a transfer function F(Y) to produce the output Y.]


Figure (2-1) above shows a simple processing unit inside a neural network (see: Turban, et al. (2005)). Each processing unit has one or more input paths whose task is to carry information from the outside world into the processing unit, which in turn performs a simple summation and then transforms the information by an activation function known as the transfer function; the information is then passed on as output through the output path. Mathematically, each neuron receives a number of Input Signals (X1, X2, ..., Xn), which correspond to the dendrites of a biological neuron. All the inputs are multiplied by the weights (W1, W2, ..., Wn), which are the basic means of long-term memory in artificial neural networks and which express the relative importance of the inputs. The resulting sum (summations) is processed by a transfer function F(Y), from which we obtain the Output Signals (Y).
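The computation just described, a weighted sum of the inputs followed by a transfer function, can be sketched as follows. This is an illustrative Python version only (the paper's own programs were written in MATLAB), and the logistic (sigmoid) transfer function used here is just one of the choices discussed later:

```python
import math

def neuron_output(x, w, transfer=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Weighted sum of the inputs followed by a transfer function.

    x -- input signals (X1, ..., Xn)
    w -- connection weights (W1, ..., Wn)
    transfer -- activation; the logistic (sigmoid) function by default
    """
    s = sum(wi * xi for wi, xi in zip(w, x))  # summation stage
    return transfer(s)                        # transfer-function stage

# A neuron with two inputs; with a sigmoid the output lies in (0, 1).
y = neuron_output([1.0, 2.0], [0.5, -0.25])
```

Here the weighted sum is 0.5 - 0.5 = 0, so the sigmoid returns 0.5.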
[2-1] Learning of the Neural Network

Learning is the process by which the neural network adjusts itself in response to the inputs in order to produce the target outputs. Put differently, it is the process of acquiring knowledge: the neural network extracts knowledge from the sample data set. During learning, the network modifies its connection weights on the basis of the incoming inputs until the outputs approach the true, or target, outputs (see Zurada (1992)).

The neural network is taught, or trained, to discover the regular relationships in the data by one of the following methods:
[2-1-1] Supervised Learning
This is the most widely used method of training a neural network. Data samples are collected that contain values of the input variables together with values of the output variables. The network compares the results it estimates for the output variables of each input sample with the actual values of these variables, and on that basis adjusts the connection weights so as to reduce the errors in the results; the training process is then repeated many times until acceptable results are reached.

[2-1-2] Unsupervised Learning
This method resembles supervised learning except that the samples used in training contain no values for the output variables. The data entering the network consist of several segments or groups, and the network is trained in this case to discover features that are not apparent in the training data set, and then to use those features to divide the input data into groups that differ from one another but are homogeneous within each group.

[2-1-3] Reinforcement Learning
This method is a mixture of the two previous ones. The network is not told the true values of the outputs, as in unsupervised training, but it is told whether the results it has obtained are right or wrong, as in supervised learning.
[2-2] Typical Architectures
The way the neurons are organized into layers, and the way these neurons are connected to form the network, is called the network architecture. In general, artificial neural network architectures can be divided into three main types: (1) the Single-Layer Feedforward Network, (2) the Multi-Layer Feedforward Network, and (3) the Multi-Layer Recurrent Network (see: Demuth & Beale (1998)). Each type is described below.
[2-2-1] Single-Layer Feedforward Network
This is the simplest artificial neural network architecture. Its most common type is called the Perceptron, which is the simplest form of artificial neural network and the model on which the other types of single-layer network are built. It is also the simplest kind of feedforward neural network: information passes directly from the input layer to the output layer, and it learns by supervised learning. Figure (2-1) above shows the architecture of a single-layer feedforward neural network. In such a network, learning proceeds by the processing units carrying out several main steps:

Step one: random assignment of initial values to the weights w1, w2, ..., wn and to the Threshold Value (θ), within the range [-0.5, 0.5].
Step two: activation. Each processing unit (Neuron) receives a number of input signals and computes the weighted sum of these inputs, usually by the following Summation Function:

X = Σ_{i=1}^{n} w_i x_i   ... (2-1)

where:
X is the net weighted input of the neuron,
Wi are the relative weights of the connection nodes between the layers,
xi is the value of input i,
n is the number of inputs of the neuron.


The Perceptron is activated by applying the inputs X1, X2, ..., Xn and the desired output T(k), and computing the actual output at iteration k as follows:

y(k) = step[ Σ_{i=1}^{n} X_i(k) W_i(k) - θ ]   ... (2-2)

This type of transfer (activation) function is called the Step Function.
It should be noted that, besides the step function, there are many activation functions, but only a few of them have been found to have practical applications. The most important of these functions are:

1- the Sign Function
2- the Step Function
3- the Linear Function
4- the Sigmoid Function

The Sign Function and the Step Function are usually used for neurons that make decisions in classification and pattern recognition tasks (see, for example: Negnevitsky (2004), p. 235).

The Sigmoid Function is the most widely used transfer function, because it is simple to differentiate and its slope is easy to compute. The Linear transfer Function can be used in some time series applications, and in most cases the transfer function is assembled from a linear combination (see: Alshawadfi (2003)).
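The four activation functions listed above can be written out as follows. This is a Python sketch for illustration only; the paper's experiments were implemented in MATLAB:

```python
import math

def sign_fn(x):      # Sign Function: outputs -1 or +1
    return 1.0 if x >= 0 else -1.0

def step_fn(x):      # Step Function: outputs 0 or 1
    return 1.0 if x >= 0 else 0.0

def linear_fn(x):    # Linear Function: passes the net input through unchanged
    return x

def sigmoid_fn(x):   # Sigmoid Function: smooth, differentiable, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```

The sign and step functions suit classification decisions, while the sigmoid's easy derivative is what makes it convenient for the gradient-based training described in the next subsection.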


[2-2-2] Multi-Layer Feedforward Network
A multi-layer network consists of one or more layers (or levels) of connection nodes, called Hidden Layers, lying between the Input Layers and the Output Layers. In feedforward networks the input signals propagate in the forward direction from layer to layer, and no backward connections are allowed. This type of network can solve many complicated problems that a single-layer network cannot, but its training may take longer (see: Hijazi (2007)).

Figure (2-2)
The multi-layer feedforward network
[Diagram: an input layer, a hidden layer of summation units with transfer functions F, and an output layer; signals flow forward only.]
The computational elements in the hidden layer perform the weighted summation and send an output signal (a stimulation pattern) to the output layer, which in turn performs its own weighted summation and determines the output patterns of the whole network (see: Negnevitsky (2004)). Increasing the number of processing units inside the hidden layer, or adding more than one hidden layer to the network, allows the network to handle many complicated functions and to approximate continuous functions accurately, as stated by Cybenko's theorem (see Cybenko (1989)).

Learning proceeds in the same way as in the single-layer network: the weights are updated using Back-propagation Learning. Each update is preceded by two successive passes, one Forward and the other Backward. In the forward pass the outputs of the network are computed from the input data, and these outputs are compared with the target outputs by computing the error. In the backward pass the network adjusts the relative weights so as to reduce the error. The process comprising these two passes is called an Epoch, and this epoch is repeated until the minimum sum of squared errors is reached.
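The forward pass, backward pass, and epoch loop just described can be sketched for a small one-hidden-layer network. This is a minimal illustrative Python version with an arbitrary toy task (the OR function with a bias input); it is not the paper's MATLAB implementation, and the network sizes and learning rate are assumptions:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_epoch(samples, w_hidden, w_out, rate=0.3):
    """One epoch: a forward pass and a backward pass over every sample.

    samples  -- list of (inputs, target) pairs, targets in (0, 1)
    w_hidden -- one weight list per hidden neuron
    w_out    -- weights of the single output neuron
    Returns the sum of squared errors accumulated over the epoch.
    """
    sse = 0.0
    for x, t in samples:
        # Forward pass: hidden activations, then the network output.
        h = [sigmoid(sum(w * xi for w, xi in zip(wh, x))) for wh in w_hidden]
        y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
        err = t - y
        sse += err * err
        # Backward pass: propagate the error back and adjust the weights.
        d_out = err * y * (1.0 - y)
        d_hid = [d_out * w_out[j] * h[j] * (1.0 - h[j]) for j in range(len(h))]
        for j in range(len(w_out)):
            w_out[j] += rate * d_out * h[j]
        for j, wh in enumerate(w_hidden):
            for i in range(len(wh)):
                wh[i] += rate * d_hid[j] * x[i]
    return sse

# Toy task: the OR function, with a constant 1 as a bias input.
random.seed(0)
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(2)]
errors = [train_epoch(data, w_hidden, w_out) for _ in range(300)]
```

Repeating the epoch drives the sum of squared errors down, which is exactly the stopping criterion described above. Note the initial weights are drawn from [-0.5, 0.5], matching step one of the perceptron procedure.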
[2-2-3] Multi-Layer Recurrent Network
This is the second type of multi-layer neural network. It contains at least one feedback loop (a closed feedback path), as in Figure (2-3). As in the multi-layer feedforward network, the error between the network outputs and the true outputs is computed, but here the error is fed back through the network in the reverse direction (opposite to the direction of the inputs) to adjust the weights and thereby reduce the error; this procedure is repeated until the error reaches the smallest possible value.


Figure (2-3)
The multi-layer recurrent network
[Diagram: input layer, hidden layer, and output layer as in Figure (2-2), with delay elements z feeding the outputs back into the network.]

[3] The ARMAX Model

The ARMAX model is a generalization of the Box-Jenkins models: it contains Auto-Regressive (AR) variables and Moving Average (MA) terms, together with an exogenous input variable.

That is, in this model the current observations Yt regress on the past values of the series itself, on the current values Xt and past values of another series, and likewise on the current and past errors. Following Alshawadfi (1994), it can be defined as follows:

φ(B) Y_t = ω(B) X_t + θ(B) e_t   ... (3-1)

where:
Yt is the observation Y at time t, t = 1, 2, 3, ..., n;
Xt are exogenous variables independent of et;
et is a sequence of independent random variables with a normal distribution of mean zero and variance σ²;
φ(B), ω(B), θ(B) are polynomials of the form:

φ(B) = 1 - φ_1 B - φ_2 B² - ... - φ_p B^p,   φ_i ∈ R, i = 1, 2, ..., p
ω(B) = ω_0 + ω_1 B + ω_2 B² + ... + ω_h B^h,   ω_j ∈ R, j = 1, 2, ..., h
θ(B) = 1 - θ_1 B - θ_2 B² - ... - θ_q B^q,   θ_k ∈ R, k = 1, 2, ..., q

where B is the backward shift operator, that is:

B^r Y_t = Y_{t-r},   r = 1, 2, ..., n   ... (3-2)

It is assumed that the series generated from the model in equation (3-1) is stationary (possibly after a suitable transformation), in the sense that all roots of the equation φ(B) = 0 lie outside the unit circle, and that the series involves no feedback from the outputs Yt to the inputs Xt. The process is also invertible if all roots of the equation θ(B) = 0 lie outside the unit circle.

The model above can be expressed in the following form:

Y_t = Σ_{i=1}^{p} φ_i Y_{t-i} + Σ_{j=0}^{h} ω_j X_{t-j} - Σ_{k=1}^{q} θ_k ε_{t-k} + ε_t   ... (3-3)
This model can also be written in matrix form as follows:

Y = XW + E   ... (3-4)

where, with m = max(p, h), the matrix X has one row for each time t = m+1, ..., n, namely

[ y_{t-1}  y_{t-2} ... y_{t-p}   x_t  x_{t-1} ... x_{t-h}   ε_{t-1}  ε_{t-2} ... ε_{t-q} ]

and

W = [ φ_1  φ_2 ... φ_p   ω_0  ω_1  ω_2 ... ω_h   θ_1  θ_2 ... θ_q ]'
Y = [ y_{m+1}  y_{m+2} ... y_n ]'
E = [ ε_{m+1}  ε_{m+2} ... ε_n ]',   m = max(p, h)   ... (3-5)
‫ ﺘﻜـﻭﻥ‬X ‫ ﻓﻰ ﻤﺘﺠـﻪ ﺍﻟﻤـﺩﺨﻼﺕ‬، xt 1 ‫ ﻭ‬، yt 1 ‫ﻭﻨﻼﺤﻅ ﺃﻥ ﻜل ﻤﻥ‬
et 1 ‫ ﺒﻴﻨﻤﺎ‬،‫ﻤﻌﻠﻭﻤﺔ‬
‫ ﻤـﻥ‬ (0) ‫ ﻭ ﻜـﺫﻟﻙ‬،  t 1 ‫ ﻭﻴﺘﻡ ﺍﺴﺘﺨﺩﺍﻡ ﺍﻟﺨﻁﺄ ﺍﻟﻤﻘﺩﺭ‬،‫ﺘﻜﻭﻥ ﻤﺠﻬﻭﻟﺔ‬
.‫ﺍﻟﻤﻤﻜﻥ ﺃﻥ ﻨﻀﻊ ﻟﻬﺎ ﻗﻴﻤﺔ ﺍﺒﺘﺩﺍﺌﻴﺔ ﺘﺴﺎﻭﻯ ﺍﻟﺼﻔﺭ‬

٥٨٨
‫‪ – –‬‬

‫ﺤﻴﺙ ﺃﻥ ﺍﻟﺨﻁﺄ )ﺍﻟﺒﻭﺍﻗﻰ( ﺍﻟﻨﺎﺘﺞ ﻋﺒﺎﺭﺓ ﻋﻥ ﺍﻟﻔﺭﻕ ﺒـﻴﻥ ﺍﻟﻘﻴﻤـﺔ ﺍﻟﺤﻘﻴﻘﻴـﺔ‬


‫ﻭﺍﻟﻘﻴﻤﺔ ﺍﻟﻤﻘﺩﺭﺓ‪:‬‬

‫‪e‬‬ ‫) ‪(t‬‬ ‫‪‬‬ ‫‪y‬‬ ‫) ‪p (t‬‬ ‫‪ y‬‬ ‫) ‪(t‬‬

‫‪ y p ( t )  XW‬‬ ‫)‪… … … (3-6‬‬

‫ﻭﻴﻜﻭﻥ ﺍﻟﻬﺩﻑ ﻫﻭ ﺍﻟﺤﺼﻭل ﻋﻠﻰ ﺘﻘﺩﻴﺭ ﻟﻠﻤﻌﺎﻟﻡ ‪ W‬ﺍﻟﺘﻰ ﺘﺠﻌـل ﻤﺠﻤـﻭﻉ‬


‫ﻤﺭﺒﻌﺎﺕ ﺍﻟﺒﻭﺍﻗﻰ ﺃﻗل ﻤﺎ ﻴﻤﻜﻥ‪.‬‬
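The least squares objective just stated can be sketched directly: solve the normal equations (X'X)W = X'Y for the parameter vector W. This is an illustrative pure-Python version on a toy X and Y (not the paper's MATLAB code, and with estimated errors ε̂ omitted from X for brevity):

```python
def solve_least_squares(X, Y):
    """Estimate W minimizing the sum of squared residuals ||Y - XW||^2
    by solving the normal equations (X'X) W = X'Y with Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]                                              # X'X
    b = [sum(X[r][i] * Y[r] for r in range(len(X))) for i in range(k)]   # X'Y
    for i in range(k):                 # forward elimination with partial pivoting
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    W = [0.0] * k
    for i in reversed(range(k)):       # back substitution
        W[i] = (b[i] - sum(A[i][j] * W[j] for j in range(i + 1, k))) / A[i][i]
    return W

# Toy check: data generated exactly by W = (2, -1) is recovered.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [0.0, 1.0]]
Y = [x1 * 2.0 - x2 for x1, x2 in X]
W = solve_least_squares(X, Y)
```

Because the toy data are exactly linear in the two columns, the recovered W equals (2, -1) up to rounding.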

[4] A Proposed Neural Network Model for Time Series Analysis

An artificial neural network may be regarded as a non-linear regression model whose input variables are the past values of the series (y at lags 1 through p, or transformations of them), the values of an exogenous variable (x at lags 0 through h), and the errors (ε at lags 1 through q), where X is a matrix of the same form as in equation (3-5), with rows

[ z_{t-1}  z_{t-2} ... z_{t-p}   x_t  x_{t-1} ... x_{t-h}   ε_{t-1}  ε_{t-2} ... ε_{t-q} ]

and the output vector Z, of dimension h×1, is:

Z = [ z_{n+1}, z_{n+2}, ..., z_{n+h} ]'   ... (4-1)


These models can be used as approximations of deterministic relations:

Z = ν(X)   ... (4-2)

or of stochastic relations:

Z = ν(X) + ε   ... (4-3)

where it is assumed that E(ε|X) = 0 and ν(X) = E(Z|X). The function ν is usually unknown and is approximated by a function g(X, W), where W is the parameter vector, called the weights, which is a subset of the parameter space and is estimated from the training data set (the sample); both the parameter space and the parameter vector depend on the chosen approximating function g(X, W).

Since the forecasts of the future observations y_{t+l} of an ARMAX model can be obtained as a weighted sum of the past observations of the series y_t and x_t, together with the random variable ε_t (see Box, Jenkins and Reinsel (1994), p. 446), the model becomes an ARX model as follows:

y_{t+l} = Σ_{j=1}^{∞} π_j y_{t+l-j} + Σ_{j=1}^{∞} η_j X_{t+l-j} + ε_{t+l}   ... (4-4)

where π_j and η_j are the weights, given by:

π(B) = φ(B)/θ(B) = (1 - π_1 B - π_2 B² - ...)
η(B) = ω(B)/θ(B) = (1 - η_1 B - η_2 B² - ...)
   ... (4-5)


The sum of these weights equals one:

π(B) + η(B) = 1   ... (4-6)

Moreover, the ARMAX model can be a special case of the neural network process in equation (4-1), by writing ν(X) as:

ν(X) = Σ_{j=1}^{∞} π_j y_{t+l-j} + Σ_{j=1}^{∞} η_j X_{t+l-j}   ... (4-7)

Therefore the approximating neural network model can take the following form:

Z_{t+1} = g(X, W) + ε_{t+1}   ... (4-8)

When forecasting the value of Z with an approximating function g(X, W), there are two kinds of error. The first is the random error ε. The second is the approximation error, the difference between the actual value and the estimated value, ν(X) - g(X, W). This second error is equivalent to the error in non-linear regression models that contain a specification error, and its distribution is not determined, unlike in many statistical models. Specifying the parameters of the approximating function g(X, W) is called the network architecture. This function is a combination of linear and non-linear functions, and the network is usually depicted by diagrams in which the sub-functions are represented by circles, and the results passed on from one or more sub-functions by arrows. For example, the linear regression model can be represented as an artificial neural network as in the following figure (see: Arminger and Enache (1996)).


Figure (4-1)
The linear regression model as an artificial neural network
[Diagram: inputs 1, X1, X2, ..., Xk feed a single unit computing XW, whose output is Z.]

This figure shows how the linear regression model is represented by a neural network model. The architecture consists of a single unit that combines the linear input variables x1, x2, ..., xk, together with the constant 1, with the parameter (weight) vector w. This model is the simplest mathematical network model and is called the Adaptive Linear Neuron (ADALINE). It contains no hidden layer; the information passes directly from the front layer to the final one through a suitable transfer function:

Z = g(X, W) = XW   ... (4-9)

where Z is a one-dimensional variable (a single output unit). The network model in this case is given by the deterministic mathematical relation:

Z = XW   ... (4-10)

or by the stochastic relation:

Z = XW + e   ... (4-11)

This model is equivalent to the simple univariate linear regression model and is capable of linear approximation only; but many of the functions ν(X) are non-linear, which may lead to approximation error.
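An ADALINE of this kind is commonly trained with the least-mean-squares (delta) rule, each update moving the weights against the gradient of the squared error. A minimal sketch, in illustrative Python with an arbitrary toy target (the delta-rule trainer is an assumption for illustration, not the paper's procedure):

```python
import random

def train_adaline(samples, n_inputs, rate=0.05, epochs=200, seed=0):
    """ADALINE: a single linear unit Z = XW, trained by the delta rule."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(epochs):
        for x, target in samples:
            z = sum(wi * xi for wi, xi in zip(w, x))  # linear output XW
            err = target - z
            for i in range(n_inputs):                 # delta-rule weight update
                w[i] += rate * err * x[i]
    return w

# Toy data generated by w = (1.5, -0.5); ADALINE recovers it closely.
data = [([1.0, 0.0], 1.5), ([0.0, 1.0], -0.5), ([1.0, 1.0], 1.0), ([2.0, 1.0], 2.5)]
w = train_adaline(data, n_inputs=2)
```

Because the toy targets are exactly linear in the inputs, the learned weights converge toward (1.5, -0.5), illustrating why this unit can perform linear approximation only.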
[5] Comparison between the Box-Jenkins and Neural Network Methods

To compare the new method with the Box-Jenkins method, 32,000 samples were generated, 500 samples for each of 32 models chosen from the ARMAX(p,h,q) family, with different sample sizes (25, 40, 60, 100, 150). The forecasts of the observations are compared with their true values to measure forecast accuracy and to compare the performance of the proposed artificial neural network method with that of the Box-Jenkins method, in order to measure the accuracy of the proposed method for forecasting ARMAX models.

Forecast accuracy was measured for both methods by three residual-based criteria (MSE, MAD, MAEP), computed for each method independently, for comparison and comment. Furthermore, the comparison is made according to three factors (sample size, the specified model, and the forecast horizon). These criteria are explained below.
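The three criteria can be sketched as follows (an illustrative Python version with hypothetical toy forecasts, not the paper's MATLAB code):

```python
def mse(actual, forecast):
    """Mean of Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def maep(actual, f1, f2):
    """Percentage of cases in which method 1 attains the smaller absolute error."""
    wins = sum(1 for a, u, v in zip(actual, f1, f2) if abs(a - u) < abs(a - v))
    return 100.0 * wins / len(actual)

# Toy series with forecasts from two hypothetical methods
actual = [1.0, 2.0, 3.0, 4.0]
nn = [1.1, 1.9, 3.2, 4.0]
bj = [1.3, 2.2, 2.9, 4.4]
```

On this toy data mse(actual, nn) is 0.015, mad(actual, nn) is 0.1, and maep(actual, nn, bj) is 75.0, i.e. the first method has the smaller absolute error in three of the four cases.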

[5-1] Results of the MSE criterion
Table (5-1) below gives the MSE values for both the Box-Jenkins forecasts and the artificial neural network forecasts for the specified ARMAX models at different sample sizes, and Table (5-2) gives the MSE values for the three future observations at different sample sizes:


Table (5-1)
MSE values of the Box-Jenkins and neural network forecasts by sample size and model

Model (p,h,q) | Method | n=25   | n=40   | n=60   | n=100  | n=150  | Average
(1,1,0)       | NN     | 0.0013 | 0.0022 | 0.0026 | 0.0034 | 0.0038 | 0.0026
(1,1,0)       | BJ     | 0.0037 | 0.0036 | 0.0034 | 0.0037 | 0.0034 | 0.0035
(1,2,0)       | NN     | 0.0015 | 0.0023 | 0.0026 | 0.0035 | 0.0037 | 0.0027
(1,2,0)       | BJ     | 0.0030 | 0.0027 | 0.0027 | 0.0030 | 0.0027 | 0.0028
(1,1,1)       | NN     | 0.0012 | 0.0018 | 0.0024 | 0.0030 | 0.0033 | 0.0023
(1,1,1)       | BJ     | 0.0036 | 0.0032 | 0.0029 | 0.0036 | 0.0028 | 0.0032
(0,1,0)       | NN     | 0.0010 | 0.0018 | 0.0021 | 0.0026 | 0.0027 | 0.0020
(0,1,0)       | BJ     | 0.0021 | 0.0019 | 0.0019 | 0.0021 | 0.0018 | 0.0020
(0,2,0)       | NN     | 0.0010 | 0.0017 | 0.0023 | 0.0023 | 0.0026 | 0.0019
(0,2,0)       | BJ     | 0.0019 | 0.0017 | 0.0018 | 0.0019 | 0.0017 | 0.0018
(0,1,1)       | NN     | 0.0014 | 0.0020 | 0.0029 | 0.0033 | 0.0033 | 0.0025
(0,1,1)       | BJ     | 0.0019 | 0.0019 | 0.0019 | 0.0019 | 0.0018 | 0.00188
(0,1,2)       | NN     | 0.0017 | 0.0021 | 0.0029 | 0.0038 | 0.0040 | 0.0029
(0,1,2)       | BJ     | 0.0020 | 0.0020 | 0.0020 | 0.0020 | 0.0019 | 0.00198
(0,2,1)       | NN     | 0.0015 | 0.0021 | 0.0028 | 0.0031 | 0.0035 | 0.0026
(0,2,1)       | BJ     | 0.0022 | 0.0020 | 0.0020 | 0.0022 | 0.0019 | 0.0021
Average       | NN     | 0.0013 | 0.0020 | 0.0026 | 0.0031 | 0.0026 | 0.0024
Average       | BJ     | 0.0025 | 0.0024 | 0.0023 | 0.0023 | 0.0022 | 0.0024


Table (5-2)
MSE values for the three future observations of the Box-Jenkins and neural network forecasts at different sample sizes

Sample size | Method | Z_{n+1} | Z_{n+2} | Z_{n+3} | Average
25          | NN     | 0.0013  | 0.0013  | 0.0013  | 0.0013
25          | BJ     | 0.0025  | 0.0027  | 0.0025  | 0.0026
40          | NN     | 0.0020  | 0.0019  | 0.0020  | 0.0020
40          | BJ     | 0.0023  | 0.0023  | 0.0025  | 0.0024
60          | NN     | 0.0026  | 0.0025  | 0.0026  | 0.0026
60          | BJ     | 0.0024  | 0.0024  | 0.0022  | 0.0023
100         | NN     | 0.0030  | 0.0032  | 0.0031  | 0.0031
100         | BJ     | 0.0024  | 0.0022  | 0.0023  | 0.0023
150         | NN     | 0.0034  | 0.0033  | 0.0034  | 0.0034
150         | BJ     | 0.0024  | 0.0023  | 0.0021  | 0.0022
Average     | NN     | 0.0024  | 0.0024  | 0.0025  | 0.0024
Average     | BJ     | 0.0024  | 0.0024  | 0.0023  | 0.0024

From Table (5-1) and Table (5-2) we observe the following.

By sample size: Table (5-1) shows that the performance of the two methods converges as the sample size increases, noting that the average performance of the neural networks is better for almost every individual model. For example, at sample size 25 the MSE of the neural network forecasts is 0.0013, which is smaller than its value for the Box-Jenkins forecasts, 0.0026, so the performance of the neural networks is better there. In general, however, the performance of the neural network method was on a par with that of the Box-Jenkins method, since the overall average for both methods was 0.0024. These results can be seen in the following chart:


Figure (5-1)
MSE values of the Box-Jenkins and neural-network forecasts by sample size
[bar chart; x-axis: sample size (25, 40, 60, 100, 150, Average); series: NN, BJ]

According to the specified model: Table (5-1) shows that the MSE of the neural-network forecasts was lower than that of the Box-Jenkins forecasts for the models ARMAX(1,1,0), ARMAX(1,2,0) and ARMAX(1,1,1), so the neural networks were better for those models, whereas the Box-Jenkins MSE was lower than the neural-network value for the models ARMAX(0,2,0), ARMAX(0,1,1), ARMAX(0,1,2) and ARMAX(0,2,1), so Box-Jenkins was better for those. These results can also be seen in the following figure:


Figure (5-2)
MSE values of the Box-Jenkins and neural-network forecasts for the ARMAX models
[bar chart; x-axis: ARMAX model order (p,h,q); series: NN, BJ]

According to the forecast horizon: Table (5-2) shows that the MSE of the neural-network forecasts for the three future observations is less than or approximately equal to that of the Box-Jenkins forecasts; and since the overall average over the three future observations is the same for both methods, the two methods can be said to perform equally in forecasting the future observations. These results can also be seen in the following figure:


Figure (5-3)
MSE values of the three future observations for the Box-Jenkins and neural-network forecasts
[bar chart; x-axis: Zn+1, Zn+2, Zn+3, Average; series: NN, BJ]

MAD results

Table (5-3) below gives the MAD results for the Box-Jenkins and neural-network forecasts of the specified ARMAX models at different sample sizes, and Table (5-4) gives the MAD results for the first three future observations.
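Before reading the tables, it may help to recall how the two error measures are computed for each sample over the forecast horizon (here h = 3). A minimal Python sketch, illustrative only and with made-up forecast values; the paper's actual implementation is the MATLAB toolbox in the appendix:

```python
# Minimal sketch of the MSE and MAD forecast-accuracy measures used in the
# paper (illustrative only; the paper's implementation is the MATLAB appendix).

def mse(actual, forecast):
    """Mean squared forecast error over the horizon."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean absolute deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

if __name__ == "__main__":
    actual = [1.02, 0.97, 1.05]        # hypothetical Z(n+1), Z(n+2), Z(n+3)
    nn_forecast = [1.00, 0.99, 1.04]   # hypothetical NN forecasts
    print(mse(actual, nn_forecast), mad(actual, nn_forecast))
```

In the study these two quantities are averaged over all generated samples for each model, sample size and forecast horizon, which yields the table entries below.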
Table (5-3)
MAD values of the Box-Jenkins and neural-network forecasts, by sample size and ARMAX model

Model (p,h,q)  Method  25      40      60      100     150     Average
(1,1,0)        NN      0.0004  0.0006  0.0006  0.0007  0.0007  0.0006
               BJ      0.0014  0.0014  0.0014  0.0014  0.0014  0.0014
(1,2,0)        NN      0.0004  0.0006  0.0006  0.0007  0.0008  0.0006
               BJ      0.0013  0.0013  0.0013  0.0013  0.0013  0.0013
(1,1,1)        NN      0.0004  0.0005  0.0006  0.0007  0.0007  0.0006
               BJ      0.0014  0.0013  0.0013  0.0013  0.0013  0.0013
(0,1,0)        NN      0.0003  0.0005  0.0006  0.0006  0.0006  0.0005
               BJ      0.0011  0.0011  0.0011  0.0011  0.0011  0.0011
(0,2,0)        NN      0.0003  0.0005  0.0006  0.0006  0.0006  0.0005
               BJ      0.0011  0.0010  0.0011  0.0011  0.0010  0.0011
(0,1,1)        NN      0.0004  0.0005  0.0007  0.0007  0.0007  0.0006
               BJ      0.0011  0.0011  0.0011  0.0011  0.0011  0.0011
(0,1,2)        NN      0.0005  0.0005  0.0007  0.0008  0.0008  0.0007
               BJ      0.0011  0.0011  0.0011  0.0011  0.0011  0.0011
(0,2,1)        NN      0.0004  0.0005  0.0006  0.0007  0.0007  0.0006
               BJ      0.0012  0.0011  0.0011  0.0011  0.0011  0.0011
Average        NN      0.0004  0.0005  0.0006  0.0007  0.0007  0.0006
               BJ      0.0012  0.0012  0.0012  0.0012  0.0012  0.0012

Table (5-4)
MAD values for the Box-Jenkins and neural-network methods, by sample size and the first three future observations

Sample size  Method  Zn+1    Zn+2    Zn+3    Average
25           NN      0.0004  0.0004  0.0004  0.0004
             BJ      0.0012  0.0012  0.0012  0.0012
40           NN      0.0005  0.0005  0.0005  0.0005
             BJ      0.0012  0.0011  0.0012  0.0012
60           NN      0.0006  0.0006  0.0006  0.0006
             BJ      0.0012  0.0012  0.0012  0.0012
100          NN      0.0007  0.0007  0.0007  0.0007
             BJ      0.0012  0.0012  0.0012  0.0012
150          NN      0.0007  0.0007  0.0007  0.0007
             BJ      0.0012  0.0012  0.0011  0.0012
Average      NN      0.0006  0.0006  0.0006  0.0006
             BJ      0.0012  0.0012  0.0012  0.0012


From Table (5-3) and Table (5-4) we observe the following:

According to sample size: Table (5-3) shows that the MAD values of the neural-network forecasts are lower than those of the Box-Jenkins forecasts, indicating that the neural-network method outperforms Box-Jenkins at every sample size. In general, then, the neural-network method performs better, since its overall average of 0.0006 is below the Box-Jenkins overall average of 0.0012. These results can also be seen in the following figure.
Figure (5-4)
MAD values of the Box-Jenkins and neural-network forecasts by sample size
[bar chart; x-axis: sample size (25, 40, 60, 100, 150, Average); series: NN, BJ]

According to the specified model: Table (5-3) shows that the MAD values of the neural-network forecasts were lower than those of the Box-Jenkins forecasts for every one of the specified ARMAX models, so the neural networks were better for all of them; the overall average MAD of the neural-network forecasts, 0.0006, was below the Box-Jenkins value of 0.0012. These results can also be seen in the following figure:
Figure (5-5)
MAD values of the Box-Jenkins and neural-network forecasts for the ARMAX models
[bar chart; x-axis: ARMAX model order (p,h,q); series: NN, BJ]

According to the forecast horizon: Table (5-4) shows that the MAD values of the neural-network forecasts for the three future observations are lower than those of the Box-Jenkins forecasts, indicating that the neural-network method outperformed Box-Jenkins over the three future observations. These results can also be seen in the following figure.


Figure (5-6)
MAD values of the three future observations for the Box-Jenkins and neural-network forecasts
[bar chart; x-axis: Zn+1, Zn+2, Zn+3, Average; series: NN, BJ]

MAEP results
Table (5-5) below gives the results of the MAEP measure (the average percentage of cases in which the neural-network forecasts have a smaller absolute error than the Box-Jenkins forecasts) for the specified ARMAX models at different sample sizes, and Table (5-6) gives the MAEP results for the first three future observations.
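The MAEP tally can be sketched as follows. This Python sketch mirrors the counting logic of the ss/sb counters in the appendix MATLAB code (a case where the neural network has the smaller absolute error counts for NN, ties count half for each side); the error values in the example are hypothetical:

```python
# Sketch of the MAEP count: for each forecast case, compare the absolute error
# of the NN forecast with that of the Box-Jenkins forecast; ties are split
# half-and-half, mirroring the ss/sb counters in the appendix MATLAB code.

def maep_ratio(nn_abs_errors, bj_abs_errors):
    """Return NN wins / BJ wins; a value above 1 means NN wins more often."""
    nn_wins = bj_wins = 0.0
    for s, b in zip(nn_abs_errors, bj_abs_errors):
        if s < b:
            nn_wins += 1
        elif s == b:
            nn_wins += 0.5
            bj_wins += 0.5
        else:
            bj_wins += 1
    return nn_wins / bj_wins

if __name__ == "__main__":
    # hypothetical absolute forecast errors for five test cases
    print(maep_ratio([0.1, 0.2, 0.3, 0.1, 0.2], [0.2, 0.2, 0.1, 0.3, 0.4]))
```

A ratio of, say, 1.5342 is reported in the tables as 153.42%, i.e., the neural-network forecasts attain the smaller absolute error roughly one and a half times as often as Box-Jenkins.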
Table (5-5)
MAEP values for the ARMAX models at different sample sizes

Model (p,h,q)  25      40      60      100     150     Average
(1,1,0)        2.1287  1.5274  1.4622  1.4634  1.4471  1.60576
(1,2,0)        2.0014  1.5013  1.5107  1.3637  1.2904  1.5335
(1,1,1)        1.9970  1.5993  1.3715  1.3023  1.2854  1.5111
(0,1,0)        2.1024  1.3547  1.4354  1.4337  1.3918  1.5436
(0,2,0)        2.1362  1.6115  1.2716  1.4411  1.4306  1.5782
(0,1,1)        1.9410  1.5951  1.3574  1.3975  1.4169  1.54158
(0,1,2)        1.6868  1.6167  1.3928  1.2024  1.2324  1.42622
(0,2,1)        1.9209  1.4985  1.4188  1.4579  1.3700  1.53322
Average        1.9893  1.5381  1.4026  1.3828  1.3581  1.534148


Table (5-6)
MAEP values for the three future observations

Sample size  Zn+1     Zn+2     Zn+3     Average
25           1.9541   2.0442   1.9696   1.9893
40           1.5260   1.5151   1.5731   1.5381
60           1.4140   1.4187   1.3750   1.4026
100          1.4117   1.3666   1.3700   1.3828
150          1.3610   1.3790   1.3342   1.3581
Average      1.53336  1.54472  1.52438  1.5342

From Table (5-5) and Table (5-6) we observe the following:

According to sample size: Table (5-5) shows that the MAEP values, the average percentage of cases in which the neural networks have a smaller absolute error than Box-Jenkins, decrease as the sample size increases: 198.93% at sample size 25, 153.81% at sample size 40, down to 135.81% at sample size 150. This indicates that the neural-network method outperforms Box-Jenkins most clearly when the sample size is small; in other words, the performance of the Box-Jenkins method improves as the sample size grows. Overall, the MAEP averaged 153.42%, i.e., the neural-network method outperformed the Box-Jenkins method by 53.42%. These results can also be seen in the following figure:


Figure (5-7)
Average MAEP values by sample size
[bar chart; x-axis: sample size (25, 40, 60, 100, 150, Average); series: MAEP]

According to the specified model: the MAEP ratio of the neural networks to Box-Jenkins in Table (5-5) reached its highest value, 160.576%, for the ARMAX(1,1,0) model, and its lowest value, 142.622%, for the ARMAX(0,1,2) model. These results can also be seen in the following figure:
Figure (5-8)
Average MAEP values for the ARMAX models
[bar chart; x-axis: ARMAX model order (p,h,q); series: MAEP]


According to the forecast horizon: Table (5-6) shows that the MAEP values for the three future observations are large, in the sense that they all exceed 100%, which indicates that the neural-network method outperforms the Box-Jenkins method over the whole forecast horizon. These results can also be seen in the following figure.
Figure (5-9)
MAEP values for the three future observations
[bar chart; x-axis: Zn+1, Zn+2, Zn+3, Average; series: MAEP]

[6] Conclusion

This paper extends Alshawadfi (2003), which proposed a new and effective method for forecasting ARMA models using one of the techniques of artificial intelligence, namely neural networks, and compared the proposed method with the Box-Jenkins method. The present paper has two objectives.

First objective: to generalize Alshawadfi's method to forecasting time series generated from ARMAX models using artificial neural networks (ANN). To achieve this objective, 32000 samples of different sizes (25, 40, 60, 100, 150) were generated from ARMAX models with different parameters and used to train the network; the forecasts were then compared with the true values to measure forecast accuracy.
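The kind of training series described above can be sketched as a simple simulation. Below is a minimal Python example of one ARMAX(1,1,1)-type sample with hypothetical parameter values; the paper's actual generator, with its full grid of models and parameters, is the MATLAB program in the appendix:

```python
# Illustrative simulation of one ARMAX(1,1,1)-type training sample:
#   y_t = phi*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t - theta*e_{t-1}
# The parameter values here are hypothetical; the paper generates its 32000
# training samples from a grid of ARMAX models in the MATLAB appendix code.
import random

def simulate_armax(n, phi=0.5, b0=0.3, b1=0.3, theta=0.5, seed=1):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]   # exogenous input series
    e = [rng.gauss(0, 1) for _ in range(n)]   # white-noise shocks
    y = [0.0] * n
    y[0] = b0 * x[0] + e[0]
    for t in range(1, n):
        y[t] = phi * y[t-1] + b0 * x[t] + b1 * x[t-1] + e[t] - theta * e[t-1]
    return y

if __name__ == "__main__":
    series = simulate_armax(60)  # one training sample of length 60
    print(len(series))
```

Each such series plays the role of one training sample; in the study the first part of each sample trains the network and the last h = 3 observations are held out to score the forecasts.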
Second objective: to compare the performance of the proposed artificial-neural-network method with that of the Box-Jenkins method, to determine which of the two is better for forecasting ARMAX models, by measuring the forecast accuracy of each method with three measures: the Mean Squared Error (MSE), the Mean Absolute Deviation (MAD), and the percentage of cases in which a method attains the smaller absolute error (MAEP).

MATLAB was used to carry out the network training and testing, to compute the Box-Jenkins and neural-network forecasts, and to compare the two methods. A suitable MATLAB TOOLBOX was designed within MATLAB (see the appendix to this paper) to generate the 32000 samples, train and test the network, compute the forecasts of the two methods, and compare them.

The results of the study demonstrated the high capability of the proposed method, based on one of the techniques of artificial intelligence (neural networks), to forecast the future values of a given time series in a fully automatic way. The results for the 32000 generated samples showed that the mean squared forecast error (MSE), especially for small samples, the mean absolute forecast error (MAD), and the average percentage (MAEP) were better than their Box-Jenkins counterparts, as an overall average over all samples, all models and the first three future observations. Accordingly, the proposed method can be considered suitable for forecasting the future values of time series generated from ARMAX models.

References

First: Arabic references
(1) Jaafar Mohammed Haji and Mohammed Abdel-Hadi Al-Mohamid (1999). "Neural Networks: Forecasting the Exchange Rates of the Kuwaiti Dinar against the US Dollar", Arab Journal of Administrative Sciences, Vol. 6, No. 1, January 1999, pp. 17-35.
(2) Michael Negnevitsky (2004). "Artificial Intelligence: A Guide to Intelligent Systems", Arabic translation by Sorour Ali Ibrahim Sorour, Dar Al-Marikh Publishing, Riyadh, Saudi Arabia, p. 252.

Second: Foreign references
(1) Arminger, G. and Enache, D. (1996), "Statistical Models
and Artificial Neural Networks". In: Bock, H.H. and
Polasek, W. (Eds.): Data Analysis and Information
Systems, Vol. 7, Springer Verlag, Heidelberg, 243-260.
(2) Al-Shawadfi, Gamal A.(1994). "Bayesian Inference of
ARMAX Models", Scientific Magazine, Faculty of
Commerce, Al-Azhar University, Cairo, Egypt, Vol. 20
July 1994.
(3) Al-Shawadfi, Gamal A.(1996). "Bayesian Estimation for
the Parameters of the Seasonal ARMAX Models",
Scientific Magazine, Faculty of Commerce, Ain Shams
University Cairo, Egypt, Vol. 1 July 1996,PP. 139-151.


(4) Al-Shawadfi, Gamal A. (1997). "The Derivation of Bayesian
Predictive Density for the Seasonal ARMAX Model", Scientific
Magazine, Faculty of Commerce, Tanta University, Vol. 1, April
1997, Tanta, Egypt.
(5) Al-Shawadfi, Gamal A.(2003). " A comparison between
neural network and Box-Jenkins Forecasting Techniques
With Application to Real data " King Saud University,
King Fahd National Library Cataloging-in-Publication
Data.
(6) Box, G. E. P., Jenkins, G. M., and Reinsel, G. C. (1994).
"Time Series Analysis, Forecasting and Control", (3rd
ed.), Englewood Cliffs, NJ: Prentice-Hall, p. 151.
(7) Cybenko, G. (1989). "Approximation by superpositions of a
sigmoidal function", Mathematics of Control, Signals and
Systems, 2, 303-314.
(8) Demuth, H. & Beale, M. (1998). "Neural Network Toolbox: User's
Guide, Version 3.0", The MathWorks, Inc., Natick, MA.
(9) Harvey, A. C. and Phillips, G. A. (1977). "A Comparison of
Estimators in the ARMA(1,1) Model", University of Kent
(unpublished paper).
(10) Hijazi, D. (2007), "Predicting the stock market method
of using artificial neural cells.", M.A. Thesis Department
of Statistics, Faculty of Economics and Political Science,
Cairo University, Cairo, Egypt.
(11) Shaarawy, S. and Ismail, M. A. (1987). "Bayesian inference for
seasonal ARMA models", Egypt. Statist. J., 31, 323-336.
(12) Turban, E., Aronson J. and Liang T. (2005). "Decision
Support Systems and Intelligent Systems ". 7th edition
Prentice Hall.
(13) Zurada, J. M. (1992). "Artificial Neural Systems".1st ed.
St. Paul, MN., West Publishing Company.

 
Appendix: MATLAB TOOLBOX program

 

%.. Toolbox for time series forecasting: training & testing using the neural network technique
%.... file name: train2010 ... output files: out2010.mat, outout2010 (diary)
diary('outout2010')
clear all;
tic;
mu=0; sigma=1; mm=60; m=mm-10; n=1; m0=500; n1=32; n2=8; n3=4; h=3;
ss01(n2,h)=0.0;ss02(n2,h)=0.0;sb01(n2,h)=0.0;sb02(n2,h)=0.0;ss(n2,h)=0;sb(n2,h)=0;
p=[1 1 0; 1 1 0; 1 1 0; 1 1 0; 1 2 0; 1 2 0; 1 2 0; 1 2 0; 1 1 1; 1 1 1; 1 1 1;1 1 1;
0 1 0; 0 1 0; 0 1 0; 0 1 0; 0 2 0; 0 2 0; 0 2 0; 0 2 0; 0 1 1; 0 1 1; 0 1 1; 0 1 1;
0 1 2; 0 1 2; 0 1 2; 0 1 2; 0 2 1; 0 2 1; 0 2 1; 0 2 1];
a=[.3 .5 .7 .9 .3 .5 .7 .9 .3 .5 .7 .9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
.3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9 .3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9 .3 .5 .7 .9 .3 .3 .5 .5;
0 0 0 0 -.5 .5 -.7 .3 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5;
0 0 0 0 0 0 0 0 .3 .5 .7 .9 0 0 0 0 0 0 0 0 .3 .5 .7 .9 .3 .3 .5 .5 .3 .5 .7 .9;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -.5 .5 -.7 -.5 0 0 0 0];
' 1....... initialization arrays ...... ....';
for i=1:m0;
e(mm,1)=0.0;e0(m,1)=0.0;
E(mm,1)=0.0; E0(m,1)=0.0;
x(mm,n1)=0;y(mm,n1)=0;
% .......generating samples.. ....;
e=normrnd(mu,sigma,mm,n);
E=normrnd(mu,sigma,mm,n);
x(1,:) = e(1)*ones(1,n1);
x(2,:) = e(2)*ones(1,n1) + a(3,:).*x(1,:);
y(1,:) = E(1)*ones(1,n1);
y(2,:) = E(2)*ones(1,n1)+ a(1,:).*y(1,:) + a(3,:).*x(2,:)- a(4,:).*x(1,:) -a(6,:)*E(1);
for i1=3:mm;
x(i1,:) = e(i1)*ones(1,n1) + a(4,:).*x(i1-1,:) + a(5,:).*x(i1-2,:);
y(i1,:) = E(i1)*ones(1,n1) + a(1,:).*y(i1-1) + a(2,:).*y(i1-2) + a(3,:).*x(i1) - ...
a(4,:).*x(i1-1) - a(5,:).*x(i1-2) - a(6,:).*E(i1-1,:) - a(7,:).*E(i1-2,:);
end;
x0=x(11:mm,:); x1=x0(1:(m-h),:); x2=x0((m-h+1):m,:);
y0=y(11:mm,:); y1=y0(1:(m-h),:); y2=y0((m-h+1):m,:);
z=[x0; y0] ; z0=unstkc(z,m,2*n1); z1=z0(1:(m-h),:) ;z2=z0((m-h+1):m,:);
if i==1;
xx = x0 ; xx1= x1 ; xx2 = x2 ;
yy = y0 ; yy1= y1 ; yy2 = y2 ;
zz = z0 ; zz1= z1 ; zz2 = z2 ;
else;
xx = [xx x0] ; xx1= [xx1 x1] ; xx2 = [xx2 x2 ];
yy = [yy y0] ; yy1= [yy1 y1] ; yy2 = [yy2 y2] ;
zz = [zz z0] ; zz1= [zz1 z1] ; zz2 = [zz2 z2];

end; end;
%2.........transforming data............;
xx0 = 0.8*(xx-ones(m,1)*min(xx))./(ones(m,1)*(max(xx)-min(xx)))+0.1;
xx01 = xx0(1:(m-h),:); xx02=xx0((m-h+1):m,:);
yy0 = 0.8*(yy-ones(m,1)*min(yy))./(ones(m,1)*(max(yy)-min(yy)))+0.1;
yy01 = yy0(1:(m-h),:); yy02=yy0((m-h+1):m,:);
zz0 = 0.8*(zz-ones(m,1)*min(zz))./(ones(m,1)*(max(zz)-min(zz)))+0.1;
zz01 = zz0(1:(m-h),:);zz02=zz0((m-h+1):m,:);
%3.........training ,testing and predicting phase............;
j00=0
for j = 1 : m0;
for j0 = 1 : n1;
j00 = j00 + 1
j1 = fix((j0-1)/n3)+1;
zz00 = zz0(:,2*j00-1:2*j00);zz001= zz01(:,2*j00-1:2*j00);
z000 = zz(:,2*j00-1:2*j00);z0001= zz1(:,2*j00-1:2*j00);
yy00 = yy0(:,j00) ; yy001= yy01(:,j00); yy002 = yy02(:,j00);
y000 = yy(:,j00) ; y0001= yy1(:,j00) ; y0002 = yy2(:,j00);
net =newff([0 1;0 1],[3 1],{'logsig' 'tansig'}) ;
% net=init(net);
net.trainparam.epochs =50;
%net.trainparam.gole =0.001;

net = train(net,zz00',yy00');
f = sim(net,zz00');
YN = min(y000)+ (f - 0.1)*(max(y000)- min(y000))/0.8;
pc = [p(j0,:), 1] ;
th = armax(z000,pc);
YB = predict(z000,th,h);
for j3=1:h;
s01 = abs(y0002(j3)-YN(m-h+j3));
b01 = abs(y0002(j3)-YB(m-h+j3));
s02 = (s01)^2;
b02 = (b01)^2;
ss01(j1,j3) = ss01(j1,j3) + s01;
sb01(j1,j3) = sb01(j1,j3) + b01;
ss02(j1,j3) = ss02(j1,j3) + s02;
sb02(j1,j3) = sb02(j1,j3) + b02;
if s01 < b01 ; ss(j1,j3) = ss(j1,j3)+1;
elseif s01 == b01 ; sb(j1,j3) = sb(j1,j3)+0.5; ss(j1,j3) = ss(j1,j3)+0.5;
else % s01 > b01
sb(j1,j3) = sb(j1,j3)+1;
end; end; end; end;

ss03=(ones(1,n2)*ss01)/(n2*n3*m0);
sb03=(ones(1,n2)*sb01)/(n2*n3*m0);

ss04=(ss01*ones(h,1))/(h*n3*m0);
sb04=(sb01*ones(h,1))/(h*n3*m0);

ss05=(ones(1,n2)*ss02)/(n2*n3*m0);

sb05=(ones(1,n2)*sb02)/(n2*n3*m0);

ss06=(ss02*ones(h,1))/(h*n3*m0);
sb06=(sb02*ones(h,1))/(h*n3*m0);

ss3=(ones(1,n2)*ss)/(n2*n3*m0);
sb3=(ones(1,n2)*sb)/(n2*n3*m0);

ss4=(ss*ones(h,1))/(h*n3*m0);
sb4=(sb*ones(h,1))/(h*n3*m0);

s1=sum(ss03)/h;
s11=sum(sb03)/h;

s2=sum(ss05)/h;
s22=sum(sb05)/h;

s3=sum(ss3)/h;
s4=sum(sb3)/h;

' 4 comparison between Neural Network and BOX JENKINS forecasts ... Final results .........';
disp 'mse results'
MSE=[ss02,ss06;[ss05,s2]]
MSE1=[sb02,sb06;[sb05,s22]]

disp 'mae results'


MAE=[ss01,ss04,;[ss03,s1]]
MAE1=[sb01,sb04,;[sb03,s11]]
disp 'nnf results'
mnnf=[ss/(n3*m0),ss4;[ss3,s3]]
disp 'box_jenkins ratios results'
'RATIO results'
ssb = ss./(n3*sb)
RATIO = [ssb,sum(ssb,2)/h;sum(ssb,1)/n2,sum(ssb(:))/(n2*h)]
save 'out2010'
%diary off;
toc

Artificial intelligence and time series analysis
Prof. Dr. Gamal Alshawadfi Dr. Abd El-Wahab Hagag
Head of Statistics Department Assistant professor
MAIL: Dr_Gamal1@yahoo.com MAIL:Wahabstat@yahoo.com
Mobile: 0020-01066543923 Mobile: 0020-01224709386

Faculty of Commerce, Al-Azhar University

Abstract
This paper has two objects. First, we present an artificial neural
network method for forecasting linear and nonlinear ARMAX time
series. Second, we compare the proposed method with the well-known
Box-Jenkins method through a simulation study. To achieve these
objects, 32000 samples of different sizes (25, 40, 60, 100, 150), generated
from different ARMAX models, were used for the network training.
Then the system was tested on the generated data. The accuracy of the
neural network forecasts (NNF) is compared with the corresponding Box-
Jenkins forecasts (BJF) using three tools: the mean square error
(MSE), the mean absolute deviation of error (MAD) and the percentage
of cases of minimum absolute error (MAEP). A suitable computer
program (MATLAB TOOLBOX) was designed for NN training, testing
and comparison with the Box-Jenkins method.
The forecasts of the proposed NN approach, as shown by the three
measures, seem to provide better results than the classical Box-Jenkins
forecasting approach. The results suggest that the ANN approach may
provide a superior alternative to the Box-Jenkins forecasting approach
for developing forecasting models in situations that do not require
modeling of the internal structure of the series.
The numerical results show that the proposed approach has a
good performance for the forecasting of ARMAX(p,h,q) models.
