Books by Shaimaa Ahmed Elsaid
The differential diagnosis of erythemato-squamous diseases is a real challenge in dermatology. In diagnosing these diseases, a biopsy is vital; unfortunately, however, the diseases share many histopathological features as well. Another difficulty for the differential diagnosis is that one disease may show the features of another disease at the beginning stage and only develop its characteristic features at the following stages. In this paper, a new Feature Selection based on Linguistic Hedges Neural-Fuzzy classifier is presented for the diagnosis of erythemato-squamous diseases. The performance of this system is evaluated using four training-test partition models: 50–50%, 60–40%, 70–30% and 80–20%. The highest classification accuracy of 95.7746% was achieved for the 80–20% training-test partition using 3 clusters and 18 fuzzy rules, followed by 93.820% for the 50–50% partition using 3 clusters and 18 fuzzy rules, 92.5234% for the 70–30% partition using 5 clusters and 30 fuzzy rules, and 91.6084% for the 60–40% partition using 6 clusters and 36 fuzzy rules. The 80–20% training-test partition using 3 clusters and 18 fuzzy rules therefore gives the best classification accuracy, with an RMSE of 6.5139e-013. This research demonstrates that the proposed method can reduce the dimension of the feature space and can be used to obtain fast automatic diagnostic systems for other diseases.
In the past few years, many wireless sensor networks (WSNs) have been deployed, proving their usefulness in the future distributed computing environment. Some of their specific applications are habitat monitoring, object tracking, nuclear reactor control, fire detection, traffic monitoring, and health care. The main goals of this paper are to describe the major challenges and open research problems of using WSNs in healthcare and to survey advancements in using WSNs to build a chronically implanted artificial retina for visually impaired people. Using WSNs in vision repair addresses two retinal diseases: Age-related Macular Degeneration (severe vision loss at the center of the retina in people over 60) and Retinitis Pigmentosa (photoreceptor dysfunction leading to loss of peripheral vision). The use of WSNs in the artificial retina provides new features that have the potential to be an economically viable way to assist people with visual impairments.
Multimedia is one of the most popular types of data transmitted over mobile and wireless communication channels. Image encryption is often achieved using one of three approaches: traditional, selective, or in-compression encryption techniques. The most suitable approach for multimedia encryption is in-compression encryption, since it has the advantages of both the traditional and selective techniques. As an important way of designing secure video encryption schemes, the secret Multiple Huffman Tables (MHT) technique has been suggested as an in-compression encryption technique. However, the MHT technique suffers from high visual degradation at low bit rates, and the multiple Huffman tables may be vulnerable to discovery.
In this book, a new in-compression encryption technique for efficient and secure multimedia transmission is proposed. It is a general-purpose technique, meaning that it can be used with any bit-string. It depends on a statistical model of the images to be encrypted and compressed to generate Optimized Multiple Huffman Tables (OMHT). In this method, a set of images or videos, randomly selected from the entire set of images or videos to be transmitted, is used as a training data set to generate the optimized multiple Huffman tables. As these tables are generated from a set of images or videos of the same nature as those to be transmitted, they are the optimal tables to encrypt and compress the transmitted data; hence the name "Optimized". These tables are used to
encode-encrypt the compressed image according to a secret order (secret key) whose length is equal to the number of the generated Huffman tables. Only receivers that have the secret key can decode/decrypt the received bit stream. The effectiveness of this scheme is verified through a series of experiments, and the robustness of the developed approach is demonstrated
by comparing its performance with those of the traditional, selective, and MHT techniques. The experimental results prove that the proposed OMHT technique gives superior performance over other encryption techniques. The OMHT-based in-compression encryption technique can be used in addition to a lossy compression technique to get a high compression ratio with high perceptual quality of multimedia data transmitted over communication channels. An Adaptive Lossy Image
Compression (ALIC) technique is proposed to achieve a high compression ratio with acceptable visual quality of the compressed image. ALIC is proposed to overcome the drawbacks of the JPEG technique at low bit rates. It has three levels of compression. First, the excess information that the human eye cannot resolve is removed by applying the Discrete Cosine Transform (DCT) and discarding the coefficients of low energy content. Second, the source symbols are greatly reduced by applying an efficient quantization technique. Finally, the remaining information is further compressed by run-length encoding and Huffman encoding, using a Huffman table generated by merging multiple images of the same nature as the image to be compressed into a single image. For video encryption, two computationally efficient and secure video in-compression encryption schemes are proposed. These schemes are based on using the ALIC technique for video compression and the OMHT technique for video encryption. In the first scheme, the video sequence is considered as a sequence of consecutive frames and each frame is
compressed and encrypted individually without applying inter-frame compression, and, hence, transmission errors in one frame do not propagate to subsequent frames. This scheme uses the ALIC technique to perform compression on each frame through the three symbol-reduction steps: DCT coefficient reduction, then quantization and, finally,
application of the OMHT technique to encode/encrypt the compressed video. The other scheme processes the video as groups of pictures/frames (GOP). The GOP is a group of successive pictures within a coded video stream. Each coded video stream consists of successive GOPs. This scheme processes the I-frames (intra coded pictures) and P-frames
(predictive coded pictures) in a different manner. The P-frames are subjected to the ALIC technique to perform compression through three symbol-reduction steps: DCT coefficient reduction, quantization, and the OMHT encoding/encryption technique. For I-frames, compression is done by selecting the highest-energy coefficients in the DCT output vector, those carrying 99.9% of the frame energy, which are then encoded and encrypted using the OMHT technique. The performance of the proposed schemes is assessed by comparison with that of the currently used compression and encryption techniques regarding not only their compression efficiency but also their security level and perceptual quality. From the
experimental results it is found that the proposed schemes produce an encrypted coded bit stream with an improved video quality at low bit rate when compared to the currently used compression techniques in addition to being of higher security level than the existing encryption techniques.
The proposed schemes are applied for compression/encryption of video sequences to be transmitted over mobile communication channels implementing WCDMA. The performance of the proposed schemes when applied to video transmission over WCDMA is assessed by comparison with the MPEG-4 and Motion JPEG2000 (MJ2K) video compression standard techniques. The comparison considers both the presence and absence of transmission errors and their effects on the visual quality of the transmitted videos.
The experimental results show that the proposed video in-compression encryption schemes perform well over mobile communication channels in addition to achieving higher security of video transmission.
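The I-frame coefficient selection described above, keeping only the highest-energy DCT coefficients that together carry 99.9% of the frame energy, can be sketched as follows. This is a minimal illustration, not the book's implementation; the function name and per-block interface are assumptions.

```python
import numpy as np

def keep_energy_coefficients(block, fraction=0.999):
    """Zero out the lowest-energy DCT coefficients, keeping the smallest
    set of coefficients that carries `fraction` of the block's energy."""
    flat = block.flatten()
    order = np.argsort(np.abs(flat))[::-1]        # highest energy first
    energy = np.cumsum(flat[order] ** 2)
    # first position at which the accumulated energy reaches the target
    k = int(np.searchsorted(energy, fraction * energy[-1])) + 1
    kept = np.zeros_like(flat)
    kept[order[:k]] = flat[order[:k]]
    return kept.reshape(block.shape), k
```

Applied to a real DCT output, only the retained coefficients would then be passed on to the OMHT encoding/encryption stage.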
Papers by Shaimaa Ahmed Elsaid
Springer International Publishing, 2014
A Wireless Sensor Network consists of an enormous number of small disposable sensors equipped with limited power sources. Therefore, efficiently utilizing the sensor nodes' energy can maintain a prolonged network lifetime. This paper proposes an optimized hierarchical routing technique which aims to reduce energy consumption and prolong the network lifetime. In this technique, the selection of optimal cluster head (CH) locations is based on the Artificial Fish Swarm Algorithm (AFSA). The AFSA behaviors of preying, swarming, and following are applied to select the best locations of the CHs, and a fitness function is used to compare these behaviors and select the best CHs. The developed model is simulated in MATLAB. Simulation results show the stability and efficiency of the proposed technique; the results are obtained in terms of the number of alive nodes and the mean residual energy after a number of communication rounds. To prove the energy efficiency of AFSA, we have compared it to LEACH and PSO. Simulation results show that the proposed method outperforms both LEACH and PSO in terms of the first node death (FND) round, total data received by the base station, network lifetime, and energy consumed per round.
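The CH selection idea can be illustrated with a toy sketch: a fitness function trades coverage distance against residual energy, and the AFSA "following" behavior nudges a candidate toward the current best. The data layout, weighting, and step size here are invented for illustration; the paper's actual fitness function and behavior rules are not reproduced.

```python
import math

def fitness(chs, nodes):
    """Toy fitness for a candidate set of cluster heads (CHs): total
    node-to-nearest-CH distance minus total CH residual energy, so
    lower values are fitter. The weighting is a made-up example."""
    coverage = sum(min(math.dist(n["pos"], c["pos"]) for c in chs)
                   for n in nodes)
    energy = sum(c["energy"] for c in chs)
    return coverage - energy

def follow_step(fish, best, step=0.1):
    """AFSA 'following' behaviour: move a candidate CH a small step
    toward the best candidate seen so far."""
    x = fish["pos"][0] + step * (best["pos"][0] - fish["pos"][0])
    y = fish["pos"][1] + step * (best["pos"][1] - fish["pos"][1])
    return {"pos": (x, y), "energy": fish["energy"]}
```

Iterating preying/swarming/following moves and keeping the fittest candidate set is the core loop such an optimizer runs each round.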
This paper proposes three adaptive image quantisation techniques. These techniques calculate the quantisation levels using a method that depends on the image's distribution pattern (hence the word 'adaptive') and then round off the pixel values to the nearest quantisation level. In this way, the number of different transmitted values is reduced. These schemes provide a wide range of compression ratios (CRs) with a very slight degradation of the signal-to-noise ratio (SNR). They are applied to both grey and colour images. The performances of these techniques are analysed and compared to those of other quantisation techniques to prove their quality. From the experimental results it is clear that they work well for all types of images and provide higher PSNR and perceptual quality than other techniques.
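One way such an adaptive scheme can be sketched: derive the quantisation levels from the image's own histogram (here, evenly spaced percentiles, an assumption; the paper's level-placement methods may differ) and round each pixel to its nearest level.

```python
import numpy as np

def adaptive_quantise(img, n_levels=8):
    """Place quantisation levels at evenly spaced percentiles of the
    image's own value distribution (the 'adaptive' part), then round
    every pixel to its nearest level."""
    levels = np.percentile(img, np.linspace(0, 100, n_levels))
    idx = np.abs(img[..., None] - levels).argmin(axis=-1)
    return levels[idx]
```

Because the output contains at most `n_levels` distinct values, the quantised image compresses far better than the original while tracking its actual intensity distribution.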
Soft Computing
Most image compression algorithms suffer from several drawbacks: high computational complexity, moderate reconstructed picture quality, and a variable bit rate. In this paper, an efficient color image quantization technique that depends on an optimized Fuzzy C-means (OFCM) algorithm is proposed. It exploits the optimization capability of the improved artificial fish swarm algorithm to overcome the shortcomings of the Fuzzy C-means algorithm, and it uses error diffusion to obtain perceptually better images after quantization. Experiments are carried out to estimate the performance of the proposed OFCM algorithm in image compression using a standard image set. The results indicate that the algorithm can effectively decrease the mean square deviation of color quantization while preserving the overall structure and local characteristic detail of the reconstructed image. The performance of the proposed technique is compared with those of three other quantization algorithms; the comparative results confirm that OFCM has potential in terms of both accuracy and perceptual quality compared to recent methods in the literature.
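The error-diffusion step can be sketched with the classic Floyd-Steinberg kernel, which quantises each pixel to its nearest palette colour and spreads the residual error over the not-yet-visited neighbours; whether the paper uses this exact kernel is an assumption.

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion on a single-channel image:
    quantise each pixel to the nearest palette level and push the
    quantisation error onto unvisited neighbours, trading local
    noise for better perceived accuracy."""
    out = img.astype(float).copy()
    h, w = out.shape
    palette = np.asarray(palette, float)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = palette[np.abs(palette - old).argmin()]
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

In the paper's pipeline the palette would be the cluster centres produced by OFCM, applied per colour channel.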
Soft Computing Applications (SOFA 2012), Aug 2012
The differential diagnosis of erythemato-squamous diseases is a real challenge in dermatology. In diagnosing these diseases, a biopsy is vital; unfortunately, however, the diseases share many histopathological features as well. Another difficulty for the differential diagnosis is that one disease may show the features of another disease at the beginning stage and only develop its characteristic features at the following stages. In this paper, a new Feature Selection based on Linguistic Hedges Neural-Fuzzy classifier is presented for the diagnosis of erythemato-squamous diseases. The performance of this system is evaluated using four training-test partition models: 50–50%, 60–40%, 70–30% and 80–20%. The highest classification accuracy of 95.7746% was achieved for the 80–20% training-test partition using 3 clusters and 18 fuzzy rules, followed by 93.820% for the 50–50% partition using 3 clusters and 18 fuzzy rules, 92.5234% for the 70–30% partition using 5 clusters and 30 fuzzy rules, and 91.6084% for the 60–40% partition using 6 clusters and 36 fuzzy rules. The 80–20% training-test partition using 3 clusters and 18 fuzzy rules therefore gives the best classification accuracy, with an RMSE of 6.5139e-013. This research demonstrates that the proposed method can reduce the dimension of the feature space and can be used to obtain fast automatic diagnostic systems for other diseases.
Int. J. Biomedical Engineering and Technology, 2012
In ultrasound images, noise can obscure information which is valuable for the general practitioner. Although the sigma filter has been shown to be a good solution both in terms of filtering accuracy and computational complexity, it depends on estimating both the mean and variance manually. This paper presents an Enhanced Ultrasound Images Denoising (EUID) technique for speckle noise suppression in ultrasound images. This technique automatically estimates the amount of speckle noise in the ultrasound image by estimating the important input parameters of the filter, and then denoises the image using the sigma filter. The experimental results show that the proposed technique is a valuable tool for estimating the speckle standard deviation, being accurate, less tedious, and preventing the typical human errors associated with manual tasks, in addition to preserving the image edges. A comparative study is made of the performance of the EUID technique and other filters in removing speckles from the image and in preserving its edges.
Computer Methods and Programs in Biomedicine, 2013
Thyroid hormones produced by the thyroid gland help regulate the body's metabolism. A variety of methods have been proposed in the literature for thyroid disease classification. As far as we know, clustering techniques have not been applied to thyroid disease data sets so far. This paper proposes a comparison between hard and fuzzy clustering algorithms for a thyroid disease data set in order to find the optimal number of clusters. Different scalar validity measures are used in comparing the performances of the proposed clustering systems. To demonstrate the performance of each algorithm, the feature values that represent thyroid disease are used as input for the system. Several runs are carried out and recorded with a different number of clusters specified for each run (between 2 and 11), so as to establish the optimum number of clusters. To find the optimal number of clusters, the so-called elbow criterion is applied. The experimental results revealed that for all algorithms, the elbow was located at c = 3. The clustering results for all algorithms are then visualized by the Sammon mapping method to find a low-dimensional (normally 2D or 3D) representation of a set of points distributed in a high-dimensional pattern space. At the end of this study, some recommendations are formulated to improve the determination of the actual number of clusters present in the data set.
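The elbow criterion can be illustrated with a plain k-means inertia curve: the within-cluster sum of squares drops sharply up to the true cluster count and then flattens. This is a generic sketch, not the paper's specific clustering algorithms or validity measures.

```python
import numpy as np

def kmeans_inertia(X, k, iters=25, restarts=5):
    """Plain k-means (best of several random restarts); returns the
    within-cluster sum of squared distances (inertia) for cluster
    count k, the quantity plotted when applying the elbow criterion."""
    best = np.inf
    for seed in range(restarts):
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = ((X[:, None] - centres) ** 2).sum(-1).argmin(1)
            centres = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centres[j] for j in range(k)])
        labels = ((X[:, None] - centres) ** 2).sum(-1).argmin(1)
        best = min(best, float(((X - centres[labels]) ** 2).sum()))
    return best
```

Evaluating `kmeans_inertia` for c = 2..11 and looking for the bend in the curve mirrors the elbow analysis that located c = 3 in the paper.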
Int. J. Signal and Imaging Systems Engineering, Jan 1, 2013
Although the JPEG technique is considered the most popular image compression standard, it exhibits high visual degradation at low bit rates. In this paper, an Efficient DCT-based Image Compression technique is proposed to achieve a high Compression Ratio (CR) with high quality at both high and low bit rates. This technique switches between the JPEG compression technique at high bit rates and a novel Adaptive Lossy Image Compression (ALIC) technique at low bit rates. ALIC is proposed to overcome the drawbacks of the JPEG technique at low bit rates. The performance of the proposed technique is analysed at low and high bit rates on both gray and color images, and the performances of the JPEG and ALIC techniques are analysed and compared. The experimental results reveal that the proposed ALIC technique achieves a better CR with acceptable SNR in comparison with the JPEG technique. Also, the resultant CR of the ALIC technique can be considerably increased with only a slight decrease in its PSNR. This decrease in PSNR does not result in noticeable visual degradation of the compressed image; on the other hand, increasing the CR of the JPEG technique results in noticeable visual degradation due to the appearance of blocking effects in the reconstructed image. Thus, the ALIC technique is strongly recommended for applications that require a high CR with stable PSNR. ALIC is a general-purpose technique that can be applied not only to images, but to any data source which uses Huffman coding, to achieve a better CR. Therefore, it is suitable for compression of text, image and video.
International Journal of System Dynamics Applications, Jan 1, 2013
Facial detection and recognition are among the most heavily researched fields of computer vision and image processing. Most current face recognition techniques suffer when noise affects the global features or the local intensity pixels of the images under consideration. In the proposed Reliable Face Recognition System (RFRS), for the first time, a combination of the Gabor Filter (GF) and Principal Component Analysis (PCA) for efficient feature extraction and an ANN for classification is employed. This demonstrates how to detect faces in noisy images by training the network several times on various inputs: ideal and noisy images of faces. Applying GF before PCA reduces the sensitivity of PCA to noise, provides a greater level of invariance, and allows training the ANN on different sets of noisy images. The output of the ANN is a vector whose length equals the number of distinct subjects in the Olivetti Research Laboratory (ORL) database. The ANN is trained to output a 1 in the correct position of the output vector and to fill the rest of the output vector with 0's. Experimentation is carried out on RFRS using the ORL datasets. The experimental results show that training the network on noisy input images of faces greatly reduces its errors when classifying or recognizing noisy images. For noisy face images, the network did not make any errors for faces with noise of mean 0.00 or 0.05, while the average recognition rate varies from 96.8% to 98%. When noise of mean 0.10 is added to the images, the network begins to make errors. For noiseless face images, the proposed system achieves correct classification. Performance comparison between RFRS and other face recognition techniques shows that for most cases, RFRS performs better than conventional techniques under different types of noise, which shows the high robustness of the proposed algorithm.
Soft Computing, Feb 1, 2014
During recent years, license plate recognition has been widely used as a core technology for security and traffic applications such as traffic surveillance, parking lot access control, and information management. In this paper, a Shadow Aware License Plate Recognition (SALPR) system is proposed to recognize Egyptian license plates (LPs). This system achieves a high recognition rate by applying shadow detection and removal and rotation adjustment, and by using a multilayer perceptron as a powerful tool to perform the recognition process. To show the efficiency of the proposed system, experiments have been carried out on numerous captured images including various types of vehicles under different lighting and noise effects. The experimental results yield 95.5% recognition accuracy, and the recognition process takes 1.6 s to recognize the plate information; most of the elapsed time is spent on license plate extraction and rotation adjustment. The results show the feasibility of the methodology followed in this paper. Performance comparison between SALPR and other LP recognition techniques shows that for most cases, SALPR performs better than other techniques under different lighting conditions, which shows the high robustness of the proposed algorithm.
The support vector machine (SVM) is a supervised machine learning approach that is recognized as a statistical learning paradigm well suited to small-sample databases. SVM has shown excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of the standard SVM (St-SVM) is analyzed and compared with those of other modified classifiers: proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), the finite Newton method for Lagrangian support vector machines (NSVM), linear programming support vector machines (LPSVM), and smooth support vector machines (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107%, while St-SVM performed better than the other methods for all performance indices (accuracy = 97.71%) and is closely followed by LPSVM (accuracy = 97.3282%). However, in the validation phase, the overall accuracy of LPSVM reached 97.1429%, which was superior to LSVM (95.4286%), SSVM (96.5714%), PSVM (96%), NSVM (96.5714%), and St-SVM (94.86%). The ROC and MCC values for LPSVM reached 0.9938 and 0.9369, respectively, outperforming the other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.
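For a feel of the baseline, a standard soft-margin SVM on the same Wisconsin breast cancer data can be run with scikit-learn. The split, kernel, and C value here are assumptions, so the accuracy will not match the paper's figures exactly, and the modified variants (PSVM, LSVM, NSVM, LPSVM, SSVM) are not available in scikit-learn.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Standard (St-SVM-style) soft-margin SVM on the Wisconsin data.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature scaling matters for RBF SVMs; the kernel and C are assumed.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Swapping in a linear kernel or tuning C over a validation split is how the per-classifier comparison in the paper would be reproduced.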
Although the adaptive neuro-fuzzy inference system (ANFIS) has a very fast convergence time, it is not suitable for classification problems because its outputs are not integers. In order to overcome this problem, this paper provides four adaptive neuro-fuzzy classifiers: the adaptive neuro-fuzzy classifier with linguistic hedges (ANFCLH), the linguistic hedges neuro-fuzzy classifier with selected features (LHNFCSF), the scaled conjugate gradient neuro-fuzzy classifier (SCGNFC) and the speeding-up scaled conjugate gradient neuro-fuzzy classifier (SSCGNFC). These classifiers are used to achieve very fast, simple and efficient breast cancer diagnosis. Both the SCGNFC and SSCGNFC systems are optimized by scaled conjugate gradient algorithms. In these two systems, the k-means algorithm is used to initialize the fuzzy rules, and only the Gaussian membership function is used for fuzzy set descriptions, because of its simple derivative expressions. The other two systems are based on linguistic hedges (LH) tuned by scaled conjugate gradient. The classifiers' performances are analyzed and compared by applying them to breast cancer diagnosis. The results indicated that SCGNFC, SSCGNFC and ANFCLH achieved the same accuracy of 97.6608% in the training phase, while LHNFCSF performed better than the other methods by achieving a training accuracy of 100%. In the testing phase, the overall accuracy of LHNFCSF reached 97.8038%, which is also superior to the other methods. Applying LHNFCSF not only reduces the dimensions of the problem, but also improves classification performance by discarding redundant, noise-corrupted or unimportant features. The k-means clustering algorithm was also used to determine the membership functions of each feature. LHNFCSF achieved a mean RMSE value of 0.0439 in the training phase after feature selection, and gives the best recognition rates of 98.8304% and 98.0469% during the training and testing phases, respectively, using two clusters for each class. The results strongly suggest that ANFCLH can aid in the diagnosis of breast cancer and can be very helpful to physicians in making their final decisions on their patients.
Most medical images have a poorer signal-to-noise ratio than scenes taken with a digital camera, which often leads to incorrect diagnosis. Speckle suppression in ultrasound images is one of the most important concerns in computer-aided diagnosis. This article proposes two novel, robust and efficient ultrasound image denoising techniques. The first is the enhanced ultrasound images denoising (EUID) technique, which automatically estimates the amount of speckle noise in the ultrasound image by estimating the important input parameters of the filter and then denoises the image using the sigma filter. The second is ultrasound image denoising using a neural network (UIDNN), which is based on the second-order difference of pixels with an adaptive threshold value in order to identify random-valued speckles in images and achieve highly efficient image restoration. The performances of the proposed techniques are analyzed and compared with those of other image denoising techniques. The experimental results show that the proposed techniques are valuable tools for speckle suppression, being accurate, less tedious, and preventing the typical human errors associated with manual tasks, in addition to preserving the image edges. The EUID algorithm has nearly the same peak signal-to-noise ratio (PSNR) as the Frost and speckle-reducing anisotropic diffusion 1 filters, whereas it achieves higher gains, on average 0.4 dB higher PSNR, than the Lee, Kuan, and anisotropic diffusion filters. The UIDNN technique outperforms all the other techniques since it can determine the noisy pixels and perform filtering for these pixels only. Generally, when relatively high levels of noise are added, the proposed algorithms show better performance than the other conventional filters.
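A minimal sketch of the sigma-filter idea with an automatic noise estimate: the noise standard deviation is guessed from first differences (a stand-in for EUID's parameter estimation; the actual estimator used in the paper is not specified here), and each pixel is replaced by the mean of the window neighbours lying within two sigma of its value.

```python
import numpy as np

def estimate_sigma(img):
    """Rough noise estimate via the median absolute deviation of the
    horizontal first differences (assumed estimator, for illustration)."""
    d = np.diff(img, axis=1)
    return float(np.median(np.abs(d - np.median(d))) / 0.6745 / np.sqrt(2))

def sigma_filter(img, radius=1, sigma=None):
    """Lee sigma filter: each pixel becomes the mean of those window
    neighbours whose values lie within 2*sigma of its own value, which
    smooths noise while leaving strong edges largely untouched."""
    if sigma is None:
        sigma = estimate_sigma(img)
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = img[i0:i1, j0:j1].astype(float)
            mask = np.abs(win - img[i, j]) <= 2 * sigma
            out[i, j] = win[mask].mean()   # centre pixel always included
    return out
```

Because pixels far outside the 2-sigma band are excluded from the average, edges survive the smoothing, which is the property the comparative study measures.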
International Journal of …, Jan 1, 2011
A novel Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve a high compression ratio by reducing the number of source symbols through the application of an efficient technique. The proposed algorithm is based on processing the discrete cosine transform (DCT) of the image to extract the highest-energy coefficients, in addition to applying one of the novel quantization schemes proposed in the present work. This method is straightforward and simple: it does not need complicated calculations, so the hardware implementation is easy. Experimental comparisons are carried out to compare the performance of the proposed technique with those of standard techniques such as JPEG. The experimental results show that the proposed compression technique achieves a high compression ratio with a higher peak signal-to-noise ratio than that of JPEG at low bit rates, without the visual degradation that appears in the case of JPEG.
International Journal of Signal …, Jan 1, 2011
Multimedia is one of the most popular types of data shared on the Web, and its protection via encryption techniques is of great interest. In this paper, an Optimized Multiple Huffman Tables (OMHT) technique is proposed to address some compression and security problems found in the Multiple Huffman Tables (MHT) technique. OMHT uses a statistical-model-based compression method to generate different tables from the same data type of images or videos to be encrypted, increasing both the compression efficiency and the security of the used tables. A systematic study on how to strategically integrate different atomic operations to build a multimedia compression-encryption system is presented. The resulting system can provide superior performance over both generic encryption and its simple adaptation to multimedia in terms of a joint consideration of security and bitrate overhead. The effectiveness of this scheme is verified through a series of experiments, and the robustness of our approach is demonstrated by comparing it against a standard compression technique, JPEG, on which the MHT technique is built.
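The keying idea behind MHT/OMHT can be shown with a toy example: several prefix-code tables for the same alphabet are cycled through in a secret order, so the bit stream decodes correctly only with the right key. The tables below are made up; real OMHT derives optimized tables from a training set of images or videos of the same nature as the data to be transmitted.

```python
# Three toy prefix-code tables for the alphabet {a, b, c}; invented
# for illustration, not real Huffman tables trained on image data.
TABLES = [
    {"a": "0", "b": "10", "c": "11"},
    {"a": "11", "b": "0", "c": "10"},
    {"a": "10", "b": "11", "c": "0"},
]

def encode(symbols, key):
    """Encode symbol i with the table chosen by key[i mod len(key)]:
    compression and encryption happen in the same pass."""
    return "".join(TABLES[key[i % len(key)]][s] for i, s in enumerate(symbols))

def decode(bits, n_symbols, key):
    """Decode only succeeds with the same secret table order."""
    out, pos = [], 0
    for i in range(n_symbols):
        inv = {code: sym for sym, code in TABLES[key[i % len(key)]].items()}
        for ln in (1, 2):                 # codes here are 1 or 2 bits long
            if bits[pos:pos + ln] in inv:
                out.append(inv[bits[pos:pos + ln]])
                pos += ln
                break
    return "".join(out)
```

With many tables and a long secret order, an eavesdropper who lacks the key cannot parse the bit stream into the right codewords, which is the security argument the MHT family relies on.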
Although JPEG is the most popular image compression standard, it is not an adaptive technique; it is independent of the image characteristics. A novel Adaptive Lossy Compression technique, Least Probable Coefficients Approximation (LPCA), is proposed to achieve better performance and simpler implementation than JPEG at low bitrate. Compression using LPCA is based on reducing the number of source symbols by applying efficient processing to the discrete cosine transform (DCT) coefficients of the image, in addition to efficient quantization to reduce the transmitted values. It is a general compression technique that can be applied to any digital data, not only images. Experimental comparisons are carried out between the proposed technique and JPEG. The experimental results show that JPEG works efficiently at high bitrate, but at low bitrate significant block artifacts appear in decoded images. The proposed compression technique achieves a high compression ratio (CR) with a higher signal-to-noise ratio (SNR) than JPEG at low bitrate, without the severe visual degradation that appears in the case of JPEG.
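The core of the pipeline this abstract describes — a block DCT followed by discarding the least significant coefficients — can be sketched as follows. This is a minimal illustration, not the published LPCA algorithm: the 8x8 block size, the `keep` count and all function names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def compress_block(block, keep=10):
    """Keep only the `keep` largest-magnitude 2-D DCT coefficients."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0    # discard low-energy coefficients
    return coeffs

def decompress_block(coeffs):
    """Inverse 2-D DCT of the surviving coefficients."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
approx = decompress_block(compress_block(block, keep=10))
```

With `keep=64` (all coefficients) the round trip is exact, since the DCT matrix is orthonormal; lowering `keep` trades reconstruction error for fewer transmitted values.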
International Journal of Networking and Virtual …, Jan 1, 2012
Secure Link State Protocol (SLIP) is the most effective scheme proposed so far to protect the Internet infrastructure against routing table 'poisoning' attacks. The major disadvantage of SLIP is its failure to protect the network against inactive routing table attacks and against some proactive attacks. A successful attempt is made here to improve the performance of SLIP as applied to protecting the Internet infrastructure of a communication network. A Modified Secure Link State Protocol (MSLIP) is proposed to overcome the problems found in SLIP: it retains the advantages of SLIP and adds the operations necessary to protect the Internet infrastructure in all cases of inactive and proactive routing attacks. A complete comparison of the performance of MSLIP, SLIP and the traditional LS protocol under different scenarios is presented in this work. Sensible results and a valuable practical relation are obtained. The results show that MSLIP performs better than both the existing SLIP and the traditional link state protocol LS.
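The idea of rejecting poisoned routing updates can be illustrated, in much simplified form, by authenticating each link-state advertisement before it is installed in the routing table. This sketch is not SLIP or MSLIP: it substitutes a shared-key HMAC for whatever cryptographic protection those protocols actually use, and all names and fields are hypothetical.

```python
import hashlib
import hmac
import json

def sign_lsa(key: bytes, lsa: dict) -> bytes:
    """MAC over a canonical encoding of the link-state advertisement."""
    msg = json.dumps(lsa, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def accept_lsa(key: bytes, lsa: dict, tag: bytes, table: dict) -> bool:
    """Install the advertised links only if the MAC verifies; otherwise
    treat the update as a possible poisoning attempt and drop it."""
    if not hmac.compare_digest(sign_lsa(key, lsa), tag):
        return False
    table[lsa["router"]] = lsa["links"]
    return True

key = b"shared-router-key"  # hypothetical pre-shared key
lsa = {"router": "R1", "seq": 7, "links": {"R2": 1, "R3": 4}}
tag = sign_lsa(key, lsa)
table = {}
ok_genuine = accept_lsa(key, lsa, tag, table)          # genuine update installed
forged = dict(lsa, links={"R2": 1, "R3": 1})           # attacker lowers a cost
ok_forged = accept_lsa(key, forged, tag, table)        # rejected: MAC mismatch
```

Real link-state security schemes use per-router public-key signatures and sequence-number checks rather than a single shared key; the sketch only shows why an unauthenticated table entry never gets installed.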
Books by Shaimaa Ahmed Elsaid
A suitable approach for multimedia encryption is the in-compression encryption approach, since it combines the advantages of both the traditional and the selective techniques. As an important way of designing secure video encryption schemes, the secret Multiple Huffman Tables (MHT) technique has been suggested as an in-compression encryption technique. However, the MHT technique suffers from high visual degradation at low bit rate, and its multiple Huffman tables may be vulnerable to discovery.
In this book, a new in-compression encryption technique for efficient and secure multimedia transmission is proposed. It is a general-purpose technique, meaning that it can be used with any bit-string. It depends on a statistical model of the images to be encrypted and compressed to generate Optimized Multiple Huffman Tables (OMHT). In this method, a set of images or videos, randomly selected from the entire set of images or videos to be transmitted, is used as a training data set to generate the optimized multiple Huffman tables. Because these tables are generated from images or videos of the same nature as those to be transmitted, they are the optimal tables for encrypting and compressing the transmitted data; hence the name "Optimized". These tables are used to encode and encrypt the compressed image according to a secret order (secret key) whose length is equal to the number of generated Huffman tables. Only receivers that have the secret key can decode/decrypt the received bit stream. The effectiveness of this scheme is verified through a series of experiments, and the robustness of the developed approach is demonstrated by comparing its performance with those of the traditional, selective, and MHT techniques. The experimental results show that the proposed OMHT technique gives superior performance over the other encryption techniques. The OMHT-based in-compression encryption technique can be combined with a lossy compression technique to obtain a high compression ratio with high perceptual quality of multimedia data transmitted over communication channels. An Adaptive Lossy Image
Compression (ALIC) technique is proposed to achieve a high compression ratio with acceptable visual quality of the compressed image. ALIC is designed to overcome the drawbacks of the JPEG technique at low bit rate. It has three levels of compression. First, the excess information that the human eye cannot resolve is removed by applying the Discrete Cosine Transform (DCT) and discarding the coefficients of low energy content. Second, the number of source symbols is greatly reduced by applying an efficient quantization technique. Finally, the remaining information is further compressed by run-length encoding and Huffman encoding, using a Huffman table generated from multiple images of the same nature as the image to be compressed, merged to form a single image. For video encryption, two computationally efficient and secure video in-compression encryption schemes are proposed. These schemes are based on using the ALIC technique for video compression and the OMHT technique for video encryption. In the first scheme, the video sequence is treated as a sequence of consecutive frames and each frame is
compressed and encrypted individually, without applying inter-frame compression; hence, transmission errors in one frame do not propagate to subsequent frames. This scheme uses the ALIC technique to compress each frame through the three symbol-reduction steps: DCT coefficient reduction, then quantization and, finally, application of the OMHT technique to encode/encrypt the compressed video. The other scheme processes the video as groups of pictures/frames (GOPs). A GOP is a group of successive pictures within a coded video stream, and each coded video stream consists of successive GOPs. This scheme processes the I-frames (intra-coded pictures) and P-frames (predictive-coded pictures) differently. The P-frames are subjected to the ALIC technique, which performs compression through the three symbol-reduction steps: DCT coefficient reduction, quantization, and OMHT encoding/encryption. As for the I-frames, compression is done by selecting the highest-energy coefficients in the DCT output vector, those carrying 99.9% of the frame energy, which are then encoded and encrypted using the OMHT technique. The performance of the proposed schemes is assessed by comparison with the currently used compression and encryption techniques, regarding not only their compression efficiency but also their security level and perceptual quality. The experimental results show that the proposed schemes produce an encrypted coded bit stream with improved video quality at low bit rate compared to the currently used compression techniques, in addition to offering a higher security level than the existing encryption techniques.
The proposed schemes are applied for the compression/encryption of video sequences to be transmitted over mobile communication channels implementing WCDMA. Their performance for video transmission over WCDMA is assessed by comparison with the MPEG-4 and Motion JPEG2000 (MJ2K) standard video compression techniques, considering both the presence and the absence of transmission errors, including their effect on the visual quality of the transmitted videos. The experimental results show that the proposed video in-compression encryption schemes perform well over mobile communication channels, in addition to achieving higher security of video transmission.
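The encode/encrypt step described above — several Huffman tables applied in a secret key order — can be sketched as follows. This is an illustrative toy, not the book's OMHT implementation: table training here is a plain per-training-set Huffman build, and the "key" is a short list of table indices.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_table(freqs):
    """Build a symbol -> bitstring prefix code from symbol frequencies."""
    tie = count()  # tiebreaker so the heap never compares the dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def train_tables(training_sets):
    """One Huffman table per training set (stands in for OMHT training)."""
    return [huffman_table(Counter(data)) for data in training_sets]

def encode_encrypt(data, tables, key_order):
    """Encode symbol i with the table selected by the secret key order."""
    out = []
    for i, sym in enumerate(data):
        out.append(tables[key_order[i % len(key_order)]][sym])
    return "".join(out)

def decode_decrypt(bits, tables, key_order, n_symbols):
    """Decode only works when the same secret table order is known."""
    inv = [{code: s for s, code in t.items()} for t in tables]
    out, pos = [], 0
    for i in range(n_symbols):
        rev = inv[key_order[i % len(key_order)]]
        j = pos + 1
        while bits[pos:j] not in rev:  # extend until a codeword matches
            j += 1
        out.append(rev[bits[pos:j]])
        pos = j
    return out
```

Because each table's codewords are prefix-free, a receiver holding the key order can decode unambiguously, while a receiver guessing the wrong table order mis-parses the bit stream. The training sets must cover the alphabet of the data to be encoded.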
Papers by Shaimaa Ahmed Elsaid
calculate the quantisation levels using a method that depends on the image's distribution pattern (hence the word 'adaptive') and then round off the pixel values to the nearest quantisation level. In this way, the number of different transmitted values is reduced. These schemes provide a wide range of compression ratios (CRs) with only a very slight degradation of the signal-to-noise ratio (SNR). They are applied to both grey and colour images. The performance of these techniques is analysed and compared with that of other quantisation techniques. The experimental results show that they work well for all types of images and provide higher PSNR and perceptual quality than the other techniques.
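A distribution-dependent quantiser of the kind described can be sketched as follows. Placing the levels at quantiles of the pixel distribution is one simple way to make them 'adaptive' (the papers' actual level-placement rule may differ); pixels are then rounded to the nearest level.

```python
def adaptive_levels(pixels, n_levels):
    """Place quantisation levels at evenly spaced quantiles of the pixel
    distribution, so densely populated intensity ranges get more levels."""
    s = sorted(pixels)
    # midpoints of n_levels equal-probability bins
    return [s[(2 * i + 1) * len(s) // (2 * n_levels)] for i in range(n_levels)]

def quantise(pixels, levels):
    """Round each pixel value to the nearest quantisation level."""
    return [min(levels, key=lambda v: abs(v - p)) for p in pixels]

pixels = [0, 1, 2, 3, 10, 11, 12, 200]
levels = adaptive_levels(pixels, 4)
quantised = quantise(pixels, levels)
```

After quantisation, at most `n_levels` distinct values remain to be transmitted, which is the symbol reduction the abstract refers to.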
An Adaptive Lossy Image Compression (ALIC) technique is proposed to achieve a high compression ratio by reducing the number of source symbols. The proposed algorithm is based on processing the discrete cosine transform (DCT) of the image to extract the highest-energy coefficients, in addition to applying one of the novel quantization schemes proposed in the present work. This method is straightforward and simple: it does not need complicated calculation, so the hardware implementation is easy. Experimental comparisons are carried out between the proposed technique and standard techniques such as JPEG. The experimental results show that the proposed compression technique achieves a high compression ratio with a higher peak signal-to-noise ratio than JPEG at low bit rate, without the visual degradation that appears in the case of JPEG.
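The "highest-energy coefficients" selection step can be illustrated by keeping the largest-magnitude DCT coefficients until a target fraction of the total energy is covered (e.g. the 99.9% figure used for I-frames elsewhere in this list). The function name and threshold handling are illustrative assumptions, not the published algorithm.

```python
def keep_energy(coeffs, fraction=0.999):
    """Indices of the largest-magnitude coefficients that together
    carry at least `fraction` of the total energy (sum of squares)."""
    order = sorted(range(len(coeffs)), key=lambda i: coeffs[i] ** 2, reverse=True)
    total = sum(c * c for c in coeffs)
    kept, acc = [], 0.0
    for i in order:
        if acc >= fraction * total:
            break                 # enough energy already captured
        kept.append(i)
        acc += coeffs[i] ** 2
    return sorted(kept)
```

On a typical DCT output, where a few coefficients dominate, this discards most entries while preserving nearly all of the signal energy; the discarded positions are simply not transmitted.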
are applied for the detection and classification of breast cancer. Decision making is performed in two stages: training the classifiers with features from the Wisconsin Breast Cancer database, and then testing. The performance of the proposed structure is evaluated in terms of sensitivity, specificity, accuracy and ROC. The results reveal that PNN was the best classifier, achieving accuracy rates of 100% and 97.66% in the training and testing phases, respectively. MLP was ranked second, achieving 97.80% and 96.34% classification accuracy in the training and validation phases, respectively, using the scaled conjugate gradient learning algorithm. RBF performed better than MLP in the training phase, but achieved the lowest accuracy in the validation phase.
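The evaluation metrics named here can be computed from a confusion matrix in a few lines. This sketch assumes binary 0/1 labels (1 = malignant, 0 = benign) and is not tied to the paper's classifiers.

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from two 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Sensitivity matters most in this setting, since a false negative (a missed malignancy) is costlier than a false positive.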
(DCT) coefficients of the image, in addition to efficient quantisation to reduce the transmitted values. It is a general compression technique that can be applied to any digital data, not just images. Experimental comparisons are carried out between the proposed technique and JPEG. The experimental results show that the proposed compression technique achieves a high Compression Ratio (CR) with a higher Signal-to-Noise Ratio (SNR) than JPEG at low bitrate, without the severe visual degradation that appears in the case of JPEG.
important concern in multimedia technology. In this paper, we propose two computationally efficient and secure video in-compression encryption models. Both models use the adaptive lossy image compression (ALIC) technique (El-said et al., 2010) for video compression and the optimised multiple Huffman tables (OMHT) technique (El-said et al., 2011) for video encryption. We achieve computational efficiency by using the proposed ALIC technique, which exploits the frequently occurring patterns in the DCT coefficients of the video data; the computational complexity of the encryption is proportional to the number of transmitted DCT coefficients in each video frame. The proposed video encryption models produce an encrypted coded sequence with improved quality at high compression ratios when compared to the existing techniques. The performances of the proposed models are compared with those of the existing techniques with respect not only to their compression performance but also to their security level. A simulation study of the two models over a mobile communication system shows that they perform well in the presence of transmission errors.