Papers by International Journal of Computer Science & Information Technology (IJCSIT)
With the development of multimedia technology and digital devices, it has become very simple to recapture high-quality images from LCD screens. In authentication, the use of such recaptured images can be very dangerous, so it is important to recognize recaptured images in order to increase authenticity. Although a number of features have been proposed in various state-of-the-art visual recognition tasks, it is still difficult to decide which feature or combination of features has the most significant impact on this task. In this paper, an image recapture detection method based on a set of physics-based features, including texture, HSV colour and blurriness, is proposed. The paper also evaluates the performance of different distinctive features in the context of recognizing recaptured images. Several experimental setups have been conducted in order to demonstrate the performance of the proposed method. In all of these experiments, the proposed method is efficient with a good recognition rate. Among the combination of low-level features, the CS-LBP operator used to extract the texture feature is the most robust.
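As a rough illustration of the texture feature singled out above, the sketch below computes a center-symmetric LBP (CS-LBP) histogram for a grayscale patch. The 8-neighbour/radius-1 layout, the threshold T and the random patch are assumptions for illustration, not the paper's exact settings.

```python
# Minimal CS-LBP sketch: compare the four centre-symmetric neighbour pairs of
# each interior pixel and build a 16-bin code histogram.
import numpy as np

def cs_lbp_histogram(gray, T=0.01):
    """Return a normalized 16-bin CS-LBP histogram for a grayscale image in [0, 1]."""
    g = gray.astype(np.float64)
    # four centre-symmetric neighbour pairs: N-S, NE-SW, E-W, SE-NW
    pairs = [
        (g[:-2, 1:-1], g[2:, 1:-1]),
        (g[:-2, 2:],   g[2:, :-2]),
        (g[1:-1, 2:],  g[1:-1, :-2]),
        (g[2:, 2:],    g[:-2, :-2]),
    ]
    code = np.zeros(g[1:-1, 1:-1].shape, dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code += ((a - b) > T).astype(int) << bit
    hist, _ = np.histogram(code, bins=16, range=(0, 16))
    return hist / hist.sum()

# Example: describe a random patch standing in for a (re)captured image region
patch = np.random.rand(64, 64)
print(cs_lbp_histogram(patch))
```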
Biometric matching involves finding the similarity between fingerprint images. The accuracy and speed of the matching algorithm determine its effectiveness. This research aims at comparing two types of matching algorithms, namely (a) matching using global orientation features and (b) matching using minutia triangulation. The comparison is done using accuracy, time and the number of similar features. The experiment is conducted on a dataset of 100 candidates using four (4) fingerprints from each candidate. The data is sampled from a mass registration conducted by a reputable organization in Kenya. The research reveals that fingerprint matching based on algorithm (b) performs better in speed, with an average of 38.32 milliseconds, compared to matching based on algorithm (a), with an average of 563.76 milliseconds. On accuracy, algorithm (a) performs better, with an average accuracy of 0.142433, compared to algorithm (b), with an average accuracy score of 0.004202.
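A hedged sketch of the triangulation idea behind algorithm (b): build a Delaunay triangulation over minutia coordinates and compare triangles by their sorted side lengths. The tolerance, the similarity score and the random minutiae are illustrative assumptions, not the paper's data or exact matcher.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_signatures(minutiae):
    """Sorted side-length triple for every Delaunay triangle over the minutiae."""
    tri = Delaunay(minutiae)
    sigs = []
    for a, b, c in tri.simplices:
        p, q, r = minutiae[a], minutiae[b], minutiae[c]
        sides = sorted([np.linalg.norm(p - q),
                        np.linalg.norm(q - r),
                        np.linalg.norm(r - p)])
        sigs.append(sides)
    return np.array(sigs)

def match_score(m1, m2, tol=3.0):
    """Fraction of triangles in m1 with a near-identical triangle in m2."""
    s1, s2 = triangle_signatures(m1), triangle_signatures(m2)
    hits = sum(np.any(np.all(np.abs(s2 - s) < tol, axis=1)) for s in s1)
    return hits / len(s1)

# Toy example: a probe print and a slightly perturbed copy of it
rng = np.random.default_rng(0)
probe = rng.uniform(0, 300, size=(30, 2))
gallery = probe + rng.normal(0, 1.0, size=probe.shape)
print(match_score(probe, gallery))
```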
The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, are the main reasons for an increased number of competitive cache misses and for performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This technique enables competitive access to the entire cache memory when there is a hit; but if there is a cache miss, memory data (by using replacement techniques) is placed in a virtual part assigned to the thread, so that competitive cache misses are avoided. Using a simulator tool, the results show a decrease in the number of cache misses and a performance increase of up to 15%. The conclusion of this research is that cache misses are a real challenge for future processor designers seeking to hide memory latency.
RSSI-based localization techniques are affected by environmental factors which cause the RF signals emitted from transmitter nodes to fluctuate in the time domain. These variations generate fluctuations in distance calculations and result in false object-position detection during localization. Smoothing procedures must be applied to the distance values, either collectively or individually, to minimize these fluctuations. In this study, the proposed detection system has two main phases: first, calibration of RSSI values with respect to distances and calculation of an environmental coefficient for each transmitter; second, position estimation of objects by applying iterative trilateration to the smoothed distance values. A smoothing algorithm is employed to minimize the dynamic fluctuations of RF signals received from each reference transmitter node. Distances between the reference nodes and the objects are calculated by applying the environmental coefficients. Experimental measurements are carried out to measure the sensitivity of the system. Results show that the proposed system can be deployed as a viable position detection system both indoors and outdoors.
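A minimal sketch of the two phases, assuming the common log-distance path-loss model RSSI(d) = RSSI(d0) - 10·n·log10(d/d0). The anchor positions, RSSI readings and per-transmitter coefficients below are placeholders, not the paper's calibration data.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0, n, d0=1.0):
    """Invert the log-distance model to estimate range from a smoothed RSSI value."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares 2-D position estimate from >= 3 anchors and ranges."""
    (x1, y1), d1 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A, dtype=float), np.array(b, dtype=float), rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10)]        # reference transmitter nodes
rssi    = [-55.0, -63.0, -63.0]             # smoothed RSSI per anchor (dBm)
n_env   = [2.0, 2.1, 2.0]                   # per-transmitter environmental coefficient
dists = [rssi_to_distance(r, rssi_d0=-40.0, n=n) for r, n in zip(rssi, n_env)]
print(trilaterate(anchors, dists))
```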
In the field of enterprise architecture (EA), qualitative scenarios are used to understand the qualitative characteristics better. In order to reduce implementation cost, scenarios are prioritized so that effort can focus on the higher-priority, more important scenarios. There are different methods to evaluate enterprise architecture, including the Architecture Trade-off Analysis Method (ATAM), and prioritizing qualitative scenarios is one of the main phases of this method. Since none of the recent studies meets the requirements of qualitative-scenario prioritization, this study uses the non-dominated sorting genetic algorithm (NSGA-II), taking proper prioritization criteria into account while reaching an appropriate prioritization speed. In addition to the standards of previous research, more criteria were considered in the proposed algorithm; these sets of structures together form the genes and, in the form of a cell array, constitute the chromosome. The proposed algorithm is evaluated in two case studies in the fields of enterprise architecture and software architecture. The results show better accuracy and more appropriate speed compared to previous works, including genetic algorithms. KEYWORDS: enterprise architecture evaluation, qualitative scenario prioritization, NSGA-II algorithm
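For orientation, the sketch below shows the fast non-dominated sorting at the core of NSGA-II, which is what allows several prioritization criteria to be handled at once. The two-objective (cost, negated value) vectors are made-up stand-ins for the paper's scenario criteria.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return a list of fronts (lists of indices), best front first."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions dominated by i
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Each scenario scored on (implementation cost, negated stakeholder value)
scenarios = [(3, -8), (5, -9), (2, -4), (4, -7), (6, -10)]
print(non_dominated_sort(scenarios))
```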
The importance of the health sector in Colombia is noticeably growing. In this article one of the major issues concerning the field is addressed: the interoperability of health histories. Given the establishment of standards, Colombia must begin a process to meet this important necessity in order to offer an efficient and high-quality health service. In this context, the FHIR standard is an example of a successful implementation of electronic health histories in the public sector. It is intended to be implemented gradually, first in the beneficiary institution, Rubén Cruz Vélez Hospital, and later in other health institutions in the country. KEYWORDS: Interoperability – HL7 FHIR – HCE-Archetypes
A critical process in a software project life-cycle is risk assessment and mitigation. Risks exist in every software project, and recognizing and evaluating risks and uncertainties is a challenging process for practitioners with little historical data. In our study, using survey data, we identify and provide relatively wide coverage of software project risks and their ratings. The risk register and evaluations are useful for practitioners in small organizations at the initial phase of risk identification and assessment. There
Preserving the confidentiality, integrity and authenticity of images is becoming very important. There are many different encryption techniques to protect images from unauthorized access. Matrix multiplication can be successfully used to encrypt and decrypt digital images. In this paper we present a comparative study of two image encryption techniques based on matrix multiplication, namely the segmentation and parallel methods.
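A sketch of the underlying idea: each pixel block B is encrypted as C = K·B with a key matrix K and recovered as B = K⁻¹·C. The 4x4 key, the floating-point inverse with rounding, and the random "image" are simplifying assumptions for illustration; the paper's segmentation and parallel variants differ in how the blocks are scheduled, which is not shown here.

```python
import numpy as np

BLOCK = 4
rng = np.random.default_rng(1)
K = rng.integers(1, 10, size=(BLOCK, BLOCK)).astype(np.float64)
while abs(np.linalg.det(K)) < 1e-6:          # make sure the key matrix is invertible
    K = rng.integers(1, 10, size=(BLOCK, BLOCK)).astype(np.float64)
K_inv = np.linalg.inv(K)

def encrypt(img):
    out = np.empty_like(img, dtype=np.float64)
    for r in range(0, img.shape[0], BLOCK):
        for c in range(0, img.shape[1], BLOCK):
            out[r:r+BLOCK, c:c+BLOCK] = K @ img[r:r+BLOCK, c:c+BLOCK]
    return out

def decrypt(cipher):
    out = np.empty_like(cipher)
    for r in range(0, cipher.shape[0], BLOCK):
        for c in range(0, cipher.shape[1], BLOCK):
            out[r:r+BLOCK, c:c+BLOCK] = K_inv @ cipher[r:r+BLOCK, c:c+BLOCK]
    return out

image = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
restored = np.rint(decrypt(encrypt(image)))
print(np.array_equal(image, restored))       # True: decryption recovers the original image
```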
High-utility itemset mining (HUIM) is an important research topic in the data mining field, and numerous algorithms have been proposed. However, existing methods for HUIM present too many high-utility itemsets (HUIs), which reduces not only the efficiency but also the effectiveness of mining, since users have to sift through a large number of HUIs to find useful ones. Recently a new representation, the closed+ high-utility itemset (CHUI), has been proposed; with this concept, the number of HUIs is reduced massively. Existing methods adopt two phases to discover CHUIs from a transaction database. In phase I, an itemset is first checked to see whether it is closed. If the itemset is closed, an overestimation technique is adopted to set an upper bound on the utility of this itemset in the database. The itemsets whose overestimated utilities are no less than a given threshold are selected as candidate CHUIs. In phase II, the candidate CHUIs generated in phase I are verified by computing their utilities in the database. However, there are two problems with these methods: 1) the number of candidate CHUIs is usually very large and extensive memory is required; 2) the method for computing closed itemsets is time-consuming. Thus, in this paper we propose an efficient algorithm, CloHUI, for mining CHUIs from a transaction database. CloHUI does not generate any candidate CHUIs during the mining process and verifies closed itemsets from a tree structure. We propose a strategy to make the verification process faster. Extensive experiments have been performed on sparse and dense datasets to compare CloHUI with the state-of-the-art algorithm CHUD. The experimental results show that, on dense datasets, our proposed algorithm CloHUI significantly outperforms CHUD: it is more than an order of magnitude faster and consumes less memory.
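To make the utility notion concrete, the small sketch below computes the utility of an itemset (quantity times unit profit, summed over the transactions that contain the whole itemset) and compares it against a minimum-utility threshold. The toy database, profit table and threshold are invented for illustration; this is not the CloHUI algorithm itself.

```python
profits = {'a': 5, 'b': 2, 'c': 1, 'd': 4}

# each transaction maps item -> purchased quantity
database = [
    {'a': 1, 'b': 2, 'c': 3},
    {'a': 2, 'c': 1, 'd': 1},
    {'b': 4, 'c': 2},
    {'a': 1, 'b': 1, 'd': 2},
]

def utility(itemset, db=database):
    """Total utility of `itemset` over all transactions containing it."""
    total = 0
    for t in db:
        if all(i in t for i in itemset):
            total += sum(t[i] * profits[i] for i in itemset)
    return total

min_util = 15
for candidate in [{'a'}, {'a', 'b'}, {'a', 'd'}, {'b', 'c'}]:
    u = utility(candidate)
    print(sorted(candidate), u, 'HUI' if u >= min_util else 'low utility')
```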
Nowadays, in industry, end-users of business rules inside large or small companies claim that it is hard to understand the rules, either because they are hand-written in specific structural or procedural languages used only inside their organizations, or because they require a certain understanding of the back-end process. As a result, the need for a better management system that is easy to use and easy to maintain during the evolution process has increased. In this paper, the emphasis is put on building a business rule management system (BRMS) as a graphical editor for editing the models in a flexible, agile manner with the assistance of the ATL and Sirius frameworks within the Eclipse platform. Thus, the proposed solution, on the one hand, solves the problem of wasting resources dedicated to updating the rules and, on the other hand, guarantees great visibility and reusability of the rules.
Domain-specific modeling is increasingly understood as a comparable alternative to classical software development. Textual domain-specific languages (DSLs) already have a massive impact; in contrast, graphical DSLs still have to show their full potential. Established textual DSLs are normally generated from a domain-specific grammar or other specific textual descriptions. An advantage of textual DSLs is that they can be developed cost-efficiently. In this paper, we describe a similar approach for the creation of graphical DSLs from textual descriptions. We present a set of specially developed textual DSLs to fully describe graphical DSLs based on node and edge diagrams. These are, together with an EMF meta-model, the input for a generator that produces an Eclipse-based graphical editor. The entire project is available as open source under the name MoDiGen. KEYWORDS: Model-Driven Software Development (MDSD), Domain-Specific Language (DSL), Xtext, Eclipse Modeling Framework (EMF), Metamodel, Model-Driven Architecture (MDA), Graphical Editor
The main goal of this work is the implementation of a new tool for Amazigh part-of-speech tagging using Markov models and decision trees. After studying different approaches to, and problems of, part-of-speech tagging, we implemented a tagging system based on TreeTagger, a generic stochastic tagging tool that is very popular for its efficiency. We gathered a working corpus, large enough to ensure general linguistic coverage. This corpus has been used to run the tokenization process as well as to train TreeTagger. Then we performed a straightforward evaluation of the outputs on a small test corpus. Though restricted, this evaluation showed very encouraging results. Part-of-speech (POS) tagging is an essential step in most natural language processing applications because it identifies the grammatical category of the words in a text. Thus, POS taggers are an important module for large public applications such as question-answering systems, information extraction, information retrieval and machine translation. They can also be used in many other applications, such as text-to-speech, or as a pre-processor for a parser; the parser can do the job itself, but at greater cost. In this paper, we focus on POS tagging for the Amazigh language. Currently, TreeTagger (henceforth TT) is one of the most popular and most widely used tools thanks to its speed, its language-independent architecture and the quality of the obtained results. Therefore, we sought to develop a TT parameter file for Amazigh. Our work involves the construction of the dataset and the input pre-processing needed to run the two main modules: the training program and the tagger itself. In this way, this work contributes to the still scarce set of tools and resources available for the automatic processing of Amazigh. The rest of the paper is organized as follows. Section 2 puts the current article in context by reviewing related work. Section 3 describes the linguistic background of the Amazigh language. Section 4 presents the Amazigh tagset used and our training corpus. Experimental results are discussed in Section 5. Finally, we report our conclusions and possible future work.
Information security against hacking, altering, corrupting and divulging data is vital and inevitable, and it requires effective management in every organization. Among the upcoming challenges are the study of available frameworks for Enterprise Information Security Architecture (EISA) and the extraction of criteria in this field. In this study, a method is adopted to extract and categorize important and effective criteria in the field of information security by studying the major dimensions of EISA, including standards, policies and procedures, organization infrastructure, user awareness and training, security baselines, risk assessment and compliance. Gartner's framework is applied as the fundamental model to categorize the criteria. To assess the proposed model, a questionnaire was prepared and completed by a group of EISA professionals. Fuzzy TOPSIS was used to quantify the data and prioritize the criteria. It can be concluded that database and database security criteria, inner software security, electronic exchange security and the supervision of malicious software are high priorities.
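As a simplified stand-in for the Fuzzy TOPSIS step, the sketch below ranks alternatives with crisp TOPSIS: weighted normalization, distances to the ideal and anti-ideal solutions, and a closeness coefficient. The weights and questionnaire ratings are invented, and all criteria are assumed to be benefit criteria.

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives by closeness to the ideal solution (all-benefit criteria)."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization per criterion
    v = norm * weights                            # weighted normalized decision matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness coefficient per alternative

# columns: e.g. database security, inner software security, exchange security
weights = np.array([0.5, 0.3, 0.2])
ratings = [[8, 6, 7],
           [5, 9, 6],
           [7, 7, 9]]
for name, cc in zip(['A1', 'A2', 'A3'], topsis(ratings, weights)):
    print(name, round(cc, 3))
```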
The availability of online information shows the need for an efficient text summarization system. Text summarization systems follow extractive and abstractive methods. In extractive summarization, the important sentences are selected from the original text on the basis of sentence ranking methods. Abstractive summarization systems understand the main concepts of a text and predict the overall idea of the topic. This paper mainly concentrates on a survey of existing extractive text summarization models. Numerous algorithms are studied and their evaluations are explained. The main purpose is to observe the peculiarities of existing extractive summarization models and to find a good approach that helps to build a new text summarization system.
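A minimal sketch of the extractive idea: score each sentence by the frequency of its content words and keep the top-ranked sentences in their original order. The simple tokenization and tiny stop-word list are assumptions chosen for brevity, not any surveyed system's exact ranking method.

```python
import re
from collections import Counter

STOP = {'the', 'a', 'an', 'of', 'and', 'to', 'is', 'in', 'on', 'for', 'are'}

def extractive_summary(text, n_sentences=2):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = [w for w in re.findall(r'[a-z]+', text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(s):
        tokens = [w for w in re.findall(r'[a-z]+', s.lower()) if w not in STOP]
        return sum(freq[w] for w in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ' '.join(s for s in sentences if s in ranked)   # keep original sentence order

doc = ("Text summarization reduces a document to its key sentences. "
       "Extractive methods rank sentences and select the best ones. "
       "The weather was pleasant yesterday. "
       "Sentence ranking often relies on word frequency features.")
print(extractive_summary(doc))
```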
Software testing is a primary phase of software development and is carried out by applying a sequence of test inputs and comparing against expected outputs. Evolutionary algorithms are among the most popular population-based methods in the computational field. The test case generation process is used to identify test cases with resources and also identifies critical domain requirements. The behaviour of bees underlies a population-based evolutionary method: the Bee Colony Algorithm (BCA), which has gained superiority over other algorithms in the field of computation. The Harmony Search (HS) algorithm is based on the improvisation process in music: when musicians compose a harmony through different possible combinations of pitches, the pitches are stored in the harmony memory, and optimization is carried out by adjusting the input pitches to generate the perfect harmony. Particle Swarm Optimization (PSO) is an intelligence-based meta-heuristic algorithm in which each particle locates its source of food at a different position; the particles search for a better food-source position in the hope of getting a better result. In this paper, the roles of the Artificial Bee Colony, Particle Swarm Optimization and Harmony Search algorithms in generating random test data and optimizing that test data are analyzed. Test case generation and optimization through Bee Colony, PSO and Harmony Search (HS) algorithms are applied to a case study, namely the withdrawal operation of a bank ATM, and it is observed that these algorithms are able to generate suitable automated test cases or test data in an efficient manner. The paper further gives brief details of, and comparisons between, the HS, PSO and Bee Colony (BC) optimization methods used for test case or test data generation and optimization.
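A hedged sketch of PSO-based test data generation for the ATM withdrawal example: particles are candidate (balance, amount) inputs and the fitness is a simple branch distance for the "insufficient funds" branch. The fitness function, bounds and PSO parameters are illustrative assumptions, not the paper's setup.

```python
import random

def branch_distance(balance, amount):
    """0 when the target branch (amount > balance) is taken; smaller is better."""
    return max(0.0, balance - amount + 1)

def pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 5000), random.uniform(0, 5000)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: branch_distance(*p))
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if branch_distance(*p) < branch_distance(*pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=lambda p: branch_distance(*p))
    return gbest

test_input = pso()
print('generated test case (balance, amount):', [round(x, 2) for x in test_input])
```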
Edge detection is one of the most fundamental algorithms in digital image processing. Many algorithms have been implemented to construct image layers extracted from the original image based on selected threshold parameters. Changing these parameters to obtain a high-quality layer is time consuming. In this paper, we propose two parallel techniques, NASHT1 and NASHT2, that automatically generate multiple layers of an input image to enable the image tester to select the highest-quality detected edges. In addition, the effect of intensive I/O operations and of the number of parallel running processes on the performance of the proposed techniques has also been studied.
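A generic sketch of the multi-layer idea (not the NASHT1/NASHT2 implementations themselves): compute one Sobel-style gradient magnitude, then threshold it with several parameters in parallel so the tester gets a set of edge layers to choose from. The thresholds and random test image are placeholders.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gradient_magnitude(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                       # small hand-rolled 3x3 filtering
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_layer(args):
    mag, t = args
    return t, (mag > t).astype(np.uint8)

if __name__ == "__main__":
    image = np.random.rand(256, 256)
    mag = gradient_magnitude(image)
    thresholds = [0.5, 1.0, 1.5, 2.0]
    with ProcessPoolExecutor() as pool:
        layers = dict(pool.map(edge_layer, [(mag, t) for t in thresholds]))
    for t, layer in layers.items():
        print(f"threshold {t}: {layer.sum()} edge pixels")
```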
For more than two decades, children's use of multimedia was restricted to watching television and listening to music. Although some parents complained about children being addicted to listening to music, the idea that children could be addicted to television was a real concern for most parents. Nowadays parents not only need to be concerned about how much television their kids are watching, but also about the many other forms of media emerging with the fast development of information technology, such as the internet, video games, tablets and smartphones. From this, the researcher came to realize that children are increasingly becoming consumers of the application software facilitated by these information systems. According to research, children spend at least three hours a day on these media, which include the use of computers, tablets, smartphones and music. The researcher was concerned that system vendors build applications for all age groups based on learnability principles that were designed with adult users in mind. Many interface design principles used for adult products cannot be applied to products meant for children, and, furthermore, children of different ages learn differently. The research examined the existing learnability principles by evaluating them and proposing new principles that can be used to improve the current ones, so that information system designers can use them effectively to improve the learnability of application software meant for children of different age groups.
Biometric data has recently come to play a major role in determining the identity of a person. With such importance placed on the use of biometric data, there are many attacks that threaten the security and integrity of the biometric data itself. It therefore becomes necessary to protect the originality of biometric data against manipulation and fraud. This paper presents an authentication technique to establish the authenticity of speech signals based on an adaptive watermarking technique. The basic idea is to extract the speech features from the speech signal first and then use these features as a watermark. The watermark information is embedded into the same speech signal. The short-time energy technique is used to identify suitable positions for embedding the watermark in order to avoid the regions used by the speech recognition system. After excluding the important areas used in speech recognition, a Genetic Algorithm (GA) is used to generate random locations in which to hide the watermark information in an intelligent manner. The experimental results show high efficiency in establishing the authenticity of the speech signal, and the watermark embedding process does not affect the features used in speech recognition.
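A small sketch of the short-time energy step used to pick embedding positions: frames whose energy falls below a threshold are treated as candidate regions for hiding watermark bits, so high-energy (recognition-relevant) regions are avoided. Frame size, threshold and the synthetic signal are assumed values, and the GA placement step is not shown.

```python
import numpy as np

def short_time_energy(signal, frame_len=256, hop=128):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(np.float64) ** 2) for f in frames])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
# synthetic "speech": silence with noise in the first half, a tone in the second
speech = 0.6 * np.sin(2 * np.pi * 220 * t) * (t > 0.5) + 0.01 * rng.standard_normal(t.size)

energy = short_time_energy(speech)
threshold = 0.2 * energy.max()
candidate_frames = np.where(energy < threshold)[0]   # low-energy frames -> embedding candidates
print(f"{len(candidate_frames)} of {len(energy)} frames available for embedding")
```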
Previous research indicates that, in the context of developing countries, the adoption of e-government services by citizens has failed to proceed as planned. The obstacles behind this failure are varied, including socio-cultural, economic and technical obstacles. But with recent advances in mobile technologies, as well as the pervasive penetration of mobile phones, governments in developing countries, including Jordan, have been able to overcome most of these obstacles through so-called mobile government (or m-government). This has provided an alternative channel for governments to improve interaction with their citizens, as well as the quality of services provided to them. Accordingly, exploring the factors that affect the adoption of m-government services would reduce the gap between government strategies and policies relating to the development of m-government services, on the one hand, and the perceptions of citizens, on the other, allowing for a better understanding of the citizens' needs and priorities that must be taken into account by governments in order to ensure the success of such services on a large scale. This research is based on a re-evaluation of the empirical results of a comprehensive study conducted by Susanto and Goodwin (2010), which concluded that fifteen factors are likely to affect the adoption of SMS-based e-government services by citizens in 25 countries around the world, but in the context of a different country in the Arab world, namely Jordan.
Scheduling processes is one of the most fundamental functions of the operating system. In that context, one of the most common scheduling algorithms used in most operating systems is the Round Robin method, in which the processes waiting in the ready queue take control of the processor circularly, each for a short period of time known as the time quantum (or time slice). Here we discuss the use of statistics and develop a mathematical model to determine the most efficient value of the time quantum. This is strictly theoretical, as we do not know the execution times of the various processes beforehand. However, the proposed approach is compared with recently developed algorithms in this regard to determine its efficiency.
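To illustrate why the quantum matters, the sketch below simulates Round Robin for several candidate quanta and reports the average waiting time for each. The burst times are a made-up workload that all arrive at time zero; the paper's statistical model instead estimates a good quantum without knowing these values in advance.

```python
def round_robin_avg_wait(bursts, quantum):
    """Average waiting time for Round Robin when all processes arrive at time 0."""
    n = len(bursts)
    remaining = list(bursts)
    wait = [0] * n
    done = 0
    while done < n:
        for i in range(n):
            if remaining[i] == 0:
                continue
            run = min(quantum, remaining[i])
            for j in range(n):              # every other unfinished process waits while i runs
                if j != i and remaining[j] > 0:
                    wait[j] += run
            remaining[i] -= run
            if remaining[i] == 0:
                done += 1
    return sum(wait) / n

bursts = [24, 3, 3, 17, 8]                  # hypothetical CPU burst times (ms)
for q in (2, 4, 6, 8, 10):
    print(f"quantum {q:2d} ms -> average waiting time {round_robin_avg_wait(bursts, q):.1f} ms")
```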
Jordanian E-government websites. The Conversational Agent (CA) is a smart system used to handle natural conversations between a user and a machine. Jordanian citizens face difficulties when they want to apply for a service through the E-government portal. In addition, Jordanians struggle when searching for a piece of information (for example, the documents needed for a specific service) inside the E-government websites. This difficulty arises for a number of reasons, such as the knowledge the user must have to deal with such services and the large number of links the user must visit to achieve his/her target. In addition, the design of the Jordanian E-government websites does not meet the users' requirements. Instead, this paper proposes the idea of applying a prototype called CA to those websites as a general automated help-desk service to save Jordanians time and effort. Simply, the user will chat with the proposed CA about what he/she intends to do on the targeted website using text-based Arabic conversations. The CA's responses might be the exact link needed or the targeted information. Such a service will strengthen the Jordanian E-government platform, especially with respect to accessibility and usability, and, to the best of our knowledge, no country has applied it before.
system, their functionalities and the features provided. However, the proposed system is not confined to the computing area; it can support any other science and technology area without any need to modify the system.
support of software processes and products. There is no universally agreed theory of software measurement, yet software metrics are useful for obtaining information for evaluating processes and products in software engineering. They help to plan and carry out improvement in software organizations and to provide objective information about project performance, process capability and product quality. Process capability is extremely important for the software industry because the quality of products is largely determined by the quality of the processes. The use of existing metrics and the development of innovative software metrics will be important factors in future software engineering process and product development. In future, research work will focus on using software metrics in software development for developing time schedules and cost estimates, and on how software quality can be improved through software metrics. The permanent application of measurement-based methodologies to the software process and its products provides important and timely management information, together with the use of those techniques to improve the software process and its products. This paper mainly concentrates on an overview of the basics of software measurement and the fundamentals of software metrics in software engineering.
technique produced a stego-image with less distortion in image quality than the MSB technique, independent of the nature of the hidden data. The LSB algorithm, however, produced the best stego-image quality, while large cover images further improved the quality of the combined algorithm. The combined algorithm also had a shorter image and text encoding time. Therefore, a trade-off exists between the encoding time and the quality of the stego-image, as demonstrated in this work.
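A hedged sketch of LSB embedding, the component reported above to give the best stego-image quality: message bits replace the least significant bit of consecutive pixels, so each pixel changes by at most one intensity level. The cover values and message are placeholders, and the MSB/combined variants are not reproduced.

```python
import numpy as np

def embed_lsb(cover, message: bytes):
    """Hide `message` in the least significant bits of a uint8 cover image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()                      # copy; the cover stays untouched
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(2).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
print(extract_lsb(stego, 6))                                       # b'secret'
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))    # per-pixel distortion of at most 1
```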
a larger amount of redundant content. Hence, training, testing and classification times increase, and the expression recognition accuracy rate is also reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace method is introduced. The extracted feature vector space is projected into a subspace using the SW2DPCA method. The proposed method is formed by applying weighting principles to the odd and even symmetrical decomposition spaces of the training sample sets. The conventional PCA and 2DPCA methods yield a lower recognition rate due to larger variations in expression and lighting caused by the larger number of redundant variants in the feature space. The proposed SW2DPCA method addresses this problem by reducing redundant content and discarding unequal variants. In this work, the well-known JAFFE database is used for the experiments and tested with the proposed SW2DPCA algorithm. The experimental results show that the facial recognition accuracy rate of the F+SW2DPCA-based feature-fusion subspace method increases to 95.24% compared to the 2DPCA method.
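For context, the sketch below shows the plain 2DPCA step that SW2DPCA builds on: compute the image covariance matrix of the training images and project each image onto the leading eigenvectors to get a compact feature matrix. The random "face images" stand in for JAFFE samples, and the symmetric odd/even weighting of SW2DPCA is not reproduced here.

```python
import numpy as np

def two_d_pca(images, k=5):
    """Return the (width x k) projection matrix from the image covariance matrix."""
    mean = np.mean(images, axis=0)
    cov = np.zeros((images.shape[2], images.shape[2]))
    for img in images:
        diff = img - mean
        cov += diff.T @ diff
    cov /= len(images)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    return vecs[:, -k:]                         # keep the top-k eigenvectors

rng = np.random.default_rng(0)
train = rng.random((40, 32, 32))                # 40 training "faces" of 32x32
W = two_d_pca(train, k=5)
features = train[0] @ W                         # 32x5 feature matrix for one image
print(features.shape)
```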
explanatory power was the highest amongst the constructs (able to explain 28% of usage behaviour), while “Attitude” explains around 11% of SNS usage behaviour. The study findings also show that the “Perceived Social Capital” construct has a notable impact on usage behaviour; this impact came indirectly through its direct effect on “Attitude” and “Perceived Usefulness”. The contribution of “Perceived Social Capital” to the models' explanatory power was the third highest amongst the constructs; “Perceived Social Capital” alone explains around 9% of SNS usage behaviour.
issues and challenges raised by a fully functional DBMS/data warehouse on MapReduce. Various MapReduce implementations are compared with the most popular implementation, Hadoop, and with other similar implementations on other platforms.
each region to localize the forged portion. Experimental results show that this hybrid method can effectively detect this kind of image tampering with minimal false positives.
detected in time. Computed Tomography can be more efficient than X-ray for detecting lung cancer in time, but problems arise due to the time constraints in detecting the presence of lung cancer. MATLAB has been applied to study these techniques. Feature selection is a method to reduce the number of features in medical applications where the image has hundreds or thousands of features. In order to extract the accurate features of an image, the image needs to be processed for effective retrieval. Image feature selection is an essential task for recognizing the image, and it can be carried out to overcome classification problems. The quality of the image recognition task can, moreover, be improved with the help of better classification accuracy, enhancing the retrieval performance.
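A generic feature-selection sketch in the spirit described above: keep the k features with the highest ANOVA F-score and check that classification accuracy holds up. The synthetic data stands in for CT-derived image features, and scikit-learn's SelectKBest/f_classif and a k-NN classifier are common choices assumed here, not the paper's specific pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# stand-in for hundreds of extracted image features per scan
X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                           random_state=0)

baseline = cross_val_score(KNeighborsClassifier(), X, y, cv=5).mean()
selected = cross_val_score(
    make_pipeline(SelectKBest(f_classif, k=20), KNeighborsClassifier()), X, y, cv=5
).mean()
print(f"all 200 features: {baseline:.3f}   best 20 features: {selected:.3f}")
```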