Papers by Lokanatha Reddy
Connectivity is one of the most fundamental aspects of MANETs, since the basic purpose of a network is to facilitate the exchange of data among its nodes. This paper introduces the Connectivity Index (CI) as a parameter for studying MANETs.
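The abstract does not define CI, but the property it tracks can be illustrated with a plain reachability measure: the fraction of node pairs that can actually exchange data over the current topology. The sketch below (all function names and data are illustrative, not from the paper) computes that fraction with BFS.

```python
from collections import deque

def reachable(adj, src):
    # BFS from src; returns the set of nodes reachable over the topology
    seen = {src}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def pairwise_connectivity(adj):
    # Fraction of ordered node pairs that can exchange data;
    # 1.0 means the MANET is fully connected.
    n = len(adj)
    if n < 2:
        return 1.0
    ok = sum(len(reachable(adj, u)) - 1 for u in adj)
    return ok / (n * (n - 1))

# Connected triangle plus an isolated node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: []}
print(pairwise_connectivity(adj))  # 6 reachable pairs of 12 -> 0.5
```

A fully connected topology scores 1.0; any drop below that signals nodes that cannot communicate.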
International Journal of Computer Applications, Feb 28, 2011
Data, information, and knowledge play a significant role in human activities. Data mining is the knowledge discovery process of analyzing large volumes of data from various perspectives and summarizing it into useful information. Because extracting knowledge and information from large data repositories is so important, data mining has become an essential component in various fields of human life. Advancements in statistics, machine learning, artificial intelligence, pattern recognition and computation capabilities have shaped present-day data mining applications, and these applications have enriched many fields of human life, including business, education, medicine and science. Hence, this paper discusses the various improvements in the field of data mining from the past to the present and explores future trends.
Route Failure Tolerant Multicast in Mobile Ad Hoc Networks Using Disjoint Minimum Spanning Trees. Arun Kumar B. R, Lokanatha C. Reddy and Rajan, Dept. of CS, School of Science & Technology, Dravidian University, Kuppam, AP, India.
The conventional TCP suffers from poor performance on high bandwidth delay product links meant for supporting data transmission rates of multiple Gigabits per second (Gbps). This is mainly due to the fact that during congestion, TCP's congestion control algorithm reduces the congestion window cwnd to ¼ and enters additive increase mode, which can be slow in taking advantage of large amounts of available bandwidth. In this paper we have presented a new modified model to overcome the drawbacks of the TCP protocol, and propose to carry out a study of the modified model based on various parameters, viz. Throughput, Fairness, Stability, Performance and Bandwidth Utilization, for supporting data transmission across High Speed Networks.

The congestion control functionality of TCP is provided by four main algorithms, namely slow start, congestion avoidance, fast retransmit and fast recovery, in conjunction with several different timers. Slow start uses exponential window increase to quickly bring a newly starting flow up to speed. In steady state, the flow mostly uses congestion avoidance in conjunction with fast retransmit/recovery. These algorithms implement the classic Additive Increase/Multiplicative Decrease (AIMD) of the congestion window. When no losses are observed, the congestion window is increased by one for the successful acknowledgment of one window of packets. Upon a packet loss, the window is decreased to half its earlier value, to clear out the bottleneck link buffers. There are several challenges in current networks to this simple...
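The additive increase/multiplicative decrease behaviour described in this abstract can be sketched as a tiny per-RTT simulation. The step sizes follow the text (grow by one segment per cleanly delivered window, halve on loss); the function and variable names are illustrative only, not from the paper.

```python
def aimd(windows, losses, cwnd=1.0):
    # One AIMD step per acknowledged window: +1 segment when the
    # window is delivered cleanly, halve on a loss event.
    trace = []
    for rtt in range(windows):
        if rtt in losses:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
        else:
            cwnd += 1.0                # additive increase
        trace.append(cwnd)
    return trace

print(aimd(6, losses={3}))  # [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```

On a high bandwidth delay product path the window may need to grow by thousands of segments, so this one-per-RTT increase is exactly the slowness the paper targets.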
International Journal of Computer Applications, Feb 25, 2010
Mobile Ad-hoc Networks (MANETs) are highly decentralized, independent and self-organizing networks. It is significant to study the cost of the network and to optimize the routing method by means of cross-layer interaction across the layers of the network. In this paper, we first generate a minimum cost spanning tree for a given network of N nodes using an efficient algorithm, and then study the problem of constructing a K-node Multicast Minimum Spanning Tree (KMMST) for any given multicast group with K nodes, where K is less than N. Comparing the cost of the minimum spanning tree of the entire N-node network with the cost of the KMMST, it is found that the cost of the KMMST is significantly lower than the cost of the N-node spanning tree.
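The paper's KMMST construction is not given here; as a rough sketch of the cost comparison it reports, the code below runs Kruskal's algorithm once over the whole N-node graph and once over the subgraph induced by a K-node multicast group. (A real KMMST may also relay through non-group nodes, which this simplification ignores; all data and names are illustrative.)

```python
def kruskal(nodes, edges):
    # edges: list of (weight, u, v); returns total MST weight
    # using a union-find structure with path halving.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

nodes = ["a", "b", "c", "d", "e"]
edges = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"),
         (4, "d", "e"), (5, "a", "e"), (2, "b", "d")]
full = kruskal(nodes, edges)            # MST over all N nodes
group = {"a", "b", "c"}                 # multicast group of K nodes
sub = [(w, u, v) for w, u, v in edges if u in group and v in group]
ktree = kruskal(group, sub)             # tree spanning only the K members
print(full, ktree)                      # 9 3
```

As the abstract claims, the K-node tree costs far less than spanning the whole network, since it omits edges serving non-members.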
Advances in Intelligent Systems and Computing, 2018
Distributed computing is one of the most significant recent paradigms facing IT organizations. Since this new processing technology requires clients to entrust their information to providers, problems regarding the enhancement of security and privacy have been explored. Several strategies using attribute-based security have been proposed for administering access to sensitive details in cloud computing; however, most of them are inflexible when implementing complex access-management rules. To provide effective secure authentication for multi-user data sharing in the cloud computing environment, Transmitted Team Key Management (TTKM) has traditionally been used for sharing distributed data between multiple users, allowing them to share their data securely using Shamir secret key sharing. One major limitation of TTKM is the provision of inner-side security when sharing multiple files under a single security consideration in the distributed cloud environment. To address this limitation, in this paper we propose the Integrated Key Cryptosystem (IKC) for sharing multiple files under a single aggregate key for single-user data sharing; it combines different security systems within attribute-based encryption. Our experimental results show effective data utilization in a real-time distributed cloud environment with different kinds of file sharing.
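TTKM is said to rest on Shamir secret sharing, which can be sketched in a few lines: split a secret into n shares by evaluating a random degree-(k-1) polynomial over a prime field, and recover it from any k shares by Lagrange interpolation at zero. This is a generic textbook sketch, not the paper's TTKM or IKC construction, and all names are illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime large enough for demo secrets

def make_shares(secret, k, n):
    # Split `secret` into n shares, any k of which reconstruct it:
    # evaluate a random degree-(k-1) polynomial at x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```

Fewer than k shares reveal nothing about the secret, which is what makes the scheme attractive for group key management.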
Protecting databases and data contents from the open web environment is a tough task for a company. Every company, financial institution or hospital keeps its customer or end-user lists secret and does not open them to all; yet hackers have made theft possible and try to steal the data, often a major portion of it. Under these conditions, securing outsourced data in areas such as web hosting and cloud storage is becoming very prominent, and many secure sharing solutions have been proposed to manage the situation. This paper demonstrates one more novel approach with a highly secure and efficient sharing option for data retrieval by the end user. The technique comprises two well-known algorithms: DES, an encryption scheme, and K-NN query passing and data retrieval.
Distributed cloud data storage is an advanced and practical concept at present for outsourcing data to the cloud. A new decentralized fine-grained access control approach that supports anonymous authentication is required for privacy of stored data. In this paper we propose and develop an approach, Scalable Attribute Based Encryption (SABE), to achieve fine-grained, flexible and scalable access control in cloud computing for secure distributed cloud storage. SABE is scalable due to its hierarchical structure; it also provides effective and flexible access control built on ABE, and handles user expiration time and revocation more efficiently than existing schemes. Secure data transmission among users should be effective as well as flexible, in order to support access control policy models with secure team communication and selective, hierarchical control over data transfer in sharing. So in this paper we also propose and develop Transmitted Team Key Management (TTKM).
2010 IEEE 2nd International Advance Computing Conference (IACC), 2010
Lokanath C. Reddy, Dept. of CS, School of Science & Technology, Dravidian University, Kuppam, Andhra Pradesh, India; Prakash S. Hiremath, Dept. of PG Studies and Research, Gulbarga University, Gulbarga, Karnataka, India.
International Journal of Computer Applications, 2014
Digital libraries are huge and complex information systems whose users come from various backgrounds with diverse information requirements. Digital library users are weary of the information overload caused by unsophisticated search features, and digital libraries can only be implemented effectively by addressing this problem. Hence, this paper presents a newly designed personalized information retrieval services architecture, and its implementation details, to enhance the digital library users' search experience.
ijetae.com
In India, a number of university-level libraries are in the process of conversion to digital libraries, and there is a mission to create a portal for the Digital Library of India. A digital library portal provides an integrated web-based user interface to a wide range of online oceanographic climate data sets. The portal provides a single point of entry to the sets and displays them in a common, easy-to-navigate format. Moreover, it provides a means of searching the metadata and other characteristics of data sets, simplifies the management of data sets, and provides authentication and access control mechanisms for sensitive data. [5] In essence, we concentrate on designing methodologies for web pages and home pages for books, research papers, theses and a variety of digital resources, using local ColdFusion scripting for the access mechanism and digital asset management based on semantic webs. The purpose of digital libraries should have been a success, but research has shown that digital libraries are underutilized, due to immature user interfaces and information overload. [6] In addition, cost has become an obstacle to the development of digital libraries.
International Journal of Computer Science and Network Security (IJCSNS), 2008
Conventional TCP suffers from poor performance on high bandwidth delay product links meant for supporting transmission rates of multiple Gigabits per second (Gbps). This is largely due to TCP's congestion control algorithm, which can be slow in taking advantage of large amounts of available bandwidth. A number of high-speed variants have been proposed recently, the major ones being BIC TCP, CUBIC, FAST, High-Speed TCP, Layered TCP, Scalable TCP and XCP. In this paper an effort has been made to comparatively analyze the aforementioned protocols based on various parameters, viz. Throughput, Fairness, Stability, Performance, Bandwidth Utilization and Responsiveness, and to study the limitations of these protocols for High Speed Networks.
One approach is to design multi-tiered architectures that include an integration layer providing programme-level services for user-level applications such as a portal. Web portals are seen as potential frameworks for achieving order out of chaos, and the library portal is one approach to organizing information resources and services in a way that supports users' needs. However, the library portal will not be the only starting point for access to the library. [1] The future of library websites in fact lies in the integration of different effective information management and need-based service modules. As portals become a primary means for transacting information and commerce, libraries of all types are becoming involved in thinking about, planning and building various frameworks and services that they call portals. A web portal or public portal refers to a web site or service that offers a broad array...
IJCSNS, 2008
The issue of network partitioning is an important aspect of any network design, and its study is especially relevant to mobile ad-hoc networks (MANETs). MANETs are highly vulnerable to network partition due to dynamic changes in topology: very often the network will partition and remerge, affecting the performance of routing protocols. This paper introduces the application of the Connectivity Index (CI) concept to detect a partition (into two clusters) of a MANET.
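Detecting a partition reduces, in the simplest reading, to counting connected clusters of the topology graph. The CI-based test in the paper is not reproduced here; the sketch below (illustrative names and data) is only the baseline component count against which such a detector can be checked.

```python
def count_partitions(adj):
    # Count connected clusters via depth-first search; a count
    # above 1 means the MANET has partitioned.
    seen, parts = set(), 0
    for start in adj:
        if start in seen:
            continue
        parts += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return parts

merged = {0: [1], 1: [0, 2], 2: [1]}
split  = {0: [1], 1: [0], 2: [3], 3: [2]}  # link 1-2 broke
print(count_partitions(merged), count_partitions(split))  # 1 2
```

Rerunning the count as links appear and disappear captures the partition-and-remerge behaviour the abstract describes.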
International Journal of Data Mining & Knowledge Management Process, 2015
Interpreting available data is a focal issue in data mining. Gathering primary data is a difficult and expensive affair when assessing trends for any business decision, especially when multiple players are present. There is no uniform, formula-type procedure to deduce information from a vast data set, especially if the data formats in the secondary sources are not uniform and need enormous cleansing to prepare the data for statistical analysis. In this paper, an incremental approach to cleansing data using a simple yet extended procedure is presented, and it is shown how to deduce conclusions that facilitate business decisions. Freely available Indian telecom industry data over one year is used to illustrate the process. It is shown how to conclude the superiority of one telecom service provider over the others by comparing parameters such as network availability and customer service quality, using a relative parameter quantification technique. This method is found to be computationally less costly than other known methods.
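The paper's relative parameter quantification technique is not spelled out in this abstract. One plausible reading, sketched below with invented data and names, is to min-max scale each parameter across providers and average the scaled values into a relative score per provider.

```python
def relative_scores(metrics):
    # metrics: {provider: {parameter: value}}; higher raw value is
    # assumed better. Each parameter is min-max scaled across all
    # providers, then averaged, ranking providers relative to the field.
    params = next(iter(metrics.values())).keys()
    scores = {}
    for p in params:
        vals = [metrics[m][p] for m in metrics]
        lo, hi = min(vals), max(vals)
        for m in metrics:
            rel = (metrics[m][p] - lo) / (hi - lo) if hi > lo else 1.0
            scores[m] = scores.get(m, 0.0) + rel
    return {m: s / len(params) for m, s in scores.items()}

data = {
    "A": {"network_availability": 99.1, "service_quality": 4.4},
    "B": {"network_availability": 98.2, "service_quality": 3.9},
}
print(relative_scores(data))  # A outranks B on both parameters
```

Scaling each parameter to the same [0, 1] range lets unlike units (uptime percentages, survey ratings) be compared in one score, which is the spirit of a relative quantification.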
The chief limiting factor for current mobile devices is the amount of battery power. To improve this crucial factor, researchers have tried to optimize the power consumption of every aspect of the mobile device; consumption can be optimized in disks, memory chips, CPU scheduling and efficient routing techniques.
Dynamic Source Routing (DSR) is a popular protocol for mobile ad hoc routing and data forwarding over wireless networks. In this research an innovative mechanism is suggested for DSR that improves both routing and data forwarding performance with lower power consumption. The mechanism involves intelligent use of the route discovery and route maintenance processes, providing faster routing and reduced traffic compared to basic DSR, and enabling faster data forwarding and fewer collisions. Basic DSR and the modified DSR were studied and compared in the GloMoSim simulation environment. Since one of our major goals was to reduce routing overhead, the existing algorithm was modified to achieve this objective; to gauge the generated overhead, we counted the number of routing packets, which carry it. The analysis shows that the performance of the modified DSR is better than that of basic DSR for the simulation scenarios considered, and the modified algorithm was found to reduce the power consumption of the network by carrying a lower routing load.
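The specific modification is not detailed in the abstract, but the lever it pulls, route discovery and route maintenance, can be sketched with DSR's source-route cache: reuse cached paths, flood a route request only on a cache miss, and purge stale paths when a link breaks. Class and method names below are illustrative, not the paper's mechanism.

```python
class DsrRouteCache:
    # Minimal sketch of DSR's source-route cache: a sender reuses a
    # cached path when one exists and falls back to flooding a route
    # request only on a cache miss, cutting routing overhead.
    def __init__(self):
        self.routes = {}      # destination -> list of hops
        self.discoveries = 0  # route-request floods triggered

    def get_route(self, dst, discover):
        if dst not in self.routes:
            self.discoveries += 1
            self.routes[dst] = discover(dst)  # flood RREQ (expensive)
        return self.routes[dst]

    def link_broken(self, u, v):
        # Route maintenance: drop every cached path using the dead link
        self.routes = {d: p for d, p in self.routes.items()
                       if (u, v) not in zip(p, p[1:])}

cache = DsrRouteCache()
flood = lambda dst: ["s", "a", "b", dst]
cache.get_route("d", flood)
cache.get_route("d", flood)  # served from cache, no new flood
print(cache.discoveries)     # 1
cache.link_broken("a", "b")
cache.get_route("d", flood)  # rediscovery only after the link failure
print(cache.discoveries)     # 2
```

Every avoided flood is one fewer burst of broadcast routing packets, which is exactly the overhead metric the study counts.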