Cloud Drops


DROPS: Division and Replication of Data in Cloud for Optimal Performance and Security


Objective:
The main aim of this system is to propose the DROPS methodology, a cloud storage security scheme
that collectively addresses security and performance in terms of retrieval time.

Abstract:
Outsourcing data to a third-party administrative control, as is done in cloud computing, gives rise
to security concerns. Data compromise may occur due to attacks by other users and nodes
within the cloud. Therefore, strong security measures are required to protect data within the cloud.
However, the employed security strategy must also take into account the optimization of the data
retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for
Optimal Performance and Security (DROPS), which collectively approaches the security and
performance issues. In the DROPS methodology, we divide a file into fragments and replicate
the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a
particular data file, which ensures that even in the case of a successful attack, no meaningful
information is revealed to the attacker. Moreover, the nodes storing the fragments are separated
by a certain distance by means of graph T-coloring to prevent an attacker from guessing the
locations of the fragments.
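
As a rough illustration of the fragment-and-replicate idea above, the following minimal Java sketch (Java being the coding language listed in the requirements below) splits a file's bytes into a fixed number of fragments, one per node. The class name, method name, and fragment count are illustrative assumptions, not part of the DROPS specification.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: split a file's bytes into k fragments so that each cloud
// node can later hold at most one fragment of the file. Names are illustrative.
public class FileFragmenter {

    // Splits the given file contents into 'k' nearly equal fragments.
    public static List<byte[]> fragment(byte[] fileBytes, int k) {
        List<byte[]> fragments = new ArrayList<byte[]>();
        int baseSize = fileBytes.length / k;
        int remainder = fileBytes.length % k;
        int offset = 0;
        for (int i = 0; i < k; i++) {
            // Distribute the remainder over the first fragments.
            int size = baseSize + (i < remainder ? 1 : 0);
            fragments.add(Arrays.copyOfRange(fileBytes, offset, offset + size));
            offset += size;
        }
        return fragments;
    }

    public static void main(String[] args) {
        byte[] file = "confidential report contents".getBytes();
        List<byte[]> fragments = fragment(file, 4);
        for (int i = 0; i < fragments.size(); i++) {
            System.out.println("Fragment " + i + ": " + fragments.get(i).length + " bytes");
        }
    }
}

Because no single node ever receives more than one such fragment, compromising one node yields only a small, non-meaningful portion of the file.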

Introduction:
The cloud computing paradigm has reformed the usage and management of information
technology infrastructure. Cloud computing is characterized by on-demand self-service,
ubiquitous network access, resource pooling, elasticity, and measured services. These
characteristics make cloud computing an attractive candidate for adoption by businesses,
organizations, and individual users. However, the benefits of low cost, negligible
management (from a user's perspective), and greater flexibility come with increased security
concerns.
Security is one of the most crucial aspects hindering the widespread adoption of
cloud computing. Cloud security issues may stem from the core technology's
implementation (virtual machine (VM) escape, session riding, etc.), from cloud service offerings
(structured query language injection, weak authentication schemes, etc.), and from cloud
characteristics (data recovery vulnerability, Internet protocol vulnerability, etc.). For a cloud to
be secure, all of the participating entities must be secure. In any given system with multiple
units, the highest level of the system's security is equal to the security level of the weakest entity.
Therefore, in a cloud, the security of the assets does not solely depend on an individual's security
measures. The neighboring entities may provide an opportunity for an attacker to bypass the
user's defenses.

Existing System:
Outsourcing data to a third-party administrative control, as is done in cloud computing, gives rise
to security concerns. Data compromise may occur due to attacks by other users and nodes
within the cloud. Therefore, strong security measures are required to protect data within the cloud.
Disadvantages of the Existing System:
· Risk of exposing confidential data
· Difficulty synchronizing the deliverables
· Hidden costs
Proposed System:
The data outsourced to a public cloud must be secured. Unauthorized data access by other users
and processes (whether accidental or deliberate) must be prevented. As discussed above, any
weak entity can put the whole cloud at risk. In such a scenario, the security mechanism must
substantially increase an attacker’s effort to retrieve a reasonable amount of data even after a
successful intrusion in the cloud. Moreover, the probable amount of loss (as a result of data
leakage) must also be minimized. A cloud must ensure throughput, reliability, and security.
A key factor determining the throughput of a cloud that stores data is the data retrieval time.
In large-scale systems, the problems of data reliability, data availability, and response time are
dealt with through data replication strategies. However, placing replicas of data over a number of
nodes increases the attack surface for that particular data. For instance, storing m replicas of a file
in a cloud instead of one increases the probability that a node holding the file is chosen as an
attack victim from 1/n to m/n, where n is the total number of nodes (for example, with n = 100
nodes, m = 5 replicas raise this probability from 0.01 to 0.05). From the above discussion, we can
deduce that both security and performance are critical for next-generation large-scale
systems, such as clouds. Therefore, in this paper, we collectively approach the issue of security
and performance as a secure data replication problem. We present Division and Replication of
Data in the Cloud for Optimal Performance and Security (DROPS), which judiciously fragments
user files into pieces and replicates them at strategic locations within the cloud. The division of a
file into fragments is performed based on given user criteria such that the individual fragments do
not contain any meaningful information. Each of the cloud nodes (we use the term node to
represent computing, storage, physical, and virtual machines) contains a distinct fragment to
increase the data security. A successful attack on a single node must not reveal the locations of
other fragments within the cloud. To keep an attacker uncertain about the locations of the file
fragments, the nodes storing them are separated by a certain distance by means of graph
T-coloring, as sketched below.

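A minimal sketch of the T-coloring separation check described above, assuming a pairwise hop-count distance matrix between nodes and a forbidden-distance set T; the class and method names and the distance representation are illustrative assumptions rather than the paper's exact formulation.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of a T-coloring style separation check: a candidate node may store a
// fragment only if its distance to every node already holding a fragment of the
// same file does not fall in the forbidden set T. The distance matrix and the
// set T are assumed inputs; names are illustrative.
public class TColoringPlacement {

    private final int[][] distance;       // pairwise node distances (e.g., hop counts)
    private final Set<Integer> forbidden; // the set T of prohibited separations

    public TColoringPlacement(int[][] distance, Set<Integer> forbidden) {
        this.distance = distance;
        this.forbidden = forbidden;
    }

    // Returns true if 'candidate' keeps an allowed separation from all nodes
    // that already store a fragment of the file.
    public boolean isAllowed(int candidate, List<Integer> usedNodes) {
        for (int used : usedNodes) {
            if (forbidden.contains(distance[candidate][used])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[][] d = {
            {0, 1, 2, 3},
            {1, 0, 1, 2},
            {2, 1, 0, 1},
            {3, 2, 1, 0}
        };
        Set<Integer> t = new HashSet<Integer>(Arrays.asList(1)); // prohibit directly adjacent nodes
        TColoringPlacement placer = new TColoringPlacement(d, t);
        List<Integer> used = new ArrayList<Integer>(Arrays.asList(0));
        System.out.println(placer.isAllowed(1, used)); // false: distance 1 is forbidden
        System.out.println(placer.isAllowed(2, used)); // true:  distance 2 is allowed
    }
}
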
To improve data retrieval time, the nodes are selected based on centrality measures that
ensure an improved access time. To further improve the retrieval time, we judiciously replicate
fragments on the nodes that generate the highest read/write requests. The selection of the nodes
is performed in two phases. In the first phase, the nodes are selected for the initial placement of
the fragments based on the centrality measures. In the second phase, the nodes are selected
for replication, as sketched below. The working of the DROPS methodology is shown as a
high-level workflow in Fig. 1. We implement ten heuristic-based replication strategies as
comparative techniques to the DROPS methodology.
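
The two-phase selection can be sketched as follows in Java; the centrality scores and read/write request counts are assumed to be supplied by the cloud's monitoring layer, and all names are illustrative rather than the actual heuristics evaluated in the paper.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of the two-phase node selection: phase one ranks nodes by a centrality
// measure for the initial fragment placement; phase two ranks the remaining
// nodes by observed read/write requests to choose replica sites.
public class NodeSelector {

    public static class Node {
        final String id;
        final double centrality;      // e.g., closeness or betweenness centrality
        final long readWriteRequests; // observed access count for the file
        Node(String id, double centrality, long readWriteRequests) {
            this.id = id;
            this.centrality = centrality;
            this.readWriteRequests = readWriteRequests;
        }
    }

    // Phase 1: pick the most central nodes for the initial fragment placement.
    public static List<Node> initialPlacement(List<Node> nodes, int fragments) {
        List<Node> sorted = new ArrayList<Node>(nodes);
        Collections.sort(sorted, new Comparator<Node>() {
            public int compare(Node a, Node b) {
                return Double.compare(b.centrality, a.centrality);
            }
        });
        return sorted.subList(0, Math.min(fragments, sorted.size()));
    }

    // Phase 2: among the remaining nodes, pick replica sites that generate the
    // highest read/write requests to shorten retrieval time.
    public static List<Node> replicaPlacement(List<Node> nodes, List<Node> alreadyUsed, int replicas) {
        List<Node> candidates = new ArrayList<Node>(nodes);
        candidates.removeAll(alreadyUsed);
        Collections.sort(candidates, new Comparator<Node>() {
            public int compare(Node a, Node b) {
                return Long.compare(b.readWriteRequests, a.readWriteRequests);
            }
        });
        return candidates.subList(0, Math.min(replicas, candidates.size()));
    }

    public static void main(String[] args) {
        List<Node> cloud = new ArrayList<Node>();
        cloud.add(new Node("n1", 0.9, 120));
        cloud.add(new Node("n2", 0.7, 400));
        cloud.add(new Node("n3", 0.4, 50));
        cloud.add(new Node("n4", 0.8, 310));
        List<Node> initial = initialPlacement(cloud, 2);          // n1, n4 (most central)
        List<Node> replicas = replicaPlacement(cloud, initial, 1); // n2 (most requests)
        System.out.println("Initial: " + initial.get(0).id + ", " + initial.get(1).id);
        System.out.println("Replica: " + replicas.get(0).id);
    }
}

Sorting by centrality in phase one shortens the average path to the initial fragments, while phase two places replicas where demand is observed to be highest.
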
Advantages of the Proposed System:
· Improved security
· Reduced workload and enhanced productivity
· Better flexibility and speed

HARDWARE & SOFTWARE REQUIREMENTS:


HARDWARE REQUIREMENTS:

· System : Pentium IV 2.4 GHz
· Hard Disk : 500 GB
· RAM : 4 GB

Any desktop / laptop system with the above configuration or higher.

SOFTWARE REQUIREMENTS:

Operating System : Windows XP / 7

Coding Language : Java (JDK 1.7)

Web Technology : Servlet, JSP

Web Server : Tomcat 6.0

IDE : Eclipse Galileo

Database : MySQL 5.0

GUI for DB : SQLyog

JDBC Connection : Type 4 Driver
