DBMS PYQ Solutions 2019


PYQ 2019

2.(a) Draw and explain the three-level architecture of the database system.
Ans:- Physical Level
This is the lowest level in the three-level architecture. It is also known as the internal level.
The physical level describes how data is actually stored in the database. At the lowest
level, the data is stored on storage devices in the form of bits; at a slightly higher level of
abstraction, it can be described as being stored in files. The physical level also covers
compression and encryption techniques.
Conceptual Level
The conceptual level sits above the physical level. It is also known as the logical level. It
describes what data is stored in the database and the relationships among the various
data tables, as the database appears to users conceptually. The conceptual level is not
concerned with how the data in the database is actually stored.
External Level
This is the highest level in the three-level architecture and the closest to the user. It is also
known as the view level. The external level shows each user only the relevant database
content, in the form of views, and hides the rest of the data. Different users can therefore
see different views of the database according to their individual requirements, as the
sketch below illustrates.
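For instance, a minimal SQL sketch of the conceptual/external split (all names here are illustrative assumptions, not from the question):

-- Conceptual level: the full logical definition of a table.
CREATE TABLE employee (
    emp_id INT PRIMARY KEY,
    name VARCHAR(50),
    salary DECIMAL(10, 2)
);

-- External level: a view that exposes only what one user group needs,
-- hiding the salary column; the physical storage of employee is
-- handled entirely at the internal level.
CREATE VIEW employee_public AS
SELECT emp_id, name
FROM employee;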

(b). Compare the traditional file-based systems and relational database management
system approaches.
Ans:- Traditional file-based approach
The term 'file-based approach' refers to the situation where data is stored in one or more
separate computer files defined and managed by different application programs. Typically,
for example, the details of customers may be stored in one file, orders in another, etc.
Computer programs access the stored files to perform the various tasks required by the
business. Each program, or sometimes a related set of programs, is called a computer
application. For example, all of the programs associated with processing customers' orders
are referred to as the order processing application. The file-based approach might have
application programs that deal with purchase orders, invoices, sales and marketing,
suppliers, customers, employees, and so on.
Limitations
Data duplication: Each program stores its own separate files. If the same data is to be
accessed by different programs, then each program must store its own copy of the same
data.
Data inconsistency: If the data is kept in different files, there could be problems when an
item of data needs updating, as it will need to be updated in all the relevant files; if this is
not done, the data will be inconsistent, and this could lead to errors.
Difficult to implement data security: Data is stored in different files by different application
programs. This makes it difficult and expensive to implement organisation-wide security
procedures on the data.
Relational database management system
RDBMS stands for Relational Database Management System. It is a data management
system based on the relational data model, in which all information is stored in tables.
Examples of RDBMSs include SQL Server, Oracle, MySQL, MariaDB, and SQLite.
Basic Features of RDBMS:

• Stores data in tables
• Allows numerous users to access the data concurrently, under the control of the DBMS
• Provides virtual tables (views) for presenting derived data
• Uses a primary key to uniquely identify each row
• Always stores data in rows and columns
• Uses indexes to speed up data retrieval
• Shares columns between tables using keys (foreign keys)
RDBMS CONCEPTS
A relation in an RDBMS is a set of tuples that share the same attributes; in other words, a
relation is a set of entities together with the data they contain. The entity instances are
organized as rows, and their attributes as columns, in a data table. The values in each
column are drawn from the same domain and obey the same constraints. Relations are
altered in the database by inserting new data and deleting old data, as the brief sketch
below shows.
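A minimal SQL sketch of these concepts (names are illustrative):

-- A relation (table) is a set of tuples (rows) sharing the same attributes (columns).
CREATE TABLE customer (
    customer_id INT PRIMARY KEY, -- uniquely identifies each tuple
    name VARCHAR(50) NOT NULL,
    city VARCHAR(30)
);

-- The relation is altered by inserting new tuples and deleting old ones.
INSERT INTO customer VALUES (1, 'Asha', 'Pune');
DELETE FROM customer WHERE customer_id = 1;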
3.(a) What is a view? Can we update a view?
Ans:- Views in SQL are virtual tables that, unlike the base tables of the database, do not
physically store data. Views are created by SQL statements that select from one or more
tables, and they present their results as rows and columns. We can CREATE VIEW by
selecting fields from one or more tables of the database, and we can also update and drop
views according to our requirements.
Can we update a view
Yes, we can update a view, provided it satisfies the DBMS's updatability rules (typically
the view must draw from a single base table and must not use aggregates or DISTINCT).
A SQL sketch follows the steps below.
To modify table data through a view
To modify table data through a view

• In Object Explorer, expand the database that contains the view and then expand
Views.
• Right-click the view and select Edit Top 200 Rows.
• You may need to modify the SELECT statement in the SQL pane to return the rows
to be modified.
• In the Results pane, locate the row to be changed or deleted. To delete the row,
right-click the row and select Delete. To change data in one or more columns,
modify the data in the column.
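Equivalently, in plain SQL (a minimal sketch reusing the hypothetical customer table from the earlier example; a view over a single base table is typically updatable):

CREATE VIEW pune_customers AS
SELECT customer_id, name, city
FROM customer
WHERE city = 'Pune';

-- An UPDATE issued against the view modifies the underlying base table.
UPDATE pune_customers
SET name = 'Asha K'
WHERE customer_id = 1;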
(b) When we try to modify any table in a database system, we encounter some side-effects
if the tables are insufficiently normalized. Can you explain those side-effects with
respective examples?
(c) List out various constraints in relational model and explain in short.
Ans:- Constraints are restrictions or sets of rules imposed on the database contents. They
safeguard the quality of the database by validating operations such as data insertion,
updates, and other processes, so that these can be performed without affecting the
integrity of the data, and they protect the database against threats and damage.
Constraints on the relational database are mainly of 4 types (a combined SQL sketch
follows the four descriptions below):
1. Domain constraints
2. Key constraints or Uniqueness constraints
3. Entity integrity constraints
4. Referential integrity constraints

1. Domain Constraints
Every domain must contain atomic values (smallest indivisible units), which means
composite and multi-valued attributes are not allowed.
A datatype check is performed here: when we assign a data type to a column, we limit the
values it can contain. E.g., if we assign the datatype of the attribute age as int, we cannot
give it values of any other datatype.

2. Key Constraints or Uniqueness Constraints
These are called uniqueness constraints since they ensure that every tuple in the relation
is unique.
A relation can have multiple candidate keys (minimal superkeys), out of which we choose
one as the primary key. There is no restriction on which candidate key to choose, but it is
advisable to pick the candidate key with the fewest attributes.
Null values are not allowed in the primary key; hence the Not Null constraint is also part
of the key constraint.
3. Entity Integrity Constraints
Entity integrity constraints state that no primary key attribute may take a NULL value,
since the primary key is used to identify each tuple in a relation uniquely.
4. Referential Integrity Constraints
A referential integrity constraint is specified between two relations (tables) and is used
to maintain consistency among the tuples of the two relations.
The constraint is enforced through a foreign key: when an attribute set in relation R1 has
the same domain(s) as the primary key of relation R2, that attribute set is a foreign key
of R1 and is said to reference (refer to) the primary key of R2.
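A combined SQL sketch of all four constraint types (table and column names are illustrative):

CREATE TABLE department (
    dept_id INT PRIMARY KEY, -- key constraint + entity integrity (unique, NOT NULL)
    dept_name VARCHAR(30) NOT NULL
);

CREATE TABLE student (
    roll_no INT PRIMARY KEY, -- key constraint
    age INT CHECK (age >= 17), -- domain constraint: atomic int, restricted range
    dept_id INT,
    FOREIGN KEY (dept_id) REFERENCES department(dept_id) -- referential integrity
);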

4.(a) Discuss the correspondence between E-R model constructs and relational model
constructs. Show how each E-R model construct can be mapped to the relational model
using the following description of an organization:
An organization uses a number of items of equipment to produce goods. Each item is at
one LOCATION, is of one TYPE, and has a DETAILED DESCRIPTION. Faults on the equipment
are identified by a unique FAULT_ID and are reported at a TIME REPORTED. Any number of
persons may be assigned to a fault and work on the fault until it is fixed. The TIME FIXED is
recorded, as is the TIME SPENT by each person on a fault. Any number of parts may be used
to repair a fault. The QTY USED of each part is recorded against the fault. Each part is
identified by a PART_ID, has a given weight and MAX DIMENSION, and can have any
number of colours.
Ans:-
1) The basic difference between the E-R model and the relational model is that the E-R
model deals specifically with entities and their relationships, whereas the relational
model deals with tables and the relations between the data in those tables.
2) An E-R model describes data with entity sets, relationship sets, and attributes,
while the relational model describes data with tuples, attributes, and attribute
domains.
3) The relationships among data are easier to understand in the E-R model than in the
relational model.
4) The E-R model has mapping cardinality as a constraint, whereas the relational model
has no such constraint.
Each E-R construct maps to the relational model as sketched below.
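A hedged sketch of the mapping in SQL DDL (the table and column names below are one possible rendering of the attributes in the question, not a prescribed answer):

-- Entity sets become tables; the entity identifier becomes the primary key.
CREATE TABLE equipment (
    equipment_id INT PRIMARY KEY,
    location VARCHAR(50),
    type VARCHAR(30),
    detailed_description VARCHAR(200)
);

CREATE TABLE fault (
    fault_id INT PRIMARY KEY, -- the unique FAULT_ID
    time_reported TIMESTAMP,
    time_fixed TIMESTAMP,
    equipment_id INT,
    FOREIGN KEY (equipment_id) REFERENCES equipment(equipment_id)
);

CREATE TABLE person (
    person_id INT PRIMARY KEY,
    name VARCHAR(50)
);

-- An M:N relationship set becomes a table whose primary key combines the
-- keys of the participating entity sets; TIME_SPENT is a relationship attribute.
CREATE TABLE works_on (
    person_id INT,
    fault_id INT,
    time_spent INT,
    PRIMARY KEY (person_id, fault_id),
    FOREIGN KEY (person_id) REFERENCES person(person_id),
    FOREIGN KEY (fault_id) REFERENCES fault(fault_id)
);

CREATE TABLE part (
    part_id INT PRIMARY KEY, -- the PART_ID
    weight DECIMAL(8, 2),
    max_dimension DECIMAL(8, 2)
);

-- A multi-valued attribute (colour) becomes a separate table.
CREATE TABLE part_colour (
    part_id INT,
    colour VARCHAR(20),
    PRIMARY KEY (part_id, colour),
    FOREIGN KEY (part_id) REFERENCES part(part_id)
);

-- The M:N "used to repair" relationship carries the QTY_USED attribute.
CREATE TABLE part_used (
    part_id INT,
    fault_id INT,
    qty_used INT,
    PRIMARY KEY (part_id, fault_id),
    FOREIGN KEY (part_id) REFERENCES part(part_id),
    FOREIGN KEY (fault_id) REFERENCES fault(fault_id)
);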
(b) Write a short note on types of attributes and their representation in E-R model
with neat figures.
Ans:- An attribute is a property or characteristic of an entity. An entity may contain any
number of attributes. One of the attributes is considered as the primary key. In an
Entity-Relation model, attributes are represented in an elliptical shape.
Types
Simple Attributes
Simple attributes in an E-R diagram are independent attributes that cannot be classified
further or subdivided into other components. These attributes are also known as atomic
attributes.
Example Diagram:

Composite Attributes
Composite attributes are the opposite of simple attributes: they can be subdivided into
different components or sub-parts that are themselves simple attributes. In simple
terms, composite attributes are composed of one or more simple attributes.
Example Diagram

Single-Valued Attributes
Single-valued attributes hold exactly one value for each entity instance and cannot store
more than one value; a person's date of birth (DOB), for example, is a single value that
also remains constant for the entity instance.
Example Diagram:
Multi-Valued Attributes
Multi-valued attributes are the opposite of single-valued attributes: as the name suggests,
a multi-valued attribute can take up and store more than one value at a time for an entity
instance, drawn from a set of possible values. These attributes are represented by a
concentric (double) ellipse, and curly braces { } can also be used to denote the
multi-valued attribute inside it.
Example Diagram:

Derived Attributes
Derived attributes in DBMS are attributes whose values can be derived from the values of
other attributes; they always depend on other attributes for their value.
For example, as discussed above, DOB is a single-valued attribute that remains constant
for an entity instance. From DOB we can derive the Age attribute, which changes every
year, since a person's age is easily calculated from his/her date of birth. Hence the Age
attribute is derived from the single-valued DOB attribute.
Example Diagram:

5. Considering the following schema, create the appropriate tables and insert at least 5
records:
AUTHOR (author-id, name, city, country)
PUBLISHER (publisher-id, name, city, country)
CATALOG (book-id, title, author-id, publisher-id, category-id, year, price)
CATEGORY (category-id, description)
ORDER-DETAILS (order-no, book-id, quantity)
Write each of the following queries in SQL and relational algebra:
(a) Obtain the names of authors who have 2 or more books in the catalog.
(b) Find the author of the book which has the maximum sales.
(c) Obtain the names of authors who have the maximum number of publishers.
(d) Obtain the name of the city, author, and publisher where the publisher and author
belong to the same city.
(e) Obtain the titles of books which have the maximum sales.
(f) Obtain the book-id and description for the authors who have exactly 3 books in the
catalog.
(g) Obtain the author and publisher who have published books in more than or equal
to 2 categories.
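No answer is worked out above; a minimal sketch of the DDL and of query (a) follows (hyphens in the schema names become underscores; the remaining queries and the relational-algebra forms are analogous and are left out):

CREATE TABLE author (
    author_id INT PRIMARY KEY,
    name VARCHAR(50),
    city VARCHAR(30),
    country VARCHAR(30)
);

CREATE TABLE publisher (
    publisher_id INT PRIMARY KEY,
    name VARCHAR(50),
    city VARCHAR(30),
    country VARCHAR(30)
);

CREATE TABLE category (
    category_id INT PRIMARY KEY,
    description VARCHAR(100)
);

CREATE TABLE catalog (
    book_id INT PRIMARY KEY,
    title VARCHAR(100),
    author_id INT,
    publisher_id INT,
    category_id INT,
    year INT,
    price DECIMAL(8, 2),
    FOREIGN KEY (author_id) REFERENCES author(author_id),
    FOREIGN KEY (publisher_id) REFERENCES publisher(publisher_id),
    FOREIGN KEY (category_id) REFERENCES category(category_id)
);

CREATE TABLE order_details (
    order_no INT,
    book_id INT,
    quantity INT,
    PRIMARY KEY (order_no, book_id),
    FOREIGN KEY (book_id) REFERENCES catalog(book_id)
);

-- Sample record; at least 5 such rows would be inserted per table.
INSERT INTO author VALUES (1, 'R. Sharma', 'Pune', 'India');

-- (a) Names of authors who have 2 or more books in the catalog.
SELECT a.name
FROM author a
JOIN catalog c ON c.author_id = a.author_id
GROUP BY a.author_id, a.name
HAVING COUNT(*) >= 2;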
6. (a) You are given the following set F of functional dependencies for a relation
R(A, B, C, D, E, F):
F = {AB → CD, ABD → E, CD → F, CDF → B, BF → D}
Find all keys of R based on these functional dependencies. (i) Is this relation in
Boyce-Codd normal form? Is it 3NF? Explain your answer.
(ii) Can the set F be simplified (by removing functional dependencies or by removing
attributes from the left-hand side of functional dependencies) without changing the
closure of F (i.e., F+)?
(b) Compute the closure of the following set F of functional dependencies for the relation
schema R = (A, B, C, D, E):
F = {A → BC, CD → E, B → D, E → A}
List the candidate keys for R.
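No worked answer is given above; a sketch of the attribute-closure computation for part (b), assuming the dependencies as reconstructed, runs as follows. Start with A+ = {A}; A → BC adds B and C; B → D adds D; CD → E adds E, so A+ = {A, B, C, D, E} and A is a candidate key. Similarly, E → A makes E a candidate key; for BC, B → D gives D, then CD → E and E → A give the rest; and for CD, CD → E and E → A give A, then A → BC gives the rest. The candidate keys of R are therefore A, E, BC, and CD.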
7.(a) What is the need of normalization? How many types of normalization exist? Explain
in detail with suitable examples.
(b) What is trigger? When are they used and why? Explain.
Ans:- 7(a) The main reason for normalizing relations is to remove data-modification
anomalies. Failure to eliminate anomalies leads to data redundancy and can cause data
integrity and other problems as the database grows. Normalization consists of a series of
guidelines that help you create a good database structure.
Data modification anomalies can be categorized into three types (an illustrative table
follows the list):
Insertion Anomaly: An insertion anomaly occurs when one cannot insert a new tuple into
a relation due to a lack of data.
Deletion Anomaly: A deletion anomaly is the situation where the deletion of data results
in the unintended loss of some other important data.
Update Anomaly: An update anomaly occurs when an update of a single data value
requires multiple rows of data to be updated.
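For instance, a single unnormalized table such as the hypothetical one below exhibits all three anomalies:

-- One table mixing student and course data (unnormalized).
CREATE TABLE enrolment (
    student_id INT,
    student_name VARCHAR(50),
    course_id INT,
    course_name VARCHAR(50)
);
-- Insertion anomaly: a new course cannot be recorded until at least one
-- student enrols in it.
-- Deletion anomaly: deleting the last student enrolled in a course also
-- deletes all information about that course.
-- Update anomaly: renaming a course requires updating every row in which
-- that course appears.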
Types of Normal Forms:
1st Normal Form (1NF)
A table is in First Normal Form if every attribute value in it is atomic.
Atomicity here means that a single cell cannot hold multiple values; it must hold only a
single value.
The First Normal Form therefore disallows multi-valued attributes, composite attributes,
and their combinations.
Second Normal Form (2NF)
The first condition for a table to be in Second Normal Form is that it is in First Normal
Form. In addition, the table must have no partial dependency: no proper subset of a
candidate key may determine a non-prime attribute.
Third Normal Form (3NF)
The first condition for the table to be in Third Normal Form is that the table should be in
the Second Normal Form.
The second condition is that there should be no transitive dependency for non-prime
attributes, which indicates that non-prime attributes (which are not a part of the
candidate key) should not depend on other non-prime attributes in a table. Therefore, a
transitive dependency is a functional dependency in which A → C (A determines C)
indirectly, because of A → B and B → C (where it is not the case that B → A).
Third Normal Form ensures the reduction of data duplication and is also used to achieve
data integrity.
Boyce-Codd Normal Form (BCNF)
Boyce-Codd Normal Form is also known as 3.5NF. It is a stricter version of 3NF and was
developed by Raymond F. Boyce and Edgar F. Codd to tackle certain types of anomalies
that 3NF does not resolve.
The first condition for a table to be in Boyce-Codd Normal Form is that it is in Third
Normal Form. Secondly, for every functional dependency X → Y that holds on the table,
the left-hand side X must be a superkey of that table.
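As a sketch, decomposing the unnormalized enrolment table from the earlier example removes the dependency of course_name on course_id alone (and with it the anomalies):

CREATE TABLE students (
    student_id INT PRIMARY KEY,
    student_name VARCHAR(50)
);

CREATE TABLE courses (
    course_id INT PRIMARY KEY,
    course_name VARCHAR(50)
);

CREATE TABLE enrolments (
    student_id INT,
    course_id INT,
    PRIMARY KEY (student_id, course_id),
    FOREIGN KEY (student_id) REFERENCES students(student_id),
    FOREIGN KEY (course_id) REFERENCES courses(course_id)
);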
Ans:- 7(b) A trigger is a statement that the system executes automatically when there is
any modification to the database. In a trigger, we first specify when the trigger is to be
executed and then the action to be performed when it executes. Triggers are used to
specify certain integrity constraints and referential constraints that cannot be specified
using the constraint mechanism of SQL.
When we need to carry out some actions automatically in certain desirable scenarios,
triggers will be useful. For instance, we need to be aware of the frequency and timing of
changes to a table that is constantly changing. In such cases, we could create a trigger to
insert the required data into a different table if the primary table underwent any changes.
SQL triggers are defined using SQL statements and are associated with a specific table.
When the defined trigger event (e.g., INSERT, UPDATE, DELETE) occurs on that table, the
associated trigger code is executed automatically. The trigger code can consist of SQL
statements that can manipulate data in the same or other tables, enforce constraints, or
perform other actions. Triggers are executed within the transaction scope, and they can be
defined to execute either before or after the triggering event.
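For example, a minimal audit-trail trigger (MySQL syntax; the account table and all names here are illustrative assumptions):

CREATE TABLE account (
    account_id INT PRIMARY KEY,
    balance DECIMAL(10, 2)
);

CREATE TABLE account_audit (
    account_id INT,
    old_balance DECIMAL(10, 2),
    new_balance DECIMAL(10, 2),
    changed_at TIMESTAMP
);

-- Fires automatically after every UPDATE on account and logs the change.
CREATE TRIGGER log_balance_change
AFTER UPDATE ON account
FOR EACH ROW
INSERT INTO account_audit
VALUES (OLD.account_id, OLD.balance, NEW.balance, NOW());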
The benefits of using SQL triggers include:
Data integrity: Triggers allow you to enforce complex business rules and constraints at the
database level, ensuring that data remains consistent and accurate.
Automation: Triggers can automate repetitive or complex tasks by executing predefined
actions whenever a specified event occurs. This reduces the need for manual intervention
and improves efficiency.
Audit trails: Triggers can be used to track changes made to data, such as logging
modifications in a separate audit table. This helps in auditing and maintaining a history of
data changes.
Data validation: Triggers can perform additional validation checks on data before it is
inserted, updated, or deleted, ensuring that only valid and conforming data is stored in the
database.
8. (a) For the following set of key values, construct a B+ tree of degree 4: 5, 10, 15, 29,
35, 46, 58, 63, 67, 89. The tree is initially empty and values are added in ascending order.
Show the step-by-step construction.
(b) What is multilevel indexing? Explain in detail.
Ans:- Multilevel indexes refer to a hierarchical structure of indexes. Here, each level of the
index provides a more detailed reference to the data. It allows faster data retrieval,
reduces disk access, and improves query performance. Multilevel indexes are essential in
large databases where traditional indexes are not efficient.
A multi-level index can be created for any first-level index (primary, secondary, or
clustering) that has more than one disk block. It's like a search tree, but adding or
removing new index entries is challenging because every level of the index is an ordered
file.
Types of Multilevel Indexes
There are two main types of multilevel indexes: the B-tree index and the B+-tree index.
These are explained briefly below.
B Tree Index
This type of index is used in most database management systems. It is a balanced tree
data structure, in which each node in the tree contains a set of keys and pointers to its
child nodes.
B-trees are highly efficient for range queries and support insertion and deletion of records
without requiring a complete restructuring of the index.
B+ Tree Index
This type of index is similar to the B-tree, but with some modifications that improve the
performance of range queries. In a B+ tree, only the leaf nodes contain actual data
records, while the non-leaf nodes act as keys for the child nodes. B+ trees are often used
for large databases with heavy read/write workloads, as they can handle large amounts
of data with relatively low overhead.
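For instance, in most database systems the ordinary CREATE INDEX statement builds such a tree (typically a B+-tree) behind the scenes; the table name below is taken from question 5 and is otherwise an assumption:

-- Builds a (B+-)tree-structured index on the title column.
CREATE INDEX idx_catalog_title ON catalog (title);

-- Lookups on title can now traverse the tree instead of scanning the table.
SELECT * FROM catalog WHERE title = 'Database Systems';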
9.(a) What is two-phase locking protocol? Explain its working in detail. How can it
guarantee serializability?
(b) Discuss the various approaches for handling the deadlocks in dbms.
Ans:- 9(a) Two-Phase Locking (2PL) is a concurrency control method that divides the
execution of a transaction into two phases.
It ensures conflict-serializable schedules.
A transaction is said to follow the Two-Phase Locking Protocol if all of its lock operations
precede its first unlock operation.
The two phases are:
1. In the Growing Phase, a transaction may obtain locks but may not release any lock.
2. In the Shrinking Phase, a transaction may release locks but may not obtain any lock.
Two-Phase Locking does not ensure freedom from deadlocks.
Types of Two-Phase Locking Protocol
Following are the types of two-phase locking protocol:
1. Strict Two-Phase Locking Protocol
2. Rigorous Two-Phase Locking Protocol
3. Conservative Two-Phase Locking Protocol
1. Strict Two-Phase Locking Protocol
The Strict Two-Phase Locking Protocol avoids cascading rollbacks.
This protocol requires not only two-phase locking but also that all exclusive locks be held
until the transaction commits or aborts.
It is not deadlock-free.
It ensures that if data is being modified by one transaction, no other transaction can read
it until the first transaction commits.
Most database systems implement either strict or rigorous two-phase locking.
2. Rigorous Two-Phase Locking Protocol
The Rigorous Two-Phase Locking Protocol also avoids cascading rollbacks.
This protocol requires that all shared and exclusive locks be held until the transaction
commits.
3. Conservative Two-Phase Locking Protocol
The Conservative Two-Phase Locking Protocol is also called the Static Two-Phase Locking
Protocol.
It requires a transaction to lock all the data items it will access before the transaction
begins execution. Because all required items are locked in advance, this protocol is
deadlock-free.
How 2PL guarantees serializability: 2PL ensures that schedules are conflict-serializable,
meaning there is no cycle in the precedence graph formed by the conflicting read and
write operations of different transactions. Because no transaction acquires a lock after
releasing one, transactions can be ordered by their lock points (the moment each acquires
its last lock), and the schedule is equivalent to the serial schedule in lock-point order.
A small illustrative schedule follows.
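For example (an illustrative transaction, not from the question):
T1: lock-S(A); read(A); lock-X(B); write(B); unlock(A); unlock(B)
Here lock-S(A) and lock-X(B) form the growing phase, the lock point is reached at lock-X(B), and unlock(A) followed by unlock(B) forms the shrinking phase. No lock is requested after the first unlock, so any set of such transactions is serializable in lock-point order.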
Ans:-9(b) Deadlock Handling in Centralized Systems
There are three classical approaches for deadlock handling, namely −
Deadlock prevention.
Deadlock avoidance.
Deadlock detection and removal.
All of the three approaches can be incorporated in both a centralized and a distributed
database system.
Deadlock Prevention
The deadlock prevention approach does not allow any transaction to acquire locks that
will lead to deadlocks. The convention is that when more than one transaction requests
a lock on the same data item, only one of them is granted the lock.
One of the most popular deadlock prevention methods is pre-acquisition of all the locks.
In this method, a transaction acquires all the locks before starting to execute and retains
the locks for the entire duration of transaction. If another transaction needs any of the
already acquired locks, it has to wait until all the locks it needs are available. Using this
approach, the system is prevented from being deadlocked since none of the waiting
transactions are holding any lock.
Deadlock Avoidance
The deadlock avoidance approach handles deadlocks before they occur. It analyzes the
transactions and the locks to determine whether or not waiting leads to a deadlock.
The method can be briefly stated as follows. Transactions start executing and request data
items that they need to lock. The lock manager checks whether the lock is available. If it is
available, the lock manager allocates the data item and the transaction acquires the lock.
However, if the item is locked by some other transaction in incompatible mode, the lock
manager runs an algorithm to test whether keeping the transaction in waiting state will
cause a deadlock or not. Accordingly, the algorithm decides whether the transaction can
wait or one of the transactions should be aborted.
There are two algorithms for this purpose, namely wait-die and wound-wait. Let us
assume that there are two transactions, T1 and T2, where T1 tries to lock a data item
which is already locked by T2. The algorithms are as follows −
Wait-Die − If T1 is older than T2, T1 is allowed to wait. Otherwise, if T1 is younger than T2,
T1 is aborted and later restarted.
Wound-Wait − If T1 is older than T2, T2 is aborted and later restarted. Otherwise, if T1 is
younger than T2, T1 is allowed to wait.
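For example (illustrative timestamps, not from the question): suppose T1 started at time 5 and T2 at time 10. Under wait-die, if T1 requests an item held by T2, T1 (the older transaction) waits; if instead T2 requests an item held by T1, T2 (the younger one) dies, i.e., it is aborted and restarted later with its original timestamp. Under wound-wait the outcomes are reversed: an older requester wounds (aborts) the younger holder, while a younger requester simply waits.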
Deadlock Detection and Removal
The deadlock detection and removal approach runs a deadlock detection algorithm
periodically and removes deadlock in case there is one. It does not check for deadlock
when a transaction places a request for a lock. When a transaction requests a lock, the
lock manager checks whether it is available. If it is available, the transaction is allowed to
lock the data item; otherwise the transaction is allowed to wait.
Since there are no precautions while granting lock requests, some of the transactions may
become deadlocked. To detect deadlocks, the lock manager periodically checks whether
the wait-for graph has cycles. If the system is deadlocked, the lock manager chooses a
victim transaction from each cycle. The victim is aborted and rolled back, and then
restarted later. Some of the methods used for victim selection are −
Choose the youngest transaction.
Choose the transaction with fewest data items.
Choose the transaction that has performed least number of updates.
Choose the transaction having least restart overhead.
Choose the transaction which is common to two or more cycles.
This approach is primarily suited for systems with a low transaction load and where a fast
response to lock requests is needed.
