
Database Management System Lab Manual

ASSIGNMENT NO. 1
TITLE :-
ER Modeling and Normalization.

THEORY :-

ER Modeling –


Normalization: the process of decomposing unsatisfactory ("bad") relations by breaking up their
attributes into smaller relations.

Normal form: a condition, stated using the keys and FDs of a relation, that certifies whether a
relation schema is in a particular normal form.
Normalization is carried out in practice so that the resulting designs are of high quality and
meet the desirable properties.

The practical utility of these normal forms becomes questionable when the constraints on which
they are based are hard to understand or to detect.
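As a quick, hypothetical illustration (the table and column names below are not part of the problem
statement), consider a single report relation in which course_name depends only on course_id and
student_city depends only on roll_no; normalization decomposes it so that every non-key attribute
depends on the whole key of its own relation:

-- a "bad" relation with partial dependencies on the composite key (roll_no, course_id)
create table report (roll_no int, course_id int, student_city varchar(30),
                     course_name varchar(30), marks int, primary key (roll_no, course_id));

-- the normalized decomposition
create table student (roll_no int primary key, student_city varchar(30));
create table course (course_id int primary key, course_name varchar(30));
create table result (roll_no int, course_id int, marks int,
                     primary key (roll_no, course_id),
                     foreign key (roll_no) references student(roll_no),
                     foreign key (course_id) references course(course_id));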


Students are expected to refer to the notations, prepare an ER diagram, map the ER diagram
into tables, and apply the normalization technique to the problem statement.


ASSIGNMENT NO. 2
Part A-DDL
TITLE :-
Design and develop SQL DDL statements that demonstrate the use of SQL objects such as
Table, View, Index, Sequence, and Synonym.
Create table with Primary key and Foreign key.
Book (Bid, Title, Author, Cost) Primary key : Bid
Stud (Sid, Name, Add, DOB, Mob) Primary key : Sid
Trans (Sid, Bid, t_date, Fine, Status) Foreign key : Sid, Bid
1) Display the title of the book issued to the student with Sid 5.
2) Display cost of book issued to Sid 5.
3) Find total cost of books.
4) Find count of books issued to Sid 5.
5) Find fine of student with Sid 5.
6) Create sequence of Bid.
7) Create sequence of Sid.
8) Create view on transaction side.
View 1 on all fields
9) Create view on multiple tables transaction and student.
10) Apply select query on Sid, Name, Fine.
11) Insert, select, delete, update…List which operations are allowed.
12) Illustrate with a query that removal of a column is not allowed if that column is a Foreign
key in another table.
13) Illustrate with a query that removal of a record is not allowed if that record is referenced by a
Foreign key in another table.
Select distinct table_name, index_name from information_schema.statistics where
table_schema='view_name';

THEORY :-

DATA DEFINITION LANGUAGE (DDL) QUERIES

DDL - Data Definition Language (DDL) statements are used to define the database structure or
schema. DDL deals with database schemas and describes how the data should reside in the
database; therefore, statements like CREATE TABLE or ALTER TABLE belong to the DDL.
DDL is about "metadata".


DDL includes commands such as CREATE, ALTER and DROP. DDL is used to CREATE,
ALTER or DROP database objects (tables, views, users).

Data Definition Language (DDL) uses the following statements (a short example of the less
commonly used ones follows the list):

 CREATE – to create objects in the database


 ALTER – alters the structure of the database
 DROP – delete objects from the database
 TRUNCATE – remove all records from a table; all space allocated for the records is also
released
 COMMENT – add comments to the data dictionary
 RENAME – rename an object
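The commands not exercised later in this assignment can be tried as follows (a sketch only; the
table name emp is just an example, and in MySQL a table comment is attached through ALTER
TABLE rather than a separate COMMENT statement):

TRUNCATE TABLE emp;                    -- removes all rows but keeps the table definition
RENAME TABLE emp TO employee_master;   -- renames the object
ALTER TABLE employee_master COMMENT = 'master list of employees';  -- comment visible in SHOW CREATE TABLE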

Create Database: From the MySQL command line, enter the command CREATE DATABASE
<DATABASENAME>;. Replace <DATABASENAME> with the name of your database. It
cannot include spaces.

 For example, to create a database named pune, you would enter CREATE DATABASE
pune;
 Note: Commands do not have to be entered in upper-case.
 Note: All MySQL commands must end with ";". If you forget to include the semicolon, you can
enter just ";" on the next line to process the previous command.

Select your database: Once the database has been created, you will need to select it in order to
begin editing it. Enter the command

 USE pune;

You will see the message Database changed, letting you know that your active database is
now pune.

Display a list of your available databases: Enter the command

 SHOW DATABASES;

to list all of the databases you have stored. Besides the database you just created, you will also
see a mysql database and a test database.

Create table: We define an SQL relation by using the create table command. The following
command creates a relation department in the database.


 create table department (dept_name varchar (20), building varchar (15),
budget numeric (12,2), primary key (dept_name));

The relation created above has three attributes: dept_name, which is a character string of
maximum length 20; building, which is a character string of maximum length 15; and budget,
which is a number with 12 digits in total, 2 of which are after the decimal point. The create
table command also specifies that the dept_name attribute is the primary key of
the department relation.
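Applying the same command to this assignment's schema, the three tables could be created roughly
as follows (a sketch only: column sizes are assumed, and the parent tables must be created before
trans so that its foreign keys can be declared):

create table book (bid int primary key auto_increment, title char(50), author char(50), cost int);
create table stud (sid int primary key auto_increment, name char(50), address char(50), dob date, mob bigint);
create table trans (sid int, bid int, t_date date, fine int, status char(1),
                    foreign key (sid) references stud(sid),
                    foreign key (bid) references book(bid));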

Insert values in table: A newly created relation is empty initially. We can use
the insert command to load data into the relation. For example, if we wish to insert the fact that
there is an instructor named Smith in the Biology department with instructor id 10211 and a
salary of $66,000, we write:

 insert into instructor values (10211, 'Smith', 'Biology', 66000);

Drop table: To remove a relation from an SQL database, we use the drop table command.
The drop table command deletes all information about the dropped relation from the database.
The command

 drop table r;

deletes the relation r.

Alter Table: We use the alter table command to add attributes to an existing relation. All
tuples in the relation are assigned null as the value for the new attribute. The form of the alter
table command is

 alter table r add A D;

where r is the name of an existing relation, A is the name of the attribute to be added, and D is
the type of the added attribute. We can drop attributes from a relation by the command

 alter table r drop A;

where r is the name of an existing relation, and A is the name of an attribute of the relation.
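For example, on the stud table of this assignment (the email column is purely illustrative):

alter table stud add email varchar(50);    -- existing rows get NULL for email
alter table stud drop email;               -- removes the attribute again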

View: SQL allows a "virtual relation" to be defined by a query, and the relation conceptually
contains the result of the query. The virtual relation is not precomputed and stored, but instead is
computed by executing the query whenever the virtual relation is used. Any such relation that is
not part of the logical model, but is made visible to a user as a virtual relation, is called a view. It
is possible to support a large number of views on top of any given set of actual relations.

Create View: We define a view in SQL by using the create view command. To define a view,
we must give the view a name and must state the query that computes the view. The form of the
create view command is:


 create view v as <query expression>;

Where <query expression> is any legal query expression. The view name is represented by v.

Consider a clerk who needs to access all data in the instructor relation, except salary. The
clerk should not be authorized to access the instructor relation. Instead, a view
relation faculty can be made available to the clerk, with the view defined as follows:

 create view faculty as select ID, name, dept name from instructor;

Once we have defined a view, we can use the view name to refer to the virtual relation that the
view generates. For example, given a view physics_fall_2009 containing the Physics courses
offered in the Fall 2009 semester, we can find those offered in the Watson building by writing:

 select course_id from physics_fall_2009 where building = 'Watson';

Alter View: The CREATE VIEW statement creates a new view, or replaces an existing view if
the OR REPLACE clause is given. If the view does not exist, CREATE OR REPLACE VIEW is
the same as CREATE VIEW.

Drop view: Use the DROP VIEW statement to remove a view or an object view from the
database. You can change the definition of a view by dropping and re-creating it.

Syntax:

 Drop view view_name;

Create Index: A database index is a data structure that improves the speed of operations in a
table. Indexes can be created using one or more columns, providing the basis for both rapid
random lookups and efficient ordering of access to records. While creating index, it should be
taken into consideration which all columns will be used to make SQL queries and create one or
more indexes on those columns. Practically, indexes are also a type of tables, which keep
primary key or index field and a pointer to each record into the actual table.

The users cannot see the indexes; they are just used to speed up queries and will be used by the
Database Search Engine to locate records very fast. The INSERT and UPDATE statements take
more time on tables having indexes, whereas the SELECT statements become fast on those
tables. The reason is that while doing insert or update, a database needs to insert or update the
index values as well.
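To check whether a query actually uses an index, MySQL's EXPLAIN statement can be put in front of
the SELECT; for instance, on the stud table of this assignment (the exact output depends on the data):

EXPLAIN SELECT * FROM stud WHERE sid = 5;   -- the "key" column of the output names the index used (here PRIMARY)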

Simple and Unique Index: You can create a unique index on a table. A unique index means
that two rows cannot have the same index value. Here is the syntax to create an Index on a table.


 CREATE UNIQUE INDEX index_name ON table_name ( column1, column2,…);

You can use one or more columns to create an index. For example, we can create an index
on table student using column rno

 CREATE UNIQUE INDEX RNO_INDEX ON student (rno);

You can create a simple index on a table. Just omit the UNIQUE keyword from the query to
create a simple index. A simple index allows duplicate values in a table. If you want to index the
values in a column in descending order, you can add the reserved word DESC after the column
name.

 CREATE INDEX RNO_INDEX ON student (rno DESC);

Alter Index: There are four types of statements for adding indexes to a table −

 ALTER TABLE tbl_name ADD PRIMARY KEY (column_list)− This statement adds
a PRIMARY KEY, which means that the indexed values must be unique and cannot be NULL.
 ALTER TABLE tbl_name ADD UNIQUE index_name (column_list)− This statement creates
an index for which the values must be unique (except for the NULL values, which may appear
multiple times).
 ALTER TABLE tbl_name ADD INDEX index_name (column_list)− This adds an ordinary
index in which any value may appear more than once.
 ALTER TABLE tbl_name ADD FULLTEXT index_name (column_list)− This creates a
special FULLTEXT index that is used for text-searching purposes.

The following code block is an example to add index in an existing table.

 ALTER TABLE student ADD INDEX (id);

Drop Index: You can drop any INDEX by using the DROP clause along with the ALTER
command. Try out the following example to drop the above-created index.

 ALTER TABLE student DROP INDEX id;

Sequence: To create a sequence value in an SQL query, we can use AUTO_INCREMENT, with an
optional starting value given as a table option.

1. create table sonali(id int primary key auto_increment, name varchar(20));

2. create table sonali(id int primary key auto_increment, name varchar(20)) auto_increment = 10;
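A short usage sketch (assuming the sonali table was created with the table option
auto_increment = 10): every insert that omits the id receives the next sequence value, which can
be read back with LAST_INSERT_ID().

insert into sonali(name) values('a');   -- gets id 10
insert into sonali(name) values('b');   -- gets id 11
select last_insert_id();                -- returns 11, the id generated by the last insert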


Summary of commands :

Creating View, replacing or updating view and dropping view

1. Create view view1 as select * from emp;


2. Create or replace view view1 as select * from emp1;
3. Drop view view1;

Creating sequence and inserting values

3. create table sonali(id int primary key auto_increment, name varchar(20));

4. create table sonali(id int primary key auto_increment, name varchar(20)) auto_increment = 10;
5. insert into sonali(name) values('a'),('b'),('c');
6. insert into sonali values(NULL,'sonali');
7. insert into sonali(name) values('sonali');

Creating index

8. create index id1 on sonali(id);

9. create index id1 on sonali(id desc);

PROGRAM :-
mysql> use view1;
Database changed

mysql> desc stud;


+---------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+----------------+
| sid | int(50) | NO | PRI | NULL | auto_increment |
| name | char(50) | YES | | NULL | |
| address | char(50) | YES | | NULL | |
| dob | date | YES | | NULL | |
| mob | int(50) | YES | | NULL | |
+---------+----------+------+-----+---------+----------------+
5 rows in set (0.16 sec)


mysql> select *from stud;


+-----+-----------+-----------+------------+--------+
| sid | name | address | dob | mob |
+-----+-----------+-----------+------------+--------+
| 1 | jacob | Mumbai | 1998-02-01 | 268859 |
| 2 | dhanashri | Pune | 2018-05-05 | 262326 |
| 3 | Kiku | Hyderabad | 2018-05-09 | 264426 |
| 4 | lucy | Pune | 2018-05-10 | 264478 |
| 5 | sakshi | Pune | 2018-10-10 | 264432 |
+-----+-----------+-----------+------------+--------+
5 rows in set (0.00 sec)

mysql> desc book;


+--------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+----------+------+-----+---------+----------------+
| bid | int(50) | NO | PRI | NULL | auto_increment |
| title | char(50) | YES | | NULL | |
| Author | char(50) | YES | | NULL | |
| cost | int(50) | YES | | NULL | |
+--------+----------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

mysql> select *from book;


+-----+---------+----------+------+
| bid | title | Author | cost |
+-----+---------+----------+------+
| 1 | Origin | DanBrown | 600 |
| 2 | C | D.K | 500 |
| 3 | Cpp | S.K | 900 |
| 4 | OOP | M.P | 200 |
| 5 | Android | Rowling | 100 |
| 6 | Dbms | Patankar | 500 |
+-----+---------+----------+------+
6 rows in set (0.00 sec)

mysql> desc trans;


+--------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+----------+------+-----+---------+-------+
| bid | int(50) | YES | MUL | NULL | |
| sid | int(50) | YES | MUL | NULL | |
| t_date | date | YES | | NULL | |


| fine | int(50) | YES | | NULL | |


| status | char(50) | YES | | NULL | |
+--------+----------+------+-----+---------+-------+
5 rows in set (0.00 sec)

mysql> select *from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 1 | 5 | 2018-01-01 | 50 | I |
| 5 | 2 | 2018-02-01 | 20 | R |
| 3 | 3 | 2018-10-01 | 50 | I |
| NULL | NULL | NULL | 800 | I |
+------+------+------------+------+--------+
4 rows in set (0.00 sec)

mysql> select title from book where bid=(select bid from trans where sid=5);
+--------+
| title |
+--------+
| Origin |
+--------+
1 row in set (0.45 sec)

mysql> select cost from book where bid=(select bid from trans where sid=5);
+------+
| cost |
+------+
| 600 |
+------+
1 row in set (0.00 sec)

mysql> select sum(cost) from book;


+-----------+
| sum(cost) |
+-----------+
| 2800 |
+-----------+
1 row in set (0.11 sec)


mysql> select count(bid) from book where bid=(select bid from trans where sid=5);
+------------+
| count(bid) |
+------------+
| 1|
+------------+
1 row in set (0.00 sec)

mysql> select fine from trans where sid=5;


+------+
| fine |
+------+
| 50 |
+------+
1 row in set (0.00 sec)

mysql> create view v1 as select sid,name,address,dob from stud;


Query OK, 0 rows affected (0.28 sec)

mysql> select *from v1;


+-----+-----------+-----------+------------+
| sid | name | address | dob |
+-----+-----------+-----------+------------+
| 1 | jacob | Mumbai | 1998-02-01 |
| 2 | dhanashri | Pune | 2018-05-05 |
| 3 | Kiku | Hyderabad | 2018-05-09 |
| 4 | lucy | Pune | 2018-05-10 |
| 5 | sakshi | Pune | 2018-10-10 |
+-----+-----------+-----------+------------+
5 rows in set (0.10 sec)
mysql> insert into v1(name,address,dob) values("lucy","akola",'1998-02-02');
Query OK, 1 row affected (0.21 sec)
mysql> select *from v1;
+-----+-----------+-----------+------------+
| sid | name | address | dob |
+-----+-----------+-----------+------------+
| 1 | jacob | Mumbai | 1998-02-01 |
| 2 | dhanashri | Pune | 2018-05-05 |
| 3 | Kiku | Hyderabad | 2018-05-09 |
| 4 | lucy | Pune | 2018-05-10 |
| 5 | sakshi | Pune | 2018-10-10 |
| 6 | lucy | akola | 1998-02-02 |
+-----+-----------+-----------+------------+


mysql> delete from v1 where sid=6;


Query OK, 1 row affected (0.25 sec)

mysql> select *from v1;


+-----+---------------+-----------------+----------------+
| sid | name | address | dob |
+-----+---------------+-----------------+----------------+
| 1 | jacob | Mumbai | 1998-02-01 |
| 2 | dhanashri | Pune | 2018-05-05 |
| 3 | Kiku | Hyderabad | 2018-05-09 |
| 4 | lucy | Pune | 2018-05-10 |
| 5 | sakshi | Pune | 2018-10-10 |
+-----+---------------+----------------+-----------------+
5 rows in set (0.00 sec)

mysql> create view v2 as select bid,sid,fine,status from trans;


Query OK, 0 rows affected (0.28 sec)

mysql> select *from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 1 | 5 | 2018-01-01 | 50 | I |
| 5 | 2 | 2018-02-01 | 20 | R |
| 3 | 3 | 2018-10-01 | 50 | I |
+------+------+------------+------+--------+
3 rows in set (0.00 sec)

mysql> insert into v2(fine,status) values(800,"I");


Query OK, 1 row affected (0.32 sec)

mysql> select *from v2;


+------+------+------+--------+
| bid | sid | fine | status |
+------+------+------+--------+
| 1 | 5 | 50 | I |
| 5 | 2 | 20 | R |
| 3 | 3 | 50 | I |
| NULL | NULL | 800 | I |
+------+------+------+--------+
4 rows in set (0.00 sec)


mysql> create view newv1 as select bid,sid,t_date,fine,status from trans;


Query OK, 0 rows affected (0.23 sec)

mysql> select *from newv1;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 1 | 5 | 2018-01-01 | 50 | I |
| 5 | 2 | 2018-02-01 | 20 | R |
| 3 | 3 | 2018-10-01 | 50 | I |
| NULL | NULL | NULL | 800 | I |
+------+------+------------+------+--------+
4 rows in set (0.00 sec)

mysql> insert into newv1 values(4,1,'2018-05-03',800,"I");


Query OK, 1 row affected (0.23 sec)

mysql> select *from newv1;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 1 | 5 | 2018-01-01 | 50 | I |
| 5 | 2 | 2018-02-01 | 20 | R |
| 3 | 3 | 2018-10-01 | 50 | I |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 800 | I |
+------+------+------------+------+--------+
5 rows in set (0.00 sec)

mysql> select *from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 1 | 5 | 2018-01-01 | 50 | I |
| 5 | 2 | 2018-02-01 | 20 | R |
| 3 | 3 | 2018-10-01 | 50 | I |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 800 | I |
+------+------+------------+------+--------+
5 rows in set (0.00 sec)


mysql> delete from newv1 where fine=50;


Query OK, 2 rows affected (0.18 sec)

mysql> select *from newv1;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 5 | 2 | 2018-02-01 | 20 | R |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 800 | I |
+------+------+------------+------+--------+
3 rows in set (0.00 sec)

mysql> select *from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 5 | 2 | 2018-02-01 | 20 | R |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 800 | I |
+------+------+------------+------+--------+
3 rows in set (0.00 sec)

mysql> update newv1 set fine=1000 where bid='4';


Query OK, 1 row affected (0.18 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select *from newv1;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 5 | 2 | 2018-02-01 | 20 | R |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 1000 | I |
+------+------+------------+------+--------+
3 rows in set (0.00 sec)


mysql> select *from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 5 | 2 | 2018-02-01 | 20 | R |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 1000 | I |
+------+------+------------+------+--------+
3 rows in set (0.00 sec)

mysql> create view multi_v1 as select s.sid,s.name,t.fine from stud s,trans t where s.sid=t.sid;
Query OK, 0 rows affected (0.11 sec)

mysql> select * from multi_v1;


+-----+-----------+------+
| sid | name | fine |
+-----+-----------+------+
| 2 | dhanashri | 20 |
| 1 | jacob | 1000 |
+-----+-----------+------+
2 rows in set (0.00 sec)

mysql> select * from stud;


+-----+-----------+-----------+------------+--------+
| sid | name | address | dob | mob |
+-----+-----------+-----------+------------+--------+
| 1 | jacob | Mumbai | 1998-02-01 | 268859 |
| 2 | dhanashri | Pune | 2018-05-05 | 262326 |
| 3 | Kiku | Hyderabad | 2018-05-09 | 264426 |
| 4 | lucy | Pune | 2018-05-10 | 264478 |
| 5 | sakshi | Pune | 2018-10-10 | 264432 |
+-----+-----------+-----------+------------+--------+
5 rows in set (0.00 sec)

mysql> select * from trans;


+------+------+------------+------+--------+
| bid | sid | t_date | fine | status |
+------+------+------------+------+--------+
| 5 | 2 | 2018-02-01 | 20 | R |
| NULL | NULL | NULL | 800 | I |
| 4 | 1 | 2018-05-03 | 1000 | I |
+------+------+------------+------+--------+


mysql> insert into multi_v1 values(3,'koko',60);


ERROR 1394 (HY000): Can not insert into join view 'view1.multi_v1' without fields list

mysql> update multi_v1 set name='helen' where sid=1;

mysql> select * from multi_v1;


+-----+-----------+------+
| sid | name | fine |
+-----+-----------+------+
| 2 | dhanashri | 20 |
| 1 | helen | 1000 |
+-----+-----------+------+
2 rows in set (0.00 sec)

mysql> select *from stud;


+-----+-----------+-----------+------------+--------+
| sid | name | address | dob | mob |
+-----+-----------+-----------+------------+--------+
| 1 | helen | Mumbai | 1998-02-01 | 268859 |
| 2 | dhanashri | Pune | 2018-05-05 | 262326 |
| 3 | Kiku | Hyderabad | 2018-05-09 | 264426 |
| 4 | lucy | Pune | 2018-05-10 | 264478 |
| 5 | sakshi | Pune | 2018-10-10 | 264432 |
+-----+-----------+-----------+------------+--------+
5 rows in set (0.00 sec)
mysql> delete from multi_v1 where sid=1;
ERROR 1395 (HY000): Cannot delete from join view 'view1.multi_v1'

mysql> alter table trans drop sid;


ERROR 1828 (HY000): Cannot drop column 'sid': needed in a foreign key constraint
'trans_ibfk_2'
mysql> select distinct table_name, index_name from information_schema.statistics where
table_schema='view1';
+------------+------------+
| TABLE_NAME | INDEX_NAME |
+------------+------------+
| book | PRIMARY |
| stud | PRIMARY |
| trans | bid |
| trans | sid |
+------------+----------------+
4 rows in set (0.29 sec)


mysql> show index in book;


| Table | Non_unique | Key_name | Seq_in_index | Column_name |
| book | 0 | PRIMARY | 1 | bid |

|Collation | Cardinality | Sub_part | Packed | Null | Index_type |


|A | 6 | NULL | NULL | |BTREE |

Comment | Index_comment | Visible |


| | | YES |
1 row in set (0.17 sec)

mysql> show create table book;


| Table | Create Table |
| book | CREATE TABLE `book` (
`bid` int(50) NOT NULL AUTO_INCREMENT,
`title` char(50) DEFAULT NULL,
`Author` char(50) DEFAULT NULL,
`cost` int(50) DEFAULT NULL,
PRIMARY KEY (`bid`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_0900_ai_ci |
1 row in set (0.10 sec)

mysql> show create table trans;


| Table | Create Table |
| trans | CREATE TABLE `trans` (
`bid` int(50) DEFAULT NULL,
`sid` int(50) DEFAULT NULL,
`t_date` date DEFAULT NULL,
`fine` int(50) DEFAULT NULL,
`status` char(50) DEFAULT NULL,
KEY `bid` (`bid`),
KEY `sid` (`sid`),
CONSTRAINT `trans_ibfk_1` FOREIGN KEY (`bid`) REFERENCES `book` (`bid`),
CONSTRAINT `trans_ibfk_2` FOREIGN KEY (`sid`) REFERENCES `stud` (`sid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
1 row in set (0.10 sec)

mysql> alter table trans drop foreign key trans_ibfk_1;


Query OK, 0 rows affected (0.38 sec)
Records: 0 Duplicates: 0 Warnings: 0


Assignment 2 –Part B DML

Design at least 10 SQL queries for a suitable database application using SQL DML statements:
Insert, Select, Update, Delete with operators, functions and set operators.
THEORY:-
SET Operations in SQL
SQL supports a few set operations that can be performed on table data. These are used to get
meaningful results from the data stored in tables, under different special conditions.
This section covers 4 different types of SET operations, along with examples:

1. UNION
2. UNION ALL
3. INTERSECT

4. MINUS

UNION Operation

UNION is used to combine the results of two or more SELECT statements. However, it
eliminates duplicate rows from its result set. In case of UNION, the number of columns and the
datatypes must be the same in both tables on which the UNION operation is being applied.

Example of UNION
The First table,

ID Name

1 abhi

2 adam


The Second table,

ID Name

2 adam

3 Chester

Union SQL query will be,


SELECT * FROM First
UNION
SELECT * FROM Second;
The resultset table will look like,

ID NAME

1 abhi

2 adam

3 Chester

UNION ALL
This operation is similar to UNION, but it also shows the duplicate rows.


Example of Union All


The First table,

ID NAME

1 abhi

2 adam

The Second table,

ID NAME

2 adam

3 Chester

Union All query will be like,


SELECT * FROM First
UNION ALL
SELECT * FROM Second;
The resultset table will look like,

ID NAME

1 abhi

2 adam

2 adam

3 Chester


INTERSECT
The INTERSECT operation is used to combine two SELECT statements, but it returns only the
records that are common to both SELECT statements. In case of INTERSECT, the number of
columns and the datatypes must be the same.
NOTE: MySQL does not support the INTERSECT operator (a workaround is sketched after the
example below).

Example of Intersect
The First table,

ID NAME

1 abhi

2 adam

The Second table,

ID NAME

2 adam

3 Chester

Intersect query will be,


SELECT * FROM First
INTERSECT
SELECT * FROM Second;
The resultset table will look like


ID NAME

2 adam
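Because MySQL lacks INTERSECT, the same result can be obtained with a sub-query; a sketch
using the First and Second tables above (a join on both columns would work equally well):

SELECT * FROM First
WHERE (ID, Name) IN (SELECT ID, Name FROM Second);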

MINUS
The MINUS operation combines the results of two SELECT statements and returns only those
rows in the final result that belong to the first result set. NOTE: MySQL does not support the
MINUS operator either; a workaround is sketched after the example below.

Example of Minus
The First table,

ID NAME

1 abhi

2 adam

The Second table,

ID NAME

2 adam

3 Chester

Minus query will be,


SELECT * FROM First
MINUS
SELECT * FROM Second;


The resultset table will look like,

ID NAME

1 abhi
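The usual MINUS workaround in MySQL is NOT IN or a left join with an IS NULL test; the
program below uses NOT IN on Id. A sketch on the First and Second tables (note that NOT IN
misbehaves if the sub-query returns NULLs):

SELECT * FROM First
WHERE (ID, Name) NOT IN (SELECT ID, Name FROM Second);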

PROGRAM:-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mysqld
[root@localhost student]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.33a-MariaDB MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;


+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [(none)]> create database employee;


Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> use employee;


Database changed

MariaDB [employee]> create table emp(Id int(10),name char(30),dob date,designation
char(30),joining_date date,salary int(10),dept char(30));


MariaDB [employee]> alter table emp add primary key(Id);

MariaDB [employee]> select * from emp;


Empty set (0.01 sec)

MariaDB [employee]> insert into emp values(1,'ABC','1990-09-23','HOD','2012-05-15',40000,'COMP');
Query OK, 1 row affected (0.03 sec)

MariaDB [employee]> insert into emp values(2,'JHON','1991-02-10','Lecturer','2015-07-15',35000,'COMP');
Query OK, 1 row affected (0.02 sec)

MariaDB [employee]> insert into emp values(3,'ANNE','1991-04-17','Lecturer','2014-06-23',37000,'E&TC');
Query OK, 1 row affected (0.01 sec)

MariaDB [employee]> insert into emp values(4,'BOB','1990-08-27','HOD','2011-05-24',42000,'E&TC');
Query OK, 1 row affected (0.01 sec)

MariaDB [employee]> insert into emp values(5,'JAKE','1990-09-25','LabAssistant','2013-11-30',30000,'COMP');
Query OK, 1 row affected (0.02 sec)

MariaDB [employee]> select * from emp;


+----+------+------------+--------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+--------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
| 5 | JAKE | 1990-09-25 | LabAssistant | 2013-11-30 | 30000 | COMP |
+----+------+------------+--------------+--------------+--------+------+

MariaDB [employee]> select * from emp where dept='COMP';


+----+------+------------+--------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+--------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 5 | JAKE | 1990-09-25 | LabAssistant | 2013-11-30 | 30000 | COMP |
+----+------+------------+--------------+--------------+--------+------+

MariaDB [employee]> select * from emp where salary > 20000;


+----+------+------------+--------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+--------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
| 5 | JAKE | 1990-09-25 | LabAssistant | 2013-11-30 | 30000 | COMP |
+----+------+------------+--------------+--------------+--------+------+

MariaDB [employee]> delete from emp where designation = 'LabAssistant';


Query OK, 1 row affected (0.05 sec)

MariaDB [employee]> select * from emp;


+----+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+-------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
+----+------+------------+-------------+--------------+--------+------+

MariaDB [employee]> select name from emp where salary=(select min(salary) from emp);
+------+
| name |
+------+
| JHON |
+------+

MariaDB [employee]> select name from emp where salary=(select max(salary) from emp);
+------+
| name |
+------+
| BOB |
+------+


MariaDB [employee]> select avg(salary) from emp;


+-------------+
| avg(salary) |
+-------------+
| 38500.0000 |
+-------------+

MariaDB [employee]> select avg(salary) from emp where dept='COMP';


+-------------+
| avg(salary) |
+-------------+
| 37500.0000 |
+-------------+

MariaDB [employee]> select count(id) from emp where dept='COMP';


+-----------+
| count(id) |
+-----------+
| 2|
+-----------+

MariaDB [employee]> select name from emp where name like('a%c');


+------+
| name |
+------+
| ABC |
+------+

MariaDB [employee]> select name from emp where salary BETWEEN 10000 AND 40000;
+------+
| name |
+------+
| ABC |
| JHON |
| ANNE |
+------+


MariaDB [employee]> select name from emp where salary IN(20000,30000,40000);


+------+
| name |
+------+
| ABC |
+------+

MariaDB [employee]> select name from emp where dept IN('COMP','E&TC');


+------+
| name |
+------+
| ABC |
| JHON |
| ANNE |
| BOB |
+------+

MariaDB [employee]> select * from emp;


+----+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+-------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
+----+------+------------+-------------+--------------+--------+------+

MariaDB [employee]> create table emp1(Id int(10),name char(30),dob date,designation
char(30),joining_date date,salary int(10),dept char(30));

MariaDB [employee]> insert into emp1 values(5,'JAY','1995-02-13','Employee','2015-12-10',15000,'MECH');
Query OK, 1 row affected (0.04 sec)

MariaDB [employee]> select * from emp1;


+------+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+------+------+------------+-------------+--------------+--------+------+
| 5 | JAY | 1995-02-13 | Employee | 2015-12-10 | 15000 | MECH |
+------+------+------------+-------------+--------------+--------+------+


MariaDB [employee]> select * from emp UNION select * from emp1;


+------+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+------+------+------------+-------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
| 5 | JAY | 1995-02-13 | Employee | 2015-12-10 | 15000 | MECH |
+------+------+------------+-------------+--------------+--------+------+

MariaDB [employee]> select * from emp where Id IN(select Id from emp1);


Empty set (0.00 sec)

MariaDB [employee]> insert into emp1 values(4,'BOB','1990-08-27','HOD','2011-05-24',42000,'E&TC');
Query OK, 1 row affected (0.02 sec)

MariaDB [employee]> select * from emp1;


| Id | name | dob | designation | joining_date | salary | dept |
+------+------+------------+-------------+--------------+--------+------+
| 5 | JAY | 1995-02-13 | Employee | 2015-12-10 | 15000 | MECH |
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
+------+------+------------+-------------+--------------+--------+------+

MariaDB [employee]> select * from emp where Id IN(select Id from emp1);


| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+-------------+--------------+--------+------+
| 4 | BOB | 1990-08-27 | HOD | 2011-05-24 | 42000 | E&TC |
+----+------+------------+-------------+--------------+--------+------+

MariaDB [employee]> select * from emp where Id NOT IN(select Id from emp1);
+----+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+-------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
+----+------+------------+-------------+--------------+--------+------+
3 rows in set (0.00 sec)


ASSIGNMENT NO. 3
TITLE :-
Design at least 10 SQL queries for a suitable database application using SQL DML statements:
all types of Joins, Sub-Query and View.
1. Create table physician (reg_no primary, name, tel_no, city).
2. Create table patient (p_id primary, p_name, street, city).
3. Create table visit (p_id foreign, reg_no foreign, date_of_visit, fee).
4. Find name and city of patient who visited a physician on 13 July 2017.
5. Get the name of each physician and the total number of patients who visited him.
6. Get the details of date wise fees collected at clinic group by date_of_visit.
7. Take join of physician and visit (inner, left, right, full outer join) [reg_no, p_name, p_id,
date_of_visit].
8. Take join of patient and visit (inner, left, right, full outer join) [p_name, city, reg_no,
date_of_visit].
9. Create view for date wise fees collected at clinic. Apply operations on view.
10. Create view on patient (p_name, reg_no, date_of_visit).Apply operations on view.

THEORY :-
JOINS :-
SQL Join
A JOIN clause is used to combine rows from two or more tables, based on a related column
between them.
Let's look at a selection from the "Orders" table:
OrderID CustomerID OrderDate

10308 2 1996-09-18

10309 37 1996-09-19

10310 77 1996-09-20
Then, look at a selection from the "Customers" table:
CustomerID CustomerName ContactName

1 Alfreds Futterkiste Maria Anders

2 Ana Trujillo Emparedados y helados Ana Trujillo

3 Antonio Moreno Taquería Antonio Moreno


Notice that the "CustomerID" column in the "Orders" table refers to the "CustomerID" in the
"Customers" table. The relationship between the two tables above is the "CustomerID" column.
Then, we can create the following SQL statement (that contains an INNER JOIN), that selects
records that have matching values in both tables:
Example
SELECT Orders.OrderID, Customers.CustomerName, Orders.OrderDate
FROM Orders
INNER JOIN Customers ON Orders.CustomerID=Customers.CustomerID;
and it will produce something like this:
OrderID CustomerName

10308 Ana Trujillo Emparedados y helados

10365 Antonio Moreno Taquería

10383 Around the Horn

10355 Around the Horn

10278 Berglunds snabbköp

Different Types of SQL JOINs

Here are the different types of JOINs in SQL; a sketch applying them to this assignment's tables
follows the list:

 (INNER) JOIN: Returns records that have matching values in both tables
 LEFT (OUTER) JOIN: Return all records from the left table, and the matched records
from the right table
 RIGHT (OUTER) JOIN: Return all records from the right table, and the matched records
from the left table
 FULL (OUTER) JOIN: Return all records when there is a match in either left or right
table
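Applied to this assignment's physician and visit tables, the four variants look roughly as follows
(a sketch; MySQL has no FULL OUTER JOIN keyword, so the full join is emulated as the UNION of
the left and right outer joins, exactly as in the program below):

-- inner join: only physicians who have at least one visit
select p.regno, p.name, v.pid, v.dateofvisit from physician p inner join visit v on v.regno = p.regno;

-- left outer join: every physician, with NULLs where there is no matching visit
select p.regno, p.name, v.pid, v.dateofvisit from physician p left outer join visit v on v.regno = p.regno;

-- right outer join: every visit row, with NULLs where there is no matching physician
select p.regno, p.name, v.pid, v.dateofvisit from physician p right outer join visit v on v.regno = p.regno;

-- full outer join emulated with UNION
select p.regno, p.name, v.pid, v.dateofvisit from physician p left outer join visit v on v.regno = p.regno
union
select p.regno, p.name, v.pid, v.dateofvisit from physician p right outer join visit v on v.regno = p.regno;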


SUB-QUERY :-

A Subquery or Inner query or a Nested query is a query within another SQL query and
embedded within the WHERE clause.

A subquery is used to return data that will be used in the main query as a condition to further
restrict the data to be retrieved.

Subqueries can be used with the SELECT, INSERT, UPDATE, and DELETE statements along
with the operators like =, <, >, >=, <=, IN, BETWEEN, etc.

There are a few rules that subqueries must follow −

 Subqueries must be enclosed within parentheses.

 A subquery can have only one column in the SELECT clause, unless multiple columns
are in the main query for the subquery to compare its selected columns.

 An ORDER BY command cannot be used in a subquery, although the main query can
use an ORDER BY. The GROUP BY command can be used to perform the same
function as the ORDER BY in a subquery.

 Subqueries that return more than one row can only be used with multiple value operators
such as the IN operator.

 The SELECT list cannot include any references to values that evaluate to a BLOB,
ARRAY, CLOB, or NCLOB.

 A subquery cannot be immediately enclosed in a set function.

 The BETWEEN operator cannot be used with a subquery. However, the BETWEEN
operator can be used within the subquery.
Subqueries with the SELECT Statement
Subqueries are most frequently used with the SELECT statement. The basic syntax is as follows

SELECT column_name [, column_name ]
FROM table1 [, table2 ]
WHERE column_name OPERATOR
(SELECT column_name [, column_name ]
FROM table1 [, table2 ]
[WHERE])

Example
Consider the CUSTOMERS table having the following records −

+----+----------+-----+-----------+----------+

| ID | NAME | AGE | ADDRESS | SALARY |

+----+----------+-----+-----------+----------+

| 1 | Ramesh | 35 | Ahmedabad | 2000.00 |

| 2 | Khilan | 25 | Delhi | 1500.00 |

| 3 | kaushik | 23 | Kota | 2000.00 |

| 4 | Chaitali | 25 | Mumbai | 6500.00 |

| 5 | Hardik | 27 | Bhopal | 8500.00 |

| 6 | Komal | 22 | MP | 4500.00 |

| 7 | Muffy | 24 | Indore | 10000.00 |

+----+----------+-----+-----------+----------+

Now, let us check the following subquery with a SELECT statement.

SQL> SELECT *

FROM CUSTOMERS

WHERE ID IN (SELECT ID


FROM CUSTOMERS

WHERE SALARY > 4500) ;

This would produce the following result.


+----+----------+-----+---------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+---------+----------+
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+---------+----------+
Subqueries with the INSERT Statement
Subqueries also can be used with INSERT statements. The INSERT statement uses the data
returned from the subquery to insert into another table. The selected data in the subquery can be
modified with any of the character, date or number functions.
The basic syntax is as follows.
INSERT INTO table_name [ (column1 [, column2 ]) ]
SELECT [ *|column1 [, column2 ] ]
FROM table1 [, table2 ]
[ WHERE VALUE OPERATOR ]
Example
Consider a table CUSTOMERS_BKP with similar structure as CUSTOMERS table. Now to
copy the complete CUSTOMERS table into the CUSTOMERS_BKP table, you can use the
following syntax.
SQL> INSERT INTO CUSTOMERS_BKP
SELECT * FROM CUSTOMERS
WHERE ID IN (SELECT ID
FROM CUSTOMERS) ;
Subqueries with the UPDATE Statement
The subquery can be used in conjunction with the UPDATE statement. Either single or multiple
columns in a table can be updated when using a subquery with the UPDATE statement.
The basic syntax is as follows.
UPDATE table
SET column_name = new_value
[ WHERE OPERATOR [ VALUE ]
(SELECT COLUMN_NAME
FROM TABLE_NAME
[ WHERE ]) ]
Example
Assuming, we have CUSTOMERS_BKP table available which is backup of CUSTOMERS
table. The following example updates SALARY by 0.25 times in the CUSTOMERS table for all
the customers whose AGE is greater than or equal to 27.


SQL> UPDATE CUSTOMERS

SET SALARY = SALARY * 0.25

WHERE AGE IN (SELECT AGE FROM CUSTOMERS_BKP

WHERE AGE >= 27 );

This would impact two rows and finally CUSTOMERS table would have the following records.
+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 35 | Ahmedabad | 500.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 2125.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+-----------+----------+
Subqueries with the DELETE Statement
The subquery can be used in conjunction with the DELETE statement like with any other
statements mentioned above.
The basic syntax is as follows.
DELETE FROM TABLE_NAME
[ WHERE OPERATOR [ VALUE ]
(SELECT COLUMN_NAME
FROM TABLE_NAME
[ WHERE ]) ]

Example
Assuming, we have a CUSTOMERS_BKP table available which is a backup of the
CUSTOMERS table. The following example deletes the records from the CUSTOMERS table
for all the customers whose AGE is greater than or equal to 27.

SQL> DELETE FROM CUSTOMERS

WHERE AGE IN (SELECT AGE FROM CUSTOMERS_BKP

WHERE AGE >= 27 );

This would impact two rows and finally the CUSTOMERS table would have the following
records.

+----+----------+-----+---------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+---------+----------+
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+---------+----------+

VIEWS :-
SQL CREATE VIEW Statement
In SQL, a view is a virtual table based on the result-set of an SQL statement.
A view contains rows and columns, just like a real table. The fields in a view are fields from one
or more real tables in the database.
You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if
the data were coming from one single table.
CREATE VIEW
Syntax-
CREATE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;
Note: A view always shows up-to-date data! The database engine recreates the data, using the
view's SQL statement, every time a user queries a view.
Example-

The following SQL creates a view that shows all customers from Brazil:

CREATE VIEW [Brazil Customers] AS


SELECT CustomerName, ContactName
FROM Customers
WHERE Country = "Brazil";

We can query the view above as follows:

Example-
SELECT * FROM [Brazil Customers];

The following SQL creates a view that selects every product in the "Products" table with a price
higher than the average price:


Example-
CREATE VIEW [Products Above Average Price] AS
SELECT ProductName, Price
FROM Products
WHERE Price > (SELECT AVG(Price) FROM Products);

We can query the view above as follows:

Example-
SELECT * FROM [Products Above Average Price];

SQL Updating a View

A view can be updated with the CREATE OR REPLACE VIEW command.

SQL CREATE OR REPLACE VIEW


Syntax-
CREATE OR REPLACE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;

The following SQL adds the "City" column to the "Brazil Customers" view:

Example-
CREATE OR REPLACE VIEW [Brazil Customers] AS
SELECT CustomerName, ContactName, City
FROM Customers
WHERE Country = "Brazil";

SQL Dropping a View

A view is deleted with the DROP VIEW command.


SQL DROP VIEW


Syntax-
DROP VIEW view_name;

The following SQL drops the "Brazil Customers" view:

Example-
DROP VIEW [Brazil Customers];

PROGRAM :-
create table physician(regno int(60) primary key,name char(60),telno int(60),city char(60));
Query OK, 0 rows affected (0.71 sec)

mysql> create table patient(pid int(60) primary key,pname char(60),street char(60),city char(60));
Query OK, 0 rows affected (0.92 sec)

mysql> create table visit(pid int(60),regno int(60),dateofvisit date,fee int(60),foreign key(pid)
references patient(pid),foreign key(regno) references physician(regno));
Query OK, 0 rows affected (2.57 sec)

insert into physician values(1,'moni',234562,'pune');


insert into physician values(2,'tony',237862,'mumbai');
Query OK, 1 row affected (0.14 sec)

mysql> insert into physician values(3,'gracy',277862,'kolhhapur');


Query OK, 1 row affected (0.23 sec)

insert into patient values(1,'hannah','kp_road','hyderabad');


Query OK, 1 row affected (0.33 sec)

mysql> insert into patient values(2,'courtney','laxmi_road','pune');


Query OK, 1 row affected (0.36 sec)

mysql> insert into patient values(3,'jenni','mg_road','nagpur');


Query OK, 1 row affected (0.29 sec)
insert into visit values(1,3,'2017-07-13',20000);
Query OK, 1 row affected (1.08 sec)

mysql> insert into visit values(2,3,'2018-04-03',30000);


Query OK, 1 row affected (2.24 sec)


mysql> insert into visit values(3,1,'2018-02-02',10000);


Query OK, 1 row affected (2.00 sec)

select *from physician;


+-------+-------+--------+-----------+
| regno | name | telno | city |
+-------+-------+--------+-----------+
| 1 | moni | 234562 | pune |
| 2 | tony | 237862 | mumbai |
| 3 | gracy | 277862 | kolhhapur |
+-------+-------+--------+-----------+
3 rows in set (0.00 sec)

mysql> select *from visit;


+------+-------+-------------+-------+
| pid | regno | dateofvisit | fee |
+------+-------+-------------+-------+
| 1 | 3 | 2017-07-13 | 20000 |
| 2 | 3 | 2018-04-03 | 30000 |
| 3 | 1 | 2018-02-02 | 10000 |
+------+-------+-------------+-------+
3 rows in set (0.00 sec)
mysql> select *from patient;
+-----+----------+------------+-----------+
| pid | pname | street | city |
+-----+----------+------------+-----------+
| 1 | hannah | kp_road | hyderabad |
| 2 | courtney | laxmi_road | pune |
| 3 | jenni | mg_road | nagpur |
+-----+----------+------------+-----------+
3 rows in set (0.00 sec)
select pname, city from patient where pid=(select pid from visit where dateofvisit='2017-07-13');
+--------+-----------+
| pname | city |
+--------+-----------+
| hannah | hyderabad |
+--------+-----------+
1 row in set (0.00 sec)

mysql> select p1.regno,p1.name,v.pid,v.dateofvisit from physician p1 inner join visit v on v.regno=p1.regno;
+-------+-------+------+-------------+
| regno | name | pid | dateofvisit |
+-------+-------+------+-------------+


| 1 | moni | 3 | 2018-02-02 |
| 3 | gracy | 1 | 2017-07-13 |
| 3 | gracy | 2 | 2018-04-03 |
+-------+-------+------+-------------+
3 rows in set (0.00 sec)

mysql> select p1.regno,p1.name,v.pid,v.dateofvisit from physician p1 left outer join visit v on v.regno=p1.regno;
+-------+-------+------+-------------+
| regno | name | pid | dateofvisit |
+-------+-------+------+-------------+
| 1 | moni | 3 | 2018-02-02 |
| 2 | tony | NULL | NULL |
| 3 | gracy | 1 | 2017-07-13 |
| 3 | gracy | 2 | 2018-04-03 |
+-------+-------+------+-------------+
4 rows in set (0.00 sec)

mysql> select p1.regno,p1.name,v.pid,v.dateofvisit from physician p1 right outer join visit v on v.regno=p1.regno;
+-------+-------+------+-------------+
| regno | name | pid | dateofvisit |
+-------+-------+------+-------------+
| 3 | gracy | 1 | 2017-07-13 |
| 3 | gracy | 2 | 2018-04-03 |
| 1 | moni | 3 | 2018-02-02 |
+-------+-------+------+-------------+
3 rows in set (0.00 sec)

mysql> select p1.regno,p1.name,v.pid,v.dateofvisit from physician p1 left outer join visit v on
v.regno=p1.regno union select p1.regno,p1.name,v.pid,v.dateofvisit from physician p1 right
outer join visit v on v.regno=p1.regno;
+-------+-------+------+-------------+
| regno | name | pid | dateofvisit |
+-------+-------+------+-------------+
| 3 | gracy | 1 | 2017-07-13 |
| 3 | gracy | 2 | 2018-04-03 |
| 1 | moni | 3 | 2018-02-02 |
| 2 | tony | NULL | NULL |
+-------+-------+------+-------------+
4 rows in set (0.00 sec)


mysql> select p2.pname,p2.city,v.regno,v.dateofvisit from patient p2 inner join visit v on p2.pid=v.pid;
+----------+-----------+-------+-------------+
| pname | city | regno | dateofvisit |
+----------+-----------+-------+-------------+
| hannah | hyderabad | 3 | 2017-07-13 |
| courtney | pune | 3 | 2018-04-03 |
| jenni | nagpur | 1 | 2018-02-02 |
+----------+-----------+-------+-------------+
3 rows in set (0.00 sec)

mysql> select p2.pname,p2.city,v.regno,v.dateofvisit from patient p2 left outer join visit v on p2.pid=v.pid;
+----------+-----------+-------+-------------+
| pname | city | regno | dateofvisit |
+----------+-----------+-------+-------------+
| hannah | hyderabad | 3 | 2017-07-13 |
| courtney | pune | 3 | 2018-04-03 |
| jenni | nagpur | 1 | 2018-02-02 |
+----------+-----------+-------+-------------+
3 rows in set (0.00 sec)

mysql> select p2.pname,p2.city,v.regno,v.dateofvisit from patient p2 right outer join visit v on p2.pid=v.pid;
+----------+-----------+-------+-------------+
| pname | city | regno | dateofvisit |
+----------+-----------+-------+-------------+
| hannah | hyderabad | 3 | 2017-07-13 |
| courtney | pune | 3 | 2018-04-03 |
| jenni | nagpur | 1 | 2018-02-02 |
+----------+-----------+-------+-------------+
3 rows in set (0.00 sec)

mysql> select p2.pname,p2.city,v.regno,v.dateofvisit from patient p2 left outer join visit v on
p2.pid=v.pid union select p2.pname,p2.city,v.regno,v.dateofvisit from patient p2 right outer join
visit v on p2.pid=v.pid;
+----------+-----------+-------+-------------+
| pname | city | regno | dateofvisit |
+----------+-----------+-------+-------------+
| hannah | hyderabad | 3 | 2017-07-13 |
| courtney | pune | 3 | 2018-04-03 |
| jenni | nagpur | 1 | 2018-02-02 |
+----------+-----------+-------+-------------+


3 rows in set (0.00 sec)

mysql> select p.name,count(v.regno) from physician p,visit v where p.regno=1 and p.regno=v.regno;
+------+----------------+
| name | count(v.regno) |
+------+----------------+
| moni | 1|
+------+----------------+
1 row in set (0.00 sec)

mysql> select p.name,count(v.regno) from physician p,visit v where p.regno=2 and p.regno=v.regno;
+------+----------------+
| name | count(v.regno) |
+------+----------------+
| NULL | 0|
+------+----------------+
1 row in set (0.00 sec)

mysql> select p.name,count(v.regno) from physician p,visit v where p.regno=3 and p.regno=v.regno;
+-------+----------------+
| name | count(v.regno) |
+-------+----------------+
| gracy | 2|
+-------+----------------+
1 row in set (0.00 sec)

mysql> select dateofvisit ,sum(fee) from visit group by dateofvisit;


+-------------+----------+
| dateofvisit | sum(fee) |
+-------------+----------+
| 2017-07-13 | 20000 |
| 2018-04-03 | 30000 |
| 2018-02-02 | 10000 |
+-------------+----------+
3 rows in set (0.10 sec)

mysql> create view v1 as select dateofvisit,fee from visit;


Query OK, 0 rows affected (0.37 sec)


mysql> select *from v1;


+-------------+-------+
| dateofvisit | fee |
+-------------+-------+
| 2017-07-13 | 20000 |
| 2018-04-03 | 30000 |
| 2018-02-02 | 10000 |
+-------------+-------+
3 rows in set (0.02 sec)

mysql> create view v3 as select p.pname,v.regno,v.dateofvisit from patient p, visit v where p.pid=v.pid;
Query OK, 0 rows affected (0.25 sec)

mysql> select *from v3;


+----------+-------+-------------+
| pname | regno | dateofvisit |
+----------+-------+-------------+
| hannah | 3 | 2017-07-13 |
| courtney | 3 | 2018-04-03 |
| jenni | 1 | 2018-02-02 |
+----------+-------+-------------+
3 rows in set (0.00 sec)

mysql> update v3 set pname="simi" where regno=1;


Query OK, 1 row affected (0.41 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select *from v3;


+----------+-------+-------------+
| pname | regno | dateofvisit |
+----------+-------+-------------+
| hannah | 3 | 2017-07-13 |
| courtney | 3 | 2018-04-03 |
| simi | 1 | 2018-02-02 |
+----------+-------+-------------+
3 rows in set (0.00 sec)

ASSIGNMENT NO. 4
TITLE :-


Unnamed PL/SQL code block : Use of Control structure and Exception handling is mandatory.
Write a PL/SQL block of code for following requirements :-
Schema :
1. Borrower (Rollin, Name, DateofIssue, NameofBook, Status)
2. Fine (Roll_no, Date, Amt)
a. Accept roll_no and name of book from user.
b. Check the number of days (from the date of issue); if the days are between 15 and 30, then the
fine amount is Rs. 5 per day.
c. If the number of days > 30, the per-day fine will be Rs. 50.
d. After submitting the book, status will change from I to R.
e. If condition for the fine is true, then details will be stored into Fine table.
Frame the problem statement for writing PL/SQL block inline with above statements.
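A minimal sketch of such a block is given below. It assumes the Borrower and Fine tables exist with
the columns listed above, that the roll number and book name are supplied through SQL*Plus
substitution variables (&roll_no, &book_name), and that the date column of Fine is written here as
Fine_Date (DATE itself is a reserved word); adjust names and rates to the framed problem statement.

DECLARE
   v_roll   Borrower.Rollin%TYPE       := &roll_no;       -- accepted from user
   v_book   Borrower.NameofBook%TYPE   := '&book_name';   -- accepted from user
   v_issue  Borrower.DateofIssue%TYPE;
   v_days   NUMBER;
   v_fine   NUMBER := 0;
BEGIN
   SELECT DateofIssue INTO v_issue
     FROM Borrower
    WHERE Rollin = v_roll AND NameofBook = v_book;

   v_days := TRUNC(SYSDATE) - TRUNC(v_issue);

   IF v_days > 30 THEN
      v_fine := v_days * 50;                -- Rs. 50 per day beyond 30 days
   ELSIF v_days BETWEEN 15 AND 30 THEN
      v_fine := v_days * 5;                 -- Rs. 5 per day for 15 to 30 days
   END IF;

   UPDATE Borrower SET Status = 'R'         -- book returned: status I -> R
    WHERE Rollin = v_roll AND NameofBook = v_book;

   IF v_fine > 0 THEN
      INSERT INTO Fine (Roll_no, Fine_Date, Amt) VALUES (v_roll, SYSDATE, v_fine);
   END IF;

   COMMIT;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('No such book issued to this roll number.');
END;
/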
THEORY :-
CONTROL STRUCTURE :-
Overview of PL/SQL Control Structures
Procedural computer programs use three basic control structures: sequence, selection, and iteration.

The selection structure tests a condition, then executes one sequence of statements instead of
another, depending on whether the condition is true or false. A condition is any variable or
expression that returns a BOOLEAN value (TRUE or FALSE). The iteration structure executes a
sequence of statements repeatedly as long as a condition holds true. The sequence structure
simply executes a sequence of statements in the order in which they occur.
Testing Conditions: IF and CASE Statements
The IF statement executes a sequence of statements depending on the value of a condition. There
are three forms of IF statements: IF-THEN, IF-THEN-ELSE, and IF-THEN-ELSIF. For a
description of the syntax of the IF statement, see "IF Statement".
The CASE statement is a compact way to evaluate a single condition and choose between many
alternative actions. It makes sense to use CASE when there are three or more alternatives to
choose from. For a description of the syntax of the CASE statement, see "CASE Statement".
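As a quick illustration (a minimal sketch, not taken from the Oracle examples used elsewhere in this section), a CASE statement that maps a letter grade held in a local variable to a message could be written as follows:

DECLARE
  grade CHAR(1) := 'B';
BEGIN
  CASE grade
    WHEN 'A' THEN DBMS_OUTPUT.PUT_LINE('Excellent');
    WHEN 'B' THEN DBMS_OUTPUT.PUT_LINE('Very Good');
    WHEN 'C' THEN DBMS_OUTPUT.PUT_LINE('Good');
    ELSE DBMS_OUTPUT.PUT_LINE('No such grade');
  END CASE;
END;
/

The searched form (CASE WHEN condition THEN ... END CASE;) works the same way when each alternative needs its own boolean condition.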

 Using the IF-THEN Statement-

The simplest form of IF statement associates a condition with a sequence of statements enclosed
by the keywords THEN and END IF (not ENDIF) as illustrated in e.g.


The sequence of statements is executed only if the condition is TRUE. If the condition
is FALSE or NULL, the IF statement does nothing. In either case, control passes to the next
statement.
Example - Using a Simple IF-THEN Statement
DECLARE
  sales NUMBER(8,2) := 10100;
  quota NUMBER(8,2) := 10000;
  bonus NUMBER(6,2);
  emp_id NUMBER(6) := 120;
BEGIN
  IF sales > (quota + 200) THEN
    bonus := (sales - quota)/4;
    UPDATE employees SET salary = salary + bonus WHERE employee_id = emp_id;
  END IF;
END;
/

 Using the IF-THEN-ELSE Statement

The second form of IF statement adds the keyword ELSE followed by an alternative sequence of
statements, as shown in the example below.
The statements in the ELSE clause are executed only if the condition is FALSE or NULL.
The IF-THEN-ELSE statement ensures that one or the other sequence of statements is executed.
In the example, the first assignment to bonus is executed when the condition is TRUE, and the
second assignment is executed when the condition is FALSE or NULL.
Example - Using a Simple IF-THEN-ELSE Statement
DECLARE

sales NUMBER(8,2) := 12100;

quota NUMBER(8,2) := 10000;

bonus NUMBER(6,2);

emp_id NUMBER(6) := 120;

BEGIN

IF sales > (quota + 200) THEN

bonus := (sales - quota)/4;

ELSE

bonus := 50;

END IF;

UPDATE employees SET salary = salary + bonus WHERE employee_id = emp_id;

END;


IF statements can be nested, as shown in the following example.


Example - Nested IF Statements
DECLARE

sales NUMBER(8,2) := 12100;

quota NUMBER(8,2) := 10000;

bonus NUMBER(6,2);

emp_id NUMBER(6) := 120;

BEGIN

IF sales > (quota + 200) THEN

bonus := (sales - quota)/4;

ELSE

IF sales > quota THEN

bonus := 50;

ELSE

bonus := 0;

END IF;

END IF;

UPDATE employees SET salary = salary + bonus WHERE employee_id = emp_id;

END;

 Using the IF-THEN-ELSIF Statement

Sometimes you want to choose between several alternatives. You can use the
keyword ELSIF (not ELSEIF or ELSE IF) to introduce additional conditions, as shown in the example below.
If the first condition is FALSE or NULL, the ELSIF clause tests another condition.
An IF statement can have any number of ELSIF clauses; the final ELSE clause is optional.
Conditions are evaluated one by one from top to bottom. If any condition is TRUE, its associated


sequence of statements is executed and control passes to the next statement. If all conditions are
false or NULL, the sequence in the ELSE clause is executed, as shown in the example below.
Example - Using the IF-THEN-ELSIF Statement
DECLARE

sales NUMBER(8,2) := 20000;

bonus NUMBER(6,2);

emp_id NUMBER(6) := 120;

BEGIN

IF sales > 50000 THEN

bonus := 1500;

ELSIF sales > 35000 THEN

bonus := 500;

ELSE

bonus := 100;

END IF;

UPDATE employees SET salary = salary + bonus WHERE employee_id = emp_id;

END;

If the value of sales is larger than 50000, the first and second conditions are both TRUE.
Nevertheless, bonus is assigned the proper value of 1500 because the second condition is never
tested. When the first condition is TRUE, its associated statement is executed and control passes
to the UPDATE statement.

PROGRAM :-
[pratiksha@localhost ~]$ sudo su
[sudo] password for pratiksha:
[root@localhost pratiksha]# systemctl start mysqld;
[root@localhost pratiksha]# mysql;
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.2.17-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;


+--------------------+
| Database |
+--------------------+
| information_schema |
| jdbcConn |
| mysql |
| performance_schema |
| pratiksha |
| test |
+--------------------+
6 rows in set (0.00 sec)

MariaDB [(none)]> create database pro;


Query OK, 1 row affected (0.41 sec)

MariaDB [(none)]> use pro;


Database changed
MariaDB [pro]> create table borrower(roll_no int(10),name char(20),date_of_issue
date,name_of_book char(20),status char(5),primary key(roll_no));
Query OK, 0 rows affected (1.01 sec)

MariaDB [pro]> create table fine(roll_no int(10) primary key,date_of_return date,amount


int(10),foreign key(roll_no) references borrower(roll_no));
Query OK, 0 rows affected (1.26 sec)

MariaDB [pro]> desc borrower;


+---------------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+----------+------+-----+---------+-------+
| roll_no | int(10) | NO | PRI | NULL | |
| name | char(20) | YES | | NULL | |
| date_of_issue | date | YES | | NULL | |
| name_of_book | char(20) | YES | | NULL | |
| status | char(5) | YES | | NULL | |


+---------------+----------+------+-----+---------+-------+
5 rows in set (0.09 sec)

MariaDB [pro]> desc fine;


+----------------+---------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------+---------+------+-----+---------+-------+
| roll_no | int(10) | NO | PRI | NULL | |
| date_of_return | date | YES | | NULL | |
| amount | int(10) | YES | | NULL | |
+----------------+---------+------+-----+---------+-------+
3 rows in set (0.00 sec)

MariaDB [pro]> insert into borrower values(1,'pratiksha','2018-07-10','dbms','I');


Query OK, 1 row affected (0.55 sec)

MariaDB [pro]> insert into borrower values(2,'suman','2018-06-12','sepm','I');


Query OK, 1 row affected (0.45 sec)

MariaDB [pro]> insert into borrower values(3,'rupali','2018-06-28','isee','I');


Query OK, 1 row affected (0.06 sec)

MariaDB [pro]> select * from borrower;


+---------+-----------+---------------+--------------+--------+
| roll_no | name | date_of_issue | name_of_book | status |
+---------+-----------+---------------+--------------+--------+
| 1 | pratiksha | 2018-07-10 | dbms |I |
| 2 | suman | 2018-06-12 | sepm |I |
| 3 | rupali | 2018-06-28 | isee |I |
+---------+-----------+---------------+--------------+--------+
3 rows in set (0.00 sec)

MariaDB [pro]> delimiter //


MariaDB [pro]> create procedure find_fine(IN rno int(10),IN bname char(20))
-> begin
-> declare continue handler for 1062
-> select 'The record is already exists in result table' as msg;
-> select date_of_issue into @d from borrower where roll_no=rno AND
name_of_book=bname;
-> select DATEDIFF(CURDATE(),@d) into @da;
-> IF(@da > 15 AND @da <= 30) THEN


-> insert into fine values(rno,CURDATE(),@da*5);


-> ELSEIF @da > 30 THEN
-> insert into fine values(rno,CURDATE(),@da*50);
-> END IF;
-> update borrower set status='R' where roll_no=rno;
-> end
-> //
Query OK, 0 rows affected (0.64 sec)

MariaDB [pro]> call find_fine(1,'dbms')


-> //
Query OK, 1 row affected (0.62 sec)

MariaDB [pro]> select * from fine;


-> //
+---------+----------------+--------+
| roll_no | date_of_return | amount |
+---------+----------------+--------+
| 1 | 2018-09-25 | 3850 |
+---------+----------------+--------+
1 row in set (0.00 sec)

MariaDB [pro]> select * from borrower;


-> //
+---------+-----------+---------------+--------------+--------+
| roll_no | name | date_of_issue | name_of_book | status |
+---------+-----------+---------------+--------------+--------+
| 1 | pratiksha | 2018-07-10 | dbms |R |
| 2 | suman | 2018-06-12 | sepm |I |
| 3 | rupali | 2018-06-28 | isee |I |
+---------+-----------+---------------+--------------+--------+
3 rows in set (0.00 sec)

MariaDB [pro]> call find_fine(2,'sepm')


-> //
Query OK, 1 row affected (0.26 sec)

MariaDB [pro]> select * from fine;


-> //
+---------+----------------+--------+
| roll_no | date_of_return | amount |
+---------+----------------+--------+


| 1 | 2018-09-25 | 3850 |
| 2 | 2018-09-25 | 5250 |
+---------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [pro]> select * from borrower;


-> //
+---------+-----------+---------------+--------------+--------+
| roll_no | name | date_of_issue | name_of_book | status |
+---------+-----------+---------------+--------------+--------+
| 1 | pratiksha | 2018-07-10 | dbms |R |
| 2 | suman | 2018-06-12 | sepm |R |
| 3 | rupali | 2018-06-28 | isee |I |
+---------+-----------+---------------+--------------+--------+
3 rows in set (0.00 sec)

MariaDB [pro]> call find_fine(3,'isee')


-> //
Query OK, 1 row affected (0.18 sec)

MariaDB [pro]> select * from fine;


-> //
+---------+----------------+--------+
| roll_no | date_of_return | amount |
+---------+----------------+--------+
| 1 | 2018-09-25 | 3850 |
| 2 | 2018-09-25 | 5250 |
| 3 | 2018-09-25 | 4450 |
+---------+----------------+--------+
3 rows in set (0.00 sec)

MariaDB [pro]> select * from borrower;


-> //
+---------+-----------+---------------+--------------+--------+
| roll_no | name | date_of_issue | name_of_book | status |
+---------+-----------+---------------+--------------+--------+
| 1 | pratiksha | 2018-07-10 | dbms |R |
| 2 | suman | 2018-06-12 | sepm |R |
| 3 | rupali | 2018-06-28 | isee |R |


+---------+-----------+---------------+--------------+--------+
3 rows in set (0.00 sec)

MariaDB [pro]> call find_fine(1,'dbms')


-> //
+----------------------------------------------+
| msg |
+----------------------------------------------+
| The record is already exists in result table |
+----------------------------------------------+
1 row in set (0.01 sec)

Query OK, 0 rows affected (0.01 sec)

MariaDB [pro]> call find_fine(2,'sepm')


-> //
+----------------------------------------------+
| msg |
+----------------------------------------------+
| The record is already exists in result table |
+----------------------------------------------+
1 row in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

MariaDB [pro]>

ASSIGNMENT NO. 5
TITLE :-
PL/SQL Stored Procedures and Stored Function.
Write a Stored Procedure namely proc_Grade for the categorization of student.
If marks scored by the student in the examination are <=1500 and >=990, the student will be
placed in the 'Distinction' category.
If marks scored are between 989 and 900, the category is 'First Class'.
If marks are between 899 and 825, the category is 'Higher Second Class'.

Write a PL/SQL block for using procedure created with above requirements.
Stud_Marks (name, total_marks)
Result (Roll, Name, Class)
Frame the separate problem statement for writing PL/SQLStored Procedure and Function, inline
with above statements. The problem statement should clearly state the requirements.
THEORY :-
STORED PROCEDURES :-
 Definition of stored procedures-

A stored procedure is a segment of declarative SQL statements stored inside the database
catalog. A stored procedure can be invoked by triggers, other stored procedures, and applications
such as Java, Python, PHP.

A stored procedure that calls itself is known as a recursive stored procedure. Most database
management systems support recursive stored procedures. However, MySQL does not support it
very well. You should check your version of MySQL database before implementing recursive
stored procedures in MySQL.
 Stored Procedures in MySQL-

MySQL is known as the most popular open source RDBMS which is widely used by both
community and enterprise. However, during the first decade of its existence, it did not support
stored procedures, stored functions, triggers, and events. Since MySQL version 5.0, those
features were added to the MySQL database engine to make it more flexible and powerful.

 MySQL stored procedures advantages-

1. Typically, stored procedures help increase the performance of the applications. Once
created, stored procedures are compiled and stored in the database. However, MySQL


implements the stored procedures slightly different. MySQL stored procedures are
compiled on demand. After compiling a stored procedure, MySQL puts it into a cache
and maintains its own stored procedure cache for every single connection. If an
application uses a stored procedure multiple times in a single connection, the compiled
version is used, otherwise, the stored procedure works like a query.
2. Stored procedures help reduce the traffic between application and database server
because instead of sending multiple lengthy SQL statements, the application has to send
only the name and parameters of the stored procedure.
3. Stored procedures are reusable and transparent to any applications. Stored procedures
expose the database interface to all applications so that developers do not have to develop
functions that are already supported in stored procedures.
4. Stored procedures are secure. The database administrator can grant appropriate
permissions to applications that access stored procedures in the database without giving
any permissions on the underlying database tables.
Besides those advantages, stored procedures have their own disadvantages, which you should be
aware of before using them in your databases.

 MySQL stored procedures disadvantages-

1. If you use many stored procedures, the memory usage of every connection that is using
those stored procedures will increase substantially. In addition, if you overuse a large
number of logical operations inside stored procedures, the CPU usage will increase
because the database server is not well-designed for logical operations.
2. Stored procedure constructs are not designed for developing complex and flexible
business logic.
3. It is difficult to debug stored procedures. Only a few database management systems allow
you to debug stored procedures. Unfortunately, MySQL does not provide facilities for
debugging stored procedures.
4. It is not easy to develop and maintain stored procedures. Developing and maintaining
stored procedures often requires a specialized skill set that not all application
developers possess. This may lead to problems in both the application development and
maintenance phases.
MySQL stored procedures have their own advantages and disadvantages. When you develop
applications, you should decide whether or not to use stored procedures based on your
application's architecture.

 Example : MySQL Procedure

Here we have created a simple procedure called job_data, when we will execute the procedure it
will display all the data from "jobs" tables.
mysql> DELIMITER $$ ;
mysql> CREATE PROCEDURE job_data()
    -> SELECT * FROM JOBS; $$


Query OK, 0 rows affected (0.00 sec)

 Explanation:

- CREATE PROCEDURE command creates the stored procedure.


- Next part is the procedure name. Here the procedure name is " job_data".
-- Procedure names are not case sensitive, so job_data and JOB_DATA are same.
-- You cannot use two procedures with the same name in the same database.
-- You can use qualified names of the form "database-name.procedure-name", for example
"hr.job_data".
-- Procedure names can be delimited. If the name is delimited, it can contain spaces.
-- The maximum name length is 64 characters.
-- Avoid using names of built-in MySQL functions.
-- The last part of "CREATE PROCEDURE" is a pair of parentheses. "()" holds the parameter(s)
list as there are no parameters in this procedure, the parameter list is empty.
- Next part is SELECT * FROM JOBS; $$ which is the last statement of the procedure body.
Here the semicolon (;) is optional as $$ is a real statement-ender.
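When a procedure does take parameters, each one is declared inside the parentheses with an IN, OUT or INOUT mode. A minimal sketch (it assumes the borrower table created in the Assignment 4 PROGRAM above; the procedure and variable names are illustrative only):

DELIMITER $$
CREATE PROCEDURE issued_book_count(IN p_roll INT, OUT p_count INT)
BEGIN
  -- count books currently issued (status 'I') to the given roll number
  SELECT COUNT(*) INTO p_count
  FROM borrower
  WHERE roll_no = p_roll AND status = 'I';
END $$
DELIMITER ;

-- usage: the OUT value is returned through a user variable
CALL issued_book_count(1, @cnt);
SELECT @cnt;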

STORED FUNCTIONS :-
A stored function is a special kind of stored program that returns a single value. You use stored
functions to encapsulate common formulas or business rules that are reusable among SQL
statements or stored programs.
Different from a stored procedure, you can use a stored function in SQL statements wherever an
expression is used. This helps improve the readability and maintainability of the procedural code.
 MySQL stored function syntax-

The following illustrates the simplest syntax for creating a new stored function:
1 CREATE FUNCTION function_name(param1, param2, ...)
2 RETURNS datatype
3 [NOT] DETERMINISTIC
4 statements
First, you specify the name of the stored function after CREATE FUNCTION clause.
Second, you list all parameters of the stored function inside the parentheses. By default, all
parameters are the IN parameters. You cannot specify IN , OUT or INOUT modifiers to the
parameters.
Third, you must specify the data type of the return value in the RETURNS statement. It can be
any valid MySQL data types.
Fourth, for the same input parameters, if the stored function returns the same result, it is
considered deterministic; otherwise, the stored function is not deterministic. You have to decide
whether a stored function is deterministic or not. If you declare it incorrectly, the stored function
may produce an unexpected result, or the available optimization is not used which degrades the
performance.
Fifth, you write the code in the body of the stored function. It can be a single statement or a
compound statement. Inside the body section, you have to specify at least


one RETURN statement. The RETURN statement returns a value to the caller. Whenever
the RETURN statement is reached, the stored function's execution is terminated immediately.
 MySQL stored function example-

Let's take a look at an example of using the stored function. We will use the customers table in
the sample database for the demonstration.
The following example is a function that returns the level of a customer based on credit limit. We
use the IF statement to determine the credit limit.
1  DELIMITER $$
2
3  CREATE FUNCTION CustomerLevel(p_creditLimit DOUBLE) RETURNS VARCHAR(10)
4  DETERMINISTIC
5  BEGIN
6      DECLARE lvl VARCHAR(10);
7
8      IF p_creditLimit > 50000 THEN
9          SET lvl = 'PLATINUM';
10     ELSEIF (p_creditLimit <= 50000 AND p_creditLimit >= 10000) THEN
11         SET lvl = 'GOLD';
12     ELSEIF p_creditLimit < 10000 THEN
13         SET lvl = 'SILVER';
14     END IF;
15
16     RETURN (lvl);
17 END

Now, we can call the CustomerLevel() in a SELECT statement as follows:


1 SELECT
2 customerName,
3 CustomerLevel(creditLimit)
4 FROM
5 customers
6 ORDER BY
7 customerName;
We also rewrite the GetCustomerLevel() stored procedure that we developed in the MySQL IF
statement tutorial as follows:
DELIMITER $$

CREATE PROCEDURE GetCustomerLevel(
    IN  p_customerNumber INT(11),
    OUT p_customerLevel VARCHAR(10)
)
BEGIN
    DECLARE creditlim DOUBLE;

    SELECT creditlimit INTO creditlim
    FROM customers
    WHERE customerNumber = p_customerNumber;

    SELECT CustomerLevel(creditlim)
    INTO p_customerLevel;
END $$
As you can see, the GetCustomerLevel() stored procedure is much more readable when using
the CustomerLevel() stored function.
Notice that a stored function returns a single value only. If you include a SELECT statement
without the INTO clause, you will get an error.
In addition, if a stored function contains SQL statements, you should not use it inside other SQL
statements; otherwise, the stored function will slow down the speed of the query.
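Because a stored function can appear anywhere an expression can, CustomerLevel() could equally be used in a WHERE clause, for example (a sketch based on the same sample customers table; note that CustomerLevel() contains no SQL statement of its own, so the caveat above does not apply here):

SELECT customerName, creditLimit
FROM customers
WHERE CustomerLevel(creditLimit) = 'PLATINUM';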

PROGRAM :-
[root@localhost ~]# systemctl start mysqld;
[root@localhost ~]# mysql;
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.43-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;


+--------------------+
| Database |
+--------------------+
| information_schema |
| EMPLOYEE |
| Employee |
| Pratik |
| Student |
| Trig |
| book |
| dbms |
| emp |
| employee |
| employee1 |
| fine |
| mysql |
| performance_schema |
| stud |
| student |


| student1 |
| suman |
| test |
| view |
+--------------------+
20 rows in set (0.22 sec)

MariaDB [(none)]> create database error_hamdling;


Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> use error_hamdling;


Database changed
MariaDB [error_hamdling]> create table student(roll_no int(5) primary key,sname
char(10),total_marks int(5));
Query OK, 0 rows affected (0.09 sec)

MariaDB [error_hamdling]> insert into student values(2,'Sarang Patil',825);


Query OK, 1 row affected, 1 warning (0.01 sec)

MariaDB [error_hamdling]> insert into student values(3,'Shruti Kulkarni',1150);


Query OK, 1 row affected, 1 warning (0.02 sec)

MariaDB [error_hamdling]> insert into student values(4,'Aarati Dike',750);


Query OK, 1 row affected, 1 warning (0.02 sec)

MariaDB [error_hamdling]> delimiter //

MariaDB [error_hamdling]> desc student;


-> //
+-------------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+----------+------+-----+---------+-------+
| roll_no | int(5) | NO | PRI | NULL | |
| sname | char(10) | YES | | NULL | |
| total_marks | int(5) | YES | | NULL | |
+-------------+----------+------+-----+---------+-------+
3 rows in set (0.03 sec)

MariaDB [error_hamdling]> desc result;


-> //
+---------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+-------+
| roll_no | int(5) | YES | MUL | NULL | |
| sname | char(10) | YES | | NULL | |
| class | char(10) | YES | | NULL | |


+---------+----------+------+-----+---------+-------+
3 rows in set (0.00 sec)

MariaDB [error_hamdling]> select * from student;


-> //
+---------+------------+-------------+
| roll_no | sname | total_marks |
+---------+------------+-------------+
| 2 | Sarang Pat | 825 |
| 3 | Shruti Kul | 1150 |
| 4 | Aarati Dik | 750 |
+---------+------------+-------------+
3 rows in set (0.00 sec)

MariaDB [error_hamdling]> create function find_grade1(marks int(10)) returns char(15)


deterministic
-> begin
-> declare grade char(15);
-> if(marks<=1500 AND marks>=990) then
-> set grade='distinction';
-> elseif(marks>=900 AND marks<=989) then
-> set grade='first class';
-> elseif(marks<=899 AND marks>=825) then
-> set grade='second class';
-> end if;
-> return grade;
-> end
-> //
Query OK, 0 rows affected (0.00 sec)

MariaDB [error_hamdling]> select find_grade1(990);


-> //
+------------------+
| find_grade1(990) |
+------------------+
| distinction |
+------------------+
1 row in set (0.00 sec)

MariaDB [error_hamdling]> desc result;


-> //
+---------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+-------+
| roll_no | int(5) | YES | MUL | NULL | |
| sname | char(10) | YES | | NULL | |


| class | char(10) | YES | | NULL | |


+---------+----------+------+-----+---------+-------+
3 rows in set (0.00 sec)

MariaDB [error_hamdling]> create procedure find_result2( IN rno int(10))


-> begin
->
-> declare tmarks int(10);
-> declare grade char(20);
-> declare student_name char(20);
-> declare continue handler for 1062
-> select 'The record is already exists in result table' as msg;
-> select student.total_marks,student.sname into tmarks,student_name from student where
student.roll_no=rno;
->
-> set grade= find_grade1(tmarks);
-> insert into result values(rno,student_name,grade);
-> end
-> //
Query OK, 0 rows affected (0.00 sec)

MariaDB [error_hamdling]> call find_result2(2);


-> //
Query OK, 1 row affected, 1 warning (0.03 sec)

MariaDB [error_hamdling]> select * from result;


-> //
+---------+------------+------------+
| roll_no | sname | class |
+---------+------------+------------+
| 2 | Sarang Pat | second cla |
+---------+------------+------------+
1 row in set (0.00 sec)

MariaDB [error_hamdling]> call find_result2(2);


-> //
Query OK, 1 row affected, 1 warning (0.02 sec)

MariaDB [error_hamdling]> select * from result;


-> //
+---------+------------+------------+
| roll_no | sname | class |
+---------+------------+------------+
| 2 | Sarang Pat | second cla |
| 2 | Sarang Pat | second cla |
+---------+------------+------------+


2 rows in set (0.00 sec)

MariaDB [error_hamdling]> desc result;


-> //
+---------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+-------+
| roll_no | int(5) | YES | MUL | NULL | |
| sname | char(10) | YES | | NULL | |
| class | char(10) | YES | | NULL | |
+---------+----------+------+-----+---------+-------+
3 rows in set (0.00 sec)

MariaDB [error_hamdling]> drop table result;


-> //
Query OK, 0 rows affected (0.01 sec)

MariaDB [error_hamdling]> create table result(roll_no int(10) primary key,sname char(15),class


char(20));
-> //
Query OK, 0 rows affected (0.05 sec)

MariaDB [error_hamdling]> call find_result2(2);


-> //
Query OK, 1 row affected (0.02 sec)

MariaDB [error_hamdling]> select * from result;


-> //
+---------+------------+--------------+
| roll_no | sname | class |
+---------+------------+--------------+
| 2 | Sarang Pat | second class |
+---------+------------+--------------+
1 row in set (0.01 sec)

MariaDB [error_hamdling]> call find_result2(2);


-> //
+----------------------------------------------+
| msg |
+----------------------------------------------+
| The record is already exists in result table |
+----------------------------------------------+
1 row in set (0.01 sec)


Query OK, 0 rows affected, 1 warning (0.01 sec)

MariaDB [error_hamdling]> call find_result2(3);


-> //
Query OK, 1 row affected (0.02 sec)

MariaDB [error_hamdling]> select * from result;


-> //
+---------+------------+--------------+
| roll_no | sname | class |
+---------+------------+--------------+
| 2 | Sarang Pat | second class |
| 3 | Shruti Kul | distinction |
+---------+------------+--------------+
2 rows in set (0.00 sec)

MariaDB [error_hamdling]> call find_result2(3);


-> //
+----------------------------------------------+
| msg |
+----------------------------------------------+
| The record is already exists in result table |
+----------------------------------------------+
1 row in set (0.01 sec)

Query OK, 0 rows affected, 1 warning (0.01 sec)

ASSIGNMENT NO. 6
TITLE :-
Cursors :
( All types : Implicit, Explicit, Cursor FOR loop, Parameterized Cursor )


Write a PL/SQL block of code using parameterized Cursor, that will merge the data available in
the newly created table N_RollCall with the data available in the table O_RollCall.
If the data in the first table already exist in the second table then the data should be skipped.
Frame the separate problem statement for writing PL/SQL block to implement all types of
Cursors inline with above statements. The problem statement should clearly state the
requirements.
THEORY :-
Cursors

A cursor is a temporary work area created in system memory when an SQL statement is
executed. A cursor is a set of rows together with a pointer that identifies a current row. It is a
database object used to retrieve data from a result set one row at a time. It is useful when we want to
process the records of a table one at a time. A cursor can hold more than one row, but it can
process only one row at a time. The set of rows the cursor holds is called the active set.

Types of Cursors

There are the following two types of Cursors:


1. Implicit Cursor
2. Explicit Cursor

Implicit Cursor

These types of cursors are generated and used by the system during the manipulation of a DML
query (INSERT, UPDATE and DELETE). An implicit cursor is also generated by the system
when a single row is selected by a SELECT command.

Explicit Cursor

This type of cursor is generated by the user using a SELECT command. An explicit cursor
contains more than one row, but only one row can be processed at a time. An explicit cursor
moves one by one over the records. An explicit cursor uses a pointer that holds the record of a
row. After fetching a row, the cursor pointer moves to the next row.

Main components of Cursors

Each cursor contains the followings 5 parts:

1. Declare Cursor: In this part we declare variables and return a set of values.
2. Open: This is the entering part of the cursor.
3. Fetch: Used to retrieve the data row by row from a cursor.
4. Close: This is an exit part of the cursor and used to close a cursor.


5. Deallocate: In this part we delete the cursor definition and release all the system
resources associated with the cursor.
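The syntax shown next is for Microsoft SQL Server. In MySQL/MariaDB, which is what the PROGRAM sections of this manual use, the same steps reduce to DECLARE, OPEN, FETCH and CLOSE inside a stored program, with a NOT FOUND handler ending the fetch loop (there is no DEALLOCATE). A minimal sketch, assuming a table student(roll_no, sname):

DELIMITER $$
CREATE PROCEDURE print_students()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_roll INT;
  DECLARE v_name CHAR(20);
  DECLARE stud_cur CURSOR FOR SELECT roll_no, sname FROM student;   -- 1. declare
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN stud_cur;                                                    -- 2. open
  read_loop: LOOP
    FETCH stud_cur INTO v_roll, v_name;                             -- 3. fetch row by row
    IF done = 1 THEN
      LEAVE read_loop;
    END IF;
    SELECT v_roll, v_name;
  END LOOP;
  CLOSE stud_cur;                                                   -- 4. close
END $$
DELIMITER ;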

Syntax of a Cursor

1. DECLARE @Variable nvarchar(50)              /* Declare all required variables */
2. DECLARE Cursor_Name CURSOR                  /* Declare cursor name */
3. [LOCAL | GLOBAL]                            /* Define cursor scope */
4. [FORWARD_ONLY | SCROLL]                     /* Define movement direction of cursor */
5. [KEYSET | DYNAMIC | STATIC | FAST_FORWARD]  /* Define basic type of cursor */
6. [SCROLL_LOCKS | OPTIMISTIC | READ_ONLY]     /* Define locks */
7.
8. OPEN Cursor_Name                            /* Open cursor */
9. FETCH NEXT FROM Cursor_Name                 /* Fetch data from cursor */
10. Implement SQL query
11. CLOSE Cursor_Name                          /* Close the cursor */
12. DEALLOCATE Cursor_Name                     /* Deallocate all resources and memory */

Now we will explain 4 important terminologies of cursors.

Cursor Scope

Microsoft SQL Server supports the GLOBAL and LOCAL keywords on the DECLARE
CURSOR statement to define the scope of the cursor name.

1. GLOBAL: specifies that the cursor name is global to the connection.


2. LOCAL: specifies that the cursor name is local to the Stored Procedure, trigger or query
that holds the cursor.

Data Fetch Option in Cursors

Microsoft SQL Server supports the following two fetch options for data:

1. FORWARD_ONLY: Specifies that the cursor can only be scrolled from the first to the
last row.
2. SCROLL: It provides 6 options to fetch the data (FIRST, LAST, PRIOR, NEXT,
RELATIVE and ABSOLUTE).

Types of cursors

Microsoft SQL Server supports the following 4 types of cursors.

1. STATIC CURSOR: A static cursor populates the result set during cursor creation and
the query result is cached for the lifetime of the cursor. A static cursor can move forward


and backward.

2. FAST_FORWARD: This is the default type of cursor. It is identical to the static except
that you can only scroll forward.

3. DYNAMIC: In a dynamic cursor, additions and deletions are visible for others in the
data source while the cursor is open.

4. KEYSET: This is similar to a dynamic cursor except we can't see records others add. If
another user deletes a record, it is inaccessible from our record set.

Types of Locks

Locking is the process by which a DBMS restricts access to a row in a multi-user environment.
When a row or column is exclusively locked, other users are not permitted to access the locked
data until the lock is released. It is used for data integrity. This ensures that two users cannot
simultaneously update the same column in a row.

Microsoft SQL Server supports the following three types of Locks.

1. READ ONLY: Specifies that the cursor cannot be updated.

2. SCROLL_LOCKS: Provides data integrity into the cursor. It specifies that the cursor
will lock the rows as they are read into the cursor to ensure that updates or deletes made
using the cursor will succeed.

3. OPTIMISTIC: Specifies that the cursor does not lock rows as they are read into the
cursor. So, the updates or deletes made using the cursor will not succeed if the row has
been updated outside the cursor.
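Returning to the problem statement of this assignment, MySQL cursors are parameterized indirectly: the cursor's SELECT can reference the parameters of the enclosing procedure. A sketch of the N_RollCall/O_RollCall merge along those lines (the column names roll_no and name are assumed, since the problem statement does not fix them):

DELIMITER $$
CREATE PROCEDURE merge_rollcall(IN p_min_roll INT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_roll INT;
  DECLARE v_name CHAR(20);
  -- the procedure parameter acts as the cursor parameter
  DECLARE merge_cur CURSOR FOR
    SELECT roll_no, name FROM N_RollCall WHERE roll_no >= p_min_roll;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN merge_cur;
  merge_loop: LOOP
    FETCH merge_cur INTO v_roll, v_name;
    IF done = 1 THEN
      LEAVE merge_loop;
    END IF;
    -- skip rows that already exist in O_RollCall
    IF NOT EXISTS (SELECT 1 FROM O_RollCall WHERE roll_no = v_roll) THEN
      INSERT INTO O_RollCall(roll_no, name) VALUES (v_roll, v_name);
    END IF;
  END LOOP;
  CLOSE merge_cur;
END $$
DELIMITER ;

-- usage: merge every N_RollCall record with roll_no >= 1
CALL merge_rollcall(1);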

ASSIGNMENT NO. 7
TITLE :-
Database Trigger ( All types : Row-level and Statement-level triggers, Before and After triggers
).
Write a database trigger on Library table.


The system should keep track of the records that are being updated or deleted. The old value of
updated or deleted records should be added in Library_Audit table.
Frame the problem statement for writing Database Triggers of all types, inline with above
statement. The problem statement should clearly state the requirements.
THEORY :-
Trigger :-

A SQL trigger is a set of SQL statements stored in the database catalog. A SQL trigger is
executed or fired whenever an event associated with a table occurs e.g., insert, update or delete.
A SQL trigger is a special type of stored procedure. It is special because it is not called directly
like a stored procedure. The main difference between a trigger and a stored procedure is that a
trigger is called automatically when a data modification event is made against a table whereas a
stored procedure must be called explicitly.
It is important to understand the SQL trigger's advantages and disadvantages so that you can use
it appropriately. In the following sections, we will discuss the advantages and disadvantages of
using SQL triggers.
 Advantages of using SQL triggers-
1. SQL triggers provide an alternative way to check the integrity of data.
2. SQL triggers can catch errors in business logic in the database layer.
3. SQL triggers provide an alternative way to run scheduled tasks. By using SQL triggers,
you don't have to wait to run the scheduled tasks because the triggers are invoked
automatically before or after a change is made to the data in the tables.
4. SQL triggers are very useful to audit the changes of data in tables.

 Disadvantages of using SQL triggers-


1. SQL triggers can only provide extended validation; they cannot replace all the
validations. Some simple validations have to be done in the application layer. For
example, you can validate the user's input on the client side by using JavaScript or on the
server side using server-side scripting languages such as JSP, PHP, ASP.NET or Perl.
2. SQL triggers are invoked and executed invisibly from the client applications, therefore, it
is difficult to figure out what happens in the database layer.
3. SQL triggers may increase the overhead of the database server.

Triggers or stored procedures? It is recommended to use a stored procedure when one can do the
job, and to consider a SQL trigger only when the work cannot be done with a stored procedure.
 Introduction to MySQL triggers-

In MySQL, a trigger is a set of SQL statements that is invoked automatically when a change is
made to the data on the associated table. A trigger can be defined to be invoked either before or
after the data is changed by an INSERT, UPDATE or DELETE statement. Before MySQL version
5.7.2, you could define at most six triggers for each table:


1. BEFORE INSERT – activated before data is inserted into the table.


2. AFTER INSERT – activated after data is inserted into the table.
3. BEFORE UPDATE – activated before data in the table is updated.
4. AFTER UPDATE – activated after data in the table is updated.
5. BEFORE DELETE – activated before data is removed from the table.
6. AFTER DELETE – activated after data is removed from the table.
However, from MySQL version 5.7.2+, you can define multiple triggers for the same trigger
event and action time.
When you use a statement that does not use INSERT, DELETE or UPDATE statement to change
data in a table, the triggers associated with the table are not invoked. For example,
the TRUNCATE statement removes all data of a table but does not invoke the trigger associated
with that table.
There are some statements that use the INSERT statement behind the scenes such as REPLACE
statement or LOAD DATA statement. If you use these statements, the corresponding triggers
associated with the table are invoked.
You must use a unique name for each trigger associated with a table; within one database, trigger
names must be unique. Triggers in different databases can share a name, but it is good practice to
give every trigger a unique, descriptive name.
You should name the triggers using the following naming convention:
1 (BEFORE|AFTER)_tableName_(INSERT|UPDATE|DELETE)
For example, before_order_update is a trigger invoked before a row in the order table is updated.
The following naming convention is as good as the one above.
1 tablename_(BEFORE|AFTER)_(INSERT|UPDATE|DELETE)
For example, order_before_update is the same as before_order_update trigger above.
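Applied to this assignment, a trigger named after the first convention could audit updates on the Library table roughly as follows (a sketch only; the column names are assumed to match the library and library_audit tables used in the PROGRAM section below):

DELIMITER $$
CREATE TRIGGER before_library_update
BEFORE UPDATE ON library
FOR EACH ROW
BEGIN
  -- preserve the old values before they are overwritten
  INSERT INTO library_audit
  VALUES (OLD.bid, OLD.bname, OLD.bauthor, OLD.cost, OLD.publisher, 'UPDATE');
END $$
DELIMITER ;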
 MySQL triggers storage-

MySQL stores triggers in a data directory e.g., /data/classicmodels/ with the files
named tablename.TRG and triggername.TRN :

1. The tablename.TRG file maps the trigger to the corresponding table.


2. the triggername.TRN file contains the trigger definition.
You can back up the MySQL triggers by copying the trigger files to the backup folder. You can
also backup the triggers using the mysqldump tool.

 MySQL trigger limitations-

MySQL triggers cover all features defined in the standard SQL. However, there are some
limitations that you should know before using them in your applications.
MySQL triggers cannot:

1. Use SHOW, LOAD DATA, LOAD TABLE, BACKUP


DATABASE, RESTORE, FLUSH and RETURN statements.
2. Use statements that commit or rollback implicitly or explicitly such as COMMIT ,
ROLLBACK , START TRANSACTION , LOCK/UNLOCK TABLES , ALTER ,
CREATE , DROP , RENAME.
3. Use prepared statements such as PREPARE and EXECUTE.


4. Use dynamic SQL statements.


From MySQL version 5.1.4, a trigger can call a stored procedure or stored function, which was a
limitation in previous versions.

PROGRAM :-
mysql> create database trigger1;
Query OK, 1 row affected (0.10 sec)

mysql> use trigger1;


Database changed

mysql> create table library(bid int(10),bname char(20),bauthor char(20),cost int(10),publisher


char(20),primary key(bid));
Query OK, 0 rows affected (1.15 sec)

mysql> create table library_audit(bid int(10),bname char(20),bauthor char(20),cost


int(10),publisher char(20),operation char(10),primary key(bid));
Query OK, 0 rows affected (0.70 sec)

mysql> insert into library values(1,'dbms','korth',500,'technical');


Query OK, 1 row affected (0.17 sec)

mysql> create trigger trig_update after update on library for each row insert into library_audit
values(old.bid,old.bname,old.bauthor,old.cost,old.publisher,"update");
Query OK, 0 rows affected (0.09 sec)

mysql> select * from library_audit;


Empty set (0.01 sec)

mysql> select * from library;


+-----+-------+---------+------+-----------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+---------+------+-----------+
| 1 | dbms | korth | 500 | technical |
+-----+-------+---------+------+-----------+
1 row in set (0.10 sec)

mysql> insert into library values(2,'sepm','pressman',400,'techmax');


Query OK, 1 row affected (0.21 sec)

mysql> select *from library;


+-----+-------+----------+------+-----------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+----------+------+-----------+

| 1 | dbms | korth | 500 | technical |


| 2 | sepm | pressman | 400 | techmax |
+-----+-------+----------+------+-----------+
2 rows in set (0.00 sec)

mysql> update library set cost=600 where bid=1;


Query OK, 1 row affected (0.16 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select * from library_audit;


+-----+-------+---------+------+-----------+-----------+
| bid | bname | bauthor | cost | publisher | operation |
+-----+-------+---------+------+-----------+-----------+
| 1 | dbms | korth | 500 | technical | update |
+-----+-------+---------+------+-----------+-----------+
1 row in set (0.00 sec)

mysql> select *from library;


+-----+-------+----------+------+-----------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+----------+------+-----------+
| 1 | dbms | korth | 600 | technical |
| 2 | sepm | pressman | 400 | techmax |
+-----+-------+----------+------+-----------+
2 rows in set (0.00 sec)

mysql> create trigger trig_delete before delete on library for each row insert into library_audit
values(old.bid,old.bname,old.bauthor,old.cost,old.publisher,"Delete");
Query OK, 0 rows affected (0.17 sec)

mysql> delete from library where bid=2;


Query OK, 1 row affected (0.24 sec)

mysql> select *from library;


+-----+-------+---------+------+-----------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+---------+------+-----------+
| 1 | dbms | korth | 600 | technical |
+-----+-------+---------+------+-----------+
1 row in set (0.00 sec)

mysql> select *from library_audit;


+-----+-------+----------+------+-----------+-----------+
| bid | bname | bauthor | cost | publisher | operation |
+-----+-------+----------+------+-----------+-----------+
| 1 | dbms | korth | 500 | technical | update |


| 2 | sepm | pressman | 400 | techmax | Delete |


+-----+-------+----------+------+-----------+-----------+
2 rows in set (0.00 sec)

mysql> insert into library values(2,'sepm','pressman',400,'Mcgrawhill');


Query OK, 1 row affected (0.23 sec)

mysql> insert into library values(3,'CN','frouzen',800,'Schilbtz');


Query OK, 1 row affected (0.21 sec)

mysql> insert into library values(4,'c++','balguruswami',600,'Pearson');


Query OK, 1 row affected (0.16 sec)

mysql> select *from library;


+-----+-------+--------------+------+------------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+--------------+------+------------+
| 1 | dbms | korth | 600 | technical |
| 2 | sepm | pressman | 400 | Mcgrawhill |
| 3 | CN | frouzen | 800 | Schilbtz |
| 4 | c++ | balguruswami | 600 | Pearson |
+-----+-------+--------------+------+------------+
4 rows in set (0.00 sec)

mysql> select * from library_audit;


+-----+-------+----------+------+-----------+-----------+
| bid | bname | bauthor | cost | publisher | operation |
+-----+-------+----------+------+-----------+-----------+
| 1 | dbms | korth | 500 | technical | update |
| 2 | sepm | pressman | 400 | techmax | Delete |
+-----+-------+----------+------+-----------+-----------+
2 rows in set (0.00 sec)

mysql> create table coun1t(book_count int(10));


Query OK, 0 rows affected (0.57 sec)

mysql> drop table coun1t;


Query OK, 0 rows affected (0.47 sec)

mysql> create table count1(book_count int(10));


Query OK, 0 rows affected (0.40 sec)

mysql> create trigger trig_insert after insert on library for each row update count1 set
book_count=book_count+1;


Query OK, 0 rows affected (0.19 sec)

mysql> insert into count1 values(0);


Query OK, 1 row affected (0.17 sec)

mysql> select count(*) from library;


+----------+
| count(*) |
+----------+
| 4|
+----------+
1 row in set (0.10 sec)

mysql> select *from count1;


+------------+
| book_count |
+------------+
| 0|
+------------+
1 row in set (0.00 sec)

mysql> insert into library values(5,'SEPM','Pressman',400,'Samuels');


Query OK, 1 row affected (0.17 sec)

mysql> select *from count1;


+------------+
| book_count |
+------------+
| 1|
+------------+
1 row in set (0.00 sec)

mysql> create TRIGGER trig_insert1 AFTER INSERT ON library FOR EACH ROW insert into
library_audit values(bid,bname,bauthor,cost,publisher,"INSERT");
Query OK, 0 rows affected (0.28 sec)

mysql> select *from library;


+-----+-------+--------------+------+------------+
| bid | bname | bauthor | cost | publisher |
+-----+-------+--------------+------+------------+
| 1 | dbms | korth | 600 | technical |
| 2 | sepm | pressman | 400 | Mcgrawhill |
| 3 | CN | frouzen | 800 | Schilbtz |
| 4 | c++ | balguruswami | 600 | Pearson |


| 5 | SEPM | Pressman | 400 | Samuels |


+-----+-------+--------------+------+------------+
5 rows in set (0.00 sec)

mysql> insert into library values(6,'AuditCourse','Gikwad',100,'Mane');


Query OK, 1 row affected (0.12 sec)

mysql> select *from library_audit;


+-----+-------+----------+------+-----------+-----------+
| bid | bname | bauthor | cost | publisher | operation |
+-----+-------+----------+------+-----------+-----------+
| 0 | NULL | NULL | NULL | NULL | INSERT |
| 1 | dbms | korth | 500 | technical | update |
| 2 | sepm | pressman | 400 | techmax | Delete |
+-----+-------+----------+------+-----------+-----------+
3 rows in set (0.00 sec)

mysql>

ASSIGNMENT NO. 8
TITLE :-
Implement MYSQL/Oracle database connectivity with PHP/python/Java.
Implement Database navigation operations (add, delete, edit) using ODBC/JDBC.
THEORY :-


1. Connection to database with Java


The interface for accessing relational databases from Java is Java Database Connectivity
(JDBC). Via JDBC you create a connection to the database, issue database queries and update as
well as receive the results.
JDBC provides an interface which allows you to perform SQL operations independently of the
instance of the used database. To use JDBC, you require the database specific implementation of
the JDBC driver.
2. Introduction to MySQL
It is assumed that you have installed MySQL successfully.
Now you should know how to access MySQL via the command line.
3. MySQL JDBC driver
To connect to MySQL from Java, you have to use the JDBC driver from MySQL. The MySQL
JDBC driver is called MySQL Connector/J. You find the latest MySQL JDBC driver under the
following URL: http://dev.mysql.com/downloads/connector/j.
The download contains a JAR file which we require later.
4. Exercise: create example database

In this exercise you create a new database, a new user and an example table. For this connect to
the MySQL server via the mysql command line client.

Create a new database called feedback and start using it with the following command.

create database feedback;


use feedback;

Create a user with the following command.

CREATE USER sqluser IDENTIFIED BY 'sqluserpw';

grant usage on *.* to sqluser@localhost identified by 'sqluserpw';


grant all privileges on feedback.* to sqluser@localhost;

Now create a sample database table with example content via the following SQL statement.

CREATE TABLE comments (


id INT NOT NULL AUTO_INCREMENT,
MYUSER VARCHAR(30) NOT NULL,
EMAIL VARCHAR(30),
WEBPAGE VARCHAR(100) NOT NULL,
DATUM DATE NOT NULL,
SUMMARY VARCHAR(40) NOT NULL,
COMMENTS VARCHAR(400) NOT NULL,
PRIMARY KEY (ID)
);


INSERT INTO comments values (default, 'lars',


'myemail@gmail.com','http://www.vogella.com', '2009-09-14 10:33:11', 'Summary','My first
comment' );
5. Java JDBC
Create a Java project and a package called de.vogella.mysql.first.
Create a lib folder and copy the JDBC driver into this folder. Add the JDBC driver to your
classpath.
Create the following class to connect to the MySQL database and perform queries, inserts and
deletes. It also prints the metadata (table name, column names) of a query result.
package de.vogella.mysql.first;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Date;

public class MySQLAccess {
    private Connection connect = null;
    private Statement statement = null;
    private PreparedStatement preparedStatement = null;
    private ResultSet resultSet = null;

    public void readDataBase() throws Exception {
        try {
            // This will load the MySQL driver, each DB has its own driver
            Class.forName("com.mysql.jdbc.Driver");
            // Setup the connection with the DB
            connect = DriverManager
                    .getConnection("jdbc:mysql://localhost/feedback?"
                            + "user=sqluser&password=sqluserpw");

            // Statements allow to issue SQL queries to the database
            statement = connect.createStatement();
            // Result set gets the result of the SQL query
            resultSet = statement
                    .executeQuery("select * from feedback.comments");
            writeResultSet(resultSet);

            // PreparedStatements can use variables and are more efficient
            preparedStatement = connect
                    .prepareStatement("insert into feedback.comments values (default, ?, ?, ?, ?, ?, ?)");
            // columns: myuser, email, webpage, datum, summary, comments
            // Parameters start with 1
            preparedStatement.setString(1, "Test");
            preparedStatement.setString(2, "TestEmail");
            preparedStatement.setString(3, "TestWebpage");
            preparedStatement.setDate(4, new java.sql.Date(2009, 12, 11));
            preparedStatement.setString(5, "TestSummary");
            preparedStatement.setString(6, "TestComment");
            preparedStatement.executeUpdate();

            preparedStatement = connect
                    .prepareStatement("SELECT myuser, webpage, datum, summary, COMMENTS from feedback.comments");
            resultSet = preparedStatement.executeQuery();
            writeResultSet(resultSet);

            // Remove again the inserted comment
            preparedStatement = connect
                    .prepareStatement("delete from feedback.comments where myuser = ?");
            preparedStatement.setString(1, "Test");
            preparedStatement.executeUpdate();

            resultSet = statement
                    .executeQuery("select * from feedback.comments");
            writeMetaData(resultSet);

        } catch (Exception e) {
            throw e;
        } finally {
            close();
        }
    }

    private void writeMetaData(ResultSet resultSet) throws SQLException {
        // Now get some metadata (table name, column names) from the result set
        System.out.println("The columns in the table are: ");
        System.out.println("Table: " + resultSet.getMetaData().getTableName(1));
        for (int i = 1; i <= resultSet.getMetaData().getColumnCount(); i++) {
            System.out.println("Column " + i + " " + resultSet.getMetaData().getColumnName(i));
        }
    }

    private void writeResultSet(ResultSet resultSet) throws SQLException {
        // ResultSet is initially positioned before the first data set
        while (resultSet.next()) {
            // It is possible to get the columns via name,
            // or via the column number which starts at 1,
            // e.g. resultSet.getString(2)
            String user = resultSet.getString("myuser");
            String website = resultSet.getString("webpage");
            String summary = resultSet.getString("summary");
            Date date = resultSet.getDate("datum");
            String comment = resultSet.getString("comments");
            System.out.println("User: " + user);
            System.out.println("Website: " + website);
            System.out.println("summary: " + summary);
            System.out.println("Date: " + date);
            System.out.println("Comment: " + comment);
        }
    }

    // You need to close the resultSet, statement and connection
    private void close() {
        try {
            if (resultSet != null) {
                resultSet.close();
            }
            if (statement != null) {
                statement.close();
            }
            if (connect != null) {
                connect.close();
            }
        } catch (Exception e) {
            // ignore errors while closing
        }
    }
}
Create the following main program to test your class.
package de.vogella.mysql.first.test;

import de.vogella.mysql.first.MySQLAccess;

public class Main {
    public static void main(String[] args) throws Exception {
        MySQLAccess dao = new MySQLAccess();
        dao.readDataBase();
    }
}

ASSIGNMENT NO. 9
TITLE :- Design and develop MongoDB Queries using CRUD operations. (Use CRUD
operations, SAVE method, logical operators).
1. Create collection (book).


2. Create document (b_id, b_title, b_author, cost, category).


3. Display all records.
4. Update cost of book with given b_id.
5. Delete document with given b_id.
6. Display only title from all the documents.
7. Illustrate the use of save() method [save method used to insert as well as updating
document].
8. List all records where category is CN and cost > 200.
9. List all records where category is not CN.
10. List all records where category is CN or DBMS.
11. List all records where category is neither CN nor DBMS.

THEORY :-
MongoDB is a cross-platform, document-oriented database that provides, high performance,
high availability, and easy scalability. MongoDB works on concept of collection and document.
Database
Database is a physical container for collections. Each database gets its own set of files on the file
system. A single MongoDB server typically has multiple databases.
Collection
Collection is a group of MongoDB documents. It is the equivalent of an RDBMS table. A
collection exists within a single database. Collections do not enforce a schema. Documents
within a collection can have different fields. Typically, all documents in a collection are of
similar or related purpose.
Document
A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema
means that documents in the same collection do not need to have the same set of fields or
structure, and common fields in a collection's documents may hold different types of data.
The following table shows the relationship of RDBMS terminology with MongoDB.
RDBMS                        MongoDB

Database                     Database
Table                        Collection
Tuple/Row                    Document
column                       Field
Table Join                   Embedded Documents
Primary Key                  Primary Key (Default key _id provided by mongodb itself)

Database Server and Client
Mysqld/Oracle                mongod
mysql/sqlplus                mongo

Any relational database has a typical schema design that shows number of tables and the
relationship between these tables. While in MongoDB, there is no concept of relationship.
 Advantages of MongoDB over RDBMS-
1. Schema less − MongoDB is a document database in which one collection holds different
documents. Number of fields, content and size of the document can differ from one
document to another.
2. Structure of a single object is clear.
3. No complex joins.
4. Deep query-ability. MongoDB supports dynamic queries on documents using a
document-based query language that's nearly as powerful as SQL.
5. Tuning.
6. Ease of scale-out − MongoDB is easy to scale.
7. Conversion/mapping of application objects to database objects not needed.
8. Uses internal memory for storing the (windowed) working set, enabling faster access of
data.

 Why Use MongoDB?


1. Document Oriented Storage − Data is stored in the form of JSON style documents.
2. Index on any attribute
3. Replication and high availability
4. Auto-sharding
5. Rich queries
6. Fast in-place updates
7. Professional support by MongoDB

 Where to Use MongoDB?


1. Big Data
2. Content Management and Delivery
3. Mobile and Social Infrastructure
4. User Data Management
5. Data Hub


Install MongoDB on Ubuntu


Run the following command to import the MongoDB public GPG key −
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
Create a /etc/apt/sources.list.d/mongodb.list file using the following command.
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen'
| sudo tee /etc/apt/sources.list.d/mongodb.list
Now issue the following command to update the repository −
sudo apt-get update
Next install the MongoDB by using the following command −
apt-get install mongodb-10gen = 2.2.3
In the above installation, 2.2.3 is the MongoDB version released at the time of writing. Always make
sure to install the latest version. Now MongoDB is installed successfully.

Start MongoDB
sudo service mongodb start

Stop MongoDB
sudo service mongodb stop

Restart MongoDB
sudo service mongodb restart
To use MongoDB run the following command.
mongo
This will connect you to running MongoDB instance.

CRUD operations create, read, update, and delete documents.


Create Operations
Create or insert operations add new documents to a collection. If the collection does not currently
exist, insert operations will create the collection.


MongoDB provides the following methods to insert documents into a collection:

 db.collection.insertOne() New in version 3.2


 db.collection.insertMany() New in version 3.2

In MongoDB, insert operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
For examples, see Insert Documents.
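For the book collection of this assignment (fields b_id, b_title, b_author, cost, category), an insert could be sketched as follows; on shells older than 3.2, such as the 2.4 shell used in the PROGRAM section, db.book.insert() is used instead (the sample values are illustrative only):

db.book.insertOne({ b_id: 1, b_title: "DBMS", b_author: "Korth", cost: 650, category: "DBMS" })
db.book.insertMany([
  { b_id: 2, b_title: "CN", b_author: "Forouzan", cost: 300, category: "CN" },
  { b_id: 3, b_title: "SEPM", b_author: "Pressman", cost: 450, category: "SEPM" }
])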

Read Operations
Read operations retrieves documents from a collection; i.e. queries a collection for documents.
MongoDB provides the following methods to read documents from a collection:

 db.collection.find()

You can specify query filters or criteria that identify the documents to return.

For examples, see:

 Query Documents
 Query on Embedded/Nested Documents
 Query an Array
 Query an Array of Embedded Documents

Update Operations

Update operations modify existing documents in a collection. MongoDB provides the following
methods to update documents of a collection:

 db.collection.updateOne() New in version 3.2


 db.collection.updateMany() New in version 3.2
 db.collection.replaceOne() New in version 3.2

In MongoDB, update operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.

You can specify criteria, or filters, that identify the documents to update. These filters use the
same syntax as read operations.

For examples, see Update Documents.
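For instance, a minimal sketch (again using the hypothetical books collection):

db.books.updateOne( { "bid": 1 }, { $set: { "cost": 450 } } )                 // modify the first matching document
db.books.updateMany( { "cost": { $lt: 300 } }, { $inc: { "cost": 50 } } )     // modify every matching document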


Delete Operations

Delete operations remove documents from a collection. MongoDB provides the following
methods to delete documents of a collection:

 db.collection.deleteOne() New in version 3.2


 db.collection.deleteMany() New in version 3.2

In MongoDB, delete operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.

You can specify criteria, or filters, that identify the documents to remove. These filters use the
same syntax as read operations.

For examples, see Delete Documents.
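For instance, a minimal sketch (same hypothetical books collection):

db.books.deleteOne( { "bid": 1 } )                 // remove the first matching document
db.books.deleteMany( { "cost": { $gt: 800 } } )    // remove every matching document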

PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod;
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> show dbs;
local 0.078125GB
> use pratiksha
switched to db pratiksha
> show collections;
> db.createcollection("library");
Thu Jan 1 11:25:28.500 TypeError: Property 'createcollection' of object pratiksha is not a
function
> db.createcollection("library");
Thu Jan 1 11:26:03.678 TypeError: Property 'createcollection' of object pratiksha is not a
function
> db.createCollection("library");
{ "ok" : 1 }
> db.library.insert({"bid":1,"name":"c++"});


> db.library.insert({"bid":2,"name":"sepm","author":"pressman"});
> db.library.insert({"bid":3,"name":"cn","author":"frouzen","cost":700});
> db.library.find();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 700 }
> db.library.find().pretty();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 700
}
> db.library.update({"cost":700},{$set :{"cost":500}})
> db.library.find().pretty();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
> db.library.remove({"bid":1});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"


}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
> db.library.count();
2
> db.library.insert({"bid":1,"name":"c++"});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
> db.library.find().sort({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
> db.library.insert({"bid":4,"name":"sepm","author":"pressman","cost":500});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500


}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
> db.library.find().sort({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
> db.library.find({"cost":500})
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
> db.library.find({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }

> db.library.find({$and:[{"name":"sepm"},{"cost":500}]}).pretty()
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
> db.library.insert({"bid":5,"name":"c++","cost":800});
> db.library.insert({"bid":6,"name":"dbms","cost":900});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,


"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
{
"_id" : ObjectId("495c664ebd217c3854101d73"),
"bid" : 5,
"name" : "c++",
"cost" : 800
}
{
"_id" : ObjectId("495c6667bd217c3854101d74"),
"bid" : 6,
"name" : "dbms",
"cost" : 900
}

> db.library.find({$or:[{"cost":800},{"cost":900}]}).pretty()
{
"_id" : ObjectId("495c664ebd217c3854101d73"),
"bid" : 5,
"name" : "c++",
"cost" : 800
}
{
"_id" : ObjectId("495c6667bd217c3854101d74"),
"bid" : 6,
"name" : "dbms",
"cost" : 900
}

> db.library.find({"cost":{$ne:500}})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }


> db.library.find({$nor:[{"cost":500},{"author":"frouzen"}]})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }

> db.library.find({"cost":{$not:{$gt:800}}})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }

> db.library.save({"bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
> db.library.save({"_id":"bid":16,"name":"Java"})^C
bye
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> use DB;
switched to db DB
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }


{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :


"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }

> db.library.save({"_id":"495c6e1dbd217c3854101d75","bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75", "bid" : 16, "name" : "Java" }

> db.library.save({"_id":"495c6e1dbd217c3854101d75"},{"bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }

> db.library.save({"_id":"495c6e1dbd217c3854101d75"},{"bid":75,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }


{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }


{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }

> db.library.save({"_id":ObjectId("495c6e1dbd217c3854101d75"),"bid":75,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }

> db.library.save({"_id":ObjectId("495c6e1dbd217c3854101d71"),"bid":10,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }

> db.library.save({"_id":ObjectId("495c5c9cbd217c3854101d71"),"bid":12,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 12, "name" : "Java" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }


{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }

> db.library.save({"_id":ObjectId("495c5c9cbd217c3854101d71"),"bid":22,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 22, "name" : "Java" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }
>

ASSIGNMENT NO. 10
TITLE :-
Implement aggregation and indexing with suitable example using MongoDB.
1. Create document emp (eid, ename, dept, salary).
2. Find total number of employees in a department.


3. Find sum of salary of employees in a department.


4. Find minimum salary paid by the department.
5. Find maximum salary paid by the department.
6. Find average salary paid by the department.
7. Find first and last record of department.
8. Create index in eid.
9. Create index in ename.
10. Display indexes created.
11. Drop index.

THEORY :-
 Introduction to Indexing-

Typically, indexes are data structures that store a collection's data set in a form that is easy to
traverse. Queries are executed efficiently with the help of indexes in MongoDB.
Indexes help MongoDB find documents that match the query criteria without performing a
collection scan. If a query has an appropriate index, MongoDB uses the index and limits the
number of documents it examines.
Indexes store field values in order of the value. The ordering of the index entries supports
operations such as equality matches and range-based queries. MongoDB sorts and returns the
results by using the sequential order of the indexes.
The indexes of MongoDB are similar to the indexes in other databases. MongoDB defines the
indexes at the collection level for use on any field or subfield of the documents.
In the next section, we will discuss index types.
 Types of Index-

MongoDB supports the following index types for querying.


Default _id: Each MongoDB collection contains an index on the default _id (Read as underscore
id) field. If no value is specified for _id, the language driver or the mongod (read as mongo D)
creates a _id field and provides an ObjectId (read as Object ID) value.
Single Field: For a single-field index and sort operation, the sort order of the index keys does not
matter. MongoDB can traverse the index in either the ascending or descending order.
Compound Index: For multiple fields, MongoDB supports user-defined indexes, such as
compound indexes. The sequential order of fields in a compound index is significant in
MongoDB.
Multikey Index: To index array data, MongoDB uses multikey indexes. When indexing a field
with an array value, MongoDB makes separate index entries for each array element.
Geospatial Index: To query geospatial data, MongoDB uses two types of indexes—2d indexes
(read as two D indexes) and 2d sphere (read as two D sphere) indexes.
Text Indexes: These indexes in MongoDB search for data strings in a collection.
Hashed Indexes: MongoDB supports hash-based sharding and provides hashed indexes. These
index the hashes of the field values.
We will discuss the index types in detail later in the lesson. In the next section, we will discuss
the index properties.


 Properties of Index-

Following are the index properties of MongoDB.


Unique Indexes
The unique property of MongoDB indexes ensures that duplicate values for the indexed field are
rejected. In addition, the unique indexes can be interchanged functionally with other MongoDB
indexes.
Sparse Indexes
This property ensures that queries search document entries having an indexed field. Documents
without indexed fields are skipped during a query. Sparse index and the unique index can be
combined to reject documents with duplicate field values and ignore documents without indexed
keys.
Time to Live or TTL Indexes
These are special indexes in MongoDB used to automatically delete documents from a collection
after a specified duration of time. This is ideal for deleting information, such as machine-
generated data, event logs, and session data that needs to be in the database for a shorter
duration.

 Single Field Index-

MongoDB supports indexes on any document field in a collection. By default, the _id field in all
collections has an index. Moreover, applications and users can add indexes to support queries and
other operations.
MongoDB supports both single-field and multiple-field indexes, based on the operations the
index type performs.

db.items.createIndex( { "item": 1 } )

The command given above is used to create an index on the item field for the items collection.
In the next section, we will discuss how to create single field indexes on embedded documents.

 Single Field Index on Embedded Document-

You can index top-level fields within a document. Similarly, you can create indexes within
embedded document fields.
{ "_id" : 3, "item" : "Book", "available" : true, "soldQty" : 144821, "category" : "NoSQL",
"details" : { "ISDN" : "1234", "publisher" : "XYZ Company" }, "onlineSale" : true }
The structure shown above refers to a document stored in a collection. In the document, the
details field depicts an embedded document that has two embedded fields— ISDN and publisher.

db.items.createIndex( { "details.ISDN": 1 } )

To create an index on the ISDN field of the embedded document called "details", run the query
shown above.
In the next section, we will discuss compound indexes.

 Compound Indexes-

MongoDB supports compound indexes to query multiple fields. A compound index contains
multiple single field indexes separated by a comma.
db.products.createIndex( { "item": 1, "stock": 1 } )

The command shown above is an example of a compound index on two fields.

As an illustration, a compound index on the fields userid and score first organizes documents by
userid and, within each userid, orders the scores in descending order. The sort order of fields in a
compound index is crucial.
In the index created above, documents are first sorted by the item field value and then, within
each item value, they are further sorted by the stock field values.
For a compound index, MongoDB limits the fields to a maximum of 31.

 Index Prefixes

Index prefixes are the beginning subsets of the indexed fields; they always start from the first field.
{ "item": 1, "available": 1, "soldQty": 1 }

For example, consider the compound index given above.


Its index prefixes are { item: 1 } and { item: 1, available: 1 }. MongoDB uses the compound index
even if a find query addresses only the index-prefix fields: it can serve queries on the item field
alone, on the item and available fields, and on the item, available, and soldQty (read as sold
quantity) fields together.
MongoDB cannot efficiently support a query that omits the item field (for example, a query only
on available and soldQty) by using index prefixes, because the item field leads the compound
index and every prefix contains it. Hence, the item field should be included in a find query that is
meant to use this index.
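For illustration, a few hedged queries against an items collection that carries the compound index above:

db.items.find( { item: "Book" } )                        // can use the prefix { item: 1 }
db.items.find( { item: "Book", available: true } )       // can use the prefix { item: 1, available: 1 }
db.items.find( { available: true, soldQty: 100 } )       // omits item, so the compound index cannot be used efficiently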

 Sort Order-

In MongoDB, you can use the sort operations to manage the sort order. You can retrieve
documents based on the sort order in an index.
Following are the characteristics of a sort order:


1. If sorted documents cannot be obtained from an index, the results will get sorted in the
memory.
2. Sort operations executed using an index show better performance than those executed
without using an index.
3. Sort operations performed without an index are terminated after exhausting 32 MB of
memory.
4. Indexes store field references in the ascending or descending sort order.
5. Sort order is not important for single-field indexes because MongoDB can traverse the
index in either direction.
6. Sort order is important for compound indexes because it helps determine if the index can
support a sort operation
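A short sketch of these points, assuming the items collection used earlier:

db.items.createIndex( { cost: -1 } )            // single-field index; can be traversed in either direction
db.items.find().sort( { cost: 1 } )             // sorted results can be read straight from the index
db.items.find().sort( { item: 1, cost: -1 } )   // needs a compound index with a matching (or exactly reversed) order to avoid an in-memory sort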

 Ensure Indexes Fit RAM-

To process query faster, ensure that your indexes fit into your system RAM. This will help the
system avoid reading the indexes from the hard disk.
To confirm the index size, use the db.collection.totalIndexSize() method, which returns the size in
bytes. To ensure the indexes fit in your RAM, you must have more than that amount of RAM
available. In addition, you must have RAM available for the rest of the working set.
For multiple collections, check the size of all indexes across all collections. The indexes and the
working sets both must fit in the RAM simultaneously.
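For example, in the mongo shell (the items collection is an assumption):

db.items.totalIndexSize()       // total size in bytes of all indexes on the collection
db.items.stats().indexSizes     // size of each individual index, also in bytes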
 Multi-Key Indexes-

When indexing a field containing an array value, MongoDB creates separate index entries for
each array component. These multikey indexes in queries match array elements with documents
containing arrays and select them.
You can construct multikey indexes for arrays holding scalar values, such as strings, numbers,
and nested documents.
db.coll.createIndex( { <field>: < 1 or -1 > } )

To create a multikey index, you can use the db.collection.createIndex() (read as D-B dot
collection dot create Index) method given above.
If the indexed field contains an array, MongoDB automatically decides to either create a
multikey index or not create one. You need not specify the multikey type explicitly.
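A minimal sketch, assuming an items collection whose tags field holds an array:

db.items.insert( { item: "Book", tags: [ "database", "nosql" ] } )   // tags is an array field
db.items.createIndex( { tags: 1 } )                                  // MongoDB builds a multikey index automatically
db.items.find( { tags: "nosql" } )                                   // the query matches individual array elements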
 Compound Multi-Key Indexes-

In compound multikey indexes, each indexed document can have maximum one indexed field
with an array value. If more than one field has an array value, you cannot create a compound
multikey index.
{ _id: 1, product_id: [ 1, 2 ], retail_id: [ 100, 200 ], category: "both fields are arrays" }

An example of a document structure is shown above. In this collection, both the product_id (read
as product underscore ID) and retail_id (read as retail underscore ID) fields are arrays. Therefore,
you cannot create a compound multikey index.
Note that a shard key index and a hashed index cannot be a multikey index.
 Hashed Indexes-


Following are the characteristics of a hashing function.

1. The hashing function combines all embedded documents and computes hashes for all
field values.
2. The hashing function does not support multi-key indexes.
3. Hashed indexes support sharding: using a hashed shard key to shard a collection ensures
an even distribution of data.
4. Hashed indexes support equality queries, however, range queries are not supported.

You cannot create a unique or compound index by taking a field whose type is hashed. However,
you can create a hashed and non-hashed index for the same field. MongoDB uses the scalar
index for range queries.
db.items.createIndex( { item: "hashed" } )
You can create a hashed index using the operation given above. This will create a hashed index
for the items collection on the item field.
 TTL Indexes-

TTL indexes automatically delete machine-generated data. You can create a TTL index by
combining the db.collection.createIndex() method with the expireAfterSeconds option on a field
whose value is either a date or an array that contains date values.
db.eventlog.createIndex( { "lastModifiedDate": 1 }, { expireAfterSeconds: 3600 } )
For example, to create a TTL index on the lastModifiedDate (read as last modified date) field of
the eventlog collection, use the operation shown above in the mongo shell.
The TTL background thread runs on both primary and secondary nodes. However, it deletes
documents only from the primary node.
TTL indexes have the following limitations.

1. Compound indexes do not support TTL and ignore the expireAfterSeconds option.


2. The _id field does not support TTL indexes.
3. TTL indexes cannot be created on a capped collection because MongoDB cannot delete
documents from a capped collection.
4. The createIndex() (read as create index) method cannot be used to change the value of
expireAfterSeconds of an existing index.

You cannot create a TTL index for a field if a non-TTL index already exists for the same field. If
you want to change a non-TTL single-field index to a TTL index, first drop the index and
recreate the index with the expireAfterSeconds option.
 Unique Indexes-

To create a unique index, use the db.collection.createIndex() method and set the unique option to
true.
db.items.createIndex( { "item": 1 }, { unique: true } )

For example, to create a unique index on the item field of the items collection, execute the
operation shown above in the mongo shell. By default, unique is false on MongoDB indexes.
If you use the unique constraint on the compound index, then MongoDB will enforce uniqueness
on the combination of all those fields which were the part of the compound key.

Unique Index and Missing Field


If the indexed field in a unique index has no value, the index stores a null value for the
document. Because of this unique constraint, MongoDB permits only one document without the
indexed field.
In case there is more than one document with a valueless or missing indexed field, the index
build process will fail and will display a duplicate key error. To filter these null values and avoid
error, combine the unique constraint with the sparse index.
 Sparse Indexes-

Sparse indexes manage only those documents which have indexed fields, even if that field
contains null values. Sparse index ignores those documents which do not contain any index field.
Non-sparse indexes do not ignore these documents and store null values for them.
To create a sparse index, use the db.collection.createIndex() method and set the sparse option to
true.
db.addresses.createIndex( { "xmpp_id": 1 }, { sparse: true } )
In the example given above, the operation in the mongo shell creates a sparse index on the
xmpp_id field of the addresses collection. If a sparse index would return an incomplete result set,
MongoDB does not use that index unless it is specified in the hint() method.
{ x: { $exists: false } }

For example, a query with the filter given above will not use a sparse index on the x field unless
it receives an explicit hint.
An index that combines both sparse and unique does not allow the collection to include
documents having duplicate field values for a single field. However, it allows multiple
documents that omit the key.
 Text Indexes-

Text indexes in MongoDB help search for text strings in documents of a collection. You can
create a text index for field or fields containing string values or an array of strings.
To access text indexes, trigger a query using the $text (read as text) query operator. When you
create text indexes for multiple fields, specify the individual fields or use the wildcard specifier
($**)
db.collection.createIndex({subject: "text",content: "text"})

To create text indexes on the subject and content fields, perform the query given above. The text
index organizes all strings in the subject and content field, where the field value is either a string
or an array of string elements.
To allow text search for all fields with strings, use the wildcard specifier ($**). This indexes all
fields containing string content.
db.collection.createIndex({ "$**": "text" },{ name: "TextIndex" })

The second example given above indexes any string value available in each field of each
document in a collection and names the indexes as TextIndex.
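Once a text index exists, documents are searched with the $text operator (available in MongoDB releases newer than the 2.4 shell used in the program sections); a minimal sketch:

db.collection.find( { $text: { $search: "mongodb index" } } )    // matches documents containing either term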
 Text Search-


MongoDB supports various languages for text search. The text indexes drop language-specific
stop words, such as "the", "an", "a", and "and", and use simple language-specific suffix stemming.
You can also choose to specify a language for text search.
If you specify the language value as "none", then the text index uses simple tokenization without
any stop word and stemming.
db.customer_info.createIndex({ "item": "text" }, { default_language: "spanish" })
In the query given above, you are creating a text index on the item field of the customer_info
collection with Spanish as the default language.
If the index language is English, text indexes are case-insensitive for all alphabets from A to Z.
The text index and the $text operator supports the following:
1. Two-letter language codes defined in ISO 639-1 (read as I-S-O 6-3-9-1).
2. Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian,
Portuguese, Romanian, Russian, Spanish, Swedish, and Turkish

Note that a compound text index cannot include special index types, such as multi-key or
geospatial Index fields.
 Index Creation-

MongoDB provides several options to create indexes. By default, when indexes are created, all
other operations on a database are blocked.
For example, when indexes on a collection are created, the database becomes unavailable for any
read or write operation until the index creation process completes.
The read or write operations on the database queue and allow the index building process to
complete. Therefore, for index building operations which may consume longer time, you can
consider the background operation and thus make MongoDB available even during the entire
operation.
db.items.createIndex( {item:1},{background: true})
db.items.createIndex({category:1}, {sparse: true, background: true})

The command given above is used for this purpose. By default, the background is false for
building MongoDB indexes.
When MongoDB is creating indexes in the background for a collection, you cannot perform
other administrative operations involving that collection.
For example, you cannot perform tasks such as running repairDatabase (read as repair database),
dropping the collection with db.collection.drop() (read as D-B dot collection dot drop), or running
compact.
If you perform any of these operations, you will receive an error.
The index build process in the background uses an incremental approach and is slower than the
normal ―foreground‖ index build process. The speed of the index build process depends on the
size of the index. If the index size is bigger than the RAM of the system, the process takes more
time than the foreground process.
Building indexes can impact your database performance:

1. If the application includes createIndex()(read as create index) operations and


2. If no index is available for operational concerns.


To avoid any performance issues, you can use the getIndexes()(read as get indexes) method to
ensure that your application checks for the indexes at the startup.
You can also use an equivalent method for your driver and ensure it terminates an operation if
the proper indexes do not exist. When building indexes, use separate application codes and
designated maintenance windows.
 Index Creation on Replica Set-

Typically, background index operations on a secondary replica set begin after the index building
process completes in the primary.
If the index build process is running in the background on the primary, the same will happen on
the secondary nodes as well.
If you want to build large indexes on secondaries, you can build the index by restarting one
secondary at a time in a standalone mode.
After the index build is complete, restart as a member of the replica set, allow it to catch up with
the other members of the set, and then build the index on the next secondary. When all the
secondaries have the new index, step down the primary, restart it as a standalone, and build the
index on the former primary.
To ensure that a secondary can catch up with the primary, the time taken to build the index on the
secondary must fit within the oplog window. To catch up with the primary node, index creation on
secondary nodes always happens in the foreground while the node is in the "recovering" mode.

db.products.createIndex( { item: 1, quantity: -1 } , { name: "inventory" } )


Instead of using the default name, you can specify a name for the index by using the command
given above. This creates a compound index on the item and quantity fields of the products
collection and names it "inventory".
 Remove Indexes-

You can use the following methods to remove indexes.


dropIndex()(read as drop index) method: This removes an index from a collection.
db.collection.dropIndex() method: This removes an index.
db.accounts.dropIndex( { "tax-id": 1 } )
For example, the operation given above removes the ascending index on the tax-id field in the
accounts collection.
db.collection.dropIndexes()


To remove all indexes barring the _id index from a collection, use the second operation provided
above.
 Modify Indexes-

To modify an index, first, drop the index and then recreate it. Perform the following steps to
modify an index.
Drop Index: Execute the query given below to return a document showing the operation status.
db.orders.dropIndex({ "cust_id" : 1, "ord_date" : -1, "items" : 1 })

Recreate the Index: Execute the query given below to return a document showing the status of
the results.
db.orders.createIndex({ "cust_id" : 1, "ord_date" : -1, "items" : -1 })

 Rebuild Indexes-

In addition to modifying indexes, you can also rebuild them. To rebuild all indexes of a
collection, use the db.collection.reIndex() method. This will drop all indexes including _id and
rebuild all indexes in a single operation. The operation takes the form db.items.reIndex().
To view the indexing process status, type the db.currentOp() (read as D B dot Current operation)
command in the mongo shell. The message field will show the percentage of the build
completion.
To abort an ongoing index build process, use the db.killOp()(read as D B dot kill operation)
method in the mongo shell. For index builds, the db.killOp()may occur after most of the index
build operation has completed.
Note that a replicated index built on the secondary replica set cannot be aborted.
 Listing Indexes-

You can list all indexes of a collection and a database. You can get a list of all indexes of a
collection by using the db.collection.getIndexes()or a similar method for your drivers.
For example, to view all indexes on the items collection, use the db.items.getIndexes() method.
db.getCollectionNames().forEach(function(collection) {
indexes = db[collection].getIndexes();
print("Indexes for " + collection + ":");
printjson(indexes);
});
To list all indexes of collections, you can use the operation in the mongo shell as shown above.
 Measure Index Use-

Typically, query performance indicates an index usage. MongoDB provides a number of tools to
study query operations and observe index use for your database.
The explain() method can be used to print information about query execution. The explain
method returns a document that explains the process and indexes used to return a query. This
helps to optimize a query.
Using the db.collection.explain() or the cursor.explain() method helps measure index usages.
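For example, a hedged check against the items collection:

db.items.find( { item: "Book" } ).explain("executionStats")    // reports the winning plan, the index used, and the documents examined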
 Control Index Use-


In case you want to force MongoDB to use particular indexes for querying documents, then you
need to specify the index with the hint() method.
The hint method can be appended in the find() method as well.
db.items.find({ item: "Book", available: true }).hint({ item: 1 })
Consider the example given above. This command queries a document whose item field value is
―Book‖ and available field is ―true‖. Here, MongoDB‘s query planner is directed to use the index
created on the item field.
To view the execution statistics for a specific index, use the explain method in the find
command.
db.items.find({ item: "Book", available: true }).hint({ item: 1 }).explain("executionStats")
db.items.explain("executionStats").find({ item: "Book", available: true }).hint({ item: 1 })
For example, consider the queries given above.
If you want to prevent MongoDB from using any index, specify the $natural (read as natural)
operator to the hint() method.
db.items.find({ item: "Book", available: true }).hint({ $natural: 1 }).explain("executionStats")
For example, use the following query given above.
 Index Use Reporting-

MongoDB provides different metrics to report index use and operation. You can consider these
metrics when analyzing index use for your database. These metrics are printed using the
following commands.
serverStatus: serverStatus prints the two metrics.
1. scanned: Displays the documents that MongoDB scans in the index to carry out the
operation. If the number of the scanned document is higher than the number of returned
documents, this indicates that the database has scanned many objects to find the target
object. In such cases, consider creating an index to improve this.
2. scanAndOrder: A boolean that is true when a query cannot use the order of documents
in the index for returning sorted results. In that case, MongoDB must sort the documents
after it receives them from a cursor. If scanAndOrder is false, MongoDB can use the
order of the documents in an index to return the sorted results.

collStats: collStats prints two metrics.
1. totalIndexSize: Returns the index size in bytes.
2. indexSizes: Reports the size of the data allocated for each index.
dbStats: dbStats has the following two metrics.
1. dbStats.indexes: Contains a count of the total number of indexes across all collections in
the database.
2. dbStats.indexSize: The total size in bytes of all indexes created in this database.

 Geospatial Index-


With the increased usage of handheld devices, geospatial queries are becoming increasingly
frequent for finding the nearest data points for a given location.
MongoDB provides geospatial indexes for coordinating such queries. Suppose you want to find
the nearest coffee shop from your current location. You need to create a special index to
efficiently perform such queries because it needs to search in two dimensions— longitude and
latitude.
A geospatial index is created using the createIndex function. It passes "2d" or ―2dsphere‖ as a
value instead of 1 or -1. To query geospatial data, you first need to create a geospatial index.
db.collection.createIndex( { <location field>: "2dsphere" } )

In the index specification document for the db.collection.createIndex() method, as shown above,
specify the location field as the index key and specify the string literal "2dsphere" as the value.
A compound index can include a 2dsphere index key in combination with non-geospatial index
keys.
 MongoDB’s Geospatial Query Operators-

The geospatial query operators in MongoDB lets you perform the following queries.
Inclusion Queries
1. Return the locations included within a specified polygon.
2. Use the operator $geoWithin. The 2d and 2dsphere indexes support this query.
3. Although MongoDB does not require an index to perform an inclusion query, they can
enhance the query performance.

Intersection Queries
1. Return locations intersecting with a specified geometry.
2. Use the $geoIntersects operator and return the data on a spherical surface.

Proximity Queries
1. Return various points closer to a specified point.
2. Use the $near operator that requires a 2d or 2dsphere index.

$geoWithin Operator-
The $geoWithin (read as geo within) operator is used to query location data found within a
GeoJSON (read as geo J-SON) polygon. To get a response, the location data needs to be stored
in the GeoJSON format.
db.<collection>.find( { <location field>: { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ <coordinates> ] } } } } )

You can use the syntax given above to use the $geoWithin operator.
db.places.find( { loc :{ $geoWithin : { $geometry :{ type : "Polygon" ,
coordinates :[ [[ 0 , 0 ] ,[ 3 , 6 ] ,[ 6 , 1 ] ,[ 0 , 0 ]] ]} } } } )

The example given above selects all points and shapes that exist entirely within a GeoJSON
polygon.
 Proximity Queries in MongoDB-


Proximity queries return the points closest to the specified point. These queries sort the results by
its proximity to the specified point.
You need to create a 2dsphere index in order to perform a proximity query on the GeoJSON data
points. To query the data, you can either use the $near or $geonear (read as geo near) operator.
db.<collection>.find( { <location field>: { $near: { $geometry: { type: "Point", coordinates: [ <longitude>, <latitude> ] }, $maxDistance: <distance in meters> } } } )
The first syntax given above is an example of the $near operator.
db.runCommand( { geoNear: <collection>, near: { type: "Point", coordinates: [ <longitude>, <latitude> ] }, spherical: true } )
The geoNear command uses the second syntax given above. This command offers additional
options and returns more information than the $near operator.

Aggregation :-
Operations that process data sets and return calculated results are called aggregations. MongoDB
provides data aggregations that examine data sets and perform calculations on them. Aggregation
is run on the mongod instance to simplify application codes and limit resource requirements.
Similar to queries, aggregation operations in MongoDB use collections of documents as an input
and return results in the form of one or more documents The aggregation framework in
MongoDB is based on data processing pipelines.
Documents pass through multi-stage pipelines and get transformed into an aggregated result. The
most basic pipeline stage in the aggregation framework provides filters that function like queries.
It also provides document transformations that modify the output document.
The pipeline operations group and sort documents by defined field or fields. In addition, they
perform aggregation on arrays.
Pipeline stages can use operators to perform tasks such as calculate the average or concatenate a
string. The pipeline uses native operations within MongoDB to allow efficient data aggregation
and is the favored method for data aggregation.
With the help of the aggregate function, you can perform complex aggregation operations, such
as finding out the total transaction amount for each customer.
For example, "orders" is a collection that has three fields: cust_id, amount, and status. In the
$match (read as match) stage, you keep only those documents in which the status field value is
"A". In the $group stage, you aggregate the amount field for each cust_id.
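A minimal sketch of this pipeline in the mongo shell (collection and field names as described above):

db.orders.aggregate([
   { $match: { status: "A" } },                                    // keep only documents whose status is "A"
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }     // total transaction amount per customer
])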
 Pipeline Operators and Indexes :-

The aggregate command in MongoDB functions on a single collection and logically passes the
collection through the aggregation pipeline. You can optimize the operation and avoid scanning
the entire collection by using the $match, $limit, and $skip stages.
You may require only a subset of data from a collection to perform an aggregation operation.
Therefore, use the $match, $limit, and $skip stages to filter the documents. When placed at the
beginning of a pipeline, the $match operation scans and selects only the matching documents in a
collection.
Placing a $match followed by a $sort at the beginning of the pipeline is logically equivalent to a
single query with a sort and can use the collection's indexes. Therefore, it is recommended to
use $match operators at the beginning of the pipeline.
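For instance, a sketch that filters and limits before the heavier stages (using the orders collection described above; the sort and limit values are illustrative assumptions):

db.orders.aggregate([
   { $match: { status: "A" } },     // filter first so later stages handle fewer documents
   { $sort: { amount: -1 } },
   { $limit: 5 }                    // keep only the five largest active orders
])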
 Aggregate Pipeline Stages :-


Pipeline stages appear in an array. Documents are passed through the pipeline stages in a proper
order one after the other. Barring $out and $geoNear, all stages of the pipeline can appear
multiple times.
The db.collection.aggregate()(read as DB dot collection dot aggregate) method provides access
to the aggregation pipeline and returns a cursor and result sets of any size.
The various pipeline stages are as follows.
1. $project: This stage adds new fields or removes existing fields and thus restructure each
document in the stream. This stage returns one output document for each input document
provided.
2. $match: It filters the document stream and allows only matching documents to pass into
the next stage without any modification. $match uses the standard MongoDB queries. For
each input document, it returns either one output document if there is a match or zero
documents when there is no match.
3. $group: This stage groups documents based on the specified identifier expression and
applies logic known as accumulator expression to compute the output document.
4. $sort: This stage rearranges the order of the document stream using specified sort keys.
The documents remain unaltered even though the order changes. This stage provides one
output document for each input document.
5. $skip: This stage skips the first n documents where n is the specified skip number. It
passes the remaining documents without any modifications to the pipeline. For each input
document, it returns either zero documents for the first n documents or one document.
6. $limit: It passes the first n number of documents without any modifications to the
pipeline. For each input document, this stage returns either one document for the first n
documents or zero documents after the first n documents.
7. $unwind: It deconstructs an array field in the input documents to return a document for
each element. Each output document replaces the array with an element value. For each
input document, it returns n documents where n is the number of array elements and can
be zero for an empty array.
 Aggregation Example :-

The aggregation operation given above returns all states with the total population greater than 10
million.
db.zipcodes.aggregate( [{ $group: { _id: "$state", totalPop: { $sum: "$pop" } } },
{ $match: { totalPop: { $gte: 10*1000*1000 } } }] )
This example depicts that the aggregation pipeline contains the $group stage followed by the
$match stage.
In this operation, the $group stage does three things:
1. Groups the documents of the zip code collection under the state field
2. Calculates thetotalPop (read as the total population) field for each state
3. Returns an output document for each unique state.


The new per-state documents contain two fields: the _id field and the totalPop field. In this
pipeline, the $group stage applies the $sum accumulator to the pop field of the grouped
documents, and the $match stage then filters the grouped results.
db.users.aggregate([{ $project :{month_joined : { $month : "$joined" },name : "$_id",_id :
0}},{ $sort : { month_joined : 1 } } ])

The second aggregation operation, shown above, returns usernames sorted by the month of their
joining. This kind of aggregation could help generate membership renewal notices.

 MapReduce :-

MapReduce is a data processing model used for aggregation. To perform MapReduce operations,
MongoDB provides the MapReduce database command.
A MapReduce operation consists of two phases.
Map stage: Documents are processed and one or more objects are produced for each input
document.
Reduce stage: Outputs of the map operation are combined. Optionally, there can be an
additional stage to make final modifications to the result.
Similar to other aggregation operations, MapReduce can define a query condition to select the
input documents, and sort and limit the results.
The MapReduce function in MongoDB can be written as JavaScript codes.
The MapReduce operations:-
1. Accept a set of documents from a collection of inputs, performs sort and limit functions,
and then start the map stage. At the end of a MapReduce operation, the result is generated
as documents which can be saved in the collection.
2. Associate values to a key by using the custom JavaScript functions. If a key contains
more than one mapped value, then this operation converts them into a single object, such
as an array.

The use of custom JavaScript functions makes the MapReduce operations flexible. For example,
the map function can generate more than one key and value when processing documents.
Additionally, the MapReduce operations use the custom JavaScript function to alter the results at
the conclusion of the map and reduce operations may perform further calculations.
If a collection is sharded, then you can use MapReduce to perform many complex aggregation
operations.
The orders collection has three fields—cust_id, amount, and status. If you want to find the sum of
the total amount for each customer, you can use the MapReduce framework. In the map stage,
each document emits cust_id as the key and amount as the value.
The values are further processed by the reduce stage, in which a cust_id and an array of amounts
are passed as input to each reducer. The reducer then totals the amounts and generates cust_id as
the key and the order total as the value.
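A minimal sketch of this computation in the mongo shell (the orders collection and its fields are as described above; the output collection name order_totals is an assumption):

db.orders.mapReduce(
   function() { emit(this.cust_id, this.amount); },           // map: key = cust_id, value = amount
   function(key, values) { return Array.sum(values); },       // reduce: total the amounts for each customer
   { query: { status: "A" }, out: "order_totals" }            // consider only active orders; write results to order_totals
)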
 Aggregation Operations:-

Aggregations are operations that manipulate data and return a computed result based on the input
document and a specific procedure. MongoDB performs aggregation operations on data sets.


Aggregation operations have limited scope compared to the aggregation pipeline and
MapReduce functions.
Aggregation operations provide the following semantics for common data processing options.
1. Count

Count MongoDB returns all of the documents matching a query. The count command along
with the two methods, count() and cursor.count() provides access to total counts in the
mongo shell. The db.customer_info.count() command helps count all documents in the
customer_info collection.
2. Distinct

The distinct operation searches for documents matching a query and returns all unique values
for a field in the matched document. The distinct command and db.collection.distinct()
method execute this operation in the mongo shell.
db.customer_info.distinct( "customer_name" )
The syntax given above is an example of a distinct operation.
Group operations accept as input the set of documents that match a given query, apply the
operation, and then return an array of documents with the computed results.
A group does not support sharded collection data. In addition, the results of the group operation
must not exceed 16 megabytes.
The group operation sketched below groups documents by the field 'a', where 'a' is less than three,
and sums the count field for each group.
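A hedged sketch of such an operation, assuming a hypothetical records collection with fields a and count (the group helper is available in the MongoDB 2.4 shell used in this manual):

db.records.group({
   key: { a: 1 },                                                    // group by the field a
   cond: { a: { $lt: 3 } },                                          // only documents where a is less than three
   reduce: function(curr, result) { result.count += curr.count; },   // sum the count field within each group
   initial: { count: 0 }
})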

PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> show db
Tue Jan 1 00:14:49.606 don't know how to show [db] at src/mongo/shell/utils.js:847
> use local
switched to db local
> db.createCollection("emp")
{ "ok" : 1 }
> db.emp.insert({"eid":1,"ename":"Sonal","dept":"comp","project":"DBMS","cost":5000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"DBMS","cost":10000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"DBMS","cost":7000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"JAVA","cost":6000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"JAVA","cost":4000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"Testing","cost":8000});

> db.emp.find()

{ "_id" : ObjectId("50e1df13d651aeec3581f7b8"), "eid" : 1, "ename" : "Sonal", "dept" : "comp",


"project" : "DBMS", "cost" : 5000 }

108
Database Management System Lab Manual

{ "_id" : ObjectId("50e1ed8cd651aeec3581f7bf"), "eid" : 2, "ename" : "Komal", "dept" :


"comp", "project" : "DBMS", "cost" : 10000 }
{ "_id" : ObjectId("50e1eda6d651aeec3581f7c0"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "DBMS", "cost" : 7000 }
{ "_id" : ObjectId("50e1edd6d651aeec3581f7c1"), "eid" : 2, "ename" : "Komal", "dept" :
"comp", "project" : "JAVA", "cost" : 6000 }
{ "_id" : ObjectId("50e1efc2d651aeec3581f7c6"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "JAVA", "cost" : 4000 }
{ "_id" : ObjectId("50e1efd5d651aeec3581f7c7"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "Testing", "cost" : 8000 }

> db.emp.aggregate([{$group : {"_id":"$eid","No of Employees" : {$sum : 1}}}])


{
"result" : [
{
"_id" : 3,
"No of Employees" : 3
},
{
"_id" : 2,
"No of Employees" : 2
},
{
"_id" : 1,
"No of Employees" : 1
}
],
"ok" : 1
}

> db.emp.aggregate([{$group : {"_id":"$project","Total cost of project" : {$sum : "$cost"}}}])


{
"result" : [
{
"_id" : "Testing",
"Total cost of project" : 8000
},
{
"_id" : "JAVA",
"Total cost of project" : 10000
},
{
"_id" : "DBMS",
"Total cost of project" : 22000
}
],


"ok" : 1
}

> db.emp.aggregate([{$group : {"_id":"$project","First project cost" : {$first : "$cost"}}}])


{
"result" : [
{
"_id" : "Testing",
"First project cost" : 8000
},
{
"_id" : "JAVA",
"First project cost" : 6000
},
{
"_id" : "DBMS",
"First project cost" : 5000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","First project cost" : {$last : "$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"First project cost" : 8000
},
{
"_id" : "JAVA",
"First project cost" : 4000
},
{
"_id" : "DBMS",
"First project cost" : 7000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","Minimum cost of project" : {$min :
"$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"Minimum cost of project" : 8000


},
{
"_id" : "JAVA",
"Minimum cost of project" : 4000
},
{
"_id" : "DBMS",
"Minimum cost of project" : 5000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","Maximum cost of project" : {$max :
"$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"Maximum cost of project" : 8000
},
{
"_id" : "JAVA",
"Maximum cost of project" : 6000
},
{
"_id" : "DBMS",
"Maximum cost of project" : 10000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","Average cost of project" : {$avg :
"$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"Average cost of project" : 8000
},
{
"_id" : "JAVA",
"Average cost of project" : 5000
},
{
"_id" : "DBMS",
"Average cost of project" : 7333.333333333333


}
],
"ok" : 1
}
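As a further illustration (not part of the original transcript), aggregation stages can also be chained into a pipeline. A minimal sketch on the same emp collection that keeps only DBMS documents, totals the cost per employee and sorts the totals is:

> db.emp.aggregate([
   { $match : { "project" : "DBMS" } },                                   // keep only DBMS documents
   { $group : { "_id" : "$ename", "DBMS cost" : { $sum : "$cost" } } },   // total cost per employee
   { $sort : { "DBMS cost" : -1 } }                                       // highest total first
])

With the six sample documents inserted above, this should report Komal (10000), Priya (7000) and Sonal (5000).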

> db.emp.createIndex({"eid":1})
> db.emp.createIndex({"ename":1})
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"eid" : 1
},
"ns" : "local.emp",
"name" : "eid_1"
},
{
"v" : 1,
"key" : {
"ename" : 1
},
"ns" : "local.emp",
"name" : "ename_1"
}
]

> db.emp.dropIndex("eid_1");
{ "nIndexesWas" : 3, "ok" : 1 }
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"


},
{
"v" : 1,
"key" : {
"ename" : 1
},
"ns" : "local.emp",
"name" : "ename_1"
}
]

> db.emp.dropIndex("ename_1");
{ "nIndexesWas" : 2, "ok" : 1 }
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"
}
]
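createIndex() also accepts more than one field. The following is a small illustrative sketch, not part of the transcript above; MongoDB derives the default index name dept_1_project_1 from the key pattern:

> db.emp.createIndex({ "dept" : 1, "project" : 1 })    // ascending compound index on dept and project
> db.emp.getIndexes()                                  // now lists "_id_" and "dept_1_project_1"
> db.emp.dropIndex("dept_1_project_1")                 // drop it again to return to the state shown above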

map reduce

> db.emp.mapReduce(function(){emit(this.ename,this.cost);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 16000 }
{ "_id" : "Priya", "value" : 19000 }
{ "_id" : "Sonal", "value" : 5000 }
> db.emp.mapReduce(function(){emit(this.ename,1);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 2 }
{ "_id" : "Priya", "value" : 3 }
{ "_id" : "Sonal", "value" : 1 }

ASSIGNMENT NO. 11
TITLE :-
Implement Map reduce operation with suitable example using MongoDB.


1. Create collection emp (eid, ename, dept, salary).


2. Write map reduce query to count employee in department.
3. Write map reduce query to sum salary of employees in a department.
THEORY :-
As per the MongoDB documentation, Map-reduce is a data processing paradigm for condensing
large volumes of data into useful aggregated results. MongoDB uses mapReduce command for
map-reduce operations. MapReduce is generally used for processing large data sets.
 MapReduce Command :-

Following is the syntax of the basic mapReduce command −

>db.collection.mapReduce(
   function(){ emit(key,value); },                  // map function
   function(key,values){ return reduceFunction },   // reduce function
   {
      out: collection,
      query: document,
      sort: document,
      limit: number
   }
)

The map-reduce command first queries the collection, then maps the resulting documents to emit key-
value pairs, which are then reduced on the keys that have multiple values.
In the above syntax −
1. map is a JavaScript function that maps a value with a key and emits a key-value pair
2. reduce is a JavaScript function that reduces or groups all the documents having the same
key
3. out specifies the location of the map-reduce query result
4. query specifies the optional selection criteria for selecting documents
5. sort specifies the optional sort criteria
6. limit specifies the optional maximum number of documents to be returned

 Using MapReduce :-

Consider the following document structure storing user posts. The document stores the user_name of
the user and the status of the post.

{
   "post_text" : "welcome…",
   "user_name" : "mark",
   "status" : "active"
}

Now, we will use a mapReduce function on our posts collection to select all the active posts,
group them on the basis of user_name and then count the number of posts by each user using the
following code −

>db.posts.mapReduce(
   function(){ emit(this.user_name,1); },                // map function
   function(key, values){ return Array.sum(values) },    // reduce function
   {
      query:{status:"active"},
      out:"post_total"
   }
)

The above mapReduce query outputs the following result −


{
"result" : "post_total",
"timeMillis" : 9,
"counts" : {
"input" : 4,
"emit" : 4,
"reduce" : 2,
"output" : 2
},
"ok" : 1,
}
The result shows that a total of 4 documents matched the query (status:"active"), the map
function emitted 4 documents with key-value pairs and finally the reduce function grouped
mapped documents having the same keys into 2.
To see the result of this mapReduce query, use the find operator −

>db.posts.mapReduce(
   function(){ emit(this.user_name,1); },                // map function
   function(key, values){ return Array.sum(values) },    // reduce function
   {
      query:{status:"active"},
      out:"post_total"
   }
).find()

The above query gives the following result which indicates that both users tom and mark have
two posts in active states −

{"_id":"tom","value":2}

{"_id":"mark","value":2}

In a similar manner, MapReduce queries can be used to construct large, complex aggregation
queries. Because the map and reduce steps are custom JavaScript functions, MapReduce is very flexible
and powerful.
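For instance, the optional query, sort and limit parameters described earlier can be combined in a single call. The following is only a sketch on the same posts collection; the output collection name post_total_limited is illustrative, and the sort option assumes an index on user_name exists, since mapReduce requires the sort key to be indexed.

>db.posts.mapReduce(
   function(){ emit(this.user_name, 1); },               // map: one count per post
   function(key, values){ return Array.sum(values) },    // reduce: add up the counts per user
   {
      query : { status : "active" },                     // consider only active posts
      sort  : { user_name : 1 },                         // feed documents to map in user_name order
      limit : 100,                                       // process at most 100 documents
      out   : "post_total_limited"
   }
).find()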

PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> show db
Tue Jan 1 00:14:49.606 don't know how to show [db] at src/mongo/shell/utils.js:847
> use local
switched to db local

> db.createCollection("emp")
{ "ok" : 1 }
> db.emp.insert({"eid":1,"ename":"Sonal","dept":"comp","project":"DBMS","cost":5000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"DBMS","cost":10000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"DBMS","cost":7000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"JAVA","cost":6000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"JAVA","cost":4000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"Testing","cost":8000});

> db.emp.find()

{ "_id" : ObjectId("50e1df13d651aeec3581f7b8"), "eid" : 1, "ename" : "Sonal", "dept" : "comp",
"project" : "DBMS", "cost" : 5000 }
{ "_id" : ObjectId("50e1ed8cd651aeec3581f7bf"), "eid" : 2, "ename" : "Komal", "dept" :
"comp", "project" : "DBMS", "cost" : 10000 }
{ "_id" : ObjectId("50e1eda6d651aeec3581f7c0"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "DBMS", "cost" : 7000 }
{ "_id" : ObjectId("50e1edd6d651aeec3581f7c1"), "eid" : 2, "ename" : "Komal", "dept" :
"comp", "project" : "JAVA", "cost" : 6000 }
{ "_id" : ObjectId("50e1efc2d651aeec3581f7c6"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "JAVA", "cost" : 4000 }
{ "_id" : ObjectId("50e1efd5d651aeec3581f7c7"), "eid" : 3, "ename" : "Priya", "dept" : "comp",
"project" : "Testing", "cost" : 8000 }

> db.emp.mapReduce(function(){emit(this.ename,this.cost);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 16000 }
{ "_id" : "Priya", "value" : 19000 }
{ "_id" : "Sonal", "value" : 5000 }

> db.emp.mapReduce(function(){emit(this.ename,1);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 2 }
{ "_id" : "Priya", "value" : 3 }
{ "_id" : "Sonal", "value" : 1 }

ASSIGNMENT NO. 12
TITLE :-
Write a program to implement MongoDB database connectivity with PHP/python/Java.
Implement Database navigation operations ( add, delete, edit etc. ) using ODBC/JDBC.


THEORY :-
 Installation-

Before you start using MongoDB in your Java programs, you need to make sure that the MongoDB
Java driver and Java are set up on the machine. Now, let us check how to set up the
MongoDB Java driver.
1. Download mongo.jar (the MongoDB Java driver). Make sure to download the latest release.

2. Include mongo.jar in your classpath (a compile/run sketch is shown below).
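A minimal compile-and-run sketch, assuming mongo.jar has been downloaded into the working directory on the lab machine (adjust the path if the driver jar is kept elsewhere):

$ javac -cp .:mongo.jar ConnectToDB.java
$ java -cp .:mongo.jar ConnectToDB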


 Connect to Database-

To connect to a database, you need to specify the database name; if the database doesn't exist,
MongoDB creates it automatically.
Following is the code snippet to connect to the database −

import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class ConnectToDB {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");
      System.out.println("Credentials ::"+ credential);
   }
}

Now, let's compile and run the above program to create our database myDb as shown below.
$javac ConnectToDB.java
$java ConnectToDB
On executing, the above program gives you the following output.
Connected to the database successfully
Credentials ::MongoCredential{
mechanism = null,
userName = 'sampleUser',
source = 'myDb',
password = <hidden>,
mechanismProperties = {}
}

 Create a Collection-
To create a collection, the createCollection() method of the com.mongodb.client.MongoDatabase class is used.
Following is the code snippet to create a collection −

import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class CreatingCollection {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Creating a collection
      database.createCollection("sampleCollection");
      System.out.println("Collection created successfully");
   }
}

On compiling, the above program gives you the following result −


Connected to the database successfully
Collection created successfully

 Getting/Selecting a Collection-

To get/select a collection from the database, the getCollection() method of the com.mongodb.client.MongoDatabase class is used.
Following is the program to get/select a collection −

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class selectingCollection {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Creating a collection
      System.out.println("Collection created successfully");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("myCollection");
      System.out.println("Collection myCollection selected successfully");
   }
}

On compiling, the above program gives you the following result −


Connected to the database successfully
Collection created successfully
Collection myCollection selected successfully

 Insert a Document-

To insert a document into MongoDB, the insertOne() method of the com.mongodb.client.MongoCollection class is used.
Following is the code snippet to insert a document −

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class InsertingDocument {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("sampleCollection");
      System.out.println("Collection sampleCollection selected successfully");

      Document document = new Document("title", "MongoDB")
         .append("id", 1)
         .append("description", "database")
         .append("likes", 100)
         .append("url", "http://www.tutorialspoint.com/mongodb/")
         .append("by", "tutorials point");

      collection.insertOne(document);
      System.out.println("Document inserted successfully");
   }
}

On compiling, the above program gives you the following result −


Connected to the database successfully
Collection sampleCollection selected successfully
Document inserted successfully

 Retrieve All Documents-


To select all documents from the collection, the find() method of the com.mongodb.client.MongoCollection class is used.
This method returns a cursor, so you need to iterate over it.
Following is the program to select all documents −

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class RetrievingAllDocuments {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("sampleCollection");
      System.out.println("Collection sampleCollection selected successfully");

      // Getting the iterable object
      FindIterable<Document> iterDoc = collection.find();
      int i = 1;

      // Getting the iterator
      Iterator it = iterDoc.iterator();
      while (it.hasNext()) {
         System.out.println(it.next());
         i++;
      }
   }
}

On compiling, the above program gives you the following result −


Document{{
_id = 5967745223993a32646baab8,
title = MongoDB,
id = 1,
description = database,
likes = 100,
url = http://www.tutorialspoint.com/mongodb/, by = tutorials point
}}
Document{{
_id = 7452239959673a32646baab8,


title = RethinkDB,
id = 2,
description = database,
likes = 200,
url = http://www.tutorialspoint.com/rethinkdb/, by = tutorials point
}}

 Update Document-

To update a document in the collection, the updateOne() method of the com.mongodb.client.MongoCollection class is used.
Following is the program to update the first matching document −

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class UpdatingDocuments {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("sampleCollection");
      System.out.println("Collection myCollection selected successfully");

      collection.updateOne(Filters.eq("id", 1), Updates.set("likes", 150));
      System.out.println("Document update successfully...");

      // Retrieving the documents after updation
      // Getting the iterable object
      FindIterable<Document> iterDoc = collection.find();
      int i = 1;

      // Getting the iterator
      Iterator it = iterDoc.iterator();
      while (it.hasNext()) {
         System.out.println(it.next());
         i++;
      }
   }
}

On compiling, the above program gives you the following result −


Document update successfully...
Document {{
_id = 5967745223993a32646baab8,
title = MongoDB,
id = 1,
description = database,
likes = 150,
url = http://www.tutorialspoint.com/mongodb/, by = tutorials point
}}

 Delete a Document-

To delete a document from the collection, you need to use the deleteOne() method of
the com.mongodb.client.MongoCollection class.
Following is the program to delete a document −

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class DeletingDocuments {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("sampleCollection");
      System.out.println("Collection sampleCollection selected successfully");

      // Deleting the documents
      collection.deleteOne(Filters.eq("id", 1));
      System.out.println("Document deleted successfully...");

      // Retrieving the documents after updation
      // Getting the iterable object
      FindIterable<Document> iterDoc = collection.find();
      int i = 1;

      // Getting the iterator
      Iterator it = iterDoc.iterator();
      while (it.hasNext()) {
         System.out.println("Inserted Document: "+i);
         System.out.println(it.next());
         i++;
      }
   }
}

On compiling, the above program gives you the following result −


Connected to the database successfully
Collection sampleCollection selected successfully
Document deleted successfully...

 Dropping a Collection-

To drop a collection from a database, you need to use the drop() method of
the com.mongodb.client.MongoCollection class.
Following is the program to drop a collection −

import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class DropingCollection {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");

      // Creating a collection
      System.out.println("Collections created successfully");

      // Retrieving a collection
      MongoCollection<Document> collection = database.getCollection("sampleCollection");

      // Dropping a Collection
      collection.drop();
      System.out.println("Collection dropped successfully");
   }
}


On compiling, the above program gives you the following result −


Connected to the database successfully
Collection sampleCollection selected successfully
Collection dropped successfully

 Listing All the Collections-

To list all the collections in a database, you need to use the listCollectionNames() method of
the com.mongodb.client.MongoDatabase class.
Following is the program to list all the collections of a database −

import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;

public class ListOfCollection {

   public static void main( String args[] ) {

      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );

      // Creating Credentials
      MongoCredential credential;
      credential = MongoCredential.createCredential("sampleUser", "myDb",
         "password".toCharArray());
      System.out.println("Connected to the database successfully");

      // Accessing the database
      MongoDatabase database = mongo.getDatabase("myDb");
      System.out.println("Collection created successfully");

      for (String name : database.listCollectionNames()) {
         System.out.println(name);
      }
   }
}

On compiling, the above program gives you the following result −


Connected to the database successfully
Collection created successfully
myCollection
myCollection1
myCollection5
