Lab manual-DBMS
ASSIGNMENT NO. 1
TITLE :-
ER Modeling and Normalization.
THEORY :-
ER Modeling –
Database Management System Lab Manual
Normal form: Condition using keys and FDs of a relation to certify whether a relation
schema is in a particular normal form.
Normalization is carried out in practice so that the resulting designs are of high quality
and meet the desirable properties
The practical utility of these normal forms becomes questionable when the constraints
on which they are based are hard to understand or to detect
Students are expected to refer the notations and prepare ER diagram and map the ER diagram
into tables. Apply normalization technique for the problem statement.
ASSIGNMENT NO. 2
Part A-DDL
TITLE :-
Design and develop SQL DDL statements which demonstrate the use of SQL objects such as
Table, View, Index, Sequence, Synonym.
Create table with Primary key and Foreign key.
Book (Bid, Title, Author, Cost) Primary key : Bid
Stud (Sid, Name, Add, DOB, Mob) Primary key : Sid
Trans (Sid, Bid, t_date, Fine, Status) Foreign key : Sid, Bid
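One possible DDL for this schema is sketched below; the column types are assumptions, since the problem statement specifies only the attribute names and keys:

```sql
CREATE TABLE Book (
  Bid    INT PRIMARY KEY,
  Title  VARCHAR(100),
  Author VARCHAR(60),
  Cost   DECIMAL(8,2)
);

CREATE TABLE Stud (
  Sid   INT PRIMARY KEY,
  Name  VARCHAR(60),
  `Add` VARCHAR(100),   -- ADD is a reserved word in MySQL, so it is backquoted
  DOB   DATE,
  Mob   VARCHAR(15)
);

CREATE TABLE Trans (
  Sid    INT,
  Bid    INT,
  t_date DATE,
  Fine   DECIMAL(6,2),
  Status CHAR(1),
  FOREIGN KEY (Sid) REFERENCES Stud(Sid),
  FOREIGN KEY (Bid) REFERENCES Book(Bid)
);
```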
1) Display title of the book issued to student with Sid 5.
2) Display cost of book issued to Sid 5.
3) Find total cost of books.
4) Find count of books issued to Sid 5.
5) Find fine of student with Sid 5.
6) Create sequence of Bid.
7) Create sequence of Sid.
8) Create view on the transaction table (view 1, on all fields).
9) Create view on multiple tables transaction and student.
10) Apply select query on Sid, Name, Fine.
11) Insert, select, delete, update…List which operations are allowed.
12) Illustrate with query that removal of column is not allowed, if that column is Foreign
key in other table.
13) Illustrate with query that removal of record is not allowed, if that record is used in other
table with Foreign key.
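Tasks 12 and 13 can be demonstrated with statements such as the following sketch against the Book/Stud/Trans schema; the exact error message varies by server version:

```sql
-- 12) Dropping a column that is referenced as a foreign key elsewhere fails:
ALTER TABLE Book DROP COLUMN Bid;
-- the server rejects this because Trans.Bid references Book.Bid

-- 13) Deleting a parent record that is referenced by a child row fails:
DELETE FROM Stud WHERE Sid = 5;
-- the server rejects this if a Trans row for Sid 5 exists
```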
SELECT DISTINCT table_name, index_name FROM information_schema.statistics WHERE table_schema = 'view_name';
THEORY :-
DDL- Data Definition Language (DDL) statements are used to define the database structure or schema. DDL deals with database schemas and describes how data should be organized in the database; statements such as CREATE TABLE or ALTER TABLE therefore belong to DDL. DDL is about "metadata".
DDL includes commands such as CREATE, ALTER and DROP statements. DDL is used to CREATE, ALTER or DROP the database objects (Table, Views, Users).
Create Database: From the MySQL command line, enter the command CREATE DATABASE
<DATABASENAME>;. Replace <DATABASENAME> with the name of your database. It
cannot include spaces.
For example, to create a database named pune, you might enter CREATE DATABASE
pune;
Note: Commands do not have to be entered in upper-case.
Note: All MySQL commands must end with ";". If you forget to include the semicolon, you can
enter just ";" on the next line to process the previous command.
Select your database: Once the database has been created, you will need to select it in order to
begin editing it. Enter the command
USE pune;
You will see the message Database changed, letting you know that your active database is
now pune.
Enter the command SHOW DATABASES;
to list all of the databases you have stored. Besides the database you just created, you will also
see a mysql database and a test database.
Create table: We define an SQL relation by using the create table command. The following
command creates a relation department in the database.
create table department (dept_name varchar(20), building varchar(15),
budget numeric(12,2), primary key (dept_name));
The relation created above has three attributes: dept_name, which is a character string of
maximum length 20; building, which is a character string of maximum length 15; and budget,
which is a number with 12 digits in total, 2 of which are after the decimal point. The create
table command also specifies that the dept_name attribute is the primary key of
the department relation.
Insert values in table: A newly created relation is empty initially. We can use
the insert command to load data into the relation. For example, if we wish to insert the fact that
there is an instructor named Smith in the Biology department with instructor id 10211 and a
salary of $66,000, we write:
insert into instructor values (10211, 'Smith', 'Biology', 66000);
Drop table: To remove a relation from an SQL database, we use the drop table command.
The drop table command deletes all information about the dropped relation from the database.
The command
drop table r;
deletes the relation r from the database.
Alter Table: We use the alter table command to add attributes to an existing relation. All
tuples in the relation are assigned null as the value for the new attribute. The form of the alter
table command is
alter table r add A D;
where r is the name of an existing relation, A is the name of the attribute to be added, and D is
the type of the added attribute. We can drop attributes from a relation by the command
alter table r drop A;
where r is the name of an existing relation, and A is the name of an attribute of the relation.
View: SQL allows a "virtual relation" to be defined by a query, and the relation conceptually
contains the result of the query. The virtual relation is not precomputed and stored, but instead is
computed by executing the query whenever the virtual relation is used. Any such relation that is
not part of the logical model, but is made visible to a user as a virtual relation, is called a view. It
is possible to support a large number of views on top of any given set of actual relations.
Create View: We define a view in SQL by using the create view command. To define a view,
we must give the view a name and must state the query that computes the view. The form of the
create view command is:
create view v as <query expression>;
where <query expression> is any legal query expression and the view name is represented by v.
Consider again the clerk who needs to access all data in the instructor relation, except salary. The
clerk should not be authorized to access the instructor relation. Instead, a view
relation faculty can be made available to the clerk, with the view defined as follows:
create view faculty as select ID, name, dept name from instructor;
Once we have defined a view, we can use the view name to refer to the virtual relation that the
view generates. For example, using the view faculty, we can list the names of all Biology faculty
by writing:
select name from faculty where dept_name = 'Biology';
Alter View: The CREATE VIEW statement creates a new view, or replaces an existing view if
the OR REPLACE clause is given. If the view does not exist, CREATE OR REPLACE VIEW is
the same as CREATE VIEW.
Drop view: Use the DROP VIEW statement to remove a view or an object view from the
database. You can change the definition of a view by dropping and re-creating it.
Syntax: DROP VIEW view_name;
Create Index: A database index is a data structure that improves the speed of operations in a
table. Indexes can be created using one or more columns, providing the basis for both rapid
random lookups and efficient ordering of access to records. While creating index, it should be
taken into consideration which all columns will be used to make SQL queries and create one or
more indexes on those columns. Practically, indexes are also a type of tables, which keep
primary key or index field and a pointer to each record into the actual table.
The users cannot see the indexes; they are just used to speed up queries and will be used by the
Database Search Engine to locate records very fast. The INSERT and UPDATE statements take
more time on tables having indexes, whereas the SELECT statements become fast on those
tables. The reason is that while doing insert or update, a database needs to insert or update the
index values as well.
Simple and Unique Index: You can create a unique index on a table. A unique index means
that two rows cannot have the same index value. Here is the syntax to create an index on a table:
CREATE UNIQUE INDEX index_name ON table_name (column1, column2, ...);
You can use one or more columns to create an index. For example, we can create an index
on table student using column rno:
CREATE UNIQUE INDEX rno_index ON student (rno);
You can create a simple index on a table. Just omit the UNIQUE keyword from the query to
create a simple index. A Simple index allows duplicate values in a table. If you want to index the
values in a column in a descending order, you can add the reserved word DESC after the column
name.
Alter Index: There are four types of statements for adding indexes to a table:
ALTER TABLE tbl_name ADD PRIMARY KEY (column_list); This statement adds
a PRIMARY KEY, which means that the indexed values must be unique and cannot be NULL.
ALTER TABLE tbl_name ADD UNIQUE index_name (column_list); This statement creates
an index for which the values must be unique (except for the NULL values, which may appear
multiple times).
ALTER TABLE tbl_name ADD INDEX index_name (column_list); This adds an ordinary
index in which any value may appear more than once.
ALTER TABLE tbl_name ADD FULLTEXT index_name (column_list); This creates a
special FULLTEXT index that is used for text-searching purposes.
Drop Index: You can drop any INDEX by using the DROP clause along with the ALTER
command:
ALTER TABLE tbl_name DROP INDEX index_name;
Sequence: To create a sequence value in SQL query, we can use AUTO_INCREMENT with
optional starting value.
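A minimal sketch of this technique; the table and column names are illustrative:

```sql
CREATE TABLE book_seq_demo (
  Bid   INT AUTO_INCREMENT PRIMARY KEY,
  Title VARCHAR(100)
) AUTO_INCREMENT = 100;   -- optional starting value for the sequence

-- Bid is assigned automatically, starting from 100:
INSERT INTO book_seq_demo (Title) VALUES ('Origin');
```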
PROGRAM :-
mysql> use view1;
Database changed
mysql> select title from book where bid=(select bid from trans where sid=5);
+--------+
| title |
+--------+
| Origin |
+--------+
1 row in set (0.45 sec)
mysql> select cost from book where bid=(select bid from trans where sid=5);
+------+
| cost |
+------+
| 600 |
+------+
1 row in set (0.00 sec)
mysql> select count(bid) from book where bid=(select bid from trans where sid=5);
+------------+
| count(bid) |
+------------+
|          1 |
+------------+
1 row in set (0.00 sec)
mysql> create view multi_v1 as select s.sid,s.name,t.fine from stud s,trans t where s.sid=t.sid;
Query OK, 0 rows affected (0.11 sec)
Design at least 10 SQL queries for suitable database application using SQL DML statements:
Insert, Select, Update, Delete with operators, function and set operators.
THEORY:-
SET Operations in SQL
SQL supports a few set operations which can be performed on table data. These are used to get
meaningful results from data stored in the table, under different special conditions.
We will cover 4 different types of SET operations, along with examples:
1. UNION
2. UNION ALL
3. INTERSECT
4. MINUS
UNION Operation
UNION is used to combine the results of two or more SELECT statements. However, it will
eliminate duplicate rows from its result set. In case of union, the number of columns and their
datatypes must be the same in both tables on which the UNION operation is being applied.
Example of UNION
The first table:
ID NAME
1  abhi
2  adam
The second table:
ID NAME
2  adam
3  Chester
Result of UNION:
ID NAME
1  abhi
2  adam
3  Chester
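Assuming the two tables are named First and Second, the result above would come from a query such as:

```sql
SELECT ID, NAME FROM First
UNION
SELECT ID, NAME FROM Second;
```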
UNION ALL
This operation is similar to Union. But it also shows the duplicate rows.
The first table:
ID NAME
1  abhi
2  adam
The second table:
ID NAME
2  adam
3  Chester
Result of UNION ALL:
ID NAME
1  abhi
2  adam
2  adam
3  Chester
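With the same illustrative table names First and Second, the duplicate-preserving result above is produced by:

```sql
SELECT ID, NAME FROM First
UNION ALL
SELECT ID, NAME FROM Second;
```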
INTERSECT
Intersect operation is used to combine two SELECT statements, but it only returns the records
which are common to both SELECT statements. In case of Intersect, the number of columns
and datatypes must be the same.
NOTE: MySQL does not support INTERSECT operator.
Example of Intersect
The first table:
ID NAME
1  abhi
2  adam
The second table:
ID NAME
2  adam
3  Chester
Result of INTERSECT:
ID NAME
2  adam
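Since MySQL does not support the INTERSECT operator, an equivalent result can be obtained with a subquery; the table names First and Second are illustrative:

```sql
SELECT ID, NAME FROM First
WHERE (ID, NAME) IN (SELECT ID, NAME FROM Second);
```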
MINUS
The Minus operation combines the results of two SELECT statements and returns only those
rows in the final result which belong to the first result set but not the second.
Example of Minus
The first table:
ID NAME
1  abhi
2  adam
The second table:
ID NAME
2  adam
3  Chester
Result of MINUS:
ID NAME
1  abhi
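MySQL also lacks the MINUS operator; a NOT IN subquery gives the equivalent result (again with illustrative table names):

```sql
SELECT ID, NAME FROM First
WHERE (ID, NAME) NOT IN (SELECT ID, NAME FROM Second);
```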
PROGRAM:-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mysqld
[root@localhost student]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.33a-MariaDB MariaDB Server
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+----+------+------------+--------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+--------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 5 | JAKE | 1990-09-25 | LabAssistant | 2013-11-30 | 30000 | COMP |
+----+------+------------+--------------+--------------+--------+------+
MariaDB [employee]> select name from emp where salary=(select min(salary) from emp);
+------+
| name |
+------+
| JHON |
+------+
MariaDB [employee]> select name from emp where salary=(select max(salary) from emp);
+------+
| name |
+------+
| BOB |
+------+
MariaDB [employee]> select name from emp where salary BETWEEN 10000 AND 40000;
+------+
| name |
+------+
| ABC |
| JHON |
| ANNE |
+------+
MariaDB [employee]> select * from emp where Id NOT IN(select Id from emp1);
+----+------+------------+-------------+--------------+--------+------+
| Id | name | dob | designation | joining_date | salary | dept |
+----+------+------------+-------------+--------------+--------+------+
| 1 | ABC | 1990-09-23 | HOD | 2012-05-15 | 40000 | COMP |
| 2 | JHON | 1991-02-10 | Lecturer | 2015-07-15 | 35000 | COMP |
| 3 | ANNE | 1991-04-17 | Lecturer | 2014-06-23 | 37000 | E&TC |
+----+------+------------+-------------+--------------+--------+------+
3 rows in set (0.00 sec)
ASSIGNMENT NO. 3
TITLE :-
Design at least 10 SQL queries for suitable database application using SQL DML statements:
all types of Joins, Sub-Query and View.
1. Create table physician (reg_no primary, name, tel_no, city).
2. Create table patient (p_id primary, p_name, street, city).
3. Create table visit (p_id foreign, reg_no foreign, date_of_visit, fee).
4. Find name and city of patient who visited a physician on 13 July 2017.
5. Get the name of each physician and the total number of patients who visited him.
6. Get the details of date wise fees collected at clinic group by date_of_visit.
7. Take join of physician and visit (inner, left, right, full outer join) [reg_no, p_name, p_id,
date_of_visit].
8. Take join of patient and visit (inner, left, right, full outer join) [p_name, city, reg_no,
date_of_visit].
9. Create view for date wise fees collected at clinic. Apply operations on view.
10. Create view on patient (p_name, reg_no, date_of_visit). Apply operations on view.
THEORY :-
JOINS :-
SQL Join
A JOIN clause is used to combine rows from two or more tables, based on a related column
between them.
Let's look at a selection from the "Orders" table:
OrderID CustomerID OrderDate
10308 2 1996-09-18
10309 37 1996-09-19
10310 77 1996-09-20
Then, look at a selection from the "Customers" table:
CustomerID CustomerName ContactName
Notice that the "CustomerID" column in the "Orders" table refers to the "CustomerID" in the
"Customers" table. The relationship between the two tables above is the "CustomerID" column.
Then, we can create the following SQL statement (that contains an INNER JOIN), that selects
records that have matching values in both tables:
Example
SELECT Orders.OrderID, Customers.CustomerName, Orders.OrderDate
FROM Orders
INNER JOIN Customers ON Orders.CustomerID=Customers.CustomerID;
and it will produce something like this:
OrderID CustomerName
(INNER) JOIN: Returns records that have matching values in both tables
LEFT (OUTER) JOIN: Return all records from the left table, and the matched records
from the right table
RIGHT (OUTER) JOIN: Return all records from the right table, and the matched records
from the left table
FULL (OUTER) JOIN: Return all records when there is a match in either left or right
table
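For the physician/visit schema of this assignment, the four joins might be written as follows; note that MySQL has no FULL OUTER JOIN, so it is emulated here with a UNION of the left and right joins:

```sql
-- Inner join: only visits that match a physician
SELECT p.reg_no, p.name, v.p_id, v.date_of_visit
FROM physician p INNER JOIN visit v ON p.reg_no = v.reg_no;

-- Left join: every physician, visit columns NULL when no visit exists
SELECT p.reg_no, p.name, v.p_id, v.date_of_visit
FROM physician p LEFT JOIN visit v ON p.reg_no = v.reg_no;

-- Right join: every visit, physician columns NULL when unmatched
SELECT p.reg_no, p.name, v.p_id, v.date_of_visit
FROM physician p RIGHT JOIN visit v ON p.reg_no = v.reg_no;

-- Full outer join, emulated (UNION removes the duplicated matched rows)
SELECT p.reg_no, p.name, v.p_id, v.date_of_visit
FROM physician p LEFT JOIN visit v ON p.reg_no = v.reg_no
UNION
SELECT p.reg_no, p.name, v.p_id, v.date_of_visit
FROM physician p RIGHT JOIN visit v ON p.reg_no = v.reg_no;
```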
SUB-QUERY :-
A Subquery or Inner query or a Nested query is a query within another SQL query and
embedded within the WHERE clause.
A subquery is used to return data that will be used in the main query as a condition to further
restrict the data to be retrieved.
Subqueries can be used with the SELECT, INSERT, UPDATE, and DELETE statements along
with the operators like =, <, >, >=, <=, IN, BETWEEN, etc.
A subquery can have only one column in the SELECT clause, unless multiple columns
are in the main query for the subquery to compare its selected columns.
An ORDER BY command cannot be used in a subquery, although the main query can
use an ORDER BY. The GROUP BY command can be used to perform the same
function as the ORDER BY in a subquery.
Subqueries that return more than one row can only be used with multiple value operators
such as the IN operator.
The SELECT list cannot include any references to values that evaluate to a BLOB,
ARRAY, CLOB, or NCLOB.
The BETWEEN operator cannot be used with a subquery. However, the BETWEEN
operator can be used within the subquery.
Subqueries with the SELECT Statement
Subqueries are most frequently used with the SELECT statement. The basic syntax is as follows
−
SELECT column_name [, column_name ]
FROM table1 [, table2 ]
WHERE column_name OPERATOR
(SELECT column_name [, column_name ]
FROM table1 [, table2 ]
[WHERE])
Example
Consider the CUSTOMERS table having the following records −
+----+----------+-----+-----------+----------+
| ID | NAME     | AGE | ADDRESS   | SALARY   |
+----+----------+-----+-----------+----------+
| 6  | Komal    | 22  | MP        | 4500.00  |
+----+----------+-----+-----------+----------+
SQL> SELECT *
FROM CUSTOMERS
WHERE ID IN (SELECT ID
FROM CUSTOMERS
This would impact two rows and finally CUSTOMERS table would have the following records.
+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 35 | Ahmedabad | 125.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 2125.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+-----------+----------+
Subqueries with the DELETE Statement
The subquery can be used in conjunction with the DELETE statement like with any other
statements mentioned above.
The basic syntax is as follows.
DELETE FROM TABLE_NAME
[ WHERE OPERATOR [ VALUE ]
(SELECT COLUMN_NAME
FROM TABLE_NAME
[ WHERE ]) ]
Example
Assuming we have a CUSTOMERS_BKP table available which is a backup of the
CUSTOMERS table, the following example deletes the records from the CUSTOMERS table
for all the customers whose AGE is greater than or equal to 27.
DELETE FROM CUSTOMERS
WHERE AGE IN (SELECT AGE FROM CUSTOMERS_BKP WHERE AGE >= 27);
This would impact two rows and finally the CUSTOMERS table would have the following
records.
+----+----------+-----+---------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+---------+----------+
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+---------+----------+
VIEWS :-
SQL CREATE VIEW Statement
In SQL, a view is a virtual table based on the result-set of an SQL statement.
A view contains rows and columns, just like a real table. The fields in a view are fields from one
or more real tables in the database.
You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if
the data were coming from one single table.
CREATE VIEW
Syntax-
CREATE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;
Note: A view always shows up-to-date data! The database engine recreates the data, using the
view's SQL statement, every time a user queries a view.
Example-
The following SQL creates a view that shows all customers from Brazil:
Example-
CREATE VIEW [Brazil Customers] AS
SELECT CustomerName, ContactName
FROM Customers
WHERE Country = "Brazil";
We can then query the view as if it were a real table:
Example-
SELECT * FROM [Brazil Customers];
The following SQL creates a view that selects every product in the "Products" table with a price
higher than the average price:
Example-
CREATE VIEW [Products Above Average Price] AS
SELECT ProductName, Price
FROM Products
WHERE Price > (SELECT AVG(Price) FROM Products);
Example-
SELECT * FROM [Products Above Average Price];
The following SQL adds the "City" column to the "Brazil Customers" view:
Example-
CREATE OR REPLACE VIEW [Brazil Customers] AS
SELECT CustomerName, ContactName, City
FROM Customers
WHERE Country = "Brazil";
Example-
DROP VIEW [Brazil Customers];
PROGRAM :-
create table physician(regno int(60) primary key,name char(60),telno int(60),city char(60));
Query OK, 0 rows affected (0.71 sec)
| 1 | moni | 3 | 2018-02-02 |
| 3 | gracy | 1 | 2017-07-13 |
| 3 | gracy | 2 | 2018-04-03 |
+-------+-------+------+-------------+
3 rows in set (0.00 sec)
ASSIGNMENT NO. 4
TITLE :-
Unnamed PL/SQL code block : Use of Control structure and Exception handling is mandatory.
Write a PL/SQL block of code for following requirements :-
Schema :
1. Borrower (Rollin, Name, DateofIssue, NameofBook, Status)
2. Fine (Roll_no, Date, Amt)
a. Accept roll_no and name of book from user.
b. Check the number of days (from the date of issue); if the number of days is between 15 and
30, the fine amount is Rs. 5 per day.
c. If the number of days is > 30, the fine will be Rs. 50 per day.
d. After submitting the book, status will change from I to R.
e. If condition for the fine is true, then details will be stored into Fine table.
Frame the problem statement for writing PL/SQL block inline with above statements.
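One possible sketch of this block as a MariaDB stored procedure; the table and column names follow the schema above, and the thresholds and rates are as stated, but the exception-handling details are left to the student:

```sql
DELIMITER $$
CREATE PROCEDURE calc_fine(IN p_roll INT, IN p_book VARCHAR(100))
BEGIN
  DECLARE v_days INT;
  DECLARE v_amt INT DEFAULT 0;

  -- days elapsed since the date of issue
  SELECT DATEDIFF(CURDATE(), DateofIssue) INTO v_days
  FROM Borrower
  WHERE Rollin = p_roll AND NameofBook = p_book;

  IF v_days > 30 THEN
    SET v_amt = v_days * 50;            -- Rs. 50 per day beyond 30 days
  ELSEIF v_days BETWEEN 15 AND 30 THEN
    SET v_amt = v_days * 5;             -- Rs. 5 per day for 15 to 30 days
  END IF;

  -- on submission, status changes from I (issued) to R (returned)
  UPDATE Borrower SET Status = 'R'
  WHERE Rollin = p_roll AND NameofBook = p_book;

  IF v_amt > 0 THEN
    INSERT INTO Fine (Roll_no, Date, Amt) VALUES (p_roll, CURDATE(), v_amt);
  END IF;
END $$
DELIMITER ;
```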
THEORY :-
CONTROL STRUCTURE :-
Overview of PL/SQL Control Structures
Procedural computer programs use the basic control structures described below.
The selection structure tests a condition, then executes one sequence of statements instead of
another, depending on whether the condition is true or false. A condition is any variable or
expression that returns a BOOLEAN value (TRUE or FALSE). The iteration structure executes a
sequence of statements repeatedly as long as a condition holds true. The sequence structure
simply executes a sequence of statements in the order in which they occur.
Testing Conditions: IF and CASE Statements
The IF statement executes a sequence of statements depending on the value of a condition. There
are three forms of IF statements: IF-THEN, IF-THEN-ELSE, and IF-THEN-ELSIF. For a
description of the syntax of the IF statement, see "IF Statement".
The CASE statement is a compact way to evaluate a single condition and choose between many
alternative actions. It makes sense to use CASE when there are three or more alternatives to
choose from. For a description of the syntax of the CASE statement, see "CASE Statement".
The simplest form of IF statement associates a condition with a sequence of statements enclosed
by the keywords THEN and END IF (not ENDIF), as illustrated in the example below.
The sequence of statements is executed only if the condition is TRUE. If the condition
is FALSE or NULL, the IF statement does nothing. In either case, control passes to the next
statement.
Example - Using a Simple IF-THEN Statement
DECLARE
  sales NUMBER(8,2) := 10100;
  quota NUMBER(8,2) := 10000;
  bonus NUMBER(6,2);
  emp_id NUMBER(6) := 120;
BEGIN
  IF sales > (quota + 200) THEN
    bonus := (sales - quota)/4;
    UPDATE employees SET salary = salary + bonus
      WHERE employee_id = emp_id;
  END IF;
END;
/
The second form of IF statement adds the keyword ELSE followed by an alternative sequence of
statements, as shown in the example below.
The statements in the ELSE clause are executed only if the condition is FALSE or NULL.
The IF-THEN-ELSE statement ensures that one or the other sequence of statements is executed.
In the example, the first UPDATE statement is executed when the condition is TRUE, and the
second UPDATE statement is executed when the condition is FALSE or NULL.
Example - Using a Simple IF-THEN-ELSE Statement
DECLARE
  sales NUMBER(8,2) := 12100;
  quota NUMBER(8,2) := 10000;
  bonus NUMBER(6,2);
  emp_id NUMBER(6) := 120;
BEGIN
  IF sales > (quota + 200) THEN
    bonus := (sales - quota)/4;
    UPDATE employees SET salary = salary + bonus
      WHERE employee_id = emp_id;
  ELSE
    bonus := 50;
    UPDATE employees SET salary = salary + bonus
      WHERE employee_id = emp_id;
  END IF;
END;
/
DECLARE
  sales NUMBER(8,2) := 12100;
  quota NUMBER(8,2) := 10000;
  bonus NUMBER(6,2);
BEGIN
  IF sales > (quota + 200) THEN
    bonus := (sales - quota)/4;
  ELSE
    IF sales > quota THEN
      bonus := 50;
    ELSE
      bonus := 0;
    END IF;
  END IF;
END;
/
Sometimes you want to choose between several alternatives. You can use the
keyword ELSIF (not ELSEIF or ELSE IF) to introduce additional conditions, as shown in the example below.
If the first condition is FALSE or NULL, the ELSIF clause tests another condition.
An IF statement can have any number of ELSIF clauses; the final ELSE clause is optional.
Conditions are evaluated one by one from top to bottom. If any condition is TRUE, its associated
sequence of statements is executed and control passes to the next statement. If all conditions are
FALSE or NULL, the sequence in the ELSE clause is executed, as shown in the example below.
Example - Using the IF-THEN-ELSIF Statement
DECLARE
  sales NUMBER(8,2) := 55000;
  bonus NUMBER(6,2);
BEGIN
  IF sales > 50000 THEN
    bonus := 1500;
  ELSIF sales > 35000 THEN
    bonus := 500;
  ELSE
    bonus := 100;
  END IF;
END;
/
If the value of sales is larger than 50000, the first and second conditions are TRUE.
Nevertheless, bonus is assigned the proper value of 1500 because the second condition is never
tested. When the first condition is TRUE, its associated statement is executed and control passes
to the INSERT statement.
PROGRAM :-
[pratiksha@localhost ~]$ sudo su
[sudo] password for pratiksha:
[root@localhost pratiksha]# systemctl start mysqld;
[root@localhost pratiksha]# mysql;
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.2.17-MariaDB MariaDB Server
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+---------------+----------+------+-----+---------+-------+
5 rows in set (0.09 sec)
| 1 | 2018-09-25 | 3850 |
| 2 | 2018-09-25 | 5250 |
+---------+----------------+--------+
2 rows in set (0.00 sec)
+---------+-----------+---------------+--------------+--------+
3 rows in set (0.00 sec)
MariaDB [pro]>
ASSIGNMENT NO. 5
TITLE :-
PL/SQL Stored Procedures and Stored Function.
Write a Stored Procedure namely proc_Grade for the categorization of student.
If marks scored by students in examination are <= 1500 and marks >= 990 then student will be
placed in 'Distinction' category.
If marks scored are between 989 and 900 category is 'First class'.
If marks are between 899 and 825 category is 'Higher Second Class'.
Write a PL/SQL block for using procedure created with above requirements.
Stud_Marks (name, total_marks)
Result (Roll, Name, Class)
Frame the separate problem statement for writing PL/SQLStored Procedure and Function, inline
with above statements. The problem statement should clearly state the requirements.
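A sketch of proc_Grade in MariaDB syntax; only the grade boundaries come from the statement above, while the parameter list and the category for lower marks are assumptions:

```sql
DELIMITER $$
CREATE PROCEDURE proc_Grade(IN p_roll INT, IN p_name VARCHAR(60), IN p_marks INT)
BEGIN
  DECLARE v_class VARCHAR(30);

  IF p_marks <= 1500 AND p_marks >= 990 THEN
    SET v_class = 'Distinction';
  ELSEIF p_marks >= 900 THEN            -- 989 down to 900
    SET v_class = 'First class';
  ELSEIF p_marks >= 825 THEN            -- 899 down to 825
    SET v_class = 'Higher Second Class';
  ELSE
    SET v_class = 'Pass';               -- assumed category for lower marks
  END IF;

  INSERT INTO Result (Roll, Name, Class) VALUES (p_roll, p_name, v_class);
END $$
DELIMITER ;
```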
THEORY :-
STORED PROCEDURES :-
Definition of stored procedures-
A stored procedure is a segment of declarative SQL statements stored inside the database
catalog. A stored procedure can be invoked by triggers, other stored procedures, and applications
such as Java, Python, PHP.
A stored procedure that calls itself is known as a recursive stored procedure. Most database
management systems support recursive stored procedures. However, MySQL does not support it
very well. You should check your version of MySQL database before implementing recursive
stored procedures in MySQL.
Stored Procedures in MySQL-
MySQL is known as the most popular open source RDBMS which is widely used by both
community and enterprise. However, during the first decade of its existence, it did not support
stored procedures, stored functions, triggers, and events. Since MySQL version 5.0, those
features were added to the MySQL database engine to make it more flexible and powerful.
1. Typically, stored procedures help increase the performance of the applications. Once
created, stored procedures are compiled and stored in the database. However, MySQL
implements stored procedures slightly differently. MySQL stored procedures are
compiled on demand. After compiling a stored procedure, MySQL puts it into a cache
and maintains its own stored procedure cache for every single connection. If an
application uses a stored procedure multiple times in a single connection, the compiled
version is used, otherwise, the stored procedure works like a query.
2. Stored procedures help reduce the traffic between application and database server
because instead of sending multiple lengthy SQL statements, the application has to send
only the name and parameters of the stored procedure.
3. Stored procedures are reusable and transparent to any applications. Stored procedures
expose the database interface to all applications so that developers do not have to develop
functions that are already supported in stored procedures.
4. Stored procedures are secure. The database administrator can grant appropriate
permissions to applications that access stored procedures in the database without giving
any permissions on the underlying database tables.
Besides those advantages, stored procedures have their own disadvantages, which you should be
aware of before using them in your databases.
1. If you use many stored procedures, the memory usage of every connection that is using
those stored procedures will increase substantially. In addition, if you overuse a large
number of logical operations inside stored procedures, the CPU usage will increase
because the database server is not well-designed for logical operations.
2. Stored procedures' constructs are not designed for developing complex and flexible
business logic.
3. It is difficult to debug stored procedures. Only a few database management systems allow
you to debug stored procedures. Unfortunately, MySQL does not provide facilities for
debugging stored procedures.
4. It is not easy to develop and maintain stored procedures. Developing and maintaining
stored procedures often requires a specialized skill set that not all application
developers possess. This may lead to problems in both the application development and
maintenance phases.
MySQL stored procedures have their own advantages and disadvantages. When you develop
applications, you should decide whether or not to use stored procedures based on your
application's architecture.
Here we have created a simple procedure called job_data, when we will execute the procedure it
will display all the data from "jobs" tables.
mysql> DELIMITER $$
mysql> CREATE PROCEDURE job_data()
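The body of the procedure is not shown above; a complete sketch, assuming a "jobs" table exists, might be:

```sql
DELIMITER $$
CREATE PROCEDURE job_data()
BEGIN
  SELECT * FROM jobs;   -- display all the data from the jobs table
END $$
DELIMITER ;

CALL job_data();        -- invoke the procedure
```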
Explanation:
STORED FUNCTIONS :-
A stored function is a special kind of stored program that returns a single value. You use stored
functions to encapsulate common formulas or business rules that are reusable among SQL
statements or stored programs.
Different from a stored procedure, you can use a stored function in SQL statements wherever an
expression is used. This helps improve the readability and maintainability of the procedural code.
MySQL stored function syntax-
The following illustrates the simplest syntax for creating a new stored function:
CREATE FUNCTION function_name(param1, param2, ...)
RETURNS datatype
[NOT] DETERMINISTIC
statements
First, you specify the name of the stored function after the CREATE FUNCTION clause.
Second, you list all parameters of the stored function inside the parentheses. All parameters
are IN parameters by default; you cannot specify the IN, OUT or INOUT modifiers for the
parameters.
Third, you must specify the data type of the return value in the RETURNS clause. It can be
any valid MySQL data type.
Fourth, for the same input parameters, if the stored function always returns the same result, it is
considered deterministic; otherwise, the stored function is not deterministic. You have to declare
whether a stored function is deterministic or not. If you declare it incorrectly, the stored function
may produce an unexpected result, or the available optimization is not used, which degrades
performance.
Fifth, you write the code in the body of the stored function. It can be a single statement or a
compound statement. Inside the body section, you have to specify at least
one RETURN statement. The RETURN statement returns a value to the caller. Whenever
a RETURN statement is reached, the stored function's execution terminates immediately.
MySQL stored function example-
Let's take a look at an example of using a stored function. We will use the customers table in
the sample database for the demonstration.
The following example is a function that returns the level of a customer based on credit limit. We
use the IF statement to determine the credit limit.
DELIMITER $$

CREATE FUNCTION CustomerLevel(p_creditLimit DOUBLE) RETURNS VARCHAR(10)
DETERMINISTIC
BEGIN
    DECLARE lvl VARCHAR(10);

    IF p_creditLimit > 50000 THEN
        SET lvl = 'PLATINUM';
    ELSEIF (p_creditLimit <= 50000 AND p_creditLimit >= 10000) THEN
        SET lvl = 'GOLD';
    ELSEIF p_creditLimit < 10000 THEN
        SET lvl = 'SILVER';
    END IF;

    RETURN (lvl);
END
A stored procedure that calls this function might look like the following (table and column
names as in the sample database):

CREATE PROCEDURE GetCustomerLevel(
    IN  p_customerNumber INT,
    OUT p_customerLevel  VARCHAR(10))
BEGIN
    DECLARE creditlim DOUBLE;

    SELECT creditLimit INTO creditlim
    FROM customers
    WHERE customerNumber = p_customerNumber;

    SELECT CustomerLevel(creditlim)
    INTO p_customerLevel;
END
As you can see, the GetCustomerLevel() stored procedure is much more readable when using
the CustomerLevel() stored function.
Notice that a stored function returns a single value only. If you include a SELECT statement
without the INTO clause, you will get an error.
In addition, if a stored function contains SQL statements, you should not use it inside other SQL
statements; otherwise, the stored function will slow down the query.
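As an illustration of using the function in an expression context, the CustomerLevel() function defined above can be called directly in a query (the customers table and creditLimit column are from the sample database):

```sql
SELECT customerName, CustomerLevel(creditLimit) AS level
FROM customers;
```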
PROGRAM :-
[root@localhost ~]# systemctl start mysqld;
[root@localhost ~]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.43-MariaDB MariaDB Server
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
| student1 |
| suman |
| test |
| view |
+--------------------+
20 rows in set (0.22 sec)
+---------+----------+------+-----+---------+-------+
3 rows in set (0.00 sec)
ASSIGNMENT NO. 6
TITLE :-
Cursors :
( All types : Implicit, Explicit, Cursor FOR loop,Parameterized Cursor )
Write a PL/SQL block of code using parameterized Cursor, that will merge the data available in
the newly created table N_RollCall with the data available in the table O_RollCall.
If the data in the first table already exist in the second table then the data should be skipped.
Frame a separate problem statement for writing PL/SQL blocks to implement all types of
cursors, in line with the above statements. The problem statement should clearly state the
requirements.
THEORY :-
Cursors
A cursor is a temporary work area created in system memory when an SQL statement is
executed. It is a set of rows together with a pointer that identifies the current row, and it is the
database object used to retrieve data from a result set one row at a time. A cursor is useful when
we want to manipulate the records of a table one row at a time: it can hold more than one row,
but can process only one row at a time. The set of rows the cursor holds is called the active set.
Types of Cursors
Implicit Cursor
These types of cursors are generated and used by the system during the manipulation of a DML
query (INSERT, UPDATE and DELETE). An implicit cursor is also generated by the system
when a single row is selected by a SELECT command.
Explicit Cursor
This type of cursor is generated by the user using a SELECT command. An explicit cursor
contains more than one row, but only one row can be processed at a time. An explicit cursor
moves one by one over the records. An explicit cursor uses a pointer that holds the record of a
row. After fetching a row, the cursor pointer moves to the next row.
1. Declare Cursor: In this part we declare variables and return a set of values.
2. Open: This is the entering part of the cursor.
3. Fetch: Used to retrieve the data row by row from a cursor.
4. Close: This is an exit part of the cursor and used to close a cursor.
5. Deallocate: In this part we delete the cursor definition and release all the system
resources associated with the cursor.
Syntax of a Cursor
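The declare/open/fetch/close steps above can be sketched in PL/SQL as follows; the table and column names (O_RollCall, roll_no, name) are illustrative, taken from the assignment statement:

```sql
DECLARE
   CURSOR c_roll IS SELECT roll_no, name FROM O_RollCall;  -- 1. declare
   v_roll O_RollCall.roll_no%TYPE;
   v_name O_RollCall.name%TYPE;
BEGIN
   OPEN c_roll;                                            -- 2. open
   LOOP
      FETCH c_roll INTO v_roll, v_name;                    -- 3. fetch row by row
      EXIT WHEN c_roll%NOTFOUND;
      DBMS_OUTPUT.PUT_LINE(v_roll || ' ' || v_name);
   END LOOP;
   CLOSE c_roll;                                           -- 4. close
END;
```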
Cursor Scope
Microsoft SQL Server supports the GLOBAL and LOCAL keywords on the DECLARE
CURSOR statement to define the scope of the cursor name.
Microsoft SQL Server supports the following two fetch options for data:
1. FORWARD_ONLY: Specifies that the cursor can only be scrolled from the first to the
last row.
2. SCROLL: It provides 6 options to fetch the data (FIRST, LAST, PRIOR, NEXT,
RELATIVE and ABSOLUTE).
Types of cursors
1. STATIC CURSOR: A static cursor populates the result set during cursor creation and
the query result is cached for the lifetime of the cursor. A static cursor can move forward
and backward.
2. FAST_FORWARD: This is the default type of cursor. It is identical to the static except
that you can only scroll forward.
3. DYNAMIC: In a dynamic cursor, additions and deletions are visible for others in the
data source while the cursor is open.
4. KEYSET: This is similar to a dynamic cursor except we can't see records others add. If
another user deletes a record, it is inaccessible from our record set.
Types of Locks
Locking is the process by which a DBMS restricts access to a row in a multi-user environment.
When a row or column is exclusively locked, other users are not permitted to access the locked
data until the lock is released. It is used for data integrity. This ensures that two users cannot
simultaneously update the same column in a row.
1. READ_ONLY: Specifies that the cursor is read-only and that no updates can be made
through it.
2. SCROLL_LOCKS: Provides data integrity for the cursor. It specifies that the cursor
will lock the rows as they are read into the cursor to ensure that updates or deletes made
using the cursor will succeed.
3. OPTIMISTIC: Specifies that the cursor does not lock rows as they are read into the
cursor. So, the updates or deletes made using the cursor will not succeed if the row has
been updated outside the cursor.
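The roll-call merge stated at the start of this assignment can be sketched with a parameterized cursor as follows; the roll_no and name columns are assumed for illustration:

```sql
DECLARE
   -- parameterized cursor: checks whether a roll number already exists in O_RollCall
   CURSOR c_old(p_roll NUMBER) IS
      SELECT 1 FROM O_RollCall WHERE roll_no = p_roll;
   v_dummy NUMBER;
BEGIN
   FOR rec IN (SELECT roll_no, name FROM N_RollCall) LOOP
      OPEN c_old(rec.roll_no);
      FETCH c_old INTO v_dummy;
      IF c_old%NOTFOUND THEN              -- not present yet: merge the row
         INSERT INTO O_RollCall VALUES (rec.roll_no, rec.name);
      END IF;                             -- otherwise the row is skipped
      CLOSE c_old;
   END LOOP;
   COMMIT;
END;
```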
ASSIGNMENT NO. 7
TITLE :-
Database Trigger (All types: Row-level and Statement-level triggers, Before and After
triggers).
Write a database trigger on Library table.
The system should keep track of the records that are being updated or deleted. The old value of
updated or deleted records should be added in Library_Audit table.
Frame the problem statement for writing database triggers of all types, in line with the above
statement. The problem statement should clearly state the requirements.
THEORY :-
Trigger :-
A SQL trigger is a set of SQL statements stored in the database catalog. A SQL trigger is
executed or fired whenever an event associated with a table occurs e.g., insert, update or delete.
A SQL trigger is a special type of stored procedure. It is special because it is not called directly
like a stored procedure. The main difference between a trigger and a stored procedure is that a
trigger is called automatically when a data modification event is made against a table whereas a
stored procedure must be called explicitly.
It is important to understand the SQL trigger's advantages and disadvantages so that you can use
it appropriately. In the following sections, we will discuss the advantages and disadvantages of
using SQL triggers.
Advantages of using SQL triggers-
1. SQL triggers provide an alternative way to check the integrity of data.
2. SQL triggers can catch errors in business logic in the database layer.
3. SQL triggers provide an alternative way to run scheduled tasks. By using SQL triggers,
you don't have to wait for the scheduled tasks to run, because the triggers are invoked
automatically before or after a change is made to the data in the tables.
4. SQL triggers are very useful to audit the changes of data in tables.
Triggers or stored procedures? It is recommended that you consider a SQL trigger only when
there is no way to get the work done with a stored procedure.
Introduction to MySQL triggers-
In MySQL, a trigger is a set of SQL statements that is invoked automatically when a change is
made to the data in the associated table. A trigger can be defined to be invoked either before or
after the data is changed by an INSERT, UPDATE or DELETE statement. Before MySQL
version 5.7.2, you could define a maximum of six triggers per table: one for each combination
of BEFORE/AFTER timing and INSERT/UPDATE/DELETE event.
MySQL stores triggers in a data directory, e.g., /data/classicmodels/, in files
named tablename.TRG and triggername.TRN.
MySQL triggers cover all features defined in the standard SQL. However, there are some
limitations that you should know before using them in your applications.
MySQL triggers cannot:
1. Use statements that return a result set to the client, such as a plain SELECT or a CALL
of a procedure that returns results.
2. Use statements that commit or roll back a transaction, explicitly or implicitly (e.g.,
COMMIT, ROLLBACK, START TRANSACTION).
3. Use prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE).
PROGRAM :-
mysql> create database trigger1;
Query OK, 1 row affected (0.10 sec)
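The triggers below assume a library table, a matching library_audit table, and a count1 counter table; a possible schema (column types are illustrative, inferred from the trigger bodies) is:

```sql
CREATE TABLE library (
  bid       INT PRIMARY KEY,
  bname     VARCHAR(50),
  bauthor   VARCHAR(50),
  cost      INT,
  publisher VARCHAR(50)
);

CREATE TABLE library_audit (
  bid       INT,
  bname     VARCHAR(50),
  bauthor   VARCHAR(50),
  cost      INT,
  publisher VARCHAR(50),
  action    VARCHAR(10)    -- "update" / "Delete" / "INSERT"
);

CREATE TABLE count1 (book_count INT);
INSERT INTO count1 VALUES (0);
```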
mysql> create trigger trig_update after update on library for each row insert into library_audit
values(old.bid,old.bname,old.bauthor,old.cost,old.publisher,"update");
Query OK, 0 rows affected (0.09 sec)
mysql> create trigger trig_delete before delete on library for each row insert into library_audit
values(old.bid,old.bname,old.bauthor,old.cost,old.publisher,"Delete");
Query OK, 0 rows affected (0.17 sec)
mysql> create trigger trig_insert after insert on library for each row update count1 set
book_count=book_count+1;
mysql> create TRIGGER trig_insert1 AFTER INSERT ON library FOR EACH ROW insert into
library_audit values(new.bid,new.bname,new.bauthor,new.cost,new.publisher,"INSERT");
Query OK, 0 rows affected (0.28 sec)
mysql>
ASSIGNMENT NO. 8
TITLE :-
Implement MYSQL/Oracle database connectivity with PHP/python/Java.
Implement Database navigation operations (add, delete, edit) using ODBC/JDBC.
THEORY :-
In this exercise you create a new database, a new user and an example table. For this, connect to
the MySQL server via the mysql command-line client.
Create a new database called feedback and start using it with the following command.
Now create a sample database table with example content via the following SQL statement.
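A setup consistent with the Java code that follows (user sqluser with password sqluserpw, and a feedback.comments table whose columns match the prepared statements below) might look like this:

```sql
CREATE DATABASE feedback;
USE feedback;

CREATE USER 'sqluser'@'localhost' IDENTIFIED BY 'sqluserpw';
GRANT ALL ON feedback.* TO 'sqluser'@'localhost';

CREATE TABLE comments (
  id       INT NOT NULL AUTO_INCREMENT,
  myuser   VARCHAR(30)  NOT NULL,
  email    VARCHAR(30)  NOT NULL,
  webpage  VARCHAR(100) NOT NULL,
  datum    DATE         NOT NULL,
  summary  VARCHAR(40)  NOT NULL,
  comments VARCHAR(400) NOT NULL,
  PRIMARY KEY (id)
);
```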
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Date;

public class MySQLAccess {
    private Connection connect = null;
    private Statement statement = null;
    private PreparedStatement preparedStatement = null;
    private ResultSet resultSet = null;

    public void readDataBase() throws Exception {
        try {
            // This will load the MySQL driver, each DB has its own driver
            Class.forName("com.mysql.jdbc.Driver");
            // Setup the connection with the DB
            connect = DriverManager
                    .getConnection("jdbc:mysql://localhost/feedback?"
                            + "user=sqluser&password=sqluserpw");
            // Statements allow us to issue SQL queries to the database
            statement = connect.createStatement();

            // PreparedStatements can use variables; the INSERT below matches
            // the columns of the example comments table
            preparedStatement = connect
                    .prepareStatement("insert into feedback.comments values (default, ?, ?, ?, ?, ?, ?)");
            preparedStatement.setString(1, "Test");
            preparedStatement.setString(2, "TestEmail");
            preparedStatement.setString(3, "TestWebpage");
            preparedStatement.setDate(4, new java.sql.Date(2009, 12, 11));
            preparedStatement.setString(5, "TestSummary");
            preparedStatement.setString(6, "TestComment");
            preparedStatement.executeUpdate();

            preparedStatement = connect
                    .prepareStatement("SELECT myuser, webpage, datum, summary, COMMENTS from feedback.comments");
            resultSet = preparedStatement.executeQuery();
            writeResultSet(resultSet);

            resultSet = statement
                    .executeQuery("select * from feedback.comments");
writeMetaData(resultSet);
        } catch (Exception e) {
            throw e;
        } finally {
            close();
        }
    }

    private void writeMetaData(ResultSet resultSet) throws SQLException {
        // Now get some metadata (table and column names) from the result
        // of the SQL query
        System.out.println("The columns in the table are: ");
        System.out.println("Table: " + resultSet.getMetaData().getTableName(1));
        for (int i = 1; i <= resultSet.getMetaData().getColumnCount(); i++) {
            System.out.println("Column " + i + " " + resultSet.getMetaData().getColumnName(i));
        }
    }

    private void writeResultSet(ResultSet resultSet) throws SQLException {
        // ResultSet is initially before the first data set
        while (resultSet.next()) {
            // It is possible to get the columns via name;
            // also possible to get the columns via the column number,
            // which starts at 1
            // e.g. resultSet.getString(2);
            String user = resultSet.getString("myuser");
            String website = resultSet.getString("webpage");
            String summary = resultSet.getString("summary");
            Date date = resultSet.getDate("datum");
            String comment = resultSet.getString("comments");
            System.out.println("User: " + user);
            System.out.println("Website: " + website);
            System.out.println("summary: " + summary);
            System.out.println("Date: " + date);
            System.out.println("Comment: " + comment);
        }
    }
}
    // Close the result set, the statement and the connection
    private void close() {
        try {
            if (resultSet != null) {
                resultSet.close();
            }
            if (statement != null) {
                statement.close();
            }
            if (connect != null) {
                connect.close();
            }
        } catch (Exception e) {
            // ignore exceptions on close
        }
    }
}
Create the following main program to test your class.
package de.vogella.mysql.first.test;

import de.vogella.mysql.first.MySQLAccess;

public class Main {
    public static void main(String[] args) throws Exception {
        MySQLAccess dao = new MySQLAccess();
        dao.readDataBase();
    }
}
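The same add / edit / delete navigation can also be sketched with Python's DB-API. The snippet below uses the stdlib sqlite3 module so that it is self-contained; with MySQL you would obtain the connection from a MySQL driver instead (e.g. mysql-connector), but the cursor calls stay the same. Table and column names are illustrative.

```python
import sqlite3

# An in-memory database stands in for the MySQL server in this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, myuser TEXT, summary TEXT)")

# add a record
cur.execute("INSERT INTO comments (myuser, summary) VALUES (?, ?)",
            ("Test", "TestSummary"))

# edit the record
cur.execute("UPDATE comments SET summary = ? WHERE myuser = ?",
            ("Updated", "Test"))
print(cur.execute("SELECT myuser, summary FROM comments").fetchone())  # ('Test', 'Updated')

# delete the record
cur.execute("DELETE FROM comments WHERE myuser = ?", ("Test",))
print(cur.execute("SELECT COUNT(*) FROM comments").fetchone()[0])      # 0

conn.commit()
conn.close()
```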
ASSIGNMENT NO. 9
TITLE :- Design and develop MongoDB Queries using CRUD operations. (Use CRUD
operations, SAVE method, logical operators).
1. Create collection (book).
THEORY :-
MongoDB is a cross-platform, document-oriented database that provides high performance,
high availability, and easy scalability. MongoDB works on the concepts of collections and
documents.
Database
Database is a physical container for collections. Each database gets its own set of files on the file
system. A single MongoDB server typically has multiple databases.
Collection
Collection is a group of MongoDB documents. It is the equivalent of an RDBMS table. A
collection exists within a single database. Collections do not enforce a schema. Documents
within a collection can have different fields. Typically, all documents in a collection are of
similar or related purpose.
Document
A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema
means that documents in the same collection do not need to have the same set of fields or
structure, and common fields in a collection's documents may hold different types of data.
The following table shows the relationship of RDBMS terminology with MongoDB.

RDBMS        MongoDB
Database     Database
Table        Collection
Tuple/Row    Document
Column       Field
Primary Key      Primary Key (default key _id provided by MongoDB itself)
mysqld/oracle    mongod (database server)
mysql/sqlplus    mongo (database client)
Any relational database has a typical schema design that shows the number of tables and the
relationships between these tables, while in MongoDB there is no concept of relationship.
Advantages of MongoDB over RDBMS-
1. Schema less − MongoDB is a document database in which one collection holds different
documents. The number of fields, the content and the size of documents can differ from
one document to another.
2. Structure of a single object is clear.
3. No complex joins.
4. Deep query-ability. MongoDB supports dynamic queries on documents using a
document-based query language that's nearly as powerful as SQL.
5. Tuning.
6. Ease of scale-out − MongoDB is easy to scale.
7. Conversion/mapping of application objects to database objects not needed.
8. Uses internal memory for storing the (windowed) working set, enabling faster access of
data.
Start MongoDB
sudo service mongodb start
Stop MongoDB
sudo service mongodb stop
Restart MongoDB
sudo service mongodb restart
To use MongoDB, run the following command.
mongo
This will connect you to the running MongoDB instance.
In MongoDB, insert operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
For examples, see Insert Documents.
Read Operations
Read operations retrieve documents from a collection, i.e., they query a collection for
documents. MongoDB provides the following method to read documents from a collection:
db.collection.find()
You can specify query filters or criteria that identify the documents to return.
Query Documents
Query on Embedded/Nested Documents
Query an Array
Query an Array of Embedded Documents
Update Operations
Update operations modify existing documents in a collection. MongoDB provides the following
methods to update documents of a collection:
In MongoDB, update operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
You can specify criteria, or filters, that identify the documents to update. These filters use the
same syntax as read operations.
Delete Operations
Delete operations remove documents from a collection. MongoDB provides the following
methods to delete documents of a collection:
In MongoDB, delete operations target a single collection. All write operations in MongoDB
are atomic on the level of a single document.
You can specify criteria, or filters, that identify the documents to remove. These filters use the
same syntax as read operations.
PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod;
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> show dbs;
local 0.078125GB
> use pratiksha
switched to db pratiksha
> show collections;
> db.createcollection("library");
Thu Jan 1 11:25:28.500 TypeError: Property 'createcollection' of object pratiksha is not a
function
> db.createcollection("library");
Thu Jan 1 11:26:03.678 TypeError: Property 'createcollection' of object pratiksha is not a
function
> db.createCollection("library");
{ "ok" : 1 }
> db.library.insert({"bid":1,"name":"c++"});
> db.library.insert({"bid":2,"name":"sepm","author":"pressman"});
> db.library.insert({"bid":3,"name":"cn","author":"frouzen","cost":700});
> db.library.find();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 700 }
> db.library.find().pretty();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 700
}
> db.library.update({"cost":700},{$set :{"cost":500}})
> db.library.find().pretty();
{ "_id" : ObjectId("495c5b25bd217c3854101d6e"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
> db.library.remove({"bid":1});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
> db.library.count();
2
> db.library.insert({"bid":1,"name":"c++"});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
> db.library.find().sort({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
> db.library.insert({"bid":4,"name":"sepm","author":"pressman","cost":500});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
> db.library.find().sort({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
> db.library.find({"cost":500})
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
> db.library.find({"bid":1})
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
> db.library.find({$and:[{"name":"sepm"},{"cost":500}]}).pretty()
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
> db.library.insert({"bid":5,"name":"c++","cost":800});
> db.library.insert({"bid":6,"name":"dbms","cost":900});
> db.library.find().pretty();
{
"_id" : ObjectId("495c5b37bd217c3854101d6f"),
"bid" : 2,
"name" : "sepm",
"author" : "pressman"
}
{
"_id" : ObjectId("495c5b47bd217c3854101d70"),
"bid" : 3,
"name" : "cn",
"author" : "frouzen",
"cost" : 500
}
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{
"_id" : ObjectId("495c5f31bd217c3854101d72"),
"bid" : 4,
"name" : "sepm",
"author" : "pressman",
"cost" : 500
}
{
"_id" : ObjectId("495c664ebd217c3854101d73"),
"bid" : 5,
"name" : "c++",
"cost" : 800
}
{
"_id" : ObjectId("495c6667bd217c3854101d74"),
"bid" : 6,
"name" : "dbms",
"cost" : 900
}
> db.library.find({$or:[{"cost":800},{"cost":900}]}).pretty()
{
"_id" : ObjectId("495c664ebd217c3854101d73"),
"bid" : 5,
"name" : "c++",
"cost" : 800
}
{
"_id" : ObjectId("495c6667bd217c3854101d74"),
"bid" : 6,
"name" : "dbms",
"cost" : 900
}
> db.library.find({"cost":{$ne:500}})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
> db.library.find({$nor:[{"cost":500},{"author":"frouzen"}]})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
> db.library.find({"cost":{$not:{$gt:800}}})
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
> db.library.save({"bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
> db.library.save({"_id":"bid":16,"name":"Java"})^C
bye
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> use DB;
switched to db DB
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
> db.library.save({"_id":"495c6e1dbd217c3854101d75","bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75", "bid" : 16, "name" : "Java" }
> db.library.save({"_id":"495c6e1dbd217c3854101d75"},{"bid":16,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 16, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
> db.library.save({"_id":"495c6e1dbd217c3854101d75"},{"bid":75,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
> db.library.save({"_id":ObjectId("495c6e1dbd217c3854101d75"),"bid":75,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
> db.library.save({"_id":ObjectId("495c6e1dbd217c3854101d71"),"bid":10,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 1, "name" : "c++" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }
> db.library.save({"_id":ObjectId("495c5c9cbd217c3854101d71"),"bid":12,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 12, "name" : "Java" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }
> db.library.save({"_id":ObjectId("495c5c9cbd217c3854101d71"),"bid":22,"name":"Java"})
> db.library.find()
{ "_id" : ObjectId("495c5b37bd217c3854101d6f"), "bid" : 2, "name" : "sepm", "author" :
"pressman" }
{ "_id" : ObjectId("495c5b47bd217c3854101d70"), "bid" : 3, "name" : "cn", "author" :
"frouzen", "cost" : 500 }
{ "_id" : ObjectId("495c5c9cbd217c3854101d71"), "bid" : 22, "name" : "Java" }
{ "_id" : ObjectId("495c5f31bd217c3854101d72"), "bid" : 4, "name" : "sepm", "author" :
"pressman", "cost" : 500 }
{ "_id" : ObjectId("495c664ebd217c3854101d73"), "bid" : 5, "name" : "c++", "cost" : 800 }
{ "_id" : ObjectId("495c6667bd217c3854101d74"), "bid" : 6, "name" : "dbms", "cost" : 900 }
{ "_id" : ObjectId("495c6e1dbd217c3854101d75"), "bid" : 75, "name" : "Java" }
{ "_id" : "495c6e1dbd217c3854101d75" }
{ "_id" : ObjectId("495c6e1dbd217c3854101d71"), "bid" : 10, "name" : "Java" }
>
ASSIGNMENT NO. 10
TITLE :-
Implement aggregation and indexing with suitable example using MongoDB.
1. Create document emp (eid, ename, dept, salary).
2. Find total number of employees in a department.
THEORY :-
Introduction to Indexing-
Indexes are data structures that store a small portion of a collection's data set in a form that is easy to traverse. MongoDB uses indexes to execute queries efficiently.
Indexes help MongoDB find documents that match the query criteria without performing a
collection scan. If a query has an appropriate index, MongoDB uses the index and limits the
number of documents it examines.
Indexes store field values sorted by value. The order in which index entries are stored supports operations such as equality matches and range-based queries, and MongoDB can sort and return results by using the sequential order of the index.
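The role of this ordering can be sketched in plain JavaScript. The snippet below is a toy model (a sorted array with binary search), not MongoDB's actual B-tree implementation, but it shows why an ordered structure can answer equality and range queries without scanning every document.

```javascript
// Toy model of an ordered index: an array of [fieldValue, doc] pairs
// kept sorted by value. MongoDB actually uses B-trees; this only
// illustrates why ordering supports equality and range queries.
function buildIndex(docs, field) {
  return docs
    .map(function (d) { return [d[field], d]; })
    .sort(function (a, b) { return a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0; });
}

// Range query: binary-search the first entry >= min, then walk
// forward while entries stay <= max -- no full collection scan.
function rangeQuery(index, min, max) {
  var lo = 0, hi = index.length;
  while (lo < hi) {
    var mid = (lo + hi) >> 1;
    if (index[mid][0] < min) lo = mid + 1; else hi = mid;
  }
  var out = [];
  for (var i = lo; i < index.length && index[i][0] <= max; i++) {
    out.push(index[i][1]);
  }
  return out;
}
```

For example, over books with costs 500, 800, and 900, a range query for costs between 500 and 800 touches only the two matching entries.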
MongoDB indexes are similar to indexes in other databases. MongoDB defines indexes at the collection level, on any field or subfield of the documents.
In the next section, we will discuss index types.
Types of Index-
Properties of Index-
MongoDB supports indexes on any document field in a collection. By default, the _id field of every collection is indexed. In addition, applications and users can add indexes to support their queries and operations.
MongoDB supports both single-field and multiple-field (compound) indexes, depending on the operations the index must support.
db.items.createIndex( { "item" : 1 } )
The command given above is used to create an index on the item field for the items collection.
In the next section, we will discuss how to create single field indexes on embedded documents.
You can index top-level fields within a document. Similarly, you can create indexes within
embedded document fields.
{ "_id" : 3, "item" : "Book", "available" : true, "soldQty" : 144821, "category" : "NoSQL",
"details" : { "ISDN" : "1234", "publisher" : "XYZ Company" }, "onlineSale" : true }
The structure shown above refers to a document stored in a collection. In the document, the
details field depicts an embedded document that has two embedded fields— ISDN and publisher.
db.items.createIndex( { "details.ISDN": 1 } )
To create an index on the ISDN field of the embedded document called "details", perform the query shown above.
In the next section, we will discuss compound indexes.
Compound Indexes-
MongoDB supports compound indexes to query multiple fields. A compound index contains
multiple single field indexes separated by a comma.
db.products.createIndex( { "item": 1, "stock": 1 } )
The sort order of fields in a compound index is crucial. For the index above, documents are first sorted by the item field value and then, within each item value, further sorted by the stock field values. (Similarly, for a compound index such as { userid: 1, score: -1 }, documents are first organized by userid and, within each userid, scores are arranged in descending order.)
For a compound index, MongoDB limits the fields to a maximum of 31.
Index Prefixes
Index prefixes are the beginning subsets of a compound index's fields, always starting from the first field.
{ "item": 1, "available": 1, "soldQty": 1 }
For the compound index above, the prefixes are { "item": 1 } and { "item": 1, "available": 1 }.
Sort Order-
In MongoDB, you can use the sort operations to manage the sort order. You can retrieve
documents based on the sort order in an index.
Following are the characteristics of a sort order:
1. If sorted documents cannot be obtained from an index, the results will get sorted in the
memory.
2. Sort operations executed using an index show better performance than those executed
without using an index.
3. Sort operations performed without an index get terminated after exhausting 32 MB of
memory.
4. Indexes store field references in the ascending or descending sort order.
5. Sort order is not important for single-field indexes because MongoDB can traverse the
index in either direction.
6. Sort order is important for compound indexes because it helps determine if the index can
support a sort operation
To process queries faster, ensure that your indexes fit into your system RAM. This helps the system avoid reading the indexes from the hard disk.
To confirm the index size, use db.collection.totalIndexSize(), which returns the size in bytes. To ensure the index fits in RAM, you must have more than the required RAM available. In addition, you must have RAM available for the rest of the working set.
For multiple collections, check the size of all indexes across all collections. The indexes and the
working sets both must fit in the RAM simultaneously.
Multi-Key Indexes-
When indexing a field containing an array value, MongoDB creates separate index entries for
each array component. These multikey indexes in queries match array elements with documents
containing arrays and select them.
You can construct multikey indexes for arrays holding scalar values, such as strings, numbers,
and nested documents.
db.coll.createIndex( { <field>: < 1 or -1 > } )
To create a multikey index, use the db.collection.createIndex() method as shown above.
If the indexed field contains an array, MongoDB automatically decides to either create a
multikey index or not create one. You need not specify the multikey type explicitly.
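This unwinding behavior can be sketched in plain JavaScript (a toy model of the idea, not MongoDB internals): one index entry is produced per array element, so a query matching any single element finds the whole document.

```javascript
// Produce multikey-style index entries for one document: an array
// field yields one entry per element; a scalar field yields one entry.
function multikeyEntries(doc, field) {
  var v = doc[field];
  var values = Array.isArray(v) ? v : [v];
  return values.map(function (value) {
    return { key: value, docId: doc._id };
  });
}
```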
Compound Multi-Key Indexes-
In a compound multikey index, each indexed document can have at most one indexed field whose value is an array. If more than one field has an array value, you cannot create a compound multikey index.
{ _id: 1, product_id: [ 1, 2 ], retail_id: [ 100, 200 ], category: "both fields are arrays" }
An example document structure is shown above. In this collection, both the product_id and retail_id fields are arrays; therefore, you cannot create a compound multikey index.
Note that a shard key index and a hashed index cannot be a multikey index.
Hashed Indexes-
1. The hashing function collapses embedded documents and computes a single hash for the
entire field value.
2. The hashing function does not support multi-key indexes.
3. Hashed indexes support sharding: using a hashed shard key to shard a collection ensures
an even distribution of data.
4. Hashed indexes support equality queries; however, range queries are not supported.
You cannot create a unique or compound index by taking a field whose type is hashed. However,
you can create a hashed and non-hashed index for the same field. MongoDB uses the scalar
index for range queries.
db.items.createIndex( { item: "hashed" } )
You can create a hashed index using the operation given above. This will create a hashed index
for the items collection on the item field.
TTL Indexes-
TTL indexes automatically delete machine-generated data. You can create a TTL index by
combining the db.collection.createIndex() method with the expireAfterSeconds option on a field
whose value is either a date or an array that contains date values.
db.eventlog.createIndex( { "lastModifiedDate": 1 }, { expireAfterSeconds: 3600 } )
For example, to create a TTL index on the lastModifiedDate (read as last modified date) field of
the eventlog collection, use the operation shown above in the mongo shell.
The TTL background thread runs on both primary and secondary nodes. However, it deletes
documents only from the primary node.
TTL indexes have the following limitations.
You cannot create a TTL index for a field if a non-TTL index already exists for the same field. If
you want to change a non-TTL single-field index to a TTL index, first drop the index and
recreate the index with the expireAfterSeconds option.
Unique Indexes-
To create a unique index, use the db.collection.createIndex() method and set the unique option to
true.
db.items.createIndex( { "item": 1 }, { unique: true } )
For example, to create a unique index on the item field of the items collection, execute the
operation shown above in the mongo shell. By default, unique is false on MongoDB indexes.
If you use the unique constraint on a compound index, MongoDB enforces uniqueness on the combination of all fields that are part of the compound key.
Sparse Indexes-
Sparse indexes contain entries only for documents that have the indexed field, even if that field contains a null value. A sparse index skips documents that do not contain the indexed field at all. Non-sparse indexes do not skip these documents and store null values for them.
To create a sparse index, use the db.collection.createIndex() method and set the sparse option to
true.
db.addresses.createIndex( { "xmpp_id": 1 }, { sparse: true } )
In the example given above, the operation in the mongo shell creates a sparse index on the xmpp_id field of the addresses collection. If a sparse index would produce an incomplete result set for a query or sort operation, MongoDB does not use that index unless it is specified in the hint() method.
{ x: { $exists: false } }
For example, the query given above on the x field will not use a sparse index on x unless it receives an explicit hint.
An index that combines both sparse and unique does not allow the collection to include
documents having duplicate field values for a single field. However, it allows multiple
documents that omit the key.
Text Indexes-
Text indexes in MongoDB help search for text strings in documents of a collection. You can
create a text index for field or fields containing string values or an array of strings.
To access text indexes, trigger a query using the $text query operator. When you create text indexes for multiple fields, specify the individual fields or use the wildcard specifier ($**).
db.collection.createIndex({subject: "text",content: "text"})
To create text indexes on the subject and content fields, perform the query given above. The text
index organizes all strings in the subject and content field, where the field value is either a string
or an array of string elements.
To allow text search for all fields with strings, use the wildcard specifier ($**). This indexes all
fields containing string content.
db.collection.createIndex({ "$**": "text" },{ name: "TextIndex" })
The second example given above indexes any string value available in each field of each
document in a collection and names the indexes as TextIndex.
Text Search-
MongoDB supports various languages for text search. The text indexes drop language-specific stop words, such as "the", "an", "a", and "and", and use simple language-specific suffix stemming.
You can also choose to specify a language for text search.
If you specify the language value as "none", then the text index uses simple tokenization without
any stop word and stemming.
db.customer_info.createIndex({ "item": "text" }, { default_language: "spanish" })
In the query given above, you are enabling the text search option for the item field of the
customer_info collection with Spanish as the default language.
If the index language is English, text indexes are case-insensitive for all alphabets from A to Z.
The text index and the $text operator supports the following:
1. Two-letter language codes defined in ISO 639-1.
2. Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian,
Portuguese, Romanian, Russian, Spanish, Swedish, and Turkish
Note that a compound text index cannot include special index types, such as multi-key or
geospatial Index fields.
Index Creation-
MongoDB provides several options to create indexes. By default, when indexes are created, all
other operations on a database are blocked.
For example, when indexes on a collection are created, the database becomes unavailable for any
read or write operation until the index creation process completes.
Read and write operations on the database are queued until the index build completes. Therefore, for index builds that may take a long time, consider building the index in the background so that MongoDB remains available during the operation.
db.items.createIndex( {item:1},{background: true})
db.items.createIndex({category:1}, {sparse: true, background: true})
The commands given above are used for this purpose. By default, background is false when building MongoDB indexes.
When MongoDB is creating indexes in the background for a collection, you cannot perform
other administrative operations involving that collection.
For example, you cannot run repairDatabase, drop the collection with db.collection.drop(), or run compact. If you attempt any of these operations, you will receive an error.
The index build process in the background uses an incremental approach and is slower than the
normal ―foreground‖ index build process. The speed of the index build process depends on the
size of the index. If the index size is bigger than the RAM of the system, the process takes more
time than the foreground process.
Building indexes can impact your database performance:
To avoid performance issues, you can use the getIndexes() method to ensure that your application checks for the required indexes at startup.
You can also use an equivalent method for your driver and ensure it terminates an operation if
the proper indexes do not exist. When building indexes, use separate application codes and
designated maintenance windows.
Index Creation on Replica Set-
Typically, background index operations on a secondary replica set begin after the index building
process completes in the primary.
If the index build process is running in the background on the primary, the same will happen on
the secondary nodes as well.
If you want to build large indexes on secondaries, you can build the index by restarting one
secondary at a time in a standalone mode.
After the index build is complete, restart as a member of the replica set, allow it to catch up with
the other members of the set, and then build the index on the next secondary. When all the
secondaries have the new index, step down the primary, restart it as a standalone, and build the
index on the former primary.
For a secondary to catch up with the primary, the time taken to build the index on the secondary must fit within its oplog window. So that secondaries can catch up, index creation on secondary nodes always happens in the foreground in the "recovering" mode.
Remove Indexes-
To remove all indexes except the _id index from a collection, use the db.collection.dropIndexes() method.
Modify Indexes-
To modify an index, first, drop the index and then recreate it. Perform the following steps to
modify an index.
Drop Index: Execute the query given below to return a document showing the operation status.
db.orders.dropIndex({ "cust_id" : 1, "ord_date" : -1, "items" : 1 })
Recreate the Index: Execute the query given below to return a document showing the status of
the results.
db.orders.createIndex({ "cust_id" : 1, "ord_date" : -1, "items" : -1 })
Rebuild Indexes-
In addition to modifying indexes, you can also rebuild them. To rebuild all indexes of a
collection, use the db.collection.reIndex() method. This will drop all indexes including _id and
rebuild all indexes in a single operation. The operation takes the form db.items.reIndex().
To view the indexing process status, type the db.currentOp() command in the mongo shell. The message field shows the percentage of build completion.
To abort an ongoing index build, use the db.killOp() method in the mongo shell. For index builds, the effects of db.killOp() may occur only after much of the build operation has completed.
Note that a replicated index built on the secondary replica set cannot be aborted.
Listing Indexes-
You can list all indexes of a collection or a database. To get a list of all indexes of a collection, use db.collection.getIndexes() or a similar method for your driver.
For example, to view all indexes on the items collection, use the db.items.getIndexes() method.
db.getCollectionNames().forEach(function(collection) {
   indexes = db[collection].getIndexes();
   print("Indexes for " + collection + ":");
   printjson(indexes);
});
To list all indexes of collections, you can use the operation in the mongo shell as shown above.
Measure Index Use-
Typically, query performance indicates an index usage. MongoDB provides a number of tools to
study query operations and observe index use for your database.
The explain() method can be used to print information about query execution. The explain
method returns a document that explains the process and indexes used to return a query. This
helps to optimize a query.
Using the db.collection.explain() or the cursor.explain() method helps measure index usages.
Control Index Use-
In case you want to force MongoDB to use particular indexes for querying documents, then you
need to specify the index with the hint() method.
The hint method can be appended in the find() method as well.
db.items.find({ item: "Book", available: true }).hint({ item: 1 })
Consider the example given above. This command queries for documents whose item field value is "Book" and whose available field is true. Here, MongoDB's query planner is directed to use the index created on the item field.
To view the execution statistics for a specific index, use the explain method in the find
command.
db.items.find({ item: "Book", available: true }).hint({ item: 1 }).explain("executionStats")
db.items.explain("executionStats").find({ item: "Book", available: true }).hint( { item: 1 } )
For example, consider the queries given above.
If you want to prevent MongoDB from using any index, specify the $natural (read as natural)
operator to the hint() method.
db.items.find({ item: "Book", available: true }).hint({ $natural: 1 }).explain("executionStats")
For example, use the query given above.
Index Use Reporting-
MongoDB provides different metrics to report index use and operation. You can consider these
metrics when analyzing index use for your database. These metrics are printed using the
following commands.
serverStatus: serverStatus prints the two metrics.
1. scanned: Displays the documents that MongoDB scans in the index to carry out the
operation. If the number of the scanned document is higher than the number of returned
documents, this indicates that the database has scanned many objects to find the target
object. In such cases, consider creating an index to improve this.
2. scanAndOrder: A boolean that is true when a query cannot use the order of documents
in the index for returning sorted results; MongoDB must then sort the documents after it
receives them from a cursor. If scanAndOrder is false, MongoDB can use the order of the
documents in an index to return the sorted results.
collStats: collStats prints the two metrics.
1. totalIndexSize: Returns the total index size in bytes
2. indexSizes: Explains the size of the data allocated for each index
dbStats: dbStats prints the two metrics.
1. dbStats.indexes: Contains a count of the total number of indexes across all collections in
the database
2. dbStats.indexSize: The total size in bytes of all indexes created in this database
Geospatial Index-
With the increased usage of handheld devices, geospatial queries are becoming increasingly
frequent for finding the nearest data points for a given location.
MongoDB provides geospatial indexes for coordinating such queries. Suppose you want to find
the nearest coffee shop from your current location. You need to create a special index to
efficiently perform such queries because it needs to search in two dimensions— longitude and
latitude.
A geospatial index is created using the createIndex function, passing "2d" or "2dsphere" as the value instead of 1 or -1. To query geospatial data, you first need to create a geospatial index.
db.collection.createIndex( { <location field>: "2dsphere" } )
In the index specification document for the db.collection.createIndex() method, as shown above,
specify the location field as the index key and specify the string literal "2dsphere" as the value.
A compound index can include a 2dsphere index key in combination with non-geospatial index
keys.
MongoDB’s Geospatial Query Operators-
The geospatial query operators in MongoDB lets you perform the following queries.
Inclusion Queries
1. Return the locations included within a specified polygon.
2. Use the operator $geoWithin. The 2d and 2dsphere indexes support this query.
3. Although MongoDB does not require an index to perform an inclusion query, an index
can enhance query performance.
Intersection Queries
1. Return locations intersecting with a specified geometry.
2. Use the $geoIntersects operator and return the data on a spherical surface.
Proximity Queries
1. Return various points closer to a specified point.
2. Use the $near operator that requires a 2d or 2dsphere index.
$geoWithin Operator-
The $geoWithin operator is used to query location data that falls within a GeoJSON polygon. To get a response, the location data needs to be stored in GeoJSON format.
db.<collection>.find( { <location field>: { $geoWithin: { $geometry: { type: "Polygon", coordinates: [ <coordinates> ] } } } } )
You can use the syntax given above for the $geoWithin operator.
db.places.find( { loc :{ $geoWithin : { $geometry :{ type : "Polygon" ,
coordinates :[ [[ 0 , 0 ] ,[ 3 , 6 ] ,[ 6 , 1 ] ,[ 0 , 0 ]] ]} } } } )
The example given above selects all points and shapes that exist entirely within a GeoJSON
polygon.
Proximity Queries in MongoDB-
Proximity queries return the points closest to a specified point and sort the results by their proximity to that point.
You need to create a 2dsphere index to perform a proximity query on GeoJSON data points. To query the data, you can use either the $near operator or the geoNear command.
db.<collection>.find( { <location field>: { $near: { $geometry: { type: "Point", coordinates: [ <longitude>, <latitude> ] }, $maxDistance: <distance in meters> } } } )
The first syntax given above is an example of the $near operator.
db.runCommand( { geoNear: <collection>, near: { type: "Point", coordinates: [ <longitude>, <latitude> ] }, spherical: true } )
The geoNear command uses the second syntax given above. This command offers additional options and returns more information than the $near operator.
Aggregation :-
Operations that process data sets and return calculated results are called aggregations. MongoDB
provides data aggregations that examine data sets and perform calculations on them. Aggregation
is run on the mongod instance to simplify application codes and limit resource requirements.
Similar to queries, aggregation operations in MongoDB use collections of documents as input and return results in the form of one or more documents. The aggregation framework in MongoDB is based on data processing pipelines.
Documents pass through multi-stage pipelines and get transformed into an aggregated result. The
most basic pipeline stage in the aggregation framework provides filters that function like queries.
It also provides document transformations that modify the output document.
The pipeline operations group and sort documents by defined field or fields. In addition, they
perform aggregation on arrays.
Pipeline stages can use operators to perform tasks such as calculate the average or concatenate a
string. The pipeline uses native operations within MongoDB to allow efficient data aggregation
and is the favored method for data aggregation.
With the help of the aggregate function, you can perform complex aggregation operations, such
as finding out the total transaction amount for each customer.
For example, suppose "orders" is a collection with three fields: cust_id, amount, and status. In the $match stage, you filter out all documents whose status field value is not "A"; in the $group stage, you aggregate the amount field for each cust_id.
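In the mongo shell, the operation just described would look like db.orders.aggregate([{ $match: { status: "A" } }, { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }]). The same two-stage logic can be sketched in plain JavaScript over an in-memory array:

```javascript
// Simulate the $match and $group stages on an array of order documents.
function totalPerCustomer(orders) {
  // $match stage: keep only documents whose status is "A".
  var matched = orders.filter(function (o) { return o.status === "A"; });
  // $group stage: sum the amount field for each cust_id.
  var totals = {};
  matched.forEach(function (o) {
    totals[o.cust_id] = (totals[o.cust_id] || 0) + o.amount;
  });
  return totals;
}
```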
Pipeline Operators and Indexes :-
The aggregate command in MongoDB functions on a single collection and logically passes the collection through the aggregation pipeline. You can optimize the operation and avoid scanning the entire collection by using the $match, $limit, and $skip stages.
You may require only a subset of data from a collection to perform an aggregation operation.
Therefore, use the $match, $limit, and $skip stages to filter the documents. When placed at the
beginning of a pipeline, the $match operation scans and selects only the matching documents in a
collection.
Placing a $match before a $sort at the start of the pipeline is logically equivalent to a single query with a sort and can use an index. Therefore, it is recommended to place $match operators at the beginning of the pipeline.
Aggregate Pipeline Stages :-
Pipeline stages appear in an array. Documents are passed through the pipeline stages in a proper
order one after the other. Barring $out and $geoNear, all stages of the pipeline can appear
multiple times.
The db.collection.aggregate() method provides access to the aggregation pipeline and returns a cursor over result sets of any size.
The various pipeline stages are as follows.
1. $project: This stage adds new fields or removes existing fields, restructuring each
document in the stream. It returns one output document for each input document.
2. $match: It filters the document stream and allows only matching documents to pass into
the next stage without any modification. $match uses the standard MongoDB queries. For
each input document, it returns either one output document if there is a match or zero
documents when there is no match.
3. $group: This stage groups documents based on the specified identifier expression and
applies logic known as accumulator expression to compute the output document.
4. $sort: This stage rearranges the order of the document stream using specified sort keys.
The documents remain unaltered even though the order changes. This stage provides one
output document for each input document.
5. $skip: This stage skips the first n documents where n is the specified skip number. It
passes the remaining documents without any modifications to the pipeline. For each input
document, it returns either zero documents for the first n documents or one document.
6. $limit: It passes the first n number of documents without any modifications to the
pipeline. For each input document, this stage returns either one document for the first n
documents or zero documents after the first n documents.
7. $unwind: It deconstructs an array field in the input documents to return a document for
each element. Each output document replaces the array with an element value. For each
input document, it returns n documents where n is the number of array elements and can
be zero for an empty array.
Aggregation Example :-
The aggregation operation given below returns all states with a total population greater than 10 million.
db.zipcodes.aggregate( [{ $group: { _id: "$state", totalPop: { $sum: "$pop" } } },
{ $match: { totalPop: { $gte: 10*1000*1000 } } }] )
This example depicts that the aggregation pipeline contains the $group stage followed by the
$match stage.
In this operation, the $group stage does three things:
1. Groups the documents of the zipcodes collection by the state field
2. Calculates the totalPop (total population) field for each state
3. Returns an output document for each unique state
The new per-state documents contain two fields: the _id field and the totalPop field. In this command, the $group stage applies the sum operation to the pop fields of the grouped documents, and the $match stage then filters the per-state results.
db.users.aggregate([{ $project :{month_joined : { $month : "$joined" },name : "$_id",_id :
0}},{ $sort : { month_joined : 1 } } ])
The second aggregation operation, shown above, returns usernames sorted by the month they joined. This kind of aggregation could help generate membership renewal notices.
MapReduce :-
MapReduce is a data processing model used for aggregation. To perform MapReduce operations,
MongoDB provides the MapReduce database command.
A MapReduce operation consists of two phases.
Map stage: Documents are processed and one or more objects are produced for each input
document.
Reduce stage: Outputs of the map operation are combined. Optionally, there can be an
additional stage to make final modifications to the result.
Similar to other aggregation operations, MapReduce can define a query condition to select the
input documents, and sort and limit the results.
The MapReduce function in MongoDB can be written as JavaScript codes.
The MapReduce operations:-
1. Accept a set of input documents from a collection, perform sort and limit operations,
and then start the map stage. At the end of a MapReduce operation, the result is generated
as documents that can be saved in a collection.
2. Associate values with a key using custom JavaScript functions. If a key has more than
one mapped value, the operation combines them into a single object, such as an array.
The use of custom JavaScript functions makes the MapReduce operations flexible. For example,
the map function can generate more than one key and value when processing documents.
Additionally, MapReduce operations can use a custom JavaScript finalize function to alter the results at the conclusion of the map and reduce phases, for example to perform further calculations.
If a collection is sharded, then you can use MapReduce to perform many complex aggregation
operations.
The orders in the collection have three fields—cust_id, amount and status. If you want to find out
the sum of the total amount for each customer, then use the MapReduce framework. In the map
stage, cust_id and amount will be generated as the key.
The values are further processed in the reduce stage, where each reducer receives a cust_id and an array of amounts as input. The reducer then computes the total amount and emits cust_id as the key and the order total as the value.
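The two phases can be sketched in plain JavaScript (a toy single-machine model; in MongoDB you would pass map and reduce functions of this shape to db.orders.mapReduce):

```javascript
// Map phase: emit one (cust_id, amount) pair per order document.
function mapOrder(order, emit) {
  emit(order.cust_id, order.amount);
}

// Reduce phase: combine all amounts emitted for a single key.
function reduceAmounts(key, values) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

// Driver: collect emitted values per key, then reduce each group.
function mapReduceOrders(orders) {
  var groups = {};
  orders.forEach(function (o) {
    mapOrder(o, function (k, v) {
      (groups[k] = groups[k] || []).push(v);
    });
  });
  var result = {};
  Object.keys(groups).forEach(function (k) {
    result[k] = reduceAmounts(k, groups[k]);
  });
  return result;
}
```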
Aggregation Operations:-
Aggregations are operations that manipulate data and return a computed result based on the input
document and a specific procedure. MongoDB performs aggregation operations on data sets.
Aggregation operations have limited scope compared to the aggregation pipeline and
MapReduce functions.
Aggregation operations provide the following semantics for common data processing options.
1. Count
The count operation returns the number of documents matching a query. The count command,
along with the two methods count() and cursor.count(), provides access to counts in the
mongo shell. For example, db.customer_info.count() counts all documents in the
customer_info collection.
2. Distinct
The distinct operation searches for documents matching a query and returns all unique values
for a field in the matched documents. The distinct command and the db.collection.distinct()
method execute this operation in the mongo shell.
db.customer_info.distinct("customer_name")
The syntax given above is an example of a distinct operation.
3. Group
Group operations accept a set of documents matching a given query as input, apply the
operation, and then return an array of documents with the computed results.
Group does not support sharded collection data. In addition, the result of a group operation
must not exceed 16 megabytes.
For example, a group operation can group documents by the field 'a', where 'a' is less than
three, and sum the field count for each group.
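A minimal plain-JavaScript sketch (runnable in Node, no MongoDB) of what the three single-purpose operations compute; the sample documents and the a/count fields follow the group example above and are otherwise made up:

```javascript
// In-memory stand-in for a collection.
const docs = [
  { a: 1, count: 4 },
  { a: 1, count: 6 },
  { a: 2, count: 3 },
  { a: 4, count: 2 },
];

// count: number of documents matching a query ({} matches everything).
const total = docs.length;

// distinct: unique values of a field across the matched documents.
const distinctA = [...new Set(docs.map(d => d.a))];

// group: match a query (a < 3), then sum the field `count` per value of `a`.
const groups = {};
for (const d of docs.filter(d => d.a < 3)) {
  groups[d.a] = (groups[d.a] || 0) + d.count;
}

console.log(total, distinctA, groups); // 4 [ 1, 2, 4 ] { '1': 10, '2': 3 }
```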
PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> show db
Tue Jan 1 00:14:49.606 don't know how to show [db] at src/mongo/shell/utils.js:847
> use local
switched to db local
> db.createCollection("emp")
{ "ok" : 1 }
> db.emp.insert({"eid":1,"ename":"Sonal","dept":"comp","project":"DBMS","cost":5000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"DBMS","cost":10000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"DBMS","cost":7000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"JAVA","cost":6000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"JAVA","cost":4000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"Testing","cost":8000});
> db.emp.find()
"ok" : 1
}
110
Database Management System Lab Manual
},
{
"_id" : "JAVA",
"Minimum cost of project" : 4000
},
{
"_id" : "DBMS",
"Minimum cost of project" : 5000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","Maximum cost of project" : {$max :
"$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"Maximum cost of project" : 8000
},
{
"_id" : "JAVA",
"Maximum cost of project" : 6000
},
{
"_id" : "DBMS",
"Maximum cost of project" : 10000
}
],
"ok" : 1
}
> db.emp.aggregate([{$group : {"_id":"$project","Average cost of project" : {$avg :
"$cost"}}}])
{
"result" : [
{
"_id" : "Testing",
"Average cost of project" : 8000
},
{
"_id" : "JAVA",
"Average cost of project" : 5000
},
{
"_id" : "DBMS",
"Average cost of project" : 7333.333333333333
}
],
"ok" : 1
}
> db.emp.createIndex({"eid":1})
> db.emp.createIndex({"ename":1})
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"eid" : 1
},
"ns" : "local.emp",
"name" : "eid_1"
},
{
"v" : 1,
"key" : {
"ename" : 1
},
"ns" : "local.emp",
"name" : "ename_1"
}
]
> db.emp.dropIndex("eid_1");
{ "nIndexesWas" : 3, "ok" : 1 }
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"ename" : 1
},
"ns" : "local.emp",
"name" : "ename_1"
}
]
> db.emp.dropIndex("ename_1");
{ "nIndexesWas" : 2, "ok" : 1 }
> db.emp.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "local.emp",
"name" : "_id_"
}
]
MapReduce :-
> db.emp.mapReduce(function(){emit(this.ename,this.cost);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 16000 }
{ "_id" : "Priya", "value" : 19000 }
{ "_id" : "Sonal", "value" : 5000 }
> db.emp.mapReduce(function(){emit(this.ename,1);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 2 }
{ "_id" : "Priya", "value" : 3 }
{ "_id" : "Sonal", "value" : 1 }
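The totals printed above can be checked outside MongoDB with a plain-Node sketch of the map, shuffle and reduce steps (the emp documents are copied from the inserts above; the helper names map/reduce/buckets are made up for illustration):

```javascript
// The emp documents inserted earlier, reduced to the two fields mapReduce uses.
const emp = [
  { ename: "Sonal", cost: 5000 },
  { ename: "Komal", cost: 10000 },
  { ename: "Priya", cost: 7000 },
  { ename: "Komal", cost: 6000 },
  { ename: "Priya", cost: 4000 },
  { ename: "Priya", cost: 8000 },
];

const map = e => [e.ename, e.cost];                       // emit(this.ename, this.cost)
const reduce = values => values.reduce((a, b) => a + b, 0); // Array.sum(values)

// Shuffle the emitted pairs into per-key buckets, then reduce each bucket.
const buckets = {};
for (const [k, v] of emp.map(map)) (buckets[k] = buckets[k] || []).push(v);
const result = Object.fromEntries(
  Object.entries(buckets).map(([k, vs]) => [k, reduce(vs)])
);

console.log(result); // { Sonal: 5000, Komal: 16000, Priya: 19000 }
```

These match the MapReduce_F documents shown in the transcript.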
ASSIGNMENT NO. 11
TITLE :-
Implement Map reduce operation with suitable example using MongoDB.
>db.collection.mapReduce(
   function() {emit(key,value);},                 // map function
   function(key,values) {return reduceFunction},  // reduce function
   {
      out: collection,
      query: document,
      sort: document,
      limit: number
   }
)
The map-reduce function first queries the collection, then maps the result documents to emit key-
value pairs, which are then reduced based on the keys that have multiple values.
In the above syntax −
1. map is a JavaScript function that maps a value with a key and emits a key-value pair
2. reduce is a JavaScript function that reduces or groups all the documents having the same
key
3. out specifies the location of the map-reduce query result
4. query specifies the optional selection criteria for selecting documents
5. sort specifies the optional sort criteria
6. limit specifies the optional maximum number of documents to be returned
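The optional query, sort and limit parameters run before the map stage. A plain-Node sketch (no MongoDB; the posts array, field names and values below are made up for illustration) of how they shape the map input:

```javascript
// Stand-in for a posts collection.
const posts = [
  { user_name: "mark", length: 120, status: "active" },
  { user_name: "tom", length: 80, status: "inactive" },
  { user_name: "tom", length: 200, status: "active" },
  { user_name: "mark", length: 50, status: "active" },
];

const input = posts
  .filter(p => p.status === "active")  // query: {status: "active"}
  .sort((a, b) => b.length - a.length) // sort: {length: -1}
  .slice(0, 2);                        // limit: 2

// Only the two longest active posts reach the map function.
console.log(input.map(p => p.user_name)); // [ 'tom', 'mark' ]
```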
Using MapReduce :-
Consider the following document structure storing user posts. The document stores the
user_name of the user and the status of the post.
{
   "post_text": "welcome…",
   "user_name": "mark",
   "status": "active"
}
Now, we will use a mapReduce function on our posts collection to select all the active posts,
group them on the basis of user_name and then count the number of posts by each user using the
following code −
>db.posts.mapReduce(
   function() { emit(this.user_name,1); },
   function(key, values) { return Array.sum(values) },
   {
      query: {status: "active"},
      out: "post_total"
   }
)
To see the result of this mapReduce query, the find operator can be appended to the same call −
>db.posts.mapReduce(
   function() { emit(this.user_name,1); },
   function(key, values) { return Array.sum(values) },
   {
      query: {status: "active"},
      out: "post_total"
   }
).find()
The above query gives the following result, which indicates that both users tom and mark have
two posts in the active state −
{"_id":"tom","value":2}
{"_id":"mark","value":2}
In a similar manner, MapReduce queries can be used to construct large, complex aggregation
queries. The use of custom JavaScript functions makes MapReduce very flexible and powerful.
PROGRAM :-
[student@localhost ~]$ su
Password:
[root@localhost student]# systemctl start mongod
[root@localhost student]# mongo
MongoDB shell version: 2.4.6
connecting to: test
> show db
Tue Jan 1 00:14:49.606 don't know how to show [db] at src/mongo/shell/utils.js:847
> use local
switched to db local
> db.createCollection("emp")
{ "ok" : 1 }
> db.emp.insert({"eid":1,"ename":"Sonal","dept":"comp","project":"DBMS","cost":5000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"DBMS","cost":10000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"DBMS","cost":7000});
> db.emp.insert({"eid":2,"ename":"Komal","dept":"comp","project":"JAVA","cost":6000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"JAVA","cost":4000});
> db.emp.insert({"eid":3,"ename":"Priya","dept":"comp","project":"Testing","cost":8000});
> db.emp.find()
> db.emp.mapReduce(function(){emit(this.ename,this.cost);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 16000 }
{ "_id" : "Priya", "value" : 19000 }
{ "_id" : "Sonal", "value" : 5000 }
> db.emp.mapReduce(function(){emit(this.ename,1);},function(key,values){return
Array.sum(values)},{out:"MapReduce_F"}).find();
{ "_id" : "Komal", "value" : 2 }
{ "_id" : "Priya", "value" : 3 }
{ "_id" : "Sonal", "value" : 1 }
ASSIGNMENT NO. 12
TITLE :-
Write a program to implement MongoDB database connectivity with PHP/Python/Java.
Implement database navigation operations (add, delete, edit, etc.) using ODBC/JDBC.
THEORY :-
Installation-
Before you start using MongoDB in your Java programs, you need to make sure that you have
the MongoDB Java driver and Java set up on the machine. Now, let us check how to set up the
MongoDB Java driver.
1. You need to download the driver jar (mongo.jar). Make sure to download its latest
release.
To connect to a database, you need to specify the database name; if the database doesn't exist,
MongoDB creates it automatically.
Following is the code snippet to connect to the database −
import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
System.out.println("Connected to the database successfully");
System.out.println("Credentials ::" + credential);
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
Now, let's compile and run the above program to create our database myDb as shown below.
$javac ConnectToDB.java
$java ConnectToDB
On executing, the above program gives you the following output.
Connected to the database successfully
Credentials ::MongoCredential{
mechanism = null,
userName = 'sampleUser',
source = 'myDb',
password = <hidden>,
mechanismProperties = {}
}
Create a Collection-
To create a collection, the createCollection() method of the
com.mongodb.client.MongoDatabase class is used.
Following is the code snippet to create a collection −
import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Creating a collection
database.createCollection("sampleCollection");
System.out.println("Collection created successfully");
Getting/Selecting a Collection-
To get or select a collection from the database, the getCollection() method of the
com.mongodb.client.MongoDatabase class is used. Following is the code snippet to get a
collection −
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Creating a collection
database.createCollection("myCollection");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("myCollection");
System.out.println("Collection myCollection selected successfully");
Insert a Document-
To insert a document into the collection, the insertOne() method of the
com.mongodb.client.MongoCollection class is used. Following is the code snippet to insert a
document −
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("sampleCollection");
// Creating a document
Document document = new Document("title", "MongoDB")
   .append("id", 1)
   .append("description", "database")
   .append("likes", 100)
   .append("url", "http://www.tutorialspoint.com/mongodb/")
   .append("by", "tutorials point");
// Inserting the document into the collection
collection.insertOne(document);
System.out.println("Document inserted successfully");
Retrieve All Documents-
To select all documents from the collection, the find() method of the
com.mongodb.client.MongoCollection class is used. This method returns a cursor, so you need
to iterate it. Following is the program to select all documents −
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("sampleCollection");
// Getting the iterable object
FindIterable<Document> iterDoc = collection.find();
int i = 1;
// Getting the iterator
Iterator it = iterDoc.iterator();
while (it.hasNext()) {
   System.out.println(it.next());
   i++;
}
On executing, the program prints each stored document, for example −
Document{{
   title = RethinkDB,
   id = 2,
   description = database,
   likes = 200,
   url = http://www.tutorialspoint.com/rethinkdb/,
   by = tutorials point
}}
Update Document-
To update a document in the collection, the updateOne() method of the
com.mongodb.client.MongoCollection class is used. Following is the program to update a
document −
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("sampleCollection");
// Updating a document: set likes to 150 where id is 1
collection.updateOne(Filters.eq("id", 1), Updates.set("likes", 150));
System.out.println("Document updated successfully");
// Retrieving the documents after the update
FindIterable<Document> iterDoc = collection.find();
int i = 1;
Iterator it = iterDoc.iterator();
while (it.hasNext()) {
   System.out.println(it.next());
   i++;
}
Delete a Document-
To delete a document from the collection, you need to use the deleteOne() method of the
com.mongodb.client.MongoCollection class.
Following is the program to delete a document −
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import java.util.Iterator;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("sampleCollection");
// Deleting the document with id 1
collection.deleteOne(Filters.eq("id", 1));
System.out.println("Document deleted successfully");
// Retrieving the remaining documents
FindIterable<Document> iterDoc = collection.find();
int i = 1;
Iterator it = iterDoc.iterator();
while (it.hasNext()) {
   System.out.println(it.next());
   i++;
}
Dropping a Collection-
To drop a collection from a database, you need to use the drop() method of
the com.mongodb.client.MongoCollection class.
Following is the program to delete a collection −
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Retrieving a collection
MongoCollection<Document> collection = database.getCollection("sampleCollection");
// Dropping the collection
collection.drop();
System.out.println("Collection dropped successfully");
Listing All the Collections-
To list all the collections in a database, you need to use the listCollectionNames() method of
the com.mongodb.client.MongoDatabase class.
Following is the program to list all the collections of a database −
import com.mongodb.client.MongoDatabase;
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
// Creating a Mongo client
MongoClient mongo = new MongoClient("localhost", 27017);
// Creating Credentials
MongoCredential credential = MongoCredential.createCredential("sampleUser", "myDb",
   "password".toCharArray());
// Accessing the database
MongoDatabase database = mongo.getDatabase("myDb");
// Printing every collection name in the database
for (String name : database.listCollectionNames()) {
   System.out.println(name);
}