NGT 11-2018,19
(2½ Hours)
[Total Marks: 75]
The CAP theorem states that when designing an application in a distributed environment there are
three basic requirements that exist, namely consistency, availability, and partition tolerance.
1. Consistency means that the data remains consistent after any operation is performed that
changes the data, and that all users or clients accessing the application see the same updated
data.
2. Availability means that the system is always available.
3. Partition Tolerance means that the system will continue to function even if it is partitioned
into groups of servers that are not able to communicate with one another.
d. What are the advantages and disadvantages of NoSQL databases?
Ans. Advantages of NoSQL.
1. High scalability : The traditional RDBMS scale-up approach (buying bigger servers) fails when the transaction rates and
fast-response requirements increase. In contrast to this, the new generation of
NoSQL databases is designed to scale out (i.e., to expand horizontally using low-end commodity
servers).
2. Manageability and administration : NoSQL databases are designed to work mostly with
automated repairs, distributed data, and simpler data models, leading to lower manageability and
administration overhead.
3. Low cost : NoSQL databases are typically designed to work with a cluster of cheap commodity
servers, enabling the users to store and process more data at a low cost.
4. Flexible data models : NoSQL databases have a very flexible data model, enabling them to
work with any type of data; they don’t comply with the rigid RDBMS data models. As a result,
any application changes that involve updating the database schema can be easily implemented.
Disadvantages of NoSQL
1. Maturity : Most NoSQL databases are pre-production versions with key features that are
still to be implemented. Thus, when deciding on a NoSQL database, you should analyze the
product properly to ensure the features are fully implemented and not still on the to-do list.
2. Support : Support is one limitation that you need to consider. Most NoSQL databases are
from start-ups which were open sourced. As a result, support is very minimal as compared to
the enterprise software companies and may not have global reach or support resources.
3. Limited Query Capabilities : Since NoSQL databases are generally developed to meet the
scaling requirement of the web-scale applications, they provide limited querying capabilities.
A simple querying requirement may involve significant programming expertise.
4. Administration : Although NoSQL is designed to provide a no-admin solution, it still
requires skill and effort for installing and maintaining the solution.
5. Expertise : Since NoSQL is an evolving area, expertise on the technology is limited in the
developer and administrator community.
e. What are the different categories of NoSQL database? Explain each with an example.
Ans.
iv. Find out a count of female users who stay in either India or USA.
db.users.find({"Gender":"F",$or:[{"Country":"India"},
{"Country":"USA"}]}).count()
If you want to find all students who are younger than 25 (Age < 25), you can execute the
following find with a selector:
db.students.find({"Age":{"$lt":25}})
2. $gt and $gte
The $gt and $gte operators stand for "greater than" and "greater than or equal to,"
respectively. Let's find all of the students with Age > 25. This can be achieved by executing
the following command:
db.students.find({"Age":{"$gt":25}})
3. $in and $nin
Let’s find all students who belong to either class C1 or C2 . The command for the same is
db.students.find({"Class":{"$in":["C1","C2"]}})
The inverse of this can be returned by using $nin.
To find students who don't belong to class C1 or C2, the command is
db.students.find({"Class":{"$nin":["C1","C2"]}})
e. Explain the two ways MongoDB enables distribution of the data in Sharding.
Ans. There are two ways MongoDB enables distribution of the data:
range-based partitioning and hash-based partitioning.
1. Range-Based Partitioning
In range-based partitioning , the shard key values are divided into ranges. Say you consider a
timestamp field as the shard key. In this way of partitioning, the values are considered as a straight
line starting from a Min value to Max value where Min is the starting period (say, 01/01/1970) and
Max is the end period (say, 12/31/9999). Every document in the collection will have timestamp
value within this range only, and it will represent some point on the line.
Based on the number of shards available, the line will be divided into ranges, and documents will
be distributed based on them
2. Hash-Based Partitioning
In hash-based partitioning , the data is distributed on the basis of the hash value of the shard field.
If selected, this will lead to a more random distribution compared to range-based partitioning. It’s
unlikely that the documents with close shard key will be part of the same chunk. For example, for
ranges based on the hash of the id field, there will be a straight line of hash values, which will again
be partitioned on basis of the number of shards. On the basis of the hash values, the documents will
lie in either of the shards.
Fig: Hash-based partitioning
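As an illustrative sketch (the namespace mydb.events and the field names are assumptions, not given in the question), either strategy is chosen through the shard key passed to sh.shardCollection:
sh.shardCollection("mydb.events", { "timestamp": 1 })      // range-based partitioning on a timestamp field
sh.shardCollection("mydb.events", { "_id": "hashed" })     // hash-based partitioning on the _id field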
f. List and explain the 3 core components in the MongoDB package.
Ans. The core components in the MongoDB package are
1) mongod :which is the core database process
2) mongos : which is the controller and query router for sharded clusters
3) mongo : which is the interactive MongoDB shell
1. mongod
The primary daemon in a MongoDB system is known as mongod. This daemon handles all the
data requests, manages the data format, and performs operations for background
management. When a mongod is run without any arguments, it connects to the default data
directory, which is C:\data\db or /data/db, and to the default port 27017, where it listens for socket
connections. It's important to ensure that the data directory exists and that you have write permissions
to the directory before the mongod process is started.
2. mongo
mongo provides an interactive JavaScript interface for the developer to test queries and operations
directly on the database and for the system administrators to manage the database. This is all done
via the command line. When the mongo shell is started, it will connect to the default database called
test. This database connection value is assigned to the global variable db.
3. mongos
mongos is used in MongoDB sharding. It acts as a routing service that processes queries from the
application layer and determines where in the sharded cluster the requested data is located.
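As a quick illustration (the data path and port shown are the defaults mentioned above), the three binaries are typically started as follows:
mongod --dbpath /data/db --port 27017 (starts the core database process)
mongo --port 27017 (opens the interactive shell against it)
mongos --configdb <config server list> (starts the query router for a sharded cluster)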
A simple solution is to monitor your MongoDB instance capacity using tools such as MongoDB
Cloud Manager (flush time, lock percentages, queue lengths, and faults are good measures) and
shard before reaching 80% of the estimated capacity.
2. Shard Key Can’t Be Updated
The shard key can’t be updated once the document is inserted in the collection because MongoDB
uses shard keys to determine to which shard the document should be routed. If you want to change
the shard key of a document, the suggested solution is to remove the document and reinsert the
document once the change has been made.
3. Shard Collection Limit
The collection should be sharded before it reaches 256GB.
4. Select the Correct Shard Key
It’s very important to choose a correct shard key because once the key is chosen it’s not easy to
correct it.
b. What is Data Storage engine? Which is the default storage engine in MongoDB? Also compare
MMAP and Wired Tiger storage engines.
Ans. A storage engine is a component of the database that defines how data is stored on disk. A
database may offer different storage engines, and each one performs better for
different kinds of workloads.
MongoDB uses MMAP (MMAPv1) as its default storage engine.
MMAP vs Wired Tiger storage engines
1. Concurrency
MMAP uses collection-level locking. In MongoDB 3.0 and later, if a client acquires a lock
on a document to modify its content, then no other client can access the collection that
currently holds the document; in earlier versions a single write operation acquired
a database-level lock. WiredTiger uses document-level locking: multiple clients can
access the same collection simultaneously, since the lock is acquired only for that particular document.
2. Consistency
Journaling is a feature that helps the database recover after a failure. MongoDB
uses write-ahead logging to on-disk journal files.
MMAP uses this feature to recover from failure.
In WiredTiger, a consistent view of the data is provided by means of checkpoints, so that in
case of failure the database can roll back to the previous checkpoint. Journaling is still required if the
changes made after the last checkpoint are also needed. It is left to the user's choice to enable or
disable journaling.
3. Compression
Data compression is needed where the data growth is extremely fast, and it can be
used to reduce the disk space consumed.
A data compression facility is not present in the MMAP storage engine.
In the WiredTiger storage engine, data compression is achieved using two methods: Snappy
compression and zlib.
With Snappy the compression ratio is lower, whereas with zlib it is higher; and again, it is the
user's preference to have it or not.
4. Memory Constraints
WiredTiger can make use of multithreading, and hence multi-core CPUs can be
exploited; whereas in MMAP, increasing the size of the RAM decreases the number of page faults,
which in turn increases the performance.
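As an illustrative sketch (the option values are assumptions, not part of the question), the storage engine and WiredTiger's block compressor can be selected when starting mongod:
mongod --dbpath /data/db --storageEngine wiredTiger --wiredTigerCollectionBlockCompressor zlib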
c. “With the rise of the Smartphone, it’s becoming very common to query for things near a current
location”. Explain the different indexes used by MongoDB to support such location-based queries.
Ans. To support such location-based queries, MongoDB provides geospatial indexes.
Geospatial indexes
To create a geospatial index, a coordinate pair in the following forms must exist in the documents:
• Either an array with two elements
• Or an embedded document with two keys (the key names can be anything).
The following are valid examples:
{ "userloc" : [ 0, 90 ] }
{ "loc" : { "x" : 30, "y" : -30 } }
{ "loc" : { "latitude" : -30, "longitude" : 180 } }
{"loc" : {"a1" : 0, "b1" : 1}}. db.userplaces.ensureIndex( { userloc : "2d" } )
A geospatial index assumes that the values will range from -180 to 180 by default. If this needs to
be changed, it can be specified along with ensureIndex as follows:
db.userplaces.ensureIndex({"userloc" : "2d"}, {"min" : -1000, "max" : 1000})
The following can be used to create a geospatial index on the userloc field:
Let’s understand with an example how this index works. Say you have documents that are of the
following type:
{"loc":[0,100], "desc":"coffeeshop"}
{"loc":[0,1], "desc":"pizzashop"}
If the query of a user is to find all coffee shops near her location, the following compound index
(created on the collection holding these documents) can help:
db.userplaces.ensureIndex({"loc" : "2d", "desc" : 1})
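With such an index in place, the nearby search itself can be issued with $near (a sketch; the collection name and coordinates are illustrative):
db.userplaces.find({"loc": {"$near": [0, 1]}, "desc": "coffeeshop"}).limit(10)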
Geohaystack Indexes
Geohaystack indexes are bucket-based geospatial indexes (also called geospatial haystack indexes
). They are useful for queries that need to find out locations in a small area and also need to be
filtered along another dimension, such as finding documents with coordinates within 10 miles and
a type field value as restaurant .
While defining the index, it’s mandatory to specify the bucketSize parameter as it determines the
haystack index granularity. For example,
db.userplaces.ensureIndex({ userpos : "geoHaystack", type : 1 }, { bucketSize : 1 })
This example creates an index wherein keys within 1 unit of latitude or longitude are stored together
in the same bucket. You can also include an additional category in the index, which means that
information will be looked up at the same time as finding the location details.
d. What is Journaling? Explain the importance of Journaling with the help of a neat diagram.
Ans. 1. In this process, a write operation occurs in mongod, which then makes the changes in the private
view. (In the diagram, the first block is memory and the second block is the disk.) After a specified interval,
which is called the 'journal commit interval', the private view writes those operations to the journal
directory (residing on the disk).
2. Once the journal commit happens, mongod pushes the data into the shared view. As part of the
process, it gets written to the actual data directory from the shared view (this process happens
in the background). The basic advantage is that the write cycle is reduced from 60 seconds to 200
milliseconds.
3. In a scenario where an abrupt failure occurs at any point in time, or the flush to disk has not happened
for the last 59 seconds (keeping in mind the existing data in the journal directory/write operations),
then the next time mongod starts, it basically replays all the write operation logs and writes them
into the actual data directory.
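As a hedged sketch (the values shown are illustrative), journaling and the journal commit interval can be controlled when starting mongod:
mongod --dbpath /data/db --journal --journalCommitInterval 200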
e. Write a short note on Replication Lag.
Ans. Replication Lag
Replication lag is the primary administrative concern behind monitoring replica sets. Replication
lag for a given secondary is the difference in time when an operation is written in primary and the
time when the same was replicated on the secondary. Often, the replication lag remedies itself and
is transient. However, if it remains high and continues to rise, there might be a problem with the
system. You might end up either shutting down the system until the problem is resolved, or it might
require manual intervention for reconciling the mismatch, or you might even end up running the
system with outdated data.
The following command can be used to determine the current replication lag of the replica set:
testset:PRIMARY> rs.printSlaveReplicationInfo()
Further, you can use the rs.printReplicationInfo() command to fill in the missing piece:
testset:PRIMARY> rs.printReplicationInfo()
MongoDB Cloud Manager can also be used to view recent and historical replication lag
information. The repl lag graph is available from the Status tab of each SECONDARY node.
Here are some ways to reduce this time:
1. In scenarios with a heavy write load, you should have a secondary as powerful as the
primary node so that it can keep up with the primary and the writes can be applied
on the secondary at the same rate. Also, you should have enough network bandwidth
so that the ops can be retrieved from the primary at the same rate at which they are
getting created.
2. Adjust the application write concern (see the example after this list).
3. If the secondary is used for index builds, this can be planned to be done when there
are low write activities on the primary.
4. If the secondary is used for taking backups, consider taking backups without
blocking.
5. Check for replication errors. Run rs.status() and check the errmsg field.
Additionally, the secondary’s log files can be checked for any existing error messages.
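For point 2, a hedged sketch of adjusting the write concern on an insert (the collection name and values are illustrative); making each write wait for acknowledgement from a majority of members throttles the primary so the secondaries can keep up:
db.users.insert({"Name": "vsit"}, {writeConcern: {w: "majority", wtimeout: 5000}})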
f. Explain “ GridFS – The MongoDB File System” with the help of a neat diagram.
Ans. 1. MongoDB stores data in BSON documents. BSON documents have a document size limit of
16MB. GridFS is MongoDB's specification for handling large files that exceed BSON's
document size limit.
2. GridFS uses two collections for storing the file. One collection maintains the metadata of the
file and the other collection stores the file’s data by breaking it into small pieces called chunks.
This means the file is divided into smaller chunks and each chunk is stored as a separate
document. By default the chunk size is limited to 255KB. This approach not only makes the
storing of data scalable and easy but also makes range queries easier to use when a specific
part of the file is retrieved. Whenever a file is queried in GridFS, the chunks are reassembled as
required by the client. This also provides the user with the capability to access arbitrary sections
of the files. For example, the user can directly move to the middle of a video file.
3. The GridFS specification is useful in cases where the file size exceeds the default 16MB
limitation of MongoDB BSON document. It’s also used for storing files that you need to access
without loading the entire file in memory.GridFS enables you to store large files by splitting
them up into smaller chunks and storing each of the chunks as separate documents. In addition
to these chunks, there’s one more document that contains the metadata about the file. Using
this metadata information, the chunks are grouped together, forming the complete file. The
storage overhead for the chunks can be kept to a minimum, as MongoDB supports storing
binary data in documents.
4. The two collections that are used by GridFS for storing large files are by default named
fs.files and fs.chunks, although a bucket name different from fs can be chosen. The chunks are
stored by default in the fs.chunks collection. If required, this can be overridden. All of
the file data is contained in the fs.chunks collection.
5. The structure of the individual documents in the chunks collection is pretty simple:
{
"_id" : ObjectId("..."),
"files_id" : ObjectId("..."),
"n" : 0,
"data" : BinData(0, "...")
}
The chunk document has the following important keys.
1. "_id" : This is the unique identifier.
2. "files_id" : This is unique identifier of the document that contains the metadata
related to the chunk.
3. "n" : This is basically depicting the position of the chunk in the original file.
4. "data" : This is the actual binary data that constitutes this chunk.
Fig: Grid FS
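A brief usage sketch (the database and file names are assumptions): the mongofiles command-line tool stores and lists files in GridFS, and the resulting metadata can then be inspected from the mongo shell:
mongofiles -d myfiles put mydoc.pdf (stores the file, creating fs.files and fs.chunks entries)
mongofiles -d myfiles list (lists the files stored in GridFS)
db.fs.files.findOne() (run from the mongo shell to view the file's metadata document)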
2. TimesTen implements a fairly familiar SQL-based relational model. Subsequent to the purchase
by Oracle, it implemented ANSI-standard SQL, but in recent years the effort has been to make
the database compatible with the core Oracle database, to the extent of supporting Oracle's
stored procedure language PL/SQL. In a TimesTen database, all data is memory resident.
3. Persistence is achieved by writing periodic snapshots of memory to disk, as well as writing to
a disk-based transaction log following a transaction commit.
document.write("hello world");
}
);
}
);
</script>
</head>
<body>
<p>Hello! Welcome in Jquery Language!!</p>
<button>Click me</button>
</body>
</html>
c Explain how we can create our own custom event in jQuery with an example.
Ans. A seldom-used but very useful feature of jQuery's events is the ability to trigger and bind to your own
custom events. We can use jQuery's on() method to attach event handlers to elements. For example, in the
code below we have created a customized event named "myOwnEvent" which will get triggered on click
of the button.
Code:
<html>
<head>
<script src="jquery-3.3.1.min.js"></script>
<script>
$(document).ready(function(){
$("p").on("myOwnEvent", function(event, showName){
$(this).text(showName + "! It is a Javascript Library!");
});
$("button").click(function(){
$("p").trigger("myOwnEvent", ["Jquery"]);
});
});
</script>
</head>
<body>
<p>Click the button to trigger the custom event.</p>
<button>Click me</button>
</body>
</html>
d What is Ajax? What is the use of Ajax? Explain how Ajax can be used with jQuery.
Ans. 1. Ajax stands for Asynchronous JavaScript and XML. Ajax is simply a means of loading data from
the server into the web browser without reloading the whole page.
2. Basically, what Ajax does is make use of the JavaScript-based XMLHttpRequest object to send
and receive information to and from a web server asynchronously, in the background, without
interfering with the user's experience.
3. Ajax has become so popular that you hardly find an application that doesn't use Ajax to some
extent. Examples of large-scale Ajax-driven online applications are Gmail, Google
Maps, Google Docs, YouTube, Facebook, Flickr, etc.
Ajax with jQuery
4. Different browsers implement Ajax differently, which means that if we adopt the typical
JavaScript way to implement Ajax, we have to write different code for different browsers
to ensure that Ajax works cross-browser.
5. Fortunately, jQuery simplifies the process of implementing Ajax by taking care of those
browser differences. It offers simple methods such as load(), $.get(), and $.post() to implement
Ajax in a way that works seamlessly across all the browsers.
For example jQuery load() Method
6. The jQuery load() method loads data from the server and places the returned HTML into the
selected element. This method provides a simple way to load data asynchronously from a web
server. The basic syntax of this method is:
$(selector).load(URL, data, complete);
The parameters of the load() method have the following meaning:
1) The required URL parameter specifies the URL of the file you want to load.
2) The optional data parameter specifies a set of query string (i.e. key/value pairs) that is
sent to the web server along with the request.
3) The optional complete parameter is basically a callback function that is executed when
the request completes. The callback is fired once for each selected element.
Code:
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$("button").click(function(){
$("#div1").load("demo_test.txt", function(responseTxt, statusTxt, xhr){
if(statusTxt == "success")
alert("External content loaded successfully!");
if(statusTxt == "error")
alert("Error: " + xhr.status + ": " + xhr.statusText);
});
});
});
</script>
</head>
<body>
<div id="div1"><h2>Let jQuery AJAX change this text</h2></div>
<button>Get external content</button>
</body>
</html>
A jQuery selector can also be appended to the URL to load only a page fragment; for example, the URL
"demo_test.txt #p1" loads only the content of the element with id="p1" inside the file into the selected <div> element.
e Explain how to add and remove elements to DOM in jQuery with an example
Ans. Adding Elements to DOM
1) jQuery provides several methods that allow us to insert new content inside an existing
element. There are 3 ways to insert elements into the DOM:
1. DOM Insertion, Around: These methods let you insert elements around
existing ones (wrap(), wrapAll(), wrapInner()).
The wrap() method wraps specified HTML element(s) around each selected element.
Example
Wrap a <div> element around each <p> element:
$("button").click(function(){
$("p").wrap("<div></div>");
});
wrapAll():Wraps HTML element(s) around all selected elements
wrapInner():Wraps HTML element(s) around the content of each selected element
2. DOM Insertion, Inside: These methods let you insert elements within existing
ones.(append(),appendTo(), html(),prepend(),prependTo(),text())
The append() method inserts specified content at the end of the selected elements.
Example
Insert content at the end of all <p> elements:
$("button").click(function(){
$("p").append("<b>Appended text</b>");
});
The prepend() method inserts specified content at the beginning of the selected
elements.
Example
Insert content at the beginning of all <p> elements:
$("button").click(function(){
$("p").prepend("<b>Prepended text</b>");
});
The html() method sets or returns the content (innerHTML) of the selected elements.
Example
Change the content of all <p> elements:
$("button").click(function(){
$("p").html("Hello <b>world</b>!");
});
3. DOM Insertion, Outside: These methods let you insert elements outside existing
ones that are completely separate( after(),before(),insertAfter(),insertBefore() )
The after() method inserts specified content after the selected elements.
Example
Insert content after each <p> element:
$("button").click(function(){
$("p").after("<p>Hello world!</p>");
});
The before() method inserts specified content in front of (before) the selected elements.
Example
Insert content before each <p> element:
$("button").click(function(){
$("p").before("<p>Hello world!</p>");
});
2) jQuery provides a handful of methods, such as empty(), remove(), unwrap(), etc. to remove
existing HTML elements or contents from the document.
The empty() method removes all child nodes and content from the selected elements
Example
Remove the content of all <div> elements:
$("button").click(function(){
$("div").empty();
});
The remove() method removes the selected elements, including all text and child nodes.
This method also removes data and events of the selected elements.
Example
Remove all <p> elements:
$("button").click(function(){
$("p").remove();
});
f Write a jQuery code to add a CSS class to the HTML elements.
Ans. <!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$("button").click(function(){
$("p:first").addClass("intro");
});
});
</script>
<style>
.intro {
font-size: 150%;
color: red;
}
</style>
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
<p>This is another paragraph.</p>
</body>
</html>
<?php
$myObj = new stdClass();   // create an empty object before assigning its properties
$myObj->name = "John";
$myObj->age = 30;
$myObj->city = "New York";
$myJSON = json_encode($myObj);
echo $myJSON;
?>
3. PHP's json_decode function takes a JSON string and converts it into a PHP variable. Typically, the
JSON data will represent a JavaScript array or object literal which json_decode will convert into a PHP
array or object. The following two examples demonstrate, first with an array, then with an object:
Example 1:
$json = '["apple","orange","banana","strawberry"]';
$ar = json_decode($json);
// access first element of $ar array
echo $ar[0]; // apple
Example 2:
$json = '{
"title": "JavaScript: The Definitive Guide",
"author": "David Flanagan",
"edition": 6
}';
$book = json_decode($json);
// access title of $book object
echo $book->title; // JavaScript: The Definitive Guide
b. List and explain any 5 XMLHttpRequest Event Handlers used for Monitoring the Progress of the
HTTP Request.
Ans.
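The XMLHttpRequest object exposes event-handler properties that fire at different stages of a request; five commonly used ones are:
1. onloadstart : fired once, when the request starts.
2. onprogress : fired periodically while the response is being received; its event object exposes the loaded and total byte counts, so a progress bar can be updated.
3. onload : fired when the request completes successfully and the response is available.
4. onerror : fired when the request fails, for example because of a network error.
5. onloadend : fired after the request has finished, whether it succeeded, failed, or was aborted.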
c. What is the use of Stringify function? What are the different parameters that can be passed in
Stringify function? Explain with an example.
Ans. 1. A common use of JSON is to exchange data to/from a web server. When sending data to a web server,
the data has to be a string.
2. We can Convert a JavaScript object into a string with JSON.stringify(). Stringify a JavaScript Object
Imagine we have this object in JavaScript:
var obj = { name: "John", age: 30, city: "New York" };
Use the JavaScript function JSON.stringify() to convert it into a string.
var myJSON = JSON.stringify(obj);
The result will be a string following the JSON notation.
3. Syntax of the JSON stringify Method
JSON.stringify(value[, replacer [, space]]);
4. The value parameter of the stringify method is the only required parameter of the three outlined by the
signature.The argument supplied to the method represents the JavaScript value intended to be serialized.
This can be that of any object, primitive, or even a composite of the two.
5. The optional replacer parameter is either a function that alters the way objects and arrays are stringified
or an array of strings and numbers that acts as a white list for selecting the object properties that will be
stringified.
6. The third parameter, space, is also optional and allows you to specify the amount of padding that
separates each value from one another within the produced JSON text. This padding provides an added
layer of readability to the produced string.
7. Code:
<html>
<head>
<title>JSON programs </title>
</head>
<body>
<script type="text/javascript">
var data={
"Bname":"JSON",
"Publisher": "TataMcgraw",
"author": "Smith",
"price":250,
"ISBN":"1256897912345"
};
document.writeln(JSON.stringify(data));
document.writeln(JSON.stringify(data,["Bname","author","price"]));
document.write(JSON.stringify(data,["Bname","author","price"],5));
</script>
</body>
</html>
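For reference, the three writeln calls above produce the following output: the first prints the full object, {"Bname":"JSON","Publisher":"TataMcgraw","author":"Smith","price":250,"ISBN":"1256897912345"}; the second, using the replacer array, prints only the whitelisted properties, {"Bname":"JSON","author":"Smith","price":250}; and the third prints that same filtered object pretty-printed with 5 spaces of indentation per nesting level.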
b) Parse the JSON string with JSON.parse().
6. The following JSON and XML examples both define an employees object, with an array of 3
employees:
JSON Example
{"employees":[
{ "firstName":"John", "lastName":"Doe" },
{ "firstName":"Anna", "lastName":"Smith" },
{ "firstName":"Peter", "lastName":"Jones" }
]}
XML Example
<employees>
<employee>
<firstName>John</firstName> <lastName>Doe</lastName>
</employee>
<employee>
<firstName>Anna</firstName> <lastName>Smith</lastName>
</employee>
<employee>
<firstName>Peter</firstName> <lastName>Jones</lastName>
</employee>
</employees>
_____________________________
BSc.(Information Technology)
(Semester V)
2019-20
New Generation
Technology
(USIT507 Elective)
University Paper Solution
By
Ms. Seema Bhatkar
Ans:
1. Volume
Volume in big data means the size of the data. As businesses become
more transaction-oriented, the ever-increasing number of transactions
generates huge amounts of data. This huge volume of data is the biggest
challenge for big data technologies. The storage and processing power
needed to store, process, and make the data accessible in a timely and cost-
effective manner is massive.
2. Variety
The data generated from various devices and sources follows no fixed format
or structure. Unlike traditional text, CSV, or RDBMS data, it ranges from text files, log
files, streaming videos, photos, meter readings, stock ticker data, PDFs, and audio
to various other unstructured formats.
New sources and structures of data are being created at a rapid pace. So the
onus is on technology to find a solution to analyze and visualize the huge
variety of data that is out there. As an example, to provide alternate routes for
commuters, a traffic analysis application needs data feeds from millions of
smartphones and sensors to provide accurate analytics on traffic conditions
and alternate routes.
3. Velocity
Velocity in big data is the speed at which data is created and the speed at
which it is required to be processed. If data cannot be processed at the
required speed, it loses its significance. Due to data streaming in from social
media sites, sensors, tickers, metering, and monitoring, it is important for the
organizations to speedily process data both when it is on move and when it is
static.
• Consistency means that the data remains consistent after any operation is
performed that changes the data, and that all users or clients accessing the
application see the same updated data.
• Availability means that the system is always available.
• Partition Tolerance means that the system will continue to function even if it
is partitioned into groups of servers that are not able to communicate with
one another.
The CAP theorem states that at any point in time a distributed system can fulfil only
two of the above three guarantees.
example:
{
"Name": "ABC",
"Phone": ["1111111",
          "222222"
         ],
"Fax": ...
}
Ans:
Types
SQL: All types support the SQL standard.
NoSQL: Multiple types exist, such as document stores, key-value stores, column databases, etc.
Development History
SQL: Developed in the 1970s.
NoSQL: Developed in the 2000s.
Examples
SQL: SQL Server, Oracle, MySQL.
NoSQL: MongoDB, HBase, Cassandra.
Data Storage Model
SQL: Data is stored in rows and columns in a table, where each column is of a specific type. The tables generally are created on principles of normalization. Joins are used to retrieve data from multiple tables.
NoSQL: The data model depends on the database type. Say data is stored as a key-value pair for key-value stores. In document-based databases, the data is stored as documents. The data model is flexible, in contrast to the rigid table model of the RDBMS.
Schemas
SQL: Fixed structure and schema, so any change to the schema involves altering the database.
NoSQL: Dynamic schema; new data types or structures can be accommodated by expanding or altering the current schema. New fields can be added dynamically.
Scalability
SQL: Scale-up approach is used; this means as the load increases, bigger, more expensive servers are bought to accommodate the data.
NoSQL: Scale-out approach is used; this means distributing the data load across inexpensive commodity servers.
Supports Transactions
SQL: Supports ACID and transactions.
NoSQL: Supports partitioning and availability, and compromises on transactions.
Syntax:
db.createCollection(name, options)
Example:
db.createCollection("VSIT", {capped: true, size: 20000, max: 100})
Q2c. Discuss the points to be considered while Importing data in a Share environment.
Ans
1. Pre-Splitting of the Data
Instead of leaving the choice of chunks creation with MongoDB, you can tell
MongoDB how to do so using the following command:
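A hedged sketch of such a pre-split command (the namespace and split point are assumptions, not given in the question); sh.splitAt creates a chunk boundary at the supplied shard-key value, and repeating it for several values lays out the chunks before the data is imported:
sh.splitAt("mydb.users", {"_id": 50})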
The insert operation creates the following document in the users collection:
{ "_id" : 1, "Name" : "vsit" }
2. Compound Index
When creating an index, you should keep in mind that the index covers most of
your queries. If you sometimes query only the Name field and at times you query both
the Name and the Age field, creating a compound index on the Name and Age fields will
be more beneficial than an index that is created on either of the fields because the
compound index will cover both queries.
The following command creates a compound index on fields Name and Age of
the collection testindx .
> db.testindx.ensureIndex({"Name":1, "Age": 1})
Shard Key
Any indexed single/compound field that exists within all documents of the collection
can be a shard key. You specify that this is the field basis which the documents of the
collection need to be distributed. Internally, MongoDB divides the documents based on the
value of the field into chunks and distributes them across the shards.
There are two ways MongoDB enables distribution of the data: range-based
partitioning and hashbased partitioning.
1. Range-Based Partitioning
In range-based partitioning , the shard key values are divided into ranges. Say you
consider a timestamp field as the shard key. In this way of partitioning, the values are
considered as a straight line starting from a Min value to Max value where Min is the starting
period (say, 01/01/1970) and Max is the end period (say, 12/31/9999). Every document in the
collection will have timestamp value within this range only, and it will represent some point
on the line.
Based on the number of shards available, the line will be divided into ranges, and
documents will be distributed based on them.
The documents where the values of the shard key are nearby are likely to fall on the
same shard. This can significantly improve the performance of the range queries.
2. Hash-Based Partitioning
In hash-based partitioning , the data is distributed on the basis of the hash value of
the shard field. If selected, this will lead to a more random distribution compared to range-
based partitioning.
It’s unlikely that the documents with close shard key will be part of the same chunk.
For example, for ranges based on the hash of the id field, there will be a straight line of hash
values, which will again be partitioned on basis of the number of shards. On the basis of the
hash values, the documents will lie in either of the shards.
Question 3
4. BSON Documents
This section covers the limitations of BSON documents .
Size limits : The current versions support documents up to 16MB in size. This
maximum size ensures that a document cannot use excessive RAM or excessive
bandwidth while in transmission.
Nested depth limit : In MongoDB, no more than 100 levels of nesting are supported
for BSON documents.
Field names : If you store 1,000 documents with the key “col1”, the key is stored that
many times in the data set. Although arbitrary documents are supported in
MongoDB, in practice most of the field names are the same.
5. Security Limitations
No Authentication by Default:- Although authentication is not enabled by
default, it’s fully supported and can be enabled easily.
Traffic to and from MongoDB Isn’t Encrypted:- By default the connections
to and from MongoDB are not encrypted. Communications on a public
network can be encrypted using the SSL-supported build of MongoDB, which
is available in the 64-bit version only.
Basically, the OS recognizes that your data file is 2000 bytes on disk, so it maps this to
memory address 1,000,000 – 1,002,000. Until now you still had files backing up the
memory. Thus any change in memory will be flushed to the underlying files by the
OS.
This is how mongod works when journaling is not enabled: every 60 seconds the
in-memory changes are flushed by the OS. This also explains why the virtual memory amount used by
mongod doubles when journaling is enabled.
JQuery Features
DOM manipulation − jQuery makes it easy to select DOM elements, traverse
them, and modify their content by using the cross-browser open source selector
engine called Sizzle.
Event handling − The jQuery offers an elegant way to capture a wide variety of
events, such as a user clicking on a link, without the need to clutter the HTML code
itself with event handlers.
AJAX Support − jQuery helps you a lot to develop a responsive and feature-rich
site using AJAX technology.
Animations − The jQuery comes with plenty of built-in animation effects which you
can use in your websites.
Lightweight − The jQuery is very lightweight library - about 19KB in size (Minified
and gzipped).
Cross Browser Support − The jQuery has cross-browser support, and works well in IE
6.0+, FF 2.0+, Safari 3.0+, Chrome and Opera 9.0+
Latest Technology − The jQuery supports CSS3 selectors and basic XPath syntax.
Parameters:
1. criteria : It specifies a selector expression, a jQuery object or one or more elements
to be returned from a group of selected elements.
2. function(index) : It specifies a function to run for each element in the set. If the
function returns true, the element is kept. Otherwise, it is removed.
3. index : The index position of the element in the set.
A mouse click
A web page loading
Taking mouse over an element
Submitting an HTML form
A keystroke on your keyboard, etc.
When these events are triggered, you can then use a custom function to do pretty much
whatever you want with the event. These custom functions are called Event Handlers.
Using the jQuery Event Model, we can establish event handlers on DOM elements
with the bind() method.
Syntax:
selector.bind(eventType, handler)
A bound handler can later be removed with the unbind() method.
Syntax:
selector.unbind(eventType, handler)
or
selector.unbind(eventType)
Question 5
1. a string
2. a number
3. an object (JSON object)
4. an array
5. a boolean
6. null
1. JSON Strings
Strings in JSON must be written in double quotes.
Example:- { "name":"John" }
2. JSON Numbers
Numbers in JSON must be an integer or a floating point.
Example:- { "age":30 }
3. JSON Objects
Values in JSON can be objects.
Example
{
"employee":{ "name":"John", "age":30, "city":"New York" }
}
4. JSON Arrays
Values in JSON can be arrays.
Example
{
"employees":[ "John", "Anna", "Peter" ]
}
5. JSON Booleans
Values in JSON can be true/false.
Example
{ "sale":true }
"properties": {
"id": {
"description": "The unique identifier for a product",
"type": "integer"
},
"name": {
"description": "Name of the product",
"type": "string"
},
"price": {
"type": "number",
"minimum": 0,
}
},
1 $schema
The $schema keyword states that this schema is written according to the draft
v4 specification.
2 Title
You will use this to give a title to your schema.
3 Description
A little description of the schema.
4 Type
The type keyword defines the first constraint on our JSON data: it has to be a
JSON Object.
5 Properties
Defines various keys and their value types, minimum and maximum values to be
used in JSON file.
6 Required
This keeps a list of required properties.
7 Minimum
This is the constraint to be put on the value and represents minimum acceptable
value.
8 Maximum
This is the constraint to be put on the value and represents maximum
acceptable value.
9 maxLength
The length of a string instance is defined as the maximum number of its
characters.
10 minLength
The length of a string instance is defined as the minimum number of its
characters.
Example:
[
{
"id": 2,
"name": "soap",
"price": 12.50,
}
]
import json
Output:-
Brian
Seattle
import json
data = {
    "a": 0,
    "b": 9.6,
    "c": "Hello World",
    "d": {
        "e": [89, 90]
    }
}
json_data = json.dumps(data)   # serialize the dictionary to a JSON string
print(json_data)
Output:-
{"c": "Hello World", "b": 9.6, "d": {"e": [89, 90]}, "a": 0}
As the diagram outlines, a collection begins with the use of the opening brace ({), and
ends with the use of the closing brace (}). The content of the collection can be
composed of any of the following possible three designated paths:
i. The top path illustrates that the collection can remain devoid of any string/value
pairs.
ii. The middle path illustrates that our collection can be that of a single string/value
pair.
iii. The bottom path illustrates that after a single string/value pair is supplied, the
collection needn’t end but, rather, allow for any number of string/value pairs,
before reaching the end. Each string/value pair possessed by the collection must
be delimited or separated from one another by way of a comma (,).
Example:-
{};
{"name":"Bob"};
The figure below illustrates the grammatical representation for an ordered list of
values
[];
["abc"];
["0",1,2,3,4,100];
Syntax
The cookie is simply a string of ASCII encoded characters composed of one or more
attribute-value pairs, separated by a semicolon (;) token.
The syntax sets a cookie identified by the indicated NAME and gives it the
assigned VALUE.
i. expires informs the browser of the date and time it is no longer necessary to further
store said cookie.
ii. Max-age specifies how long(in seconds) a cookie should persist.
iii. domain attribute explicitly defines the domain(s) to which the cookie is to be made
available.
iv. path attribute further enforces to which subdirectories a cookie is available.
v. secure attribute does not itself provide security. It informs the browser to send the cookie
to the server only if the connection over which it is to be sent is a secure connection,
such as HTTPS, so the cookie is never transmitted in clear text.
vi. httponly attribute, when specified, limits the availability of the cookie to the server
and the server alone. This means the cookie will not be available to the client side,
thereby preventing client-side JavaScript from referencing, deleting, or updating the
cookie.
Example
Creating a cookie
document.cookie= "ourFirstCookie=abc123";
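A small sketch combining the attributes described above (the name, value, and lifetime are illustrative):
document.cookie = "sessionUser=abc123; max-age=3600; path=/; secure";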
New Generation
Technology
(USIT5P7 Core)
University Paper Solution
By
Mrs. Spruha More
Ans Big data is data that has high volume, is generated at high velocity, and has multiple
varieties. Let’s look at few facts and figures of big data.
The following are the ways in which MongoDB is different from SQL.
1. MongoDB uses documents for storing its data, which offer a flexible schema
(documents in same collection can have different fields). This enables the users
to store nested or multi-value fields such as arrays, hashes, etc. In contrast,
RDBMS systems offer a fixed schema where a column’s value should have a
similar data type. Also, it’s not possible to store arrays or nested values in a cell.
2. MongoDB doesn't provide support for JOIN operations, as in SQL. However, it
enables the user to store all related data together in a single document, which largely
avoids the need for JOINs and serves as a workaround for this limitation.
3. MongoDB doesn’t provide support for transactions in the same way as SQL.
However, it guarantees atomicity at the document level. Also, it uses an isolation
operator to isolate write operations that affect multiple documents, but it does
not provide “all-or-nothing” atomicity for multi-document write operations.
d. Explain how volume, velocity and variety are important component of bigdata.
Ans Three Vs of Big Data
1. Volume
Volume in big data means the size of the data. As businesses become more transaction-
oriented, the ever-increasing number of transactions generates huge amounts of data. This
huge volume of data is the biggest challenge for big data technologies. The storage and
processing power needed to store, process, and make the data accessible in a timely and
cost-effective manner is massive.
2. Variety
The data generated from various devices and sources follows no fixed format or structure.
Unlike traditional text, CSV, or RDBMS data, it ranges from text files, log files, streaming videos,
photos, meter readings, stock ticker data, PDFs, and audio to various other unstructured
formats. New sources and structures of data are being created at a rapid pace. So the onus is
on technology to find a solution to analyze and visualize the huge variety of data that is out
there.
3. Velocity
Velocity in big data is the speed at which data is created and the speed at which it is required
to be processed. If data cannot be processed at the required speed, it loses its significance.
Due to data streaming in from social media sites, sensors, tickers, metering, and monitoring,
it is important for the organizations to speedily process data both when it is on move and
when it is static.
Eric Brewer outlined the CAP theorem in 2000. The theorem states that when designing
an application in a distributed environment there are three basic requirements that exist,
namely
• Partition Tolerance means that the system will continue to function even if it is partitioned
into groups of servers that are not able to communicate with one another.
The CAP theorem states that at any point in time a distributed system can fulfil only two of
the above three guarantees.
The NoSQL databases are categorized on the basis of how the data is stored. NoSQL mostly
follows a horizontal structure because of the need to provide curated information from large
volumes, generally in near real-time. They are optimized for insert and retrieve operations on
a large scale with built-in capabilities for replication and clustering.
Table briefly provides a feature comparison between the various categories of NoSQL
Databases
The following command will delete the documents where Gender = ‘M’ :
> db.users.remove({"Gender":"M"})
>
The same can be verified by issuing the find() command on Users :
> db.users.find({"Gender":"M"})
Finally, if you want to drop the collection, the following command will drop the collection:
> db.users.drop()
true
>
A page fault happens when data which is not there in memory is accessed by MongoDB. If
there’s free memory available, the OS will directly load the requested page into memory;
however, in the absence of free memory, the page in memory is written to the disk and then
the requested page is loaded into memory, slowing down the process. A few operations can
accidentally purge a large portion of the working set from memory, leading to an adverse
effect on the performance. One example is a query scanning through all documents of a
database where the size exceeds the server memory. This leads to loading of the documents
in memory and moving the working set out to disk.
Sharding Components
The components that enable sharding in MongoDB. Sharding is enabled in MongoDB via
sharded clusters.
The following are the components of a sharded cluster:
• Shards
• mongos
• Config servers
Compound Index
When creating an index, you should keep in mind that the index covers most of your queries.
If you sometimes query only the Name field and at times you query both the Name and the
Age field, creating a compound index on the Name and Age fields will be more beneficial
than an index that is created on either of the fields because the compound index will cover
both queries.
The following command creates a compound index on fields Name and Age of the collection
testindx .
> db.testindx.ensureIndex({"Name":1, "Age": 1})
Compound indexes help MongoDB execute queries with multiple clauses more efficiently.
When creating a compound index, it is also very important to keep in
mind that the fields that will be used for exact matches (e.g. Name : "S1" ) come first,
followed by fields that are used in ranges (e.g. Age : {"$gt":20} ).
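For instance, a query that this compound index covers, using the same example values, would be:
db.testindx.find({"Name": "S1", "Age": {"$gt": 20}})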
Question 3
1. Time field : In choosing this option, although the data will be distributed evenly among the
shards, neither the inserts nor the reads will be balanced.
As in the case of performance data, the time field is in an upward direction, so all the
inserts will end up going to a single shard and the write throughput will end up being same
as in a standalone instance.
Most reads will also end up on the same shard, assuming you are interested in
viewing the most recent data frequently.
3. Use the key, which is evenly distributed, such as Host This has following advantages: if the
query selects the host field, the reads will be selective and local to a single shard, and the
writes will be balanced. However, the biggest potential drawback is that all data collected for
a single host must go to the same chunk since all the documents in it have the same shard
key. This will not be a problem if the data is getting collected across all the hosts, but if the
monitoring collects a disproportionate amount of data for one host, you can end up with a
large chunk that will be completely unsplittable, causing an unbalanced load on one shard.
4. Combining the best of options 2 and 3, you can have a compound shard key, such as
{host:1, ssk: 1} where host is the host field of the document and ssk is _id field’s hash value.
In this case, the data is distributed largely by the host field making queries, accessing
the host field local to either one shard or group of shards. At the same time, using ssk
ensures that data is distributed evenly across the cluster.
1. Data set size: The most important thing is to determine the current and anticipated data
set size. This not only lets you choose resources for individual physical nodes, but it also
helps when planning your sharding plans (if any).
2. Data importance: The second most important thing is to determine data importance, to
determine how important the data is and how tolerant you can be to any data loss or data
lagging (especially in case of replication) .
3. Memory sizing: The next step is to identify memory needs and accordingly take care of
the
RAM. If possible, you should always select a platform that has memory greater than
your working set size.
4. Disk Type: If speed is not a primary concern or if the data set is larger than what any
in-memory strategy can support, it’s very important to select a proper disk type. IOPS
(input/output operations per second) is the key for selecting a disk type; the higher
the IOPS, the better the MongoDB performance. If possible, local disks should be
used because network storage can cause poor performance and high latency. It is
also advised to use RAID 10 when creating disk arrays (wherever possible).
5. CPU: Clock speed can also have a major impact on the overall performance when you are
running a mongod with most data in memory. In circumstances where you want to
maximize the operations per second, you must consider including a CPU with a high
clock/bus speed in your deployment strategy.
2. Resident memory: An eye should always be kept on the allocated memory. This counter
value should always be lower than the physical memory.
3. Working set size: The active working set should fit into memory for a good performance.
You can either optimize the queries so that the working set fits inside the memory or
increase the memory when the working set is expected to increase.
4. Queues: Prior to the release of MongoDB 3.0, a reader-writer lock was used for
simultaneous reads and exclusive access was used for writes. In such scenario, you might end
up with queues behind a single writer. Starting from Version 3.0, collection level locking (in
the MMAPv1 storage engine) and document level locking (in the WiredTiger storage engine)
have been introduced.
5. Whenever there’s a hiccup in the application, the CRUD behavior, indexing patterns, and
indexes can help you better understand the application’s flow.
6. It’s recommended to run the entire performance test against a full-size database, such as
the production database copy, because performance characteristic are often highlighted
when dealing with the actual data.
f. What is data storage engine? Differentiate between MMAP and wired storage
engines.
Ans: Data Storage Engine
MongoDB uses MMAP as its default storage engine. This engine works with memory-
mapped files. Memory-mapped files are data files that are placed by the operating system in
memory using the mmap() system call. mmap is a feature of OS that maps a file on the disk
into virtual memory.
MongoDB allows the OS to control the memory mapping and allocate the maximum amount
of RAM. The caching is done based on LRU behavior wherein the least recently used files are
moved out to disk from the working set, making space for the new recently and frequently
used pages. But there are some drawbacks of this method:-
1. MongoDB has no control over what data to keep in memory and what to remove. So every
server restart will lead to a page fault because every page that is accessed will not be
available in the working set, leading to a long data retrieval time.
2. MongoDB also has no control over prioritizing the content of the memory.
Question 4
a. Define In-memory database. What are the techniques used in In-Memory database
to ensure that data is not lost.
Ans: In-Memory Databases
The solid-state disk may have had a transformative impact on database performance, but it
has resulted in only incremental changes for most database architectures. A more paradigm-
shifting trend has been the increasing practicality of storing complete databases in main
memory.
The cost of memory and the amount of memory that can fit on a server have both
been changing exponentially since the earliest days of computing. The figure illustrates these
trends: the cost of memory per unit of storage has been falling, while the amount of storage that
can fit on a single memory chip has been growing, both over many years.
In-memory databases generally use some combination of techniques to ensure they don’t
lose data.
These include:
• Replicating data to other members of a cluster.
• Writing complete database images (called snapshots or checkpoints) to disk files.
• Writing out transaction/operation records to an append-only disk file (called a
transaction log or journal).
b. Explain how Redis uses disk files for persistence.
Ans: While
TimesTen is an attempt to build an RDBMS-compatible in-memory database, Redis is at the
opposite extreme: essentially an in-memory key-value store. Redis (Remote Dictionary
Server) was originally envisaged as a simple in-memory system capable of sustaining very
high transaction rates on underpowered systems, such as virtual machine images.
Redis was created by Salvatore Sanfilippo in 2009. VMware hired Sanfilippo and sponsored
Redis development in 2010. In 2013, Pivotal software a Big Data spinoff from VMware’s
parent company EMC—became the primary sponsor.
Redis follows a familiar key-value store architecture in which keys point to objects. In Redis,
objects consist mainly of strings and various types of collections of strings (lists, sorted lists,
hash maps, etc.). Only primary key lookups are supported; Redis does not have a secondary
indexing mechanism.
Redis uses disk files for persistence:
• The Snapshot files store copies of the entire Redis system at a point in time.
Snapshots can be created on demand or can be configured to occur at scheduled
intervals or after a threshold of writes has been reached. A snapshot also occurs
when the server is shut down.
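• The append-only file (AOF), if enabled, logs every write operation received by the
server; when Redis restarts, these operations are replayed to rebuild the in-memory
dataset, providing finer-grained durability than snapshots alone.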
Examples:
The $(document).ready() method allows us to execute a function when the document is fully
loaded. This event is already explained in the jQuery Syntax chapter.
click()
dblclick()
mouseenter()
CSS
jQuery’s css() method is very powerful. There are actually three primary ways that you’ll work
with it.
The first is when determining the value of an element's property. Simply pass it one
parameter: the property whose value you want to know:
$("div").css("width");
$("div").css("margin-right");
$("div").css("color");
You can also use CSS to set values. To set just one value, pass in a property and a value as
separate
parameters. You used this in Chapter 3.
$("div").css("color", "red");
$("div").css("border", "1px solid red");
animate() and Animation Convenience Methods
all the animation methods you’ve used so far, including fadeIn() and fadeOut(), use animate().
JQuery provides these methods, known as convenience methods, to save you some typing.
Here’s the code that implements fadeIn() from the jQuery source:
function (speed, easing, callback) {
return this.animate(props, speed, easing, callback);
}
- jQuery simplifies complicated JavaScript tasks such as AJAX calls and DOM
manipulation.
DOM manipulation − jQuery makes it easy to select DOM elements, traverse them, and
modify their content by using the cross-browser open source selector engine called Sizzle.
Event handling − The jQuery offers an elegant way to capture a wide variety of events, such
as a user clicking on a link, without the need to clutter the HTML code itself with event
handlers.
AJAX Support − The jQuery helps you a lot to develop a responsive and feature rich site
using AJAX technology.
Animations − The jQuery comes with plenty of built-in animation effects which you can use
in your websites.
Lightweight − The jQuery is very lightweight library - about 19KB in size (Minified and
gzipped).
Cross Browser Support − The jQuery has cross-browser support, and works well in IE 6.0+,
FF 2.0+, Safari 3.0+, Chrome and Opera 9.0+
Question 5
a. Explain JSON grammar.
Ans: JSON Grammar
JSON, in a nutshell, is a textual representation defined by a small set of governing rules in
which data is structured. The JSON specification states that data can be structured in either
of the two following compositions:
1. A collection of name/value pairs
2. An ordered list of values(Array)
1. The top path illustrates that the collection can remain devoid of any string/value pairs.
2. The middle path illustrates that our collection can be that of a single string/value pair.
3. The bottom path illustrates that after a single string/value pair is supplied, the collection
needn’t end but, rather, allow for any number of string/value pairs, before reaching the
end. Each string/value pair possessed by the collection must be delimited or separated
from one another by way of a comma (,).
XML Example
<employees>
<employee>
<firstName>John</firstName> <lastName>Doe</lastName>
</employee>
<employee>
<firstName>Anna</firstName> <lastName>Smith</lastName>
</employee>
<employee>
<firstName>Peter</firstName> <lastName>Jones</lastName>
</employee>
</employees>
JSON is Like XML Because
• Both JSON and XML are "self describing" (human readable)
• Both JSON and XML are hierarchical (values within values)
• Both JSON and XML can be parsed and used by lots of programming languages
• Both JSON and XML can be fetched with an XMLHttpRequest
JSON is Unlike XML Because
• JSON doesn't use end tags
• JSON is shorter
• JSON is quicker to read and write
• JSON can use arrays
These headers can be supplied with the request to provide the server with preferential
information that will assist in the request. Additionally, they outline the configurations of the
client making the request. Such headers may reveal information about the user-agent
making the request or
the preferred data type that the response should provide. By utilizing the headers within
this category, we can potentially influence the response from the server. For this reason, the
request headers are the most commonly configured headers. One very useful header is the
Accept header. It can be used to inform the server as to what MIME type or data type the
client can properly handle. This can often be set to a particular MIME type, such as
application/json, or text/plain. It can even be set to */*,which informs the server that the
client can accept all MIME types. The response provided by the server is expected to reflect
one of the MIME types the client can handle. The following are request headers:
Accept
Accept-Charset
Accept-Encoding
Accept-Language
Authorization
Expect
From
Host
If-Match
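A small sketch of setting the Accept header on a request (the URL is illustrative):
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/books");
xhr.setRequestHeader("Accept", "application/json"); // ask the server to respond with JSON
xhr.send();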
JSON.parse(text [, reviver]);
JSON.parse can accept two parameters, text and reviver. The name of the parameter text is
indicative of the value it expects to receive. The parameter reviver is used similarly to the
replacer parameter of stringify, in that it offers the ability for custom logic to be supplied for
necessary
parsing that would otherwise not be possible by default. As indicated in the method’s
signature, only the provision of text is required.
<!DOCTYPE html>
<html>
<body>
<p id="demo"></p>
<script>
var obj = { name: "John", age: 30, city: "New York" };
var myJSON = JSON.stringify(obj);
document.getElementById("demo").innerHTML = myJSON;
</script>
</body>
</html>
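A short sketch of JSON.parse with a reviver function (the JSON text and the conversion applied are illustrative):
var text = '{ "name":"John", "birth":"1986-12-14" }';
var obj = JSON.parse(text, function (key, value) {
    // the reviver converts the birth string into a Date object while parsing
    if (key === "birth") {
        return new Date(value);
    }
    return value;
});
document.write(obj.name + " was born on " + obj.birth);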
Example;
{ "name":"John" }
Number
A number in JSON is the arrangement of base10 literals, in combination with mathematical
notation to define a real number literal.
Example: { "age":30 }
JSON Arrays
Values in JSON can be arrays.
Example:
{
"employees":[ "John", "Anna", "Peter" ]
}
JSON Booleans
Example:
Values in JSON can be true/false.
{ "sale":true
}
JSON null
Example:
Values in JSON can be null.
{ "middlename":null
}