
International Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169, Volume 4, Issue 5, pp. 333-336
Enhancement of the dNVMe Device Driver

Sucheta Shivakalimath
M.Tech in Software Engineering
Dept. of ISE, R. V. College of Engineering
Bengaluru, India
sucheta032@gmail.com

Dr. Ramakanth Kumar P.
Professor and Dean Academics
Dept. of ISE, R. V. College of Engineering
Bengaluru, India
ramakanthkp@rvce.edu.in

Abstract- A device driver is the interface between hardware and software applications; it implements all the functionality for handling the devices connected to the system. Drivers are device-specific: storage devices such as SSDs and HDDs are handled through the dNVMe driver. This driver can be enhanced to support additional features, and such enhancement helps in the development of storage devices. The main areas of modification include enhancing the IOCTL calls, allowing register-level changes, and allowing negative testing. These features prepare storage devices for all their qualification tests.

Keywords: Device Drivers, Non-Volatile Memory Express (NVMe), Submission Queue (SQ), Completion Queue (CQ), Interrupts.

I. INTRODUCTION

One of the numerous benefits of free operating systems, as exemplified by Linux, is that their internals are open for everyone to view. Earlier, operating system code was not available for user access; now it is freely available both to study and to modify. Linux has helped to democratize operating systems. The Linux kernel is a large and complex body of code, and it is prone to hacking: kernel hackers look for an access point from which they can reach all of the code, and device drivers act as a gateway in such situations [1].

Device drivers are a special part of the Linux kernel [2]. They are separate black boxes that make a specific piece of hardware respond to a well-defined internal programming interface, and they conceal entirely the details of how the device works. User activities are accomplished through a set of standardized calls that are independent of any specific driver; the device driver's role is to map those calls onto the device-specific operations that act on the real hardware. The modularity that makes Linux drivers easy to write comes from the fact that the programming interface for drivers can be developed independently of the rest of the kernel. Hundreds of such drivers are available.

dNVMe is the kernel component of the NVMe Compliance Test Suite [3][4]. The user space application tNVMe wires up components within dNVMe to create various compliance test cases. dNVMe is not the NVMe driver embedded within the Linux kernel; these two separate code bases target different audiences and differ greatly in the support they offer to a user space application. The major differences can be summarized as follows:
1. The dNVMe driver allows sending illegal commands and creating illegal states to verify proper hardware error-code generation, whereas the NVMe driver prevents illegal commands from being sent.
2. dNVMe supports every feature of the NVMe specification:
   a. Metadata support
   b. All Admin and NVM command set commands
   c. Use of MSI, MSI-X, and polling for reaping CEs from IOCQs.
3. The dNVMe driver opens up kernel-level resources such as queues and memory, giving maximum visibility for debugging, whereas the NVMe driver purposely hides kernel-level constructs.
4. The dNVMe driver does nothing automatically; it must be specifically instructed by a user space application. The NVMe driver automates sending commands on behalf of user space applications.
5. The NVMe driver safeguards any application, but the dNVMe driver allows situations that could crash the kernel, an undesirable side effect of allowing maximum interaction with user space applications.

The dNVMe logic is platform-specific and targets Linux kernel versions based on 2.6.35. It has been developed on Ubuntu distributions only. However, the driver design is generic and should support other Linux kernel versions with changes to the required kernel APIs. Additionally, tNVMe could be used without modification on other platforms if one were to implement the IOCTLs within dNVMe on those platforms.

dNVMe is a test driver whose goal is to verify hardware compliance against a written set of specifications. Functionality, not speed, was the main target for the driver; it was seen that speed and functionality could not both be satisfied in all aspects of the design, so whenever a decision had to be made between them, functionality won. As a result, the dNVMe driver was made a character driver rather than a block driver, with an extended range of IOCTLs to give user space the control required for testing.
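As a rough illustration of this design choice, the following is a minimal sketch of the character-driver-plus-IOCTL pattern on which such a test driver rests. All names here (dnvme_demo, TEST_IOC_STATUS, test_ioctl) are assumptions made for illustration and do not come from the dNVMe sources.

    /* Minimal sketch of a character driver exposing an IOCTL; all names
     * are illustrative, not taken from the dNVMe code base. */
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/ioctl.h>
    #include <linux/miscdevice.h>
    #include <linux/uaccess.h>

    #define TEST_IOC_MAGIC  'N'
    #define TEST_IOC_STATUS _IOR(TEST_IOC_MAGIC, 0, int)  /* read device status */

    static long test_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
    {
        int status = 0;                        /* placeholder status value */

        switch (cmd) {
        case TEST_IOC_STATUS:                  /* copy status to user space */
            if (copy_to_user((int __user *)arg, &status, sizeof(status)))
                return -EFAULT;
            return 0;
        default:
            return -ENOTTY;                    /* unknown IOCTL number */
        }
    }

    static const struct file_operations test_fops = {
        .owner          = THIS_MODULE,
        .unlocked_ioctl = test_ioctl,          /* user space control path */
    };

    static struct miscdevice test_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "dnvme_demo",                 /* appears as /dev/dnvme_demo */
        .fops  = &test_fops,
    };

    static int __init test_init(void) { return misc_register(&test_dev); }
    static void __exit test_exit(void) { misc_deregister(&test_dev); }
    module_init(test_init);
    module_exit(test_exit);
    MODULE_LICENSE("GPL");

A user space tester such as tNVMe would then open the device node and issue ioctl(fd, TEST_IOC_STATUS, &status) to drive each test hook.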
II. EXISTING SYSTEM
The existing dNVMe driver provides various well-known functions for handling devices appropriately. The functions are listed below with the tasks they perform; an illustrative sketch of the queue bookkeeping these functions imply is given after the list.
1. Commands: This function takes care of all commands sent to and received from the device. Commands travel through the SQ from host to device, and completions return through the CQ from device to host. The tasks handled are:
   a. Process all the commands to be sent, from SQ to CQ.
   b. Manage the storage of commands.
2. Data structure: This function creates and manages the driver's data structures and takes care of all memory-related operations. The tasks are:
   a. Allocate memory for user-space data in kernel space
   b. Copy user-space buffers to kernel memory
   c. Log metadata and IRQ nodes into a user-space file
3. IOCTL: The I/O control (IOCTL) interface is the system call for device-specific I/O operations; requests that are not served by the regular calls are carried out by IOCTL calls. The tasks are:
   a. Check the status of the device
   b. Perform driver-generic reads and writes
   c. Create the driver admin SQ and CQ and allocate kernel space
   d. Initialize the driver IOCTL calls
   e. Create and delete meta buffers, freeing the memory after a meta buffer is deleted
   f. Acquire queue metrics from the global data structure
4. IRQ: An interrupt is a signal that suspends the running operation so that the interrupting event can be serviced. The various operations are:
   a. Set a new IRQ scheme, initializing the IRQ list before any scheme runs and releasing it afterwards
   b. Disable the pin to read the command register in PCI space
   c. Validate IRQ inputs for MSI and MSI-X, and take care of masking and unmasking interrupts
   d. Allocate and deallocate IRQ and interrupt CQ nodes, freeing their memory.
5. Queue: Queues are used mainly for sending and receiving commands and messages. The tasks taken care of by this function are:
   a. Check whether the controller has transitioned its state after a controller reset, and enable/disable the controller
   b. Clean the existing driver data structures
   c. Create the SQ, CQ, and I/O SQ
   d. Deallocate memory after a CQ or SQ is deleted
   e. Inquire the number of commands in a CQ that are waiting to be reaped
   f. Find SQ, CQ, command, and metadata nodes, and remove command, SQ, and CQ nodes
   g. Copy CQ data to the user buffer for the reaped elements, and move the CQ head pointer to the location of the elements still to be reaped
   h. Reap the number of elements specified for the given CQ id and send the reaped elements back.
6. Register: This function takes care of all register-related operations. The task performed is:
   a. Read and write the controller registers located in PCI BAR 0 and 1, which are mapped to a memory area that supports in-order access.
7. Status check: This function checks the status of the device together with its power management. The tasks are as follows:
   a. Check the PCI device status and controller status
   b. Check whether the NVMe device supports the NEXT capability item in the linked list
   c. Perform PCI power-management control and status handling
   d. Check the MSI and MSI-X control and status
   e. Check the PCIe capability status register for the advanced error reporting capability of the PCI Express device.
8. dNVMe system: The main function that handles all the functions mentioned above is the dNVMe system. The tasks are as follows:
   a. Enter and initialize the dNVMe driver
   b. Exit the dNVMe driver probe
   c. Examine and remove the dNVMe driver
   d. Get the device list with its metrics and unlock the device
   e. Open and release the dNVMe driver
   f. Map the contiguous device-mapped area to user space
   g. Take care of IOCTL calls
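To make the queue-handling functions above concrete, the following is a hedged sketch of the kind of bookkeeping nodes they imply. The field and type names are assumptions made for illustration, not the dNVMe definitions.

    /* Illustrative queue bookkeeping: each SQ/CQ is tracked by a node on a
     * kernel linked list. Names are hypothetical, not from dNVMe. */
    #include <linux/list.h>
    #include <linux/types.h>

    struct sq_node {
        u16              sq_id;        /* submission queue identifier      */
        u16              cq_id;        /* completion queue it reports to   */
        void            *queue_base;   /* kernel virtual base of the queue */
        dma_addr_t       queue_dma;    /* DMA address handed to the device */
        u32              tail_ptr;     /* written to the SQ tail doorbell  */
        struct list_head cmd_list;     /* commands awaiting completion     */
        struct list_head entry;        /* link in the per-device SQ list   */
    };

    struct cq_node {
        u16              cq_id;        /* completion queue identifier      */
        u32              head_ptr;     /* advanced as entries are reaped   */
        u32              unreaped;     /* completions waiting to be reaped */
        struct list_head entry;        /* link in the per-device CQ list   */
    };

Keeping every queue on an exported kernel list is what gives the test suite the "maximum visibility" noted in the introduction: user space can enumerate, inspect, and deliberately corrupt the state that a production driver would hide.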
III. PROPOSED SYSTEM
The existing dNVMe driver provides all the basic functionality needed to operate the device, but it lacks a few main areas: IOCTL calls for the CQ, providing security (locking), and getting address information from the kernel. These new features can be developed and integrated into the existing system.

IV. IMPLEMENTATION LOGIC
The enhancement of the dNVMe device driver is achieved by supporting the three features mentioned above. An approach for implementing each new feature is given below.
A. IOCTL calls for CQ
First, the initialization is done: memory is allocated for the user structure in kernel space. Then the SQ is acquired.
The command size is known, and memory is allocated for the command in kernel space. Second, the driver rings the SQ doorbell: it retrieves the queue from the linked list, copies the tail pointer to the virtual pointer, and writes the tail-pointer value to the SQ tail doorbell. Third, the CQ is reaped: the number of unreaped elements in the CQ is checked; if the CQ is an IOCQ rather than the ACQ, the completion-entry size is looked up, the required base address is obtained, and the number of completion entries that can be reaped is copied. Fourth, the command-identifier node is freed from the command track list. Fifth, the queues are processed: the persistent queue identifier and the unique command identifier are obtained, and memory is allocated to copy the user data. Sixth, the admin commands are processed: deletion and creation of IOSQs and IOCQs are performed. Seventh, the reap algorithms are processed: the SQ node for the given SQ identifier is found, and the command is found in that SQ node. Eighth, the CQ data is copied to the user buffer for the reaped elements. Lastly, the CQ head pointer is moved to the location of the elements still to be reaped.
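A hedged sketch of the reap step (the third, eighth, and final steps above) is given below. The context structure and its fields are assumptions made for illustration, not the actual dNVMe types.

    /* Illustrative reap path: clamp to the ready completions, copy them to
     * the user buffer, then advance the head and ring the head doorbell.
     * All names are hypothetical. */
    #include <linux/io.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    struct cq_reap_ctx {
        void         *queue_base;      /* kernel mapping of the CQ         */
        u32           entry_size;      /* completion entry size in bytes   */
        u32           num_entries;     /* queue depth                      */
        u32           head_ptr;        /* current head index               */
        u32           unreaped;       /* completions not yet reaped        */
        void __iomem *head_doorbell;  /* mapped CQ head doorbell register  */
    };

    static int reap_cq(struct cq_reap_ctx *cq, void __user *ubuf, u32 want)
    {
        void *src = cq->queue_base + cq->head_ptr * cq->entry_size;

        if (want > cq->unreaped)
            want = cq->unreaped;       /* clamp to what is ready to reap */

        /* Copy the reaped entries to the user buffer; a real
         * implementation must also handle wrap-around at the queue end. */
        if (copy_to_user(ubuf, src, want * cq->entry_size))
            return -EFAULT;

        /* Move the head past the reaped elements and publish it. */
        cq->head_ptr = (cq->head_ptr + want) % cq->num_entries;
        cq->unreaped -= want;
        writel(cq->head_ptr, cq->head_doorbell);
        return want;
    }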
B. Providing locks
There are four layers of locking in dNVMe (an illustrative sketch of the hierarchy follows this subsection):
   a. The device-list read/write lock: protects the list of devices that dNVMe maintains.
   b. The device lock: protects the CQ list, the device, the metadata, and the IRQ process, as well as the IRQ track node list in the IRQ process structure.
   c. The IRQ lock: protects the IRQ track structure.
   d. The CQ lock: protects the CQ structure and all SQ structures on its list.
The locking procedure is as follows. First, the SQ node is found from the given device element node and SQ identifier; if it is found, the associated CQ is locked and a pointer to the SQ node in the SQ linked list is returned, after which locking on the dNVMe driver is performed. Second, the CQ is unlocked from the SQ. Third, the CQ is locked: the CQ node is found from the given device element node and CQ id, and if found, a pointer to the CQ node in the CQ linked list is returned.
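The sketch below shows one way this four-layer hierarchy could be expressed, with locks always taken top-down to avoid deadlock. The lock types and names are assumptions, not the dNVMe declarations.

    /* Illustrative four-layer lock hierarchy; locks are initialized
     * elsewhere (mutex_init/spin_lock_init) and taken strictly in order.
     * Names are hypothetical. */
    #include <linux/mutex.h>
    #include <linux/rwsem.h>
    #include <linux/spinlock.h>

    static DECLARE_RWSEM(device_list_lock);   /* layer 1: device list */

    struct demo_device {
        struct mutex dev_lock;  /* layer 2: CQ list, metadata, IRQ process */
        spinlock_t   irq_lock;  /* layer 3: IRQ track structure            */
    };

    struct demo_cq {
        struct mutex cq_lock;   /* layer 4: CQ and the SQs on its list     */
    };

    static void locked_cq_op(struct demo_device *dev, struct demo_cq *cq)
    {
        down_read(&device_list_lock);   /* layer 1 (read side)            */
        mutex_lock(&dev->dev_lock);     /* layer 2                        */
        mutex_lock(&cq->cq_lock);       /* layer 4                        */
        /* ... find the SQ node, operate on the CQ and its SQs ... */
        mutex_unlock(&cq->cq_lock);
        mutex_unlock(&dev->dev_lock);
        up_read(&device_list_lock);
    }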
C. Address information
Address information about the PCIe device is obtained from the operating system, such as the module layout, character device allocation, PCI bus configuration-byte reads, the kernel stack, PCI slot reset, PCI device disable, PCI MSI-X disable, PCI MSI-X enable, PCI MSI enable, and PCI MSI disable.
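As an example of the kind of queries involved, the sketch below uses standard kernel PCI helpers inside an assumed driver context; only the surrounding function is hypothetical.

    /* Illustrative PCI queries: read a configuration byte and toggle MSI.
     * The helpers used are the standard kernel PCI API. */
    #include <linux/pci.h>

    static void query_pci_info(struct pci_dev *pdev)
    {
        u8 irq_line;

        /* Read one byte of PCI configuration space. */
        pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &irq_line);
        dev_info(&pdev->dev, "interrupt line: %u\n", irq_line);

        /* Enable and then disable MSI; MSI-X uses analogous calls. */
        if (pci_enable_msi(pdev) == 0)
            pci_disable_msi(pdev);
    }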

V. INFERENCE
With these new features, the following observations are made:
   The enhanced driver can give better performance than the existing driver.
   The address-information feature helps in performing register-level changes.
   The locking of queues is advantageous for security purposes.
Figure 1 shows the time breakdown, comparing the native driver with the enhanced driver on a storage device.

Figure 1: Time spent in different parts of the I/O stack (bar chart, 0%-8%, showing NVMe driver, other kernel, and application time for the NVMe device with the native driver versus the modified driver)

The time the device spends in the driver is an important factor to consider. With the native driver, the device spends 1.5% of the overall time in the driver, while with the modified driver it spends half the time of the native driver (about 0.75%).

VI. CONCLUSION
Interest in developing Linux device drivers grows steadily as the popularity of the Linux system continues to rise. Many users are unaware of hardware issues, and the majority of Linux is independent of the hardware it runs on; however, without device drivers there is no functioning system. Enhancing device drivers provides various advantages through new features, and these help in developing storage devices with even less time and effort.

ACKNOWLEDGEMENT
Any achievement, be it scholastic or otherwise, does not depend solely on individual effort but on the guidance, encouragement, and cooperation of intellectuals, elders, and friends. We thank the Department of Information Science and Engineering, RVCE, for their constant support and encouragement.

REFERENCES
[1] A. Rubini and J. Corbet, Linux Device Drivers, Sebastopol: O'Reilly & Associates, 2001.
[2] A. Kadav and M. Swift, "Understanding modern device drivers," ACM SIGARCH Computer Architecture News, vol. 40, no. 1, 2012, p. 87.
[3] NVM Express Specification, Revision 1.1a, NVMHCI Workgroup, Tech. Rep., September 23, 2013.
[4] Sivashankar and S. Ramasamy, "Design and implementation of non-volatile memory express," in Proc. Int. Conf. on Recent Trends in Information Technology (ICRTIT), Chennai, 2014, pp. 1-6.
