XII IP Sample Report
A
Project Report
Submitted To
Principal
Roll No :
Name of Student :
Address :
Phone No :
Email Address :
Project Title :
ACKNOWLEDGEMENT
Apart from my own efforts, the success of any project depends largely on the
encouragement and guidance of many others. I take this opportunity to express my
gratitude to the people who have been instrumental in the successful completion of this
project.
I express a deep sense of gratitude to Almighty God for giving me the strength to
complete this project successfully.
My sincere thanks to Mr. Ashutosh Rajguru, Master In-charge, who was a guide, a
mentor and, above all, a friend, and who critically reviewed my project and helped in
solving each and every problem that occurred during its implementation.
The guidance and support received from all the members who have contributed, and
who continue to contribute, to this project was vital for its success. I am grateful for
their constant support and help.
TABLE OF CONTENTS
Introduction
Python Details
Preliminary Design
Database Design
Implementation
Bibliography
INTRODUCTION
Inventory management involves understanding the stock mix of a company and the
different demands on that stock. These demands are influenced by both external and
internal factors.
The objective of this project is to let students apply their programming knowledge to a
real-world situation or problem, and to expose them to how programming skills help in
developing good software.
PROPOSED SYSTEM
Today one cannot afford to rely on fallible human beings if one really wants to stand
up to today's merciless competition, where the old saying "to err is human" is no longer
a valid excuse and it is outdated to rationalise one's mistakes. So, to keep pace with
time, to bring about the best results without malfunctioning, and to work with greater
efficiency, the unending heaps of paper files should be replaced with the far more
sophisticated hard disk of the computer.
One has to use data management software. Software has been a great asset in
automating the working of various organisations. Many software products are now on
the market which have helped organisations work more easily and efficiently. Data
management initially required maintaining a lot of ledgers and a great deal of
paperwork, but software built for the organisation has now made this work faster and
easier. The software only has to be loaded on the computer and the work can be done.
This saves a lot of time and money. The work becomes fully automated, and any
information regarding the organisation can be obtained at the click of a button.
Moreover, this is the age of computers, and automating such an organisation gives it
greater efficiency and a better standing.
INITIATION PHASE
The Initiation Phase begins when a business sponsor identifies a need or an opportunity.
The System Concept Development Phase begins after a business need or opportunity is
validated by the Agency/Organization Program Leadership and the Agency/Organization
CIO.
The purpose of the System Concept Development Phase is to:
Determine the feasibility and appropriateness of the alternatives.
Identify system interfaces.
Identify basic functional and data requirements to satisfy the business need.
Establish system boundaries; identify goals, objectives, critical success factors,
and performance measures.
Evaluate the costs and benefits of alternative approaches to satisfy the basic
functional requirements.
Assess project risks.
Identify and initiate risk mitigation actions.
Develop a high-level technical architecture, process models, data models, and a
concept of operations.
This phase explores potential technical solutions within the context of the business
need. It may include several trade-off decisions, such as the decision to use COTS
software products as opposed to developing custom software or reusing software
components, or the decision to use incremental delivery versus a complete, one-time
deployment.
Construction of executable prototypes is encouraged to evaluate technology to
support the business process. The System Boundary Document serves as an important
reference document to support the Information Technology Project Request (ITPR)
process.
The ITPR must be approved by the State CIO before the project can move forward.
PICTORIAL REPRESENTATION OF SDLC:
PLANNING PHASE
REQUIREMENTS ANALYSIS PHASE
This phase formally defines the detailed functional user requirements using high-
level requirements identified in the Initiation, System Concept, and Planning phases. It
also delineates the requirements in terms of data, system performance, security, and
maintainability requirements for the system. The requirements are defined in this phase
to a level of detail sufficient for systems design to proceed. They need to be measurable,
testable, and relate to the business need or opportunity identified in the Initiation Phase.
The requirements that will be used to determine acceptance of the system are captured in
the Test and Evaluation Master Plan.
During this phase, the following activities are performed:
Further define and refine the functional and data requirements and document them in
the Requirements Document.
Complete business process reengineering of the functions to be supported (i.e., verify
what information drives the business process, what information is generated, who
generates it, where the information goes, and who processes it).
Develop detailed data and process models (system inputs, outputs, and processes).
Develop the test and evaluation requirements that will be used to determine acceptable
system performance.
DESIGN PHASE
The design phase involves converting the informational, functional, and network
requirements identified during the initiation and planning phases into unified design
specifications that developers use to script programs during the development phase.
Program designs are constructed in various ways. Using a top-down approach, designers
first identify and link major program components and interfaces, then expand design
layouts as they identify and link smaller subsystems and connections. Using a bottom-up
approach, designers first identify and link minor program components and interfaces,
then expand design layouts as they identify and link larger systems and connections.
Contemporary design techniques often use prototyping tools that build mock-up designs
of items such as application screens, database layouts, and system architectures. End
users, designers, developers, database managers, and network administrators should
review and refine the prototyped designs in an iterative process until they agree on an
acceptable design. Audit, security, and quality assurance personnel should be involved in
the review and approval process. During this phase, the system is designed to satisfy the
functional requirements identified in the previous phase. Since problems in the design
phase can be very expensive to solve at a later stage of software development, a variety
of elements are considered in the design to mitigate risk.
DEVELOPMENT PHASE
Testing as a deployed system with end users working together with contract personnel.
Operational testing by the end user alone, performing all functions. Requirements are
traced throughout testing; a final Independent Verification & Validation evaluation is
performed, and all documentation is reviewed and accepted prior to acceptance of the
system.
IMPLEMENTATION PHASE
This phase is initiated after the system has been tested and accepted by the user. In
this phase, the system is installed to support the intended business functions. System
performance is compared to performance objectives established during the planning
phase. Implementation includes user notification, user training, installation of hardware,
installation of software onto production computers, and integration of the system into
daily work processes. This phase continues until the system is operating in production in
accordance with the defined user requirements.
Often, programmers fall in love with Python because of the increased productivity it
provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast.
Debugging Python programs is easy: a bug or bad input will never cause a segmentation
fault. Instead, when the interpreter discovers an error, it raises an exception. When the
program doesn't catch the exception, the interpreter prints a stack trace. A source level
debugger allows inspection of local and global variables, evaluation of arbitrary
expressions, setting breakpoints, stepping through the code a line at a time, and so on.
The debugger is written in Python itself, testifying to Python's introspective power. On
the other hand, often the quickest way to debug a program is to add a few print
statements to the source: the fast edit-test-debug cycle makes this simple approach
very effective.
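For illustration, here is a small sketch of that edit-test-debug cycle (the function name
and the input are our own, chosen only for the example): an uncaught error produces a
stack trace, and a stray print statement is often enough to locate the problem.

def average(values):
    # a quick print statement shows what the function actually receives
    print("values =", values)
    return sum(values) / len(values)

try:
    average([])                # an empty list triggers a ZeroDivisionError
except ZeroDivisionError as exc:
    print("caught:", exc)      # prints: caught: division by zero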
Python is a popular programming language. It was created by Guido van Rossum, and
released in 1991.
It is used for web development (server-side), software development, mathematics, and
system scripting.
Why Python?
Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, and so on).
Python has a simple syntax similar to the English language.
Python has syntax that allows developers to write programs with fewer lines than
some other programming languages.
Python runs on an interpreter system, meaning that code can be executed as soon
as it is written. This means that prototyping can be very quick.
Python can be treated in a procedural way, an object-oriented way or a functional
way.
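As a small sketch of the last point (the names and values below are our own, purely
illustrative), the same task can be written in a procedural, an object-oriented, or a
functional style:

# procedural style: a plain function with a loop
def square_list(nums):
    result = []
    for n in nums:
        result.append(n * n)
    return result

# object-oriented style: the same idea wrapped in a class
class Squarer:
    def apply(self, nums):
        return [n * n for n in nums]

# functional style: map with a lambda
squares = list(map(lambda n: n * n, [1, 2, 3]))

print(square_list([1, 2, 3]))      # [1, 4, 9]
print(Squarer().apply([1, 2, 3]))  # [1, 4, 9]
print(squares)                     # [1, 4, 9]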
Good to know
The most recent major version of Python is Python 3, which we shall be using in
this tutorial. However, Python 2, although not being updated with anything other
than security updates, is still quite popular.
In this tutorial Python will be written in a text editor. It is possible to write Python
in an Integrated Development Environment, such as Thonny, PyCharm, NetBeans or
Eclipse, which are particularly useful when managing larger collections of Python files.
Python is available in two versions, which are different enough to trip up many new
users. Python 2.x, the older legacy branch, received official updates only until the start
of 2020, and it might persist unofficially after that. Python
3.x, the current and future incarnation of the language, has many useful and important
features not found in Python 2.x, such as new syntax features (e.g., the walrus
operator), better concurrency controls, and a more efficient interpreter.
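As one small illustration of those Python 3 only features (a sketch; the variable names
are our own), the walrus operator := introduced in Python 3.8 assigns and tests a value
in a single expression:

# Python 3.8+ walrus operator: assign and test in one expression
numbers = [4, 11, 2, 9]
if (n := len(numbers)) > 3:
    print(f"the list has {n} elements")   # prints: the list has 4 elements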
Python 3 adoption was slowed for the longest time by the relative lack of third-party
library support. Many Python libraries supported only Python 2, making it difficult to
switch. But over the last couple of years, the number of libraries supporting only Python
2 has dwindled; all of the most popular libraries are now compatible with both Python 2
and Python 3. Today, Python 3 is the best choice for new projects; there is no reason to
pick Python 2 unless you have no choice. If you are stuck with Python 2, you have
various strategies at your disposal.
Python's libraries
The success of Python rests on a rich ecosystem of first- and third-party software.
Python benefits from both a strong standard library and a generous assortment of easily
obtained and readily used libraries from third-party developers. Python has been
enriched by decades of expansion and contribution.
The default Python distribution also provides a rudimentary, but useful, cross-platform
GUI library via Tkinter, and an embedded copy of the SQLite 3 database.
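As a minimal sketch of the Tkinter library mentioned above (the window title and label
text are arbitrary), a basic window takes only a few lines:

import tkinter as tk

# create a window with a single label and start the event loop
root = tk.Tk()
root.title("Hello")
tk.Label(root, text="Hello from Tkinter").pack(padx=20, pady=20)
root.mainloop()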
Tuples: An ordered collection of objects which cannot be changed once created. For example:
Python3
# tuple: an immutable, ordered collection of objects
tup = (1, 'a', 'string', 3)
print(tup)
print(tup[1])
Output:
(1, 'a', 'string', 3)
a
Sets: Unordered collection of unique objects.
Set operations such as union (|), intersection (&) and difference (-) can be applied to
sets.
Frozen sets are immutable, i.e. once created, no further data can be added to them.
Curly braces {} are used to represent a set; objects placed inside the braces are treated
as set elements. (An empty set, however, must be created with set(), since {} creates an
empty dictionary.)
Python
# Set in Python
set1 = set()
for i in range(1, 6):
    set1.add(i)          # set1 becomes {1, 2, 3, 4, 5}
set2 = set()
for i in range(3, 8):
    set2.add(i)          # set2 becomes {3, 4, 5, 6, 7}
print("Set1 = ", set1)
print("Set2 = ", set2)
Output:
Set1 =  {1, 2, 3, 4, 5}
Set2 =  {3, 4, 5, 6, 7}
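Continuing the example above (a short sketch reusing set1 and set2), the operators
listed earlier can be applied directly:

print("Union        :", set1 | set2)   # {1, 2, 3, 4, 5, 6, 7}
print("Intersection :", set1 & set2)   # {3, 4, 5}
print("Difference   :", set1 - set2)   # {1, 2}

# frozen sets are immutable versions of sets
fset = frozenset([1, 2, 3])
# fset.add(4) would raise AttributeError, because a frozenset cannot be modified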
Java
Python programs are generally expected to run slower than Java programs, but they also
take much less time to develop. Python programs are typically 3-5 times shorter than
equivalent Java programs. This difference can be attributed to Python's built-in high-
level data types and its dynamic typing. For example, a Python programmer wastes no
time declaring the types of arguments or variables, and Python's powerful polymorphic
list and dictionary types, for which rich syntactic support is built straight into the
language, find a use in almost every Python program. Because of the run-time typing,
Python's run time must work harder than Java's. For example, when evaluating the
expression a+b, it must first inspect the objects a and b to find out their type, which is
not known at compile time. It then invokes the appropriate addition operation, which
may be an overloaded user-defined method. Java, on the other hand, can perform an
efficient integer or floating point addition, but requires variable declarations for a and b,
and does not allow overloading of the + operator for instances of user-defined classes.
For these reasons, Python is much better suited as a "glue" language, while Java is better
characterized as a low-level implementation language. In fact, the two together make an
excellent combination. Components can be developed in Java and combined to form
applications in Python; Python can also be used to prototype components until their
design can be "hardened" in a Java implementation. To support this type of
development, a Python implementation written in Java (Jython) is available, which
allows calling Python code from Java and vice versa. In this implementation, Python
source code is translated to Java bytecode (with help from a run-time library to support
Python's dynamic semantics).
Javascript
Python's "object-based" subset is roughly equivalent to JavaScript. Like JavaScript (and
unlike Java), Python supports a programming style that uses simple functions and
variables without engaging in class definitions. However, for JavaScript, that's all there
is. Python, on the other hand, supports writing much larger programs and better code
reuse through a true object-oriented programming style, where classes and inheritance
play an important role.
Perl
Python and Perl come from a similar background (Unix scripting, which both have long
outgrown), and sport many similar features, but have a different philosophy. Perl
emphasizes support for common application-oriented tasks, e.g. by having built-in
regular expressions, file scanning and report generating features. Python emphasizes
support for common programming methodologies such as data structure design and
object-oriented programming, and encourages programmers to write readable (and thus
maintainable) code by providing an elegant but not overly cryptic notation. As a
consequence, Python comes close to Perl but rarely beats it in its original application
domain; however Python has an applicability well beyond Perl's niche.
Tcl
Like Python, Tcl is usable as an application extension language, as well as a stand-alone
programming language. However, Tcl, which traditionally stores all data as strings, is
weak on data structures, and executes typical code much slower than Python. Tcl also
lacks features needed for writing large programs, such as modular namespaces. Thus,
while a "typical" large application using Tcl usually contains Tcl extensions written in C
or C++ that are specific to that application, an equivalent Python application can often
be written in "pure Python". Of course, pure Python development is much quicker than
having to write and debug a C or C++ component. It has been said that Tcl's one
redeeming quality is the Tk toolkit. Python has adopted an interface to Tk as its standard
GUI component library.
Tcl 8.0 addresses the speed issue by providing a bytecode compiler with limited data
type support, and adds namespaces. However, it is still a much more cumbersome
programming language.
Smalltalk
Perhaps the biggest difference between Python and Smalltalk is Python's more
"mainstream" syntax, which gives it a leg up on programmer training. Like Smalltalk,
Python has dynamic typing and binding, and everything in Python is an object.
However, Python distinguishes built-in object types from user-defined classes, and
currently doesn't allow inheritance from built-in types. Smalltalk's standard library of
collection data types is more refined, while Python's library has more facilities for
dealing with Internet and WWW realities such as email, HTML and FTP.
C++
Almost everything said for Java also applies for C++, just more so: where Python code
is typically 3-5 times shorter than equivalent Java code, it is often 5-10 times shorter
than equivalent C++ code! Anecdotal evidence suggests that one Python programmer
can finish in two months what two C++ programmers can't complete in a year. Python
shines as a glue language, used to combine components written in C++.
The Pandas DataFrame is a structure that contains two-dimensional data and its
corresponding labels. DataFrames are widely used in data science, machine learning,
scientific computing, and many other data-intensive fields.
DataFrames are similar to SQL tables or the spreadsheets that you work with in Excel or
Calc. In many cases, DataFrames are faster, easier to use, and more powerful than
tables or spreadsheets because they're an integral part of
the Python and NumPy ecosystems.
In this tutorial, you'll learn:
What a Pandas DataFrame is and how to create one
How to access, modify, add, sort, filter, and delete data
In this table, the first row contains the column labels (name, city, age, and py-score).
The first column holds the row labels (101, 102, and so on). All other cells are filled with
the data values.
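For reference, the table in question is the small candidate dataset used for all of the
DataFrame examples in this section:

     name    city         age  py-score
101  Xavier  Mexico City  41   88.0
102  Ann     Toronto      28   79.0
103  Jana    Prague       33   81.0
104  Yi      Shanghai     34   80.0
105  Robin   Manchester   38   68.0
106  Amal    Cairo        31   61.0
107  Nori    Osaka        37   84.0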
Now you have everything you need to create a Pandas DataFrame.
There are several ways to create a Pandas DataFrame. In most cases, you'll use
the DataFrame constructor and provide the data, labels, and other information. You can
pass the data as a two-dimensional list, tuple, or NumPy array. You can also pass it as
a dictionary or Pandas Series instance, or as one of several other data types not
covered in this tutorial.
For this example, assume you're using a dictionary to pass the data:
>>>
>>> data = {
... 'name': ['Xavier', 'Ann', 'Jana', 'Yi', 'Robin', 'Amal', 'Nori'],
... 'city': ['Mexico City', 'Toronto', 'Prague', 'Shanghai',
... 'Manchester', 'Cairo', 'Osaka'],
... 'age': [41, 28, 33, 34, 38, 31, 37],
... 'py-score': [88.0, 79.0, 81.0, 80.0, 68.0, 61.0, 84.0]
... }
The dictionary keys 'name', 'city', 'age', and 'py-score' become the column labels, and
the lists associated with them hold the data values. Finally, row_labels refers to a list
that contains the labels of the rows, which are numbers ranging from 101 to 107:

>>> row_labels = [101, 102, 103, 104, 105, 106, 107]
Now you're ready to create a Pandas DataFrame that holds the data (the candidate
names, cities, ages, and Python test scores):

>>>
>>> df = pd.DataFrame(data=data, index=row_labels)
>>> df.head(n=2)
name city age py-score
101 Xavier Mexico City 41 88.0
102 Ann Toronto 28 79.0
>>> df.tail(n=2)
name city age py-score
106 Amal Cairo 31 61.0
107 Nori Osaka 37 84.0
That's how you can show just the beginning or end of a Pandas DataFrame. The
parameter n specifies the number of rows to show.
Note: It may be helpful to think of the Pandas DataFrame as a dictionary of columns, or
Pandas Series, with many additional features.
You can access a column in a Pandas DataFrame the same way you would get a value
from a dictionary:
>>>
>>> df.city
101 Mexico City
102 Toronto
103 Prague
104 Shanghai
105 Manchester
106 Cairo
107 Osaka
Name: city, dtype: object
That's how you get a particular column. You've extracted the column that corresponds
with the label 'city', which contains the locations of all your job candidates.
It's important to notice that you've extracted both the data and the corresponding
row labels:
Each column of a Pandas DataFrame is an instance of pandas.Series, a structure that
holds one-dimensional data and their labels. You can get a single item of a Series object
the same way you would with a dictionary, by using its label as a key:
>>>
>>> cities = df['city']
>>> cities[102]
'Toronto'
In this case, 'Toronto' is the data value and 102 is the corresponding label. As you'll see
in a later section, there are other ways to get a particular item in a Pandas DataFrame.
You can also access a whole row with the accessor .loc[]:
>>>
>>> df.loc[103]
name Jana
city Prague
age 33
py-score 81
Name: 103, dtype: object
This time, you've extracted the row that corresponds to the label 103, which contains
the data for the candidate named Jana. In addition to the data values from this row,
you've extracted the labels of the corresponding columns:
There are other methods as well, which you can learn about in the official
documentation.
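For instance (a short sketch using the same df as above; the exact output formatting can
differ slightly between Pandas versions), .iloc[] selects a row by its integer position
rather than its label, and .at[] retrieves a single value by row and column label:

>>>
>>> df.iloc[2]
name Jana
city Prague
age 33
py-score 81
Name: 103, dtype: object
>>> df.at[103, 'name']
'Jana'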
You can start by importing Pandas along with NumPy, which you'll use throughout the
following examples. Suppose the data is given as a dictionary d, where the single value
100 is repeated along the whole column z:

>>>
>>> import numpy as np
>>> import pandas as pd
>>> d = {'x': [1, 2, 3], 'y': np.array([2, 4, 8]), 'z': 100}
>>> pd.DataFrame(d)
x y z
0 1 2 100
1 2 4 100
2 3 8 100
The keys of the dictionary are the DataFrame's column labels, and the dictionary
values are the data values in the corresponding DataFrame columns. The values can be
contained in a tuple, list, one-dimensional NumPy array, Pandas Series object, or one of
several other data types. You can also provide a single value that will be copied along
the entire column.
It's possible to control the order of the columns with the columns parameter and the
row labels with index:
>>>
>>> pd.DataFrame(d, index=[100, 200, 300], columns=['z', 'y', 'x'])
z y x
100 100 2 1
200 100 4 2
300 100 8 3
As you can see, you've specified the row labels 100, 200, and 300. You've also forced
the order of columns: z, y, x.
Creating a Pandas DataFrame With Lists
Another way to create a Pandas DataFrame is to use a list of dictionaries:
>>>
>>> l = [{'x': 1, 'y': 2, 'z': 100},
...      {'x': 2, 'y': 4, 'z': 100},
...      {'x': 3, 'y': 8, 'z': 100}]
>>> pd.DataFrame(l)
x y z
0 1 2 100
1 2 4 100
2 3 8 100
Again, the dictionary keys are the column labels, and the dictionary values are the data
values in the DataFrame.
You can also use a nested list, or a list of lists, as the data values. If you do, then it's
wise to explicitly specify the labels of columns, rows, or both when you create the
DataFrame. Here the data comes from a NumPy array named arr, and the first item of
arr is changed after the DataFrame df_ has been created:

>>>
>>> arr = np.array([[1, 2, 100],
...                 [2, 4, 100],
...                 [3, 8, 100]])
>>> df_ = pd.DataFrame(arr, columns=['x', 'y', 'z'])
>>> arr[0, 0] = 1000
>>> df_
x y z
0 1000 2 100
1 2 4 100
2 3 8 100
As you can see, when you change the first item of arr, you also modify df_.
Note: Not copying data values can save you a significant amount of time and processing
power when working with large datasets.
If this behavior isn't what you want, then you should specify copy=True in
the DataFrame constructor. That way, df_ will be created with a copy of the values
from arr instead of the actual values.
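Here is a quick sketch of the difference (reusing the array values from the example
above; df_copy is just an illustrative name). A DataFrame built with copy=True keeps
its own copy of the data, so modifying arr afterwards has no effect on it:

>>>
>>> arr = np.array([[1, 2, 100], [2, 4, 100], [3, 8, 100]])
>>> df_copy = pd.DataFrame(arr, columns=['x', 'y', 'z'], copy=True)
>>> arr[0, 0] = 1000
>>> df_copy
   x  y    z
0  1  2  100
1  2  4  100
2  3  8  100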
Creating a Pandas DataFrame From Files
You can save and load the data and labels from a Pandas DataFrame to and from a
number of file types, including CSV, Excel, SQL, JSON, and more. This is a very powerful
feature.
You can save your job candidate DataFrame to a CSV file with .to_csv():
>>>
>>> df.to_csv('data.csv')
The statement above will produce a CSV file called data.csv in your working directory:
,name,city,age,py-score
101,Xavier,Mexico City,41,88.0
102,Ann,Toronto,28,79.0
103,Jana,Prague,33,81.0
104,Yi,Shanghai,34,80.0
105,Robin,Manchester,38,68.0
106,Amal,Cairo,31,61.0
107,Nori,Osaka,37,84.0
Now that you have a CSV file with data, you can load it into a DataFrame with
read_csv(). Once it is loaded, the row and column labels of the DataFrame are
available through .index and .columns:
>>>
>>> df.index
Int64Index([1, 2, 3, 4, 5, 6, 7], dtype='int64')
>>> df.columns
Index(['name', 'city', 'age', 'py-score'], dtype='object')
Now you have the row and column labels as special kinds of sequences. As you can with
any other Python sequence, you can get a single item:
>>>
>>> df.columns[1]
'city'
In addition to extracting a particular item, you can apply other sequence operations,
including iterating through the labels of rows or columns. However, this is rarely
necessary since Pandas offers other ways to iterate over DataFrames, which you'll see
in a later section.
You can also use this approach to modify the labels:
>>>
>>> df.index = np.arange(10, 17)
>>> df.index
Int64Index([10, 11, 12, 13, 14, 15, 16], dtype='int64')
>>> df
name city age py-score
10 Xavier Mexico City 41 88.0
11 Ann Toronto 28 79.0
12 Jana Prague 33 81.0
13 Yi Shanghai 34 80.0
14 Robin Manchester 38 68.0
15 Amal Cairo 31 61.0
16 Nori Osaka 37 84.0
In this example, you use numpy.arange() to generate a new sequence of row labels that
holds the integers from 10 to 16. To learn more about arange(), check out NumPy
arange(): How to Use np.arange().
Keep in mind that if you try to modify a particular item of .index or .columns, then
you'll get a TypeError.
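For example (a sketch; the exact wording of the error message can vary between Pandas
versions):

>>>
>>> df.index[0] = 200
Traceback (most recent call last):
  ...
TypeError: Index does not support mutable operations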
Reading and Writing CSV Files in Python
The CSV (Comma Separated Values) format is the most common import and export
format for spreadsheets and databases. It is one of the most common methods for
exchanging data between applications and a popular data format in Data Science. It is
supported by a wide range of applications. A CSV file stores tabular data in which
each data field is separated by a delimiter (a comma in most cases). To be treated as a
CSV file, it must be saved with the .csv file extension.
Reading from CSV file
Python contains a module called csv for the handling of CSV files. The reader class
from the module is used for reading data from a CSV file. First, the CSV file is opened
using the open() function in 'r' mode (read mode), which returns a file object. The file
object is then passed to the reader() function of the csv module, which returns a reader
object that iterates over the lines in the specified CSV document.
Syntax:
csv.reader(csvfile, dialect='excel', **fmtparams)
Note: The with keyword is used along with the open() method as it simplifies
exception handling and automatically closes the CSV file.
Example:
import csv
# open the CSV file (the file name is illustrative) and read it with csv.reader
with open('students.csv', 'r') as file:
    csvFile = csv.reader(file)
    lines = list(csvFile)
print(lines)
Output:
[['Steve', '13', 'A'],
 ['John', '14', 'F'],
 ['Nancy', '14', 'C'],
 ['Ravi', '13', 'B']]
Writing to CSV file
The csv.writer class is used to insert data into a CSV file. This class returns a writer
object which is responsible for converting the user's data into a delimited string. A CSV
file object should be opened with newline='' otherwise, newline characters inside the
quoted fields will not be interpreted correctly.
Syntax:
csv.writer(csvfile, dialect='excel', **fmtparams)
csv.writer class provides two methods for writing to CSV. They
are writerow() and writerows().
writerow(): This method writes a single row at a time. The field (header) row can be
written using this method.
Syntax:
writerow(fields)
writerows(): This method is used to write multiple rows at a time. It can be used to
write a list of rows.
Syntax:
writerows(rows)
Example:
# writing to CSV
import csv

# field names (sample values, matching the rows below)
fields = ['Name', 'Branch', 'Year', 'CGPA']
# data rows of the CSV file (sample values)
rows = [['Nikhil', 'COE', '2', '9.0'],
        ['Sanchit', 'COE', '2', '9.1']]

filename = "university_records.csv"
with open(filename, 'w', newline='') as csvfile:
    csvwriter = csv.writer(csvfile)
    csvwriter.writerow(fields)
    csvwriter.writerows(rows)
Output: the file university_records.csv is created in the working directory, containing
the field names as the first row followed by the data rows.
We can also write dictionaries to the CSV file. For this the csv module provides the
csv.DictWriter class. This class returns a writer object which maps dictionaries onto
output rows.
Syntax:
csv.DictWriter(csvfile, fieldnames, restval='', extrasaction='raise', dialect='excel',
*args, **kwds)
csv.DictWriter provides two methods for writing to CSV. They are:
writeheader(): This method simply writes the first row of your CSV file using the
pre-specified fieldnames.
Syntax:
writeheader()
writerows(): This method writes all the rows, but in each row it writes only the
values (not the keys).
Syntax:
writerows(mydict)
Example:
import csv
# field names
fields = ['Name', 'Branch', 'Year', 'CGPA']
# rows to be written, each row as a dictionary (sample values)
mydict = [{'Name': 'Nikhil', 'Branch': 'COE', 'Year': '2', 'CGPA': '9.0'},
          {'Name': 'Sanchit', 'Branch': 'COE', 'Year': '2', 'CGPA': '9.1'}]
filename = "university_records.csv"
with open(filename, 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fields)
    writer.writeheader()
    writer.writerows(mydict)
Output: the same university_records.csv file, with the header row written by
writeheader() followed by one data row per dictionary in mydict.
SQL is not a procedural language. It is not used to define complex processes; instead,
we issue SQL commands that define and manipulate data. In this respect SQL is
different from other languages.
SQL is very readable.
In SQL we always issue commands.
SQL statements fall into two groups:-
● Data Definition Language (DDL) – DDL statements are used to describe the tables and
the data they contain. The subset of SQL statements used for modeling the structure
(rather than the contents) of a database or cube. The DDL gives you the ability to
create, modify, and remove databases and database objects.
● Data Manipulation Language (DML) – DML statements are used to operate on the
data in the database. This is the subset of SQL statements used to retrieve and
manipulate data. DML can be further divided into two groups:
● Select Statements – Statements that return a set of results.
● Everything else – Statements that don't return a set of results.
Here are some of the queries defined:
SELECT - SQL statement used to request a selection, projection, join, query, and so on,
from a SQL Server database.
Primary key - Primary key constraints identify the column or set of columns whose
values uniquely identify a row in a table. No two rows in a table can have the same
primary key value. You cannot enter a NULL value for any column in a primary key.
Insert- The Insert logical operator inserts each row from its input into the object
specified in the Argument column. To insert the data into a relation we either specify a
tuple to be inserted or write a query.
Delete- The Delete logical operator deletes from an object rows that satisfy the
optional predicate in the Argument column. We can delete only whole tuples; we
cannot delete values on only particular attributes.
Update- The Update logical operator updates each row from its input in the object
specified in the Argument column. It provides a way of modifying existing data in a
table.
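Since Python ships with the sqlite3 module (the embedded copy of SQLite 3 noted
earlier), the statements described above can be tried out directly from Python. The
following is only an illustrative sketch; the table name, column names, and values are
invented for the example:

import sqlite3

# create an in-memory SQLite database for experimentation
con = sqlite3.connect(":memory:")
cur = con.cursor()

# DDL: create a table with a primary key
cur.execute("CREATE TABLE student (roll INTEGER PRIMARY KEY, name TEXT, marks REAL)")

# DML: insert, update and delete rows
cur.execute("INSERT INTO student VALUES (1, 'Asha', 91.5)")
cur.execute("INSERT INTO student VALUES (2, 'Ravi', 84.0)")
cur.execute("UPDATE student SET marks = 86.0 WHERE roll = 2")
cur.execute("DELETE FROM student WHERE roll = 1")

# SELECT: a statement that returns a set of results
cur.execute("SELECT roll, name, marks FROM student")
print(cur.fetchall())        # [(2, 'Ravi', 86.0)]

con.commit()
con.close()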
PRELIMINARY DESIGN

DATABASE DESIGN

IMPLEMENTATION CODE
BIBLIOGRAPHY
Written by:
2. Oxford Class XII Informatics Practices