Submitted in partial fulfillment of the requirements for the award of the Degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted By
N. DIVYA (20MG1A0522)
B. KAVYA (20MG1A0502)
2023-2024
A Project Report on
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
By
G. CHARITHA SRI (20MG1A0508)
N. DIVYA (20MG1A0522)
B. KAVYA (20MG1A0502)
DECLARATION BY THE CANDIDATE
This is a record of work carried out by us, and the results embodied in this project have
not been reproduced or copied from any source. The results embodied in this project have not been
submitted to any other university or institute for the award of any other degree or diploma.
We consider it our privilege to express our gratitude to all those who guided, inspired, and
helped us in the completion of this project.
We wish to express our deep gratitude to our project guide, Mr. P. Veeraswami, M.Tech, Sree
Vahini Institute of Science and Technology, Tiruvuru, for his timely cooperation and valuable
suggestions while carrying out this project work.
We would like to thank Dr. K. V. Panduranga Rao, HOD, Dept. of CSE for his
encouragement and valuable guidance in completing our project successfully.
We express heartfelt thanks to Dr. R. Nagendra Babu, Principal, Sree Vahini Institute of
Science and Technology, Tiruvuru for the successful completion of our degree.
We would like to thank all the faculty members of the CSE Department, Sree Vahini Institute
of Science and Technology, Tiruvuru, for their timely cooperation and valuable suggestions while
carrying out this project work.
We feel a deep sense of gratitude for our family, who formed part of our vision and taught us
the good things that matter in life.
CONTENTS
I Abstract
II List of Figures
1 Introduction
9 Implementation
12 Conclusion
13 References
ABSTRACT
Traffic data is very important in designing a smart city. Nowadays, many intelligent transport systems
use modern technologies to predict traffic flow, to minimize accidents on the road, to predict the speed of
a vehicle, etc. Traffic flow prediction is an appealing study field. Many techniques of data mining are
employed to forecast traffic. With technological progress, deep learning techniques can be applied to
real-time information. Deep learning algorithms are discussed to forecast real-world traffic data. When traffic
data becomes big data, some techniques to improve the accuracy of traffic prediction are also discussed.
Keywords: Deep learning, Neural network, Traffic flow prediction, Convolutional Neural Network
(CNN), Recurrent Neural Network (RNN).
LIST OF FIGURES
ABBREVIATIONS
DL – Deep Learning
CHAPTER – 01
1. INTRODUCTION
1.1 MOTIVATION:
Transportation uses recent digital techniques to achieve efficient traffic flow, minimize
accidents on the road, and maintain speed on the road. Traffic predictions help us in route planning,
navigation, and other mobility services. Traffic data is real-world information; traffic models are
usually used to evaluate different past and real-time traffic data to forecast potential traffic
circumstances.
Traffic speed prediction is of great importance for the benefit of both road users and traffic
management agencies. To solve the problem, traffic scientists have developed several time-series speed
prediction approaches, including traditional statistical models and machine learning techniques.
However, existing methods are still unsatisfying due to the difficulty of reflecting the stochastic traffic
flow characteristics. Recently, various deep learning models have been introduced to the prediction
field.
2. SYSTEM ANALYSIS
3. LITERATURE SURVEY
3.1 Traffic Flow Forecasting: Comparison of Modeling Approaches
AUTHORS: Brian L. Smith and Michael J. Demetsky
ABSTRACT: The capability to forecast traffic volume in an operational setting has been identified
as a critical need for intelligent transportation systems (ITS). In particular, traffic volume forecasts
will support proactive, dynamic traffic control. However, previous attempts to develop traffic
volume forecasting models have met with limited success. This research effort focused on
developing traffic volume forecasting models for two sites on Northern Virginia's Capital Beltway.
Four models were developed and tested for the freeway traffic flow forecasting problem, which is
defined as estimating traffic flow 15 min into the future. They were the historical average, time
series, neural network, and nonparametric regression models. The nonparametric regression model
significantly outperformed the other models. A Wilcoxon signed-rank test revealed that the
nonparametric regression model experienced significantly lower errors than the other models. In
addition, the nonparametric regression model was easy to implement and proved to be portable,
performing well at two distinct sites. Based on its success, research is ongoing to refine the
nonparametric regression model and to extend it to produce multiple interval forecasts.
4. SYSTEM STUDY
4.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very
general plan for the project and some cost estimates. During system analysis, the feasibility study of the
proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the
company. For feasibility analysis, some understanding of the major requirements for the system is
essential.
➢ ECONOMICAL FEASIBILITY
➢ TECHNICAL FEASIBILITY
➢ SOCIAL FEASIBILITY
5. SYSTEM REQUIREMENTS
5.1 HARDWARE REQUIREMENTS:
6. SYSTEM ARCHITECTURE
1. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model
the system components. These components are the system process, the data used by the process,
an external entity that interacts with the system, and the information flows in the system.
2. DFD shows how the information moves through the system and how it is modified by a series
of transformations. It is a graphical technique that depicts information flow and the
transformations that are applied as data moves from input to output.
3. DFD is also known as a bubble chart. A DFD may be used to represent a system at any level of
abstraction. DFD may be partitioned into levels that represent increasing information flow and
functional detail.
[Figure: System data flow – Data Exploration → Data Preprocessing → Feature Extraction → Data Splitting → Model Build (RNN) → Load Model → Upload Test]
7. SOFTWARE ENVIRONMENT
What is Python:
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in object-oriented and procedural paradigms. Python programs are generally
smaller than equivalent programs in other languages such as Java.
Programmers have to type relatively less, and the indentation requirement of the language keeps the code
readable at all times.
Python language is being used by almost all tech-giant companies like Google, Amazon, Facebook,
Instagram, Dropbox, Uber… etc.
The biggest strength of Python is huge collection of standard libraries which can be used for the following –
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt, etc.)
• Web frameworks like Django (used by YouTube, Instagram, and Dropbox)
• Image processing (like OpenCV and Pillow)
• Web scraping (like Scrapy, BeautifulSoup, and Selenium)
• Test frameworks.
• Multimedia
Advantages of Python:
1. Extensive Libraries:
Python downloads with an extensive library that contains code for various purposes like regular expressions,
documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation,
and more. So, we don't have to write the complete code for that manually.
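As a small illustration (not from the project code), the snippet below uses only the built-in re module from the standard library to pull numeric readings out of a line of text:

import re

# extract all integer readings from a line of sensor text
line = "lane1=42 lane2=37 lane3=55"
readings = [int(x) for x in re.findall(r"\d+", line)]
print(readings)        # [42, 37, 55]
print(sum(readings))   # 134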
2. Extensible:
As we have seen earlier, Python can be extended to other languages. You can write some of your code in
languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable:
Complimentary to extensibility, Python is embeddable as well. You can put your Python code in your source
code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.
4. Improved Productivity:
The language's simplicity and extensive libraries render programmers more productive than languages like Java
and C++ do. Also, you need to write less code to get more things done.
5. IoT Opportunities:
Since Python forms the basis of new platforms like Raspberry Pi, it finds a bright future in the Internet of
Things. This is a way to connect the language with the real world.
6. Simple and Easy:
When working with Java, you may have to create a class to print 'Hello World'. But in Python, just a print
statement will do. It is also quite easy to learn, understand, and code. This is why, when people pick up Python,
they have a hard time adjusting to other, more verbose languages like Java.
7. Readable:
Because it is not such a verbose language, reading Python is much like reading English. This is the reason why
it is so easy to learn, understand, and code. It also does not need curly braces to define blocks, and
indentation is mandatory. This further aids the readability of the code.
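A minimal sketch of this point: indentation alone, not braces, defines the blocks below.

def classify_speed(speed_kmph):
    # the indented lines form the bodies of the if/elif/else blocks
    if speed_kmph < 20:
        return "congested"
    elif speed_kmph < 60:
        return "moderate"
    else:
        return "free flow"

print(classify_speed(35))  # moderate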
8. Object-Oriented:
This language supports both the procedural and object-oriented programming paradigms. While functions help
us with code reusability, classes and objects let us model the real world. A class allows the encapsulation
of data and functions into one.
9. Free and Open-Source:
Like we said earlier, Python is freely available. But not only can you download Python for free, you can
also download its source code, make changes to it, and even distribute it. It downloads with an extensive
collection of libraries to help you with your tasks.
10. Portable:
When you code your project in a language like C++, you may need to make some changes to it if you want to
run it on another platform. But it isn't the same with Python. Here, you need to code only once, and you can run
it anywhere. This is called Write Once Run Anywhere (WORA). However, you need to be careful enough not
to include any system-dependent features.
11. Interpreted:
Lastly, we will say that it is an interpreted language. Since statements are executed one by one, debugging is
easier than in compiled languages.
Advantages of Python Over Other Languages:
1. Less Coding:
Almost all of the tasks done in Python require less coding than when the same task is done in other
languages. Python also has awesome standard library support, so you don't have to search for any
third-party libraries to get your job done. This is the reason many people suggest learning Python to
beginners.
2. Affordable
Python is free therefore individuals, small companies or big organizations can leverage the free
available resources to build applications. Python is popular and widely used so it gives you better
community support.
3. Python is for Everyone:
Python code can run on any machine, whether it is Linux, Mac or Windows. Programmers need to
learn different languages for different jobs, but with Python, you can professionally build web apps,
perform data analysis and machine learning, automate things, do web scraping, and also build games and
powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you should be aware
of its consequences as well. Let’s now see the downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it often
results in slow execution. This, however, isn’t a problem unless speed is a focal point for the project. In
other words, unless high speed is a requirement, the benefits offered by Python are enough to distract
us from its speed limitations.
2. Weak in Mobile Computing and Browsers:
While it serves as an excellent server-side language, Python is rarely seen on the client side.
Besides that, it is rarely ever used to implement smartphone-based applications. One such application is
called Carbonnelle.
The reason it is not so famous, despite the existence of Brython, is that it isn't that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don't need to declare the type of a
variable while writing the code. It uses duck typing. But wait, what's that? Well, it just means that if it
looks like a duck, it must be a duck. While this is easy on the programmers during coding, it can raise run-
time errors.
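A small sketch of duck typing: the hypothetical function below works for any object that behaves like a sequence, but passing an int only fails at run time, which is exactly the kind of error described above.

def middle_element(seq):
    # works for any object that supports len() and indexing -- no declared types
    return seq[len(seq) // 2]

print(middle_element([10, 20, 30]))   # 20 (a list behaves like a sequence)
print(middle_element("abcde"))        # 'c' (so does a string)
# middle_element(12345) would raise a TypeError, but only at run time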
4. Underdeveloped Database Access Layers:
Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and
ODBC (Open DataBase Connectivity), Python's database access layers are a bit underdeveloped.
Consequently, it is less often applied in huge enterprises.
5. Simple:
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example. I don't do
Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems
unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
History of Python:
What do the alphabet and the programming language Python have in common? Right, both start with ABC. If
we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is
a general-purpose programming language and programming environment, which had been developed in the
Netherlands, Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was
to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that
time at the CWI on a project called Amoeba, a distributed operating system. In an interview with Bill Venners,
Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called
ABC at Centrum voor Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on
Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to
the people who worked on it." Later on in the same interview, Guido van Rossum continued: "I remembered all
my experience and some of my frustration with ABC. I decided to try to design a simple scripting language that
possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple
virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I
liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end
blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list,
strings, and numbers."
Before we look at the details of various machine learning methods, let's start by looking at what machine learning
is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find
that categorization can often be misleading at first brush. The study of machine learning certainly arose from
research in this context, but in the data science application of machine learning methods, it's more helpful to
think of machine learning as a means of building models of data.
At the most fundamental level, machine learning can be categorized into two main types: supervised learning
and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of data and some
label associated with the data; once this model is determined, it can be used to apply labels to new, unknown
data. This is further subdivided into classification tasks and regression tasks: in classification, the labels are
discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types
of supervised learning in the following section.
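As a toy illustration of supervised classification (not the project's own model), a scikit-learn classifier is fit on labeled measurements and then applies labels to new data; the features and labels here are invented:

from sklearn.tree import DecisionTreeClassifier

# measured features: [hour of day, vehicles counted in the last 5 minutes]
X = [[8, 120], [9, 150], [13, 40], [14, 35], [18, 160], [22, 20]]
y = ["jam", "jam", "clear", "clear", "jam", "clear"]  # discrete labels -> classification

model = DecisionTreeClassifier().fit(X, y)   # learn the feature-label relationship
print(model.predict([[17, 140]]))            # apply a label to new, unknown data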
Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often
described as "letting the dataset speak for itself." These models include tasks such as clustering and
dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction
algorithms search for more succinct representations of the data. We will see examples of both types of
unsupervised learning in the following section.
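A toy sketch of unsupervised clustering on the same kind of invented traffic features; no labels are supplied, and k-means finds the groups on its own:

import numpy as np
from sklearn.cluster import KMeans

# unlabeled feature vectors: [average speed, vehicle count]
X = np.array([[90, 20], [85, 25], [30, 150], [25, 160], [88, 18], [28, 155]])

# "let the dataset speak for itself": ask for two groups, give no labels
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1 0 1] -- free-flow vs. congested clusters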
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think,
evaluate, and solve complex problems. On the other side, AI is still in its initial stage and hasn’t surpassed human
intelligence in many aspects. Then the question is what is the need to make machines learn? The most suitable
reason for doing this is, “to make decisions, based on data, with efficiency and scale”.
Lately, organizations are investing heavily in newer technologies like Artificial Intelligence, Machine Learning
and Deep Learning to get the key information from data to perform several real-world tasks and solve problems.
We can call it data-driven decisions taken by machines, particularly to automate the process.
While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous
cars, this segment of AI as a whole still has a long way to go. The reason behind this is that ML has not been able
to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data: Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-
quality data leads to problems related to data preprocessing and feature extraction.
Time-Consuming task: Another challenge faced by ML models is the consumption of time especially for data
acquisition, feature extraction and retrieval.
Lack of specialist persons: As ML technology is still in its infancy stage, the availability of expert resources
is a challenge.
No clear objective for formulating business problems: Having no clear objective and well-defined goal for
business problems is another key challenge for ML because this technology is not that mature yet.
Issue of overfitting & underfitting: If the model is overfitting or underfitting, it cannot represent the
problem well.
Curse of dimensionality: Another challenge ML model faces is too many features of data points. This can be
a real hindrance.
Difficulty in deployment: Complexity of the ML model makes it quite difficult to be deployed in real life.
Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden year
of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional
approach. Following are some real-world applications of ML:
• Emotion analysis
• Sentiment analysis
• Error detection and prevention
• Weather forecasting and prediction
• Stock market analysis and forecasting.
• Speech synthesis
• Speech recognition
• Customer segmentation
• Object recognition.
• Fraud detection
• Fraud prevention
• Recommendation of products to customers in online shopping
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study that gives
computers the capability to learn without being explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the most popular
(if not the most!) career choices. According to Indeed, Machine Learning Engineer Is The Best Job of 2019
with a 344% growth and an average base salary of $146,085 per year.
But there is still a lot of doubt about what exactly is Machine Learning and how to start learning it. So, this article
deals with the Basics of Machine Learning and also the path you can follow to eventually become a full- fledged
Machine Learning Engineer. Now let’s get started!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented Machine Learning
Engineer. Of course, you can always modify the steps according to your needs to reach your desired end goal!
Step 1: Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some prerequisites that you
need to know, which include Linear Algebra, Multivariate Calculus, Statistics, and Python.
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the extent
to which you need them depends on your role as a data scientist. If you are more focused on application
heavy machine learning, then you will not be that heavily focused on maths as there are many common
libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear
Algebra and Multivariate Calculus is very important as you will have to implement many ML algorithms
from scratch.
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be
spent collecting and cleaning data. And statistics is a field that handles the collection, analysis, and
presentation of data. So, it is no surprise that you need to learn it! Some
of the key concepts in statistics that are important are Statistical Significance, Probability Distributions,
Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very important part of ML, which
deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood,
etc.
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as they
go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are
other languages you can use for Machine Learning like R, Scala, etc. Python is currently the most popular
language for ML. In fact, there are many Python libraries that are specifically useful for Artificial
Intelligence and Machine Learning such as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using various online resources
and courses such as Fork Python available Free on GeeksforGeeks.
Now that you are done with the prerequisites, you can move on to learning ML (Which is the fun part!!!)
It’s best to start with the basics and then move on to the more complicated stuff. Some of the basic concepts
in ML are:
• Model – A model is a specific representation learned from data by applying some machine learning
algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric features can be
conveniently described by a feature vector. Feature vectors are fed as input to the model. For example, in
order to predict a fruit, there may be features like color, smell, taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model. For the fruit example
discussed in the feature section, the label with each set of input would be the name of the fruit like
apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and their expected outputs (labels), so after training,
we will have a model (hypothesis) that will then map new data to one of the categories trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted
output (label); a worked sketch follows this list.
• Supervised Learning – This involves learning from a training dataset with labeled data using
classification and regression models. This learning process continues until the required level of
performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the underlying structure
in the data in order to learn more and more about the data itself using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data like Unsupervised Learning with a small
amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-
effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error. So the next
action is decided by learning behaviors that are based on the current state and that will maximize the
reward in the future.
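A worked sketch tying the concepts above together (model, features, labels, training, prediction), using an invented two-feature dataset rather than the project's data:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# feature vectors and their target labels
X = [[1, 0], [0, 1], [1, 1], [0, 0], [1, 0], [0, 1]]
y = [1, 0, 1, 0, 1, 0]

# training: learn a model (hypothesis) from labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
model = LogisticRegression().fit(X_train, y_train)

# prediction: map new inputs to one of the categories trained on
print(model.predict(X_test))
print("accuracy:", model.score(X_test, y_test))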
Advantages of Machine Learning:
1. Easily Identifies Trends and Patterns:
Machine Learning can review large volumes of data and discover specific trends and patterns that would
not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand
the browsing behaviors and purchase histories of its users to help cater to the right products, deals, and
reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. No Human Intervention Needed (Automation):
With ML, you don't need to babysit your project every step of the way. Since it means giving machines the
ability to learn, it lets them make predictions and also improve the algorithms on their own. A common
example of this is anti-virus software: it learns to filter new threats as they are recognized. ML is also
good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them make
better decisions. Say you need to make a weather forecast model. As the amount of data you have keeps
growing, your algorithms learn to make more accurate predictions faster.
4. Handling Multi-Dimensional and Multi-Variety Data:
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and
they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it holds
the capability to help deliver a much more personal experience to customers while also targeting the right
customers.
Disadvantages of Machine Learning:
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased, and
of good quality. There can also be times where they must wait for new data to be generated.
2. Time and Resources:
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a
considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean
additional requirements of computing power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the algorithms. You must
also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data
sets small enough to not be inclusive. You end up with biased predictions coming from a biased training
set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders
can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed,
it takes quite some time to recognize the source of the issue, and even longer to correct it.
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991.
This release already included exception handling, functions, and the core data types of list, dict, str and others.
It was also object oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this release were the
functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and
a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a
full garbage collector, and support for Unicode. Python flourished for another 8 years in the versions 2.x
before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), was released. Python 3
is not backwards compatible with Python 2.x. The emphasis in Python 3 had been on the removal of duplicate
programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of
Python: "There should be one -- and preferably only one -- obvious way to do it."
Python
Python is an interpreted high-level programming language for general-purpose programming. Created by Guido
van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably
using significant whitespace.
• Python features a dynamic type system and automatic memory management. It supports multiple
programming paradigms, including object-oriented, imperative, functional and procedural, and has
a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is like PERL and PHP.
• Python is Interactive − you can sit at a Python prompt and interact with the interpreter directly to write
your programs.
• Python also acknowledges that speed of development is important. Readable and terse code is part
of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability
also ties into this. It may be an all but useless metric, but it does say something about how much code
you have to scan, read and/or understand to troubleshoot problems or tweak behaviors. This speed of
development, the ease with which a programmer of other languages can pick up basic Python skills,
and the huge standard library are key to another area where Python excels: all its tools have been quick
to implement, have saved a lot of time, and several of them have later been patched and updated by people
with no Python background - without breaking.
• TensorFlow
TensorFlow is a free and open-source software library for dataflow and differentiable programming across
a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as
neural networks. It is used for both research and production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was released under the
Apache 2.0 open-source license on November 9, 2015.
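A minimal sketch of TensorFlow's differentiable-programming style, using the standard GradientTape API; this is illustrative only, not part of the project code:

import tensorflow as tf

# TensorFlow records operations on tensors and differentiates them automatically
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2 * x       # y = x^2 + 2x
grad = tape.gradient(y, x)   # dy/dx = 2x + 2, i.e. 8.0 at x = 3
print(grad.numpy())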
• NumPy
NumPy is the fundamental package for scientific computing in Python. It provides a fast N-dimensional
array object together with routines for linear algebra, Fourier transforms, and random number generation,
and it underpins most of the other data-science libraries listed here.
• Pandas
Pandas is an open-source Python Library providing high-performance data manipulation and analysis tools
using its powerful data structures. Before Pandas, Python was majorly used for data munging and preparation; it had very
little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish
five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare,
manipulate, model, and analyze. Python with Pandas is used in a wide range of fields, including academic
and commercial domains such as finance, economics, statistics, and analytics.
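A small sketch of the load-prepare-analyze workflow with invented traffic counts:

import pandas as pd

df = pd.DataFrame({
    "hour":     [8, 9, 10, 8, 9, 10],
    "vehicles": [120, 150, 90, 130, 140, 95],
})
print(df["vehicles"].mean())                  # overall average flow
print(df.groupby("hour")["vehicles"].mean())  # average flow per hour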
• Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of
hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts,
the Python and IPython shells, the Jupyter Notebook, web application servers, and four graphical user
interface toolkits. Matplotlib tries to make easy things easy and hard things possible.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined
with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc.,
via an object-oriented interface or via a set of functions familiar to MATLAB users.
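A minimal pyplot sketch (with invented numbers) showing the MATLAB-like interface described above:

import matplotlib.pyplot as plt

hours = [7, 8, 9, 10, 11, 12]
flow = [80, 140, 160, 110, 90, 100]

plt.plot(hours, flow, marker="o")   # MATLAB-like pyplot interface
plt.xlabel("Hour of day")
plt.ylabel("Vehicles per 5 min")
plt.title("Traffic flow by hour")
plt.savefig("flow.png")             # or plt.show() in an interactive session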
• Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent
interface in Python. It is licensed under a permissive simplified BSD license and is distributed under
many Linux distributions, encouraging academic and commercial use.
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was
first released in the year 1991, and today it is a very popular high-level programming language. Its design
philosophy emphasizes code readability, with its notable use of significant whitespace.
There have been several updates in the Python version over the years. The question is how to install Python?
It might be confusing for the beginner who is willing to start learning Python but this tutorial will solve your
query. The latest or the newest version of Python is version 3.7.4 or in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start the installation process of Python, you first need to know your system requirements.
Based on your system type, i.e., operating system and processor, you must download the appropriate Python version.
My system type is a Windows 64-bit operating system. So, the steps below are to install Python version 3.7.4 on
a Windows 7 device, i.e., to install Python 3. The steps on how to install Python on Windows 10, 8, and 7 are
divided into 4 parts to help you understand better.
Download the Correct version into the system.
Step 1: Go to the official site to download and install python using Google Chrome or any other web browser.
OR Click on the following link: https://www.python.org
Step 2: Now, check for the latest and the correct version for your operating system.
Step 3: You can either select the yellow Download Python 3.7.4 for Windows button, or you can scroll
further down and click on the download link for the respective version. Here, we are downloading the most
recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see a different version of python along with the operating system.
• To download Windows 32-bit python, you can select any one from the three options: Windows x86
embeddable zip file, Windows x86 executable installer or Windows x86 web-based installer.
• To download Windows 64-bit python, you can select any one from the three options: Windows x86-64
embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding which version of
Python is to be downloaded, is completed. Now we move ahead with the second part, i.e., installing Python.
Note: To know the changes or updates that are made in the version you can click on the Release Note Option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python installer to begin the installation process.
Step 2: Click on Install Now.
Step 3: After the installation is successful, click on Close.
With these steps, you have successfully and correctly installed Python. Now it is time to verify the installation.
8. SYSTEM DESIGN
8.1 UML DIAGRAMS:
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language
in the field of object-oriented software engineering. The standard is managed, and was created, by the Object
Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer
software. In its current form, UML comprises two major components: a meta-model and a notation. In
the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and
documenting the artifacts of a software system, as well as for business modeling and other non-software
systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling
of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development
process. UML uses mostly graphical notations to express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can develop and
exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher level development concepts such as collaborations, frameworks, patterns and
components.
8.1.1 USE CASE DIAGRAMS:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by
and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality
provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system functions are performed for
which actor. Roles of the actors in the system can be depicted.
9. SAMPLE CODE
9.1 IMPLEMENTATION:
MODULES:
1. Data Collection: In this step, the dataset is collected from the Kaggle site (traffic dataset), and the
composition of the dataset is studied to understand the relationships among the different features, with
plots of the core features and of the entire dataset. The dataset is further split into 2/3 for training and
1/3 for testing the algorithms. Furthermore, to obtain a representative sample, each class in the full
dataset is represented in about the right proportion in both the training and testing datasets, as sketched
below. Various proportions of training and testing data were used in this work.
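A sketch of how such a class-proportional (stratified) 2/3-1/3 split can be produced with scikit-learn; the file name and column names here are hypothetical, not the actual Kaggle schema:

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("traffic.csv")            # hypothetical traffic dataset file
X = df.drop(columns=["traffic_level"])     # hypothetical label column name
y = df["traffic_level"]

# 2/3 train, 1/3 test; stratify keeps each class in about the right proportion
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=42)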
2. Data Preprocessing: The data which was collected might contain missing values that may lead to
inconsistency. To gain better results, the data needs to be pre-processed to improve the efficiency of the
algorithm. The outliers must be removed, and variable conversion needs to be done. To overcome
these issues, we use the map function, as sketched below.
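A minimal preprocessing sketch along these lines, again with hypothetical column names; map() encodes a text category, and a z-score rule drops outliers:

import numpy as np
import pandas as pd

df = pd.read_csv("traffic.csv")                         # hypothetical dataset file
df["speed"] = df["speed"].fillna(df["speed"].median())  # handle missing values

# variable conversion with map(): encode a text category as numbers
df["day_type"] = df["day_type"].map({"weekday": 0, "weekend": 1})

# simple outlier removal: keep speeds within 3 standard deviations
z = (df["speed"] - df["speed"].mean()) / df["speed"].std()
df = df[np.abs(z) <= 3]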
3. Model Selection: Machine learning is about predicting and recognizing patterns and generating suitable
results after understanding them. ML algorithms study patterns in data and learn from them. An ML
model will learn and improve on each attempt. To gauge the effectiveness of a model, it's vital to split
the data into training and test sets first. So, before training our models, we split the data into a training
set, which was 70% of the whole dataset, and a test set, which was the remaining 30%. Then it was
important to apply a selection of performance metrics to the predictions made by our model.
Model accuracy might not be the sole metric to judge how our model performed; the F1 score and
confusion matrix are important metrics to analyse as well, as sketched below. What is important is that
the right performance measures are chosen for the right situations.
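A short sketch of computing those metrics with scikit-learn on made-up test labels and predictions:

from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

y_test = [1, 0, 1, 1, 0, 0, 1, 0]    # true labels of the held-out 30% test set
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]    # model predictions

print("accuracy:", accuracy_score(y_test, y_pred))
print("F1 score:", f1_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))  # rows: true class, columns: predicted class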
4. Predict the Results: The designed system is tested with the test set and the performance is assessed.
Evolution analysis refers to the description and modeling of regularities or trends for objects whose
behavior changes over time. Common metrics calculated from the confusion matrix are precision and
accuracy. The most important features are identified, since these features are used to develop a predictive
model using the CNN and RNN models.
9.2 ALGORITHM:
Convolutional Neural Network (CNN):
The CNN model is intended mainly for processing 2-dimensional data, such as images. A CNN
model consists of an input layer and an output layer, together with several hidden layers, which are
typically convolutional, pooling, and fully connected layers.
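A minimal Keras sketch of such a CNN, with an assumed 32x32 single-channel input and layer sizes chosen only for illustration:

from tensorflow.keras import layers, models

# input layer -> hidden convolutional/pooling/dense layers -> output layer
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),               # 2-dimensional input, e.g. a traffic heat map
    layers.Conv2D(16, (3, 3), activation="relu"),  # convolutional hidden layer
    layers.MaxPooling2D((2, 2)),                   # pooling hidden layer
    layers.Flatten(),
    layers.Dense(32, activation="relu"),           # fully connected hidden layer
    layers.Dense(1),                               # output: predicted traffic value
])
model.compile(optimizer="adam", loss="mse")
model.summary()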
9.3 CODING:
from tkinter import *
from tkinter import messagebox, filedialog
import tkinter
from collections import deque
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # suppress TensorFlow log noise
import time
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto, InteractiveSession
import matplotlib.pyplot as plt
from PIL import Image
physical_devices = tf.config.experimental.list_physical_devices('GPU')  # assumed; the report omits this line
if len(physical_devices) > 0:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
from core.yolov4 import filter_boxes  # load YOLOv4 package to filter boxes which contain vehicles
from core import utils                # assumed imports from the YOLOv4 repo used by the project
from core.config import cfg
import cv2
from deep_sort import nn_matching    # assumed DeepSORT imports
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker  # DeepSORT tracker model to track vehicles in YOLO-processed video frames

main = tkinter.Tk()  # main application window (assumed; the report omits its creation)
main.geometry("1300x1200")

global filename
max_cosine_distance = 0.4
nn_budget = None
nms_max_overlap = 1.0
global encoder
pts = [deque(maxlen=30) for _ in range(9999)]  # assumed motion-trail buffers used by the tracker loop
def loadModel():
    global tracker, session
    model_filename = 'model_data/mars-small128.pb'  # DeepSORT appearance-descriptor model
    # the report omits creating the appearance encoder from model_filename and loading the
    # converted YOLOv4 saved model (saved_model_loaded); both are assumed to happen here
    metric = nn_matching.NearestNeighborDistanceMetric("cosine", max_cosine_distance, nn_budget)  # assumed metric setup
    # initialize tracker
    tracker = Tracker(metric)
    config = ConfigProto()
    config.gpu_options.allow_growth = True
    session = InteractiveSession(config=config)
    text.delete('1.0', END)
def vehicleDetection():
    accuracy = 0
    precision = 0
    filename = filedialog.askopenfilename(initialdir="Videos")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, filename + " loaded\n\n")
    text.update_idletasks()
    model = saved_model_loaded.signatures['serving_default']
    cap = cv2.VideoCapture(filename)
    start_time = time.time()
    while cap.isOpened():
        ret, frame = cap.read()       # assumed; the report omits the frame-read line
        if not ret:
            break
        image = Image.fromarray(frame)
        frame_size = frame.shape[:2]
        # (the report omits the resizing/normalization that produces image_data here)
        batch_data = tf.constant(image_data)
        pred_bbox = model(batch_data)
        for key, value in pred_bbox.items():
            boxes = value[:, :, 0:4]      # assumed unpacking of the YOLO output tensor
            pred_conf = value[:, :, 4:]
        # non-max suppression (assumed, following the standard YOLOv4-DeepSORT pipeline)
        boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
            boxes=tf.reshape(boxes, (tf.shape(boxes)[0], -1, 1, 4)),
            scores=tf.reshape(pred_conf, (tf.shape(pred_conf)[0], -1, tf.shape(pred_conf)[-1])),
            max_output_size_per_class=50, max_total_size=50,
            iou_threshold=0.45, score_threshold=0.50)
        num_objects = valid_detections.numpy()[0]
        bboxes = boxes.numpy()[0]         # assumed; the report omits this line
        bboxes = bboxes[0:int(num_objects)]
        scores = scores.numpy()[0]
        scores = scores[0:int(num_objects)]
        classes = classes.numpy()[0]
        classes = classes[0:int(num_objects)]
        # format bounding boxes from normalized ymin, xmin, ymax, xmax ---> xmin, ymin, width, height
        # store all predictions in one parameter for simplicity when calling functions
        class_names = utils.read_class_names(cfg.YOLO.CLASSES)
        # allowed_classes = list(class_names.values())
        # custom allowed classes (only vehicles are tracked here)
        allowed_classes = ['car', 'truck']
        names = []
        deleted_indx = []
        for i in range(num_objects):
            class_indx = int(classes[i])
            class_name = class_names[class_indx]
            if class_name not in allowed_classes:   # assumed condition implied by the else branch
                deleted_indx.append(i)
            else:
                names.append(class_name)
        names = np.array(names)
        count = len(names)
        cv2.putText(frame, "tracked: {}".format(count), (5, 70), 0, 5e-3 * 200, (0, 255, 0), 2)
        features = encoder(frame, bboxes)  # assumed appearance features; encoder creation is omitted in the report
        detections = [Detection(bbox, score, class_name, feature)
                      for bbox, score, class_name, feature in zip(bboxes, scores, names, features)]
        cmap = plt.get_cmap('tab20b')
        print(scores)
        if accuracy == 0:
            accuracy = scores[0]
        text.update_idletasks()
        tracker.predict()
        tracker.update(detections)
        # update tracks
        for track in tracker.tracks:    # assumed loop implied by the continue statements in the report
            if not track.is_confirmed() or track.time_since_update > 1:
                continue
            bbox = track.to_tlbr()
            class_name = track.get_class()
            center = (int(((bbox[0]) + (bbox[2])) / 2), int(((bbox[1]) + (bbox[3])) / 2))
            pts[track.track_id].append(center)
            thickness = 5
            # center point
            color = [c * 255 for c in cmap((int(track.track_id) % 20) / 20)[:3]]  # assumed per-track color
            for j in range(1, len(pts[track.track_id])):  # assumed loop drawing the motion trail
                if pts[track.track_id][j - 1] is None or pts[track.track_id][j] is None:
                    continue
                cv2.line(frame, (pts[track.track_id][j - 1]), (pts[track.track_id][j]), (color), thickness)
        frame = cv2.resize(frame, (800, 800))
        result = np.asarray(frame)
        cv2.imshow("Traffic Analysis", result)   # assumed display call
        if cv2.waitKey(1) & 0xFF == ord('q'):    # assumed exit condition; the report has a bare break here
            break
    cap.release()
    cv2.destroyAllWindows()
def close():
    main.destroy()

# the font definitions and widget constructions below are assumed; the report omits them
font = ('times', 16, 'bold')
font1 = ('times', 13, 'bold')

title = Label(main, text='Traffic Flow Prediction Using Deep Learning')  # title text assumed
title.config(bg='yellow4', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

upload = Button(main, text="Generate & Load YOLOv4-DeepSort Model", command=loadModel)  # assumed wiring
upload.place(x=50, y=100)
upload.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='yellow4', fg='white')
pathlabel.config(font=font1)
pathlabel.place(x=50, y=150)

markovButton = Button(main, text="Run Traffic Analysis", command=vehicleDetection)  # assumed wiring
markovButton.place(x=50, y=200)
markovButton.config(font=font1)

predictButton = Button(main, text="Exit", command=close)  # assumed wiring
predictButton.place(x=50, y=250)
predictButton.config(font=font1)

text = Text(main, height=15, width=78)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=450, y=100)
text.config(font=font1)

main.config(bg='magenta3')
main.mainloop()
10. SYSTEM TESTING
Types of Testing
1. Unit testing
2. Integration Testing
3. Functional Testing
4. System Testing
5. White Box Testing
6. Black Box Testing
Unit Testing:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and
that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the
testing of individual software units of the application. It is done after the completion of an individual unit before
integration. This is a structural testing that relies on knowledge of its construction and is invasive. Unit tests perform
basic tests at component level and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the documented specifications and
contains clearly defined inputs and expected results.
Integration Testing:
Integration tests are designed to test integrated software components to determine if they actually run as one
program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration
tests demonstrate that although the components were individually satisfactory, as shown by successful unit
testing, the combination of components is correct and consistent. Integration testing is specifically aimed at
exposing the problems that arise from the combination of components.
Functional Testing
Functional tests provide systematic demonstrations that functions tested are available as specified by the business
and technical requirements, system documentation, and user manuals. Organization and preparation of functional
tests is focused on requirements, key functions, or special test cases.
In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and
successive processes must be considered for testing. Before functional testing is complete, additional tests are
identified and the effective value of current tests is determined.
System Testing:
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure
known and predictable results. An example of system testing is the configuration-oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing is a testing in which the software tester has knowledge of the inner workings, structure,
and language of the software, or at least its purpose. It is used to test areas that cannot be reached
from a black-box level.
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language
of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is
a testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs
and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although
it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Field testing will be performed manually, and functional tests will be written in detail.
Test objectives:
Features to be tested:
Acceptance Testing
• User Acceptance Testing is a critical phase of any project and requires significant participation by the
end user. It also ensures that the system meets the functional requirements.
• Test Results: All the test cases mentioned above passed successfully. No defects were encountered.
11. OUTPUT SCREENS
1) Generate & Load YOLOv4-DeepSort Model: using this module we will generate and load the YOLOv4-
DeepSort model.
2) Run Traffic Analysis: using this module we will upload a test video and then apply YOLOv4 to detect
vehicles; the detected vehicle frames will be further analysed by DeepSort to track real vehicles.
In the below screen we are showing the code to load the YOLO and DeepSort models.
SCREEN SHOTS
To run the project, double-click on the 'run.bat' file to get the below screen.
In the above screen, select and upload a 'traffic video' file and then click on the 'Open' button to get the
below output. To get the output, you need to wait for a few seconds.
In the above screen, the video plays slowly while YOLOv4 and DeepSort detect and track traffic, and in
green colour we can see the number of tracked vehicles.
12. CONCLUSION
In this paper, we conduct a comprehensive survey of various deep learning architectures for traffic
prediction. More specifically, we first summarize the existing traffic prediction methods and give a
taxonomy of them. Then, we list the representative results in different traffic prediction tasks,
comprehensively cover publicly available traffic datasets, and conduct a series of experiments to
investigate the performance of existing traffic prediction methods. Finally, some major challenges and
future research directions are discussed. This paper is suitable for participants to quickly understand
traffic prediction and to find the branches they are interested in. It also provides a good reference and
starting point for researchers in this field, which can facilitate the relevant research.
FUTURE ENHANCEMENT:
Finally, further improvements to the proposed model are suggested: for example, enlarging the input
matrix until the accuracy no longer increases, or making modifications such as changing the pooling or
activation function and comparing how the performance changes. Increasing the dimensionality of the
input matrix to 3D would also be a very interesting attempt.
13. REFERENCES
[1] D. T. Hartgen, M. G. Fields, and A. T. Moore, "Gridlock and growth: the effect of traffic congestion
on regional economic performance," Reason Foundation, Los Angeles, CA, 2009.
[2] M. Barth and K. Boriboonsomsin, "Real-World Carbon Dioxide Impacts of Traffic Congestion,"
Transportation Research Record: Journal of the Transportation Research Board, vol. 2058, pp. 163-171, 2008.
[3] M. G. Karlaftis and E. I. Vlahogianni, "Statistical methods versus neural networks in transportation
research: Differences, similarities and some insights," Transportation Research Part C: Emerging
Technologies, vol. 19, pp. 387-399, 2011.
[4] B. L. Smith and M. J. Demetsky, "Traffic flow forecasting: comparison of modeling approaches,"
Journal of Transportation Engineering, vol. 123, pp. 261-266, 1997.
[6] B. L. Smith, B. M. Williams, and R. Keith Oswald, "Comparison of parametric and nonparametric
models for traffic flow forecasting," Transportation Research Part C: Emerging Technologies, vol. 10,
pp. 303-321, 2002.
[7] C. Chen, J. Hu, Q. Meng, and Y. Zhang, "Short-time traffic flow prediction with ARIMA-GARCH
model," in Intelligent Vehicles Symposium (IV), 2011 IEEE, 2011, pp. 607-612.
[8] Y. Zhang, Y. Zhang, and A. Haghani, "A hybrid short-term traffic flow forecasting method based on
spectral analysis and statistical volatility model," Transportation Research Part C: Emerging Technologies,
vol. 43, Part 1, pp. 65-78, 2014.
[9] P. Cai, Y. Wang, G. Lu, P. Chen, C. Ding, and J. Sun, "A spatiotemporal correlative k-nearest neighbor
model for short-term traffic multistep forecasting," Transportation Research Part C: Emerging
Technologies, vol. 62, pp. 21-34, 2016.
[10] F. G. Habtemichael and M. Cetin, "Short-term traffic flow rate forecasting based on identifying similar
traffic patterns," Transportation Research Part C: Emerging Technologies, vol. 66, pp. 61-78, 2016.
[11] Y. Kim, W. Kang, and M. Park, "Application of Traffic State Prediction Methods to Urban
Expressway Network in the City of Seoul," Journal of the Eastern Asia Society for Transportation Studies,
vol. 11, pp. 1885-1898, 2015.
[12] Y. Yin and P. Shang, "Forecasting traffic time series with multivariate predicting method," Applied
Mathematics and Computation, vol. 291, pp. 266-278, 2016.
[13] L. Zhang, Q. Liu, W. Yang, N. Wei, and D. Dong, "An improved k-nearest neighbor model for short-
term traffic flow prediction," Procedia - Social and Behavioral Sciences, vol. 96, pp. 653-662, 2013.
[14] Z. Zheng and D. Su, "Short-term traffic volume forecasting: A k-nearest neighbor approach enhanced
by constrained linearly sewing principle component algorithm," Transportation Research Part C:
Emerging Technologies, vol. 43, Part 1, pp. 143-157, 2014.
[15] C. Goves, R. North, R. Johnston, and G. Fletcher, "Short Term Traffic Prediction on the UK
Motorway Network Using Neural Networks," Transportation Research Procedia, vol. 13, pp. 184-195,
2016.