PROJECT REPORT
ON
“Stock Feature Predictor WebApp & ChatBot”
2020-2021
Deepesh Malhotra [17EJCEC067]
Hardik Gandhi [17EJCEC082]
CERTIFICATE
This is to certify that the work, which is being presented in the project
entitled “Stock Feature Predictor WebApp & ChatBot”, submitted by Mr. Danish
Khan, Ms. Deepakshi Joshi, Mr. Deepesh Malhotra, and Mr. Hardik Gandhi,
students of fourth year (VIII Semester) B.Tech in Electronics and
Communication, in partial fulfilment for the award of the degree of Bachelor
of Technology, is a record of the students' work carried out, and that the
project has not previously formed the basis for the award of any other
degree, diploma, fellowship or any other similar title.
(signature of guide)
Mr. Bhoopesh Kumawat Mr. Vikas Sharma
Project Guide Project Coordinator
Professor, ECE Professor, ECE
CANDIDATE’S DECLARATION
We hereby declare that the work, which is being presented in the Project
Stage I entitled “Stock Feature Predictor WebApp & ChatBot”, in partial
fulfilment for the award of the Degree of “Bachelor of Technology” in
Electronics and Communication, and submitted to the Department of
Electronics and Communication, Jaipur Engineering College and Research
Centre, affiliated to Rajasthan Technical University, is a record of our own
work carried out under the guidance of Mr. Sandeep Vyas, TPO of the
Department of Electronics and Communication.
Danish Khan[17EJCEC062]
ACKNOWLEDGEMENT
CHAPTER INDEX
S. No.  TITLE
        Declaration
        Acknowledgement
        Abstract
1.      INTRODUCTION
2.      REQUIREMENT ANALYSIS
        2.1 System Requirement
        2.3 Hardware Requirements
        2.4 Technology Required
3.      FEASIBILITY STUDY
        3.3 Technical Feasibility
4.      TECHNOLOGY USED
5.      PROJECT ANALYSIS
6.      UML DIAGRAMS
7.      TESTING METHODOLOGY
        7.1 Testing
8.      OUTPUTS/SCREENSHOTS
        CONCLUSION
        FUTURE SCOPE
        REFERENCES
CHAPTER – 1
INTRODUCTION
Modeling and forecasting of the financial market have been attractive
topics to scholars and researchers from various academic fields. The
financial market is an abstract concept where transactions in financial
commodities such as stocks, bonds, and precious metals happen between
buyers and sellers. In the present scenario of the financial market,
especially the stock market, forecasting the trend or the price of stocks
using machine learning techniques and artificial neural networks is among
the most attractive issues to be investigated.
As Francis E. H. Tay and Lijuan Cao explained in their studies, neural
networks are more noise tolerant and more flexible compared with
traditional statistical models. By noise tolerance, one means that neural
networks have the ability to be trained with incomplete and overlapping
data. Flexibility refers to the capability of neural networks to learn
dynamic systems through a retraining process using new data patterns.
Long short-term memory (LSTM) is a recurrent neural network introduced
by Sepp Hochreiter and Jürgen Schmidhuber in 1997. LSTM is designed to
forecast, predict and classify time-series data even when long time lags
separate the vital events. LSTMs have been applied to solve several kinds
of problems; among those, handwriting recognition and speech recognition
made LSTM famous. LSTM has copious advantages compared with traditional
backpropagation neural networks and normal recurrent neural networks. The
constant error backpropagation inside memory blocks gives LSTM the ability
to overcome long time lags in problems similar to those discussed above;
LSTM can handle noise, distributed representations, and continuous values;
and LSTM requires little parameter fine-tuning, working well over a broad
range of parameters such as learning rate, input gate bias, and output
gate bias. The objective of our project can be generalized into two main
parts. We examine the feasibility of LSTM in stock market forecasting by
testing the model with various configurations.
Predicting how the stock market will perform is one of the most difficult
things to do. There are so many factors involved in the prediction:
physical versus psychological factors, rational and irrational behaviour,
and so on. All these aspects combine to make share prices volatile and
very difficult to predict with a high degree of accuracy.
Supply and demand help determine the price of each security, i.e. the
levels at which stock market participants — investors and traders — are
willing to buy or sell.
The common perception of the stock market in society is that it is highly
risky for investment or not suitable for trade, so most people are not
even interested. Understanding the seasonal variance and steady flow of an
index helps both existing and naïve investors make a decision to invest in
the stock/share market.
Deep learning can deal with complex structures easily and extract
relationships that further increase the accuracy of the generated results.
Machine learning has the potential to ease the whole process by
analyzing large chunks of data, spotting significant patterns and
generating a single output that navigates traders towards a particular
decision based on predicted asset prices.
Stock prices are not randomly generated values; instead, they can be
treated as a discrete time series, based on a set of well-defined
numerical data items collected at successive points at regular intervals
of time. Since it is essential to identify a model that can analyze trends
of stock prices with adequate information for decision making, we built a
machine learning model for stock prediction that is capable of both.
Recurrent neural networks (RNNs) have proved to be among the most powerful
models for processing sequential data, and Long Short-Term Memory is one
of the most successful RNN architectures. LSTM introduces the memory cell,
a unit of computation that replaces traditional artificial neurons in the
hidden layer of the network. With these memory cells, networks are able to
effectively associate memories with input remote in time, and are hence
suited to grasping the structure of data dynamically over time with high
prediction capacity.
CHAPTER – 2
REQUIREMENT ANALYSIS
The system should collect accurate data from the NEPSE website in a
consistent manner.
DR5 VRAM
Every 12 hours or so, the data on the disk, RAM, VRAM, CPU cache, etc. of
our allotted virtual machine gets erased.
The art of forecasting stock prices has been a difficult task for many
researchers and analysts. In fact, investors are highly interested in
the research area of stock price prediction. For a good and successful
investment, many investors are keen on knowing the future situation of
the stock market. Good and effective prediction systems for the stock
market help traders, investors, and analysts by providing supportive
information like the future direction of the stock market. In this work,
we present a recurrent neural network (RNN) and Long Short-Term Memory
(LSTM) approach to predict stock market indices.
The stock market runs on predictions. Most people don't believe them, but
still, everyone wants them. The commonest query in the market between two
people is “Market kya lage?” (“What do you think of the market?”). This is
asking for a prediction, even if you doubt that the other person is
capable of making one.
The aim is to gain a steady fortune from the stock market and to help
experts find the most informative indicators for making a better
prediction. The prediction of the market value is of great importance in
maximizing the profit of stock option purchases while keeping the risk
low.
Analysis of stocks using data mining will be useful for new investors who
want to invest in the stock market based on the various factors considered
by the software.
The stock market includes daily activities like Sensex calculation and the
exchange of shares. The exchange provides an efficient and transparent
market for trading in equity, debt instruments and derivatives.
CHAPTER – 3
FEASIBILITY STUDY
Simply put, the stock market cannot be accurately predicted. The future,
like any complex problem, has far too many variables to be predicted. The
stock market is a place where buyers and sellers converge. When there are
more buyers than sellers, the price increases; when there are more sellers
than buyers, the price decreases. So there are factors that cause people
to buy and sell, and they have more to do with emotion than logic. Because
emotion is unpredictable, stock market movements will be unpredictable.
It's futile to try to predict exactly where markets are going; they are,
in effect, designed to be unpredictable.
There are some fundamental financial indicators by which a company's stock
value can be estimated. Some of these indicators and factors are:
Price-to-Earnings (P/E) Ratio, Price-to-Earnings Growth (PEG) Ratio,
Price-to-Sales (P/S) Ratio, Price/Cash Flow (P/CF) Ratio, Price-to-Book
Value (P/BV) Ratio and Debt-to-Equity Ratio (see the sketch at the end of
this section). Some of these parameters are available and accessible on
the web, but not all of them, so we are confined to the variables that are
available to us. The proposed system will not always produce accurate
results, since it does not account for human behaviour. Factors like a
change in the company's leadership, internal matters, strikes, protests,
natural disasters, or a change in authority cannot be taken into account
by the machine when relating them to a change in the stock market. The
objective of the system is to give an approximate idea of where the stock
market might be headed. It does not give a long-term forecast of a stock's
value; there are far too many factors to account for in the long-term
behaviour of a stock, and many parameters may affect it along the way, due
to which long-term forecasting is just not feasible.
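As an aside, these valuation ratios are simple arithmetic once the
underlying figures are known. A minimal sketch (the figures below are
hypothetical, purely for illustration):

# Hypothetical figures, purely for illustration.
price_per_share = 120.0
earnings_per_share = 8.0
sales_per_share = 60.0
book_value_per_share = 40.0
total_debt = 500.0
shareholders_equity = 1000.0

pe_ratio = price_per_share / earnings_per_share        # Price-to-Earnings
ps_ratio = price_per_share / sales_per_share           # Price-to-Sales
pbv_ratio = price_per_share / book_value_per_share     # Price-to-Book Value
debt_to_equity = total_debt / shareholders_equity      # Debt-to-Equity

print(f"P/E: {pe_ratio:.2f}  P/S: {ps_ratio:.2f}  "
      f"P/BV: {pbv_ratio:.2f}  D/E: {debt_to_equity:.2f}")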
CHAPTER – 4
TECHNOLOGY USED
4. FRONT-END TECHNOLOGIES:
4.1 HTML:
HTML is the preferred tool for creating Web pages because it is understood
by all Internet browsers. When using HTML, a block of text is surrounded
with tags that indicate to an Internet browser how the text is to appear
(for example, in boldface or italics). HTML is a collection of
platform-independent styles (indicated by markup tags) that define the
various components of a Web document.
<body>: The body tag is used to enclose all the content a web page has,
from text to links. All of the content that you see rendered in the
browser is contained within this element.
4.2 CSS:
CSS is used along with HTML and JavaScript in most websites to create
user interfaces for web applications and user interfaces for many mobile
applications.
Cascading Style Sheets (CSS) is used to set the style of web pages that
contain HTML elements. It sets the background colour, font-size,
font-family, colour, and other properties of elements in a web page.
CSS is not an overly complex language. But even if you've been writing
CSS for many years, you probably still come across new things: properties
you've never used, values you've never considered, or specification
details you never knew about.
In my research, I come across new bits all the time, so I thought I'd
share some of them in this post. Admittedly, not everything in this post
will have a ton of immediate practical value, but maybe you can mentally
file some of these away for later use.
In CSS, selectors declare which part of the markup a style applies to by
matching tags and attributes in the markup itself.
Selectors may apply to the following:
all elements of a specific type, e.g. the second-level headings h2
4.3 JAVASCRIPT:
JavaScript is a lightweight, interpreted programming language designed for
creating network-centric applications. It is complementary to and
integrated with Java. JavaScript is very easy to implement because it is
integrated with HTML. It is open and cross-platform.
JavaScript is a must for students and working professionals who want to
become great software engineers, especially when they are working in the
web development domain. Some of the key advantages of learning JavaScript:
JavaScript helps you create really beautiful and crazy fast websites. You
can develop your website with a console-like look and feel and give your
users the best graphical user experience.
JavaScript usage has now extended to mobile app development, desktop app
development, and game development. This opens many opportunities for you
as a JavaScript programmer.
Due to high demand, there is plenty of job growth and high pay for those
who know JavaScript. You can navigate over to different job sites to see
what having JavaScript skills looks like in the job market.
A great thing about JavaScript is that you will find tons of frameworks
and libraries already developed, which can be used directly in your
software development to reduce your time to market.
There could be thousands of good reasons to learn JavaScript programming.
But one thing is for sure: to learn any programming language, not only
JavaScript, you just need to code, and code, and code again until you
become an expert.
JavaScript is an object-based scripting language that is lightweight and
cross-platform. JavaScript is not a compiled language but a translated
one: the JavaScript translator (embedded in the browser) is responsible
for translating the JavaScript code for the web browser. As a
multi-paradigm language, JavaScript supports event-driven, functional, and
imperative (including object-oriented and prototype-based) programming
styles. It has APIs for working with text, arrays, dates, regular
expressions, and the DOM, but the language itself does not include any
I/O, such as networking, storage, or graphics facilities; it relies upon
the host environment in which it is embedded to provide these features.
Initially only implemented client-side in web browsers, JavaScript engines
are now embedded in many other types of host software, including
server-side in web servers and databases, and in non-web programs such as
word processors and PDF software, and in runtime environments that make
JavaScript available for writing mobile and desktop applications,
including desktop widgets.
4.4 AJAX:
Ajax uses XHTML for content and CSS for presentation, along with the
Document Object Model and JavaScript for dynamic content display.
Conventional web applications transmit information to and from the server
using synchronous requests: you fill out a form, hit submit, and get
directed to a new page with new information from the server.
With AJAX, when you hit submit, JavaScript will make a request to the
server, interpret the results, and update the current screen. In the purest sense,
the user would never know that anything was even transmitted to the server.
XML is commonly used as the format for receiving server data,
although any format, including plain text, can be used.
Although the X in Ajax stands for XML, JSON is used more than XML
nowadays because of its many advantages, such as being lighter and a part
of JavaScript. Both JSON and XML are used for packaging information in
the Ajax model.
MACHINE LEARNING:
As it is evident from the name, it gives the computer that which makes it
more similar to humans: The ability to learn. Machine learning is
actively being used today, perhaps in many more places than one would
expect.
The study of machine learning tries to deal with this complicated task. In
other words, machine learning is the branch of artificial intelligence that
tries to find an answer to this question: how to make computer learn?
When we say that the machine learns, we mean that the machine is able to
make predictions from examples of desired behaviour or from past
observations and information. The name machine learning was coined in 1959
by Arthur Samuel. A more formal definition of machine learning, by Tom
Mitchell, is: “A computer program is said to learn from experience E with
respect to some class of tasks T and performance measure P, if its
performance at tasks in T, as measured by P, improves with experience E.”
This definition of the tasks with which machine learning is concerned
offers a fundamentally operational definition rather than defining the
field in cognitive terms. The definition also indicates the main goal of
machine learning: the design of such programs.
Model
A model is a specific representation learned from data by applying some
machine learning algorithm. A model is also called a hypothesis.
Feature
A feature is an individual measurable property of our data. A set of
numeric features can be conveniently described by a feature vector.
Feature vectors are fed as input to the model. For example, in order to
predict a fruit, there may be features like colour, smell, taste, etc.
Note: choosing informative, discriminating and independent features is a
crucial step for effective algorithms. We generally employ a feature
extractor to extract the relevant features from the raw data.
Target
A target variable, or label, is the value to be predicted by our model.
For the fruit example discussed in the Feature section, the label for each
set of inputs would be the name of the fruit: apple, orange, banana, etc.
Training
The idea is to give the model a set of inputs (features) and their
expected outputs (labels), so that after training we have a model
(hypothesis) that will map new data to one of the categories it was
trained on.
Prediction
Once our model is ready, it can be fed a set of inputs, for which it will
provide a predicted output (label).
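To make these terms concrete, here is a minimal scikit-learn sketch of the
fruit example (the feature values and labels are invented purely for
illustration):

from sklearn.tree import DecisionTreeClassifier

# Each row is a feature vector: [weight in grams, colour score].
X_train = [[150, 0.9], [170, 0.8], [120, 0.2], [130, 0.3]]
y_train = ["apple", "apple", "orange", "orange"]   # target labels

model = DecisionTreeClassifier()   # the hypothesis to be learned
model.fit(X_train, y_train)        # training

# Prediction: the trained model maps an unseen input to a label.
print(model.predict([[160, 0.85]]))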
Advantages of Machine Learning–
DEEP LEARNING:
If you are just starting out in the field of deep learning or you had some
experience with neural networks some time ago, you may be confused. I
know I was confused initially and so were many of my colleagues and
friends who learned and used neural networks in the 1990s and early
2000s.
The leaders and experts in the field have their own ideas of what deep
learning is, and these specific, nuanced perspectives shed a lot of light
on what deep learning is all about.
What is a Neural Network?
Unlike feedforward neural networks, RNNs can use their internal state
(memory) to process sequences of inputs. This makes them applicable to
tasks such as unsegmented, connected
handwriting recognition or speech recognition. In other neural networks,
all the inputs are independent of each other. But in RNN, all the inputs
are related to each other.
Humans don’t start their thinking from scratch every second. As you
read this essay, you understand each word based on your understanding
of previous words. You don’t throw everything away and start thinking
from scratch again. Your thoughts have persistence.
Recurrent neural networks address this issue. They are networks with
loops in them, allowing information to persist.
Long Short Term Memory networks, usually just called “LSTMs”, are a
special kind of RNN, capable of learning long-term dependencies. They were
introduced by Hochreiter & Schmidhuber (1997), and were refined and
popularized by many people in following work. They work tremendously well
on a large variety of problems, and are now widely used.
The cell state is kind of like a conveyor belt. It runs straight down the
entire chain, with only some minor linear interactions. It’s very easy for
information to just flow along it unchanged. The LSTM does have the
ability to remove or add information to the cell state, carefully regulated
by structures called gates.
Gates are a way to optionally let information through. They are
composed out of a sigmoid neural net layer and a pointwise
multiplication operation. The sigmoid layer outputs numbers between
zero and one, describing how much of each component should be let
through. A value of zero means “let nothing through,” while a value of
one means “let everything through”.
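For reference, the standard LSTM update equations, as commonly written in
the literature (a general formulation, not a configuration specific to our
model), are:

\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
\tilde{C}_t &= \tanh(W_C x_t + U_C h_{t-1} + b_C) && \text{(candidate values)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state update)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(C_t) && \text{(block output)}
\end{aligned}

where \sigma is the sigmoid function and \odot denotes element-wise
multiplication. In terms of the individual gates: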
1. Input gate — discovers which values from the input should be used to
modify the memory. A sigmoid function decides which values to let through
(0 to 1), and a tanh function gives weightage to the values that are
passed, deciding their level of importance (ranging from −1 to 1).
2. Forget gate — decides which details are to be discarded from the
block's memory. A sigmoid function looks at the previous state and the
current input and outputs a number between 0 (forget) and 1 (keep) for
each value in the cell state.
3. Output gate — the input and the memory of the block are used to decide
the output. A sigmoid function decides which values to let through (0 to
1), and a tanh function gives weightage to the values that are passed,
deciding their level of importance (ranging from −1 to 1), which is
multiplied with the output of the sigmoid.
LSTMs were a big step in what we can accomplish with RNNs.
It’s natural to wonder: is there another big step?
A common opinion among researchers is: “Yes! There is a next step and
it’s attention!” The idea is to let every step of an RNN pick information
to look at from some larger collection of information. For example, if
you are using an RNN to create a caption describing an image, it might
pick a part of the image to look at for every word it outputs. In fact,
Xu et al. (2015) do exactly this – it might be a fun starting point if you
want to explore attention!
CHAPTER – 5
PROJECT ANALYSIS
I will be using the historical stock price data for GE for this project.
You can find the data on my Kaggle site. I don't remember the source of
the data, since I downloaded it long back. We can read the data into a
DataFrame as shown below:
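A minimal loading sketch (the file name ge_stock.csv is an assumption;
point it at wherever the dataset is stored):

import pandas as pd

# Hypothetical file name for the GE dataset downloaded from Kaggle.
df_ge = pd.read_csv("ge_stock.csv")
print(df_ge.shape)
print(df_ge.head())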
As you can see, there are around 14060 items, each representing a day's
stock market attributes for the company. Let's see how it looks on a plot:
Figure 5.2: Analyzing the Data
It seems the prices — Open, Close, Low, High — don’t vary too much from
each other except for occasional slight drops in Low price.
import matplotlib.pyplot as plt

plt.figure()
plt.plot(df_ge["Volume"])
plt.title('GE stock volume history')
plt.ylabel('Volume')
plt.xlabel('Days')
plt.show()
There is quite a surge in the number of transactions around the 12000th
day on the timeline, which happens to coincide with the sudden drop in
stock price. Maybe we can go back to that particular date and dig up old
news articles to find out what caused it.
Now let's see if we have any null/NaN values to worry about. As it turns
out, we don't have any null values. Great!
print("checking if any null values are present\n", df_ge.isna().sum())
The data is not normalized, and the range of each column varies,
especially Volume. Normalizing the data helps the algorithm converge, i.e.
find the local/global minimum efficiently. I will use MinMaxScaler from
scikit-learn. But before that, we have to split the dataset into training
and testing datasets. I will also convert the DataFrame to an ndarray in
the process.
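A sketch of that step (the feature columns and the 80/20 split ratio are
assumptions for illustration):

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Assumed feature columns; adjust to the dataset's actual column names.
train_cols = ["Open", "High", "Low", "Close", "Volume"]

# shuffle=False keeps the chronological order, since this is a time series.
df_train, df_test = train_test_split(df_ge, train_size=0.8, test_size=0.2,
                                     shuffle=False)

# Fit the scaler on the training data only, then apply it to both sets;
# .values converts the DataFrames to ndarrays in the process.
scaler = MinMaxScaler()
x_train = scaler.fit_transform(df_train.loc[:, train_cols].values)
x_test = scaler.transform(df_test.loc[:, train_cols].values)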
Converting data to time-series and supervised learning problem
This is quite important and somewhat tricky. This is where knowledge of
LSTM is needed. I will give a brief description of the key concepts needed
here, but I strongly recommend reading Andrej Karpathy's blog, which is
considered one of the best resources on LSTM out there. Or you can watch
Andrew Ng's videos (which, by the way, also mention Karpathy's blog).
Batch size says how many samples of input you want your neural net to see
before updating the weights. Let's say you have 100 samples (input
dataset) and you want to update the weights every time your NN has seen an
input; in that case the batch size would be 1 and the total number of
batches would be 100. Likewise, if you wanted your network to update the
weights after it has seen all the samples, the batch size would be 100 and
the number of batches would be 1. As it turns out, using a very small
batch size reduces the speed of training; on the other hand, using too big
a batch size (like the whole dataset) reduces the model's ability to
generalize to different data, and it also consumes more memory, although
it takes fewer steps to find the minima of your objective function. So you
have to try out various values on your data and find the sweet spot. It's
quite a big topic; we will see how to search for these values in a
somewhat smarter way later.
Time steps define how many units back in time you want your network to
see. For example, if you were working on a character prediction problem
where you have a text corpus to train on and you decide to feed your
network 6 characters at a time, then your time step is 6. In our case we
will be using 60 as the time step, i.e. we will look into 2 months of data
to predict the next day's price. More on this later.
Features is the number of attributes used to represent each time step.
Consider the character prediction example above, and assume that you use a
one-hot encoded vector of size 100 to represent each character; then the
feature size here is 100.
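Putting the three terms together, the input to the network is a 3-D array
of shape (batch_size, time_steps, features). For instance (illustrative
numbers only):

import numpy as np

batch_size, time_steps, features = 20, 60, 5
# One batch: 20 samples, each looking back 60 days, each day described by
# 5 attributes (Open, High, Low, Close, Volume).
batch = np.zeros((batch_size, time_steps, features))
print(batch.shape)   # (20, 60, 5)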
Now that we have the terminology somewhat cleared up and out of the way,
let's convert our stock data into a suitable format. Let's assume, for
simplicity, that we chose 3 as our time step (we want our network to look
back on 3 days of data to predict the price on the 4th day); then we would
form our dataset like this:
Samples 0 to 2 would be our first input, and the Close price of sample 3
would be its corresponding output value, both enclosed by the green
rectangle. Similarly, samples 1 to 3 would be our second input, and the
Close price of sample 4 would be its output value, represented by the blue
rectangle. And so on. So far we have a matrix of shape (3, 5), 3 being the
time step and 5 being the number of features. Now think: how many such
input-output pairs are possible in the image above? 4.
Also combine the batch size with this. Let's assume we choose a batch size
of 2; then input-output pair 1 (green rectangle) and pair 2 (blue
rectangle) would constitute batch one. And so on.
‘y_col_index’ is the index of your output column. Now suppose that after
converting the data into the supervised learning format, as shown above,
you have 41 samples in your training dataset but your batch size is 20;
then you have to trim your training set to remove the odd samples left
out, as sketched below.
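The conversion and trimming helpers referenced above would look roughly
like this (a sketch; the names build_timeseries and trim_dataset are
assumptions based on the description):

import numpy as np

TIME_STEPS = 60
BATCH_SIZE = 20

def build_timeseries(mat, y_col_index):
    # mat is the scaled 2-D ndarray; y_col_index is the index of the
    # output column. Each input is TIME_STEPS consecutive rows, and its
    # label is the y-column value of the row that follows them.
    dim_0 = mat.shape[0] - TIME_STEPS
    x = np.zeros((dim_0, TIME_STEPS, mat.shape[1]))
    y = np.zeros((dim_0,))
    for i in range(dim_0):
        x[i] = mat[i:TIME_STEPS + i]
        y[i] = mat[TIME_STEPS + i, y_col_index]
    return x, y

def trim_dataset(mat, batch_size):
    # Drop the odd samples at the end so that the sample count is an
    # exact multiple of batch_size.
    rows_to_drop = mat.shape[0] % batch_size
    if rows_to_drop > 0:
        return mat[:-rows_to_drop]
    return mat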
Now, using the above functions, let's form our train, validation and test
datasets:
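For instance (continuing the assumptions above; index 3 is assumed to be
the Close column, and the held-out data is split evenly into validation
and test halves):

# 3 = assumed index of the Close column in train_cols.
x_t, y_t = build_timeseries(x_train, 3)
x_t = trim_dataset(x_t, BATCH_SIZE)
y_t = trim_dataset(y_t, BATCH_SIZE)

x_temp, y_temp = build_timeseries(x_test, 3)
# Split the held-out data evenly into validation and test sets.
x_val, x_test_t = np.split(trim_dataset(x_temp, BATCH_SIZE), 2)
y_val, y_test_t = np.split(trim_dataset(y_temp, BATCH_SIZE), 2)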
Creating model
We will be using LSTM for this task, which is a variation of Recurrent Neural
Network.
Creating an LSTM model is as simple as this:
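A sketch of such a model in Keras (the layer sizes, dropout rate and
optimizer are illustrative choices, not the exact configuration used):

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

model = Sequential()
# 100 LSTM units; each input looks back TIME_STEPS days over
# x_t.shape[2] features.
model.add(LSTM(100, input_shape=(TIME_STEPS, x_t.shape[2])))
model.add(Dropout(0.4))
model.add(Dense(20, activation="relu"))
model.add(Dense(1, activation="sigmoid"))   # predicts the scaled Close price
model.compile(loss="mean_squared_error", optimizer="rmsprop")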
Now that you have your model compiled and ready to be trained, train it as
shown below. If you are wondering what values to use for parameters like
epochs, batch size, etc., don't worry; we will see how to figure those out
later.
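A sketch of the training call (the epoch count is an assumption; it has to
be tuned):

history = model.fit(x_t, y_t, epochs=50, batch_size=BATCH_SIZE,
                    shuffle=False, validation_data=(x_val, y_val))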
Training this model (with fine-tuned hyperparameters) gave a best training
error of 3.27e-4 and a best validation error of 3.7e-4. Here is what the
training loss versus validation loss looked like.
CHAPTER – 6
UML DIAGRAMS
6.1 USE CASE DIAGRAM
6.2 SEQUENCE DIAGRAM
6.3 DATA FLOW DIAGRAM
CHAPTER – 7
TESTING METHODOLOGY
7.1 TESTING:
Testing is a set of activities that can be planned in advance and
conducted systematically. A number of testing strategies have been
proposed. All provide the software developer with a template for testing,
and all have the following characteristics:
Testing begins at the component level and works “outward” towards the
integration of the entire computer-based system.
Testing is conducted by the developer of the software and by an
independent test group.
The standard definition of Verification goes like this: "Are we building
the product RIGHT?" That is, verification is a process that makes sure the
software product is developed the right way. The software should conform
to its predefined specifications; as the product development goes through
different stages, an analysis is done to ensure that all required
specifications are met.
Methods and techniques used in Verification and Validation shall be
designed carefully; their planning starts right from the beginning of the
development process. The Verification part of the ‘Verification and
Validation Model’ comes before Validation and incorporates software
inspections, reviews, audits, walkthroughs, buddy checks, etc. in each
phase of verification (every phase of Verification is a phase of the
Testing Life Cycle).
During Verification, the work product (the ready part of the software
being developed and its various documentation) is reviewed/examined
personally by one or more persons in order to find and point out the
defects in it. This process helps in the prevention of potential bugs,
which could cause the failure of the project. Each activity makes sure
that the product is developed the right way, and every requirement, every
specification, design, code, etc. is verified.
Validation is the process of finding out whether the product being built
is right, i.e. whatever software product is being developed, it should do
what the user expects it to do. The software product should functionally
do what it is supposed to do; it should satisfy all the functional
requirements set by the user. Validation is done during or at the end of
the development process in order to determine whether the product
satisfies the specified requirements, through various types of testing
(such as Functional Validation/Testing, Code Validation/Testing,
System/Integration Validation, etc.).
All types of testing methods are basically carried out during the Validation
process. Test plan, test suits and test cases are developed, which are used during
the various phases of Validation process.
Figure 7.1: Verification and validation model
Test case specification has to be done separately for each unit. Test case
specification gives, for each unit to be tested, all test cases, inputs to be used in
the test cases, conditions being tested by the test case, and outputs expected for
those test cases.
Test case specification is a major activity in the testing process.
Careful selection of test cases that satisfy the specified criterion and
approach is essential for proper testing. The test case specification
document gives the plan for testing and is used to evaluate the quality of
the test cases.
With the test cases specified, the next step in the testing process is to
execute them. The steps to be performed to execute the test cases are
given in a test procedure specification, which describes the procedure for
setting up the test environment and the methods and formats for reporting
the results of testing. The test log, test summary report, and error
report are some common outputs of test case execution.
TEST CASES :
The input dataset contains the last five years of data, which is used for
the prediction of future stock movements by identifying relevant
inferences and patterns. Our model, when fed with all of the input
datasets and then automated, resulted in an overall accuracy of 84.67%.
Figure 7.2: Graph of actual vs. predicted stock prediction values
7.3.1 Recovery Testing:
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If
recovery is automatic, re-initialization, checkpointing mechanisms, data
recovery and restart are each evaluated for correctness. If recovery
requires human intervention, the mean time to repair is evaluated to
determine whether it is within acceptable limits.
7.3.2 Security Testing:
Security testing attempts to verify that protection mechanisms built into
a system will in fact protect it from improper penetration.
7.3.3 Stress Testing:
Stress tests are designed to confront programs with abnormal situations.
Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency or volume.
CHAPTER – 8
OUTPUTS / SCREENSHOTS
CONCLUSION
The project is made to ease people's decision-making for the stock market.
Some people are skeptical even about investing in it. This project is very
helpful for supporting the prediction process and for making smart
decisions about investing in the stock market.
As our dataset grows with each passing day, with more and more daily
updates of stock data, our model becomes more robust. Hence, it can be
considered reliable when it comes to investing in the stock market.
FUTURE SCOPE
This is a very applicable project for the future, as people's interest in
the stock market is not going to end anytime soon. Hence, an effective
stock prediction system is a foremost requirement, to help people decide
which shares to sell, which to buy, and when. Although the system is very
promising, a few changes could still be made.
REFERENCES
1. RESEARCH PAPERS
2. BOOKS
1. The Hundred-Page Machine Learning Book, by Andriy Burkov
2. Introduction to Machine Learning with Python: A Guide for Data
Scientists, by Andreas C. Müller and Sarah Guido
3. OTHER RESOURCES
1. https://www.geeksforgeeks.org/machine-learning/
2. https://towardsdatascience.com/machine-learning/home
3. https://techbeacon.com/enterprise-it/moving-targets-testing-software-age-machine-learning
4. https://www.geeksforgeeks.org/best-python-libraries-for-machine-learning/
5. https://en.wikipedia.org/wiki/Machine_learning