CropPredictionUsingMLPythonReport

This document discusses the application of machine learning techniques, particularly Random Forest, for predicting crop yields to enhance agricultural productivity. It outlines the limitations of existing agricultural prediction methods and proposes a mobile application that integrates various data sources and machine learning models for accurate crop yield predictions. The paper emphasizes the importance of data preprocessing and the selection of appropriate algorithms to optimize predictions based on environmental factors.


ABSTRACT

Agriculture is the first and foremost factor important for survival.
Machine learning (ML) offers a crucial perspective for acquiring
real-world, operative solutions to the crop yield problem. The results
obtained from present systems, which rely on manual counting,
climate-smart pest management and satellite imagery, are not very
accurate. This paper focuses mainly on predicting crop yield by applying
various machine learning techniques.

The classifier models used here include Logistic Regression, Naive Bayes
and Random Forest, of which Random Forest provides the maximum accuracy.
The predictions made by these machine learning algorithms will help
farmers decide which crop to grow to obtain the maximum yield, by
considering factors such as temperature, rainfall and area. This bridges
the gap between technology and the agriculture sector.

INTRODUCTION

Agriculture, since its inception, has been the prime and pre-eminent
activity of every culture and civilization throughout the history of
mankind. It is not only an enormous aspect of the growing economy, but
essential for our survival. It is also a crucial sector for the Indian
economy and for the human future, and it contributes an outsized portion
of employment. As time passes, the requirement for production has
increased exponentially. In order to produce in mass quantity, people
are using technology in the wrong way. New hybrid varieties are produced
day by day; however, these varieties do not provide the essential
contents of naturally produced crops. These unnatural techniques spoil
the soil and lead to further environmental harm. Most of these unnatural
techniques are used to avoid losses.

But when the producers of the crops know accurate information on the
crop yield, losses are minimized. Machine learning is a fast-growing
approach that is spreading out and helping every sector make viable
decisions. Most devices nowadays are facilitated by models that are
analyzed before deployment. The main concept is to increase the
throughput of the agriculture sector with machine learning models.

Another factor that affects the prediction is the amount of data given
in the training period, as the number of parameters is comparatively
high. The core emphasis is on precision agriculture, where quality is
ensured over undesirable environmental factors. In order to perform
accurate prediction despite the inconsistent trends in temperature and
rainfall, various machine learning classifiers such as Logistic
Regression, Naive Bayes and Random Forest are applied to find a pattern.
By applying these classifiers, we came to the conclusion that the Random
Forest algorithm provides the most accurate value. The system predicts
crops from a collection of past data: using past information on weather,
temperature and a number of other factors, the prediction is made. The
application we developed runs the algorithm and shows the list of crops
suitable for the entered data, with the predicted yield value.

LITERATURE SURVEY

Aruvansh Nigam, Saksham Garg and Archit Agrawal conducted experiments on
an Indian government dataset, and it has been established that the
Random Forest machine learning algorithm gives the best yield prediction
accuracy. A sequential model, the Simple Recurrent Neural Network,
performs better on rainfall prediction, while LSTM is good for
temperature prediction. The paper puts factors like rainfall,
temperature, season and area together for yield prediction. Results
reveal that Random Forest is the best classifier when all parameters are
combined.

Leo Breiman specializes in the accuracy, strength and correlation of the
random forest algorithm. The random forest algorithm creates decision
trees on different data samples, predicts the data from each subset, and
then by voting gives a better answer for the system. Random Forest uses
the bagging method to train the data. To boost accuracy, the injected
randomness has to minimize the correlation between trees while
maintaining their strength.
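The bagging-and-voting idea described above can be sketched with scikit-learn (the dataset here is synthetic and only stands in for the crop records; the manual vote is reconstructed from the forest's individual trees for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic two-class dataset standing in for the crop records
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Each tree is fit on a different bootstrap sample (bagging);
# the forest combines the trees' predictions.
forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# A hand-rolled majority vote over the individual trees, to show
# where the "voting" in the text comes from
votes = np.stack([tree.predict(X[:5]) for tree in forest.estimators_])
majority = np.round(votes.mean(axis=0)).astype(int)
```

Because each tree sees a different bootstrap sample, the trees are decorrelated, which is exactly the property Breiman identifies as the source of the forest's accuracy.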

Balamurugan implemented crop yield prediction using only the random
forest classifier. Various features like rainfall, temperature and
season were taken into account to predict the crop yield. Other machine
learning algorithms were not applied to the datasets; with no other
algorithms, comparison and quantification were missing, so the work was
unable to identify the apt algorithm.

Mishra theoretically described various machine learning techniques that
can be applied in various forecasting areas. However, the work does not
implement any algorithms and thus cannot provide a clear insight into
the practicality of the proposed approach.

Dr. Y. Jeevan Nagendra Kumar concluded that machine learning algorithms
can predict a target/outcome by using supervised learning. The paper
focuses on supervised learning techniques for crop yield prediction. To
get the specified outputs, it needs to generate an appropriate function
from a set of variables that maps the input variables to the target
output. The paper conveys that the predictions can be made by the Random
Forest ML algorithm, which attains the crop prediction with the most
accurate value while considering the least number of models.

PROJECT DESCRIPTION

Data preprocessing is a method used to convert raw data into a clean
data set. The data are gathered from different sources and collected in
a raw format that is not feasible for analysis. By applying techniques
such as replacing missing and null values, we can transform the data
into an understandable format. The final step of data preprocessing is
the splitting of training and testing data. The data usually tend to be
split unequally, because training the model requires as many data points
as possible. The training dataset is the initial dataset used to train
the ML algorithms to learn and produce the right predictions (here, 80%
of the dataset is taken as the training dataset). Fig. 1 shows a few
rows of the preprocessed data.
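The preprocessing steps described above can be sketched as follows; the column names and values are illustrative stand-ins for the actual dataset shown in Fig. 1:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative raw records with one missing rainfall value
df = pd.DataFrame({
    "temperature": [25.2, 31.0, 22.4, 28.9],
    "rainfall":    [120.0, None, 95.5, 140.2],
    "area":        [1.5, 2.0, 0.8, 3.1],
    "label":       ["rice", "maize", "rice", "cotton"],
})

# Replace missing/null values, here with the column mean
df["rainfall"] = df["rainfall"].fillna(df["rainfall"].mean())

# Final step: 80/20 split into training and testing data
X = df[["temperature", "rainfall", "area"]]
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```

With four rows, the 80/20 split leaves three rows for training and one for testing; on the real dataset the same call produces the 80% training portion described above.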

There are a lot of factors that affect the yield of any crop and its
production. These are basically the features that help in predicting the
production of any crop over the year. In this paper we include factors
like temperature, rainfall, area, humidity and wind speed.

Before deciding on an algorithm to use, we first need to evaluate and
compare the candidates, then choose the one that best fits this specific
dataset. Machine learning gives a practical solution to the crop yield
problem, and there are many machine learning algorithms used for
predicting crop yield. In this paper we include the following algorithms
for selection and accuracy comparison:

Logistic Regression:- Logistic regression is a supervised classification
algorithm used to predict the probability of a target variable. The
nature of the target (dependent) variable is dichotomous, meaning there
are only two possible classes. When the logistic regression algorithm is
applied to our dataset, it provides an accuracy of 87.8%.

Naive Bayes:- The Naive Bayes classifier assumes that the presence of a
particular feature in a class is unrelated to the presence of any other
feature. A Naive Bayes model is easy to build and particularly useful
for very large data sets. Along with simplicity, Naive Bayes is known to
outperform even highly sophisticated classification methods. It provides
an accuracy of 91.50%.

Random Forest:- Random Forest has the ability to analyze crop growth in
relation to current climatic conditions and biophysical change. The
random forest algorithm creates decision trees on different data
samples, predicts the data from each subset, and then by voting gives a
better solution for the system. Random Forest uses the bagging method to
train the data, which increases the accuracy of the result.

SPECIFIC REQUIREMENT

EXISTING SYSTEM

The existing systems for agricultural crop prediction involve a
combination of traditional methods and modern technologies. These
systems aim to forecast crop yields, identify potential risks, and help
farmers make informed decisions. Here are some key components of
existing agricultural crop prediction systems:

1. Data Collection: Historical Data: Analysis of historical data,
including past crop yields, weather patterns, and soil conditions, helps
in understanding trends and patterns over time. Field Surveys: Field
surveys are conducted to gather real-time information about crop
conditions, pest infestations, and other factors that may affect yields.

2. Weather Forecasting: Integration of weather data is crucial for
predicting the impact of climate conditions on crop growth. Modern
systems leverage advanced weather forecasting models to provide accurate
and timely information.

3. Remote Sensing: Satellite imagery and remote sensing technologies are
used to monitor crop health, detect diseases, and assess vegetation
indices. This data aids in identifying areas with potential yield
variations.

4. Machine Learning and AI: Machine learning algorithms analyze large
datasets to identify patterns and correlations between various factors
affecting crop yields. Predictive models are trained on historical data
to forecast future yields based on different scenarios.

5. IoT (Internet of Things): IoT devices, such as sensors and drones,
are deployed in the field to collect real-time data on soil moisture,
temperature, and other environmental conditions. This information is
then used to optimize irrigation, fertilization, and other farming
practices.

PROPOSED SYSTEM

Our proposed system is a mobile application which predicts the name of
the crop as well as calculates its corresponding yield. The name of the
crop is determined by several features like temperature, humidity, wind
speed and rainfall, and the yield is determined by the area and
production. In this paper, the Random Forest classifier is used for
prediction; it attains the crop prediction with the most accurate
values.
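A minimal sketch of the two steps described above, under illustrative feature values and crop names (the real application uses the trained model and actual records):

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative training records: [temperature, humidity, wind_speed, rainfall]
X_train = [[26.0, 80.0, 3.2, 200.0],
           [31.5, 45.0, 5.1, 60.0],
           [24.0, 85.0, 2.8, 220.0],
           [33.0, 40.0, 6.0, 50.0]]
y_train = ["rice", "millet", "rice", "millet"]

# Step 1: the classifier predicts the crop name from the features
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
crop = clf.predict([[25.5, 82.0, 3.0, 210.0]])[0]

# Step 2: yield is determined by the area and production, as stated above
production_tonnes, area_hectares = 12.0, 4.0
yield_per_hectare = production_tonnes / area_hectares  # 12.0 / 4.0 = 3.0
```

The mobile application wraps these two steps behind a form where the farmer enters the environmental values and the area.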

Designing a system for agricultural crop prediction involves integrating
various technologies and data sources to provide accurate and timely
predictions. Here's a proposed system that incorporates key components
for effective crop prediction:

1. Data Collection: Satellite Imagery: Utilize satellite data to monitor
crop health, identify anomalies, and assess environmental conditions.
Weather Stations: Integrate real-time weather data, including
temperature, humidity, precipitation, and wind speed. Soil Sensors:
Deploy soil sensors to collect information on soil moisture, nutrient
levels, and other relevant soil characteristics. Historical Data:
Include past crop yields, growth patterns, and environmental conditions
for training machine learning models.

2. Machine Learning Models: Crop Classification Models: Train machine
learning models to classify satellite imagery and identify different
crops in a given area. Yield Prediction Models: Develop models to
predict crop yield based on historical data, current environmental
conditions, and satellite imagery. Pest and Disease Prediction Models:
Implement models that predict the likelihood of pest and disease
outbreaks based on weather and soil conditions.

3. GIS Integration: Geographical Information System (GIS): Use GIS for
spatial analysis and mapping of crop distribution, allowing for a better
understanding of regional variations.

4. Mobile Apps and Web Platforms: User-Friendly Interfaces: Develop
intuitive mobile apps or web platforms to provide farmers with easy
access to crop predictions and recommendations. Real-Time Alerts:
Implement push notifications or alerts to notify farmers of potential
issues, such as adverse weather conditions or emerging pests.

5. IoT Devices: Smart Sensors: Install IoT devices and smart sensors in
the field to collect real-time data on temperature, humidity, and other
environmental factors. Automated Irrigation Systems: Integrate systems
that automate irrigation based on soil moisture levels and weather
predictions.

REQUIREMENT ANALYSIS

HARDWARE REQUIREMENTS

 Processor: Intel dual core and above
 Clock speed: 3.0 GHz
 RAM size: 512 MB
 Hard disk capacity: 400 GB
 Monitor type: 15 inch color monitor
SOFTWARE REQUIREMENTS

 Operating System: Windows XP, Windows 7, Windows 8, Windows 10
 Application: HTML, CSS, JS, Python, Flask
 Browser: Google Chrome, Firefox
 Database: Google Firestore
 Documentation: MS-Office

Software Requirement Specification

1. Flask: Flask is a lightweight ("micro") Python web framework that
   enables rapid development of secure and maintainable websites. It
   provides routing, request handling and Jinja2 templating out of the
   box, and leaves components such as the database layer to extensions.
2. Python: Python is a high-level programming language used for a wide
   range of purposes, including web development. It is known for its
   ease of use, simplicity, and versatility.
3. HTML: Hypertext Markup Language (HTML) is the standard markup
   language used to create web pages. It provides a structure for
   content on the internet, allowing developers to define and organize
   the various elements on a webpage.
4. CSS: Cascading Style Sheets (CSS) is a language used for describing
   the presentation of a document written in HTML. It provides a way to
   add style and design to a webpage, including colors, fonts, and
   layouts.
5. JavaScript: JavaScript (JS) is a programming language used primarily
   for developing interactive and dynamic front-end web applications.
   It allows for the creation of responsive and user-friendly websites.
6. DBSql: DBSql is a SQL database system that provides a flexible and
   scalable solution for storing and retrieving data. It is designed
   for handling large volumes of data and provides high availability
   and automatic scaling.

In summary, the project requires the use of standard web development
technologies such as HTML, CSS, and JavaScript, as well as the Python
programming language and the Flask web framework. Additionally, DBSql is
used to provide a scalable and efficient data storage solution. All of
these technologies are essential for building a modern and functional
web application that meets the needs of users.
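To illustrate how Flask ties these pieces together, here is a minimal route sketch; the route name and form field are illustrative, and the real application (Main.py, listed later in this report) feeds such values into the trained model:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # In the real app, the submitted form values would be passed
    # to the trained crop-prediction model
    temperature = float(request.form.get("temperature", 0))
    return {"temperature_received": temperature}

if __name__ == "__main__":
    app.run(debug=True)
```

Returning a dict from a Flask view serializes it to JSON automatically, which is convenient for a mobile front end.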

SYSTEM ANALYSIS
System analysis is the most essential part of the development of the project.
The analyst has to understand the functions and concepts in detail before
designing the appropriate computer-based system. He has to carry out a
customary process that includes the following steps:
• Requirement specification
• Preliminary investigation
• Feasibility study
• Detailed investigation
• Design and coding
• Testing
• Implementation

System engineering and analysis encompasses requirement gathering at the
system level, with a small amount of top-level design and analysis. This
process of analyzing and gathering requirements is known as the software
requirement specification (SRS). The requirement-gathering process
focuses especially on software. The preliminary investigation,
feasibility study and detailed investigation allow the analyst to
comprehend the full scope of this project. Soon after testing,
implementation of the developed system is followed by training.

FEASIBILITY STUDY
A feasibility study is a high-level capsule version of the entire system
analysis and design process. The study begins by clarifying the problem
definition; feasibility is determining whether the system is worth
doing. Once an acceptable problem definition has been generated, the
analyst develops a logical model of the system, and a search for
alternatives is analyzed carefully. There are three parts to a
feasibility study.

Operational Feasibility

Operational feasibility is the measure of how well a proposed system
solves the problems, takes advantage of the opportunities identified
during scope definition, and satisfies the requirements identified in
the requirements analysis phase of system development. The operational
feasibility assessment focuses on the degree to which the proposed
development project fits in with the existing business environment and
objectives with regard to development schedule, delivery date, corporate
culture and existing business processes. To ensure success, desired
operational outcomes must be imparted during design and development.
These include such design-dependent parameters as reliability,
maintainability, supportability, usability, producibility, disposability,
sustainability, affordability and others. These parameters are required to be
considered at the early stages of design if desired operational behaviors are
to be realized. A system design and development requires appropriate and
timely application of engineering and management efforts to meet the
previously mentioned parameters. A system may serve its intended purpose
most effectively when its technical and operating characteristics are
engineered into the design. Therefore, operational feasibility is a critical
aspect of systems engineering that needs to be an integral part of the early
design phases.

Technical Feasibility

This involves questions such as whether the technology needed for the
system exists, how difficult it will be to build, and whether the firm
has enough experience using that technology. The assessment is based on
an outline design of system requirements in terms of input, processes,
output, fields, programs and procedures. This can be quantified in terms
of volume of data, trends, and frequency of updating, in order to give
an introduction to the technical system. The application has been
developed on the Windows XP platform with a configuration of 1 GB RAM on
an Intel Pentium Dual Core processor, which is technically feasible. The
technical feasibility assessment is focused on gaining an understanding
of the present technical resources of the organization and their
applicability to the expected needs of the proposed system. It is an
evaluation of the hardware and software and how they meet the needs of
the proposed system.

Economic Feasibility

Economic feasibility establishes the cost-effectiveness of the proposed
system: if the benefits do not outweigh the costs, it is not worth going
ahead. In today's fast-paced world there is a great need for online
facilities of this kind, so the benefits of this project in the current
scenario make it economically feasible. The purpose of the economic
feasibility assessment is to determine the positive economic benefits
the proposed system will provide to the organization. It includes
identification and quantification of all expected benefits, typically
via a cost/benefit analysis.

DESIGN
Introduction:

Design is the first step in the development phase. It applies techniques
and principles for the purpose of defining a device, a process or a
system in sufficient detail to permit its physical realization.

Once the software requirements have been analyzed and specified,
software design involves the technical activities (design, coding,
implementation and testing) that are required to build and verify the
software.
The design activities are of main
importance in this phase, because in this activity, decisions
ultimately affecting the success of the software implementation and
its ease of maintenance are made. These decisions have the final
bearing upon reliability and maintainability of the system. Design is
the only way to accurately translate the customer’s requirements
into finished software or a system.

Design is the place where quality is fostered in development. Software
design is a process through which requirements are translated into a
representation of software. Software design is conducted in two steps;
preliminary design is concerned with the transformation of requirements
into the data and software architecture.

UML Diagrams:

Actor:
A coherent set of roles that users of use cases play when interacting
with the use cases.

Use case:
A description of a sequence of actions, including variants, that a
system performs and that yields an observable result of value to an
actor.
UML stands for Unified Modeling Language. UML is a language for
specifying, visualizing and documenting the system. This is the step
while developing any product after analysis. The goal from this is to
produce a model of the entities involved in the project which later
need to be built. The representation of the entities that are to be
used in the product being developed need to be designed.

There are various kinds of diagrams used in software design. They are as
follows:
Use case Diagram
Sequence Diagram
Collaboration Diagram
Activity Diagram
Statechart Diagram

Use case Diagrams:

Use case diagrams model behavior within a system and help the developers
understand what the users require. The stick man represents what is
called an actor. Use case diagrams can be useful for getting an overall
view of the system and clarifying who can do what, and more importantly
what they cannot do. A use case diagram consists of use cases and actors
and shows the interactions between them.

 The purpose is to show the interactions between the use case and
actor.
 To represent the system requirements from the user's perspective.
 An actor could be the end-user of the system or an external system.
Use case Diagram
A use case is a description of a set of sequences of actions.
Graphically it is rendered as an ellipse with a solid line, including
only its name. A use case diagram is a behavioral diagram that shows a
set of use cases and actors and the associations between them. An actor
represents a real-world object. Primary actor: Sender; secondary actor:
Receiver.

Use case diagram (summary): the Admin can add new staff and view staff,
users and reports; the Staff can view plans and their profile; the
Customer can view their profile, run crop predictions and view reports.

Sequence Diagram

Sequence diagrams and collaboration diagrams are together called
interaction diagrams. An interaction diagram shows an interaction,
consisting of a set of objects and their relationships, including the
messages that may be dispatched among them. A sequence diagram is an
interaction diagram that emphasizes the time ordering of messages.
Graphically, a sequence diagram is a table that shows objects arranged
along the X-axis and messages ordered in increasing time along the
Y-axis.

Data Flow Diagram

DFD Level 0: the Admin adds new staff to the Crop Detection system.
DFD Level 1: the Admin views users and staff in the Crop Detection
system.
DFD Level 2: the Admin views reports from the Crop Detection system.

DATA FLOW DIAGRAMS:

The DFD takes an input-process-output view of a system: data objects
flow into the software, are transformed by processing elements, and
resultant data objects flow out of the software. Data objects are
represented by labeled arrows, and transformations are represented by
circles, also called bubbles. The DFD is presented in a hierarchical
fashion: the first data flow model represents the system as a whole, and
subsequent DFDs refine the context diagram (level 0 DFD), providing
increasing detail with each subsequent level.

The DFD enables the software engineer to develop models of the
information domain and functional domain at the same time. As the DFD is
refined into greater levels of detail, the analyst performs an implicit
functional decomposition of the system. At the same time, the DFD
refinement results in a corresponding refinement of the data as it moves
through the processes that embody the application.

In a context-level DFD for the system, the primary external entities
produce information for use by the system and consume information
generated by the system. The labeled arrows represent data objects or
object hierarchies.

RULES FOR DFD:

 Fix the scope of the system by means of context diagrams.
 Organize the DFD so that the main sequence of actions reads left to
right and top to bottom.
 Identify all inputs and outputs.
 Identify and label each process internal to the system with rounded
circles.
 A process is required for all data transformations and transfers.
Therefore, never connect a data store to a data source, a destination,
or another data store with just a data flow arrow.
 Do not indicate hardware and ignore control information.
 Make sure the names of the processes accurately convey everything the
process does.
 There must be no unnamed process.
 Indicate external sources and destinations of data with squares.
 Number each occurrence of repeated external entities.
 Identify all data flows for each process step, except simple record
retrievals.
 Label the data flow on each arrow.
 Use the detail flow arrow to indicate data movements.

E-R Diagrams:

The Entity-Relationship (ER) model was originally proposed by Peter Chen
in 1976 [Chen76] as a way to unify the network and relational database
views. Simply stated, the ER model is a conceptual data model that views
the real world as entities and relationships. A basic component of the
model is the Entity-Relationship diagram, which is used to visually
represent data objects. Since Chen wrote his paper the model has been
extended, and today it is commonly used for database design. For the
database designer, the utility of the ER model is:

 It maps well to the relational model. The constructs used in the ER
model can easily be transformed into relational tables.
 It is simple and easy to understand with a minimum of training.
Therefore, the model can be used by the database designer to communicate
the design to the end user.
 In addition, the model can be used as a design plan by the database
developer to implement a data model in specific database management
software.
Connectivity and Cardinality

The basic types of connectivity for relations are one-to-one,
one-to-many, and many-to-many. A one-to-one (1:1) relationship is when
at most one instance of entity A is associated with one instance of
entity B. For example, employees in the company are each assigned their
own office: for each employee there exists a unique office, and for each
office there exists a unique employee.

A one-to-many (1:N) relationship is when for one instance of entity A
there are zero, one, or many instances of entity B, but for one instance
of entity B there is only one instance of entity A. An example of a 1:N
relationship: a department has many employees, but each employee is
assigned to one department.

A many-to-many (M:N) relationship, sometimes called non-specific, is
when for one instance of entity A there are zero, one, or many instances
of entity B, and for one instance of entity B there are zero, one, or
many instances of entity A. The connectivity of a relationship describes
the mapping of associated entity instances.

ER Notation

There is no standard for representing data objects in ER diagrams; each
modeling methodology uses its own notation. The original notation used
by Chen is widely used in academic texts and journals but rarely seen in
either CASE tools or publications by non-academics. Today, a number of
notations are used; among the more common are Bachman, crow's foot, and
IDEF1X.

All notational styles represent entities as rectangular boxes and
relationships as lines connecting boxes. Each style uses a special set
of symbols to represent the cardinality of a connection. The notation
used in this document is from Martin. The symbols used for the basic ER
constructs are:

 Entities are represented by labelled rectangles. The label is the
name of the entity. Entity names should be singular nouns.
 Relationships are represented by a solid line connecting two
entities. The name of the relationship is written above the line.
Relationship names should be verbs.
 Attributes, when included, are listed inside the entity rectangle.
Attributes which are identifiers are underlined. Attribute names should
be singular nouns.
 Cardinality of many is represented by a line ending in a crow's foot.
If the crow's foot is omitted, the cardinality is one.
 Existence is represented by placing a circle or a perpendicular bar
on the line. Mandatory existence is shown by the bar (which looks like a
1) next to the entity when an instance is required. Optional existence
is shown by placing a circle next to the entity that is optional.
Pages in the application: Home Page (Index), About Page, Services,
Gallery, Admin Login Page, Staff Login, User Login, New User Page,
Contact Page, Admin Main Page, New Staff Page, Admin View Users Page,
Admin View Staffs Page, Admin View Contacts Page, User View Main Page,
User View Profile Page, User Make Prediction Page, User View Reports
Page, Staff Main Page, Staff View Profile, Staff View Users, Staff View
Reports.
Code :

Index.html

{% extends 'commonheader.html' %}

{% block content %}
<html lang="en">
<head>
  <meta charset="UTF-8">
</head>
<body>
<div>
  <div id="header-carousel" class="carousel slide carousel-fade" data-bs-ride="carousel">
    <div class="carousel-inner">
      <div class="carousel-item active">
        <img class="w-100" src="/static/img/Pic1.jpg" alt="Image">
        <div class="carousel-caption d-flex flex-column align-items-center justify-content-center">
          <div class="p-3" style="max-width: 900px;">
            <h5 class="text-white text-uppercase mb-3 animated slideInDown">AgriCrop Prediction</h5>
            <h1 class="display-1 text-white mb-md-4 animated zoomIn">Creative & Innovative Digital Solution</h1>
          </div>
        </div>
      </div>
      <div class="carousel-item">
        <img class="w-100" src="/static/img/Pic2.jpg" alt="Image">
        <div class="carousel-caption d-flex flex-column align-items-center justify-content-center">
          <div class="p-3" style="max-width: 900px;">
            <h5 class="text-white text-uppercase mb-3 animated slideInDown">AgriCrop Prediction</h5>
            <h1 class="display-1 text-white mb-md-4 animated zoomIn">Creative & Innovative Digital Solution</h1>
          </div>
        </div>
      </div>
    </div>
    <button class="carousel-control-prev" type="button" data-bs-target="#header-carousel" data-bs-slide="prev">
      <span class="carousel-control-prev-icon" aria-hidden="true"></span>
      <span class="visually-hidden">Previous</span>
    </button>
    <button class="carousel-control-next" type="button" data-bs-target="#header-carousel" data-bs-slide="next">
      <span class="carousel-control-next-icon" aria-hidden="true"></span>
      <span class="visually-hidden">Next</span>
    </button>
  </div>
</div>
</body>
</html>
{% endblock %}
Main.py

import datetime
import io
import pickle
import random

import firebase_admin
import numpy as np
import pandas as pd
import requests
import torch
from firebase_admin import credentials, firestore
from flask import Flask, render_template, request, redirect, session
from markupsafe import Markup
from PIL import Image
from torchvision import transforms

import config
from utils.disease import disease_dic
from utils.fertilizer import fertilizer_dic
from utils.model import ResNet9

cred = credentials.Certificate("key.json")
firebase_admin.initialize_app(cred)

app = Flask(__name__)
app.secret_key = "AgriCrop@12345"

model = pickle.load(open('model.pkl', 'rb'))
df = pd.read_csv('plant(IBM - Z).csv')
crops = df['label'].unique()

disease_classes = ['Apple___Apple_scab',
                   'Apple___Black_rot',
                   'Apple___Cedar_apple_rust',
                   'Apple___healthy',
                   'Blueberry___healthy',
                   'Cherry_(including_sour)___Powdery_mildew',
                   'Cherry_(including_sour)___healthy',
                   'Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot',
                   'Corn_(maize)___Common_rust_',
                   'Corn_(maize)___Northern_Leaf_Blight',
                   'Corn_(maize)___healthy',
                   'Grape___Black_rot',
                   'Grape___Esca_(Black_Measles)',
                   'Grape___Leaf_blight_(Isariopsis_Leaf_Spot)',
                   'Grape___healthy',
                   'Orange___Haunglongbing_(Citrus_greening)',
                   'Peach___Bacterial_spot',
                   'Peach___healthy',
                   'Pepper,_bell___Bacterial_spot',
                   'Pepper,_bell___healthy',
                   'Potato___Early_blight',
                   'Potato___Late_blight',
                   'Potato___healthy',
                   'Raspberry___healthy',
                   'Soybean___healthy',
                   'Squash___Powdery_mildew',
                   'Strawberry___Leaf_scorch',
                   'Strawberry___healthy',
                   'Tomato___Bacterial_spot',
                   'Tomato___Early_blight',
                   'Tomato___Late_blight',
                   'Tomato___Leaf_Mold',
                   'Tomato___Septoria_leaf_spot',
                   'Tomato___Spider_mites Two-spotted_spider_mite',
                   'Tomato___Target_Spot',
                   'Tomato___Tomato_Yellow_Leaf_Curl_Virus',
                   'Tomato___Tomato_mosaic_virus',
                   'Tomato___healthy']

# Load the trained plant-disease classifier (CPU inference)
disease_model_path = 'models/plant_disease_model.pth'
disease_model = ResNet9(3, len(disease_classes))
disease_model.load_state_dict(torch.load(
    disease_model_path, map_location=torch.device('cpu')))
disease_model.eval()

# Load the trained Random Forest crop-recommendation model
crop_recommendation_model_path = 'models/RandomForest.pkl'
crop_recommendation_model = pickle.load(
    open(crop_recommendation_model_path, 'rb'))


def weather_fetch(city_name):
    """
    Fetch and return the temperature and humidity of a city.
    :params: city_name
    :return: (temperature, humidity), or None if the city is not found
    """
    api_key = config.weather_api_key
    base_url = "http://api.openweathermap.org/data/2.5/weather?"
    complete_url = base_url + "appid=" + api_key + "&q=" + city_name
    response = requests.get(complete_url)
    x = response.json()
    if x["cod"] != "404":
        y = x["main"]
        temperature = round((y["temp"] - 273.15), 2)  # Kelvin to Celsius
        humidity = y["humidity"]
        return temperature, humidity
    else:
        return None


def predict_image(img, model=disease_model):
    """
    Transform an image to a tensor and predict the disease label.
    :params: img (raw image bytes)
    :return: prediction (string)
    """
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.ToTensor(),
    ])
    image = Image.open(io.BytesIO(img))
    img_t = transform(image)
    img_u = torch.unsqueeze(img_t, 0)
    # Get predictions from model
    yb = model(img_u)
    # Pick index with highest probability
    _, preds = torch.max(yb, dim=1)
    # Retrieve the class label
    prediction = disease_classes[preds[0].item()]
    return prediction


@app.route('/disease-predict', methods=['GET', 'POST'])
def disease_prediction():
    if request.method == 'POST':
        if 'file' not in request.files:
            return redirect(request.url)
        file = request.files.get('file')
        if not file:
            return render_template('userdiseasepredictions.html')
        try:
            img = file.read()
            prediction = predict_image(img)
            prediction = Markup(str(disease_dic[prediction]))
            print("Prediction : ", prediction)
            return render_template('userdiseasepredictions.html',
                                   prediction=prediction)
        except Exception:
            pass
    return render_template('userdiseasepredictions.html')


@app.route('/fertilizer-predict')
def fertilizer_predict():
    try:
        return render_template('userfertilizerpredictions.html',
                               recommendation="")
    except Exception as e:
        return str(e)


@app.route('/fertilizer-predict1', methods=['POST'])
def fert_recommend1():
    crop_name = str(request.form['cropname'])
    N = int(request.form['nitrogen'])
    P = int(request.form['phosphorous'])
    K = int(request.form['pottasium'])
    # ph = float(request.form['ph'])

    # Look up the ideal N/P/K values for the chosen crop
    df = pd.read_csv('Data/fertilizer.csv')
    nr = df[df['Crop'] == crop_name]['N'].iloc[0]
    pr = df[df['Crop'] == crop_name]['P'].iloc[0]
    kr = df[df['Crop'] == crop_name]['K'].iloc[0]

    n = nr - N
    p = pr - P
    k = kr - K

    # The nutrient with the largest absolute deviation drives the advice
    temp = {abs(n): "N", abs(p): "P", abs(k): "K"}
    max_value = temp[max(temp.keys())]
    if max_value == "N":
        key = 'NHigh' if n < 0 else 'Nlow'
    elif max_value == "P":
        key = 'PHigh' if p < 0 else 'Plow'
    else:
        key = 'KHigh' if k < 0 else 'Klow'

    response = Markup(str(fertilizer_dic[key]))
    return render_template('userfertilizerpredictions.html',
                           recommendation=response)


@app.route('/crop-predict', methods=['POST'])
def crop_prediction():
    N = int(request.form['nitrogen'])
    P = int(request.form['phosphorous'])
    K = int(request.form['pottasium'])
    ph = float(request.form['ph'])
    rainfall = float(request.form['rainfall'])
    # state = request.form.get("stt")
    city = request.form.get("city")

    final_prediction = "None"
    temperature, humidity = '', ''
    weather = weather_fetch(city)
    if weather is not None:
        temperature, humidity = weather
        data = np.array([[N, P, K, temperature, humidity, ph, rainfall]])
        my_prediction = crop_recommendation_model.predict(data)
        final_prediction = my_prediction[0]

    # Persist the prediction in Firestore against the logged-in user
    now = datetime.datetime.now()
    date_time = now.strftime("%m/%d/%Y, %H:%M:%S")
    userid = session['userid']
    id = str(random.randint(1000, 9999))
    json = {'id': id,
            'Nitrogen': N, 'Phosphorus': P,
            'Potassium': K, "Temperature": temperature,
            "Humidity": humidity, "Ph": ph,
            'Rainfall': rainfall, 'PredictedCrop': final_prediction,
            'UserId': userid, 'DateTime': date_time}

    db = firestore.client()
    newuser_ref = db.collection('newprediction')
    newuser_ref.document(id).set(json)
    return render_template('usermakepredictions1.html',
                           prediction=final_prediction)
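weather_fetch above converts the Kelvin temperature returned by OpenWeatherMap to Celsius before it is fed to the model. As a small standalone sketch (the helper name is ours), the conversion can be checked in isolation:

```python
def kelvin_to_celsius(kelvin):
    # Same conversion as in weather_fetch: Kelvin reading, rounded to 2 decimals
    return round(kelvin - 273.15, 2)

print(kelvin_to_celsius(300.15))  # a warm day: 27.0 °C
print(kelvin_to_celsius(273.15))  # freezing point: 0.0 °C
```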

TESTING AND IMPLEMENTATION

SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising software
with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each test type addresses a specific testing
requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly and that program inputs
produce valid outputs. All decision branches and internal code flow should be
validated. It is the testing of individual software units of the application,
done after the completion of an individual unit and before integration. This
is structural testing that relies on knowledge of the unit's construction and
is invasive. Unit tests perform basic tests at the component level and test a
specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and
expected results.
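As a concrete illustration, the nutrient-deviation rule used by fert_recommend1 in Main.py can be unit tested once it is factored out as a pure function. The sketch below re-implements that rule; the helper name and the test functions are ours, not part of the original code:

```python
def select_fertilizer_key(n_diff, p_diff, k_diff):
    """Return the advice key ('Nlow', 'NHigh', ...) for the dominant deviation."""
    temp = {abs(n_diff): "N", abs(p_diff): "P", abs(k_diff): "K"}
    nutrient = temp[max(temp.keys())]
    diff = {"N": n_diff, "P": p_diff, "K": k_diff}[nutrient]
    return nutrient + ("High" if diff < 0 else "low")


def test_nitrogen_deficit_dominates():
    # Soil is 40 units short of the ideal N value -> advice for low nitrogen
    assert select_fertilizer_key(40, 5, -3) == "Nlow"


def test_potassium_surplus_dominates():
    # Soil has 50 units more K than ideal -> advice for high potassium
    assert select_fertilizer_key(10, 5, -50) == "KHigh"


test_nitrogen_deficit_dominates()
test_potassium_surplus_dominates()
print("fertilizer key tests passed")
```

Each test exercises one branch of the decision logic with clearly defined inputs and expected results, as described above.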

Integration testing
Integration tests are designed to test integrated software components
to determine whether they actually run as one program. Testing is event
driven and is more concerned with the basic outcome of screens or fields.
Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is
specifically aimed at exposing the problems that arise from the combination
of components.

Functional test
Functional tests provide systematic demonstrations that the functions
tested are available as specified by the business and technical
requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic
coverage pertaining to identified business process flows, data fields,
predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are
identified and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system
meets requirements. It tests a configuration to ensure known and predictable
results. An example of system testing is the configuration oriented system
integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.

White Box Testing


White box testing is testing in which the software tester has
knowledge of the inner workings, structure and language of the software, or
at least its purpose. It is used to test areas that cannot be reached from a
black box level.

Black Box Testing


Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black box
tests, like most other kinds of tests, must be written from a definitive
source document, such as a specification or requirements document.

Unit Testing:
Unit testing is usually conducted as part of a combined code and unit
test phase of the software lifecycle, although it is not uncommon for coding
and unit testing to be conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be
written in detail.

Test objectives
- All field entries must work properly.
- Pages must be activated from the identified link.
- The entry screen, messages and responses must not be delayed.

Features to be tested
- Verify that the entries are of the correct format.
- No duplicate entries should be allowed.
- All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of
two or more integrated software components on a single platform, aimed at
exposing failures caused by interface defects.
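A lightweight way to exercise such interface-level failures in a Flask app is the built-in test client, which sends requests through the real routing and form-parsing layers. The sketch below uses a stand-in route so it is self-contained; the actual app in Main.py additionally needs Firebase credentials and the pickled models:

```python
from flask import Flask, request

app = Flask(__name__)


@app.route('/crop-predict', methods=['POST'])
def crop_predict():
    # Stand-in for the real prediction route in Main.py
    if not request.form.get('city'):
        return "city required", 400
    return "rice", 200


client = app.test_client()

# Valid request travels through routing, form parsing and the view function
ok = client.post('/crop-predict', data={'city': 'Bangalore'})
assert ok.status_code == 200

# A missing field is rejected at the interface boundary
bad = client.post('/crop-predict', data={})
assert bad.status_code == 400
print("integration checks passed")
```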

Test Results:
All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system
meets the functional requirements.

Test Results:
All the test cases mentioned above passed successfully. No defects were
encountered.

| TestCase Number | Testing Scenario                                                         | Expected Result                                    | Result |

Registration Testing

| TC-01 | Clicking submit without entering details                                           | Alert "Please fill all details"                    | Pass |
| TC-02 | Clicking submit without entering Username                                          | Alert "Please fill Username"                       | Pass |
| TC-03 | Clicking submit without entering password                                          | Alert "Please fill Password"                       | Pass |
| TC-04 | Clicking submit without entering email id                                          | Alert "Please fill email id"                       | Pass |
| TC-05 | Clicking submit without entering phone number                                      | Alert "Please fill contact number"                 | Pass |
| TC-06 | Clicking submit with confirm password data not matching the password data          | Alert "Password and Confirm Password do not match" | Pass |

Login Testing

| TC-07 | Clicking submit without entering login details                                     | Alert "Please enter the username and password"     | Pass |
| TC-08 | Clicking submit without entering password                                          | Alert "Please enter the password"                  | Pass |
| TC-09 | Clicking submit without entering Username                                          | Alert "Please enter the Username"                  | Pass |
| TC-10 | Clicking submit entering wrong Username                                            | Alert "Invalid User"                               | Pass |
| TC-11 | Clicking submit entering wrong password                                            | Alert "Invalid User"                               | Pass |
| TC-12 | Clicking submit entering wrong Username and password                               | Alert "Invalid User"                               | Pass |

CONCLUSION

The system was successfully developed to meet the needs of its clients and was found to provide
all the features required by the organization. The accuracy and robustness of the software were also
ensured, and the system provides benefits such as a user-friendly environment.

The primary objective of this study was to classify heart disease using different models and a
real-world dataset. The k-modes clustering algorithm was applied to a dataset of patients with heart
disease to predict the presence of the disease. The dataset was preprocessed by converting the age
attribute to years and dividing it into bins of 5-year intervals, as well as dividing the diastolic and systolic
blood pressure data into bins of 10 intervals. The dataset was also split on the basis of gender to take into
account the unique characteristics and progression of heart disease in men and women.

The elbow curve method was utilized to determine the optimal number of clusters for both the male
and female datasets. The results indicated that the MLP model had the highest accuracy of 87.23%.
These findings demonstrate the potential of k-modes clustering to accurately predict heart disease and
suggest that the algorithm could be a valuable tool in the development of targeted diagnostic and
treatment strategies for the disease. The study utilized the Kaggle cardiovascular disease dataset with
70,000 instances, and all algorithms were implemented on Google Colab. The accuracies of all algorithms
were above 86% with the lowest accuracy of 86.37% given by decision trees and the highest accuracy
given by multilayer perceptron, as previously mentioned.

FUTURE ENHANCEMENT
Limitations. Despite the promising results, there are several limitations that should be noted. First,
the study was based on a single dataset and may not be generalizable to other populations or patient
groups. Furthermore, the study only considered a limited set of demographic and clinical variables and
did not take into account other potential risk factors for heart disease, such as lifestyle factors or genetic
predispositions. Additionally, the performance of the model on a held-out test dataset was not evaluated,
which would have provided insight on how well the model generalizes to new, unseen data. Lastly, the
interpretability of the results and the ability to explain the clusters formed by the algorithm was not
evaluated. In light of these limitations, it is recommended to conduct further research to address these
issues and to better understand the potential of k-modes clustering.

Future research. Future research could focus on addressing the limitations of this study by
comparing the performance of the k-modes clustering algorithm with other commonly used clustering
algorithms, such as k-means or hierarchical clustering, to gain a more comprehensive understanding of
its performance. Additionally, it would be valuable to evaluate the impact of missing data and outliers on
the accuracy of the model and develop strategies for handling these cases. Furthermore, it would be
beneficial to evaluate the performance of the model on a held-out test dataset in order to establish its
generalizability to new, unseen data. Ultimately, future research should aim to establish the robustness
and generalizability of the results and the interpretability of the clusters formed by the algorithm, which
could aid in understanding the results and support decision making based on the study’s findings.

We plan to formalize our approach that will allow us to provide more rigorous evaluation. This
would include developing a core calculus for the TPM’s machine model based on the cryptographic
protocol Spi calculus. This semantics would account for the authentication, secrecy, and integrity
properties of the TPM. Furthermore, a formal semantics for our approach can be built on top of this core
calculus similar to the techniques
