Unit-II (Big Data)


UNIT-II

2.1 INTRODUCTION TO STREAMS CONCEPTS

Stream processing is a data management technique that involves ingesting a continuous data
stream to quickly analyze, filter, transform or enhance the data in real time.

Big data streaming is a process in which big data is processed quickly in order to extract real-
time insights from it. It is ideally a speed-focused approach in which a continuous stream of
data is processed.

In big data streaming, large streams of real-time data are processed with the sole aim of
extracting insights and useful trends. A continuous stream of unstructured data is sent into
memory for analysis before being stored on disk. This happens across a cluster of servers.
Speed matters most in big data streaming: the value of data decreases with time if it is not
processed quickly.

Streams: Temporally ordered, fast changing, massive and potentially infinite

Data Stream:

 A data stream is a massive sequence of data.
 It is too large to store in its entirety (on disk, in memory, in cache, etc.).

Types of Data Streams:


Data stream –
A data stream is a (possibly unbounded) sequence of tuples. Each tuple comprises a set
of attributes, similar to a row in a database table.
 Transactional data stream –
It is a log of interactions between entities:
 Credit card – purchases by consumers from merchants
 Telecommunications – phone calls by callers to the dialed parties
 Web – accesses by clients of information at servers
 Measurement data stream –
It records the state of an entity or environment over time:
 Sensor networks – physical phenomena, road traffic
 IP network – traffic at router interfaces
 Earth climate – temperature and humidity levels at weather stations

Characteristics of Data Streams:


 Large volumes of continuous data, possibly infinite.
 Constantly changing, requiring fast, real-time responses.
 The data stream model captures the data processing needs of many of today's applications.
 Random access is expensive, so single-scan algorithms are needed.
 Only a summary of the data seen so far is stored.
 Most stream data are at a fairly low level of abstraction or are multidimensional in nature,
and thus need multilevel and multidimensional treatment.
Challenges in Working with Streaming Data

o Storage layer
o Processing layer
o Scalability
o Data durability
o Fault tolerance in both the storage and processing layers

Stream Data Sources


 Sensor Data
 Image Data
 Internet and Web Traffic

BATCH PROCESSING VS. REAL-TIME STREAM PROCESSING

In batch data processing, data is downloaded in batches before being processed, stored, and
analyzed. Stream processing, on the other hand, ingests data continuously, allowing it to be
processed simultaneously and in real time.

The main benefit of stream processing is real-time insight. We live in an information age where
new data is constantly being created. Organizations that leverage streaming data analytics can
take advantage of real-time information from internal and external assets to inform their
decisions, drive innovation and improve their overall strategy.
2.2 STREAM DATA MODEL AND ARCHITECTURE

Any number of streams can enter the system. Each stream can provide elements at its own
schedule; they need not have the same data rates or data types, and the time between elements
of one stream need not be uniform. The fact that the rate of arrival of stream elements is not
under the control of the system distinguishes stream processing from the processing of data
that goes on within a database-management system. The latter system controls the rate at which
data is read from the disk, and therefore never has to worry about data getting lost as it attempts
to execute queries.

 Data Stream Management System


 A data stream is a real-time, continuous, ordered (implicitly by arrival time or explicitly by
timestamp) sequence of items.
 It is hard to control the order in which items arrive.
 It is not feasible to locally store a stream in its entirety.
 A DSMS (data stream management system) is a computer program that manages
continuous data streams.
 Stream Processor
The stream processor receives the data stream.
 Data streams are volatile.
 They provide sequential access to data.
 Stream data change continuously.
 Data streams are generated by various sources.
 Data can be viewed as an infinite, time-oriented sequence of tuples.
 Any number of streams can enter the system
 Standing query
Users write queries that are placed in the stream processor to process the data stream and store
the results in a limited working storage (either main memory or disk). These queries are, in a
sense, permanently executing, and produce outputs at appropriate times. Standing queries are
asked at all times, in contrast to ad-hoc queries, which are asked once.
Example:
1. Standing query to output an alert whenever the temperature exceeds 25 degrees
centigrade from the stream produced by the ocean-surface-temperature sensor.
2. If we want the average temperature over all time, we only have to record two values:
the number of readings ever sent in the stream and the sum of those readings (see the
sketch below).
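
A minimal Python sketch of these two standing queries, assuming readings arrive one at a time from a hypothetical ocean-surface-temperature sensor; only two values (a count and a sum) are kept for the running average:

count = 0       # number of readings ever seen
total = 0.0     # sum of all readings ever seen

def on_reading(temp):
    """Process one stream element against both standing queries."""
    global count, total
    if temp > 25:                       # standing query 1: threshold alert
        print(f"ALERT: temperature {temp} exceeds 25 degrees centigrade")
    count += 1                          # standing query 2: running average,
    total += temp                       # stored as just two values
    return total / count

for t in [21.5, 24.0, 26.3, 25.1]:      # example readings (hypothetical)
    avg = on_reading(t)
print(f"average over all time: {avg:.3f}")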
 Ad-Hoc Queries
If we want the facility to ask a wide variety of ad-hoc queries, a common approach is to store
a sliding window of each stream in the working store. A sliding window can be the most
recent n elements of a stream, for some n, or it can be all the elements that arrived within the
last t time units, e.g., one day.

 Archival Store
Streams may be archived in a large archival store, but we assume it is not possible to answer
queries from the archival store. It could be examined only under special circumstances using
time-consuming retrieval processes.

 Working Store
There is also a working store, into which summaries or parts of streams may be placed, and
which can be used for answering queries. The working store might be disk, or it might be main
memory, depending on how fast we need to process queries.
But either way, it is of sufficiently limited capacity that it cannot store all the data from
all the streams.

Benefits of Stream Data Model

 Able to deal with never-ending streams of events


 Real-time or near-real-time processing
 Detecting patterns in time-series data
 Easy Data scalability

Types of queries on Data stream

o Filtering a Data stream


- Select element with property x from the stream
o Counting distinct elements
- Number of distinct elements from the last k elements of the stream
o Estimating moments
- Estimating Avg./ Std. dev. of last k elements.
o Finding frequent elements
- Identifying which elements occur repeatedly in the stream

2.3 STREAM COMPUTING

Stream computing is a way to analyze and process Big Data in real time to gain current insights,
take appropriate decisions, or predict new trends in the immediate future. Streams arrive at a
high rate, and stream computing is implemented in distributed, clustered environments.

Stream computing Applications

o Financial sectors
o Business intelligence
o Risk management
o Marketing management
o Search engines
o Social network analysis
o Mining query streams
 Ex: Google wants to know what queries are more frequent today than yesterday
o Mining click streams
 Ex: Yahoo wants to know which of its pages are getting an unusual number of hits
in the past hour
o Mining social network news feeds
 E.g., Look for trending topics on Twitter, Facebook

2.4 SAMPLING DATA IN A STREAM

The general problem we shall address is selecting a subset of a stream so that we can ask queries
about the selected subset and have the answers be statistically representative of the stream as a
whole.

Since we can’t store the entire stream, one obvious approach is to store a sample

Two different problems:


- Sample a fixed proportion of elements in the stream (say 1 in 10)
- Maintain a random sample of fixed size over a potentially infinite stream

Problem-1:
Example: A search engine receives a stream of queries, and it would like to study the behaviour
of typical users. We assume the stream consists of tuples (user, query, time). Suppose that we
want to answer queries such as "What fraction of the typical user's queries were repeated over
the past month?" Assume also that we wish to store only 1/10th of the stream elements.

Search engine query stream


o Stream of tuples (User, query, time)
o How often did a user run the same query in a single day
o Have space to store 1/10th of query stream

Fixed size sample

Naïve Solution

o Generate a random integer in [0…9] for each query.


o Store the query if the integer is 0, otherwise discard.

Sample users (a sketch follows)

o Pick 1/10th of the users and keep all the searches of the users in the sample.
o Use a hash function that hashes the user name or user ID uniformly into 10 buckets.
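
A minimal Python sketch of this user-level sampling, assuming a stable hash (hashlib.md5 rather than Python's salted built-in hash) so the same user always lands in the same bucket; the user IDs and queries are hypothetical:

import hashlib

def in_sample(user_id: str, buckets: int = 10, accept: int = 1) -> bool:
    """Hash the user ID uniformly into `buckets` buckets and keep the
    element only if the user falls into one of the accepted buckets."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return h % buckets < accept          # keeps ~1/10th of the users

stream = [("alice", "weather"), ("bob", "news"), ("alice", "weather")]
kept = [(user, query) for (user, query) in stream if in_sample(user)]

Sampling whole users rather than 1 in 10 queries keeps every query of a sampled user, so per-user statistics such as the fraction of repeated queries remain unbiased.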
Random sampling

Reservoir sampling algorithm (sketched below)

o Store all of the first s elements of the stream in S.
o Suppose we have seen n − 1 elements, and now the nth element arrives (n > s):
- With probability s/n, keep the nth element; otherwise discard it.
- If we keep the nth element, it replaces one of the s elements in the
sample S, picked uniformly at random.
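
A short Python sketch of reservoir sampling as described above; the sample stays uniform over everything seen so far:

import random

def reservoir_sample(stream, s):
    """Maintain a uniform random sample of fixed size s over a stream."""
    sample = []
    for n, element in enumerate(stream, start=1):
        if n <= s:
            sample.append(element)                 # store the first s elements
        elif random.random() < s / n:              # keep the nth element w.p. s/n
            sample[random.randrange(s)] = element  # replace a uniform victim
    return sample

print(reservoir_sample(range(1000), s=10))         # 10 uniformly sampled values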

Sliding windows

A useful model of stream processing is to ask queries about a sliding window: the most recent
n elements of a stream, or all the elements that arrived within the last t time units.

2.5 FILTERING STREAMS

o Identifies sequences matching a desired pattern in a stream.
o Stream filtering is the process of selecting or matching instances of a desired pattern
in a continuous stream of data.
o Assume that a data stream consists of tuples.
o Filtering steps (sketched below): (i) accept the tuples that meet a criterion in the stream,
(ii) pass the accepted tuples to another process as a stream, and (iii) discard the
remaining tuples.
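
A minimal Python sketch of these three filtering steps, using a generator so that accepted tuples flow downstream as a stream; the sensor readings are hypothetical:

def filter_stream(stream, predicate):
    """Yield only the tuples that satisfy the predicate."""
    for tup in stream:
        if predicate(tup):        # (i) accept tuples meeting the criterion
            yield tup             # (ii) pass accepted tuples on as a stream
        # (iii) remaining tuples are simply discarded

readings = [("s1", 22.4), ("s2", 27.9), ("s3", 25.6)]
hot = filter_stream(readings, lambda t: t[1] > 25)
print(list(hot))                  # [('s2', 27.9), ('s3', 25.6)]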

The Bloom Filter Analysis

o A simple, space-efficient, probabilistic data structure introduced by Burton Howard
Bloom in 1970.
o The filter tests the membership of an element in a data set.

Obvious Solution: Hash Table


- But suppose we do not have enough memory to store all of S in a hash table.
E.g., we might be processing millions of filters on the same stream.
A Bloom filter consists of:

 An array of n bits, initially all 0's.
 A collection of hash functions h1, h2, ..., hk. Each hash function maps key values to
the n buckets corresponding to the n bits of the bit-array.
 A set S of m key values.

The Bloom filter allows through all stream elements whose keys are in S, while rejecting most of
the stream elements whose keys are not in S.

Illustration of the Bloom Filter

Use k independent hash functions instead of 1.

Fig: Bloom hashing process for a stream element q with k = 3 hash functions h1(q), h2(q), and h3(q).

Inserting elements and then searching for them with the same hash functions works as in the sketch below.
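
A compact Python sketch of a Bloom filter with k hash functions derived from one digest (an assumption made here for illustration; any k independent hash functions would do):

import hashlib

class BloomFilter:
    """Minimal Bloom filter: an n-bit array plus k hash functions."""
    def __init__(self, n_bits=64, k=2):
        self.n, self.k = n_bits, k
        self.bits = [0] * n_bits

    def _positions(self, key):
        for i in range(self.k):           # derive k hash values from one digest
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.n

    def insert(self, key):
        for p in self._positions(key):
            self.bits[p] = 1              # set all k bits for this key

    def might_contain(self, key):
        # True can be a false positive; False is always correct
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter(n_bits=64, k=2)
bf.insert("alice")
print(bf.might_contain("alice"))          # True (no false negatives)
print(bf.might_contain("mallory"))        # usually False; rarely a false positive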

Properties of Bloom filter

 No false negatives
- If the key was inserted before, the Bloom filter always returns true.
 Chance of false positives
- There is a possibility that it returns true for an element that was never inserted.
Weaknesses of Bloom filters

 Needs fully independent hash functions.
 Growing a Bloom filter dynamically is hard.
 The best size depends on the desired false-positive rate and the number of insertions.

2.6 COUNTING DISTINCT ELEMENTS IN A STREAM

Finding the number of distinct (Unique) elements in a data stream with repeated elements.

Example:

- Elements might represent IP addresses of packets passing through a router.
- Unique visitors to a web site.
- Elements in a large data set.
- Motifs in a DNA data set.
- Elements of an RFID/sensor network.

The Flajolet-Martin (FM) Algorithm

It approximates the number of unique objects in a stream or a database in one pass.

If the stream contains n elements with m of them unique, the algorithm runs in O(n) time and
needs O(log m) memory. It gives an approximation of the number of unique objects, along
with a standard deviation σ and a maximum error ε.

Whenever we apply a hash function h to a stream element a, the bit string h(a) will end in some
number of 0's, possibly none. Call this number the tail length for h(a). Let R be the maximum
tail length of any a seen so far in the stream. Then we shall use the estimate 2^R for the number
of distinct elements seen in the stream.

FM Algorithm

o Pick a hash function h that maps each of the n elements to at least log2 n bits.
o For each stream element a, let r(a) be the number of trailing 0's in h(a).
o Record R = the maximum r(a) seen so far.
o Estimate the number of distinct elements as 2^R.
The FM algorithm can estimate the result correctly only if an appropriate hash function is used;
a sketch follows.
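
A minimal Python sketch of the FM estimate using a single md5-based hash (an assumption for illustration; in practice many hash functions are used and their estimates are combined):

import hashlib

def tail_length(x: int) -> int:
    """Number of trailing 0 bits in x (defined as 32 here if x == 0)."""
    return (x & -x).bit_length() - 1 if x else 32

def fm_estimate(stream):
    """Estimate the distinct count as 2^R, where R is the maximum
    tail length of any hashed element seen in the stream."""
    R = 0
    for a in stream:
        h = int(hashlib.md5(str(a).encode()).hexdigest(), 16) & 0xFFFFFFFF
        R = max(R, tail_length(h))
    return 2 ** R

print(fm_estimate(["a", "b", "c", "a", "b", "a"]))  # rough power-of-2 estimate of 3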

2.7 ESTIMATING MOMENTS

Statistical parameters such as the mean and variance are related to moments, which are used to
estimate or compute the distribution of frequencies of the different elements in a stream.

Suppose a stream has elements chosen from a set A of N values

Let mi be the number of times value i occurs in the stream

The kth moment of the stream is ∑i∈A (mi)^k.

Ex: A = {4, 5, 4, 5, 3, 2, 4, 5, 4, 2, 4, 3}

m4 = 5, m5 = 3, m2 = 2, m3 = 2

Special cases of the kth moment ∑i∈A (mi)^k:

0th Moment
The 0th moment is the sum of 1 for each mi that is greater than 0; that is, the 0th moment is a
count of the number of distinct elements in the stream.

1st Moment
The 1st moment is the sum of the mi’s, which must be the length of the stream. Thus, first
moments are especially easy to compute; just count the length of the stream seen so far
2nd moment
The second moment is the sum of the squares of the mi’s. It is sometimes called the surprise
number, since it measures how uneven the distribution of elements in the stream is.
 To see the distinction, suppose we have a stream of length 100, in which eleven
different elements appear. The most even distribution of these eleven elements would
have one appearing 10 times and the other ten appearing 9 times each. In this case, the
surprise number is 10² + 10 × 9² = 910.
 At the other extreme, one of the eleven elements could appear 90 times and the other
ten appear 1 time each. Then the surprise number would be 90² + 10 × 1² = 8110.
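
These moments are easy to check directly; a small Python sketch computes the kth moment from exact counts (feasible only when all the mi fit in memory):

from collections import Counter

def kth_moment(stream, k):
    counts = Counter(stream)                # m_i for each distinct value i
    return sum(m ** k for m in counts.values())

A = [4, 5, 4, 5, 3, 2, 4, 5, 4, 2, 4, 3]    # the example set above
print(kth_moment(A, 0))   # 4  distinct elements (0th moment)
print(kth_moment(A, 1))   # 12, the stream length (1st moment)
print(kth_moment(A, 2))   # 42 = 5^2 + 3^2 + 2^2 + 2^2, the surprise number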

The Alon-Matias-Szegedy (AMS) Algorithm for Second Moments


Let us assume that a stream has a particular length n
Suppose we do not have enough space to count all the mi’s for all the elements of the stream.
We can still estimate the second moment of the stream using a limited amount of space;
The more space we use, the more accurate the estimate will be.
1. AMS method works for all moments.
2. Gives an unbiased estimate
3. We will just concentrate on the 2nd moment S
4. We pick and keep track of many variables X.
- For each variable X we store X.ele and X.val
 X.ele corresponds to the item i
 X.val corresponds to the count of item i
- Note this requires a count in main memory, so the number of X's is limited.

5. Our goal is to estimate the second moment S = ∑i (mi)².

For One Random Variable X

1. How to set X.val and X.ele?


- Assume the stream has length n.
- Pick some random time t to start.
 Let the stream have item i at time t (we set X.ele = i).
 Then we maintain a count c (X.val = c) of the number of i's in the stream
starting from the chosen time t.
2. Then the estimate of the 2nd moment ∑i (mi)² is
S = f(X) = n (2c − 1)
- Note: we keep track of multiple variables (X1, X2, ..., Xk), and our final estimate
is the average of their individual estimates.
Example

Suppose the stream is a, b, c, b, d, a, c, d, a, b, d, c, a, a, b.


The length of the stream is n = 15.

Element   Count (mi)
a         5
b         4
c         3
d         3

The second moment of the stream is ∑i∈A (mi)² = 5² + 4² + 3² + 3² = 59.

Let X1, X2, and X3 be variables picked at random times corresponding to the 3rd, 8th,
and 13th positions of the above stream.

X1.element = c, X1.value = 3
X2.element = d, X2.value = 2
X3.element = a, X3.value = 2

We can derive an estimate of the second moment from any variable X.
This estimate is n × (2 × X.value − 1):

For X1: 15 × (2 × 3 − 1) = 75
For X2: 15 × (2 × 2 − 1) = 45
For X3: 15 × (2 × 2 − 1) = 45

Average estimate = (75 + 45 + 45) / 3 = 55
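
A small Python sketch reproducing this example; the sampled positions 3, 8, and 13 are taken as given (1-indexed) rather than drawn at random:

def ams_estimate(stream, positions):
    """AMS estimate of the second moment: for each sampled position t,
    X.ele is the element at t and X.val counts its occurrences from t on."""
    n = len(stream)
    estimates = []
    for t in positions:                       # 1-indexed sample positions
        ele = stream[t - 1]                   # X.ele
        val = stream[t - 1:].count(ele)       # X.val
        estimates.append(n * (2 * val - 1))   # f(X) = n(2c - 1)
    return sum(estimates) / len(estimates)    # average over all variables

stream = list("abcbdacdabdcaab")
print(ams_estimate(stream, [3, 8, 13]))       # (75 + 45 + 45) / 3 = 55.0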

2.8 COUNTING 1’S IN A WINDOW

DGIM (Datar, Gionis, Indyk, Motwani) Algorithm

1. The DGIM (Datar, Gionis, Indyk, Motwani) algorithm.
2. Designed to estimate the number of 1's in the last N bits of a binary stream.
3. The algorithm uses O(log² N) bits to represent a window of N bits.
4. It estimates the number of 1's in the window with an error of no more
than 50%.
Components of DGIM Algorithm
1. Timestamp
2. Buckets
- Each bit that arrives has a timestamp for the position at which it arrives.
- If the first bit has timestamp 1, the second bit has timestamp 2, and so on.
- Positions are interpreted relative to the window size N (timestamps can be kept
modulo N).
- The window is divided into buckets consisting of 1's and 0's.

Rules for forming the buckets

There are five rules that must be followed when representing a stream by buckets.
 The right end of a bucket is always a position with a 1.
 No position is in more than one bucket.
 There are one or two buckets of any given size, up to some maximum size.
 All sizes must be a power of 2.
 Buckets cannot decrease in size as we move to the left (back in time).

Fig: Dividing a bit stream into buckets following the DGIM rules

At the right (most recent) end we see two buckets of size 1. To its left we see one bucket of
size 2. Note that this bucket covers four positions, but only two of them are 1. Proceeding left,
we see two buckets of size 4, and we suggest that a bucket of size 8 exists further left. Notice
that it is OK for some 0’s to lie between buckets. Also, observe from above Fig. that the buckets
do not overlap; there are one or two of each size up to the largest size, and sizes only increase
moving left.
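
A simplified Python sketch of DGIM bucket maintenance under the five rules above; each bucket is a (right-end timestamp, size) pair kept newest-first, and the count estimate takes all buckets fully except half of the oldest:

def dgim_update(buckets, timestamp, bit, N):
    """Process one bit: drop expired buckets, add a size-1 bucket for a 1,
    and merge the two oldest buckets whenever three share a size."""
    buckets[:] = [(t, s) for (t, s) in buckets if t > timestamp - N]
    if bit == 1:
        buckets.insert(0, (timestamp, 1))
        size = 1
        while sum(1 for (_, s) in buckets if s == size) == 3:
            oldest_two = [(t, s) for (t, s) in buckets if s == size][-2:]
            for b in oldest_two:
                buckets.remove(b)
            merged_t = max(t for (t, _) in oldest_two)   # newer right end
            buckets.append((merged_t, 2 * size))
            buckets.sort(key=lambda b: -b[0])            # keep newest-first
            size *= 2                                    # merging may cascade

def dgim_estimate(buckets):
    """Estimate of 1's in the window: all buckets, but half the oldest."""
    if not buckets:
        return 0
    sizes = [s for (_, s) in buckets]
    return sum(sizes[:-1]) + sizes[-1] // 2

buckets = []
for ts, b in enumerate("101011011101101", start=1):
    dgim_update(buckets, ts, int(b), N=10)
print(buckets, dgim_estimate(buckets))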
2.9 DECAYING WINDOW

The decaying-window algorithm allows you to identify the most popular (in other words,
trending) elements in an incoming data stream.

This algorithm not only tracks the most recurring elements in an incoming data stream, but also
discards any random spikes or spam requests that might have boosted an element's frequency.

In the decaying window algorithm:

 We assign a score/weight to every element of the incoming data stream.
o When a new element arrives, reduce the weight of all existing elements by a constant
factor (1 − c) and then assign the new element its specified weight.
 Further, we calculate the aggregate sum for each distinct element by adding all
the weights assigned to that element.
o The aggregate sum of exponentially decaying weights can be calculated using the
following formula:

S = Σ (from i = 0 to t−1) a(t−i) × (1 − c)^i

Here, t = current timestamp,
c = a small constant,
a(i) = the weight contributed at position i (e.g., 1 if the element being scored
appears there, 0 otherwise).
 Finally, the element with the highest total score is listed as trending or most popular.

Whenever a new element, say a(t+1), arrives in the data stream, you perform the following steps
to obtain the updated sum:

1. Multiply the current sum/score by the value (1 − c).
2. Add the weight corresponding to the new element.

- In a data stream consisting of various elements, you maintain a separate sum for each
distinct element.
- For every incoming element, you multiply each existing sum by (1 − c).
- Then you add the weight of the incoming element to its corresponding aggregate sum.

Finally, the element with the highest aggregate score is listed as the most popular element.

Example

Consider the sequence of Twitter tags below:

fifa, ipl, fifa, ipl, ipl, ipl, fifa

Let each element in the sequence have a weight of 1, and let c be 0.1, so on every arrival each
existing score is multiplied by (1 − c) = 0.9.

The aggregate sum of each tag at the end of the above sequence is calculated as below:

Score of fifa (add 1 when the arriving tag is fifa, 0 otherwise):

fifa: 0 × 0.9 + 1 = 1
ipl: 1 × 0.9 + 0 = 0.9
fifa: 0.9 × 0.9 + 1 = 1.81
ipl: 1.81 × 0.9 + 0 = 1.629
ipl: 1.629 × 0.9 + 0 = 1.4661
ipl: 1.4661 × 0.9 + 0 = 1.3195
fifa: 1.3195 × 0.9 + 1 = 2.1875

Score of ipl (add 1 when the arriving tag is ipl, 0 otherwise):

fifa: 0 × 0.9 + 0 = 0
ipl: 0 × 0.9 + 1 = 1
fifa: 1 × 0.9 + 0 = 0.9
ipl: 0.9 × 0.9 + 1 = 1.81
ipl: 1.81 × 0.9 + 1 = 2.629
ipl: 2.629 × 0.9 + 1 = 3.3661
fifa: 3.3661 × 0.9 + 0 = 3.0295

At the end of the sequence, the score of fifa is about 2.19 while the score of ipl is about 3.03.

So, ipl is more trending than fifa
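
A short Python sketch of the decaying-window scoring, which reproduces the hand calculation above:

def decaying_scores(tags, c=0.1):
    """One decayed score per distinct tag: on each arrival, multiply every
    existing score by (1 - c), then add weight 1 to the arriving tag."""
    scores = {}
    for tag in tags:
        for k in scores:
            scores[k] *= (1 - c)                   # decay all existing scores
        scores[tag] = scores.get(tag, 0) + 1       # weight of the new arrival
    return scores

print(decaying_scores(["fifa", "ipl", "fifa", "ipl", "ipl", "ipl", "fifa"]))
# {'fifa': 2.1875..., 'ipl': 3.0295...} -> ipl is trending more than fifa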

Real-Time Analytics Platform (RTAP) Applications

 An ideal real-time analytics platform would help in analyzing the data, correlating it
and predicting the outcomes on a real-time basis.

• The real-time analytics platform helps organizations in tracking things in real time, thus
helping them in the decision-making process.

• The platforms connect the data sources for better analytics and visualization

What does Real-Time Analytics Platform mean?

• A real-time analytics platform enables organizations to make the most out of real-time data
by helping them to extract the valuable information and trends from it.

• Such platforms help in measuring data from the business point of view in real time, further
making the best use of data.

Examples of real-time analytics include:

• Real time credit scoring, helping financial institutions to decide immediately whether to
extend credit.

• Customer relationship management (CRM), maximizing satisfaction and business results


during each interaction with the customer.

 Fraud detection at points of sale

Why RTAP?

• Decision systems that go beyond visual analytics have an intrinsic need to analyze data and
respond to situations.
• Depending on the sophistication, such systems may have to act rapidly on incoming
information, grapple with heterogeneous knowledge bases, work across multiple domains, and
often in a distributed manner.
• Big Data platforms offer programming and software infrastructure to help perform analytics
that support the performance and scalability needs of such decision support systems
for IoT domains.
• Big Data analytics platforms have focused significantly on the volume dimension
of Big Data.
• In such platforms, such as MapReduce, data is staged and aggregated over time, and analytics
are performed in batch mode over these large data corpora.
• These platforms weakly scale with the size of the input data, as more distributed compute
resources are made available.
 However, as we have motivated before, IoT applications place an emphasis on online
analytics, where data that arrives rapidly needs to be processed and analyzed with low latency
to drive autonomic decision making.

Types of Real-Time Data Analytics

There are different types of real-time analytics

 On-demand analytics
 Continuous—or streaming—analytics.

On-demand real-time analytics waits for users or systems to request a query and then delivers
the analytic results.

Continuous real-time analytics is more proactive and alerts users or triggers responses as
events happen.

Streaming Analytics Platforms For All Real-time Applications

The top platforms being used all over the world for Streaming analytics solutions:

Apache Flink

• Flink is an open-source platform that handles distributed stream and batch data processing.
• At its core is a streaming data engine that provides for data distribution, fault tolerance, and
communication, for undertaking distributed computations over the data streams.
• In the last year, the Apache Flink community saw three major version releases for the platform
and the community event Flink Forward in San Francisco.
• Apache Flink contains several APIs to enable creating applications that use the Flink engine.
Some of the most popular APIs on the platform are-
• the DataStream API for unbounded streams, the DataSet API for static data, embedded in
Python, Java, and Scala, and the Table API with a SQL-like language.
Spark Streaming

• Apache Spark is used to build scalable and fault-tolerant streaming applications.


• With Spark Streaming, you get to use Apache Spark's language-integrated API, which lets you
write streaming jobs the same way you write batch jobs.
• Spark Streaming supports three languages: Java, Scala, and Python. Apache Spark is used
in various leading industries today, such as healthcare, finance, e-commerce, media and
entertainment, and travel.
• The popularity of Apache Spark adds to the appeal of Spark Streaming.

IBM Streams
• This streaming analytics platform from IBM enables the applications developed by users to
gather, analyze, and correlate information that comes to them from a variety of sources.
• The solution is known to handle high throughput rates and up to millions of events and
messages per second, making it a leading proprietary streaming analytics solution for real-time
applications.
• IBM stream computing helps analyze large streams of data in the form of unstructured text,
audio, video, and geospatial data, and allows organizations to spot risks and opportunities and
make efficient decisions.

Software AG's Apama Streaming Analytics


• Apama Streaming analytics platform is built for streaming analytics and automated action on
fast-moving data on the basis of intelligent decisions.
• The software bundles up other aspects like messaging, event processing, in-memory data
management and visualization and is ideal for fast-moving Big Data Analytics Solutions.
Sensors that bring in loads of data from different sources can be churned using this solution in
real-time.
• With Apama, you can act on high-volume business operations in real-time.

Azure Stream Analytics


• Azure Stream Analytics facilitates the development and deployment of low-cost solutions
that can gain real-time insights from devices, applications, and sensors.
• It is recommended to be used for IoT scenarios like real-time remote management and
monitoring, connected cars, etc.
• It allows developers to easily build and run parallel real-time analytics on IoT and other kinds
of Big Data using a simple language that resembles SQL.
• These streaming applications and platforms are helping organizations drive their streaming
analytics goals and IoT solutions with ease.
• Big Data is a source of knowledge today and organizations are increasingly trying to leverage
its potential to drive their decisions and major changes.

Thus, a real-time analytics platform is useful for real-time application development and its
successful implementation.
Real-Time Sentiment Analysis

What is Sentiment Analysis?


• Sentiment analysis is:
– the detection of attitudes: "enduring, affectively colored beliefs, dispositions
towards objects or persons"
1. Holder (source) of the attitude
2. Target (aspect) of the attitude
3. Type of attitude
• from a set of types (like, love, hate, value, desire, etc.)
• or, more commonly, a simple weighted polarity: positive, negative, or
neutral, together with a strength
4. Text containing the attitude (a sentence or an entire document)

Types of Sentiment Analysis

1. Fine-grained sentiment analysis: This is based on polarity. The categories can be very
positive, positive, neutral, negative, and very negative, rated on a scale of 1 to 5: a rating
of 5 is very positive, 2 is negative, and 3 is neutral.
2. Emotion detection: Sentiments such as happy, sad, angry, upset, jolly, pleasant, and so on
come under emotion detection. It is also known as the lexicon method of sentiment analysis.
3. Aspect-based sentiment analysis: It focuses on a particular aspect; for instance, if a
person wants to evaluate a feature of a cell phone, such as the battery, screen, or camera
quality, then aspect-based analysis is used.
4. Multilingual sentiment analysis: This covers different languages, in which the
classification into positive, negative, and neutral still needs to be done. It is highly
challenging and comparatively difficult.

Sentiment analysis has many other names

1. Opinion extraction
2. Opinion mining
3. Sentiment mining
4. Subjectivity analysis

Why sentiment analysis?

• Sentiment analysis is required whenever an organization needs to track progress or growth in
terms of positive or negative opinion and to take follow-up actions for or against it.
• Movie: is this review positive or negative?
• Products: what do people think about the new iPhone?
• Public sentiment: how is consumer confidence? Is despair increasing?
 Politics: what do people think about this candidate or issue?
• Prediction: predict election outcomes or market trends from sentiment
Data Extraction: The top trending event and related tweets are extracted for a particular location
using its "Where on Earth ID" (WOEID). WOEIDs are unique, non-repetitive 32-bit identifiers.
Once the top trending topic is obtained for a given location, tweets related to it are extracted
and stored.

Corpus creation and Pre-processing: It is a very important step to clean and pre-process tweets,
as it reduces the noise in the data. To do this, tweets are converted into a corpus of words, and
pre-processing and cleaning of the data are performed.
The extracted tweets should be freed from:
 Punctuation
 White space
 Special characters such as '#' and '@', and numbers
 Stop words such as 'is', 'at', 'the', etc.
 The data is converted to lower case for uniformity and better visibility.
 Web-related tokens such as 'http' and 'https' are also removed.
Stemming and lemmatization are finally applied to remove suffixes from words in order to
obtain their common origin. A minimal cleaning sketch follows.
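
A minimal Python sketch of this cleaning step using regular expressions; the stop-word list is an illustrative subset, and here the '#' symbol is stripped while the hashtag word itself is kept:

import re

STOP_WORDS = {"is", "at", "the", "a", "an", "of", "now"}   # illustrative subset

def clean_tweet(text: str) -> list:
    """Strip URLs, mentions, punctuation, numbers and stop words,
    and lower-case the remaining words."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # 'http'/'https' links
    text = re.sub(r"@\w+", " ", text)                    # drop '@' mentions
    text = text.replace("#", " ")                        # keep hashtag words
    text = re.sub(r"[^A-Za-z\s]", " ", text)             # punctuation, numbers
    words = text.lower().split()                         # also trims white space
    return [w for w in words if w not in STOP_WORDS]

print(clean_tweet("The match at #fifa is LIVE now! https://t.co/xyz @fan123"))
# ['match', 'fifa', 'live']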

Perform Analysis: Real-time sentiment analysis is performed on this pre-processed set of
Twitter data by developing and implementing various combinations of machine learning and
lexicon methods.
 Document Term Matrix: a 2-dimensional matrix representation of the data. The terms
(words) are represented as rows and the documents are represented as columns.
 Labelling using uni-grams/bi-grams/n-grams
o By identifying the sentiment polarity of each tweet, tweets are classified as
positive, negative, or neutral.
