IR Recommendation & KNN


RECOMMENDATION SYSTEM TECHNIQUES AND K-NN ALGORITHM PROBLEM

SUMITHRA.K
22UAD016
RECOMMENDATION SYSTEM TECHNIQUES

In information retrieval (IR), recommendation systems play a vital role in helping users
find relevant information based on their preferences, behaviors, or past interactions.
Several techniques can be used to design recommendation systems in IR:

1. Collaborative Filtering (CF)

• User-based Collaborative Filtering: This technique suggests items that users with
similar tastes have liked. It finds users similar to the target user and recommends items
liked by those similar users.

• Item-based Collaborative Filtering: Instead of comparing users, it compares items. It
recommends items that are similar to those a user has liked in the past. This is commonly
used in e-commerce platforms.

Pros: Easy to implement and requires no domain-specific knowledge.
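
A minimal sketch of user-based collaborative filtering on a toy rating matrix, assuming
cosine similarity between users and using only NumPy (all ratings are made up):

import numpy as np

# Toy user-item rating matrix: rows = users, columns = items (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend_user_based(target, k=2):
    # Score each item for the target user by a similarity-weighted average
    # of the ratings given by the k most similar users.
    sims = np.array([cosine_sim(ratings[target], ratings[u])
                     for u in range(len(ratings))])
    sims[target] = -1                      # exclude the user themself
    neighbors = np.argsort(sims)[-k:]      # k most similar users
    scores = sims[neighbors] @ ratings[neighbors] / (sims[neighbors].sum() + 1e-9)
    scores[ratings[target] > 0] = -np.inf  # hide items already rated
    return np.argsort(scores)[::-1]        # item indices, best first

print(recommend_user_based(target=0))      # item 2 (unrated by user 0) ranks first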


2. Content-Based Filtering
Content-based recommendation systems rely on item attributes and user preferences. They
use keywords or features of items (e.g., genre, actors in a movie) to recommend items
similar to those the user has engaged with.

Pros: Does not rely on other users' interaction data.
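
A minimal sketch of the idea, assuming each item is described by a few binary genre
features and the user profile is simply the average of the liked items (movie names and
tags are illustrative):

import numpy as np

# Item feature vectors: columns = [action, comedy, drama] (illustrative tags).
items = {
    "Movie A": np.array([1, 0, 1], dtype=float),
    "Movie B": np.array([1, 1, 0], dtype=float),
    "Movie C": np.array([0, 1, 1], dtype=float),
    "Movie D": np.array([1, 0, 0], dtype=float),
}
liked = ["Movie A", "Movie D"]          # items the user has engaged with

# Build the user profile as the average feature vector of liked items.
profile = np.mean([items[name] for name in liked], axis=0)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

# Rank unseen items by similarity of their features to the profile.
candidates = [(name, cosine(profile, vec))
              for name, vec in items.items() if name not in liked]
for name, score in sorted(candidates, key=lambda pair: pair[1], reverse=True):
    print(name, round(score, 2))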

3. Hybrid Models
Hybrid recommendation systems combine collaborative filtering and content-based
filtering to leverage the strengths of both. For instance, Netflix and Amazon use hybrid
models, mixing user and item-based filtering with content-based approaches.

Pros: Better accuracy and can handle cold start problems more effectively.
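
One simple way to combine the two signals is a weighted blend, sketched below; the weight
and the cold-start fallback are illustrative choices, not the actual Netflix or Amazon
method:

def hybrid_score(cf_score, content_score, alpha=0.7):
    # Weighted blend: alpha controls how much the collaborative signal dominates.
    # For a brand-new item with no interaction data, fall back to the content
    # score alone, which is one simple way to soften the cold-start problem.
    if cf_score is None:
        return content_score
    return alpha * cf_score + (1 - alpha) * content_score

print(round(hybrid_score(0.8, 0.4), 2))   # 0.68
print(hybrid_score(None, 0.4))            # cold-start item -> 0.4
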
4. Matrix Factorization
Matrix factorization is a powerful technique based on linear algebra. The most well-known
example is Singular Value Decomposition (SVD). It breaks down a large user-item
interaction matrix into smaller factor matrices, capturing latent (hidden) patterns.

Pros: Effective at making recommendations from large datasets with sparse user
interaction.
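
A minimal sketch using NumPy's SVD on a toy rating matrix, keeping only k latent factors
and reading the reconstructed matrix as predicted scores (in practice, missing entries are
handled with regularized factorization rather than treating 0 as a rating):

import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Full decomposition: ratings = U * diag(s) * Vt
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

k = 2                                    # keep only k latent factors
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction fills the zero (unobserved) cells with scores
# driven by the latent patterns; those scores can be read as predictions.
print(np.round(approx, 2))
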
5. Deep Learning Models
Recently, deep learning models such as neural collaborative filtering, autoencoders, and
recurrent neural networks (RNNs) have been applied to recommendation systems. These
models can learn complex patterns from high-dimensional data.

Pros: High scalability and capacity to handle complex data interactions.
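
As a toy sketch of the embedding idea behind neural collaborative filtering, the code
below learns user and item vectors whose dot product approximates observed ratings, using
plain NumPy SGD in place of a real deep-learning framework; a neural model would add
nonlinear layers on top of these embeddings:

import numpy as np

rng = np.random.default_rng(0)
# (user, item, rating) training triples -- illustrative data.
data = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 2, 1), (2, 2, 5), (2, 3, 4)]
n_users, n_items, dim = 3, 4, 4

P = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

lr = 0.05
for epoch in range(200):
    for u, i, r in data:
        err = r - P[u] @ Q[i]            # prediction error for this rating
        P[u] += lr * err * Q[i]          # gradient step on the user vector
        Q[i] += lr * err * P[u]          # gradient step on the item vector

print(round(float(P[0] @ Q[0]), 2))      # should move close to the observed 5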

6. Knowledge-based Systems
Knowledge-based recommendation systems rely on domain-specific knowledge to suggest
items. They work well when user preferences are known and the system can utilize external
information like rules, constraints, or ontologies to provide recommendations.

Pros: Effective in cases where there is limited user history or for unique items (e.g., travel
recommendations).
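
A minimal constraint-based sketch for a travel scenario: the user states hard requirements
and the system filters a small, hand-built knowledge base against them (all destinations
and attributes are invented):

# Tiny, hand-built knowledge base of destinations with domain attributes.
destinations = [
    {"name": "Beach Resort",  "climate": "warm", "budget": "high",   "family": True},
    {"name": "Mountain Camp", "climate": "cold", "budget": "low",    "family": True},
    {"name": "City Break",    "climate": "mild", "budget": "medium", "family": False},
]

def recommend(requirements):
    # Keep only the items that satisfy every stated constraint.
    return [d["name"] for d in destinations
            if all(d.get(key) == value for key, value in requirements.items())]

print(recommend({"climate": "warm", "family": True}))   # ['Beach Resort']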

7. Association Rule Mining


In this technique, association rules are generated by analyzing patterns in user behavior. For
example, the system can recommend products frequently bought together based on historical
transaction data.

Pros: Simple to implement and interpret.
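
A minimal sketch that counts item co-occurrence across transactions and prints rules
meeting support and confidence thresholds; the baskets and thresholds are illustrative,
and a full Apriori or FP-Growth implementation would handle larger itemsets:

from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

item_count = Counter()
pair_count = Counter()
for basket in transactions:
    item_count.update(basket)
    pair_count.update(combinations(sorted(basket), 2))

min_support, min_confidence = 0.5, 0.6
n = len(transactions)
for (a, b), count in pair_count.items():
    support = count / n                      # how often the pair occurs
    confidence = count / item_count[a]       # P(b | a) for the rule a -> b
    if support >= min_support and confidence >= min_confidence:
        print(f"{a} -> {b}  support={support:.2f} confidence={confidence:.2f}")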


8. Context-Aware Recommendation
This approach considers the context of the user’s interaction, such as location, time,
weather, or the current device being used, to refine recommendations. For example, a
user might get different recommendations for books while on vacation versus at home.

Pros: Provides personalized and highly relevant suggestions based on the current
situation.
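
A minimal sketch in which a base recommendation list is re-ranked by a context-dependent
boost; the catalogue, contexts, and boost values are invented for illustration:

# Base scores from any underlying recommender (illustrative values).
base_scores = {"beach novel": 0.60, "work handbook": 0.70, "city guide": 0.55}

# Context-dependent boosts: a book is promoted when it fits the situation.
context_boost = {
    "vacation": {"beach novel": 0.30, "city guide": 0.20},
    "home":     {"work handbook": 0.20},
}

def recommend(context):
    boosts = context_boost.get(context, {})
    scored = {item: score + boosts.get(item, 0.0)
              for item, score in base_scores.items()}
    return sorted(scored, key=scored.get, reverse=True)

print(recommend("vacation"))   # the beach novel rises to the top
print(recommend("home"))       # the work handbook stays on top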

9. Graph-Based Techniques
Graph-based methods represent the relationships between users, items, and other
entities as a graph, then apply algorithms like random walks or PageRank to
recommend items based on these relationships.

Pros: Effective at capturing complex relationships in the data.
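
A minimal random-walk sketch on a user-item bipartite graph: walks start at the target
user, and the unseen items visited most often are recommended (the graph and walk
settings are illustrative):

import random

# Bipartite graph as adjacency lists: users link to items they interacted with.
graph = {
    "u1": ["i1", "i2"], "u2": ["i2", "i3"], "u3": ["i3", "i4"],
    "i1": ["u1"], "i2": ["u1", "u2"], "i3": ["u2", "u3"], "i4": ["u3"],
}

def random_walk_recommend(user, walks=2000, length=6, seed=42):
    random.seed(seed)
    visits = {}
    for _ in range(walks):
        node = user
        for _ in range(length):
            node = random.choice(graph[node])
            if node.startswith("i") and node not in graph[user]:
                visits[node] = visits.get(node, 0) + 1   # unseen item reached
    # Items visited most often by the walks are recommended first.
    return sorted(visits, key=visits.get, reverse=True)

print(random_walk_recommend("u1"))   # i3 (closer in the graph) ranks ahead of i4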

10. Bandit Algorithms


Multi-armed bandit algorithms are used to balance exploration (finding new items) and
exploitation (using known preferences). These algorithms are useful when user
preferences evolve, and they can be applied in online recommendation systems for
real-time learning.
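
A minimal epsilon-greedy sketch: each candidate item is an arm, simulated clicks are the
reward, and the policy mixes random exploration with exploiting the best estimate so far
(the click rates are invented):

import random

random.seed(1)
true_click_rate = {"item_a": 0.10, "item_b": 0.05, "item_c": 0.20}  # unknown to the system
counts = {item: 0 for item in true_click_rate}
rewards = {item: 0.0 for item in true_click_rate}
epsilon = 0.1

def choose():
    # Explore a random item with probability epsilon, otherwise exploit the
    # item with the best estimated click rate (untried items start optimistic).
    if random.random() < epsilon:
        return random.choice(list(true_click_rate))
    return max(counts, key=lambda i: rewards[i] / counts[i] if counts[i] else 1.0)

for _ in range(5000):
    item = choose()
    reward = 1.0 if random.random() < true_click_rate[item] else 0.0
    counts[item] += 1
    rewards[item] += reward

print(max(counts, key=counts.get))   # typically item_c, the best-performing arm
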
KNN ALGORITHM STEPS

Step 1: Selecting the optimal value of K, which represents the number of nearest
neighbors to be considered while making a prediction.

Step 2: Calculating Distance

To measure the similarity between the target point and the training data points,
Euclidean distance is used. The distance is calculated between each data point in the
dataset and the target point.

Step 3: Finding Nearest Neighbors.


The k data points with the smallest distances to the target point are the
nearest neighbors.

Step 4: Voting for Classification or Taking the Average for Regression

For classification, the target point is assigned the majority class among its k nearest
neighbors; for regression, the prediction is the average of the neighbors' target values.


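A minimal NumPy sketch of the four steps: choose k, compute Euclidean distances, take the
k nearest points, then vote for classification (or average for regression). The toy
points and labels are illustrative:

import numpy as np
from collections import Counter

# Toy training data: 2-D points with class labels.
X_train = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 0.5]])
y_train = np.array(["A", "A", "B", "B", "A"])
target = np.array([1.4, 1.6])

k = 3                                           # Step 1: choose k

# Step 2: Euclidean distance from the target to every training point.
distances = np.linalg.norm(X_train - target, axis=1)

# Step 3: indices of the k training points closest to the target.
nearest = np.argsort(distances)[:k]

# Step 4: majority vote for classification; for regression the prediction
# would instead be the mean of the neighbors' numeric target values.
predicted_class = Counter(y_train[nearest]).most_common(1)[0][0]
print(predicted_class)                          # "A"
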
K-NEAREST NEIGHBOR ALGORITHM PROBLEM
THANK YOU…
