OBURU SIMON (ARTIFICIAL INTELLIGENCE COURSEWORK)


KAMPALA UNIVERSITY

NAME : OBURU SIMON

REG NO. : 22FE/KUJ/BCSIT/985W

COURSE : BCSIT

COURSE UNIT : ARTIFICIAL INTELLIGENCE

COURSE CODE : BCIT3203

YEAR : THREE

SEMESTER : TWO

LECTURER : MADAM ELIZABETH MUTEYANJULA

QUESTIONS
1. Discuss the features of Artificial Intelligence.
2. Discuss the advancements of Artificial Intelligence.

ANSWERS
1. Discuss the features of Artificial Intelligence.
Artificial Intelligence (AI) refers to the development of computer systems that can
perform tasks typically requiring human intelligence. These tasks range from
problem-solving and decision-making to understanding natural language and
recognizing patterns. AI is not just a futuristic concept—it is already part of our
daily lives, powering technologies like virtual assistants, recommendation
algorithms, and facial recognition systems.
Features of Artificial Intelligence

1. Eliminate Dull & Boring Tasks

One of the most impactful features of Artificial Intelligence is its ability to automate repetitive and tedious tasks. AI-powered systems can handle tasks that require little to no creativity or critical thinking, freeing up human workers to focus on more complex and meaningful work. This capability increases efficiency and reduces errors in routine jobs.

For example:

 Data entry: AI can automatically input and organize data from multiple sources without human intervention.
 Scheduling: AI assistants can manage appointments and send reminders, reducing the need for manual scheduling.
 Manufacturing: In factories, robots powered by AI can take over assembly line jobs, ensuring consistent quality and speed.
Code Example

Here is a simple Python example that automates data entry, using the openpyxl library to create an Excel file and write rows into it:

from openpyxl import Workbook

# Create a new Excel workbook
workbook = Workbook()

# Select the active sheet
sheet = workbook.active

# Example data to automate entry
data = [
    ('Name', 'Age', 'Occupation'),
    ('Alice', 30, 'Engineer'),
    ('Bob', 25, 'Designer'),
    ('Charlie', 35, 'Teacher')
]

# Write each row of data into the sheet
for row in data:
    sheet.append(row)

# Save the workbook
workbook.save('output_data.xlsx')

print("Data entry automated successfully!")


This simple code automates the process of inputting data into an Excel file, which
could otherwise be done manually. AI-based systems can extend this by handling
much larger datasets and performing error-checking, validation, and more.

2. Deep Learning

Deep Learning is a subset of AI inspired by the way the human brain processes
information. It uses neural networks, which are layers of algorithms designed to
recognize patterns and make decisions based on data. Deep learning is responsible
for many of the most advanced AI applications, such as image recognition, speech
processing, and natural language understanding.

For example, in image recognition, deep learning algorithms can identify objects in
pictures by analyzing thousands of examples and learning to detect specific
patterns. This process allows AI systems to perform highly complex tasks with
incredible accuracy.

Code Example

Here’s a simple example using the popular TensorFlow library in Python to build a
basic neural network for image recognition:

import tensorflow as tf
from tensorflow.keras import layers, models

# Build a simple CNN model for image classification
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile the model with an optimizer, loss function, and metric
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# View the model summary
model.summary()

print("Deep learning model ready for training.")

3. Data Ingestion

Data ingestion is the process by which AI systems collect and process large
amounts of data from various sources. AI systems rely on vast datasets to make
accurate predictions and decisions. This feature allows AI to gather data from
multiple sources such as sensors, databases, social media, or user interactions, and
use it to improve its performance.

AI systems are designed to handle both structured and unstructured data, making them highly versatile in a variety of applications, from healthcare to retail.

Code Example

Here’s a simple Python example using the pandas library to ingest data from a
CSV file:

import pandas as pd

# Load data from a CSV file
data = pd.read_csv('sales_data.csv')

# Preview the first few rows of the dataset
print(data.head())

# Perform some basic data processing:
# filter sales data for a specific product
filtered_data = data[data['product'] == 'Laptop']

# Display the filtered data
print(filtered_data)

In this example, the AI system reads and processes data from a CSV file, filtering
it based on specific conditions. This is just one of many ways AI can ingest and
analyze data to drive decision-making.

4. Imitates Human Cognition


Artificial Intelligence can mimic certain human cognitive abilities like learning,
reasoning, and problem-solving, though it does so in a limited capacity. While AI
can analyze data, make decisions, and learn from experiences (through machine
learning), it doesn’t possess true understanding or consciousness. AI systems use
algorithms to simulate these capabilities for specific tasks but don’t achieve
general intelligence like humans.

For example:

 Learning: AI systems can learn patterns from data and improve over time,
such as in predictive text or personalized recommendations.
 Reasoning: AI can solve complex problems by breaking them down into
smaller, manageable tasks, like how AI helps optimize logistics routes.
 Problem-solving: AI can simulate problem-solving by running through
various possible scenarios, like in chess or strategic games.
However, AI remains task-specific and doesn’t possess true emotional intelligence
or consciousness.
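
Code Example

To make the learning ability above concrete, here is a minimal sketch (using the scikit-learn library and made-up example data, both assumptions for illustration) of a model that learns a pattern from past cases and applies it to a new, unseen one:

from sklearn.tree import DecisionTreeClassifier

# Hypothetical past data: [hours studied, hours slept] -> passed (1) or failed (0)
X = [[2, 4], [8, 7], [1, 5], [9, 8], [5, 6], [3, 3]]
y = [0, 1, 0, 1, 1, 0]

# The model learns a decision rule from past examples
model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# ...and applies what it learned to a new, unseen case
print(model.predict([[6, 7]]))

This mirrors, on a tiny scale, how AI systems improve predictions by learning from data rather than following hand-written rules.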

5. Not Futuristic (Widespread Applications)

AI is no longer just a futuristic concept; it is deeply embedded in various aspects of our daily lives and across industries. Today, AI is used in a wide range of real-world applications that have tangible impacts.

Examples of AI applications include:

 Healthcare: AI assists doctors with medical diagnoses, treatment recommendations, and even drug discovery.
 Finance: AI helps detect fraud, manage financial portfolios, and provide
customer support via chatbots.
 Transportation: Self-driving cars are being tested with AI to navigate
traffic, avoid obstacles, and improve safety.
These real-world applications demonstrate that AI is rapidly evolving and already
shaping how industries operate, making it a critical technology today.

6. Prevent Natural Disasters

AI plays an important role in monitoring environmental changes and assessing potential risks, though it cannot directly prevent natural disasters. Instead, AI helps in predicting and mitigating the impact of these events through advanced analysis and data monitoring. For instance:

 Weather Prediction: AI models can analyze historical and real-time data to predict extreme weather events like hurricanes and floods with greater accuracy.
 Sensor Data Analysis: AI processes data from sensors placed in high-risk
areas, providing early warning systems for disasters like earthquakes or
tsunamis.
 Emergency Response Optimization: AI helps coordinate faster and more
effective emergency responses by analyzing real-time data, optimizing
resource allocation, and predicting the areas most affected.
While AI enhances disaster preparedness, it complements rather than replaces
traditional systems of disaster prevention and response.
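
Code Example

To make the sensor-data idea concrete, here is a minimal sketch (using pandas with made-up sensor readings; the simple threshold rule is an illustrative assumption, not a real early-warning algorithm) that flags unusual spikes against a rolling average of recent values:

import pandas as pd

# Hypothetical sensor readings (e.g., ground-vibration levels); one spike at index 5
readings = pd.Series([0.2, 0.3, 0.25, 0.28, 0.3, 2.5, 0.27, 0.26])

# Average of the previous three readings (shifted so a spike cannot mask itself)
baseline = readings.shift(1).rolling(window=3, min_periods=1).mean()

# Flag readings far above the recent baseline
alerts = readings[readings > baseline * 3]

for index, value in alerts.items():
    print(f"ALERT: unusual reading {value} at position {index}")

Real early-warning systems apply far more sophisticated models to the same basic idea: compare incoming sensor data against expected patterns and raise alerts on anomalies.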

7. Cloud Computing

Cloud computing plays a pivotal role in AI's ability to function at scale. AI applications often require vast amounts of processing power and data storage, which cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud provide. These platforms allow businesses and developers to access scalable resources without investing in expensive infrastructure.

Benefits of cloud computing for AI:

 Processing Power: Cloud platforms offer high-performance computing resources required for tasks like training large AI models.
 Storage: Massive amounts of data used by AI systems can be easily stored
and accessed through cloud solutions.
 Accessibility: Cloud computing democratizes access to AI technologies,
allowing smaller businesses to leverage AI without needing their own data
centers.
By utilizing cloud computing, AI becomes more efficient, accessible, and scalable,
driving innovation across industries.
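
Code Example

As a small illustration, here is a minimal sketch (assuming the boto3 library, AWS credentials already configured on the machine, and a hypothetical bucket name) of storing a trained model file in cloud storage:

import boto3

# Connect to the Amazon S3 storage service
s3 = boto3.client('s3')

# Upload a locally saved model file to a cloud bucket
# ('my-ai-models' is a hypothetical bucket name)
s3.upload_file('trained_model.h5', 'my-ai-models', 'models/trained_model.h5')

print("Model uploaded to cloud storage.")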

8. Facial Recognition & Chatbots

8.1. Facial Recognition


Facial recognition is a feature of AI that uses deep learning algorithms to identify
or verify a person’s identity based on facial features from images or videos. This
technology is commonly used in security systems, smartphones, and social media
applications for tasks like unlocking devices, tagging photos, or enhancing
security.

Code Example for Facial Recognition

Here’s a basic example of how to implement facial recognition using Python’s face_recognition library:

import face_recognition

# Load an image with a face
image = face_recognition.load_image_file("person.jpg")

# Find all face locations in the image
face_locations = face_recognition.face_locations(image)

# Display the number of faces found
print(f"Found {len(face_locations)} face(s) in this image.")


This code loads an image and detects the locations of any faces within it.
Advanced AI systems use similar techniques for real-time facial recognition in
security applications.

8.2. Chatbots

Chatbots are AI-powered conversational agents designed to simulate human conversations. They can answer customer queries, guide users through processes, or provide recommendations in real time. Chatbots are widely used in customer service, marketing, and even for casual interactions in apps and websites.

Code Example for Chatbots

Here’s an example of a simple chatbot using Python’s ChatterBot library:

from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

# Create a chatbot instance
chatbot = ChatBot('CustomerSupportBot')

# Train the chatbot with English language data
trainer = ChatterBotCorpusTrainer(chatbot)
trainer.train("chatterbot.corpus.english")

# Get a response to an input query
response = chatbot.get_response("Hello, how can I help you?")
print(response)

This basic chatbot can respond to user inputs with predefined responses, simulating
human-like conversations. More advanced chatbots can handle complex queries
using natural language processing (NLP).

2. Discuss the advancements of Artificial Intelligence.

Artificial Intelligence (AI) has emerged as one of the most transformative technologies of the 21st century, shaping industries and revolutionizing the way we live, work, and interact. AI has come a long way from its nascent stages to the sophisticated systems we see today. This section explores the rapid technological advancement of AI, its current state, and the potential future developments that promise to reshape our world even further.
The Evolution of AI

The history of AI can be traced back to the mid-20th century, when the field was in its infancy. Early AI systems were rule-based and primarily used for solving mathematical problems. The development of AI was marked by significant milestones, such as the creation of the first AI program, the invention of neural networks, and the introduction of expert systems.

One of the key turning points in AI was the introduction of machine learning, a subset of AI that focuses on creating algorithms that can learn and improve from experience. Machine learning, particularly deep learning, became a game-changer in the advancement of AI, enabling systems to analyze vast amounts of data and make complex decisions with a level of accuracy that was previously unimaginable.
Technological Advancements in AI

1. Deep Learning: Deep learning, a subfield of machine learning, has played a pivotal role in AI's technological advancement. Deep neural networks, inspired by the human brain, have the ability to process and analyze data with unprecedented accuracy. This technology has been crucial in applications like image and speech recognition, natural language processing, and autonomous vehicles.

2. Natural Language Processing (NLP): NLP has made it possible for AI systems to understand and interact with human language. This breakthrough has led to the development of virtual assistants like Siri and chatbots, which have become an integral part of our daily lives.
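
As a quick illustration, here is a minimal sketch (assuming the Hugging Face transformers library, which downloads a default English sentiment model on first use) of NLP in action:

from transformers import pipeline

# Load a pre-trained sentiment-analysis model
classifier = pipeline('sentiment-analysis')

# Analyze the sentiment of a sentence
result = classifier("I love how helpful this virtual assistant is!")
print(result)  # e.g., [{'label': 'POSITIVE', 'score': 0.99...}]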

3. Computer Vision: AI's ability to process visual information has resulted in advancements in computer vision. These technologies are now used in applications such as facial recognition, medical image analysis, and self-driving cars, making our world safer and more efficient.
4. Reinforcement Learning: Reinforcement learning is another
area of AI that has advanced significantly. It has enabled
machines to learn from their actions in an environment, making
them capable of playing complex games, optimizing resource
allocation, and even controlling robots.
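
To make this concrete, here is a minimal sketch (a toy example in plain Python, not taken from any specific library) of the Q-learning update rule, in which an agent learns by trial and error that moving right leads to a reward:

import random

# A tiny 1-D world: states 0..4, where state 4 is the goal
# Actions: 0 = move left, 1 = move right
q_table = [[0.0, 0.0] for _ in range(5)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != 4:
        # Explore sometimes, otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = q_table[state].index(max(q_table[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == 4 else 0.0
        # Q-learning update rule
        q_table[state][action] += alpha * (
            reward + gamma * max(q_table[next_state]) - q_table[state][action]
        )
        state = next_state

# After training, the best action in every state should be "move right"
print([row.index(max(row)) for row in q_table[:4]])  # typically prints [1, 1, 1, 1]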

5. Generative Adversarial Networks (GANs): GANs are neural networks that have gained widespread attention for their ability to generate realistic content, such as images, videos, and text. They are employed in creative fields, including art, design, and entertainment.

6. Underlying Technologies

In the last five years, the field of AI has made major progress in almost all its standard sub-areas, including vision, speech recognition and generation, natural language processing (understanding and generation), image and video generation, multi-agent systems, planning, decision-making, and integration of vision and motor control for robotics. In addition, breakthrough applications emerged in a variety of domains including games, medical diagnosis, logistics systems, autonomous driving, language translation, and interactive personal assistance. The sections that follow provide examples of many salient developments.

People are using AI more today to dictate to their phone, get recommendations for shopping, news, or entertainment, enhance their backgrounds on conference calls, and so much more. The core technology behind the most visible advances is machine learning, especially deep learning (including generative adversarial networks, or GANs) and reinforcement learning powered by large-scale data and computing resources. GANs are a major breakthrough, endowing deep networks with the ability to produce artificial content such as fake images that pass for the real thing. GANs consist of two interlocked components: a generator, responsible for creating realistic content, and a discriminator, tasked with distinguishing the output of the generator from naturally occurring content. The two learn from each other, becoming better and better at their respective tasks over time. One practical application is GAN-based medical-image augmentation, in which artificial images are produced automatically to expand the data set used to train networks that produce diagnoses.

Recognition of the remarkable power of deep learning has been steadily growing over the last decade. Recent studies have begun to uncover why and under what conditions deep learning works well. In the past ten years, machine-learning technologies have moved from the academic realm into the real world in a multitude of ways that are both promising and concerning.
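
To illustrate the generator/discriminator structure described above, here is a minimal sketch (assuming TensorFlow; the two networks are untrained toy models, not a working GAN training loop):

import tensorflow as tf
from tensorflow.keras import layers, models

# Generator: turns random noise into a fake 28x28 "image"
generator = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(100,)),
    layers.Dense(28 * 28, activation='tanh'),
    layers.Reshape((28, 28, 1))
])

# Discriminator: judges whether an image is real or generated
discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Generate one fake image from random noise and ask the discriminator about it
noise = tf.random.normal([1, 100])
fake_image = generator(noise)
print("Discriminator's guess that the fake is real:",
      float(discriminator(fake_image)))

During training, the two networks would be optimized against each other: the generator to fool the discriminator, and the discriminator to catch the fakes.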

7. Language Processing

Language processing technology made a major leap in the last five years,
leading to the development of network architectures with enhanced
capability to learn from complex and context-sensitive data. These
advances have been supported by ever-increasing data resources and
computing power.

Of particular note are neural network language models, including ELMo, GPT, mT5, and BERT. These models learn how words are used in context, including elements of grammar, meaning, and basic facts about the world, by sifting through the patterns in naturally occurring text. They consist of billions of tunable parameters and are engineered to process unprecedented quantities of data (over one trillion words for GPT-3, for example). By stringing together likely sequences of words, several of these models can generate passages of text that are often indistinguishable from human-generated text, including news stories, poems, fiction, and even computer code. Performance on question-answering benchmarks (large quizzes with questions like “Where was Beyoncé born?”) has reached superhuman levels, although the models that achieve this level of proficiency exploit spurious correlations in the benchmarks and exhibit a level of competence on naturally occurring questions that is still well below that of human beings.
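
As a small illustration of this kind of word-by-word text generation, here is a minimal sketch (assuming the Hugging Face transformers library and the publicly available GPT-2 model, which is downloaded on first use):

from transformers import pipeline

# Load a pre-trained text-generation model (GPT-2)
generator = pipeline('text-generation', model='gpt2')

# Generate a short continuation of a prompt
output = generator("Artificial intelligence is transforming",
                   max_length=30, num_return_sequences=1)
print(output[0]['generated_text'])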

These models’ facility with language is already supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Future applications could include improving human-AI interactions across diverse languages and situations. Current challenges include how to obtain quality data for languages used by smaller populations, and how to detect and remove biases in their behavior. In addition, it is worth noting that the models themselves do not exhibit deep understanding of the texts that they process, fundamentally limiting their utility in many sensitive applications. Part of the art of using these models, to date, is in finding scenarios where their incomplete mastery still provides value.

Related to language processing is the tremendous growth in conversational interfaces over the past five years. The near ubiquity of voice-control systems like Google Assistant, Siri, and Alexa is a consequence of both improvements on the voice-recognition side, powered by the AI advances discussed above, and improvements in how information is organized and integrated for voice-based delivery. Google Duplex, a conversational interface that can call businesses to make restaurant reservations and appointments, was rolled out in 2018 and received mixed initial reviews due to its impressive engineering but off-putting system design.

8. Computer Vision and Image Processing

Image-processing technology is now widespread, finding uses ranging from video-conference backgrounds to the photo-realistic images known as deepfakes. Many image-processing approaches use deep learning for recognition, classification, conversion, and other tasks. Training time for image processing has been substantially reduced. Programs running on ImageNet, a massive standardized collection of over 14 million photographs used to train and test visual identification programs, complete their work 100 times faster than just three years ago.

Real-time object-detection systems such as YOLO (You Only Look Once), which notice important objects when they appear in an image, are widely used for video surveillance of crowds and are important for mobile robots, including self-driving cars. Face-recognition technology has also improved significantly over the last five years, and now some smartphones and even office buildings rely on it to control access. In China, facial-recognition technology is used widely in society, from security to payment, although there are very recent moves to pull back on the broad deployment of this technology. Of course, while facial-recognition technology can be a powerful tool to improve efficiency and safety, it raises issues around bias and privacy. Some companies have suspended providing face-recognition services. And, in fact, the creator of YOLO has said that he no longer works on the technology because “the military applications and privacy concerns became impossible to ignore.”
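
As an illustrative sketch of detection in this style, here is a minimal example (assuming the third-party ultralytics package, its pre-trained YOLOv8 weights that are downloaded automatically, and a hypothetical image file):

from ultralytics import YOLO

# Load a small pre-trained YOLO object-detection model
model = YOLO('yolov8n.pt')

# Detect objects in an image file ('street_scene.jpg' is a placeholder)
results = model('street_scene.jpg')

# Print the class name of each detected object
for result in results:
    for box in result.boxes:
        print(model.names[int(box.cls)])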

It is now possible to generate photorealistic images and even videos using GANs. Sophisticated image-processing systems enhanced by deep learning let users seamlessly replace existing images with new ones, such as inserting someone into a video of an event they did not attend. While such modifications could be carried out by skilled artists decades ago, AI automation has substantially lowered the barriers. These so-called deepfakes are being used in illicit activity such as “revenge porn,” in which an attacker creates artificial sexual content featuring a specific victim, and identity theft, in which a profile of a non-existent person is generated and used to gain access to services, and they have spurred research into improving automatic detection of deepfake images.

Caption: The GAN technology for generating images and the transformer technology for producing text can be integrated in various ways. These images were produced by OpenAI's “DALL-E” given the prompt: “a stained glass window with an image of a blue strawberry.” A similar query to a web-based image search produces blue strawberries, blue stained-glass windows, or stained-glass windows with red strawberries, suggesting that the system is not merely retrieving relevant images but producing novel combinations of visual features. From: https://openai.com/blog/dall-e/

9. Games

Developing algorithms for games and simulations in adversarial situations has long been a fertile training ground and a showcase for the advancement of AI techniques. DeepMind's application of deep networks to Atari video games and the game of Go around 2015 helped bring deep learning to wide public prominence, and the last five years have seen significant additional progress. AI agents have now out-maneuvered their human counterparts in combat and multiplayer situations including the games StarCraft II, Quake III, and Alpha Dogfight, a US Defense Department-sponsored jet-fighter simulation, as well as classical games like poker.

The DeepMind team that developed AlphaGo went on to create AlphaGo Zero, which discarded the use of direct human guidance in the form of a large collection of data from past Go matches. Instead, it developed moves and tactics on its own, starting from scratch. This idea was further extended with AlphaZero, a single network architecture that could learn to play expert-level Go, Shogi, or Chess.

10. Robotics

The last five years have seen consistent progress in intelligent robotics
driven by machine learning, powerful computing and communication
capabilities, and increased availability of sophisticated sensor systems.
Although these systems are not fully able to take advantage of all the
advances in AI, primarily due to the physical constraints of the
environments, highly agile and dynamic robotics systems are now available
for home and industrial use. In industrial robotics, with the implementation
of deep-learning-based vision systems, manipulator-type robots—those
that grab things, as opposed to those that roll across the floor—can pick up
randomly placed overlapping objects at speeds that are practical for real-world applications.

Bipedal and four-legged robots continue to advance in agility. Atlas, a state-of-the-art humanoid robot built by Boston Dynamics, demonstrated the ability to jump, run, backflip, and maneuver uneven terrain, feats that were impossible for robots just a few years ago. Spot, a quadruped robot also from Boston Dynamics, also maneuvers through difficult environments and is being used on construction sites for delivery and monitoring of lightweight materials and tools. It is worth noting, however, that these systems are built using a combination of learning techniques honed in the last several years, classical control theory akin to that used in autopilots, and painstaking engineering and design. Cassie, a biped robot developed by Agility Robotics and Oregon State University, uses deep reinforcement learning for its walking and running behaviors. Whereas deployment of AI in user-facing vision and language technologies is now commonplace, most types of robotics systems remain lab-bound.

During 2020, robotics development was driven in part by the need to support social distancing during the COVID-19 pandemic. A group of restaurants opened in China staffed by a team of 20 robots to help cook and serve food. Some early delivery robots were deployed on controlled campuses to carry books and food. A diverse collection of companies worldwide are actively pursuing business opportunities in autonomous delivery systems for the last mile. While these types of robots are being increasingly used in the real world, they are by no means mainstream yet and are still prone to mistakes, especially when deployed in unmapped or novel environments. In Japan, a new legal framework is being discussed to ensure that autonomous robotics systems can be safely deployed on public roads at limited speeds.

The combination of deep learning with agile robotics is opening up new opportunities in industrial robotics as well. Leveraging improvements in vision, robotic grippers are beginning to be able to select and pick randomly placed objects and use them to construct stacks. Being able to pick up and put down diverse objects is a key competence in a variety of potential applications, from tidying up homes to preparing packages for shipping.

11. Mobility

Autonomous vehicles, or self-driving cars, have been one of the hottest areas in deployed robotics, as they impact the entire automobile industry as well as city planning. The design of self-driving cars requires integration of a range of technologies including sensor fusion, AI planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-vehicle communication, and more. Driver-assist systems are increasingly widespread in production vehicles. These systems use sensors and AI-based analysis to carry out tasks such as adaptive cruise control to safely adjust speed, and lane-keeping assistance to keep vehicles centered on the road.

The optimistic predictions from five years ago of rapid progress in fully
autonomous driving have failed to materialize. The reasons may be
complicated, but the need for exceptional levels of safety in complex
physical environments makes the problem more challenging, and more
expensive, to solve than had been anticipated. Nevertheless, autonomous
vehicles are now operating in certain locales such as Phoenix, Arizona,
where driving and weather conditions are particularly benign, and outside
Beijing, where 5G connectivity allows remote drivers to take over if needed.

12. Health

AI is increasingly being used in biomedical applications, particularly in diagnosis, drug discovery, and basic life science research.

Recent years have seen AI-based imaging technologies move from an academic pursuit to commercial projects. Tools now exist for identifying a variety of eye and skin disorders, detecting cancers, and supporting measurements needed for clinical diagnosis. Some of these systems rival the diagnostic abilities of expert pathologists and radiologists, and can help alleviate tedious tasks (for example, counting the number of cells dividing in cancer tissue). In other domains, however, the use of automated systems raises significant ethical concerns.

AI-based risk scoring in healthcare is also becoming more common. Predictors of health deterioration are now integrated into major health record platforms (for example, the EPIC Deterioration Index), and individual health centers are increasingly integrating AI-based risk predictions into their operations. Although some amount of bias is evident in these systems, they appear exceptionally promising for overall improvements in healthcare.

Beyond treatment support, AI now augments a number of other health operations and measurements, such as helping predict durations of surgeries to optimize scheduling, and identifying patients at risk of needing transfer to intensive care. There are technologies for digital medical transcription, for reading ECG systems, for producing super-resolution images to reduce the amount of time patients are in MRI machines, and for identifying questions for clinicians to ask pediatric patients. While current penetration is relatively low, we can expect to see uses of AI expand in this domain in the future; in many cases, these are applications of already-mature technologies in other areas of operations making their way into healthcare.

13. Finance

AI has been increasingly adopted into finance. Deep learning models now
partially automate lending decisions for several lenders and have
transformed payments with credit scoring, for example WeChat Pay. These
new systems often take advantage of consumer data that are not
traditionally used in credit scoring. In some cases, this approach can open
up credit to new groups of people; in others, it can be used to force people
to adopt specific social behaviors.

High-frequency trading relies on a combination of models as well as the ability to make fast decisions. In the space of personal finance, so-called robo-advising (automated financial advice) is quickly becoming mainstream for investment and overall financial planning. For financial institutions, uses of AI are going beyond detecting fraud and enhancing cybersecurity to automating legal and compliance documentation as well as detecting money laundering. The Government Pension Investment Fund (GPIF) of Japan, the world's largest pension fund, introduced a deep-learning-based system to monitor the investment styles of contracted fund managers and identify risk from unexpected changes in market conditions, known as regime switches. Such applications enable financial institutions to recognize otherwise invisible risks, contributing to more robust and stable asset-management practices.
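
As a small illustration of the fraud-detection idea mentioned above, here is a minimal sketch (assuming the scikit-learn library and made-up transaction amounts) that flags unusual transactions with an anomaly-detection model:

from sklearn.ensemble import IsolationForest

# Hypothetical transaction amounts; one is suspiciously large
transactions = [[25.0], [40.0], [32.5], [28.0], [5000.0], [35.0], [30.0]]

# Train an anomaly detector and label each point (-1 = anomaly, 1 = normal)
detector = IsolationForest(contamination=0.15, random_state=42)
labels = detector.fit_predict(transactions)

for amount, label in zip(transactions, labels):
    if label == -1:
        print(f"Suspicious transaction: {amount[0]}")

Production fraud systems work on the same principle but use many more features per transaction (time, location, merchant, history) and far larger models.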

14. Recommender Systems

With the explosion of information available to us, recommender systems that automatically prioritize what we see when we are online have become absolutely essential. Such systems have always drawn heavily on AI, and now they have a dramatic influence on people's consumption of products, services, and content: from news, to music, to videos, and more. Apart from a general trend toward more online activity and commerce, the AI technologies powering recommender systems have changed considerably in the past five years. One shift is the near-universal incorporation of deep neural networks to better predict user responses to recommendations. There has also been increased usage of sophisticated machine-learning techniques for analyzing the content of recommended items, rather than using only metadata and user click or consumption behavior. That is, AI systems are making more of an effort to understand why a specific item might be a good recommendation for a particular person or query. Examples include Spotify's use of audio analysis of music or the application of large language models such as BERT to improve recommendations of news or social media posts. Another trend is the modeling and prediction of multiple distinct user behaviors, instead of making recommendations for only one activity at a time, functionality facilitated by the use of so-called multi-task models. Of course, applying recommendation to multiple tasks simultaneously raises the challenging question of how best to make tradeoffs among these different objectives.

The use of ever-more-sophisticated machine-learned models for recommending products, services, and (especially) content has raised significant concerns about the issues of fairness, diversity, polarization, and the emergence of filter bubbles, where the recommender system suggests, for example, news stories that other people like you are reading instead of what is truly most important. While these problems require more than just technical solutions, increasing attention is being paid to technologies that can at least partly address such issues. Promising directions include research on the tradeoffs between popularity and diversity of content consumption, and fairness of recommendations among different users and other stakeholders (such as content providers or creators).
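
To illustrate the basic idea behind such systems, here is a minimal sketch (using NumPy with a made-up ratings matrix) of user-based collaborative filtering with cosine similarity:

import numpy as np

# Hypothetical user-item ratings (rows = users, columns = items; 0 = unrated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 5, 1],
    [1, 0, 5, 4],
])

def cosine_similarity(a, b):
    # Angle-based similarity between two rating vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the user most similar to user 0
similarities = [cosine_similarity(ratings[0], ratings[u]) for u in [1, 2]]
most_similar = [1, 2][int(np.argmax(similarities))]

# Recommend items the similar user liked that user 0 has not rated yet
for item in range(ratings.shape[1]):
    if ratings[0, item] == 0 and ratings[most_similar, item] >= 4:
        print(f"Recommend item {item} to user 0")  # prints: Recommend item 2 to user 0

Modern recommenders replace this simple similarity computation with deep neural networks, but the underlying goal of predicting what a user will like from the behavior of similar users remains the same.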

o-END-o
