AI 900 Documentation


AI-900: Microsoft Azure AI Fundamentals

Introduction to AI
AI enables us to build amazing software that can improve health care, enable people to
overcome physical disadvantages, empower smart infrastructure, create incredible
entertainment experiences, and even save the planet!

Watch the following video to see some ways that AI can be used.

What is AI?

Simply put, AI is the creation of software that imitates human behaviors and capabilities. Key
workloads include:

• Machine learning - This is often the foundation for an AI system, and is the way we "teach" a computer model to make predictions and draw conclusions from data.
• Anomaly detection - The capability to automatically detect errors or unusual activity in a
system.
• Computer vision - The capability of software to interpret the world visually through
cameras, video, and images.
• Natural language processing - The capability for a computer to interpret written or spoken
language, and respond in kind.
• Knowledge mining - The capability to extract information from large volumes of often
unstructured data to create a searchable knowledge store.

Understand machine learning


Machine Learning is the foundation for most AI solutions.

Let's start by looking at a real-world example of how machine learning can be used to solve a
difficult problem.

Sustainable farming techniques are essential to maximize food production while protecting a
fragile environment. The Yield, an agricultural technology company based in Australia, uses
sensors, data and machine learning to help farmers make informed decisions related to
weather, soil and plant conditions.

View the following video to learn more.

You can find out more about how The Yield is using machine learning to feed the world without wrecking the planet.

How machine learning works

So how do machines learn?

The answer is, from data. In today's world, we create huge volumes of data as we go about
our everyday lives. From the text messages, emails, and social media posts we send to the
photographs and videos we take on our phones, we generate massive amounts of information.
More data still is created by millions of sensors in our homes, cars, cities, public transport
infrastructure, and factories.
Data scientists can use all of that data to train machine learning models that can make
predictions and inferences based on the relationships they find in the data.

For example, suppose an environmental conservation organization wants volunteers to identify and catalog different species of wildflower using a phone app. The following animation shows how machine learning can be used to enable this scenario.

1. A team of botanists and scientists collects data on wildflower samples.
2. The team labels the samples with the correct species.
3. The labeled data is processed using an algorithm that finds relationships between the
features of the samples and the labeled species.
4. The results of the algorithm are encapsulated in a model.
5. When new samples are found by volunteers, the model can identify the correct species
label.

Machine learning in Microsoft Azure

Microsoft Azure provides the Azure Machine Learning service - a cloud-based platform for
creating, managing, and publishing machine learning models. Azure Machine Learning
provides the following features and capabilities:

• Automated machine learning - This feature enables non-experts to quickly create an effective machine learning model from data.
• Azure Machine Learning designer - A graphical interface enabling no-code development of machine learning solutions.
• Data and compute management - Cloud-based data storage and compute resources that professional data scientists can use to run data experiment code at scale.
• Pipelines - Data scientists, software engineers, and IT operations professionals can define pipelines to orchestrate model training, deployment, and management tasks.

Understand anomaly detection


Imagine you're creating a software system to monitor credit card transactions and detect
unusual usage patterns that might indicate fraud. Or an application that tracks activity in an
automated production line and identifies failures. Or a racing car telemetry system that uses
sensors to proactively warn engineers about potential mechanical failures before they happen.

These kinds of scenarios can be addressed by using anomaly detection - a machine learning based technique that analyzes data over time and identifies unusual changes.

Let's explore how anomaly detection might help in the racing car scenario.

1. Sensors in the car collect telemetry, such as engine revolutions, brake temperature, and so
on.
2. An anomaly detection model is trained to understand expected fluctuations in the telemetry
measurements over time.
3. If a measurement occurs outside of the normal expected range, the model reports an
anomaly that can be used to alert the race engineer to call the driver in for a pit stop to fix
the issue before it forces retirement from the race.

Anomaly detection in Microsoft Azure

In Microsoft Azure, the Anomaly Detector service provides an application programming interface (API) that developers can use to create anomaly detection solutions.

Understand computer vision


Computer Vision is an area of AI that deals with visual processing. Let's explore some of the
possibilities that computer vision brings.

The Seeing AI app is a great example of the power of computer vision. Designed for the
blind and low vision community, the Seeing AI app harnesses the power of AI to open up the
visual world and describe nearby people, text and objects.

View the following video to learn more about Seeing AI.

To find out more, check out the Seeing AI web page.

Computer Vision models and capabilities

Most computer vision solutions are based on machine learning models that can be applied to
visual input from cameras, videos, or images. The following table describes common
computer vision tasks.

• Image classification - Image classification involves training a machine learning model to classify images based on their contents. For example, in a traffic monitoring solution you might use an image classification model to classify images based on the type of vehicle they contain, such as taxis, buses, cyclists, and so on.
• Object detection - Object detection machine learning models are trained to classify individual objects within an image, and identify their location with a bounding box. For example, a traffic monitoring solution might use object detection to identify the location of different classes of vehicle.
• Semantic segmentation - Semantic segmentation is an advanced machine learning technique in which individual pixels in the image are classified according to the object to which they belong. For example, a traffic monitoring solution might overlay traffic images with "mask" layers to highlight different vehicles using specific colors.
• Image analysis - You can create solutions that combine machine learning models with advanced image analysis techniques to extract information from images, including "tags" that could help catalog the image or even descriptive captions that summarize the scene shown in the image.
• Face detection, analysis, and recognition - Face detection is a specialized form of object detection that locates human faces in an image. This can be combined with classification and facial geometry analysis techniques to recognize individuals based on their facial features.
• Optical character recognition (OCR) - Optical character recognition is a technique used to detect and read text in images. You can use OCR to read text in photographs (for example, road signs or store fronts) or to extract information from scanned documents such as letters, invoices, or forms.

Computer vision services in Microsoft Azure

Microsoft Azure provides the following cognitive services to help you create computer vision
solutions:

• Computer Vision - You can use this service to analyze images and video, and extract descriptions, tags, objects, and text.
• Custom Vision - Use this service to train custom image classification and object detection models using your own images.
• Face - The Face service enables you to build face detection and facial recognition solutions.
• Form Recognizer - Use this service to extract information from scanned forms and invoices.

Try this

To see an example of how computer vision can be used to analyze images, follow these steps:

1. Open another browser tab and go to https://aidemos.microsoft.com/computer-vision.
2. Use the demo interface to try each of the steps. For each step, you can select images and review the information returned by the Computer Vision service.

Understand natural language processing


Natural language processing (NLP) is the area of AI that deals with creating software that
understands written and spoken language.

NLP enables you to create software that can:

• Analyze and interpret text in documents, email messages, and other sources.
• Interpret spoken language, and synthesize speech responses.
• Automatically translate spoken or written phrases between languages.
• Interpret commands and determine appropriate actions.

For example, Starship Commander is a virtual reality (VR) game from Human Interact that takes place in a science fiction world. The game uses natural language processing to enable players to control the narrative and interact with in-game characters and starship systems.

Watch the following video to learn more.

Natural language processing in Microsoft Azure

In Microsoft Azure, you can use the following cognitive services to build natural language
processing solutions:

• Language - Use this service to access features for understanding and analyzing text, training language models that can understand spoken or text-based commands, and building intelligent applications.
• Translator - Use this service to translate text between more than 60 languages.
• Speech - Use this service to recognize and synthesize speech, and to translate spoken languages.
• Azure Bot Service - This service provides a platform for conversational AI, the capability of a software "agent" to participate in a conversation. Developers can use the Bot Framework to create a bot and manage it with Azure Bot Service - integrating back-end services like Language, and connecting to channels for web chat, email, Microsoft Teams, and others.

Understand knowledge mining


Knowledge mining is the term used to describe solutions that involve extracting information
from large volumes of often unstructured data to create a searchable knowledge store.

Knowledge mining in Microsoft Azure

One of these knowledge mining solutions is Azure Cognitive Search, a private, enterprise search solution that has tools for building indexes. The indexes can then be used for internal use only, or to enable searchable content on public-facing internet assets.

Azure Cognitive Search can utilize the built-in AI capabilities of Azure Cognitive Services such as image processing, content extraction, and natural language processing to perform knowledge mining of documents. The product's AI capabilities make it possible to index previously unsearchable documents and to extract and surface insights from large amounts of data quickly.

Challenges and risks with AI


Artificial Intelligence is a powerful tool that can be used to greatly benefit the world.
However, like any tool, it must be used responsibly.

The following table shows some of the potential challenges and risks facing an AI application
developer.

• Bias can affect results - A loan-approval model discriminates by gender due to bias in the data with which it was trained.
• Errors may cause harm - An autonomous vehicle experiences a system failure and causes a collision.
• Data could be exposed - A medical diagnostic bot is trained using sensitive patient data, which is stored insecurely.
• Solutions may not work for everyone - A home automation assistant provides no audio output for visually impaired users.
• Users must trust a complex system - An AI-based financial tool makes investment recommendations - what are they based on?
• Who's liable for AI-driven decisions? - An innocent person is convicted of a crime based on evidence from facial recognition - who's responsible?

Understand responsible AI
At Microsoft, AI software development is guided by a set of six principles, designed to
ensure that AI applications provide amazing solutions to difficult problems without any
unintended negative consequences.
Fairness

AI systems should treat all people fairly. For example, suppose you create a machine learning
model to support a loan approval application for a bank. The model should predict whether
the loan should be approved or denied without bias. This bias could be based on gender,
ethnicity, or other factors that result in an unfair advantage or disadvantage to specific groups
of applicants.

Azure Machine Learning includes the capability to interpret models and quantify the extent to
which each feature of the data influences the model's prediction. This capability helps data
scientists and developers identify and mitigate bias in the model.

Another example is Microsoft's implementation of Responsible AI with the Face service, which retires facial recognition capabilities that can be used to try to infer emotional states and identity attributes. These capabilities, if misused, can subject people to stereotyping, discrimination, or unfair denial of services.

For more details about considerations for fairness, watch the following video.

Reliability and safety

AI systems should perform reliably and safely. For example, consider an AI-based software
system for an autonomous vehicle; or a machine learning model that diagnoses patient
symptoms and recommends prescriptions. Unreliability in these kinds of systems can result in
substantial risk to human life.

AI-based software applications must be subjected to rigorous testing and deployment management processes to ensure that they work as expected before release.

For more information about considerations for reliability and safety, watch the following
video.

Privacy and security

AI systems should be secure and respect privacy. The machine learning models on which AI
systems are based rely on large volumes of data, which may contain personal details that
must be kept private. Even after the models are trained and the system is in production,
privacy and security need to be considered. As the system uses new data to make predictions
or take action, both the data and decisions made from the data may be subject to privacy or
security concerns.

For more details about considerations for privacy and security, watch the following video.

Inclusiveness

AI systems should empower everyone and engage people. AI should bring benefits to all
parts of society, regardless of physical ability, gender, sexual orientation, ethnicity, or other
factors.

For more details about considerations for inclusiveness, watch the following video.

Transparency

AI systems should be understandable. Users should be made fully aware of the purpose of the
system, how it works, and what limitations may be expected.
For more details about considerations for transparency, watch the following video.

Accountability

People should be accountable for AI systems. Designers and developers of AI-based solutions should work within a framework of governance and organizational principles that ensure the solution meets ethical and legal standards that are clearly defined.

For more details about considerations for accountability, watch the following video.

The principles of responsible AI can help you understand some of the challenges facing
developers as they try to create ethical AI solutions.

Microsoft Azure AI Fundamentals: Explore visual tools for machine learning
Introduction
Machine Learning is the foundation for most artificial intelligence solutions. Creating an
intelligent solution often begins with the use of machine learning to train predictive models
using historic data that you have collected.

Azure Machine Learning is a cloud service that you can use to train and manage machine
learning models.

In this module, you'll learn to:

• Identify the machine learning process.
• Understand Azure Machine Learning capabilities.
• Use automated machine learning in Azure Machine Learning studio to train and
deploy a predictive model.

What is machine learning?


Machine learning is a technique that uses mathematics and statistics to create a model that
can predict unknown values.
For example, suppose Adventure Works Cycles is a business that rents cycles in a city. The
business could use historic data to train a model that predicts daily rental demand in order to
make sure sufficient staff and cycles are available.

To do this, Adventure Works could create a machine learning model that takes information
about a specific day (the day of week, the anticipated weather conditions, and so on) as an
input, and predicts the expected number of rentals as an output.

Mathematically, you can think of machine learning as a way of defining a function (let's call
it f) that operates on one or more features of something (which we'll call x) to calculate a
predicted label (y) - like this:

f(x) = y

In this bicycle rental example, the details about a given day (day of the week, weather, and so
on) are the features (x), the number of rentals for that day is the label (y), and the function (f)
that calculates the number of rentals based on the information about the day is encapsulated
in a machine learning model.

The specific operation that the f function performs on x to calculate y depends on a number of factors, including the type of model you're trying to create and the specific algorithm used to train the model. Additionally, in most cases the data used to train the machine learning model requires some pre-processing before model training can be performed.
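To make this concrete, here is a minimal sketch of training and using such a function f in Python with scikit-learn (an assumption for illustration; this module itself uses Azure Machine Learning rather than hand-written code). All feature values and rental counts are invented:

# A minimal sketch of learning f(x) = y for the bicycle rental example.
# scikit-learn and all data values are illustrative assumptions.
from sklearn.linear_model import LinearRegression

# Features (x): [day_of_week, forecast_temperature_c] for past days.
X = [[0, 18], [1, 21], [2, 25], [3, 17], [4, 22], [5, 28], [6, 30]]
# Label (y): the number of rentals observed on each of those days.
y = [410, 380, 520, 290, 400, 615, 680]

model = LinearRegression().fit(X, y)  # training encapsulates f in the model

# Apply f to a new day's features (x) to predict the label (y).
print(model.predict([[5, 26]]))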

Types of machine learning

There are two general approaches to machine learning, supervised and unsupervised machine
learning. In both approaches, you train a model to make predictions.

The supervised machine learning approach requires you to start with a dataset with known
label values. Two types of supervised machine learning tasks include regression and
classification.

• Regression: used to predict a continuous value; like a price, a sales total, or some other
measure.
• Classification: used to determine a binary class label; like whether a patient has diabetes or
not.

The unsupervised machine learning approach starts with a dataset without known label
values. One type of unsupervised machine learning task is clustering.

• Clustering: used to determine labels by grouping similar information into label groups; like
grouping measurements from birds into species.

The following video discusses the various kinds of machine learning model you can create,
and the process generally followed to train and use them.

What is Azure Machine Learning studio?


Training and deploying an effective machine learning model involves a lot of work, much of
it time-consuming and resource-intensive. Azure Machine Learning is a cloud-based service
that helps simplify some of the tasks it takes to prepare data, train a model, and deploy a
predictive service.

Most importantly, Azure Machine Learning helps data scientists increase their efficiency by
automating many of the time-consuming tasks associated with training models; and it enables
them to use cloud-based compute resources that scale effectively to handle large volumes of
data while incurring costs only when actually used.

Azure Machine Learning workspace

To use Azure Machine Learning, you first create a workspace resource in your Azure
subscription. You can then use this workspace to manage data, compute resources, code,
models, and other artifacts related to your machine learning workloads.

After you have created an Azure Machine Learning workspace, you can develop solutions
with the Azure machine learning service either with developer tools or the Azure Machine
Learning studio web portal.

Azure Machine Learning studio

Azure Machine Learning studio is a web portal for machine learning solutions in Azure. It includes a wide range of features and capabilities that help data scientists prepare data, train models, publish predictive services, and monitor their usage. To begin using the web portal, you need to assign the workspace you created in the Azure portal to Azure Machine Learning studio.

Azure Machine Learning compute

At its core, Azure Machine Learning is a service for training and managing machine learning
models, for which you need compute on which to run the training process.

Compute targets are cloud-based resources on which you can run model training and data
exploration processes.

In Azure Machine Learning studio, you can manage the compute targets for your data science
activities. There are four kinds of compute resource you can create:
• Compute Instances: Development workstations that data scientists can use to work with
data and models.
• Compute Clusters: Scalable clusters of virtual machines for on-demand processing of
experiment code.
• Inference Clusters: Deployment targets for predictive services that use your trained models.
• Attached Compute: Links to existing Azure compute resources, such as Virtual Machines or
Azure Databricks clusters.

What is Azure Automated Machine Learning?
Azure Machine Learning includes an automated machine learning capability that
automatically tries multiple pre-processing techniques and model-training algorithms in
parallel. These automated capabilities use the power of cloud compute to find the best
performing supervised machine learning model for your data.

Automated machine learning allows you to train models without extensive data science or
programming knowledge. For people with a data science and programming background, it
provides a way to save time and resources by automating algorithm selection and
hyperparameter tuning.

You can create an automated machine learning job in Azure Machine Learning studio.

In Azure Machine Learning, operations that you run are called jobs. You can configure multiple settings for your job before starting an automated machine learning run. The run configuration provides the information needed to specify your training script, compute target, and Azure ML environment, and to run a training job.
Understand the AutoML process
You can think of the steps in a machine learning process as:

1. Prepare data: Identify the features and label in a dataset. Pre-process, or clean and
transform, the data as needed.
2. Train model: Split the data into two groups, a training and a validation set. Train a machine
learning model using the training data set. Test the machine learning model for performance
using the validation data set.
3. Evaluate performance: Compare how close the model's predictions are to the known labels.
4. Deploy a predictive service: After you train a machine learning model, you can deploy the
model as an application on a server or device so that others can use it.

These are the same steps in the automated machine learning process with Azure Machine
Learning.

Prepare data

Machine learning models must be trained with existing data. Data scientists expend a lot of
effort exploring and pre-processing data, and trying various types of model-training
algorithms to produce accurate models, which is time consuming, and often makes inefficient
use of expensive compute hardware.

In Azure Machine Learning, data for model training and other operations is usually
encapsulated in an object called a dataset. You can create your own dataset in Azure Machine
Learning studio.
Train model

The automated machine learning capability in Azure Machine Learning supports supervised
machine learning models - in other words, models for which the training data includes known
label values. You can use automated machine learning to train models for:

• Classification (predicting categories or classes)
• Regression (predicting numeric values)
• Time series forecasting (predicting numeric values at a future point in time)

In Automated Machine Learning, you can select from several types of tasks, and you can select configurations for the primary metric, type of model used for training, exit criteria, and concurrency limits.

Importantly, AutoML will split data into a training set and a validation set. You can configure
the details in the settings before you run the job.
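As a rough illustration of these settings, the following sketch configures an automated ML run with the Azure ML Python SDK (v1); the workspace config, dataset, label column, and compute names are hypothetical:

# A sketch of configuring an automated ML run; names are illustrative assumptions.
from azureml.core import Workspace, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                         # assumes a config.json for an existing workspace
data = Dataset.get_by_name(ws, name="bike-rentals")  # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="regression",                               # the type of task to train for
    training_data=data,
    label_column_name="rentals",                     # hypothetical label column
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=0.5,                    # exit criteria
    max_concurrent_iterations=4,                     # concurrency limit
    compute_target="aml-cluster",                    # hypothetical compute cluster
)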
Evaluate performance

After the job has finished, you can review the best performing model. In this case, you used exit criteria to stop the job. Thus the "best" model the job generated might not be the best possible model, just the best one found within the time allowed for this exercise.

The best model is identified based on the evaluation metric you specified, Normalized root
mean squared error.

A technique called cross-validation is used to calculate the evaluation metric. After the model
is trained using a portion of the data, the remaining portion is used to iteratively test, or cross-
validate, the trained model. The metric is calculated by comparing the predicted value from
the test with the actual known value, or label.

The difference between the predicted and actual values, known as the residuals, indicates the amount of error in the model. The performance metric root mean squared error (RMSE) is calculated by squaring the errors across all of the test cases, finding the mean of these squares, and then taking the square root. What all of this means is that the smaller this value is, the more accurate the model's predictions are. The normalized root mean squared error (NRMSE) standardizes the RMSE metric so it can be used for comparison between models which have variables on different scales.
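As a worked example of these metrics (with invented numbers; normalizing by the label range is one common convention):

# A worked example of RMSE and normalized RMSE, as described above.
import numpy as np

actual = np.array([410, 380, 520, 290, 400])     # known label values
predicted = np.array([395, 402, 490, 310, 415])  # model predictions

residuals = predicted - actual                   # the errors
rmse = np.sqrt(np.mean(residuals ** 2))          # root mean squared error
nrmse = rmse / (actual.max() - actual.min())     # normalized by the label range

print(rmse, nrmse)  # smaller values mean more accurate predictions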

The Residual Histogram shows the frequency of residual value ranges. Residuals represent
variance between predicted and true values that can't be explained by the model, in other
words, errors. You should hope to see the most frequently occurring residual values clustered
around zero. You want small errors with fewer errors at the extreme ends of the scale.
The Predicted vs. True chart should show a diagonal trend in which the predicted value
correlates closely to the true value. The dotted line shows how a perfect model should
perform. The closer the line of your model's average predicted value is to the dotted line, the
better its performance. A histogram below the line chart shows the distribution of true values.

After you've used automated machine learning to train some models, you can deploy the best
performing model as a service for client applications to use.

Deploy a predictive service

In Azure Machine Learning, you can deploy a service as an Azure Container Instances (ACI)
or to an Azure Kubernetes Service (AKS) cluster. For production scenarios, an AKS
deployment is recommended, for which you must create an inference cluster compute target.
In this exercise, you'll use an ACI service, which is a suitable deployment target for testing,
and does not require you to create an inference cluster.
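For illustration, a sketch of an ACI deployment with the Azure ML Python SDK (v1) might look like the following; the model, environment, and entry-script names are hypothetical:

# A sketch of deploying a registered model as an ACI web service; names are assumptions.
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="bike-rental-model")            # hypothetical registered model
env = Environment.get(ws, name="AzureML-sklearn-1.0")  # hypothetical curated environment

inference_config = InferenceConfig(entry_script="score.py", environment=env)  # score.py defines init() and run()
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "bike-rental-service", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # the endpoint client applications call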

Microsoft Azure AI Fundamentals: Explore computer vision
Introduction
Computer vision is one of the core areas of artificial intelligence (AI), and focuses on creating
solutions that enable AI applications to "see" the world and make sense of it.

Of course, computers don't have biological eyes that work the way ours do, but they are capable of processing images, either from a live camera feed or from digital photographs or videos. This ability to process images is the key to creating software that can emulate human visual perception.

Some potential uses for computer vision include:

• Content Organization: Identify people or objects in photos and organize them based on that
identification. Photo recognition applications like this are commonly used in photo storage
and social media applications.
• Text Extraction: Analyze images and PDF documents that contain text and extract the text
into a structured format.
• Spatial Analysis: Identify people or objects, such as cars, in a space and map their
movement within that space.

To an AI application, an image is just an array of pixel values. These numeric values can be
used as features to train machine learning models that make predictions about the image and
its contents.

Training machine learning models from scratch can be very time intensive and require a large
amount of data. Microsoft's Computer Vision service gives you access to pre-trained
computer vision capabilities.

Learning objectives

In this module you will:

• Identify image analysis tasks that can be performed with the Computer Vision service.
• Provision a Computer Vision resource.
• Use a Computer Vision resource to analyze an image.

Get started with image analysis on Azure


The Computer Vision service is a cognitive service in Microsoft Azure that provides pre-built
computer vision capabilities. The service can analyze images, and return detailed information
about an image and the objects it depicts.

Azure resources for Computer Vision

To use the Computer Vision service, you need to create a resource for it in your Azure
subscription. You can use either of the following resource types:

• Computer Vision: A specific resource for the Computer Vision service. Use this resource type
if you don't intend to use any other cognitive services, or if you want to track utilization and
costs for your Computer Vision resource separately.
• Cognitive Services: A general cognitive services resource that includes Computer Vision
along with many other cognitive services; such as Text Analytics, Translator Text, and others.
Use this resource type if you plan to use multiple cognitive services and want to simplify
administration and development.

Whichever type of resource you choose to create, it will provide two pieces of information
that you will need to use it:

• A key that is used to authenticate client applications.
• An endpoint that provides the HTTP address at which your resource can be accessed.

Note

If you create a Cognitive Services resource, client applications use the same key and endpoint
regardless of the specific service they are using.

Analyzing images with the Computer Vision service

After you've created a suitable resource in your subscription, you can submit images to the
Computer Vision service to perform a wide range of analytical tasks.

Describing an image

Computer Vision has the ability to analyze an image, evaluate the objects that are detected,
and generate a human-readable phrase or sentence that can describe what was detected in the
image. Depending on the image contents, the service may return multiple results, or phrases.
Each returned phrase will have an associated confidence score, indicating how confident the
algorithm is in the supplied description. The highest confidence phrases will be listed first.

To help you understand this concept, consider the following image of the Empire State
building in New York. The returned phrases are listed below the image in the order of
confidence.
• A black and white photo of a city
• A black and white photo of a large city
• A large white building in a city
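A minimal sketch of requesting a description like this with the Computer Vision Python SDK (azure-cognitiveservices-vision-computervision) is shown below; the endpoint, key, and image URL are placeholders:

# A sketch of the describe-image call; endpoint, key, and URL are placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credentials=CognitiveServicesCredentials("<your-key>"),
)

# Describe an image by URL; captions come back in order of confidence.
analysis = client.describe_image("https://<example>/empire-state.jpg")  # placeholder URL
for caption in analysis.captions:
    print(caption.text, caption.confidence)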

Tagging visual features

The image descriptions generated by Computer Vision are based on a set of thousands of
recognizable objects, which can be used to suggest tags for the image. These tags can be
associated with the image as metadata that summarizes attributes of the image; and can be
particularly useful if you want to index an image along with a set of key terms that might be
used to search for images with specific attributes or contents.

For example, the tags returned for the Empire State building image include:

• skyscraper
• tower
• building

Detecting objects

The object detection capability is similar to tagging, in that the service can identify common
objects; but rather than tagging, or providing tags for the recognized objects only, this service
can also return what is known as bounding box coordinates. Not only will you get the type of
object, but you will also receive a set of coordinates that indicate the top, left, width, and
height of the object detected, which you can use to identify the location of the object in the
image, like this:
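Since the original illustration isn't reproduced here, the following abbreviated JSON shows the general shape of an object detection result with bounding box coordinates (all values are illustrative):

{
  "objects": [
    {
      "rectangle": { "x": 104, "y": 52, "w": 240, "h": 375 },
      "object": "taxi",
      "confidence": 0.89
    }
  ]
}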
Detecting brands

This feature provides the ability to identify commercial brands. The service has an existing
database of thousands of globally recognized logos from commercial brands of products.

When you call the service and pass it an image, it performs a detection task and determines if any of the identified objects in the image are recognized brands. The service compares the
brands against its database of popular brands spanning clothing, consumer electronics, and
many more categories. If a known brand is detected, the service returns a response that
contains the brand name, a confidence score (from 0 to 1 indicating how positive the
identification is), and a bounding box (coordinates) for where in the image the detected brand
was found.

For example, in the following image, a laptop has a Microsoft logo on its lid, which is
identified and located by the Computer Vision service.

Detecting faces

The Computer Vision service can detect and analyze human faces in an image, including the
ability to determine age and a bounding box rectangle for the location of the face(s). The
facial analysis capabilities of the Computer Vision service are a subset of those provided by
the dedicated Face Service. If you need basic face detection and analysis, combined with
general image analysis capabilities, you can use the Computer Vision service; but for more
comprehensive facial analysis and facial recognition functionality, use the Face service.
The following example shows an image of a person with their face detected and approximate
age estimated.

Categorizing an image

Computer Vision can categorize images based on their contents. The service uses a
parent/child hierarchy with a "current" limited set of categories. When analyzing an image,
detected objects are compared to the existing categories to determine the best way to provide
the categorization. As an example, one of the parent categories is people_. This image of a
person on a roof is assigned a category of people_.

A slightly different categorization is returned for the following image, which is assigned to
the category people_group because there are multiple people in the image:

Review the 86-category list here.

Detecting domain-specific content

When categorizing an image, the Computer Vision service supports two specialized domain
models:
• Celebrities - The service includes a model that has been trained to identify thousands of
well-known celebrities from the worlds of sports, entertainment, and business.
• Landmarks - The service can identify famous landmarks, such as the Taj Mahal and the
Statue of Liberty.

For example, when analyzing the following image for landmarks, the Computer Vision
service identifies the Eiffel Tower, with a confidence of 99.41%.

Optical character recognition

The Computer Vision service can use optical character recognition (OCR) capabilities to
detect printed and handwritten text in images. This capability is explored in the Read text
with the Computer Vision service module on Microsoft Learn.

Additional capabilities

In addition to these capabilities, the Computer Vision service can:

• Detect image types - for example, identifying clip art images or line drawings.
• Detect image color schemes - specifically, identifying the dominant foreground, background,
and overall colors in an image.
• Generate thumbnails - creating small versions of images.
• Moderate content - detecting images that contain adult content or depict violent, gory
scenes.

Microsoft Azure AI Fundamentals: Explore natural language processing
Introduction
Analyzing text is a process where you evaluate different aspects of a document or phrase, in
order to gain insights into the content of that text. For the most part, humans are able to read
some text and understand the meaning behind it. Even without considering grammar rules for
the language the text is written in, specific insights can be identified in the text.

As an example, you might read some text and identify some key phrases that indicate the
main talking points of the text. You might also recognize names of people or well-known
landmarks such as the Eiffel Tower. Although difficult at times, you might also be able to get
a sense for how the person was feeling when they wrote the text, also commonly known as
sentiment.

Text Analytics Techniques

Text analytics is a process where an artificial intelligence (AI) algorithm, running on a computer, evaluates these same attributes in text, to determine specific insights. A person will typically rely on their own experiences and knowledge to achieve the insights. A computer must be provided with similar knowledge to be able to perform the task. There are some commonly used techniques that can be used to build software to analyze text, including:

• Statistical analysis of terms used in the text. For example, removing common "stop words"
(words like "the" or "a", which reveal little semantic information about the text), and
performing frequency analysis of the remaining words (counting how often each word
appears) can provide clues about the main subject of the text.
• Extending frequency analysis to multi-term phrases, commonly known as N-grams (a two-
word phrase is a bi-gram, a three-word phrase is a tri-gram, and so on).
• Applying stemming or lemmatization algorithms to normalize words before counting them -
for example, so that words like "power", "powered", and "powerful" are interpreted as
being the same word.
• Applying linguistic structure rules to analyze sentences - for example, breaking down
sentences into tree-like structures such as a noun phrase, which itself contains nouns, verbs,
adjectives, and so on.
• Encoding words or terms as numeric features that can be used to train a machine learning
model. For example, to classify a text document based on the terms it contains. This
technique is often used to perform sentiment analysis, in which a document is classified as
positive or negative.
• Creating vectorized models that capture semantic relationships between words by assigning
them to locations in n-dimensional space. This modeling technique might, for example,
assign values to the words "flower" and "plant" that locate them close to one another, while
"skateboard" might be given a value that positions it much further away.

While these techniques can be used to great effect, programming them can be complex. In
Microsoft Azure, the Language cognitive service can help simplify application development
by using pre-trained models that can:

• Determine the language of a document or text (for example, French or English).
• Perform sentiment analysis on text to determine a positive or negative sentiment.
• Extract key phrases from text that might indicate its main talking points.
• Identify and categorize entities in the text. Entities can be people, places, organizations, or
even everyday items such as dates, times, quantities, and so on.

In this module, you'll explore some of these capabilities and gain an understanding of how
you might apply them to applications such as:

• A social media feed analyzer to detect sentiment around a political campaign or a product in
market.
• A document search application that extracts key phrases to help summarize the main
subject matter of documents in a catalog.
• A tool to extract brand information or company names from documents or other text for
identification purposes.

These examples are just a small sample of the many areas that the Language service can help
with text analytics.

Get started with text analysis


The Language service is a part of the Azure Cognitive Services offerings that can perform
advanced natural language processing over raw text.

Azure resources for the Language service

To use the Language service in an application, you must provision an appropriate resource in
your Azure subscription. You can choose to provision either of the following types of
resource:

• A Language resource - choose this resource type if you only plan to use natural language
processing services, or if you want to manage access and billing for the resource separately
from other services.
• A Cognitive Services resource - choose this resource type if you plan to use the Language
service in combination with other cognitive services, and you want to manage access and
billing for these services together.

Language detection

Use the language detection capability of the Language service to identify the language in
which text is written. You can submit multiple documents at a time for analysis. For each
document submitted to it, the service will detect:

• The language name (for example "English").
• The ISO 639-1 language code (for example, "en").
• A score indicating a level of confidence in the language detection.

For example, consider a scenario where you own and operate a restaurant where customers
can complete surveys and provide feedback on the food, the service, staff, and so on. Suppose
you have received the following reviews from customers:

Review 1: "A fantastic place for lunch. The soup was delicious."

Review 2: "Comida maravillosa y gran servicio."

Review 3: "The croque monsieur avec frites was terrific. Bon appetit!"

You can use the text analytics capabilities in the Language service to detect the language for
each of these reviews; and it might respond with the following results:

Document   Language Name   ISO 639-1 Code   Score
Review 1   English         en               1.0
Review 2   Spanish         es               1.0
Review 3   English         en               0.9

Notice that the language detected for review 3 is English, despite the text containing a mix of
English and French. The language detection service will focus on the predominant language
in the text. The service uses an algorithm to determine the predominant language, such as
length of phrases or total amount of text for the language compared to other languages in the
text. The predominant language will be the value returned, along with the language code. The
confidence score may be less than 1 as a result of the mixed language text.
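A sketch of detecting the language of these reviews with the Language service's Python SDK (azure-ai-textanalytics) might look like the following; the endpoint and key are placeholders:

# A sketch of language detection; endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "A fantastic place for lunch. The soup was delicious.",
    "Comida maravillosa y gran servicio.",
    "The croque monsieur avec frites was terrific. Bon appetit!",
]

for doc in client.detect_language(documents=reviews):
    lang = doc.primary_language  # the predominant language for each document
    print(lang.name, lang.iso6391_name, lang.confidence_score)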

Ambiguous or mixed language content

There may be text that is ambiguous in nature, or that has mixed language content. These
situations can present a challenge to the service. An ambiguous content example would be a
case where the document contains limited text, or only punctuation. For example, using the
service to analyze the text ":-)", results in a value of unknown for the language name and the
language identifier, and a score of NaN (which is used to indicate not a number).

Sentiment analysis

The text analytics capabilities in the Language service can evaluate text and return sentiment
scores and labels for each sentence. This capability is useful for detecting positive and
negative sentiment in social media, customer reviews, discussion forums and more.

Using the pre-built machine learning classification model, the service evaluates the text and
returns a sentiment score in the range of 0 to 1, with values closer to 1 being a positive
sentiment. Scores that are close to the middle of the range (0.5) are considered neutral or
indeterminate.

For example, the following two restaurant reviews could be analyzed for sentiment:

"We had dinner at this restaurant last night and the first thing I noticed was how courteous
the staff was. We were greeted in a friendly manner and taken to our table right away. The
table was clean, the chairs were comfortable, and the food was amazing."

and

"Our dining experience at this restaurant was one of the worst I've ever had. The service was
slow, and the food was awful. I'll never eat at this establishment again."

The sentiment score for the first review might be around 0.9, indicating a positive sentiment;
while the score for the second review might be closer to 0.1, indicating a negative sentiment.
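Using the same client as in the language detection sketch above, sentiment analysis is a single call. Note, as an aside, that the current azure-ai-textanalytics SDK returns a sentiment label with per-label confidence scores rather than the single 0-to-1 score described here:

# A sketch of sentiment analysis with the same client as above.
restaurant_reviews = [
    "We had dinner at this restaurant last night and the food was amazing.",
    "Our dining experience at this restaurant was one of the worst I've ever had.",
]

for doc in client.analyze_sentiment(documents=restaurant_reviews):
    # Expect "positive" for the first review and "negative" for the second,
    # each with per-label confidence scores.
    print(doc.sentiment, doc.confidence_scores)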

Indeterminate sentiment

A score of 0.5 might indicate that the sentiment of the text is indeterminate, and could result
from text that does not have sufficient context to discern a sentiment or insufficient phrasing.
For example, a list of words in a sentence that has no structure, could result in an
indeterminate score. Another example where a score may be 0.5 is in the case where the
wrong language code was used. A language code (such as "en" for English, or "fr" for
French) is used to inform the service which language the text is in. If you pass text in French
but tell the service the language code is en for English, the service will return a score of
precisely 0.5.

Key phrase extraction

Key phrase extraction is the concept of evaluating the text of a document, or documents, and
then identifying the main talking points of the document(s). Consider the restaurant scenario
discussed previously. Depending on the volume of surveys that you have collected, it can
take a long time to read through the reviews. Instead, you can use the key phrase extraction
capabilities of the Language service to summarize the main points.

You might receive a review such as:

"We had dinner here for a birthday celebration and had a fantastic experience. We were
greeted by a friendly hostess and taken to our table right away. The ambiance was relaxed,
the food was amazing, and service was terrific. If you like great food and attentive service,
you should try this place."

Key phrase extraction can provide some context to this review by extracting the following
phrases:

• attentive service
• great food
• birthday celebration
• fantastic experience
• table
• friendly hostess
• dinner
• ambiance
• place

Not only can you use sentiment analysis to determine that this review is positive, you can use
the key phrases to identify important elements of the review.
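With the same client as in the earlier sketches, key phrase extraction for a review like the one above is also a single call:

# A sketch of key phrase extraction with the same client as above.
review = ("We had dinner here for a birthday celebration and had a fantastic "
          "experience. We were greeted by a friendly hostess and taken to our "
          "table right away. The ambiance was relaxed, the food was amazing, "
          "and service was terrific.")

result = client.extract_key_phrases(documents=[review])[0]
print(result.key_phrases)  # e.g. ['birthday celebration', 'fantastic experience', ...]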

Entity recognition

You can provide the Language service with unstructured text and it will return a list of
entities in the text that it recognizes. The service can also provide links to more information
about that entity on the web. An entity is essentially an item of a particular type or a category and, in some cases, a subtype, such as those shown in the following table.

Type SubType Example

Person "Bill Gates", "John"

Location "Paris", "New York"

Organization "Microsoft"

Quantity Number "6" or "six"

Quantity Percentage "25%" or "fifty percent"

Quantity Ordinal "1st" or "first"

Quantity Age "90 day old" or "30 years old"

Quantity Currency "10.99"

Quantity Dimension "10 miles", "40 cm"

Quantity Temperature "45 degrees"

DateTime "6:30PM February 4, 2012"

DateTime Date "May 2nd, 2017" or "05/02/2017"

DateTime Time "8am" or "8:00"

DateTime DateRange "May 2nd to May 5th"

DateTime TimeRange "6pm to 7pm"

DateTime Duration "1 minute and 45 seconds"

DateTime Set "every Tuesday"

URL "https://www.bing.com"

Email "support@microsoft.com"

US-based Phone Number "(312) 555-0176"

IP Address "10.0.1.125"

The service also supports entity linking to help disambiguate entities by linking to a specific
reference. For recognized entities, the service returns a URL for a relevant Wikipedia article.

For example, suppose you use the Language service to detect entities in the following
restaurant review extract:

"I ate at the restaurant in Seattle last week."

Entity Type SubType Wikipedia URL

Seattle Location https://en.wikipedia.org/wiki/Seattle

last week DateTime DateRange
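With the same client as in the earlier sketches, entity recognition and entity linking for this extract might look like the following:

# A sketch of entity recognition and entity linking with the same client as above.
text = ["I ate at the restaurant in Seattle last week."]

for entity in client.recognize_entities(documents=text)[0].entities:
    print(entity.text, entity.category, entity.subcategory)  # e.g. Seattle, Location

for linked in client.recognize_linked_entities(documents=text)[0].entities:
    print(linked.name, linked.url)  # e.g. Seattle -> its Wikipedia article URL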

Microsoft Azure AI Fundamentals: Explore decision support
Introduction
Anomaly detection is an artificial intelligence technique used to determine whether values in
a series are within expected parameters.

There are many scenarios where anomaly detection is helpful. For example, a smart HVAC
system might use anomaly detection to monitor temperatures in a building and raise an alert
if the temperature goes above or below the expected value for a given period of time.

Other scenarios include:

• monitoring blood pressure
• evaluating mean time between failures for hardware products
• comparing month-over-month expenses for product costs

The Azure Anomaly Detector service is a cloud-based service that helps you monitor and
detect abnormalities in your historical time series and real-time data.
Learning objectives

After completing this module, you'll be able to:

• Describe what anomaly detection is
• Describe how the Anomaly Detector service can evaluate time series data
• Define scenarios where anomaly detection can be applied for real-time and historical data

Prerequisites

• Ability to navigate the Azure portal
• Foundational knowledge of Azure services

What is Anomaly Detector?


Anomalies are values that are outside the expected values or range of values.

In the graphic depicting the time series data, there is a light shaded area that indicates the boundary, or sensitivity range. The solid blue line is used to indicate the measured values. When a measured value is outside of the shaded boundary, an orange dot is used to indicate the value is considered an anomaly. The sensitivity boundary is a parameter that you can specify when calling the service. It allows you to adjust the boundary settings to tweak the results.

Anomaly detection is considered the act of identifying events, or observations, that differ in a
significant way from the rest of the data being evaluated. Accurate anomaly detection leads to
prompt troubleshooting, which helps to avoid revenue loss and maintain brand reputation.

Azure's Anomaly Detector service

Anomaly Detector is a part of the Decision Services category within Azure Cognitive
Services. It is a cloud-based service that enables you to monitor time series data, and to detect
anomalies in that data. It does not require you to know machine learning. You can use the REST API to integrate Anomaly Detector into your applications with relative ease. The service uses the concept of a "one parameter" strategy: the main parameter you need to customize is "Sensitivity", which ranges from 1 to 99 and adjusts the outcome to fit the scenario. The service can detect anomalies in historical time series data and also in real-time data such as streaming input from IoT devices, sensors, or other streaming input sources.
How Anomaly Detector works
The Anomaly Detector service identifies anomalies that exist outside the scope of a
boundary. The boundary is set using a sensitivity value. By default, the upper and lower
boundaries for anomaly detection are calculated using concepts known as expectedValue,
upperMargin, and lowerMargin. The upper and lower boundaries are calculated using these
three values. If a value exceeds either boundary, it will be identified as an anomaly. You can
adjust the boundaries by applying a marginScale to the upper and lower margins as
demonstrated by the following formula.

upperBoundary = expectedValue + (100 - marginScale) * upperMargin
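For example, with illustrative values for the expected value, margins, and marginScale:

# A worked example of the boundary calculation above; all values are illustrative.
expected_value = 20.0
upper_margin = 2.5
lower_margin = 2.5
margin_scale = 95          # derived from the sensitivity setting

upper_boundary = expected_value + (100 - margin_scale) * upper_margin  # 32.5
lower_boundary = expected_value - (100 - margin_scale) * lower_margin  # 7.5

# A reading of 35 would be reported as an anomaly; a reading of 30 would not.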

Data format

The Anomaly Detector service accepts data in JSON format. You can use any numerical data
that you have recorded over time. The key aspects of the data being sent includes the
granularity, a timestamp, and a value that was recorded for that timestamp. An example of a
JSON object that you might send to the API is shown in this code sample. The granularity is
set as hourly and is used to represent temperatures in degrees Celsius that were recorded at
the timestamps indicated.

{
"granularity": "hourly",
"series": [
{
"timestamp": "2021-03-02T01:00:00Z",
"value": -10.56
},
{
"timestamp": "2021-03-02T02:00:00Z",
"value": -8.30
},
{
"timestamp": "2021-03-02T03:00:00Z",
"value": -10.30
},
{
"timestamp": "2021-03-02T04:00:00Z",
"value": 5.95
}
]
}
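A sketch of sending a series like this to the batch (entire-series) detection endpoint with Python's requests library is shown below; the endpoint and key are placeholders, and note that the real service requires more data points than this abbreviated series:

# A sketch of calling the batch detection endpoint; endpoint and key are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
url = endpoint + "/anomalydetector/v1.0/timeseries/entire/detect"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

body = {
    "granularity": "hourly",
    "series": [
        {"timestamp": "2021-03-02T01:00:00Z", "value": -10.56},
        {"timestamp": "2021-03-02T02:00:00Z", "value": -8.30},
        {"timestamp": "2021-03-02T03:00:00Z", "value": -10.30},
        {"timestamp": "2021-03-02T04:00:00Z", "value": 5.95},
    ],
}

response = requests.post(url, headers=headers, json=body)
print(response.json()["isAnomaly"])  # one boolean per point in the series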

The service supports a maximum of 8,640 data points; however, sending this many data points in the same JSON object can result in latency for the response. You can improve the response time by breaking your data points into smaller chunks (windows) and sending these in a sequence.

The same JSON object format is used in a streaming scenario. The main difference is that
you will send a single value in each request. The streaming detection method will compare
the current value being sent and the previous value sent.

Data consistency recommendations

If your data may have missing values in the sequence, consider the following
recommendations.

• If sampling occurs every few minutes and less than 10% of the expected number of points is missing, the impact on the detection results should be negligible.
• If you have more than 10% missing, there are options to help "fill" the data set. Consider using a linear interpolation method to fill in the missing values and complete the data set. This will fill gaps with evenly distributed values.

The Anomaly Detector service will provide the best results if your time series data is evenly
distributed. If the data is more randomly distributed, you can use an aggregation method to
create a more even distribution data set.

When to use Anomaly Detector


The Anomaly Detector service supports batch processing of time series data and last-point
anomaly detection for real-time data.

Batch detection

Batch detection involves applying the algorithm to an entire data series at one time. The
concept of time series data involves evaluation of a data set as a batch. Use your time series
to detect any anomalies that might exist throughout your data. This operation generates a
model using your entire time series data, with each point analyzed using the same model.

Batch detection is best used when your data contains:

• Flat trend time series data with occasional spikes or dips
• Seasonal time series data with occasional anomalies
o Seasonality is considered to be a pattern in your data, that occurs at regular
intervals. Examples would be hourly, daily, or monthly patterns. Using seasonal data,
and specifying a period for that pattern, can help to reduce the latency in detection.

When using the batch detection mode, Anomaly Detector creates a single statistical model
based on the entire data set that you pass to the service. From this model, each data point in
the data set is evaluated and anomalies are identified.

Batch detection example

Consider a pharmaceutical company that stores medications in storage facilities where the
temperature in the facilities needs to remain within a specific range. To evaluate whether the
medication remained stored in a safe temperature range in the past three months we need to
know:

• the maximum allowable temperature
• the minimum allowable temperature
• the acceptable duration of time for temperatures to be outside the safe range

If you are interested in evaluating compliance over historical readings, you can extract the
required time series data, package it into a JSON object, and send it to the Anomaly Detector
service for evaluation. You will then have a historical view of the temperature readings over
time.

Real-time detection

Real-time detection uses streaming data by comparing previously seen data points to the last
data point to determine if your latest one is an anomaly. This operation generates a model
using the data points you send, and determines if the target (current) point is an anomaly. By
calling the service with each new data point you generate, you can monitor your data as it's
created.
Real-time detection example

Consider a scenario in the carbonated beverage industry where real-time anomaly detection
may be useful. The carbon dioxide added to soft drinks during the bottling or canning process
needs to stay in a specific temperature range.

Bottling systems use a device known as a carbo-cooler to achieve the refrigeration of the
product for this process. If the temperature goes too low, the product will freeze in the carbo-
cooler. If the temperature is too warm, the carbon dioxide will not adhere properly. Either
situation results in a product batch that cannot be sold to customers.

This carbonated beverage scenario is an example of where you could use streaming detection
for real-time decision making. It could be tied into an application that controls the bottling
line equipment. You may use it to feed displays that depict the system temperatures for the
quality control station. A service technician may also use it to identify equipment failure
potential and servicing needs.

You can use the Anomaly Detector service to create a monitoring application configured with
the above criteria to perform real-time temperature monitoring. You can perform anomaly
detection using both streaming and batch detection techniques. Streaming detection is most
useful for monitoring critical storage requirements that must be acted on immediately.
Sensors will monitor the temperature inside the compartment and send these readings to your
application or an event hub on Azure. Anomaly Detector will evaluate the streaming data
points and determine if a point is an anomaly.
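A sketch of the corresponding last-point call, again using the v1.0 REST endpoint with placeholder resource names; here the service judges only the final point in the window you send:

Python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

def latest_point_is_anomaly(recent_points):
    """recent_points: list of {"timestamp", "value"} dicts, oldest first.
    The service needs a window of recent history (at least 12 points)."""
    body = {"granularity": "minutely", "series": recent_points}
    response = requests.post(
        f"{endpoint}/anomalydetector/v1.0/timeseries/last/detect",
        headers={"Ocp-Apim-Subscription-Key": key},
        json=body,
    )
    # For last-point detection, isAnomaly refers only to the final point sent
    return response.json()["isAnomaly"]

Each time a new carbo-cooler reading arrives, the application could append it to a sliding window of recent points and call the function, raising an alert when it returns True.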

Microsoft Azure AI Fundamentals: Explore knowledge mining

Introduction
Searching for information online has never been easier. However, it's still a challenge to find information in documents that aren't in a search index. For example, every day people deal with unstructured, typed, image-based, or hand-written documents. Often, people must manually read through these documents to extract their insights and record them somewhere so the data persists. Now we have solutions that can automate information extraction.

Knowledge mining is the term used to describe solutions that involve extracting information
from large volumes of often unstructured data. One of these knowledge mining solutions is
Azure Cognitive Search, a cloud search service that has tools for building user-managed
indexes. The indexes can be used for internal use only, or to enable searchable content on public-facing internet assets.

Importantly, Azure Cognitive Search can utilize the built-in AI capabilities of Azure
Cognitive Services such as image processing, content extraction, and natural language
processing to perform knowledge mining of documents. The product's AI capabilities make it possible to index previously unsearchable documents and to extract and surface insights from large amounts of data quickly.

Learning objectives

In this module, you will:


• Understand how Azure Cognitive Search uses cognitive skills
• Learn how indexers automate data ingestion steps, including JSON serialization
• Describe the purpose of a knowledge store
• Build and query a search index

What is Azure Cognitive Search?


Azure Cognitive Search provides the infrastructure and tools to create search solutions that extract data from various structured, semi-structured, and unstructured documents.

Azure Cognitive Search results contain only your data, which can include text inferred or extracted from images, or new entities and key phrases detected through text analytics. It's a
Platform as a Service (PaaS) solution. Microsoft manages the infrastructure and availability,
allowing your organization to benefit without the need to purchase or manage dedicated
hardware resources.

Azure Cognitive Search features

Azure Cognitive Search exists to complement existing technologies and provides a programmable search engine built on Apache Lucene, an open-source software library. It's a highly available platform offering a 99.9% uptime SLA for cloud and on-premises assets.

Azure Cognitive Search comes with the following features:

• Data from any source: Azure Cognitive Search accepts data from any source provided in
JSON format, with auto crawling support for selected data sources in Azure.
• Full text search and analysis: Azure Cognitive Search offers full text search capabilities
supporting both simple query and full Lucene query syntax.
• AI powered search: Azure Cognitive Search has Cognitive AI capabilities built in for image
and text analysis from raw content.
• Multi-lingual: Azure Cognitive Search offers linguistic analysis for 56 languages to
intelligently handle phonetic matching or language-specific linguistics. Natural language
processors available in Azure Cognitive Search are also used by Bing and Office.
• Geo-enabled: Azure Cognitive Search supports geo-search filtering based on proximity to a
physical location.
• Configurable user experience: Azure Cognitive Search has several features to improve the
user experience including autocomplete, autosuggest, pagination, and hit highlighting.

Identify elements of a search solution

A typical Azure Cognitive Search solution starts with a data source that contains the data
artifacts you want to search. This could be a hierarchy of folders and files in Azure Storage,
or text in a database such as Azure SQL Database or Azure Cosmos DB. The data format that
Cognitive Search supports is JSON. Regardless of where your data originates, if you can
provide it as a JSON document, the search engine can index it.

If your data resides in a supported data source, you can use an indexer to automate data ingestion, including JSON serialization of source data in native formats. An indexer connects to a data source, serializes the data, and passes it to the search engine for indexing. Most
indexers support change detection, which makes data refresh a simpler exercise.

Besides automating data ingestion, indexers also support AI enrichment. You can attach a
skillset that applies a sequence of AI skills to enrich the data, making it more searchable. A
comprehensive set of built-in skills, based on Cognitive Services APIs, can help you derive
new fields – for example by recognizing entities in text, translating text, evaluating sentiment,
or predicting appropriate captions for images. Optionally, enriched content can be sent to a
knowledge store, which stores output from an AI enrichment pipeline in tables and blobs in
Azure Storage for independent analysis or downstream processing.

Whether you write application code that pushes data to an index - or use an indexer that
automates data ingestion and adds AI enrichment - the fields containing your content are
persisted in an index, which can be searched by client applications. The fields are used for
searching, filtering, and sorting to generate a set of results that can be displayed or otherwise
used by the client application.

Use a skillset to define an enrichment pipeline

AI enrichment refers to embedded image and natural language processing in a pipeline that
extracts text and information from content that can't otherwise be indexed for full text search.

AI processing is achieved by adding and combining skills in a skillset. A skillset defines the
operations that extract and enrich data to make it searchable. These AI skills can be either
built-in skills, such as text translation or Optical Character Recognition (OCR), or custom
skills that you provide.

Built in skills

Built-in skills are based on pre-trained models from Microsoft, which means you can't train the model using your own training data. Skills that call the Cognitive Services APIs have a dependency on those services and are billed at the Cognitive Services pay-as-you-go price when you attach a resource. Other skills are metered by Azure Cognitive Search, or are utility skills that are available at no charge.

Built-in skills fall into these categories:

Natural language processing skills: with these skills, unstructured text is mapped as
searchable and filterable fields in an index.

Some examples include:

• Key Phrase Extraction: uses a pre-trained model to detect important phrases based on
term placement, linguistic rules, proximity to other terms, and how unusual the term is
within the source data.
• Text Translation Skill: uses a pre-trained model to translate the input text into various
languages for normalization or localization use cases.

Image processing skills: create text representations of image content, making it searchable using the query capabilities of Azure Cognitive Search.

Some examples include:

• Image Analysis Skill: uses an image detection algorithm to identify the content of an
image and generate a text description.
• Optical Character Recognition Skill: allows you to extract printed or handwritten text
from images, such as photos of street signs and products, as well as from
documents—invoices, bills, financial reports, articles, and more.
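To make this concrete, here is a rough sketch of how a skillset combining a key phrase extraction skill and an OCR skill might be created through the REST API. The @odata.type values are the documented built-in skill identifiers; the service name, key, and skillset name are placeholders, and the image field path assumes an indexer configured to extract image content:

Python
import requests

service = "https://<your-search-service>.search.windows.net"  # placeholder
headers = {"api-key": "<admin-key>", "Content-Type": "application/json"}

skillset = {
    "name": "demo-skillset",
    "description": "Extract key phrases from text and text from images",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyphrases"}],
        },
        {
            # Assumes the indexer generates normalized images from source files
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "ocrText"}],
        },
    ],
}

response = requests.put(
    f"{service}/skillsets/demo-skillset?api-version=2020-06-30",
    headers=headers,
    json=skillset,
)
print(response.status_code)  # 201 when created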

Understand indexes
An Azure Cognitive Search index can be thought of as a container of searchable documents. Conceptually, an index is like a table: each row in the table represents a document, the columns are equivalent to the fields in a document, and the columns have data types just as the fields do on the documents.
Index schema

In Azure Cognitive Search, an index is a persistent collection of JSON documents and other content used to enable search functionality. The documents within an index can be thought of as rows in a table; each document is a single unit of searchable data in the index.

The index includes a definition of the structure of the data in these documents, called its
schema. An example of an index schema with AI-extracted fields keyphrases and imageTags
is below:

JSON
{
  "name": "index",
  "fields": [
    {
      "name": "content",
      "type": "Edm.String",
      "analyzer": "standard.lucene",
      "fields": []
    },
    {
      "name": "keyphrases",
      "type": "Collection(Edm.String)",
      "analyzer": "standard.lucene",
      "fields": []
    },
    {
      "name": "imageTags",
      "type": "Collection(Edm.String)",
      "analyzer": "standard.lucene",
      "fields": []
    }
  ]
}
Index attributes

Azure Cognitive Search needs to know how you would like to search and display the fields in the documents. You specify that by assigning attributes, or behaviors, to these fields. For each field in the document, the index stores its name, the data type, and the supported behaviors for the field, such as whether the field is searchable or sortable.

The most efficient indexes use only the behaviors that are needed. If you forget to set a required behavior on a field when designing the index, the only way to get that feature is to rebuild the index.
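For example, a field's behaviors appear as boolean attributes in its definition. A sketch of a single field definition, shown as a Python dictionary with an illustrative field name:

Python
# A single field definition with explicit behaviors (illustrative name).
# Enable only the behaviors you need; extra behaviors increase index size.
hotel_name_field = {
    "name": "hotelName",
    "type": "Edm.String",
    "key": False,         # not the document key
    "searchable": True,   # included in full text search
    "filterable": True,   # usable in filter expressions
    "sortable": True,     # usable for ordering results
    "facetable": False,   # not used for facet counts
    "retrievable": True,  # returned in search results
}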

Use an indexer to build an index

To index the documents in Azure Storage, they need to be exported from their original file type to JSON. To export data in any format to JSON and load it into an index, we use an indexer.

To create search documents, you can either generate JSON documents with application code
or you can use Azure's indexer to export incoming documents into JSON.

Azure Cognitive Search lets you create and load JSON documents into an index with two
approaches:

• Push method: JSON data is pushed into a search index via either the REST API or
the .NET SDK. Pushing data has the most flexibility as it has no restrictions on the
data source type, location, or frequency of execution.
• Pull method: Search service indexers can pull data from popular Azure data sources,
and if necessary, export that data into JSON if it isn't already in that format.
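As a minimal sketch of the push method via the REST API (service name, key, index name, and documents are all placeholders):

Python
import requests

service = "https://<your-search-service>.search.windows.net"  # placeholder
headers = {"api-key": "<admin-key>", "Content-Type": "application/json"}

# Each document carries an action; "upload" adds or replaces the document
batch = {
    "value": [
        {"@search.action": "upload", "id": "1", "content": "First document text"},
        {"@search.action": "upload", "id": "2", "content": "Second document text"},
    ]
}

response = requests.post(
    f"{service}/indexes/demo-index/docs/index?api-version=2020-06-30",
    headers=headers,
    json=batch,
)
print(response.json())  # per-document success/failure status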

Use the pull method to load data with an indexer

Azure Cognitive Search's indexer is a crawler that extracts searchable text and metadata from
an external Azure data source and populates a search index using field-to-field mappings
between source data and your index. Using the indexer is sometimes referred to as a 'pull
model' approach because the service pulls data in without you having to write any code that
adds data to an index. An indexer maps source fields to their matching fields in the index.
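As a sketch, an indexer definition created through the REST API ties a data source to a target index and declares any field mappings. All names below are placeholders, except metadata_storage_name, which is a standard blob metadata field:

Python
import requests

service = "https://<your-search-service>.search.windows.net"  # placeholder
headers = {"api-key": "<admin-key>", "Content-Type": "application/json"}

indexer = {
    "name": "demo-indexer",
    "dataSourceName": "demo-datasource",  # a previously created data source
    "targetIndexName": "demo-index",      # a previously created index
    # Map a source field to an index field with a different name
    "fieldMappings": [
        {"sourceFieldName": "metadata_storage_name", "targetFieldName": "fileName"}
    ],
}

response = requests.put(
    f"{service}/indexers/demo-indexer?api-version=2020-06-30",
    headers=headers,
    json=indexer,
)
print(response.status_code)  # 201 when created
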
Data import monitoring and verification

The search service's overview page has a dashboard that lets you quickly see the health of the search service. On the dashboard, you can see how many documents are in the search service, how many indexes have been used, and how much storage is in use.

When loading new documents into an index, the progress can be monitored by clicking on the
index's associated indexer. The document count will grow as documents are loaded into the
index. In some instances, the portal page can take a few minutes to display up-to-date
document counts. Once the index is ready for querying, you can then use Search explorer to
verify the results. An index is ready when the first document is successfully loaded.

Indexers only import new or updated documents, so if nothing has changed since the last run, it is normal to see zero documents indexed.

The Search explorer can perform quick searches to check the contents of an index, and ensure
that you are getting expected search results. Having this tool available in the portal enables
you to easily check the index by reviewing the results that are returned as JSON documents.

Making changes to an index

You have to drop and recreate indexes if you need to make changes to existing field definitions. Adding new fields is supported, with all existing documents having null values for the new field. You'll find it faster to iterate your designs using a code-based approach, as working in the portal requires the index to be deleted, recreated, and the schema details to be manually filled out.

An approach to updating an index without affecting your users is to create a new index under
a different name. You can use the same indexer and data source. After importing data, you
can switch your app to use the new index.

Persist enriched data in a knowledge store


A knowledge store is persistent storage of enriched content. The purpose of a knowledge
store is to store the data generated from AI enrichment in a container. For example, you may
want to save the results of an AI skillset that generates captions from images.

Recall that skillsets move a document through a sequence of enrichments that invoke
transformations, such as recognizing entities or translating text. The outcome can be a search
index, or projections in a knowledge store. The two outputs, search index and knowledge
store, are mutually exclusive products of the same pipeline; derived from the same inputs, but
resulting in output that is structured, stored, and used in different applications.

While the focus of an Azure Cognitive Search solution is usually to create a searchable index,
you can also take advantage of its data extraction and enrichment capabilities to persist the
enriched data in a knowledge store for further analysis or processing.

A knowledge store can contain one or more of three types of projection of the extracted data:

• Table projections are used to structure the extracted data in a relational schema for
querying and visualization
• Object projections are JSON documents that represent each data entity
• File projections are used to store extracted images in JPG format
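These projections are declared in the knowledgeStore section of a skillset definition. A rough sketch of that shape, shown as a Python dictionary with placeholder container names, sources, and connection string:

Python
# knowledgeStore section of a skillset definition (placeholder values)
knowledge_store = {
    "storageConnectionString": "<azure-storage-connection-string>",
    "projections": [
        {
            # Table projections: relational shape for querying and visualization
            "tables": [
                {"tableName": "docsTable", "generatedKeyName": "docId",
                 "source": "/document/tableprojection"}
            ],
            "objects": [],
            "files": [],
        },
        {
            # Object projections: one JSON document per enriched data entity
            "tables": [],
            "objects": [{"storageContainer": "docsjson",
                         "source": "/document/objectprojection"}],
            "files": [],
        },
        {
            # File projections: extracted images stored as JPG files
            "tables": [],
            "objects": [],
            "files": [{"storageContainer": "docsimages",
                       "source": "/document/normalized_images/*"}],
        },
    ],
}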

Create an index in the Azure portal


Before using an indexer to create an index, you'll first need to make your data available in a
supported data source. Supported data sources include:

• Cosmos DB (SQL API)


• Azure SQL (database, managed instance, and SQL Server on an Azure VM)
• Azure Storage (Blob Storage, Table Storage, ADLS Gen2)

Using the Azure portal's Import data wizard

Once your data is in an Azure data source, you can begin using Azure Cognitive Search.
Contained within the Azure Cognitive Search service in Azure portal is the Import data
wizard, which automates processes in the Azure portal to create various objects needed for
the search engine. You can see it in action when creating any of the following objects using
the Azure portal:

• Data Source: Persists connection information to source data, including credentials. A data
source object is used exclusively with indexers.
• Index: Physical data structure used for full text search and other queries.
• Indexer: A configuration object specifying a data source, target index, an optional AI skillset,
optional schedule, and optional configuration settings for error handling and base-64
encoding.
• Skillset: A complete set of instructions for manipulating, transforming, and shaping content,
including analyzing and extracting information from image files. Except for very simple and
limited structures, it includes a reference to a Cognitive Services resource that provides
enrichment.
• Knowledge store: Stores output from an AI enrichment pipeline in tables and blobs in Azure
Storage for independent analysis or downstream processing.

To use Azure Cognitive Search, you'll need an Azure Cognitive Search resource. You can
create a resource in the Azure portal. Once the resource is created, you can manage
components of your service from the resource Overview page in the portal.

You can build Azure search indexes using the Azure portal or programmatically with the
REST API or software development kits (SDKs).
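For example, a schema like the one shown earlier can be created programmatically by sending it to the indexes endpoint. A minimal sketch with placeholder names:

Python
import requests

service = "https://<your-search-service>.search.windows.net"  # placeholder
headers = {"api-key": "<admin-key>", "Content-Type": "application/json"}

index = {
    "name": "demo-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        {"name": "keyphrases", "type": "Collection(Edm.String)", "searchable": True},
    ],
}

response = requests.put(
    f"{service}/indexes/demo-index?api-version=2020-06-30",
    headers=headers,
    json=index,
)
print(response.status_code)  # 201 when created
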
Query data in an Azure Cognitive Search index
Index and query design are closely linked. After we build the index, we can perform queries.
A crucial component to understand is that the schema of the index determines what queries
can be answered.

Azure Cognitive Search queries can be submitted as an HTTP or REST API request, with the
response coming back as JSON. Queries can specify what fields are searched and returned,
how search results are shaped, and how the results should be filtered or sorted. A query that
doesn't specify the field to search will execute against all the searchable fields within the
index.

Azure Cognitive Search supports two types of syntax: simple and full Lucene. Simple syntax
covers all of the common query scenarios, while full Lucene is useful for advanced scenarios.

Simple query requests

A query request is a list of words (search terms) and query operators (simple or full) describing what you would like to see returned in a result set. Let's look at what components make up a search query. Consider this simple search example:

coffee (-"busy" + "wifi")

This query is trying to find content about coffee, excluding busy and including wifi.

Breaking the query into components, it's made up of a search term, (coffee), plus two verbatim phrases, "busy" and "wifi", and operators (-, +, and ( )). The search terms can be matched in the search index in any order or location in the content. The two phrases will only match exactly what is specified, so wi-fi would not be a match. Finally, a query can contain a number of operators. In this example, the - operator tells the search engine that these phrases should NOT be in the results. The parentheses group terms together and set their precedence.

By default, the search engine will match any of the terms in the query. Content containing
just coffee would be a match. In this example, using -"busy" would lead to the search
results including all content that doesn't have the exact string "busy" in it.

The simple query syntax in Azure Cognitive Search excludes some of the more complex
features of the full Lucene query syntax, and it's the default search syntax for queries.
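As a sketch, the coffee query above could be submitted as a REST request like this (the service name, index name, and key are placeholders):

Python
import requests

service = "https://<your-search-service>.search.windows.net"  # placeholder
headers = {"api-key": "<query-key>", "Content-Type": "application/json"}

query = {
    "search": 'coffee (-"busy" + "wifi")',
    "queryType": "simple",  # the default; use "full" for full Lucene syntax
}

response = requests.post(
    f"{service}/indexes/demo-index/docs/search?api-version=2020-06-30",
    headers=headers,
    json=query,
)

# Matching documents are returned in the response's "value" array
for doc in response.json()["value"]:
    print(doc)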
