Report

Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 15

Illustrate

This code is a simple semantic segmentation model training and testing process, using
the PyTorch deep learning framework in Python and some related libraries. Here's a
summary of the key points from this code:Libraries used
• from dataset import camvidLoader: Import a custom CamVid dataset loader.
• import numpy as np: Import the NumPy library to handle array operations.
• from dataset import from_label_to_rgb: Import a function that converts tags to
RGB images.
• from skimage.io import imsave: Import a function for saving the image.
• import os: Import an operating system-related library for file path operations.
• from unet import UNet: Import a custom U-Net model.
• import torch: Import the PyTorch deep learning framework.
• from torch.utils.data import DataLoader: Import the tools in PyTorch for data
loading.

Actions taken

1. Set parameters and paths:


Set the dataset root directory data_root, number of categories num_classes, and label
name list.
Set the learning rate lr, batch size batch_size, number of worker processes
num_workers, number of training cycles epochs, and device used.
Set the directory where the training results, outputs, and weights are saved.
2. Data loading:
Use camvidLoader to load training, validation, and test datasets, and DataLoader for
bulk data loading.
3. Model building and optimizer definition:
Initialize the U-Net model and move it to the appropriate device.
Define the cross-entropy loss function and the Adam optimizer.
4. Training loops:
For each training cycle, traverse the training dataset.
Forward propagation, backpropagation, and parameter updates are performed for each
batch, and the training loss is calculated.
For each validation batch, the validation loss is calculated, and a partial validation
result image is saved
Print the training loss and validation loss for each epoch, and save the model weights.
5. Test loops:
Load saved model weights.
Inference is performed on the test dataset and the test loss is calculated.
Save an image of the test result.

The core part of the code

This code implements a training, validation, and testing process for a semantic
segmentation model based on U-Net. By iterating the training data in a loop, the
model gradually learns to improve its semantic segmentation effect on images. At the
end of each cycle, the model is evaluated on a validation dataset to understand its
generalization ability. Finally, inference is performed on the test dataset, the
performance of the model is evaluated, and the resulting images are saved.

from dataset import camvidLoader

import numpy as np

from dataset import from_label_to_rgb

from skimage.io import imsave

import os

from unet import UNet

import torch

from torch.utils.data import DataLoader

data_root = 'CamVid/sunny'

num_classes = 14
labels = ['Sky', 'Building', 'Pole', 'Road', 'LaneMarking',

'SideWalk', 'Pavement', 'Tree',

'SignSymbol', 'Fence', 'Car_Bus', 'Pedestrian',

'Bicyclist', 'Others']

lr = 0.0002

batch_size = 4

num_workers = 0

lr = 5e-6

epochs = 300

# device = 'cpu'

device = 'cuda' if torch.cuda.is_available() else 'cpu'

train_result_dir = 'train_internal_data//'

result_dir = "output//"

weight_dir = "weights//"

if not os.path.exists(train_result_dir):

os.makedirs(train_result_dir)

if not os.path.exists(result_dir):

os.makedirs(result_dir)
if not os.path.exists(weight_dir):

os.makedirs(weight_dir)

train_dataset = camvidLoader(root=data_root,

split='train', is_aug=False, img_size = [256, 256],

is_pytorch_transform = True, aug = None)

train_loader = DataLoader(train_dataset,

num_workers=num_workers, batch_size=batch_size,

shuffle=True, drop_last=True)

val_dataset = camvidLoader(root=data_root,

split='val', is_aug=False, img_size = [256, 256],

is_pytorch_transform = True, aug = None)

val_loader = DataLoader(val_dataset,

num_workers=num_workers, batch_size=batch_size,

shuffle=True, drop_last=True)

test_dataset = camvidLoader(root=data_root,

split='test', is_aug=False, img_size = [256, 256],

is_pytorch_transform = True, aug = None)

test_loader = DataLoader(val_dataset,
num_workers=num_workers, batch_size=batch_size,

shuffle=True, drop_last=True)

unet = UNet(3, num_classes, width=32, bilinear=True)

unet = unet.to(device)

loss_func = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(unet.parameters(),

lr=lr)

for epoch in range(epochs):

unet.train()

train_loss = 0

count = 0

for idx_batch, (imagergb, labelmask, filename) in

enumerate(train_loader):

optimizer.zero_grad()

x = imagergb.to(device)

y_ = labelmask.to(device)

y = unet(x)

loss = loss_func(y, y_)


loss.backward()

optimizer.step()

if idx_batch % 10 == 0:

print("train epoch = " + str(epoch) + " | batch

= " + str(idx_batch))

train_loss += loss.item()

count += 1

train_loss /=count

unet.eval()

val_loss = 0

count = 0

for idx_batch, (imagergb, labelmask, filename) in

enumerate(val_loader):

with torch.no_grad():

x = imagergb.to(device)

y_ = labelmask.to(device)

y = unet(x)

loss = loss_func(y, y_)

val_loss += loss.item()

count += 1

if idx_batch % 5 == 0:
for idx in range(0, y.shape[0]):

name = filename[idx]

max_index = torch.argmax(y[idx],

dim=0).cpu().int().numpy()

label_rgb_vis =

from_label_to_rgb(max_index)

gt_correct_format =

y_[idx].cpu().int().numpy()

gt_vis =

from_label_to_rgb(gt_correct_format)

result_vis =

np.concatenate((label_rgb_vis, gt_vis), axis=1)

result_vis_for_saving = (result_vis *

255.0).astype(np.uint8) # added missing parenthesis

imsave(train_result_dir +

"valdation_seg_result_{}_{}_{}_{}.png".format(epoch

, idx_batch, idx, name), result_vis_for_saving)

val_loss /= count

print('epoch ' + str(epoch) + ', training loss = ' +

str(train_loss) + ', val loss = ' + str(val_loss))

model_location = weight_dir + "nearest.pt"

torch.save(unet, model_location)
unet = torch.load(weight_dir + "nearest.pt")

unet = unet.to(device)

unet.eval()

test_loss = 0

count = 0

for idx_batch, (imagergb, labelmask, filename) in

enumerate(test_loader):

with torch.no_grad():

x = imagergb.to(device)

y_ = labelmask.to(device)

y = unet(x)

loss = loss_func(y, y_)

test_loss += loss.item()

count += 1

for idx in range(0, y.shape[0]):

name = filename[idx]

max_index = torch.argmax(y[idx],

dim=0).cpu().int().numpy()

label_rgb_vis = from_label_to_rgb(max_index)
gt_correct_format = y_[idx].cpu().int().numpy()

gt_vis = from_label_to_rgb(gt_correct_format)

result_vis_for_saving = (label_rgb_vis *

255.0).astype(np.uint8) # added missing parenthesis

imsave(result_dir + "%s.png"%name,

result_vis_for_saving)

test_loss /= count

print('epoch ' + str(epoch) + ', test loss = ' +

str(test_loss))

The U-Net model is a deep learning architecture commonly used for image
segmentation tasks, which is characterized by a U-shaped network structure
composed of symmetrical encoders and decoders in order to capture features at
different scales and achieve fine segmentation results. Originally used for biomedical
image segmentation tasks, such as cell and organ segmentation, the model initially
included accurate image segmentation of smaller datasets and relatively simple
scenes, and the ability to handle subtle edges and details, but with the further
improvement and expansion of the U-Net model, it has been successfully applied to a
wider range of image segmentation tasks, including natural images, remote sensing
images, and other fields, showing more powerful performance and applicability.

Fine-tuning is a transfer learning technique commonly used in deep learning, which


aims to adapt the parameters of a pre-trained model to a new task or domain based on
the parameters of the model
1. Select a pre-trained model: Select a pre-trained deep learning model on a large-
scale dataset as the base model, and pre-train a model for classification tasks on a
large dataset such as ImageNet, such as ResNet, VGG, Inception, etc.
2. Freeze some layers: During the fine-tuning process, some layers of the pre-trained
model will be frozen, especially the underlying feature extractor part.
3. Define a new task or domain: Determine the dataset of the new task domain and the
associated label information, whether it is classification, object detection, semantic
segmentation, etc., or it can be a specific prediction task on the dataset of different
domains.
4. Update parameters: For the portion above the frozen layer (usually the top-level
classifier or decoder part), a smaller learning rate can be used by training on a new
dataset, adjusting the model parameters to fit the new task or domain, to prevent
breaking the representation of features that the pre-trained model has already learned.
5. Evaluate performance: After the training is complete, evaluate the fine-tuned model
on the validation set or test set to check its performance on new tasks or domains.

Unsupervised domain adaptation is a technique that adapts a model to a target domain


without the need for labeled data.
1. Pseudo-ground truth generation: Unsupervised domain adaptive technology,
especially widely used in tasks such as image segmentation, generates pseudo-labels
similar to real labels (pseudo-ground truth) by using existing models or algorithms to
process the data in the target domain. For example, in an image segmentation task, a
pre-trained segmentation model can be used to segment images in the target domain,
and then the segmentation result is used as a pseudolabel to train the model in the
target domain.
2. Adaptive Batch Standardization: Batch standardization is a commonly used
technique in deep learning that can speed up the training process and improve the
stability of the model. Adaptive batch standardization is to improve batch
standardization, so that it can adapt to the data distribution of different domains or
tasks, by calculating the mean and variance on the data of the target domain, and
using them for the parameter adjustment of batch standardization, the model can
better adapt to the data characteristics of the target domain, and improve the
generalization ability and performance. These unsupervised domain adaptation
techniques can help the model better adapt to the data characteristics of the target
domain in the absence of a large amount of labeled data, so as to improve the
generalization ability and performance of the model.

Experimental setup involves experimental data, model selection, hyperparameter


setting, evaluation metrics, and result analysis. Here are some key points to set up
your experiment:
1. Dataset selection: Choosing an appropriate dataset is critical to the credibility of the
experimental results. The dataset should be representative of the characteristics of the
target task or domain, with a certain diversity and representativeness, and at the same
time, the size and quality of the dataset need to be considered to ensure the reliability
of the experimental results.
2. Model selection: According to the nature and requirements of the task, select the
appropriate model architecture as the baseline model or comparison model, including
the classic model architecture (such as ResNet, VGG, etc.)
3. Hyperparameter setting: For deep learning models, the setting of hyperparameters
is crucial to the performance and convergence speed of the model, including the
learning rate, batch size, optimizer type, weight decay and other hyperparameters, all
of which need to be reasonably adjusted and optimized. The optimal hyperparameter
settings can be determined through grid search, random search, or experience-based
adjustments.
4. Evaluation metrics: For tasks, the evaluation metrics include accuracy, precision,
recall, F1 value, etc., and for segmentation tasks, you can use pixel accuracy, IoU,
Dice coefficient, etc., and select the appropriate evaluation metrics to more accurately
evaluate the performance of the model.
5. Experimental design: Various factors that need to be considered in the process of
designing experiments, such as the division of the training set, validation set, and test
set, data augmentation strategy, and the number of iterations of model training. At the
same time, appropriate comparative experiments should be performed to compare the
performance differences between different models or methods, and statistical analysis
should be performed to verify the significance of the results.
6. Result analysis: Through in-depth analysis of experimental results, the quantitative
evaluation of model performance, the analysis of error cases, and the interpretability
of model prediction results, we can better understand the behavior and performance
bottlenecks of the model and provide guidance for further research.

Street View image segmentation is a challenging task with the following complexities:
1. Complex scene structure: Street View images usually contain a variety of complex
scene structures, such as buildings, roads, pedestrians, vehicles, etc., and there may be
occlusion, overlap, and irregular shapes between these objects, which increases the
difficulty of segmentation algorithms.
2. Lighting and weather changes: Street View imagery is affected by weather and
lighting conditions, and there may be different situations such as bright, dark, and
cloudy. This change causes changes in color, texture, and contrast in the image,
making it difficult for segmentation algorithms to accurately distinguish between
different objects and areas.
3. Perspective and scale changes: Street View images are often taken from different
perspectives and scales, including close-up, long-range and various angles, which
changes the shape, size and proportion of objects, bringing challenges to segmentation
algorithms.
4. Category richness and category imbalance: A variety of categories of objects are
covered in Street View images, such as buildings, trees, sky, roads, etc., and there may
be an imbalance in their number and proportion, which leads to bias in the learning
and recognition of different categories in the model, affecting the accuracy and
robustness of segmentation.

For bright and cloudy datasets, its failure case image might behave like this:
1. Uneven lighting: In bright and cloudy situations, there may be strong uneven
lighting in the image, resulting in overexposure or obvious shadows in some areas,
making it difficult for the segmentation algorithm to accurately extract object
boundaries and detail information.
2. Color distortion: The change of lighting conditions may lead to color distortion in
the image, which makes the colors in different areas deviate from the real situation,
which affects the recognition and utilization of color features by the model and
reduces the accuracy of segmentation.
3. Blurred details: Images in cloudy weather may cause details to be blurred in some
areas, especially in distant or dimly lit places, making it difficult for the model to
accurately distinguish between object boundaries and texture information, thus
affecting the accuracy of segmentation.

Solving these challenges requires a combination of deep learning models, data


augmentation techniques, and domain adaptation methods to improve the model's
adaptability to different scenes and lighting conditions, resulting in more accurate and
robust Street View image segmentation. Model fine-tuning and domain adaptation
strategies have important effectiveness in Street View image segmentation, especially
in the face of different scenes, lighting conditions, and climate change.
Effectiveness of model fine-tuning:
1. Strong adaptability: Model fine-tuning can make the model better adapt to specific
street view image datasets by performing end-to-end fine-tuning on the data in the
target domain, and improve the accuracy and generalization ability of segmentation.
2. Resource efficiency: Compared to training a complete deep learning model from
scratch, model fine-tuning usually requires less computing resources and training
time, because the underlying model has been pre-trained on a large-scale dataset and
has better initial parameters.
3. Flexibility: Model fine-tuning can be flexibly applied to a variety of different Street
View image datasets, regardless of their size, resolution, or category distribution, thus
providing customized solutions for different application scenarios.
Effectiveness of Domain Adaptation Strategies:
1. Generalization ability: The domain adaptation strategy can help the model to
effectively transfer knowledge between the training domain and the test domain,
thereby improving the model's generalization ability in the target domain, especially
in the face of changes such as scenes, lighting, and weather.
2. Data augmentation: Domain adaptation strategies can combine data augmentation
techniques, such as random rotation, scaling, and cropping, to generate more diverse
and challenging training samples, thereby improving the model's adaptability to
various complex situations.
3. Transfer learning: The domain adaptation strategy can take advantage of the
similarity between the source domain and the target domain to apply the knowledge
and experience of the source domain to the target domain through transfer learning, so
as to accelerate the convergence and optimization process of the model.
Future Pathways:
1. Multimodal fusion: In the future, computer vision street view segmentation can
combine multiple sensor data, such as images, lidars, and high-precision maps, to
perform multimodal fusion, thereby improving the accuracy and robustness of
segmentation.
2. Self-supervised learning: Self-supervised learning can learn image features and
representations from large-scale data in an unsupervised way, thereby reducing the
dependence on labeled data and improving the performance of the model on street
view segmentation tasks.
3. Cross-domain transfer: Future research can explore the method of cross-domain
transfer learning to migrate the model from other related fields, such as natural image
segmentation or medical image segmentation, to the Street View image segmentation
task, so as to improve the generalization ability and adaptability of the model.

Results

The test loss value is 0.555295467376709. The loss function is commonly used to
measure the discrepancy between the model's predictions and the actual labels, with a
lower loss indicating better model performance.

The precision of the model is reported as 0.9596610923690647. Precision is a metric


in model evaluation that denotes the ratio of true positive samples identified by the
model to all samples identified as positives.

The recall rate of the model is 0.9775319386082872. This metric measures the
proportion of actual positive samples correctly identified by the model out of all
actual positive samples.

The F1 Score is 0.9835591230293098, representing the harmonic average of precision


and recall. An F1 score close to 1 generally signifies excellent model performance in
terms of both precision and recall.

The test mean Intersection over Union (mIoU) achieved by the model is
0.762967828608658. mIoU is a crucial metric in image segmentation tasks,
quantifying the average overlap ratio between the predicted regions and the actual
regions.
Conclutions

The evaluation metrics provided indicate a high-performance model across various


indices. The test loss value of 0.555 indicates a reasonable level of prediction
accuracy compared to the actual data, which suggests effective learning although
there's potential for further optimization to reduce the loss. The precision of 0.989
highlights the model's accuracy in identifying true positives from the positives it has
predicted. Similarly, a recall of 0.977 ensures that the model is reliable in capturing a
high proportion of actual positive cases.

The F1 score of 0.983 consolidates the model’s balanced capability in both precision
and recall, which is crucial for models where both the type I and type II errors need to
be minimized. Furthermore, the mIoU score of 0.763 in a task like image
segmentation underscores the model's efficacy in distinguishing and overlapping the
predicted areas with the actual ones, which is pivotal in visual data interpretations.

These metrics collectively suggest that the model performs robustly with high
accuracy and reliability in recognizing and predicting correct labels in its specified
tasks. However, ongoing improvements and adjustments might be needed to address
any potential underfitting indicated by the loss value and to enhance the model's
ability to generalize better to unseen data.

You might also like