In `examples/camvid_segmentation_multiclass.ipynb`, `CamVidModel` (and other examples) does not scale the image to [0, 1] before normalizing:

```python
def forward(self, image):
    # normalize image here
    print("param: ", self.mean, self.std)
    print("before:", image[0, :, 100, 100])
    image = (image - self.mean) / self.std
    print("after:", image[0, :, 100, 100])
    mask = self.model(image)
    return mask
```

The result is:

```
param:  tensor([[[[0.4850]], [[0.4560]], [[0.4060]]]]) tensor([[[[0.2290]], [[0.2240]], [[0.2250]]]])
before: tensor([59, 55, 55], dtype=torch.uint8)
after:  tensor([255.5240, 243.5000, 242.6400])
```

The mean/std here are the ImageNet statistics for inputs in [0, 1], but the incoming tensor is still uint8 in [0, 255], so the "normalized" values land around 250 instead of roughly [-2, 2].
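A possible fix (a minimal sketch, assuming the dataloader really does hand the model a uint8 tensor as in the output above) is to cast to float and divide by 255 before applying the mean/std:

```python
def forward(self, image):
    # Scale uint8 input from [0, 255] to [0, 1] before normalizing
    # (assumption: the dataset yields uint8 CHW tensors, as the debug
    # output above suggests).
    image = image.float() / 255.0
    # Apply the ImageNet mean/std, which are defined for [0, 1] inputs.
    image = (image - self.mean) / self.std
    mask = self.model(image)
    return mask
```

Alternatively, the scaling could be done in the dataset/transform pipeline instead of in `forward`, as long as it happens exactly once before the mean/std normalization.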