diff --git a/README.md b/README.md index 325b465..f2305ab 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,5 @@ +In the name of God, the Most Gracious, the Most Merciful +The official Caffe implementation of SimpleNet (2016) ## Lets Keep it simple, Using simple architectures to outperform deeper and more complex architectures (2016). ![GitHub Logo](/SimpNet_V1/images(plots)/SimpleNet_Arch_Larged.jpg) @@ -6,11 +8,9 @@ This repository contains the architectures, models, logs, etc. pertaining to the paper (Lets keep it simple: Using simple architectures to outperform deeper architectures): https://arxiv.org/abs/1608.06037 SimpleNet-V1 outperforms deeper and heavier architectures such as AlexNet, VGGNet, ResNet, GoogleNet, etc. on a series of benchmark datasets, such as CIFAR10/100, MNIST, and SVHN. -It also achievs a higher accuracy (currently [71.50/90.05 and 78.88/93.43*](https://github.com/Coderx7/SimpleNet_Pytorch#imagenet-result)) in imagenet, more than VGGNet, ResNet, MobileNet, AlexNet, NIN, Squeezenet, etc with only 5.7M parameters. +It also achieves higher accuracy on ImageNet (currently [**72.03/90.32**](https://github.com/Coderx7/SimpleNet_Pytorch#imagenet-result)) than VGGNet, ResNet, MobileNet, AlexNet, NIN, SqueezeNet, etc., with only 5.7M parameters, and reaches [**74.23/91.748**](https://github.com/Coderx7/SimpleNet_Pytorch#imagenet-result) with the 9M variant. Slimmer versions of the architecture also hold up very decently against more complex architectures such as ResNet, WRN, and MobileNet.
-*78.88/93.43 was achieved using real-imagenet-labels - ## Citation If you find SimpleNet useful in your research, please consider citing: @@ -23,27 +23,65 @@ If you find SimpleNet useful in your research, please consider citing: (Check the successor of this architecture at [Towards Principled Design of Deep Convolutional Networks: Introducing SimpNet](https://github.com/Coderx7/SimpNet)) - + ## Other Implementations : -**Pytorch** : -For using Pytorch implemnetation click [Pytorch implementation](https://github.com/Coderx7/SimpleNet_Pytorch) + Official [Pytorch implementation](https://github.com/Coderx7/SimpleNet_Pytorch) + ## Results Overview : -ImageNet result was achieved using simple SGD without hyper parameter tuning for 100 epochs(single crop). no multicrop techniques were used. no dense evaluation or combinations of such techniques were used unlike all other architectures. the models will be uploaded when the training is finished. +The ImageNet results below were achieved using the [Pytorch implementation](https://github.com/Coderx7/SimpleNet_Pytorch) | Dataset | Accuracy | |------------|----------| +| ImageNet-top1 (9m) | **74.23** | +| ImageNet-top1 (5m) | **72.03** | | Cifar10 | **95.51** | | CIFAR100* | **78.37**| | MNIST | 99.75 | | SVHN | 98.21 | -| ImageNet | **71.50/90.05 - 78.88/93.43*** | * Achieved using Pytorch implementation -* the second result achieved using real-imagenet-labels + +#### ImageNet Result: +SimpleNet outperforms much deeper and larger architectures on the ImageNet dataset: + +| **Model** | **Params** | **Top1** | **Top5** | +| :--------------- | :--------: | :-------: | :------: | +| AlexNet | 60M | 57.2 | 80.3 | +| SqueezeNet | 1.2M | 58.18 | 80.62 | +| VGGNet16 | 138M | 71.59 | 90.38 | +| VGGNet16_BN | 138M | 73.36 | 91.52 | +| VGGNet19 | 143M | 72.38 | 90.88 | +| VGGNet19_BN | 143M | 74.22 | 91.84 | +| GoogleNet | 6.6M | 69.78 | 89.53 | +| WResNet18 | 11.7M | 69.60 | 89.07 | +| ResNet18 | 11.7M | 69.76 | 89.08 | +| 
ResNet34 | 21.8M | 73.31 | 91.42 | +| **SimpleNet_small_050** | **1.5M** | **61.67** | **83.49** | +| **SimpleNet_small_075** | **3.2M** | **68.51** | **88.15** | +| **SimpleNet_5m** | **5.7M** | **72.03** | **90.32** | +| **SimpleNet_9m** | **9.5M** | **74.23** | **91.75** | + +#### Extended ImageNet Result: + +| **Model** | **\#Params** | **ImageNet** | **ImageNet-Real-Labels** | +| :--------------------------- | :----------: | :-----------: | :------------------: | +| simplenetv1_9m_m2(36.3 MB) | 9.5m | 74.23 / 91.748 | 81.22 / 94.756 | +| simplenetv1_5m_m2(22 MB) | 5.7m | 72.03 / 90.324 | 79.328/ 93.714 | +| simplenetv1_small_m2_075(12.6 MB) | 3m | 68.506/ 88.15 | 76.283/ 92.02 | +| simplenetv1_small_m2_05(5.78 MB) | 1.5m | 61.67 / 83.488 | 69.31 / 88.195 | + +SimpleNet performs very decently: it outperforms VGGNet, variants of ResNet, and MobileNets (V1-V3), +and it is quite fast as well, all with a plain old CNN! +To view the full benchmark results, visit the [benchmark page](https://github.com/Coderx7/SimpleNet_Pytorch/tree/master/ImageNet/training_scripts/imagenet_training/results). 
+For more results, check out the [Pytorch implementation page](https://github.com/Coderx7/SimpleNet_Pytorch) #### Top CIFAR10/100 results: @@ -104,17 +142,6 @@ achieved using an ensemble or extreme data-augmentation -#### ImageNet2012 results: - -| **Method** | **T1/T5 Accuracy Rate** | -| :----------------- | :---------------------: | -| AlexNet(60M) | 57.2/80.3 | -| VGGNet16(138M) | 70.5 | -| GoogleNet(8M) | 68.7 | -| Wide ResNet(11.7M) | 69.6/89.07 | -| SimpleNet(5.7M) | **71.50/90.05** | - - Table 6-Slimmed version Results on Different Datasets | **Model** | **Ours** | **Maxout** | **DSN** | **ALLCNN** | **dasNet** | **ResNet(32)** | **WRN** | **NIN** | @@ -160,7 +187,7 @@ Table 6-Slimmed version Results on Different Datasets | Recurrent CNN for Object Recognition | 92.91 | \- | | RCNN-160 | 92.91 | \- | | SimpleNet-Arch1 | 94.75 | 5.4m | -| SimpleNet-Arch1 using data augmentation | 95.32 | 5.4m | +| SimpleNet-Arch1 using data augmentation | 95.51 | 5.4m | #### CIFAR100 Extended results: @@ -179,7 +206,7 @@ Table 6-Slimmed version Results on Different Datasets | WRN | 77.11/79.5 | | Highway | 67.76 | | FitNet | 64.96 | -| SimpleNet | 77.83 | +| SimpleNet | 78.37 | @@ -220,4 +247,16 @@ achieve the reported accuracy. Statistics are obtained using 4# https://github.com/revilokeb/inception\_resnetv2\_caffe +#### Side Note: +This work is based on my Master's thesis, titled "Object classification using Deep Convolutional neural networks", back in 1394 (2015). 
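The "1.5m", "3m", "5m", and "9m" folder names quoted above can be sanity-checked against the layer shapes in the prototxt files deleted by this diff. A minimal sketch for the 1.5M variant, with the Convolution and InnerProduct shapes transcribed by hand from `simv1_1.5m_m1.prototxt` (all layers use `bias_term: true`; the 11x11 global max-pool reduces the final 128-channel map to 1x1 before the classifier):

```python
# Recompute the parameter count of simv1_1.5m_m1 from its layer shapes.

def conv_params(c_in, c_out, k):
    """Weights plus biases of a k x k convolution: c_in*c_out*k*k + c_out."""
    return c_in * c_out * k * k + c_out

# (c_in, c_out, kernel) for every Convolution layer, in prototxt order:
# Conv_0..Conv_10, then Conv_13..Conv_19, then the 1x1/1x1/3x3 tail.
convs = [
    (3, 32, 3), (32, 64, 3),
    (64, 64, 3), (64, 64, 3), (64, 64, 3), (64, 64, 3),
    (64, 128, 3), (128, 128, 3), (128, 128, 3), (128, 256, 3),
    (256, 1024, 1), (1024, 128, 1), (128, 128, 3),
]
total = sum(conv_params(ci, co, k) for ci, co, k in convs)
total += 128 * 1000 + 1000  # Gemm_30: InnerProduct from 128 features to 1000 classes

print(f"{total:,} parameters")  # roughly 1.5M, matching the folder name
```

The same bookkeeping applied to the 3m and 5m prototxts (channel widths 48/96/192/384/1536 and 64/128/256/512, respectively) recovers the other folder names.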
+ +## Citation +If you find SimpleNet useful in your research, please consider citing: + + @article{hasanpour2016lets, + title={Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures}, + author={Hasanpour, Seyyed Hossein and Rouhani, Mohammad and Fayyaz, Mohsen and Sabokrou, Mohammad}, + journal={arXiv preprint arXiv:1608.06037}, + year={2016} + } diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.caffemodel b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.caffemodel deleted file mode 100644 index a555f77..0000000 Binary files a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.caffemodel and /dev/null differ diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.prototxt b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.prototxt deleted file mode 100644 index 8dbbc91..0000000 --- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m1.prototxt +++ /dev/null @@ -1,387 +0,0 @@ -layer { - name: "data" - type: "Input" - top: "data" - input_param { - shape { - dim: 1 - dim: 3 - dim: 224 - dim: 224 - } - } -} -layer { - name: "Conv_0" - type: "Convolution" - bottom: "data" - top: "input.3" - convolution_param { - num_output: 32 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_1" - type: "ReLU" - bottom: "input.3" - top: "onnx::Conv_84" -} -layer { - name: "Conv_2" - type: "Convolution" - bottom: "onnx::Conv_84" - top: "input.11" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_3" - type: "ReLU" - bottom: "input.11" - top: "onnx::Conv_87" -} -layer { - name: "Conv_4" - type: "Convolution" - bottom: "onnx::Conv_87" - top: "input.19" - 
convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_5" - type: "ReLU" - bottom: "input.19" - top: "onnx::Conv_90" -} -layer { - name: "Conv_6" - type: "Convolution" - bottom: "onnx::Conv_90" - top: "input.27" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_7" - type: "ReLU" - bottom: "input.27" - top: "onnx::Conv_93" -} -layer { - name: "Conv_8" - type: "Convolution" - bottom: "onnx::Conv_93" - top: "input.35" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_9" - type: "ReLU" - bottom: "input.35" - top: "onnx::Conv_96" -} -layer { - name: "Conv_10" - type: "Convolution" - bottom: "onnx::Conv_96" - top: "input.43" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_11" - type: "ReLU" - bottom: "input.43" - top: "onnx::MaxPool_99" -} -layer { - name: "MaxPool_12" - type: "Pooling" - bottom: "onnx::MaxPool_99" - top: "input.47" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_13" - type: "Convolution" - bottom: "input.47" - top: "input.55" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_14" - type: "ReLU" - bottom: "input.55" - top: "onnx::Conv_103" -} -layer { - name: "Conv_15" - type: "Convolution" - bottom: "onnx::Conv_103" - top: "input.63" - convolution_param { - num_output: 128 - 
bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_16" - type: "ReLU" - bottom: "input.63" - top: "onnx::Conv_106" -} -layer { - name: "Conv_17" - type: "Convolution" - bottom: "onnx::Conv_106" - top: "input.71" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_18" - type: "ReLU" - bottom: "input.71" - top: "onnx::Conv_109" -} -layer { - name: "Conv_19" - type: "Convolution" - bottom: "onnx::Conv_109" - top: "input.79" - convolution_param { - num_output: 256 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_20" - type: "ReLU" - bottom: "input.79" - top: "onnx::MaxPool_112" -} -layer { - name: "MaxPool_21" - type: "Pooling" - bottom: "onnx::MaxPool_112" - top: "input.83" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_22" - type: "Convolution" - bottom: "input.83" - top: "input.91" - convolution_param { - num_output: 1024 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_23" - type: "ReLU" - bottom: "input.91" - top: "onnx::Conv_116" -} -layer { - name: "Conv_24" - type: "Convolution" - bottom: "onnx::Conv_116" - top: "input.99" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_25" - type: "ReLU" - bottom: "input.99" - top: "onnx::Conv_119" -} -layer { - name: "Conv_26" - type: "Convolution" - bottom: "onnx::Conv_119" - top: "input.107" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - 
pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_27" - type: "ReLU" - bottom: "input.107" - top: "onnx::MaxPool_122" -} -layer { - name: "MaxPool_28" - type: "Pooling" - bottom: "onnx::MaxPool_122" - top: "input.111" - pooling_param { - pool: MAX - kernel_h: 11 - kernel_w: 11 - stride_h: 11 - stride_w: 11 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Reshape_29" - type: "Flatten" - bottom: "input.111" - top: "input.115" -} -layer { - name: "Gemm_30" - type: "InnerProduct" - bottom: "input.115" - top: "pred" - inner_product_param { - num_output: 1000 - bias_term: true - } -} - diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.caffemodel b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.caffemodel deleted file mode 100644 index 8c5ddc2..0000000 Binary files a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.caffemodel and /dev/null differ diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.prototxt b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.prototxt deleted file mode 100644 index a1fd42b..0000000 --- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/1.5m/simv1_1.5m_m2.prototxt +++ /dev/null @@ -1,387 +0,0 @@ -layer { - name: "data" - type: "Input" - top: "data" - input_param { - shape { - dim: 1 - dim: 3 - dim: 224 - dim: 224 - } - } -} -layer { - name: "Conv_0" - type: "Convolution" - bottom: "data" - top: "input.3" - convolution_param { - num_output: 32 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_1" - type: "ReLU" - bottom: "input.3" - top: "onnx::Conv_84" -} -layer { - name: "Conv_2" - type: "Convolution" - bottom: "onnx::Conv_84" - top: "input.11" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - 
kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_3" - type: "ReLU" - bottom: "input.11" - top: "onnx::Conv_87" -} -layer { - name: "Conv_4" - type: "Convolution" - bottom: "onnx::Conv_87" - top: "input.19" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_5" - type: "ReLU" - bottom: "input.19" - top: "onnx::Conv_90" -} -layer { - name: "Conv_6" - type: "Convolution" - bottom: "onnx::Conv_90" - top: "input.27" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_7" - type: "ReLU" - bottom: "input.27" - top: "onnx::Conv_93" -} -layer { - name: "Conv_8" - type: "Convolution" - bottom: "onnx::Conv_93" - top: "input.35" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_9" - type: "ReLU" - bottom: "input.35" - top: "onnx::Conv_96" -} -layer { - name: "Conv_10" - type: "Convolution" - bottom: "onnx::Conv_96" - top: "input.43" - convolution_param { - num_output: 64 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_11" - type: "ReLU" - bottom: "input.43" - top: "onnx::MaxPool_99" -} -layer { - name: "MaxPool_12" - type: "Pooling" - bottom: "onnx::MaxPool_99" - top: "input.47" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_13" - type: "Convolution" - bottom: "input.47" - top: "input.55" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 
1 - } -} -layer { - name: "Relu_14" - type: "ReLU" - bottom: "input.55" - top: "onnx::Conv_103" -} -layer { - name: "Conv_15" - type: "Convolution" - bottom: "onnx::Conv_103" - top: "input.63" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_16" - type: "ReLU" - bottom: "input.63" - top: "onnx::Conv_106" -} -layer { - name: "Conv_17" - type: "Convolution" - bottom: "onnx::Conv_106" - top: "input.71" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_18" - type: "ReLU" - bottom: "input.71" - top: "onnx::Conv_109" -} -layer { - name: "Conv_19" - type: "Convolution" - bottom: "onnx::Conv_109" - top: "input.79" - convolution_param { - num_output: 256 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_20" - type: "ReLU" - bottom: "input.79" - top: "onnx::MaxPool_112" -} -layer { - name: "MaxPool_21" - type: "Pooling" - bottom: "onnx::MaxPool_112" - top: "input.83" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_22" - type: "Convolution" - bottom: "input.83" - top: "input.91" - convolution_param { - num_output: 1024 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_23" - type: "ReLU" - bottom: "input.91" - top: "onnx::Conv_116" -} -layer { - name: "Conv_24" - type: "Convolution" - bottom: "onnx::Conv_116" - top: "input.99" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: 
"Relu_25" - type: "ReLU" - bottom: "input.99" - top: "onnx::Conv_119" -} -layer { - name: "Conv_26" - type: "Convolution" - bottom: "onnx::Conv_119" - top: "input.107" - convolution_param { - num_output: 128 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_27" - type: "ReLU" - bottom: "input.107" - top: "onnx::MaxPool_122" -} -layer { - name: "MaxPool_28" - type: "Pooling" - bottom: "onnx::MaxPool_122" - top: "input.111" - pooling_param { - pool: MAX - kernel_h: 11 - kernel_w: 11 - stride_h: 11 - stride_w: 11 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Reshape_29" - type: "Flatten" - bottom: "input.111" - top: "input.115" -} -layer { - name: "Gemm_30" - type: "InnerProduct" - bottom: "input.115" - top: "pred" - inner_product_param { - num_output: 1000 - bias_term: true - } -} - diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.caffemodel b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.caffemodel deleted file mode 100644 index f19e008..0000000 Binary files a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.caffemodel and /dev/null differ diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.prototxt b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.prototxt deleted file mode 100644 index 9957c96..0000000 --- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m1.prototxt +++ /dev/null @@ -1,387 +0,0 @@ -layer { - name: "data" - type: "Input" - top: "data" - input_param { - shape { - dim: 1 - dim: 3 - dim: 224 - dim: 224 - } - } -} -layer { - name: "Conv_0" - type: "Convolution" - bottom: "data" - top: "input.3" - convolution_param { - num_output: 48 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_1" - type: "ReLU" - bottom: "input.3" - top: 
"onnx::Conv_84" -} -layer { - name: "Conv_2" - type: "Convolution" - bottom: "onnx::Conv_84" - top: "input.11" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_3" - type: "ReLU" - bottom: "input.11" - top: "onnx::Conv_87" -} -layer { - name: "Conv_4" - type: "Convolution" - bottom: "onnx::Conv_87" - top: "input.19" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_5" - type: "ReLU" - bottom: "input.19" - top: "onnx::Conv_90" -} -layer { - name: "Conv_6" - type: "Convolution" - bottom: "onnx::Conv_90" - top: "input.27" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_7" - type: "ReLU" - bottom: "input.27" - top: "onnx::Conv_93" -} -layer { - name: "Conv_8" - type: "Convolution" - bottom: "onnx::Conv_93" - top: "input.35" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_9" - type: "ReLU" - bottom: "input.35" - top: "onnx::Conv_96" -} -layer { - name: "Conv_10" - type: "Convolution" - bottom: "onnx::Conv_96" - top: "input.43" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_11" - type: "ReLU" - bottom: "input.43" - top: "onnx::MaxPool_99" -} -layer { - name: "MaxPool_12" - type: "Pooling" - bottom: "onnx::MaxPool_99" - top: "input.47" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_13" - 
type: "Convolution" - bottom: "input.47" - top: "input.55" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_14" - type: "ReLU" - bottom: "input.55" - top: "onnx::Conv_103" -} -layer { - name: "Conv_15" - type: "Convolution" - bottom: "onnx::Conv_103" - top: "input.63" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_16" - type: "ReLU" - bottom: "input.63" - top: "onnx::Conv_106" -} -layer { - name: "Conv_17" - type: "Convolution" - bottom: "onnx::Conv_106" - top: "input.71" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_18" - type: "ReLU" - bottom: "input.71" - top: "onnx::Conv_109" -} -layer { - name: "Conv_19" - type: "Convolution" - bottom: "onnx::Conv_109" - top: "input.79" - convolution_param { - num_output: 384 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_20" - type: "ReLU" - bottom: "input.79" - top: "onnx::MaxPool_112" -} -layer { - name: "MaxPool_21" - type: "Pooling" - bottom: "onnx::MaxPool_112" - top: "input.83" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_22" - type: "Convolution" - bottom: "input.83" - top: "input.91" - convolution_param { - num_output: 1536 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_23" - type: "ReLU" - bottom: "input.91" - top: "onnx::Conv_116" -} -layer { - name: "Conv_24" - type: "Convolution" - bottom: 
"onnx::Conv_116" - top: "input.99" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_25" - type: "ReLU" - bottom: "input.99" - top: "onnx::Conv_119" -} -layer { - name: "Conv_26" - type: "Convolution" - bottom: "onnx::Conv_119" - top: "input.107" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_27" - type: "ReLU" - bottom: "input.107" - top: "onnx::MaxPool_122" -} -layer { - name: "MaxPool_28" - type: "Pooling" - bottom: "onnx::MaxPool_122" - top: "input.111" - pooling_param { - pool: MAX - kernel_h: 11 - kernel_w: 11 - stride_h: 11 - stride_w: 11 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Reshape_29" - type: "Flatten" - bottom: "input.111" - top: "input.115" -} -layer { - name: "Gemm_30" - type: "InnerProduct" - bottom: "input.115" - top: "pred" - inner_product_param { - num_output: 1000 - bias_term: true - } -} - diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.caffemodel b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.caffemodel deleted file mode 100644 index 3134cb5..0000000 Binary files a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.caffemodel and /dev/null differ diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.prototxt b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.prototxt deleted file mode 100644 index 225407b..0000000 --- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/3m/simv1_3m_m2.prototxt +++ /dev/null @@ -1,387 +0,0 @@ -layer { - name: "data" - type: "Input" - top: "data" - input_param { - shape { - dim: 1 - dim: 3 - dim: 224 - dim: 224 - } - } -} -layer { - name: "Conv_0" - type: "Convolution" - bottom: "data" - top: "input.3" - 
convolution_param { - num_output: 48 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_1" - type: "ReLU" - bottom: "input.3" - top: "onnx::Conv_84" -} -layer { - name: "Conv_2" - type: "Convolution" - bottom: "onnx::Conv_84" - top: "input.11" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_3" - type: "ReLU" - bottom: "input.11" - top: "onnx::Conv_87" -} -layer { - name: "Conv_4" - type: "Convolution" - bottom: "onnx::Conv_87" - top: "input.19" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_5" - type: "ReLU" - bottom: "input.19" - top: "onnx::Conv_90" -} -layer { - name: "Conv_6" - type: "Convolution" - bottom: "onnx::Conv_90" - top: "input.27" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 2 - stride_w: 2 - dilation: 1 - } -} -layer { - name: "Relu_7" - type: "ReLU" - bottom: "input.27" - top: "onnx::Conv_93" -} -layer { - name: "Conv_8" - type: "Convolution" - bottom: "onnx::Conv_93" - top: "input.35" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_9" - type: "ReLU" - bottom: "input.35" - top: "onnx::Conv_96" -} -layer { - name: "Conv_10" - type: "Convolution" - bottom: "onnx::Conv_96" - top: "input.43" - convolution_param { - num_output: 96 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_11" - type: "ReLU" - bottom: "input.43" - top: "onnx::MaxPool_99" -} -layer 
{ - name: "MaxPool_12" - type: "Pooling" - bottom: "onnx::MaxPool_99" - top: "input.47" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_13" - type: "Convolution" - bottom: "input.47" - top: "input.55" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_14" - type: "ReLU" - bottom: "input.55" - top: "onnx::Conv_103" -} -layer { - name: "Conv_15" - type: "Convolution" - bottom: "onnx::Conv_103" - top: "input.63" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_16" - type: "ReLU" - bottom: "input.63" - top: "onnx::Conv_106" -} -layer { - name: "Conv_17" - type: "Convolution" - bottom: "onnx::Conv_106" - top: "input.71" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_18" - type: "ReLU" - bottom: "input.71" - top: "onnx::Conv_109" -} -layer { - name: "Conv_19" - type: "Convolution" - bottom: "onnx::Conv_109" - top: "input.79" - convolution_param { - num_output: 384 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_20" - type: "ReLU" - bottom: "input.79" - top: "onnx::MaxPool_112" -} -layer { - name: "MaxPool_21" - type: "Pooling" - bottom: "onnx::MaxPool_112" - top: "input.83" - pooling_param { - pool: MAX - kernel_h: 2 - kernel_w: 2 - stride_h: 2 - stride_w: 2 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Conv_22" - type: "Convolution" - bottom: "input.83" - top: "input.91" - convolution_param { - num_output: 1536 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 
- kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_23" - type: "ReLU" - bottom: "input.91" - top: "onnx::Conv_116" -} -layer { - name: "Conv_24" - type: "Convolution" - bottom: "onnx::Conv_116" - top: "input.99" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 1 - kernel_w: 1 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_25" - type: "ReLU" - bottom: "input.99" - top: "onnx::Conv_119" -} -layer { - name: "Conv_26" - type: "Convolution" - bottom: "onnx::Conv_119" - top: "input.107" - convolution_param { - num_output: 192 - bias_term: true - group: 1 - pad_h: 1 - pad_w: 1 - kernel_h: 3 - kernel_w: 3 - stride_h: 1 - stride_w: 1 - dilation: 1 - } -} -layer { - name: "Relu_27" - type: "ReLU" - bottom: "input.107" - top: "onnx::MaxPool_122" -} -layer { - name: "MaxPool_28" - type: "Pooling" - bottom: "onnx::MaxPool_122" - top: "input.111" - pooling_param { - pool: MAX - kernel_h: 11 - kernel_w: 11 - stride_h: 11 - stride_w: 11 - pad_h: 0 - pad_w: 0 - } -} -layer { - name: "Reshape_29" - type: "Flatten" - bottom: "input.111" - top: "input.115" -} -layer { - name: "Gemm_30" - type: "InnerProduct" - bottom: "input.115" - top: "pred" - inner_product_param { - num_output: 1000 - bias_term: true - } -} - diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.caffemodel b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.caffemodel deleted file mode 100644 index 507a985..0000000 Binary files a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.caffemodel and /dev/null differ diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.prototxt b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.prototxt deleted file mode 100644 index e5e0581..0000000 --- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/5m/simv1_5m_m2.prototxt +++ /dev/null @@ -1,387 +0,0 @@ 
-layer {
-  name: "data"
-  type: "Input"
-  top: "data"
-  input_param {
-    shape {
-      dim: 1
-      dim: 3
-      dim: 224
-      dim: 224
-    }
-  }
-}
-layer {
-  name: "Conv_0"
-  type: "Convolution"
-  bottom: "data"
-  top: "input.3"
-  convolution_param {
-    num_output: 64
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 2
-    stride_w: 2
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_1"
-  type: "ReLU"
-  bottom: "input.3"
-  top: "onnx::Conv_84"
-}
-layer {
-  name: "Conv_2"
-  type: "Convolution"
-  bottom: "onnx::Conv_84"
-  top: "input.11"
-  convolution_param {
-    num_output: 128
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 2
-    stride_w: 2
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_3"
-  type: "ReLU"
-  bottom: "input.11"
-  top: "onnx::Conv_87"
-}
-layer {
-  name: "Conv_4"
-  type: "Convolution"
-  bottom: "onnx::Conv_87"
-  top: "input.19"
-  convolution_param {
-    num_output: 128
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_5"
-  type: "ReLU"
-  bottom: "input.19"
-  top: "onnx::Conv_90"
-}
-layer {
-  name: "Conv_6"
-  type: "Convolution"
-  bottom: "onnx::Conv_90"
-  top: "input.27"
-  convolution_param {
-    num_output: 128
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 2
-    stride_w: 2
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_7"
-  type: "ReLU"
-  bottom: "input.27"
-  top: "onnx::Conv_93"
-}
-layer {
-  name: "Conv_8"
-  type: "Convolution"
-  bottom: "onnx::Conv_93"
-  top: "input.35"
-  convolution_param {
-    num_output: 128
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_9"
-  type: "ReLU"
-  bottom: "input.35"
-  top: "onnx::Conv_96"
-}
-layer {
-  name: "Conv_10"
-  type: "Convolution"
-  bottom: "onnx::Conv_96"
-  top: "input.43"
-  convolution_param {
-    num_output: 128
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_11"
-  type: "ReLU"
-  bottom: "input.43"
-  top: "onnx::MaxPool_99"
-}
-layer {
-  name: "MaxPool_12"
-  type: "Pooling"
-  bottom: "onnx::MaxPool_99"
-  top: "input.47"
-  pooling_param {
-    pool: MAX
-    kernel_h: 2
-    kernel_w: 2
-    stride_h: 2
-    stride_w: 2
-    pad_h: 0
-    pad_w: 0
-  }
-}
-layer {
-  name: "Conv_13"
-  type: "Convolution"
-  bottom: "input.47"
-  top: "input.55"
-  convolution_param {
-    num_output: 256
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_14"
-  type: "ReLU"
-  bottom: "input.55"
-  top: "onnx::Conv_103"
-}
-layer {
-  name: "Conv_15"
-  type: "Convolution"
-  bottom: "onnx::Conv_103"
-  top: "input.63"
-  convolution_param {
-    num_output: 256
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_16"
-  type: "ReLU"
-  bottom: "input.63"
-  top: "onnx::Conv_106"
-}
-layer {
-  name: "Conv_17"
-  type: "Convolution"
-  bottom: "onnx::Conv_106"
-  top: "input.71"
-  convolution_param {
-    num_output: 256
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_18"
-  type: "ReLU"
-  bottom: "input.71"
-  top: "onnx::Conv_109"
-}
-layer {
-  name: "Conv_19"
-  type: "Convolution"
-  bottom: "onnx::Conv_109"
-  top: "input.79"
-  convolution_param {
-    num_output: 512
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_20"
-  type: "ReLU"
-  bottom: "input.79"
-  top: "onnx::MaxPool_112"
-}
-layer {
-  name: "MaxPool_21"
-  type: "Pooling"
-  bottom: "onnx::MaxPool_112"
-  top: "input.83"
-  pooling_param {
-    pool: MAX
-    kernel_h: 2
-    kernel_w: 2
-    stride_h: 2
-    stride_w: 2
-    pad_h: 0
-    pad_w: 0
-  }
-}
-layer {
-  name: "Conv_22"
-  type: "Convolution"
-  bottom: "input.83"
-  top: "input.91"
-  convolution_param {
-    num_output: 2048
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 1
-    kernel_w: 1
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_23"
-  type: "ReLU"
-  bottom: "input.91"
-  top: "onnx::Conv_116"
-}
-layer {
-  name: "Conv_24"
-  type: "Convolution"
-  bottom: "onnx::Conv_116"
-  top: "input.99"
-  convolution_param {
-    num_output: 256
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 1
-    kernel_w: 1
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_25"
-  type: "ReLU"
-  bottom: "input.99"
-  top: "onnx::Conv_119"
-}
-layer {
-  name: "Conv_26"
-  type: "Convolution"
-  bottom: "onnx::Conv_119"
-  top: "input.107"
-  convolution_param {
-    num_output: 256
-    bias_term: true
-    group: 1
-    pad_h: 1
-    pad_w: 1
-    kernel_h: 3
-    kernel_w: 3
-    stride_h: 1
-    stride_w: 1
-    dilation: 1
-  }
-}
-layer {
-  name: "Relu_27"
-  type: "ReLU"
-  bottom: "input.107"
-  top: "onnx::MaxPool_122"
-}
-layer {
-  name: "MaxPool_28"
-  type: "Pooling"
-  bottom: "onnx::MaxPool_122"
-  top: "input.111"
-  pooling_param {
-    pool: MAX
-    kernel_h: 11
-    kernel_w: 11
-    stride_h: 11
-    stride_w: 11
-    pad_h: 0
-    pad_w: 0
-  }
-}
-layer {
-  name: "Reshape_29"
-  type: "Flatten"
-  bottom: "input.111"
-  top: "input.115"
-}
-layer {
-  name: "Gemm_30"
-  type: "InnerProduct"
-  bottom: "input.115"
-  top: "pred"
-  inner_product_param {
-    num_output: 1000
-    bias_term: true
-  }
-}
-
diff --git a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/readme.md b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/readme.md
index 1ea7825..58b44ea 100644
--- a/SimpNet_V1/Benchmarks Results with Models/IMAGENET/readme.md
+++ b/SimpNet_V1/Benchmarks Results with Models/IMAGENET/readme.md
@@ -1,4 +1,8 @@
+### Pretrained weights
+All pretrained weights are now accessible from the [Release section](https://github.com/Coderx7/SimpleNet/releases) of the repository.
+
+### Note
 Please note that the models are converted from onnx to caffe. The mean, std and crop ratio used are as follows:
 
 ```python
@@ -10,3 +14,4 @@
 Also note that images were not channel swapped during training, so you don't need to do a channel swap.
 You also DO NOT need to rescale the input to [0-255].
+For the original models, see the official Pytorch implementation [here](https://github.com/Coderx7/SimpleNet_Pytorch).
diff --git a/SimpNet_V1/images(plots)/SimpleNet_Arch_Larged.jpg b/SimpNet_V1/images(plots)/SimpleNet_Arch_Larged.jpg
index a05616d..e3bc714 100644
Binary files a/SimpNet_V1/images(plots)/SimpleNet_Arch_Larged.jpg and b/SimpNet_V1/images(plots)/SimpleNet_Arch_Larged.jpg differ
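The readme hunks above describe the preprocessing for the ONNX-to-Caffe converted models: RGB channel order (no channel swap), inputs kept in [0, 1] (no rescale to [0-255]), then per-channel mean/std normalization and a center crop. The sketch below illustrates that pipeline; the `MEAN`, `STD`, and `CROP_RATIO` values are placeholders (the diff elides the repository's actual numbers), so substitute the published values before use.

```python
# Hedged sketch of the preprocessing described in the readme diff above.
# MEAN, STD and CROP_RATIO are PLACEHOLDER assumptions, not the repo's
# actual values -- replace them with the numbers published in the readme.
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406])  # placeholder per-channel mean (RGB)
STD = np.array([0.229, 0.224, 0.225])   # placeholder per-channel std (RGB)
CROP_RATIO = 0.875                      # placeholder crop ratio
CROP_SIZE = 224                         # input size from the prototxt (1x3x224x224)


def preprocess(img_rgb01: np.ndarray) -> np.ndarray:
    """img_rgb01: HxWx3 float array in [0, 1], RGB order (no channel swap)."""
    h, w, _ = img_rgb01.shape
    # Center-crop to CROP_SIZE x CROP_SIZE (assumes the image was resized
    # to roughly CROP_SIZE / CROP_RATIO on the short side beforehand).
    top = (h - CROP_SIZE) // 2
    left = (w - CROP_SIZE) // 2
    img = img_rgb01[top:top + CROP_SIZE, left:left + CROP_SIZE, :]
    # Per-channel normalization; note: no * 255 rescale, per the readme.
    img = (img - MEAN) / STD
    # Caffe consumes NCHW blobs, matching the prototxt's data layer.
    return img.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)


resize = int(round(CROP_SIZE / CROP_RATIO))  # 256 for a 0.875 ratio
blob = preprocess(np.random.rand(resize, resize, 3))
print(blob.shape)  # (1, 3, 224, 224)
```

The resulting blob can be fed directly to the converted model's `data` input; only the mean/std/crop constants need to change to match the values listed in the readme.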