
Commit bc7f6fc

adl1995 authored and alalek committed
Merge pull request opencv#8253 from adl1995:master
* Update linux_install.markdown: Grammar improvements, fixed typos.
* Update tutorials.markdown: Improvements in grammar.
* Update table_of_content_calib3d.markdown
* Update camera_calibration_square_chess.markdown: Improvements in grammar. Added answer.
* Update tutorials.markdown
* Update erosion_dilatation.markdown
* Update table_of_content_imgproc.markdown
* Update warp_affine.markdown
* Update camera_calibration_square_chess.markdown: Removed extra space.
* Update gpu_basics_similarity.markdown: Grammatical improvements, fixed typos.
* Update trackbar.markdown: Improvement for better understanding.
1 parent da0b1d8 commit bc7f6fc

File tree: 9 files changed (+102, -102 lines changed)

doc/tutorials/calib3d/camera_calibration_square_chess/camera_calibration_square_chess.markdown

Lines changed: 11 additions & 10 deletions
@@ -5,7 +5,7 @@ The goal of this tutorial is to learn how to calibrate a camera given a set of c
 
 *Test data*: use images in your data/chess folder.
 
-- Compile opencv with samples by setting BUILD_EXAMPLES to ON in cmake configuration.
+- Compile OpenCV with samples by setting BUILD_EXAMPLES to ON in cmake configuration.
 
 - Go to bin folder and use imagelist_creator to create an XML/YAML list of your images.
 
@@ -14,32 +14,32 @@ The goal of this tutorial is to learn how to calibrate a camera given a set of c
 Pose estimation
 ---------------
 
-Now, let us write a code that detects a chessboard in a new image and finds its distance from the
-camera. You can apply the same method to any object with known 3D geometry that you can detect in an
+Now, let us write code that detects a chessboard in an image and finds its distance from the
+camera. You can apply this method to any object with known 3D geometry; which you detect in an
 image.
 
 *Test data*: use chess_test\*.jpg images from your data folder.
 
-- Create an empty console project. Load a test image: :
+- Create an empty console project. Load a test image :
 
 Mat img = imread(argv[1], IMREAD_GRAYSCALE);
 
-- Detect a chessboard in this image using findChessboard function. :
+- Detect a chessboard in this image using findChessboard function :
 
 bool found = findChessboardCorners( img, boardSize, ptvec, CALIB_CB_ADAPTIVE_THRESH );
 
 - Now, write a function that generates a vector\<Point3f\> array of 3d coordinates of a chessboard
 in any coordinate system. For simplicity, let us choose a system such that one of the chessboard
-corners is in the origin and the board is in the plane *z = 0*.
+corners is in the origin and the board is in the plane *z = 0*
 
-- Read camera parameters from XML/YAML file: :
+- Read camera parameters from XML/YAML file :
 
-FileStorage fs(filename, FileStorage::READ);
+FileStorage fs( filename, FileStorage::READ );
 Mat intrinsics, distortion;
 fs["camera_matrix"] >> intrinsics;
 fs["distortion_coefficients"] >> distortion;
 
-- Now we are ready to find chessboard pose by running \`solvePnP\`: :
+- Now we are ready to find a chessboard pose by running \`solvePnP\` :
 
 vector<Point3f> boardPoints;
 // fill the array
@@ -51,4 +51,5 @@ image.
 - Calculate reprojection error like it is done in calibration sample (see
 opencv/samples/cpp/calibration.cpp, function computeReprojectionErrors).
 
-Question: how to calculate the distance from the camera origin to any of the corners?
+Question: how would you calculate distance from the camera origin to any one of the corners?
+Answer: As our image lies in a 3D space, firstly we would calculate the relative camera pose. This would give us 3D to 2D correspondences. Next, we can apply a simple L2 norm to calculate distance between any point (end point for corners).
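For readers following the updated tutorial text above, the steps it describes (detect corners, load intrinsics, run solvePnP, take the L2 norm of the translation) can be put together roughly as follows. This is a hedged sketch rather than the tutorial's sample code: the board size, square size, and the camera.yml filename are assumptions.

@code{.cpp}
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);

    cv::Size boardSize(9, 6);          // inner corners per row/column (assumed)
    float squareSize = 1.f;            // chessboard square size in your chosen unit (assumed)

    std::vector<cv::Point2f> ptvec;
    bool found = cv::findChessboardCorners(img, boardSize, ptvec, cv::CALIB_CB_ADAPTIVE_THRESH);
    if (!found) { std::cerr << "no chessboard found" << std::endl; return 1; }

    // 3D corners in a system where one corner is the origin and the board lies in z = 0
    std::vector<cv::Point3f> boardPoints;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            boardPoints.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.f));

    // intrinsics written earlier by the calibration sample ("camera.yml" is a placeholder name)
    cv::FileStorage fs("camera.yml", cv::FileStorage::READ);
    cv::Mat intrinsics, distortion;
    fs["camera_matrix"] >> intrinsics;
    fs["distortion_coefficients"] >> distortion;

    cv::Mat rvec, tvec;
    cv::solvePnP(boardPoints, ptvec, intrinsics, distortion, rvec, tvec);

    // tvec is the board-origin corner expressed in camera coordinates,
    // so its L2 norm is the distance from the camera origin to that corner
    std::cout << "distance to the origin corner: " << cv::norm(tvec) << std::endl;
    return 0;
}
@endcode

For any other corner, rotate the corresponding boardPoints entry with cv::Rodrigues(rvec), add tvec, and take the norm of the result.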

doc/tutorials/calib3d/table_of_content_calib3d.markdown

Lines changed: 1 addition & 2 deletions
@@ -1,8 +1,7 @@
 Camera calibration and 3D reconstruction (calib3d module) {#tutorial_table_of_content_calib3d}
 ==========================================================
 
-Although we got most of our images in a 2D format they do come from a 3D world. Here you will learn
-how to find out from the 2D images information about the 3D world.
+Although we get most of our images in a 2D format they do come from a 3D world. Here you will learn how to find out 3D world information from 2D images.
 
 - @subpage tutorial_camera_calibration_square_chess
 
doc/tutorials/gpu/gpu-basics-similarity/gpu_basics_similarity.markdown

Lines changed: 33 additions & 33 deletions
@@ -6,14 +6,14 @@ Goal
 ----
 
 In the @ref tutorial_video_input_psnr_ssim tutorial I already presented the PSNR and SSIM methods for checking
-the similarity between the two images. And as you could see there performing these takes quite some
-time, especially in the case of the SSIM. However, if the performance numbers of an OpenCV
+the similarity between the two images. And as you could see, the execution process takes quite some
+time , especially in the case of the SSIM. However, if the performance numbers of an OpenCV
 implementation for the CPU do not satisfy you and you happen to have an NVidia CUDA GPU device in
-your system all is not lost. You may try to port or write your algorithm for the video card.
+your system, all is not lost. You may try to port or write your owm algorithm for the video card.
 
 This tutorial will give a good grasp on how to approach coding by using the GPU module of OpenCV. As
 a prerequisite you should already know how to handle the core, highgui and imgproc modules. So, our
-goals are:
+main goals are:
 
 - What's different compared to the CPU?
 - Create the GPU code for the PSNR and SSIM
@@ -22,8 +22,8 @@ goals are:
 The source code
 ---------------
 
-You may also find the source code and these video file in the
-`samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity` folder of the OpenCV
+You may also find the source code and the video file in the
+`samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity` directory of the OpenCV
 source library or download it from [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp).
 The full source code is quite long (due to the controlling of the application via the command line
 arguments and performance measurement). Therefore, to avoid cluttering up these sections with those
@@ -37,7 +37,7 @@ better).
 @snippet samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp psnr
 @snippet samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp getpsnropt
 
-The SSIM returns the MSSIM of the images. This is too a float number between zero and one (higher is
+The SSIM returns the MSSIM of the images. This is too a floating point number between zero and one (higher is
 better), however we have one for each channel. Therefore, we return a *Scalar* OpenCV data
 structure:
 
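As a quick reminder of what the psnr/getpsnropt snippets referenced in the hunk above compute, a CPU-side PSNR along the tutorial's lines could be sketched as below. This is an approximation written for this note, not a copy of the sample code, and it assumes 8-bit input images.

@code{.cpp}
#include <opencv2/opencv.hpp>
#include <cmath>

double getPSNR(const cv::Mat& I1, const cv::Mat& I2)
{
    cv::Mat s1;
    cv::absdiff(I1, I2, s1);       // |I1 - I2|
    s1.convertTo(s1, CV_32F);      // avoid overflow when squaring 8-bit values
    s1 = s1.mul(s1);               // |I1 - I2|^2

    cv::Scalar s = cv::sum(s1);    // per-channel sums
    double sse = s.val[0] + s.val[1] + s.val[2];

    if (sse <= 1e-10) return 0;    // (near-)identical images: PSNR would diverge
    double mse = sse / (double)(I1.channels() * I1.total());
    return 10.0 * log10((255.0 * 255.0) / mse);
}
@endcode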
@@ -49,13 +49,13 @@ structure:
 How to do it? - The GPU
 -----------------------
 
-Now as you can see we have three types of functions for each operation. One for the CPU and two for
+As see above, we have three types of functions for each operation. One for the CPU and two for
 the GPU. The reason I made two for the GPU is too illustrate that often simple porting your CPU to
 GPU will actually make it slower. If you want some performance gain you will need to remember a few
-rules, whose I'm going to detail later on.
+rules, for which I will go into detail later on.
 
 The development of the GPU module was made so that it resembles as much as possible its CPU
-counterpart. This is to make porting easy. The first thing you need to do before writing any code is
+counterpart. This makes the porting process easier. The first thing you need to do before writing any code is
 to link the GPU module to your project, and include the header file for the module. All the
 functions and data structures of the GPU are in a *gpu* sub namespace of the *cv* namespace. You may
 add this to the default one via the *use namespace* keyword, or mark it everywhere explicitly via
@@ -64,25 +64,25 @@ the cv:: to avoid confusion. I'll do the later.
 #include <opencv2/gpu.hpp> // GPU structures and methods
 @endcode
 
-GPU stands for "graphics processing unit". It was originally build to render graphical
+GPU stands for "graphics processing unit". It was originally built to render graphical
 scenes. These scenes somehow build on a lot of data. Nevertheless, these aren't all dependent one
 from another in a sequential way and as it is possible a parallel processing of them. Due to this a
 GPU will contain multiple smaller processing units. These aren't the state of the art processors and
 on a one on one test with a CPU it will fall behind. However, its strength lies in its numbers. In
 the last years there has been an increasing trend to harvest these massive parallel powers of the
-GPU in non-graphical scene rendering too. This gave birth to the general-purpose computation on
+GPU in non-graphical scenes; rendering as well. This gave birth to the general-purpose computation on
 graphics processing units (GPGPU).
 
 The GPU has its own memory. When you read data from the hard drive with OpenCV into a *Mat* object
 that takes place in your systems memory. The CPU works somehow directly on this (via its cache),
-however the GPU cannot. He has too transferred the information he will use for calculations from the
-system memory to its own. This is done via an upload process and takes time. In the end the result
-will have to be downloaded back to your system memory for your CPU to see it and use it. Porting
+however the GPU cannot. It has to transfer the information required for calculations from the
+system memory to its own. This is done via an upload process and is time consuming. In the end the result
+will have to be downloaded back to your system memory for your CPU to see and use it. Porting
 small functions to GPU is not recommended as the upload/download time will be larger than the amount
 you gain by a parallel execution.
 
 Mat objects are stored only in the system memory (or the CPU cache). For getting an OpenCV matrix to
-the GPU you'll need to use its GPU counterpart @ref cv::cuda::GpuMat . It works similar to the Mat with a
+the GPU you'll need to use its GPU counterpart @ref cv::cuda::GpuMat. It works similar to the Mat with a
 2D only limitation and no reference returning for its functions (cannot mix GPU references with CPU
 ones). To upload a Mat object to the GPU you need to call the upload function after creating an
 instance of the class. To download you may use simple assignment to a Mat object or use the download
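To make the upload/download round trip described in the hunk above concrete, a minimal sketch using the current cv::cuda API (the tutorial text still uses the older gpu:: names) might look like this; the cudaarithm module and a CUDA-enabled build are assumed.

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

void gpuAbsdiff(const cv::Mat& I1, const cv::Mat& I2, cv::Mat& result)
{
    cv::cuda::GpuMat gI1, gI2, gDiff;
    gI1.upload(I1);                      // system memory -> device memory
    gI2.upload(I2);
    cv::cuda::absdiff(gI1, gI2, gDiff);  // runs on the GPU
    gDiff.download(result);              // device memory -> system memory
}
@endcode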
@@ -103,17 +103,17 @@ with the source code.
 Another thing to keep in mind is that not for all channel numbers you can make efficient algorithms
 on the GPU. Generally, I found that the input images for the GPU images need to be either one or
 four channel ones and one of the char or float type for the item sizes. No double support on the
-GPU, sorry. Passing other types of objects for some functions will result in an exception thrown,
+GPU, sorry. Passing other types of objects for some functions will result in an exception throw,
 and an error message on the error output. The documentation details in most of the places the types
 accepted for the inputs. If you have three channel images as an input you can do two things: either
-adds a new channel (and use char elements) or split up the image and call the function for each
-image. The first one isn't really recommended as you waste memory.
+add a new channel (and use char elements) or split up the image and call the function for each
+image. The first one isn't really recommended as this wastes memory.
 
-For some functions, where the position of the elements (neighbor items) doesn't matter quick
-solution is to just reshape it into a single channel image. This is the case for the PSNR
+For some functions, where the position of the elements (neighbor items) doesn't matter, the quick
+solution is to reshape it into a single channel image. This is the case for the PSNR
 implementation where for the *absdiff* method the value of the neighbors is not important. However,
 for the *GaussianBlur* this isn't an option and such need to use the split method for the SSIM. With
-this knowledge you can already make a GPU viable code (like mine GPU one) and run it. You'll be
+this knowledge you can make a GPU viable code (like mine GPU one) and run it. You'll be
 surprised to see that it might turn out slower than your CPU implementation.
 
 Optimization
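The reshape shortcut mentioned in the hunk above (valid for element-wise functions such as absdiff, not for spatial filters like GaussianBlur) is a one-liner; the snippet below is only an illustration, with a placeholder image path.

@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.png");  // CV_8UC3 (placeholder path)
    // Same underlying data, viewed as one channel with three times the columns.
    cv::Mat flat = bgr.reshape(1);
    CV_Assert(flat.channels() == 1 && flat.cols == bgr.cols * 3);
    return 0;
}
@endcode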
@@ -147,33 +147,33 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
 Now you access these local parameters as: *b.gI1*, *b.buf* and so on. The GpuMat will only
 reallocate itself on a new call if the new matrix size is different from the previous one.
 
--# Avoid unnecessary function data transfers. Any small data transfer will be significant one once
-you go to the GPU. Therefore, if possible make all calculations in-place (in other words do not
+-# Avoid unnecessary function data transfers. Any small data transfer will be significant once
+you go to the GPU. Therefore, if possible, make all calculations in-place (in other words do not
 create new memory objects - for reasons explained at the previous point). For example, although
 expressing arithmetical operations may be easier to express in one line formulas, it will be
 slower. In case of the SSIM at one point I need to calculate:
 @code{.cpp}
 b.t1 = 2 * b.mu1_mu2 + C1;
 @endcode
-Although the upper call will succeed observe that there is a hidden data transfer present.
+Although the upper call will succeed, observe that there is a hidden data transfer present.
 Before it makes the addition it needs to store somewhere the multiplication. Therefore, it will
 create a local matrix in the background, add to that the *C1* value and finally assign that to
 *t1*. To avoid this we use the gpu functions, instead of the arithmetic operators:
 @code{.cpp}
 gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
 gpu::add(b.t1, C1, b.t1);
 @endcode
--# Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a gpu function
+-# Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a GPU function
 it will wait for the call to finish and return with the result afterwards. However, it is
-possible to make asynchronous calls, meaning it will call for the operation execution, make the
+possible to make asynchronous calls, meaning it will call for the operation execution, making the
 costly data allocations for the algorithm and return back right away. Now you can call another
-function if you wish to do so. For the MSSIM this is a small optimization point. In our default
-implementation we split up the image into channels and call then for each channel the gpu
+function, if you wish. For the MSSIM this is a small optimization point. In our default
+implementation we split up the image into channels and call them for each channel the GPU
 functions. A small degree of parallelization is possible with the stream. By using a stream we
 can make the data allocation, upload operations while the GPU is already executing a given
-method. For example we need to upload two images. We queue these one after another and call
-already the function that processes it. The functions will wait for the upload to finish,
-however while that happens makes the output buffer allocations for the function to be executed
+method. For example, we need to upload two images. We queue these one after another and call
+the function that processes it. The functions will wait for the upload to finish,
+however while this happens it makes the output buffer allocations for the function to be executed
 next.
 @code{.cpp}
 gpu::Stream stream;
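For the asynchronous pattern described in that optimization point, a hedged sketch with the current cv::cuda::Stream API (rather than the older gpu::Stream used in the tutorial text) could look like the following; the cudaarithm module is assumed.

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

void asyncSum(const cv::Mat& I1, const cv::Mat& I2, cv::Mat& result)
{
    cv::cuda::Stream stream;
    cv::cuda::GpuMat gI1, gI2, gSum;

    // Uploads and the add are only enqueued here; the calls return right away
    // and execute in order on the stream.
    gI1.upload(I1, stream);
    gI2.upload(I2, stream);
    cv::cuda::add(gI1, gI2, gSum, cv::noArray(), -1, stream);
    gSum.download(result, stream);   // also asynchronous, ordered after the add

    stream.waitForCompletion();      // block only when the host needs the result
}
@endcode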
@@ -187,7 +187,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
 Result and conclusion
 ---------------------
 
-On an Intel P8700 laptop CPU paired with a low end NVidia GT220M here are the performance numbers:
+On an Intel P8700 laptop CPU paired with a low end NVidia GT220M, here are the performance numbers:
 @code
 Time of PSNR CPU (averaged for 10 runs): 41.4122 milliseconds. With result of: 19.2506
 Time of PSNR GPU (averaged for 10 runs): 158.977 milliseconds. With result of: 19.2506

doc/tutorials/highgui/trackbar/trackbar.markdown

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ Result
 ![](images/Adding_Trackbars_Tutorial_Result_0.jpg)
 
 - As a manner of practice, you can also add two trackbars for the program made in
-@ref tutorial_basic_linear_transform. One trackbar to set \f$\alpha\f$ and another for \f$\beta\f$. The output might
+@ref tutorial_basic_linear_transform. One trackbar to set \f$\alpha\f$ and another for set \f$\beta\f$. The output might
 look like:
 
 ![](images/Adding_Trackbars_Tutorial_Result_1.jpg)
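One possible shape for the suggested two-trackbar exercise is sketched below; the variable names, window title, and slider ranges are illustrative choices, not taken from the tutorial's sample code.

@code{.cpp}
#include <opencv2/opencv.hpp>

cv::Mat image, result;
int alpha_slider = 10;   // alpha = alpha_slider / 10.0, i.e. 1.0 by default
int beta_slider  = 0;    // beta added directly to every pixel

static void onTrackbar(int, void*)
{
    double alpha = alpha_slider / 10.0;
    image.convertTo(result, -1, alpha, beta_slider);   // g(x) = alpha*f(x) + beta
    cv::imshow("Linear Transform", result);
}

int main(int argc, char** argv)
{
    image = cv::imread(argv[1]);
    cv::namedWindow("Linear Transform");
    cv::createTrackbar("Alpha x10", "Linear Transform", &alpha_slider, 30, onTrackbar);
    cv::createTrackbar("Beta", "Linear Transform", &beta_slider, 100, onTrackbar);
    onTrackbar(0, nullptr);   // render once with the initial slider values
    cv::waitKey(0);
    return 0;
}
@endcode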
