
Commit 61c2f09

Merge pull request opencv#10280 from alalek:python_cv2_to_cv
2 parents: 558b17d + 5560db7


162 files changed: +2083 additions, -2084 deletions
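
The change is mechanical throughout: every tutorial switches from referring to the module as cv2 to the shorter alias cv. A minimal sketch of the pattern (the print line is illustrative, not taken from the diff):

@code{.py}
# The aliasing this PR standardizes: the Python binding is still the cv2
# module, but tutorial code now refers to it as cv.
import cv2 as cv

print(cv.__version__)  # same module object that `import cv2` would give
@endcode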


doc/js_tutorials/js_imgproc/js_contours/js_contours_begin/js_contours_begin.markdown

Lines changed: 2 additions & 2 deletions
@@ -68,5 +68,5 @@ this contour approximation method.
 If you pass cv.ContourApproximationModes.CHAIN_APPROX_NONE.value, all the boundary points are stored. But actually do we need all
 the points? For eg, you found the contour of a straight line. Do you need all the points on the line
 to represent that line? No, we need just two end points of that line. This is what
-cv2.CHAIN_APPROX_SIMPLE does. It removes all redundant points and compresses the contour, thereby
-saving memory.
+cv.CHAIN_APPROX_SIMPLE does. It removes all redundant points and compresses the contour, thereby
+saving memory.
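
As a quick illustration of the point-count difference described above (a sketch, not part of the diff; the synthetic rectangle and the version-agnostic return handling are assumptions):

@code{.py}
# Sketch: CHAIN_APPROX_NONE stores every boundary point, CHAIN_APPROX_SIMPLE
# compresses straight segments down to their end points.
import numpy as np
import cv2 as cv

img = np.zeros((100, 100), np.uint8)
cv.rectangle(img, (20, 20), (80, 80), 255, -1)  # filled synthetic rectangle

# findContours returns 3 values on OpenCV 3.x and 2 on 4.x; [-2] covers both
full = cv.findContours(img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)[-2]
simple = cv.findContours(img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2]

print(len(full[0]))    # every point along the rectangle's boundary
print(len(simple[0]))  # just the 4 corner points
@endcode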

doc/py_tutorials/py_calib3d/py_calibration/py_calibration.markdown

Lines changed: 28 additions & 28 deletions
@@ -80,7 +80,7 @@ pass in terms of square size).
 
 ### Setup
 
-So to find pattern in chess board, we use the function, **cv2.findChessboardCorners()**. We also
+So to find pattern in chess board, we use the function, **cv.findChessboardCorners()**. We also
 need to pass what kind of pattern we are looking, like 8x8 grid, 5x5 grid etc. In this example, we
 use 7x6 grid. (Normally a chess board has 8x8 squares and 7x7 internal corners). It returns the
 corner points and retval which will be True if pattern is obtained. These corners will be placed in
@@ -95,19 +95,19 @@ are not sure out of 14 images given, how many are good. So we read all the image
 ones.
 
 @sa Instead of chess board, we can use some circular grid, but then use the function
-**cv2.findCirclesGrid()** to find the pattern. It is said that less number of images are enough when
+**cv.findCirclesGrid()** to find the pattern. It is said that less number of images are enough when
 using circular grid.
 
-Once we find the corners, we can increase their accuracy using **cv2.cornerSubPix()**. We can also
-draw the pattern using **cv2.drawChessboardCorners()**. All these steps are included in below code:
+Once we find the corners, we can increase their accuracy using **cv.cornerSubPix()**. We can also
+draw the pattern using **cv.drawChessboardCorners()**. All these steps are included in below code:
 
 @code{.py}
 import numpy as np
-import cv2
+import cv2 as cv
 import glob
 
 # termination criteria
-criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
+criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
 
 # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
 objp = np.zeros((6*7,3), np.float32)
@@ -120,25 +120,25 @@ imgpoints = [] # 2d points in image plane.
 images = glob.glob('*.jpg')
 
 for fname in images:
-    img = cv2.imread(fname)
-    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+    img = cv.imread(fname)
+    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
 
     # Find the chess board corners
-    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)
+    ret, corners = cv.findChessboardCorners(gray, (7,6), None)
 
     # If found, add object points, image points (after refining them)
     if ret == True:
         objpoints.append(objp)
 
-        corners2=cv2.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
+        corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
         imgpoints.append(corners)
 
         # Draw and display the corners
-        cv2.drawChessboardCorners(img, (7,6), corners2, ret)
-        cv2.imshow('img', img)
-        cv2.waitKey(500)
+        cv.drawChessboardCorners(img, (7,6), corners2, ret)
+        cv.imshow('img', img)
+        cv.waitKey(500)
 
-cv2.destroyAllWindows()
+cv.destroyAllWindows()
@@ -147,51 +147,51 @@ One image with pattern drawn on it is shown below:
 ### Calibration
 
 So now we have our object points and image points we are ready to go for calibration. For that we
-use the function, **cv2.calibrateCamera()**. It returns the camera matrix, distortion coefficients,
+use the function, **cv.calibrateCamera()**. It returns the camera matrix, distortion coefficients,
 rotation and translation vectors etc.
 @code{.py}
-ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
+ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
 @endcode
 ### Undistortion
 
 We have got what we were trying. Now we can take an image and undistort it. OpenCV comes with two
 methods, we will see both. But before that, we can refine the camera matrix based on a free scaling
-parameter using **cv2.getOptimalNewCameraMatrix()**. If the scaling parameter alpha=0, it returns
+parameter using **cv.getOptimalNewCameraMatrix()**. If the scaling parameter alpha=0, it returns
 undistorted image with minimum unwanted pixels. So it may even remove some pixels at image corners.
 If alpha=1, all pixels are retained with some extra black images. It also returns an image ROI which
 can be used to crop the result.
 
 So we take a new image (left12.jpg in this case. That is the first image in this chapter)
 @code{.py}
-img = cv2.imread('left12.jpg')
+img = cv.imread('left12.jpg')
 h, w = img.shape[:2]
-newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
+newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
 @endcode
-#### 1. Using **cv2.undistort()**
+#### 1. Using **cv.undistort()**
 
 This is the shortest path. Just call the function and use ROI obtained above to crop the result.
 @code{.py}
 # undistort
-dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
+dst = cv.undistort(img, mtx, dist, None, newcameramtx)
 
 # crop the image
 x, y, w, h = roi
 dst = dst[y:y+h, x:x+w]
-cv2.imwrite('calibresult.png', dst)
+cv.imwrite('calibresult.png', dst)
 @endcode
 #### 2. Using **remapping**
 
 This is curved path. First find a mapping function from distorted image to undistorted image. Then
 use the remap function.
 @code{.py}
 # undistort
-mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
-dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
+mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
+dst = cv.remap(img, mapx, mapy, cv.INTER_LINEAR)
 
 # crop the image
 x, y, w, h = roi
 dst = dst[y:y+h, x:x+w]
-cv2.imwrite('calibresult.png', dst)
+cv.imwrite('calibresult.png', dst)
 @endcode
 Both the methods give the same result. See the result below:
 
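A side note on the free scaling parameter used above (a sketch, not part of the diff; it assumes mtx, dist, w, h from the calibration code):

@code{.py}
# Sketch: the two extremes of the alpha parameter of getOptimalNewCameraMatrix.
import cv2 as cv

# alpha=0: minimum unwanted pixels, corners of the image may be cropped
newmtx0, roi0 = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 0, (w,h))
# alpha=1: all source pixels retained, black borders may appear
newmtx1, roi1 = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
# each roi is an (x, y, w, h) rectangle usable for cropping, as shown above
@endcode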
@@ -207,15 +207,15 @@ Re-projection Error
 
 Re-projection error gives a good estimation of just how exact is the found parameters. This should
 be as close to zero as possible. Given the intrinsic, distortion, rotation and translation matrices,
-we first transform the object point to image point using **cv2.projectPoints()**. Then we calculate
+we first transform the object point to image point using **cv.projectPoints()**. Then we calculate
 the absolute norm between what we got with our transformation and the corner finding algorithm. To
 find the average error we calculate the arithmetical mean of the errors calculate for all the
 calibration images.
 @code{.py}
 mean_error = 0
 for i in xrange(len(objpoints)):
-    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
-    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2)/len(imgpoints2)
+    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
+    error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2)/len(imgpoints2)
     mean_error += error
 
 print( "total error: {}".format(mean_error/len(objpoints)) )

doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.markdown

Lines changed: 4 additions & 4 deletions
@@ -38,13 +38,13 @@ Code
 Below code snippet shows a simple procedure to create a disparity map.
 @code{.py}
 import numpy as np
-import cv2
+import cv2 as cv
 from matplotlib import pyplot as plt
 
-imgL = cv2.imread('tsukuba_l.png',0)
-imgR = cv2.imread('tsukuba_r.png',0)
+imgL = cv.imread('tsukuba_l.png',0)
+imgR = cv.imread('tsukuba_r.png',0)
 
-stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
+stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
 disparity = stereo.compute(imgL,imgR)
 plt.imshow(disparity,'gray')
 plt.show()
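
For context on the two parameters in this snippet (a sketch with blank stand-in images; the constraints follow the StereoBM documentation):

@code{.py}
# Sketch: StereoBM parameter constraints and output format, on stand-ins
# for tsukuba_l.png / tsukuba_r.png (made up for illustration).
import numpy as np
import cv2 as cv

imgL = np.zeros((120, 160), np.uint8)
imgR = np.zeros((120, 160), np.uint8)

stereo = cv.StereoBM_create(numDisparities=16,  # must be a multiple of 16
                            blockSize=15)       # must be odd
disparity = stereo.compute(imgL, imgR)

# compute() returns fixed-point int16 disparities scaled by 16
print(disparity.dtype)
disparity_px = disparity.astype(np.float32) / 16.0
@endcode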

doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.markdown

Lines changed: 13 additions & 13 deletions
@@ -72,14 +72,14 @@ Code
 So first we need to find as many possible matches between two images to find the fundamental matrix.
 For this, we use SIFT descriptors with FLANN based matcher and ratio test.
 @code{.py}
-import cv2
 import numpy as np
+import cv2 as cv
 from matplotlib import pyplot as plt
 
-img1 = cv2.imread('myleft.jpg',0) #queryimage # left image
-img2 = cv2.imread('myright.jpg',0) #trainimage # right image
+img1 = cv.imread('myleft.jpg',0) #queryimage # left image
+img2 = cv.imread('myright.jpg',0) #trainimage # right image
 
-sift = cv2.SIFT()
+sift = cv.SIFT()
 
 # find the keypoints and descriptors with SIFT
 kp1, des1 = sift.detectAndCompute(img1,None)
@@ -90,7 +90,7 @@ FLANN_INDEX_KDTREE = 1
 index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
 search_params = dict(checks=50)
 
-flann = cv2.FlannBasedMatcher(index_params,search_params)
+flann = cv.FlannBasedMatcher(index_params,search_params)
 matches = flann.knnMatch(des1,des2,k=2)
 
 good = []
@@ -108,7 +108,7 @@ Now we have the list of best matches from both the images. Let's find the Fundam
 @code{.py}
 pts1 = np.int32(pts1)
 pts2 = np.int32(pts2)
-F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)
+F, mask = cv.findFundamentalMat(pts1,pts2,cv.FM_LMEDS)
 
 # We select only inlier points
 pts1 = pts1[mask.ravel()==1]
@@ -122,28 +122,28 @@ def drawlines(img1,img2,lines,pts1,pts2):
     ''' img1 - image on which we draw the epilines for the points in img2
         lines - corresponding epilines '''
     r,c = img1.shape
-    img1 = cv2.cvtColor(img1,cv2.COLOR_GRAY2BGR)
-    img2 = cv2.cvtColor(img2,cv2.COLOR_GRAY2BGR)
+    img1 = cv.cvtColor(img1,cv.COLOR_GRAY2BGR)
+    img2 = cv.cvtColor(img2,cv.COLOR_GRAY2BGR)
     for r,pt1,pt2 in zip(lines,pts1,pts2):
        color = tuple(np.random.randint(0,255,3).tolist())
        x0,y0 = map(int, [0, -r[2]/r[1] ])
        x1,y1 = map(int, [c, -(r[2]+r[0]*c)/r[1] ])
-       img1 = cv2.line(img1, (x0,y0), (x1,y1), color,1)
-       img1 = cv2.circle(img1,tuple(pt1),5,color,-1)
-       img2 = cv2.circle(img2,tuple(pt2),5,color,-1)
+       img1 = cv.line(img1, (x0,y0), (x1,y1), color,1)
+       img1 = cv.circle(img1,tuple(pt1),5,color,-1)
+       img2 = cv.circle(img2,tuple(pt2),5,color,-1)
     return img1,img2
 @endcode
 Now we find the epilines in both the images and draw them.
 @code{.py}
 # Find epilines corresponding to points in right image (second image) and
 # drawing its lines on left image
-lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1,1,2), 2,F)
+lines1 = cv.computeCorrespondEpilines(pts2.reshape(-1,1,2), 2,F)
 lines1 = lines1.reshape(-1,3)
 img5,img6 = drawlines(img1,img2,lines1,pts1,pts2)
 
 # Find epilines corresponding to points in left image (first image) and
 # drawing its lines on right image
-lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1,1,2), 1,F)
+lines2 = cv.computeCorrespondEpilines(pts1.reshape(-1,1,2), 1,F)
 lines2 = lines2.reshape(-1,3)
 img3,img4 = drawlines(img2,img1,lines2,pts2,pts1)
 
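Note that the diff keeps the old cv.SIFT() constructor, which is the OpenCV 2.x API; on other builds SIFT is constructed differently. A hedged sketch (the hasattr fallbacks are an assumption about the reader's installation):

@code{.py}
# Sketch: constructing SIFT across OpenCV builds. Only the last branch
# matches the tutorial code as written.
import cv2 as cv

if hasattr(cv, "SIFT_create"):      # OpenCV >= 4.4: SIFT back in the core
    sift = cv.SIFT_create()
elif hasattr(cv, "xfeatures2d"):    # OpenCV 3.x / early 4.x with contrib
    sift = cv.xfeatures2d.SIFT_create()
else:                               # OpenCV 2.x, as the tutorial assumes
    sift = cv.SIFT()
@endcode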
doc/py_tutorials/py_calib3d/py_pose/py_pose.markdown

Lines changed: 20 additions & 20 deletions
@@ -24,22 +24,22 @@ should feel like it is perpendicular to our chessboard plane.
 First, let's load the camera matrix and distortion coefficients from the previous calibration
 result.
 @code{.py}
-import cv2
 import numpy as np
+import cv2 as cv
 import glob
 
 # Load previously saved data
 with np.load('B.npz') as X:
     mtx, dist, _, _ = [X[i] for i in ('mtx','dist','rvecs','tvecs')]
 @endcode
 Now let's create a function, draw which takes the corners in the chessboard (obtained using
-**cv2.findChessboardCorners()**) and **axis points** to draw a 3D axis.
+**cv.findChessboardCorners()**) and **axis points** to draw a 3D axis.
 @code{.py}
 def draw(img, corners, imgpts):
     corner = tuple(corners[0].ravel())
-    img = cv2.line(img, corner, tuple(imgpts[0].ravel()), (255,0,0), 5)
-    img = cv2.line(img, corner, tuple(imgpts[1].ravel()), (0,255,0), 5)
-    img = cv2.line(img, corner, tuple(imgpts[2].ravel()), (0,0,255), 5)
+    img = cv.line(img, corner, tuple(imgpts[0].ravel()), (255,0,0), 5)
+    img = cv.line(img, corner, tuple(imgpts[1].ravel()), (0,255,0), 5)
+    img = cv.line(img, corner, tuple(imgpts[2].ravel()), (0,0,255), 5)
     return img
 @endcode
 Then as in previous case, we create termination criteria, object points (3D points of corners in
@@ -48,40 +48,40 @@ of length 3 (units will be in terms of chess square size since we calibrated bas
 our X axis is drawn from (0,0,0) to (3,0,0), so for Y axis. For Z axis, it is drawn from (0,0,0) to
 (0,0,-3). Negative denotes it is drawn towards the camera.
 @code{.py}
-criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
+criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
 objp = np.zeros((6*7,3), np.float32)
 objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
 
 axis = np.float32([[3,0,0], [0,3,0], [0,0,-3]]).reshape(-1,3)
 @endcode
 Now, as usual, we load each image. Search for 7x6 grid. If found, we refine it with subcorner
 pixels. Then to calculate the rotation and translation, we use the function,
-**cv2.solvePnPRansac()**. Once we those transformation matrices, we use them to project our **axis
+**cv.solvePnPRansac()**. Once we those transformation matrices, we use them to project our **axis
 points** to the image plane. In simple words, we find the points on image plane corresponding to
 each of (3,0,0),(0,3,0),(0,0,3) in 3D space. Once we get them, we draw lines from the first corner
 to each of these points using our draw() function. Done !!!
 @code{.py}
 for fname in glob.glob('left*.jpg'):
-    img = cv2.imread(fname)
-    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
-    ret, corners = cv2.findChessboardCorners(gray, (7,6),None)
+    img = cv.imread(fname)
+    gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
+    ret, corners = cv.findChessboardCorners(gray, (7,6),None)
 
     if ret == True:
-        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
+        corners2 = cv.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
 
         # Find the rotation and translation vectors.
-        ret,rvecs, tvecs, inliers = cv2.solvePnP(objp, corners2, mtx, dist)
+        ret,rvecs, tvecs, inliers = cv.solvePnP(objp, corners2, mtx, dist)
 
         # project 3D points to image plane
-        imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)
+        imgpts, jac = cv.projectPoints(axis, rvecs, tvecs, mtx, dist)
 
         img = draw(img,corners2,imgpts)
-        cv2.imshow('img',img)
-        k = cv2.waitKey(0) & 0xFF
+        cv.imshow('img',img)
+        k = cv.waitKey(0) & 0xFF
         if k == ord('s'):
-            cv2.imwrite(fname[:6]+'.png', img)
+            cv.imwrite(fname[:6]+'.png', img)
 
-cv2.destroyAllWindows()
+cv.destroyAllWindows()
 @endcode
 See some results below. Notice that each axis is 3 squares long.:
 
@@ -97,14 +97,14 @@ def draw(img, corners, imgpts):
     imgpts = np.int32(imgpts).reshape(-1,2)
 
     # draw ground floor in green
-    img = cv2.drawContours(img, [imgpts[:4]],-1,(0,255,0),-3)
+    img = cv.drawContours(img, [imgpts[:4]],-1,(0,255,0),-3)
 
     # draw pillars in blue color
     for i,j in zip(range(4),range(4,8)):
-        img = cv2.line(img, tuple(imgpts[i]), tuple(imgpts[j]),(255),3)
+        img = cv.line(img, tuple(imgpts[i]), tuple(imgpts[j]),(255),3)
 
     # draw top layer in red color
-    img = cv2.drawContours(img, [imgpts[4:]],-1,(0,0,255),3)
+    img = cv.drawContours(img, [imgpts[4:]],-1,(0,0,255),3)
 
     return img
 @endcode

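One wrinkle in the pose loop above: the prose names cv.solvePnPRansac(), but the code calls cv.solvePnP() while unpacking four values. cv.solvePnP() returns three values (retval, rvec, tvec); only cv.solvePnPRansac() returns a fourth inliers array. A small sketch of both signatures (the synthetic points, pose, and camera matrix are made up for illustration):

@code{.py}
# Sketch: return signatures of the two pose solvers, on synthetic data.
import numpy as np
import cv2 as cv

objp = np.array([[0,0,0],[1,0,0],[0,1,0],[1,1,0],[2,0,0],[0,2,0]], np.float32)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
dist = np.zeros(5)

# project through a known pose so the solvers have a consistent target
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 5.0])
imgp, _ = cv.projectPoints(objp, rvec_true, tvec_true, K, dist)

ret, rvec, tvec = cv.solvePnP(objp, imgp, K, dist)                # 3 values
ok, rvec, tvec, inliers = cv.solvePnPRansac(objp, imgp, K, dist)  # 4 values
@endcode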